instruction | input | output | source
---|---|---|---|
plt.show() not showing images | My code is as follows:
import torch
import torchvision
import torchvision.transforms as transforms
import numpy
import matplotlib
import matplotlib.pyplot as plt
torch.set_printoptions(linewidth=120)
train_set = torchvision.datasets.FashionMNIST(
root='./data/FashionMNIST',
train=True,
download=True,
transform=transforms.Compose([
transforms.ToTensor()
])
)
train_loader = torch.utils.data.DataLoader(
train_set, batch_size=10
)
sample = next(iter(train_set))
image, label = sample
plt.imshow(image.squeeze(), cmap='gray')
plt.show()
print(f"label:{label}")
I try to display an image via matplotlib.pyplot, but nothing happens.
Plus, I'm doing this on my Linux server, while the same code works fine locally in VS Code.
| As an alternative to viewing your matplotlib window remotely, you can always save your plot as an image file and copy it to your local machine. This is as simple as using plt.savefig:
plt.savefig(f'label:{label}.png')
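If the script also hangs or warns because the server has no display attached, it may help to select matplotlib's non-interactive Agg backend before plotting. A minimal sketch of that route, reusing the question's variables (the filename is just an example):
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe on headless servers
import matplotlib.pyplot as plt

plt.imshow(image.squeeze(), cmap='gray')
plt.savefig(f'label_{label}.png')  # copy this file to your local machine to view it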
| https://stackoverflow.com/questions/72894769/ |
Save normalized tensor to png in a loop using pytorch | I am working on a GAN and cannot make it work to save images that I transformed into tensors back to "normal" PNGs within a loop. The same goes for the tensors that are generated by the Generator.
I applied the following transformation to the original images I am using for training the GAN (I hope I did it the right way):
transform = transforms.Compose(
[
transforms.ToPILImage(),
transforms.Resize(img_size),
transforms.CenterCrop(img_size),
transforms.ToTensor(),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
]
)
When trying to save the tensors as png images in a loop with the following code they do not come out the right way:
real_samples = next(iter(train_loader))
for i in range(4):
torchvision.utils.save_image(real_samples[i, :, :, :],
"Real_Images/real_image{}.png".format(i))
On the left is an example of the original image after transformation, and on the right an example of the "wrongly" saved ones:
Can anyone please help me out with saving the images in the right way?
| You apply normalization with mean 0.5 and std 0.5, so your images are transformed from range (0., 1.) to (-1., 1.). You should denormalize them and bring back to the original range before saving them.
In your case, simply doing
real_samples = real_samples * 0.5 + 0.5
before saving should work.
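Putting this together with the saving loop from the question (variable names taken from the question), a sketch could look like:
import torchvision

real_samples = next(iter(train_loader))
real_samples = real_samples * 0.5 + 0.5  # undo Normalize(mean=0.5, std=0.5)
for i in range(4):
    torchvision.utils.save_image(real_samples[i, :, :, :],
                                 "Real_Images/real_image{}.png".format(i))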
| https://stackoverflow.com/questions/72896171/ |
What do BatchNorm2d's running_mean / running_var mean in PyTorch? | I'd like to know what exactly the running_mean and running_var are that I can access on nn.BatchNorm2d.
Example code is here where bn means nn.BatchNorm2d.
vector = torch.cat([
torch.mean(self.conv3.bn.running_mean).view(1), torch.std(self.conv3.bn.running_mean).view(1),
torch.mean(self.conv3.bn.running_var).view(1), torch.std(self.conv3.bn.running_var).view(1),
torch.mean(self.conv5.bn.running_mean).view(1), torch.std(self.conv5.bn.running_mean).view(1),
torch.mean(self.conv5.bn.running_var).view(1), torch.std(self.conv5.bn.running_var).view(1)
])
I couldn't figure out what running_mean and running_var mean from the PyTorch official documentation or the user community.
What do nn.BatchNorm2d.running_mean and nn.BatchNorm2d.running_var mean?
| From the original Batchnorm paper:
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Sergey Ioffe and Christian Szegedy, ICML 2015
You can see in Algorithm 1 how the statistics of a given batch are measured.
However, what is kept in memory across batches is the running stats, i.e. the statistics which are updated iteratively at each batch inference. The computation of the running mean and running variance is actually quite well explained in the documentation page of nn.BatchNorm2d:
By default, the momentum coefficient is set to 0.1; it regulates how much of the current batch statistics will affect the running statistics:
closer to 1 means the new running stat is closer to the current batch statistics, whereas
closer to 0 means the current batch stats will not contribute much to updating the new running stats.
It's worth pointing out that BatchNorm2d is applied across spatial dimensions in addition, of course, to the batch dimension. Given a batch of shape (b, c, h, w), it will compute the statistics across (b, h, w). This means the running statistics are shaped (c,), i.e. there are as many statistics components as there are input channels (for both mean and variance).
Here is a minimal example:
>>> bn = nn.BatchNorm2d(10)
>>> x = torch.rand(2,10,2,2)
Since track_running_stats is set to True by default on BatchNorm2d, it will track the running stats when inferring on training mode.
The running mean and variance are initialized to zeros and ones, respectively.
>>> running_mean, running_var = torch.zeros(x.size(1)),torch.ones(x.size(1))
Let's perform inference on bn in training mode and check its running stats:
>>> bn(x)
>>> bn.running_mean, bn.running_var
(tensor([0.0650, 0.0432, 0.0373, 0.0534, 0.0476,
0.0622, 0.0651, 0.0660, 0.0406, 0.0446]),
tensor([0.9027, 0.9170, 0.9162, 0.9082, 0.9087,
0.9026, 0.9136, 0.9043, 0.9126, 0.9122]))
Now let's compute those stats by hand:
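(momentum, xmean and xvar are not defined in the snippet above; they correspond to the following — note that PyTorch updates running_var with the unbiased batch variance:)
momentum = 0.1                               # default for nn.BatchNorm2d
xmean = x.mean(dim=(0, 2, 3))                # per-channel batch mean
xvar = x.var(dim=(0, 2, 3), unbiased=True)   # unbiased per-channel batch variance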
>>> (1-momentum)*running_mean + momentum*xmean
tensor([[0.0650, 0.0432, 0.0373, 0.0534, 0.0476,
0.0622, 0.0651, 0.0660, 0.0406, 0.0446]])
>>> (1-momentum)*running_var + momentum*xvar
tensor([[0.9027, 0.9170, 0.9162, 0.9082, 0.9087,
0.9026, 0.9136, 0.9043, 0.9126, 0.9122]])
| https://stackoverflow.com/questions/72899079/ |
Visualizing Information from each layer: Reconstructing images with Feature Maps | I'm interested in visualizing information from a particular layer of the model. In this instance I'm using a PyTorch ResNet18 model, the source code for which can be found here.
Essentially the idea is to get the information each layer has for any input image it is being trained on, and to reconstruct the input image from the feature maps that a particular conv layer holds for it. For example, if a convolutional layer's Nth filter corresponds to a dog's ear, I'd like to be able to view which CNN layer corresponds to which attribute of the image.
A given input image vector x is encoded in each layer of the CNN by the filter responses to that image. A layer with N_l distinct filters has N_l feature maps, each of size M_l, where M_l is the height times the width of the feature map.
Passing the data via the pytorch dataloaders:
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = '/content/drive/MyDrive/Colab Notebooks/Animal Data/'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
I'm training the model here:
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print(f'Epoch {epoch}/{num_epochs - 1}')
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s')
print(f'Best val Acc: {best_acc:4f}')
# load best model weights
model.load_state_dict(best_model_wts)
return model
I understand that I likely have to work with the model itself, but I'm lost on how I'd begin doing that. Any tips, or sources are highly appreciated.
| I suggest that you search for and read about PyTorch hooks. You can use hooks to observe the input and the output of any layer in the network, and then call a function to construct what you want.
You can start by reading the documentation, which you can find here. The idea is that you hook a function to a layer; that function receives the input and the output of the layer, and inside it you write your code to construct what you want.
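As an illustration (not from the original answer), a minimal forward-hook sketch on torchvision's ResNet18; the choice of layer1 is just an example:
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
feature_maps = {}

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()  # store this layer's feature maps
    return hook

model.layer1.register_forward_hook(save_output("layer1"))
_ = model(torch.randn(1, 3, 224, 224))
print(feature_maps["layer1"].shape)  # torch.Size([1, 64, 56, 56])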
| https://stackoverflow.com/questions/72903147/ |
Pytorch, how to get the parameters of my network | I have a question about getting all parameters of the network. My network is defined as follow:
activation = nn.ReLU()
class OneInputBasis(nn.Module):
def __init__(self):
super().__init__()
bo_b = True
bo_last = False
self.l1 = nn.Linear(200, 100, bias = bo_b).to(device)
self.l4 = nn.Linear(100, 100, bias = bo_last).to(device)
def forward(self, v):
v = activation ( self.l1(v) )
v = ( self.l4(v) )
return v
and
class node(nn.Module):
def __init__(self):
super().__init__()
bo_b = True
bo_last = False
self.set_lay = []
for jj in range(dim_output_space_basis):
self.set_lay.append(OneInputBasis())
def forward(self, v):
w = self.set_lay[0](v)
for ii in range(dim_output_space_basis-1):
w = torch.cat((w, self.set_lay[ii+1](v)), dim = 1 )
return w
and
class mesh(nn.Module):
def __init__(self):
super().__init__()
bo_b = True
bo_last = False
self.l3 = nn.Linear(2, 100, bias = bo_b).to(device)
self.l4 = nn.Linear(100, 100, bias = bo_b).to(device)
self.l7 = nn.Linear(100,10, bias = bo_last).to(device)
def forward(self, w):
w = activation ( self.l3(w) )
w = activation ( self.l4(w) )
w = ( self.l7(w) )
return w
finally, I have
activation = nn.ReLU()
class Test(nn.Module):
def __init__(self):
super().__init__()
bo_b = True
bo_last = False
self.top = node()
self.bottom = mesh()
def forward(self, v, w, y):
v = self.top(v)
w = self.bottom(w)
e = torch.bmm(w ,torch.bmm(v, y))
return e[:, :, 0]
Now I define the network:
fnn_adam = Test()
When I print the parameters of the network, as
for p in fnn_adam.parameters():
print(p)
I can only see the parameters associated with fnn_adam.bottom, how can I print out the parameters associated with fnn_adam.top? Are the parameters associated with .top trainable? Thank you!
| Calling self.set_lay.append(OneInputBasis()) during the instantiation of node does not register the fully-connected layers
self.l1 = nn.Linear(200, 100, bias = bo_b).to(device)
self.l4 = nn.Linear(100, 100, bias = bo_last).to(device)
with the instance fnn_adam of class Test. This is why the respective parameters do not show up in your code above.
Without loss of generality, I chose
import torch
import torch.nn as nn
import torch.nn.functional as F
dim_output_space_basis = 2
device ='cpu'
and modified the init method of class node. The remainder of your code is perfectly fine. Please see below:
class node(nn.Module):
def __init__(self):
super().__init__()
bo_b = True
bo_last = False
# self.set_lay = [] # Legacy
attributeNames = ['l_btm{}'.format(i) for i in range(dim_output_space_basis)]
for jj_index, jj in enumerate(range(dim_output_space_basis)):
# self.set_lay.append(OneInputBasis()) # Legacy
setattr(self, attributeNames[jj_index], OneInputBasis())
Now the parameters are registered, as evidenced by running fnn_adam._modules and observing its output:
OrderedDict([('top',
node(
(l_btm0): OneInputBasis(
(l1): Linear(in_features=200, out_features=100, bias=True)
(l4): Linear(in_features=100, out_features=100, bias=False)
)
(l_btm1): OneInputBasis(
(l1): Linear(in_features=200, out_features=100, bias=True)
(l4): Linear(in_features=100, out_features=100, bias=False)
)
)),
('bottom',
mesh(
(l3): Linear(in_features=2, out_features=100, bias=True)
(l4): Linear(in_features=100, out_features=100, bias=True)
(l7): Linear(in_features=100, out_features=10, bias=False)
))])
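As a side note, the more idiomatic fix in PyTorch is arguably to keep the submodules in an nn.ModuleList, which registers everything it holds; a sketch of the same __init__:
class node(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each OneInputBasis as a submodule
        self.set_lay = nn.ModuleList(
            [OneInputBasis() for _ in range(dim_output_space_basis)]
        )
This also keeps the original indexing syntax self.set_lay[0](v) in forward working unchanged.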
| https://stackoverflow.com/questions/72905562/ |
Sagemaker creates output folders but no model.tar.gz after successful completion of the Training Job | I am running a Training Job using the Sagemaker API. The code for configuring the estimator looks as follows (I shortened the full path names a bit):
s3_input = "s3://sagemaker-studio-****/training-inputs".format(bucket)
s3_images = "s3://sagemaker-studio-****/dataset"
s3_labels = "s3://sagemaker-studio-****/labels"
s3_output = 's3://sagemaker-studio-****/output'.format(bucket)
cfg='{}/input/models/'.format(s3_input)
weights='{}/input/data/weights/'.format(s3_input)
outpath='{}/'.format(s3_output)
images='{}/'.format(s3_images)
labels='{}/'.format(s3_labels)
hyperparameters = {
"epochs": 1,
"batch-size": 2
}
inputs = {
"cfg": TrainingInput(cfg),
"images": TrainingInput(images),
"weights": TrainingInput(weights),
"labels": TrainingInput(labels)
}
estimator = PyTorch(
entry_point='train.py',
source_dir='s3://sagemaker-studio-****/input/input.tar.gz',
image_uri=container,
role=get_execution_role(),
instance_count=1,
instance_type='ml.g4dn.xlarge',
input_mode='File',
output_path=outpath,
train_output=outpath,
base_job_name='visualsearch',
hyperparameters=hyperparameters,
framework_version='1.9',
py_version='py38'
)
estimator.fit(inputs)
Everything runs fine and I get the success message:
Results saved to #033[1mruns/train/exp#033[0m
2022-07-08 08:38:35,766 sagemaker-training-toolkit INFO Waiting for the process to finish and give a return code.
2022-07-08 08:38:35,766 sagemaker-training-toolkit INFO Done waiting for a return code. Received 0 from exiting process.
2022-07-08 08:38:35,767 sagemaker-training-toolkit INFO Reporting training SUCCESS
2022-07-08 08:39:08 Uploading - Uploading generated training model
2022-07-08 08:39:08 Completed - Training job completed
ProfilerReport-1657268881: IssuesFound
Training seconds: 558
Billable seconds: 558
CPU times: user 1.34 s, sys: 146 ms, total: 1.48 s
Wall time: 11min 20s
When I call estimator.model_data I get a path pointing to a model.tar.gz file: s3://sagemaker-studio-****/output/.../model.tar.gz
Sagemaker generated subfolders inside the output folder (which in turn contain a lot of json files and other artifacts):
But the file model.tar.gz is missing. This file is nowhere to be found. Is there anything I need to change or to add, in order to obtain my model?
Any help is much appreciated.
| You need to make sure to store your model output in the right location inside the training container. SageMaker will upload everything that is stored in the model directory. You can find the location in the ENV of the training job:
model_dir = os.environ.get("SM_MODEL_DIR")
Normally it is set to /opt/ml/model
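A sketch of what the end of your train.py could look like (assuming model is the trained network; the filename is arbitrary):
import os
import torch

model_dir = os.environ.get("SM_MODEL_DIR", "/opt/ml/model")
torch.save(model.state_dict(), os.path.join(model_dir, "model.pth"))
SageMaker then packages everything under that directory into model.tar.gz at the end of the job.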
Ref:
https://github.com/aws/sagemaker-training-toolkit/blob/master/ENVIRONMENT_VARIABLES.md#sm_model_dir
https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo-output.html
| https://stackoverflow.com/questions/72909085/ |
`from_numpy()` leads to Expected vec.is_mps() to be true, but got false | I get an error when I try to convert the data (a NumPy ndarray) to a tensor with from_numpy() while using the mps backend with torch.
I initialize the model as such:
device = "mps" if torch.has_mps else "cpu"
model = NeuralNetwork().to(device)
and it is using the mps backend:
Using mps device
NeuralNetwork(...)
Then using it as follows:
observations = env.reset()
X = torch.from_numpy(observations)
logits = model(X)
the model throws the error
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [9], in <cell line: 3>()
1 observations = env.reset()
2 X = torch.from_numpy(observations)
----> 3 logits = model(X)
File lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
Input In [2], in NeuralNetwork.forward(self, x)
13 def forward(self, x):
---> 14 logits = self.linear_relu_stack(x)
15 return logits
File lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File lib/python3.8/site-packages/torch/nn/modules/container.py:139, in Sequential.forward(self, input)
137 def forward(self, input):
138 for module in self:
--> 139 input = module(input)
140 return input
File lib/python3.8/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File lib/python3.8/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
RuntimeError: Expected vec.is_mps() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
If I change device to cpu instead of mps, it works. How do I use numpy arrays with the mps backend?
I am running it on an M1 chip and torch.has_mps is True.
| You would have to first send your model to the mps device, then send your input explicitly to the mps device as well. In code:
model.to('mps')
logits = model(X.to('mps'))
Something like this worked for me using torch nightly on an M1 Pro, with a model that does not contain ops or data types that mps does not currently support.
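Applied to the snippet from the question, a sketch could be (torch.backends.mps.is_available() is the documented availability check):
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
model = NeuralNetwork().to(device)

observations = env.reset()
X = torch.from_numpy(observations).to(device)  # from_numpy always creates a CPU tensor
logits = model(X)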
| https://stackoverflow.com/questions/72910108/ |
Is there any way to read .pth(dataset) and turn them into csv? | I have a repo that provides a model architecture, but no pretrained model. It actually provides a .pth file, but that file contains a dataset. Is there any way to convert the dataset to CSV?
| .pth files saved with torch.save() are binaries. Using torch.load() you can get the dataset back, and then save it as a .csv, with pandas for example.
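A sketch, assuming the .pth happens to contain a single 2-D tensor (the actual structure depends entirely on how it was saved, so inspect the loaded object first):
import torch
import pandas as pd

data = torch.load("dataset.pth", map_location="cpu")
print(type(data))  # inspect what the file actually contains

if torch.is_tensor(data) and data.dim() == 2:
    pd.DataFrame(data.numpy()).to_csv("dataset.csv", index=False)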
| https://stackoverflow.com/questions/72911820/ |
PyTorch UNet semantic segmentation dice score more than 1 | I'm trying to develop a program that finds road lanes using semantic segmentation with a UNet backbone. But while training the model, it gives me a dice score of more than 1. Why is this happening?
Batch size: 16
Num_workers: 2
Epochs: 50
IMAGE_HEIGHT = 80
IMAGE_WIDTH = 120
PIN_MEMORY = True
Here's my accuracy function:
def check_accuracy(loader, model, device="cpu"):
num_correct = 0
num_pixels = 0
dice_score = 0
model.eval()
with torch.no_grad():
for x, y in loader:
x = x.to(device)
y = y.to(device).unsqueeze(1)
preds = torch.sigmoid(model(x))
preds = (preds > 0.5).float()
num_correct += (preds == y).sum()
num_pixels += torch.numel(preds)
dice_score += (2 * (preds * y).sum()) / ((preds + y).sum() + 1e-8)
print(f"Got {num_correct/num_pixels} with accuracy {num_correct/num_pixels*100:.2f}")
print(f"Dice score: {dice_score/len(loader)}")
model.train()
x's are images, and y's are ground truths.
Here's an example for images and ground truths:
Image:
Ground truth:
Here's my dataset's __getitem__ method:
def __getitem__(self, index):
img_path = os.path.join(self.root_dir, self.images[index])
mask_path = os.path.join(self.val_dir, self.images[index])
image = np.array(Image.open(img_path).convert("RGB"))
mask = np.array(Image.open(mask_path).convert("L"), dtype=np.float32)
mask[mask == 255.0] = 1.0
if self.transform is not None:
augmentations = self.transform(image=image, mask=mask)
image = augmentations["image"]
mask = augmentations["mask"]
return image, mask
Here's my transformations:
train_transform = A.Compose(
[
A.Resize(height=IMAGE_HEIGHT, width=IMAGE_WIDTH),
A.Rotate(limit=35, p=1.0),
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.1),
A.Normalize(
mean=[0.0, 0.0, 0.0],
std=[1.0, 1.0, 1.0],
max_pixel_value=255.0,
),
ToTensorV2(),
],
)
val_transforms = A.Compose(
[
A.Resize(height=IMAGE_HEIGHT, width=IMAGE_WIDTH),
A.Normalize(
mean=[0.0, 0.0, 0.0],
std=[1.0, 1.0, 1.0],
max_pixel_value=255.0,
),
ToTensorV2(),
],
)
And these are the accuracy metrics:
Epoch: 1/50
100%|██████████| 1/1 [00:00<00:00, 1.99it/s, loss=1.13]
=> Saving checkpoint
Epoch: 2/50
100%|██████████| 1/1 [00:00<00:00, 1.81it/s, loss=1.13]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 3/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=0.7]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 4/50
100%|██████████| 1/1 [00:00<00:00, 1.87it/s, loss=0.354]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 5/50
100%|██████████| 1/1 [00:00<00:00, 1.91it/s, loss=-.094]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 6/50
100%|██████████| 1/1 [00:00<00:00, 1.94it/s, loss=-.419]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 7/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-.914]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 8/50
100%|██████████| 1/1 [00:00<00:00, 1.92it/s, loss=-.7]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 9/50
100%|██████████| 1/1 [00:00<00:00, 1.87it/s, loss=-1.26]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 10/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-1.6]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 11/50
100%|██████████| 1/1 [00:00<00:00, 1.96it/s, loss=-2.04]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 12/50
100%|██████████| 1/1 [00:00<00:00, 1.89it/s, loss=-2.53]
=> Saving checkpoint Got 0.9816353917121887 with accuracy 98.16 Dice score: 0.0
Epoch: 13/50
100%|██████████| 1/1 [00:00<00:00, 1.90it/s, loss=-2.77]
=> Saving checkpoint Got 0.9814687371253967 with accuracy 98.15 Dice score: 0.0
Epoch: 14/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-3.14]
=> Saving checkpoint Got 0.9801874756813049 with accuracy 98.02 Dice score: 0.0
Epoch: 15/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-3.77]
=> Saving checkpoint Got 0.9771562218666077 with accuracy 97.72 Dice score: 0.0
Epoch: 16/50
100%|██████████| 1/1 [00:00<00:00, 1.85it/s, loss=-3.95]
=> Saving checkpoint Got 0.972906231880188 with accuracy 97.29 Dice score: 0.013821512460708618
Epoch: 17/50
100%|██████████| 1/1 [00:00<00:00, 1.92it/s, loss=-4.83]
=> Saving checkpoint Got 0.9672812223434448 with accuracy 96.73 Dice score: 0.09691906720399857
Epoch: 18/50
100%|██████████| 1/1 [00:00<00:00, 1.96it/s, loss=-4.96]
=> Saving checkpoint Got 0.9596353769302368 with accuracy 95.96 Dice score: 0.2153720110654831
Epoch: 19/50
100%|██████████| 1/1 [00:00<00:00, 1.86it/s, loss=-5.43]
=> Saving checkpoint Got 0.9514479041099548 with accuracy 95.14 Dice score: 0.34086111187934875
Epoch: 20/50
100%|██████████| 1/1 [00:00<00:00, 1.98it/s, loss=-5.81]
=> Saving checkpoint Got 0.9496874809265137 with accuracy 94.97 Dice score: 0.385390967130661
Epoch: 21/50
100%|██████████| 1/1 [00:00<00:00, 1.91it/s, loss=-5.86]
=> Saving checkpoint Got 0.9429270625114441 with accuracy 94.29 Dice score: 0.4814487397670746
Epoch: 22/50
100%|██████████| 1/1 [00:00<00:00, 1.98it/s, loss=-6.34]
=> Saving checkpoint Got 0.9388750195503235 with accuracy 93.89 Dice score: 0.5995486974716187
Epoch: 23/50
100%|██████████| 1/1 [00:00<00:00, 1.90it/s, loss=-6.92]
=> Saving checkpoint Got 0.9380520582199097 with accuracy 93.81 Dice score: 0.7058220505714417
Epoch: 24/50
100%|██████████| 1/1 [00:00<00:00, 1.90it/s, loss=-7.42]
=> Saving checkpoint Got 0.9415416717529297 with accuracy 94.15 Dice score: 0.8273581266403198
Epoch: 25/50
100%|██████████| 1/1 [00:00<00:00, 1.94it/s, loss=-7.84]
=> Saving checkpoint Got 0.9451770782470703 with accuracy 94.52 Dice score: 0.9627659916877747
Epoch: 26/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-8.32]
=> Saving checkpoint Got 0.9467916488647461 with accuracy 94.68 Dice score: 1.1096603870391846
Epoch: 27/50
100%|██████████| 1/1 [00:00<00:00, 1.84it/s, loss=-8.75]
=> Saving checkpoint Got 0.9468228816986084 with accuracy 94.68 Dice score: 1.2025865316390991
Epoch: 28/50
100%|██████████| 1/1 [00:00<00:00, 1.92it/s, loss=-8.6]
=> Saving checkpoint Got 0.942562460899353 with accuracy 94.26 Dice score: 1.3215676546096802
Epoch: 29/50
100%|██████████| 1/1 [00:00<00:00, 1.95it/s, loss=-9.3]
=> Saving checkpoint Got 0.9366874694824219 with accuracy 93.67 Dice score: 1.4410816431045532
Epoch: 30/50
100%|██████████| 1/1 [00:00<00:00, 1.85it/s, loss=-9.37]
=> Saving checkpoint Got 0.9291979074478149 with accuracy 92.92 Dice score: 1.5965386629104614
Epoch: 31/50
100%|██████████| 1/1 [00:00<00:00, 1.89it/s, loss=-9.29]
=> Saving checkpoint Got 0.9251145720481873 with accuracy 92.51 Dice score: 1.7157979011535645
Epoch: 32/50
100%|██████████| 1/1 [00:00<00:00, 1.89it/s, loss=-9]
=> Saving checkpoint Got 0.9198125004768372 with accuracy 91.98 Dice score: 1.7378650903701782
Epoch: 33/50
100%|██████████| 1/1 [00:00<00:00, 1.88it/s, loss=-9.82]
=> Saving checkpoint Got 0.9136666655540466 with accuracy 91.37 Dice score: 1.758676290512085
Epoch: 34/50
100%|██████████| 1/1 [00:00<00:00, 1.85it/s, loss=-9.84]
=> Saving checkpoint Got 0.9044270515441895 with accuracy 90.44 Dice score: 1.7790669202804565
Epoch: 35/50
100%|██████████| 1/1 [00:00<00:00, 1.92it/s, loss=-10.4]
=> Saving checkpoint Got 0.8995833396911621 with accuracy 89.96 Dice score: 1.7764993906021118
Epoch: 36/50
100%|██████████| 1/1 [00:00<00:00, 1.95it/s, loss=-10.7]
=> Saving checkpoint Got 0.8992916345596313 with accuracy 89.93 Dice score: 1.775970697402954
Epoch: 37/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-10.8]
=> Saving checkpoint Got 0.8976249694824219 with accuracy 89.76 Dice score: 1.77357017993927
Epoch: 38/50
100%|██████████| 1/1 [00:00<00:00, 1.95it/s, loss=-10.5]
=> Saving checkpoint Got 0.8952499628067017 with accuracy 89.52 Dice score: 1.782752513885498
Epoch: 39/50
100%|██████████| 1/1 [00:00<00:00, 1.90it/s, loss=-10.8]
=> Saving checkpoint Got 0.8901249766349792 with accuracy 89.01 Dice score: 1.7879419326782227
Epoch: 40/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-11]
=> Saving checkpoint Got 0.8890937566757202 with accuracy 88.91 Dice score: 1.7865288257598877
Epoch: 41/50
100%|██████████| 1/1 [00:00<00:00, 1.90it/s, loss=-11.2]
=> Saving checkpoint Got 0.8915520906448364 with accuracy 89.16 Dice score: 1.7899266481399536
Epoch: 42/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-11.1]
=> Saving checkpoint Got 0.8923645615577698 with accuracy 89.24 Dice score: 1.7993781566619873
Epoch: 43/50
100%|██████████| 1/1 [00:00<00:00, 1.96it/s, loss=-10.9]
=> Saving checkpoint Got 0.887333333492279 with accuracy 88.73 Dice score: 1.804529070854187
Epoch: 44/50
100%|██████████| 1/1 [00:00<00:00, 1.85it/s, loss=-11.9]
=> Saving checkpoint Got 0.8823854327201843 with accuracy 88.24 Dice score: 1.7975029945373535
Epoch: 45/50
100%|██████████| 1/1 [00:00<00:00, 1.91it/s, loss=-11.2]
=> Saving checkpoint Got 0.8859270811080933 with accuracy 88.59 Dice score: 1.802530288696289
Epoch: 46/50
100%|██████████| 1/1 [00:00<00:00, 1.92it/s, loss=-11.6]
=> Saving checkpoint Got 0.890500009059906 with accuracy 89.05 Dice score: 1.8090462684631348
Epoch: 47/50
100%|██████████| 1/1 [00:00<00:00, 1.90it/s, loss=-11.7]
=> Saving checkpoint Got 0.9020833373069763 with accuracy 90.21 Dice score: 1.8257639408111572
Epoch: 48/50
100%|██████████| 1/1 [00:00<00:00, 1.81it/s, loss=-12.1]
=> Saving checkpoint Got 0.9160521030426025 with accuracy 91.61 Dice score: 1.8463400602340698
Epoch: 49/50
100%|██████████| 1/1 [00:00<00:00, 1.91it/s, loss=-12]
=> Saving checkpoint Got 0.925125002861023 with accuracy 92.51 Dice score: 1.859954833984375
Epoch: 50/50
100%|██████████| 1/1 [00:00<00:00, 1.93it/s, loss=-11.4]
=> Saving checkpoint Got 0.933968722820282 with accuracy 93.40 Dice score: 1.873431921005249
As you can see, the 50th dice score is 1.873431921005249. Why is the dice score more than 1?
| The equation you are using to calculate the dice coefficient is wrong.
This will work.
dice_score += (2 * (preds * y).sum()) / (2 * (preds * y).sum()+ ((preds*y)<1).sum())
You can interpret it as 2 x correct_classified/(2 x correct_classified+wrong_classified).
Note that this only works in the binary case.
| https://stackoverflow.com/questions/72913658/ |
Is the output of GRU in Pytorch just the hidden states? | I understand how GRU works, but now I'm confused by the difference between "hidden" and "output" of GRU in Pytorch: is "output" just the hidden states of the GRU, or the hidden states after some transformations? If "output" is just the hidden states, why do we want both output and h_n as return value of GRU.forward, since we can just get h_n from the last element of output?
| According to the documentation:
output is a tensor of shape (batch_size, sequence_length, num_directions * hidden_size) (with batch_first=True), containing the last layer's hidden state at every time step;
h_n is a tensor of shape (num_directions * num_layers, batch_size, hidden_size), containing only the final hidden state, for every layer (and direction).
The first provides you with hidden states across the entire sequence, allowing you to use intermediate representations (e.g. for attention) or train for step-separable tasks (e.g. token-level classification).
The latter provides you with just a single summary vector per input sequence, which is handy if you're only interested in a sequence-level representation that doesn't involve token-wise attention.
There is no implicit hidden-to-output transformation.
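A small sketch demonstrating the relationship (here batch_first=True and unidirectional):
import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=16, num_layers=2, batch_first=True)
x = torch.randn(4, 10, 8)                      # (batch, seq_len, input_size)
output, h_n = gru(x)
print(output.shape)                            # torch.Size([4, 10, 16]): last layer, every step
print(h_n.shape)                               # torch.Size([2, 4, 16]): every layer, last step
print(torch.allclose(output[:, -1], h_n[-1]))  # True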
| https://stackoverflow.com/questions/72918576/ |
TypeError: got an unexpected keyword argument 'image' | I am getting TypeError: get_train_augs() got an unexpected keyword argument 'image'. I have my augmentation functions as follows:
Augmentation functions
def get_train_augs():
return A.Compose([
A.Resize(IMAGE_SIZE,IMAGE_SIZE),
A.HorizontalFlip(p = 0.5),
A.VerticalFlip(p = 0.5),
])
def get_valid_augs():
return A.Compose([
A.Resize(IMAGE_SIZE,IMAGE_SIZE),
])
Custom segmentation dataset class
class SegmentationDataset(Dataset):
def __init__(self, df, augmentations=None):
self.df = df
self.augmentations = augmentations
def __len__(self):
return len(self.df)
def __getitem__(self,idx):
row = self.df.iloc[idx]
image_path = row.images
mask_path = row.masks
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE) # (h, w, c)
mask = np.expand_dims(mask, axis = -1)
if self.augmentations is not None:
data = self.augmentations(image = image, mask = mask)
image = data['image']
mask = data['mask']
# (h, w, c) -> (c, h, w)
image = np.transpose(image,(2,0,1)).astype(np.float32)
mask = np.transpose(mask,(2,0,1)).astype(np.float32)
image = torch.Tensor(image)/255.0
mask = torch.round(torch.Tensor(mask)/255.0)
return image, mask
I create trainset like this:
trainset = SegmentationDataset(train_df, get_train_augs)
validset = SegmentationDataset(valid_df, get_valid_augs)
But when I access a random index, I get an error:
idx = 3
image, mask = trainset[idx]
The error I am getting is:
TypeError Traceback (most recent call last)
<ipython-input-28-9b83781b7e3d> in <module>()
1 idx = 3
2
----> 3 image, mask = trainset[idx]
4
5 helper.show(image, mask)
<ipython-input-25-39872478644d> in __getitem__(self, idx)
20
21 if self.augmentations is not None:
---> 22 data = self.augmentations(image = image, mask = mask)
23
24 image = data['image']
TypeError: get_train_augs() got an unexpected keyword argument 'image'
| You are passing the functions themselves instead of the Compose objects they return; call them:
trainset = SegmentationDataset(train_df, get_train_augs())
validset = SegmentationDataset(valid_df, get_valid_augs())
| https://stackoverflow.com/questions/72921953/ |
Loss is Nan - PyTorch | I'm trying to train a simple neural net with the following architecture:
class ModifiedNet(nn.Module):
def __init__(self, num_inputs, num_outputs):
super(ModifiedNet, self).__init__()
self.linear = nn.Linear(num_inputs, 1000)
self.linear2 = nn.Linear(in_features=1000, out_features=num_outputs)
def forward(self, input):
input = input.view(-1, num_inputs) # reshape input to batch x num_inputs
output = self.linear(input)
output = self.linear2(output)
return output
The dataset is MNIST (num_inputs=784 and num_outputs=10).
I'm trying to plot the loss (we're using CrossEntropy) for each learning rate (0.01, 0.1, 1, 10), but the loss is NaN when I reach LR=1.
From looking at similar questions, I saw that you're supposed to lower the LR, but my task is to measure it with the given ones.
What am I doing wrong?
This is the code for the train and test:
# train and test functions
def train(epoch, network, optimizer=None):
losses = list()
network.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = Variable(data), Variable(target)
if optimizer is not None:
optimizer.zero_grad()
output = network(data)
loss = F.cross_entropy(output, target).to(torch.float64)
losses.append(loss.item())
loss.backward()
if optimizer is not None:
optimizer.step()
if batch_idx % 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
return np.mean(np.array(losses))
def test(network):
network.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
#data, target = Variable(data, volatile=True), Variable(target)
output = network(data)
test_loss += F.cross_entropy(output, target, reduction='sum').to(torch.double).item() # sum up batch loss
#test_loss += F.cross_entropy(output, target, sum=True).item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
return test_loss
and this is where I'm looping over the learning rates:
learning_rates = [0.01, 0.1, 1, 10]
for learning_rate in learning_rates:
net = ModifiedNet(num_inputs, num_outputs)
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
train_losses = dict()
for epoch_idx in range(10):
train_losses[epoch_idx] = train(epoch_idx, net, optimizer)
plot_graph(list(train_losses.keys()), list(train_losses.values()), "epoch", "train loss", str(learning_rate))
And this is the original question:
Retrain the model for 10 epochs with each of the learning rates in the
set {0.01, 0.1, 1, 10} and test the resulting model. Create a figure
and plot the loss curves of each of the four runs for comparison.
Explain the obtained (train and test) results.
Also, the net architecture is a given in the question, so I can't change it.
| I had a silly bug as usual: I wasn't using an activation function between the linear layers.
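A sketch of what the fixed forward presumably looks like (the missing nonlinearity goes between the two linear layers):
def forward(self, input):
    input = input.view(-1, num_inputs)       # reshape input to batch x num_inputs
    output = torch.relu(self.linear(input))  # the activation that was missing
    output = self.linear2(output)
    return output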
| https://stackoverflow.com/questions/72922168/ |
Is there any replacement for tf.random_gamma in pytorch? | I'm converting a TensorFlow repository to PyTorch code. I came across this line of code:
tf.squeeze(tf.random_gamma(shape =(self.n_sample,),alpha=self.alpha+tf.to_float(self.B)))
I would like to know the equivalent of tf.random_gamma in PyTorch. I think torch.distributions.gamma.Gamma doesn't work the same way.
| It looks like torch.distributions.gamma.Gamma can be used in this case. Here is an example:
import torch
from torch.distributions.gamma import Gamma
def random_gamma(shape, alpha, beta=1.0):
alpha = torch.ones(shape) * torch.tensor(alpha)
beta = torch.ones(shape) * torch.tensor(beta)
gamma_distribution = Gamma(alpha, beta)
return gamma_distribution.sample()
print(random_gamma(shape=(10,), alpha=3.0))
Output:
tensor([2.7673, 1.5498, 6.5191, 5.2923, 3.3204, 3.9286, 1.4163, 1.2400, 3.9661, 1.7663])
The difference is that torch.distributions.gamma.Gamma requires full tensors for alpha and beta instead of a shape plus scalar values as in TF. Also, the TF version defaults beta to 1, which the example code imitates.
It makes sense to create the distribution instance only once, though, if the function will be called multiple times.
| https://stackoverflow.com/questions/72926041/ |
Issue between number of classes and shape of inputs in metric collection torch | I want to calculate some metrics with torchmetrics, but there is a problem:
ValueError: The implied number of classes (from shape of inputs) does not match num_classes.
The output is from UNet and the loss function is BCEWithLogitsLoss (binary segmentation)
channels = 1 because of grayscale img
Input shape: (batch_size, channels, h, w) torch.float32
Label shape: (batch_size, channels, h, w) torch.float32 for BCE
Output shape: (batch_size, channels, h, w): torch.float32
inputs, labels = batch
outputs = model(input)
loss = self.loss_function(outputs, labels)
prec = torchmetrics.Precision(num_classes=1)(outputs, labels.type(torch.int32))
| It seems that torchmetrics expects a different shape. Try to flatten both outputs and labels:
prec = torchmetrics.Precision(num_classes=1)(outputs.view(-1), labels.type(torch.int32).view(-1))
| https://stackoverflow.com/questions/72928902/ |
How can I define the activation function as a hyperparameter in PyTorch through RAY.Tune? | This is the link to the main page of the topic I'm asking about:
https://docs.ray.io/en/latest/tune/examples/tune-pytorch-cifar.html#tune-pytorch-cifar-ref
But unfortunately, there is no good documentation to answer all the questions.
Also, if you know how I can define nested cross validation in this environment, please tell me.
| I solved this problem as follows.
n_samples,n_features=X_train.shape
class NeuralNetwork (nn.Module):
def __init__(self,n_input_features,l1, l2,l3,config):
super (NeuralNetwork, self).__init__()
self.config = config
self.linear1=nn.Linear(n_input_features,4*math.floor(n_input_features/2)+l1)
self.linear2=nn.Linear(l1+4*math.floor(n_input_features/2),math.floor(n_input_features/2)+l2)
self.linear3=nn.Linear(math.floor(n_input_features/2)+l2,math.floor(n_input_features/3)+l3)
self.linear4=nn.Linear(math.floor(n_input_features/3)+l3,math.floor(n_input_features/6))
self.linear5=nn.Linear(math.floor(n_input_features/6),1)
self.a1 = self.config.get("a1")
self.a2 = self.config.get("a2")
self.a3 = self.config.get("a3")
self.a4 = self.config.get("a4")
@staticmethod
def activation_func(act_str):
if act_str=="tanh" or act_str=="sigmoid":
return eval("torch."+act_str)
elif act_str=="silu" or act_str=="relu" or act_str=="leaky_relu" or act_str=="gelu":
return eval("torch.nn.functional."+act_str)
def forward(self,x):
out=self.linear1(x)
out=self.activation_func(self.a1)(out.float())
out=self.linear2(out)
out=self.activation_func(self.a2)(out.float())
out=self.linear3(out)
out=self.activation_func(self.a3)(out.float())
out=self.linear4(out)
out=self.activation_func(self.a4)(out.float())
out=torch.sigmoid(self.linear5(out))
y_predicted=out
return y_predicted
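As a side note (not part of the original solution), the eval calls can be avoided with a plain lookup table, which Ray Tune can sample from just as easily; a sketch:
import torch
import torch.nn.functional as F

ACTIVATIONS = {
    "tanh": torch.tanh,
    "sigmoid": torch.sigmoid,
    "relu": F.relu,
    "leaky_relu": F.leaky_relu,
    "silu": F.silu,
    "gelu": F.gelu,
}

def activation_func(act_str):
    return ACTIVATIONS[act_str]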
| https://stackoverflow.com/questions/72929703/ |
Masking the columns of pytorch matrix | I have a matrix of shape (batch_size, N, N) and a masking tensor of shape (batch_size, N).
I want to put -infinity values only for the columns (and not rows) of the matrix, according to the given mask.
| The solution is to expand the mask along the row dimension, so that it selects whole columns, and then fill those columns:
matrix[mask.unsqueeze(1).repeat(1, mask.shape[-1], 1) != 1] = float('-inf')
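A broadcasting-based alternative sketch (assuming mask holds 1 for columns to keep and 0 for columns to mask out):
masked = matrix.masked_fill(mask.unsqueeze(1) == 0, float('-inf'))
Here mask.unsqueeze(1) has shape (batch_size, 1, N), so it broadcasts over the row dimension and the fill applies to whole columns.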
| https://stackoverflow.com/questions/72932419/ |
PyTorch learning scheduler order changes loss in a drastic way | I'm a beginner at PyTorch and am trying to train an MNIST model based on a custom neural network class. My learning rate scheduler, loss function and optimizer are:
optimizer = optim.Adam(model.parameters(), lr=0.003)
loss_fn = nn.CrossEntropyLoss()
exp_lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
I'm also using a learning rate scheduler. Initially, I had my training loop like this:
# this training gives high loss and it doesn't varies that much
def training(epochs):
model.train()
for batch_idx, (imgs, labels) in enumerate(train_loader):
imgs = imgs.to(device=device)
labels = labels.to(device=device)
optimizer.zero_grad()
outputs = model(imgs)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
exp_lr_scheduler.step() # inside the loop and after the optimizer
if (batch_idx + 1)% 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, (batch_idx + 1) * len(imgs), len(train_loader.dataset),
100. * (batch_idx + 1) / len(train_loader), loss.data))
But this training was not efficient and my loss was almost the same in every epoch.
Then, I changed my training function to this in the end:
# this training works perfectly
def training(epochs):
model.train()
exp_lr_scheduler.step() # out of the loop but before optimizer step
for batch_idx, (imgs, labels) in enumerate(train_loader):
imgs = imgs.to(device=device)
labels = labels.to(device=device)
optimizer.zero_grad()
outputs = model(imgs)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
if (batch_idx + 1)% 100 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, (batch_idx + 1) * len(imgs), len(train_loader.dataset),
100. * (batch_idx + 1) / len(train_loader), loss.data))
And now, it's working correctly. I just don't get the reason for this.
I have two queries:
Shouldn't exp_lr_scheduler.step() be inside the for loop so that it also gets updated with every epoch?; and
The latest PyTorch version says to call exp_lr_scheduler.step() after optimizer.step(), but doing this in my training function gives me a worse loss.
What can be the reason or am I doing it wrong?
| StepLR multiplies the learning rate by gamma every step_size calls to scheduler.step(). With step_size=7 and gamma=0.1, that means the learning rate gets 10 times smaller every 7 steps. In your first snippet the scheduler stepped once per batch, so the learning rate shrank tenfold every 7 batches and quickly became vanishingly small, which is why the loss barely moved.
Have you tried increasing the starting learning rate? I would try 0.1 or 0.01. I think the problem could also be the size of the starting learning rate, since it is already quite small. That means the gradient descent algorithm (or its derivatives, such as Adam) cannot move towards the minimum because the step is too small, so your results keep being the same (at the same point of the function being minimized).
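For reference, a sketch of the usual placement with a per-epoch StepLR (the scheduler stepped once per epoch, after that epoch's optimizer steps):
for epoch in range(epochs):
    for imgs, labels in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(imgs), labels)
        loss.backward()
        optimizer.step()
    exp_lr_scheduler.step()  # once per epoch, after optimizer.step()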
Hope it helps.
| https://stackoverflow.com/questions/72935355/ |
select a specific GPU to use in jupyter notebook | I'm coding on a .ipynb file on a linux server.
The Linux server I use has multiple GPUs, but I should only use an idle GPU so as not to accidentally abort others' programs.
I already know that for a regular .py file we can add instructions at the command line to choose a GPU (e.g. export CUDA_VISIBLE_DEVICES=#), but will it work for a Jupyter notebook? If not, how can I specify a GPU to work on?
| You have to address the device by name. For instance, there may be 3 GPU devices available, namely "cuda:0", "cuda:1" and "cuda:2". To choose the third one you need to run the following code:
if torch.cuda.is_available():
dev = "cuda:2"
else:
dev = "cpu"
device = torch.device(dev)
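Setting CUDA_VISIBLE_DEVICES also works from inside a notebook, as long as it happens before CUDA is initialized (ideally before importing torch); a sketch:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # expose only GPU 2; it then appears as cuda:0

import torch
print(torch.cuda.device_count())  # 1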
| https://stackoverflow.com/questions/72935704/ |
How to use yolov5 tflite export with OpenCV | I have exported a tflite file from Yolov5 and I got the output data using the code below:
import numpy as np
import tensorflow as tf
from PIL import Image
import os
img = Image.open(os.path.join('dataset', 'images','val','IMG_6099.JPG'))
img = img.resize((256,256),Image.ANTIALIAS)
numpydata = np.asarray(img)
interpreter = tf.lite.Interpreter(model_path="yolov5s-fp16.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
input_data = np.array(img,dtype=np.float32)
input_data = tf.expand_dims(input_data, 0)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
Printing output_data gives:
[[[1.6754180e-02 3.2771632e-02 8.4546164e-02 ... 2.2025524e-05
3.0189141e-05 6.1972853e-05]
[1.5505254e-02 3.5847023e-02 9.6953809e-02 ... 1.9333076e-05
1.5587271e-05 3.6931968e-05]
[1.6107641e-02 3.6390714e-02 8.2990780e-02 ... 1.6197217e-05
1.4623029e-05 3.6216315e-05]
...
[8.6931992e-01 8.8494051e-01 2.4040593e-01 ... 3.1457843e-05
2.4052188e-05 2.2471884e-05]
[8.6244017e-01 9.0521729e-01 4.4481179e-01 ... 5.1936011e-05
3.9207229e-05 3.5609013e-05]
[8.6841702e-01 9.0255147e-01 7.0057535e-01 ... 1.0812500e-04
1.0073676e-04 7.7818921e-05]]]
What are these numbers? And, more importantly, how can I show the results on the image?
I have also already seen this post.
and here is my code trying to capture objects in real time:
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
print(ret)
frame = cv2.resize(frame, (256 , 256))
for i in range(len(scores)):
if ((scores[i] > 0.1) and (scores[i] <= 1.0)):
H = frame.shape[0]
W = frame.shape[1]
xmin = int(max(1,(xyxy[0][i] * W)))
ymin = int(max(1,(xyxy[1][i] * H)))
xmax = int(min(H,(xyxy[2][i] * W)))
ymax = int(min(W,(xyxy[3][i] * H)))
# cv2.rectangle(frame, (xmin,ymin), (xmax,ymax), (10, 255, 0), 2)
plt.imshow(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
| The post you linked already says it clearly:
[x, y, w, h, conf, class0, class1, ...] — 85 columns in total, and many detected objects (rows).
Columns 0-3 are the boxes, column 4 is the confidence, and the other 80 are the class scores.
You need to filter the rows by confidence, otherwise you will get a lot of falsely detected objects.
Also, [x, y, w, h] is not at the real scale of your input image.
To obtain the real boxes, you may need to do some post-processing, e.g. rescaling, xywh-to-xyxy conversion, NMS (non-max suppression), etc.
You can check the detect.py and utils/general.py in Yolov5 source code.
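As an illustration (not YOLOv5's exact code), a NumPy sketch of the first two steps applied to the output_data from the question; the 0.25 threshold and the 256-pixel input size are assumptions:
import numpy as np

pred = output_data[0]              # (num_boxes, 85): x, y, w, h, conf, 80 class scores
pred = pred[pred[:, 4] > 0.25]     # keep rows above a confidence threshold
# normalized xywh (box centers) -> pixel xyxy
xyxy = np.empty((len(pred), 4))
xyxy[:, 0] = (pred[:, 0] - pred[:, 2] / 2) * 256   # xmin
xyxy[:, 1] = (pred[:, 1] - pred[:, 3] / 2) * 256   # ymin
xyxy[:, 2] = (pred[:, 0] + pred[:, 2] / 2) * 256   # xmax
xyxy[:, 3] = (pred[:, 1] + pred[:, 3] / 2) * 256   # ymax
NMS would still be needed on top of this to drop overlapping boxes.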
After you get the real boxes, draw box on image.
Below doc show you how to draw box on image using open cv
https://docs.opencv.org/4.x/dc/da5/tutorial_py_drawing_functions.html
If you search online, there are many examples showing how to draw with OpenCV.
| https://stackoverflow.com/questions/72936529/ |
Forward function with multiple outputs? | Typically the forward function of a PyTorch nn.Module computes and returns predictions for the inputs of the forward pass. Sometimes, though, intermediate computations can be useful to return. For example, for an encoder, one might need to return both the encoding and the reconstruction in the forward pass, to be used later in the loss.
Question: Can Pytorch's nn.Module's forward function, return multiple outputs? Eg a tuple of outputs consisting predictions and intermediate values?
Does such a return value not mess up the backward propagation or autograd?
If it does, how would you handle cases where multiple functions of input are incorporated in the loss function?
(The question should be valid in tensorflow too.)
| "The question should be valid in Tensorflow too", but PyTorch and Tensorflow are different frameworks. I can answer for PyTorch at least.
Yes you can return a tuple containing any final and or intermediate result. And this does not mess up back propagation since the graph is saved implicitly from the tensors outputs using callbacks and cached tensors.
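A minimal autoencoder-style sketch (illustrative, not from the original answer):
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(10, 3)
        self.decoder = nn.Linear(3, 10)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z  # reconstruction and encoding

model = AE()
x = torch.randn(4, 10)
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * z.pow(2).mean()
loss.backward()  # gradients flow through both outputs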
| https://stackoverflow.com/questions/72940912/ |
How to convert cv::Mat to torch::Tensor and feed it to libtorch model? | I read an image with cv2.imread() and am trying to feed it to a torch model in C++. It has datatype cv::Mat. I think I need to convert it to a tensor somehow and then use model.forward(), but I am confused about how to do it. Is there some function similar to torch.Tensor() in Python?
| The function torch::from_blob can be used to create a tensor view over the image data, like this:
torch::Tensor to_tensor(cv::Mat img) {
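    // note: torch::from_blob does not copy img.data, it only wraps it;
    // the cv::Mat must outlive the tensor, or call .clone() on the result to own the memory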
return torch::from_blob(img.data, { img.rows, img.cols, 3 }, torch::kUInt8);
}
| https://stackoverflow.com/questions/72944176/ |
Trouble Training Same Tensorflow Model in PyTorch | I have trained a model in Tensorflow and am having trouble replicating it in PyTorch. The Tensorflow model achieves near 100% accuracy (the task is simple), but the PyTorch model performs at random. I've spent a while trying to figure this out, and can't understand what the problem could be.
The model is trained for the task of binary classification. Given an input utterance describing a quadrant and a (x, y, z) coordinate, the model has to predict if the (x, z) portion of the coordinate is in the quadrant described. For example, if the input text was "quadrant 1" and the coordinate was (0.5, -, 0.5), then the prediction should be true, but if the region was "quadrant 2" with the same coordinate, then the prediction should be false.
I generated some data and trained the model in Tensorflow using this code:
x_data_placeholder = tf.placeholder(tf.float32, [FLAGS.batch_size, 1], name="x_data")
y_data_placeholder = tf.placeholder(tf.float32, [FLAGS.batch_size, 1], name="y_data")
z_data_placeholder = tf.placeholder(tf.float32, [FLAGS.batch_size, 1], name="z_data")
# text and labels placeholders
text_data = tf.placeholder(tf.int32, [FLAGS.batch_size, maxtextlength])
text_lengths = tf.placeholder(tf.int32, [FLAGS.batch_size])
y_labels_placeholder = tf.placeholder(tf.int64, [FLAGS.batch_size])
# encode text and coordinate
embeddings = tf.Variable(tf.random_uniform([100, embedding_size], -1, -1))
rnn_inputs = tf.nn.embedding_lookup(embeddings, text_data)
rnn_layers = [tf.compat.v1.nn.rnn_cell.LSTMCell(size, initializer=tf.compat.v1.keras.initializers.glorot_normal) for size in [256]]
multi_rnn_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(rnn_layers, state_is_tuple=True)
text_outputs, text_fstate = tf.compat.v1.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=rnn_inputs,
dtype=tf.float32, sequence_length=text_lengths)
# have fully connected layers to map them the input coordinates into the same dimension as the LSTM output layer from above
x_output_layer = tf.compat.v1.layers.dense(x_data_placeholder, units=FLAGS.fc_column_size, activation=tf.nn.relu, name='x_coordinate')
y_output_layer = tf.compat.v1.layers.dense(y_data_placeholder, units=FLAGS.fc_column_size, activation=tf.nn.relu, name='y_coordinate')
z_output_layer = tf.compat.v1.layers.dense(z_data_placeholder, units=FLAGS.fc_column_size, activation=tf.nn.relu, name='z_coordinate')
# add the representations
total_output_layer = x_output_layer + y_output_layer + z_output_layer + lstm_output_layer
# make the predictions with two fully connected layers
fc_1 = tf.compat.v1.layers.dense(total_output_layer, units=FLAGS.hidden_layer_size, activation=tf.nn.relu, name='fc_1')
logits = tf.compat.v1.layers.dense(fc_1, units=FLAGS.output_dims, activation=None, name='logits')
# train the model
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_labels_placeholder, logits=logits))
optimizer = tf.train.AdamOptimizer(learning_rate=FLAGS.learning_rate, epsilon=1e-7)
gradients, variables = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, FLAGS.gradient_clip_threshold)
optimize = optimizer.apply_gradients(zip(gradients, variables))
# then it'll be trained with sess.run ...
Now for the PyTorch replication:
class BaselineModel(nn.Module):
def __init__(self):
super(BaselineModel, self).__init__()
self.encode_x = nn.Linear(1, embed_size)
self.encode_y = nn.Linear(1, embed_size)
self.encode_z = nn.Linear(1, embed_size)
self._embeddings = nn.Embedding(vocab_size, self.embedding_table_size)
nn.init.uniform_(self._embeddings.weight, -1.0, 1.0)
self.num_layers = 1
self.rnn = nn.LSTM(self.embedding_table_size, self.hidden_size, batch_first=True)
self.fc_after_text_lstm = nn.Linear(self.hidden_size, 100)
self.fc = nn.Linear(100, 256)
self.fc_final = nn.Linear(256, 2)
self.relu_activation = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
def init_hidden(self, batch_size, device='cuda:0'):
# for LSTM, we need # of layers
h_0 = torch.zeros(1, batch_size, self.hidden_size).to(device)
c_0 = torch.zeros(1, batch_size, self.hidden_size).to(device)
return h_0, c_0
def forward(self, input_text, x_coordinate=None, y_coordinate=None, z_coordinate=None):
x_embed = self.relu_activation(self.encode_x(x_coordinate.cuda().to(torch.float32)).cuda())
y_embed = self.relu_activation(self.encode_y(y_coordinate.cuda().to(torch.float32))).cuda()
z_embed = self.relu_activation(self.encode_z(z_coordinate.cuda().to(torch.float32))).cuda()
embeds = self._embeddings(input_text)
embedding, hidden = self.rnn(embeds, self.hidden)
text_fc = self.relu_activation(self.fc_after_text_lstm(embedding[:, -1]))
representations_so_far_added = torch.sum(torch.stack([text_fc, x_embed, y_embed, z_embed]), dim=0)
pre_final_embedding = self.relu_activation(self.fc(representations_so_far_added))
return self.fc_final(pre_final_embedding )
### training code
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, eps=1e-7)
criterion = nn.CrossEntropyLoss()
for input_text, x_coordinate, y_coordinate, z_coordinate, targets, train_data:
optimizer.zero_grad()
pred = model(input_text, x_coordinate=x_coordinate, y_coordinate=y_coordinate, z_coordinate=z_coordinate)
loss = criterion(pred.float(), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 5.0)
optimizer.step()
scheduler.step()
# accuracy evaluation code, this is evaluated over the entire epoch
pred_idx = F.log_softmax(pred, dim=1)
target_labels = targets.cpu().int()
pred_labels = torch.argmax(pred_idx, dim=-1).cpu().data.int()
curr_acc = skm.accuracy_score(target_labels, pred_labels)
If anyone can spot any issue with the PyTorch implementation or maybe tell me what could be wrong, that would be much appreciated! I also tried to load the weights of the Tensorflow model into all the appropriate layers, and performance still struggles in PyTorch! Thanks in advance!
EDIT:
I have created a minimally reproducible example, because I still cannot figure out what the problem is. Any help would still be appreciated!
import torch
import torch.nn as nn
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
lr = 0.0005
n_epochs = 10
input_dim = 4
hidden_dim = 128
layer_dim = 2
output_dim = 2
batch_size = 50
class FeatureDataSet(torch.utils.data.Dataset):
def __init__(self, x_train, y_train, x_coordinates):
self.x_train = torch.tensor(x_train, dtype=torch.long)
self.y_train = torch.tensor(y_train)
self.x_coordinates = torch.tensor(x_coordinates, dtype=torch.float32)
def __len__(self):
return len(self.y_train)
def __getitem__(self, idx):
return self.x_train[idx], self.y_train[idx], self.x_coordinates[idx]
class RNN(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim, batch_size):
super().__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
# linear layer to encode the coordinate
self.encode_x = nn.Linear(1, hidden_dim).cuda()
self._embeddings = nn.Embedding(40, 100).cuda()
# hidden_dim is 128
# layer_dim is 2
self.lstm = nn.LSTM(100, hidden_dim, layer_dim, batch_first=True).cuda()
self.fc = nn.Linear(2 * hidden_dim, output_dim).cuda()
self.batch_size = batch_size
self.hidden = None
def init_hidden(self, x):
h0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
c0 = torch.zeros(self.layer_dim, x.size(0), self.hidden_dim)
return [t.cpu() for t in (h0, c0)]
def forward(self, x, x_coordinate):
#initializing the hidden states
h0, c0 = self.init_hidden(x)
embeds = self._embeddings(x)
out, (hn, cn) = self.lstm(embeds.cuda(), (h0.cuda(), c0.cuda()))
x_embed = F.relu(self.encode_x(x_coordinate.cuda().to(torch.float32)).cuda())
representations_so_far_added = torch.cat([out[:, -1, :], x_embed], dim=1)
out = self.fc(representations_so_far_added)
return out
model = RNN(input_dim, hidden_dim, layer_dim, output_dim, batch_size)
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=0.001)
print('Start model training')
import sklearn.metrics as skm
import torch.nn.functional as F
x_train = []
x_coordinates = []
y_train = []
for i in range(10000):
# create the data. if x_coordinate > 0 and the sentence says that (represented by [1, 5, 6, 8]), then we should predict positive else negative (if the x_coordinate > 0)
# same applies if the x_coordinate < 0, just that the sentence is now [1, 5, 6, 9]
if np.random.randint(0, 2) == 0:
if np.random.randint(0, 2) == 0:
# x coordinate > 0
x_train.append([1, 5, 6, 8])
x_coordinates.append([round(np.random.uniform(0.01, 1.00, 1)[0], 2)])
y_train.append(1.0)
else:
# x coordinate > 0 negative
x_train.append([1, 5, 6, 8])
x_coordinates.append([round(np.random.uniform(-1.00, 0.00, 1)[0], 2)])
y_train.append(0.0)
else:
if np.random.randint(0, 2) == 0:
# x coordinate < 0
x_train.append([1, 5, 6, 9])
x_coordinates.append([round(np.random.uniform(-1.00, 0.00, 1)[0], 2)])
y_train.append(1.0)
else:
# x coordinate < 0 negative
x_train.append([1, 5, 6, 9])
x_coordinates.append([round(np.random.uniform(0.01, 1.00, 1)[0], 2)])
y_train.append(0.0)
# print a sample of data
print(x_train[:10])
print(y_train[:10])
print(x_coordinates[:10])
# create a dataloader
trainingDataset = FeatureDataSet(x_train=x_train, y_train=y_train, x_coordinates=x_coordinates)
train_loader = torch.utils.data.DataLoader(dataset=trainingDataset, batch_size=batch_size, shuffle=True)
# for each epoch
for epoch in range(1, n_epochs + 1):
acc_all = []
# each batch
for i, (x_batch, y_batch, x_coord_batch) in enumerate(train_loader):
x_batch = x_batch.to(device)
y_batch = y_batch.to(device)
x_coord_batch = x_coord_batch.to(device)
opt.zero_grad()
# pass in the text (x_batch) and coordinate (x_coord_batch)
out = model(x_batch, x_coordinate=x_coord_batch)
loss = criterion(out.float(), y_batch.type(torch.LongTensor).cuda())
loss.backward()
opt.step()
pred_idx = F.log_softmax(out, dim=1)
target_labels = y_batch.cpu().int()
pred_labels = torch.argmax(pred_idx, dim=-1).cpu().data.int()
curr_acc = skm.accuracy_score(target_labels, pred_labels)
acc_all.append(curr_acc)
print(np.mean(acc_all))
| I suspect there are some mistakes in the dataset implementation of your PyTorch version.
I tried your PyTorch BaselineModel on both the dataset in your "minimally reproducible example" and my own dataset constructed according to your description, and found that it works fine.
The following is my code for testing on my own dataset. Note that I added several hyperparameters to BaselineModel to make it run. I got accuracy over 99%.
import random
import torch
import torch.nn as nn
import numpy as np
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
lr = 0.0005
n_epochs = 100
input_dim = 4
hidden_dim = 128
layer_dim = 2
output_dim = 2
batch_size = 50
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
class FeatureDataSet(torch.utils.data.Dataset):
def __init__(self, x_train, y_train, x_coordinates, y_coordinates, z_coordinates):
self.x_train = torch.tensor(x_train, dtype=torch.long)
self.y_train = torch.tensor(y_train)
self.x_coordinates = torch.tensor(x_coordinates, dtype=torch.float32)
self.y_coordinates = torch.tensor(y_coordinates, dtype=torch.float32)
self.z_coordinates = torch.tensor(z_coordinates, dtype=torch.float32)
def __len__(self):
return len(self.y_train)
def __getitem__(self, idx):
return self.x_train[idx], self.y_train[idx], self.x_coordinates[idx], self.y_coordinates[idx], self.z_coordinates[idx]
class BaselineModel(nn.Module):
def __init__(self):
super(BaselineModel, self).__init__()
vocab_size = 40
self.hidden_size = 100
self.embedding_table_size = self.hidden_size
self.encode_x = nn.Linear(1, self.hidden_size)
self.encode_y = nn.Linear(1, self.hidden_size)
self.encode_z = nn.Linear(1, self.hidden_size)
self._embeddings = nn.Embedding(vocab_size, self.embedding_table_size)
nn.init.uniform_(self._embeddings.weight, -1.0, 1.0)
self.num_layers = 1
self.rnn = nn.LSTM(self.embedding_table_size, self.hidden_size, batch_first=True)
self.fc_after_text_lstm = nn.Linear(self.hidden_size, 100)
self.fc = nn.Linear(100, 256)
self.fc_final = nn.Linear(256, 2)
self.relu_activation = nn.ReLU()
self.softmax = nn.Softmax(dim=1)
self.hidden = self.init_hidden(batch_size)
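# note: the hidden state is created once here; the question's version called
# self.rnn(embeds, self.hidden) without ever initializing self.hidden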
def init_hidden(self, batch_size, device='cuda:0'):
# for LSTM, we need # of layers
h_0 = torch.zeros(1, batch_size, self.hidden_size).to(device)
c_0 = torch.zeros(1, batch_size, self.hidden_size).to(device)
return h_0, c_0
def forward(self, input_text, x_coordinate=None, y_coordinate=None, z_coordinate=None):
x_embed = self.relu_activation(self.encode_x(x_coordinate.cuda().to(torch.float32)).cuda())
y_embed = self.relu_activation(self.encode_y(y_coordinate.cuda().to(torch.float32))).cuda()
z_embed = self.relu_activation(self.encode_z(z_coordinate.cuda().to(torch.float32))).cuda()
embeds = self._embeddings(input_text)
embedding, hidden = self.rnn(embeds, self.hidden)
text_fc = self.relu_activation(self.fc_after_text_lstm(embedding[:, -1]))
representations_so_far_added = torch.sum(torch.stack([text_fc, x_embed, y_embed, z_embed]), dim=0)
pre_final_embedding = self.relu_activation(self.fc(representations_so_far_added))
return self.fc_final(pre_final_embedding)
# model = RNN(input_dim, hidden_dim, layer_dim, output_dim, batch_size)
model = BaselineModel().cuda()
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=0.001)
print('Start model training')
import sklearn.metrics as skm
import torch.nn.functional as F
x_train = []
x_coordinates = []
y_coordinates = []
z_coordinates = []
y_train = []
for i in range(10000):
x_coordinate = round(np.random.uniform(-1, 1.00, 1)[0], 2)
y_coordinate = round(np.random.uniform(-1, 1.00, 1)[0], 2)
z_coordinate = round(np.random.uniform(-1, 1.00, 1)[0], 2)
x_coordinates.append([x_coordinate])
y_coordinates.append([y_coordinate])
z_coordinates.append([z_coordinate])
if np.random.randint(0, 2) == 0: # positive example
if x_coordinate <= 0 and z_coordinate <= 0:
x_train.append([1, 5, 6, 8])
elif x_coordinate <= 0 and z_coordinate > 0:
x_train.append([1, 5, 6, 9])
elif x_coordinate > 0 and z_coordinate <= 0:
x_train.append([1, 5, 6, 10])
elif x_coordinate > 0 and z_coordinate > 0:
x_train.append([1, 5, 6, 11])
y_train.append(1.0)
else:
if x_coordinate <= 0 and z_coordinate <= 0:
x_train.append(random.choice([[1, 5, 6, 9], [1, 5, 6, 10], [1, 5, 6, 11]]))
elif x_coordinate <= 0 and z_coordinate > 0:
x_train.append(random.choice([[1, 5, 6, 8], [1, 5, 6, 10], [1, 5, 6, 11]]))
elif x_coordinate > 0 and z_coordinate <= 0:
x_train.append(random.choice([[1, 5, 6, 8], [1, 5, 6, 9], [1, 5, 6, 11]]))
elif x_coordinate > 0 and z_coordinate > 0:
x_train.append(random.choice([[1, 5, 6, 8], [1, 5, 6, 9], [1, 5, 6, 10]]))
y_train.append(0.0)
# print a sample of data
print(x_train[:10])
print(y_train[:10])
print(x_coordinates[:10])
print(y_coordinates[:10])
print(z_coordinates[:10])
# create a dataloader
trainingDataset = FeatureDataSet(x_train=x_train, y_train=y_train, x_coordinates=x_coordinates, y_coordinates=y_coordinates, z_coordinates=z_coordinates)
train_loader = torch.utils.data.DataLoader(dataset=trainingDataset, batch_size=batch_size, shuffle=True)
# for each epoch
loss_meter = AverageMeter()
for epoch in range(1, n_epochs + 1):
acc_all = []
# each batch
loss_meter.reset()
for i, (x_batch, y_batch, x_coord_batch, y_coord_batch, z_coord_batch) in enumerate(train_loader):
x_batch = x_batch.to(device)
y_batch = y_batch.to(device)
x_coord_batch = x_coord_batch.to(device)
y_coord_batch = y_coord_batch.to(device)
z_coord_batch = z_coord_batch.to(device)
opt.zero_grad()
# pass in the text (x_batch) and coordinate (x_coord_batch)
out = model(x_batch, x_coordinate=x_coord_batch, y_coordinate=y_coord_batch, z_coordinate=z_coord_batch)
loss = criterion(out.float(), y_batch.type(torch.LongTensor).cuda())
loss.backward()
opt.step()
pred_idx = F.log_softmax(out, dim=1)
target_labels = y_batch.cpu().int()
pred_labels = torch.argmax(pred_idx, dim=-1).cpu().data.int()
curr_acc = skm.accuracy_score(target_labels, pred_labels)
acc_all.append(curr_acc)
loss_meter.update(loss.item())
print(np.mean(acc_all))
print("loss is %f" % loss_meter.val)
As for the "minimally reproducible example", I think it is quite reasonable that the RNN model doesn't work, as I have stated in the comments. I suspect TensorFlow could not fit it either, although I have not tried. Your "minimally reproducible example" may be unrelated to your main problem.
| https://stackoverflow.com/questions/72944803/ |
What is called when `log_every_n_steps` of a pytorch lightning trainer is reached? | PL lightning trainer offers a parameter log_every_n_steps which it states controls "How often to add logging rows", however what is the function actually being called here? We can do our own logging every step with the example code below
def training_step(self, batch, batch_idx):
self.log("performance", {"acc": acc, "recall": recall})
But is the trainer doing the same at the every nth step?
| log_every_n_steps will make the trainer log every n training batches. This value is used by self.log if on_step=True. If you want a less bloated log file, with results per epoch only, you can do self.log(metrics, on_step=False, on_epoch=True)
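For illustration, a minimal sketch of how the two logging modes look inside a LightningModule (the metric names and the compute_loss helper are made up):

def training_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)  # hypothetical helper
    # written every `log_every_n_steps` batches
    self.log("train_loss_step", loss, on_step=True, on_epoch=False)
    # accumulated and written once per epoch
    self.log("train_loss_epoch", loss, on_step=False, on_epoch=True)
    return loss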
| https://stackoverflow.com/questions/72947846/ |
How to compute the outer sum (similar to outer product) | Given tensors x and y, each with shape (num_batches, d), how can I use PyTorch to compute the sum of every combination of x and y within a batch?
This is similar to outer product, except we don't want to multiply, but sum. (This implies that I could solve this by exponentiating, outer product, and taking the log, but of course that has numerical and performance disadvantages).
It could be done via cartesian product and then summing each of the combinations.
Essentially, I'd like osum[b, i, j] == x[b, i] + y[b, j]. Can PyTorch do this in tensors, without loops?
| This can easily be done, by introducing singleton dimensions into x and y and broadcasting along these singleton dimensions:
osum = x[..., None] + y[:, None, :]
For example:
x = torch.arange(6).view(2,3)
y = x * 10
osum = x[..., None] + y[:, None, :]
Results with:
tensor([[[ 0, 10, 20],
[ 1, 11, 21],
[ 2, 12, 22]],
[[33, 43, 53],
[34, 44, 54],
[35, 45, 55]]])
Update (July, 14th): How it works?
You have two tensors, x and y of shape bxn, and you want to compute:
osum[b,i,j] = x[b, i] + y[b, j]
We can, conceptually, create new variables xx and yy by repeating each element of x and y along a third dimension, such that:
xx[b, i, j] == x[b, i] # for all j
yy[b, i, j] == y[b, j] # for all i
With these new variables, it is easy to see that:
osum = xx + yy
since, by definition
osum[b, i, j] == xx[b, i, j] + yy[b, i, j] == x[b, i] + y[b, j]
Now, you can use commands such as torch.expand or torch.repeat to explicitly create xx and yy - but why bother? since their elements are just trivial repetitions of the elements along specific dimensions, broadcasting does this implicitly for you.
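For illustration, a sketch that materializes xx and yy explicitly with torch.expand; it gives the same result, just with the repetition made visible:

b, n = x.shape
xx = x[..., None].expand(b, n, n)   # xx[b, i, j] == x[b, i]
yy = y[:, None, :].expand(b, n, n)  # yy[b, i, j] == y[b, j]
osum_explicit = xx + yy
assert torch.equal(osum_explicit, x[..., None] + y[:, None, :])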
| https://stackoverflow.com/questions/72959617/ |
How to make the sum of output to 1 | The sum of my (PyTorch) model's output isn't 1. And this is the structure of the model.
LSTM(4433, 64)
LSTM(64, 64)
Linear(64, 4433)
Sigmoid()
And this is the predicted output of the model.
Input
[1, 0, 0, β¦, 0, 0]
Output
[.7842, .5, .5, β¦, .5, .5]
Do you know any function that can make its sum 1?
| The sigmoid activation function maps every input to a value in [0, 1] independently, without taking the other elements of the input vector into account. Softmax applies a similar transformation, but the resulting vector sums to 1.
TL;DR: use softmax instead of sigmoid.
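For instance, a minimal sketch with a made-up logits tensor of the question's size:

import torch
import torch.nn as nn

out = torch.randn(1, 4433)      # hypothetical raw model output (logits)
probs = nn.Softmax(dim=1)(out)  # instead of nn.Sigmoid()
print(probs.sum(dim=1))         # tensor([1.])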
| https://stackoverflow.com/questions/72959645/ |
Trouble Understanding ResNet Implementation | I'm having trouble understanding and replicating the original implementation of ResNet on the CIFAR-10 dataset, as described in the paper "Deep Residual Learning for Image Recognition". Specifically, I have a few questions about the following passage:
We use a weight decay of 0.0001 and momentum of 0.9,
and adopt the weight initialization in [13] and BN [16] but
with no dropout. These models are trained with a minibatch size of 128 on two GPUs. We start with a learning
rate of 0.1, divide it by 10 at 32k and 48k iterations, and
terminate training at 64k iterations, which is determined on
a 45k/5k train/val split. We follow the simple data augmentation in [24] for training: 4 pixels are padded on each side,
and a 32Γ32 crop is randomly sampled from the padded
image or its horizontal flip. For testing, we only evaluate
the single view of the original 32Γ32 image.
What does a minibatch size of 128 on two GPUs entail? Does this mean the batch size per GPU is 64?
How can I convert from iterations to epochs? Is the model trained for 64000 * 128/45000 = 182.04 epochs?
How can I implement the training and learning rate scheduling in PyTorch? Since 45000 isn't divisible by 128, should I drop the last 72 images every epoch? Also, since the 32k, 48k, and 64k milestones don't fall on a whole number of epochs, should I round them to the nearest epochs? Or is there a way to change the learning rate and terminate training in the middle of an epoch?
If anyone could point me in the right direction, I greatly appreciate it. I'm new to deep learning, so thank you for your help and kind understanding.
|
What does a minibatch size of 128 on two GPUs entail? Does this mean the batch size per GPU is 64?
When running on two GPUs on the same machine, the batch size is split between the GPUs, as you said. The gradient produced by both GPUs will be transferred, averaged and applied on one of the GPUs, or possibly on the CPU.
Here's more info: https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html
How can I convert from iterations to epochs? Is the model trained for 64000 * 128/45000 = 182.04 epochs?
I encourage everyone to think in terms of iterations rather than epochs. Each iteration equates to a single weight update, which is much more relevant to model convergence than an epoch is. If you think in epochs you have to adjust the number of epochs of training every time you try a different batch size. This isn't the case if you think in terms of iterations (aka training steps, or weight updates). But your formula is correct in computing epochs.
How can I implement the training and learning rate scheduling in PyTorch?
I think this PyTorch post answers the question; it looks like this was added to PyTorch (sorry for a non-authoritative answer here, I'm more familiar with TensorFlow):
https://forums.pytorchlightning.ai/t/training-for-a-set-number-of-iterations-without-setting-epochs/178
https://github.com/Lightning-AI/lightning/pull/5687
You can also just use epochs of course, and adjusting the learning rate doesn't have to happen exactly at the same point as the paper describes, as near as you can reasonably get with rounding error will work just fine.
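For illustration, here is a rough sketch of iteration-based training with the paper's schedule; model, train_loader and criterion are assumed to already exist:

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
# milestones are counted in scheduler.step() calls, i.e. in iterations here
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[32000, 48000], gamma=0.1)

def forever(loader):
    while True:
        yield from loader  # re-iterates (and reshuffles) the loader each pass

data_iter = forever(train_loader)
for step in range(64000):  # terminate at 64k iterations
    images, targets = next(data_iter)
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()  # one scheduler step per weight update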
| https://stackoverflow.com/questions/72960439/ |
Using SKORCH with PyCaret for Regression problems | Using the fantastic article https://towardsdatascience.com/pycaret-skorch-build-pytorch-neural-networks-using-minimal-code-57079e197f33 there is a great example of using SKORCH and PyCaret to do Classification problems, but I am having trouble getting it working for Regression problems.
import pycaret
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
from skorch import NeuralNetRegressor
from sklearn.pipeline import Pipeline
from skorch.helper import DataFrameTransformer
from pycaret.regression import *
from pycaret.datasets import get_data
data = get_data('boston')
target = "medv"
reg1 = setup(data = data,
target = target,
train_size = 0.8,
fold = 5,
session_id = 123,
silent = True)
class RegressorModule(nn.Module):
def __init__(
self,
num_units=100,
nonlin=F.relu,
):
super(RegressorModule, self).__init__()
self.num_units = num_units
self.nonlin = nonlin
self.dense0 = nn.Linear(14, num_units)
self.nonlin = nonlin
self.dense1 = nn.Linear(num_units, 10)
self.output = nn.Linear(10, 1)
def forward(self, X, **kwargs):
X = self.nonlin(self.dense0(X))
X = F.relu(self.dense1(X))
X = self.output(X)
return X
net_regr = NeuralNetRegressor(
RegressorModule,
max_epochs=20,
lr=0.1,
device='cuda'
)
nn_pipe = Pipeline(
[
("transform", DataFrameTransformer()),
("net", net_regr),
]
)
skorch_model = create_model(nn_pipe)
But it errors with:
ValueError: The target data shouldn't be 1-dimensional but instead
have 2 dimensions, with the second dimension having the same size as
the number of regression targets (usually 1). Please reshape your
target data to be 2-dimensional (e.g. y = y.reshape(-1, 1).
If I take the same data and normalise it, reshape it etc and pass that straight to SKORCH, it works fine, like so:
X = data.copy().to_numpy().astype(np.float32)
mean = X.mean(axis=0)
X -= mean
std = X.std(axis=0)
X /= std
y = data[target].to_numpy().astype(np.float32)
y = y.reshape(-1, 1)
net_regr.fit(X, y)
So the problem is somewhere in how the PyCaret (DataFrame-based) data is taken and converted by SKORCH for use in PyTorch: the y stays one-dimensional, which is fine for the classification model in the above link, but not for regression, where it needs to be 2D.
Is there any way I can intercept / transform the y?
Thanks :)
| This is mentioned in Pytorch Dataset with Skorch; on its own, though, that will not solve the problem. If you override the fit method of NeuralNetRegressor like:
class MyNet(NeuralNetRegressor):
def fit(self, X, y):
if y.ndim == 1:
y = y.values.reshape(-1, 1)
return super().fit(X, y)
net_regr = MyNet(
RegressorModule,
max_epochs=20,
lr=0.1,
train_split=None
)
it should work.
| https://stackoverflow.com/questions/72963666/ |
Combine values from multiple dataframe rows into lists for every user | I have this csv file below (products rated by users) which into pandas dataframe:
--------------------------------
User_id | Product_id | Rating |
--------------------------------
1 | 00 | 3 |
1 | 02 | 5 |
2 | 01 | 1 |
2 | 00 | 2 |
2 | 02 | 2 |
I want to change the dataframe so that it has the same number of rows as the source table above, but only two columns:
Column 1: needs to be a list of length L (L = total number of existing kinds of products), where the n-th value (n = product_id) in the list is the rating given by the user in this row to that product; all other values in the list need to be zeros
Column 2: should be a list of the same length L, where the n-th value equals the rating for the n-th product (n = product_id) for every product rated by this user anywhere in the table; all positions for products the user never rated need to be zeros
The desired result would be (consistent with the example above):
--------------------------------
User_id | col1 | col2 |
--------------------------------
1 | [3,0,0] | [3,0,5] |
1 | [0,0,5] | [3,0,5] |
2 | [0,1,0] | [2,1,2] |
2 | [2,0,0] | [2,1,2] |
2 | [0,0,2] | [2,1,2] |
I will greatly appreciate any help with this. Please do ask questions if i can make the question & explanation more clear.
| I managed to solve this, however it feels like a lot of expensive code & operations for something relatively simple. If you have any ideas how to simplify this, I'd appreciate it a lot.
df = pd.read_csv('interactionsv21test.csv')
number_of_products = df['product_id'].nunique()
#assign indexes to products https://stackoverflow.com/questions/38088652/pandas-convert-categories-to-numbers
df.product_id = pd.Categorical(df.product_id)
df['product_indx'] = df.product_id.cat.codes
print('source table')
print(df.sort_values(['user_id', 'product_id', 'product_indx'], ascending=True).head(n=3))
df1 = (df.groupby([df['user_id']])
.apply(lambda x: {int(i):int(k) for i,k in zip(x['product_indx'], x['rating'])})
.reset_index(name='rating'))
#add blank values for non existing dictionary values https://stackoverflow.com/questions/38987/how-do-i-merge-two-dictionaries-in-a-single-expression
df1['rating_y'] = (df1['rating'].apply(lambda x: {int(k): 0 for k in range(number_of_products )} | x))
df['rating_x'] = df.apply(lambda row: {row['product_indx']:row['rating']}, axis=1)
df['rating_x'] = (df['rating_x'].apply(lambda x: {int(k): 0 for k in range(number_of_products)} | x ))
df = df[['user_id', 'rating_x']].merge(df1[['user_id','rating_y']],how='inner',left_on=['user_id'],right_on=['user_id'])
pd.set_option('display.max_columns', 7)
pd.set_option('display.width', 1000)
print('final result')
print(df.head(n=3))
Output:
source table
user_id prod_id rating is_reviewed prod_indx
40 198 63 5 1 1
0 198 2590 4 1 41
5 198 6960 4 1 51
final result
user_id rating_x rating_y
...
40 198 {0: 0, 1: 5, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: ... {0: 0, 1: 5, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: ...
41 198 {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: ... {0: 0, 1: 5, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: ...
42 198 {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: ... {0: 0, 1: 5, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: ...
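(An aside, not from the original thread: one possible simplification of the same idea, sketched with groupby plus NumPy assignment; it assumes the df and the product_indx column built above.)

import numpy as np

L = number_of_products
def to_vec(indices, ratings):
    v = np.zeros(L)
    v[np.asarray(indices)] = np.asarray(ratings)
    return list(v)

# col2: one vector per user, shared by all of that user's rows
user_vecs = df.groupby('user_id').apply(lambda g: to_vec(g['product_indx'], g['rating']))
df['rating_y'] = df['user_id'].map(user_vecs)
# col1: a vector per row with a single non-zero entry
df['rating_x'] = [to_vec([i], [r]) for i, r in zip(df['product_indx'], df['rating'])]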
| https://stackoverflow.com/questions/72964625/ |
Numpy/Torch insert smallest value in case of collision | I have an empty numpy array, a list of indices, and a list of values associated with the indices. The issue is that there may be duplicates in the indices. In all these "collision" cases, I'd like the smallest value to be picked. Just wondering what is the best way to go about it.
Eg:
array = [0,0,0,0,0,0,0]
indices = [0, 0, 2, 3, 2, 4]
values = [1.0, 3.0, 3.5, 1.5, 2.5, 8.0]
Result:
out = [1.0, 0, 2.5, 1.5, 8.0, 0.0, 0.0]
| You can always implement something manually like:
import numpy as np
def index_reduce(arr, indices, out, reducer=min):
touched = np.zeros_like(out, dtype=np.bool_)
for i, x in enumerate(indices):
if not touched[x]:
out[x] = arr[i]
touched[x] = True
else:
out[x] = reducer(out[x], arr[i])
return out
which essentially loops through the indices and assigns the values of arr to out if not already touched (keeping track of this with the touched array), reducing the output with the specified reducer.
NOTE: The reducer function needs to be such that the final result can only depend on the current and previous value.
The usage of this would be:
indices = [0, 0, 2, 3, 2, 4]
values = [1.0, 3.0, 3.5, 1.5, 2.5, 8.0]
array = np.zeros(7)
index_reduce(values, indices, array)
# array([1. , 0. , 2.5, 1.5, 8. , 0. , 0. ])
If performances are of concern, you can also accelerate the above code with Numba with a simple decoration provided that also the values and indices inputs are NumPy arrays:
import numba as nb
index_reduce_nb = nb.njit(index_reduce)
indices = np.array([0, 0, 2, 3, 2, 4])
values = np.array([1.0, 3.0, 3.5, 1.5, 2.5, 8.0])
array = np.zeros(7)
index_reduce_nb(values, indices, array)
# array([1. , 0. , 2.5, 1.5, 8. , 0. , 0. ])
Benchmarks
The above solutions can be compared to a Torch-based solution (reworked from @Shai's answer):
import torch
def index_reduce_torch(arr, indices, out, reduce_="amin"):
arr = torch.from_numpy(arr)
indices = torch.from_numpy(indices)
out = torch.from_numpy(out)
return out.index_reduce_(dim=0, index=indices, source=arr, reduce=reduce_, include_self=False).numpy()
or, with additional skipping of Torch gradients:
index_reduce_torch_ng = torch.no_grad()(index_reduce_torch)
index_reduce_torch_ng.__name__ = "index_reduce_torch_ng"
and a Pandas-based solution (reworked from @bpfrd's answer):
import pandas as pd
def index_reduce_pd(arr, indices, out, reducer=min):
df = pd.DataFrame(data=zip(indices, arr))
df1 = df.groupby(0, as_index=False).agg(reducer)
out[df1[0]] = df1[1]
return out
using the following code:
funcs = index_reduce, index_reduce_nb, index_reduce_pd, index_reduce_torch, index_reduce_torch_ng
timings = {}
for i in range(4, 18):
n = 2 ** i
print(f"n = {n}, i = {i}")
extrema = 0, 2 * n
indices = np.random.randint(*extrema, n)
values = np.random.random(n)
out = np.zeros(extrema[1] + 1)
timings[n] = []
base = funcs[0](values, indices, out)
for func in funcs:
res = func(values, indices, out)
is_good = np.allclose(base, res)
timed = %timeit -r 16 -n 16 -q -o func(values, indices, out)
timing = timed.best * 1e6
timings[n].append(timing if is_good else None)
print(f"{func.__name__:>24} {is_good} {timing:10.3f} Β΅s")
to produce with the additional lines:
import matplotlib.pyplot as plt
df = pd.DataFrame(data=timings, index=[func.__name__ for func in funcs]).transpose()
df.plot(marker='o', xlabel='Input size / #', ylabel='Best timing / Β΅s', figsize=(6, 4))
df.plot(marker='o', xlabel='Input size / #', ylabel='Best timing / Β΅s', ylim=[0, 500], figsize=(6, 4))
fig = plt.gcf()
fig.patch.set_facecolor('white')
these plots:
(the second is a zoomed-in version of the first).
These indicate that the Numba accelerated solution could be the fastest, closely followed by the Torch-based solution while the Pandas approach could be the slowest, even slower than the explicit solution without acceleration.
| https://stackoverflow.com/questions/72971291/ |
How do I reshape the 2D array like this? (By using tensor) | I want to resize my image from 32 * 32 to 16 * 16. (By using torch.tensor)
Like decreasing the resolution?
Can anyone help me?
| If you have an image (stored in a tensor) and you want to decrease it's resolution, then you are not reshaping it, but rather resizing it.
To that end, you can use pytorch's interpolate:
import torch
from torch.nn import functional as nnf
y = nnf.interpolate(x[None, None, ...], size=(16, 16), mode='bicubic', align_corners=False, antialias=True)
Notes:
nnf.interpolate operates on batches of multi-channel images, that is, it expects its input x to have 4 dimensions: batch-channels-height-width. So, if your x is a single image with a single channel (e.g., an MNIST digit) you'll have to create a singleton batch dimension and a singleton channel dimension.
Pay close attention to align_corners and antialias -- make sure you are using the right configuration for your needs.
For more information regarding aliasing and alignment when resizing images you can look at ResizeRight.
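As a quick usage sketch for the question's 32x32 -> 16x16 case (shapes assumed):

x = torch.rand(32, 32)  # a single-channel 32x32 image
y = nnf.interpolate(x[None, None, ...], size=(16, 16), mode='bicubic',
                    align_corners=False, antialias=True)
y = y[0, 0]             # back to a plain (16, 16) tensor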
| https://stackoverflow.com/questions/72974251/ |
Why is PyTorch inference non deterministic even when setting `model.eval()` | I have fine-tuned a PyTorch transformer model using HuggingFace, and I'm trying to do inference on a GPU. However, even after setting model.eval() I still get slightly different outputs if I run inference multiple times on the same data.
I have tried a number of things and have done some ablation analysis and found out that the only way to get deterministic output is by also setting
torch.cuda.manual_seed_all(42)
(or any other seed number).
Why is this the case? And is this normal? The model's weights are fixed, and there are no undefined or randomly initialized weights (when I load the trained model I get the All keys matched successfully message), so where is the randomness coming from if I don't set the cuda seed manually? Is this randomness to be expected?
| You can use torch.use_deterministic_algorithms to force non-deterministic modules to perform deterministically, where supported e.g:
>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()
>>> b = torch.randn(100, 100, 100, device='cuda')
# Sparse-dense CUDA bmm is usually nondeterministic
>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()
False
>>> torch.use_deterministic_algorithms(True)
# Now torch.bmm gives the same result each time, but with reduced performance
>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()
True
# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error
>>> torch.zeros(10000, device='cuda').kthvalue(1)
RuntimeError: kthvalue CUDA does not have a deterministic implementation...
| https://stackoverflow.com/questions/72979303/ |
Replacing THC/THC.h module to ATen/ATen.h module | I have question about replacing <THC/THC.h> method.
Recently, I'm working on installing different loss functions compiled with cpp and cuda.
However, what I faced was a fatal error of
'THC/THC.h': No such file or directory
I found out that TH(C) methods were currently deprecated in recent version of pytorch, and was replaced by ATen API (https://discuss.pytorch.org/t/question-about-thc-thc-h/147145/8).
For sure, downgrading my pytorch version will solve the problem. However, due to my GPU compatibility issue, I have no choice but to modify the script by myself. Therefore, my question can be summarized into follows.
First, how can I replace codes that have dependency of TH(C) method using ATen API?. Below are codes that I have to modify, replacing those three lines looked enough for my case.
#include <THC/THC.h>
extern THCState *state;
cudaStream_t stream = THCState_getCurrentStream(state);
Second, will single modification on cpp file be enough to clear the issue that I'm facing right now? (This is just a minor question, answer on first question will suffice me).
For reference, I attach the github link of the file I'm trying to build (https://github.com/sshaoshuai/Pointnet2.PyTorch).
| After struggling for a while, I found the answer for my own.
In case of THCState_getCurrentStream, it could directly be replaced by at::cuda::getCurrentCUDAStream(). Therefore, modified code block was formulated as below.
//Comment Out
//#include <THC/THC.h>
//extern THCState *state;
//cudaStream_t stream = THCState_getCurrentStream(state);
//Replace with
#include <ATen/cuda/CUDAContext.h>
#include <ATen/cuda/CUDAEvent.h>
cudaStream_t stream = at::cuda::getCurrentCUDAStream();
After replacing the whole source code, I was able to successfully build the module.
Hope this helps.
| https://stackoverflow.com/questions/72988735/ |
Pytorch: Why does altering the scale of the loss functions improve the convergence in some models? | I have a question surrounding a pretty complex loss function I have.
This is a variational autoencoder loss function and it is fairly complex. It is made of two reconstruction losses, KL divergence and a discriminator as a regularizer. All of those losses are on the same scale, but I have found out that increasing one of the reconstruction losses by a factor of 20 (while leaving the rest on the previous scale) heavily increases the performance of my model.
Since I am still fairly novice on DL, I dont completely understand why this happens, or how I could identify this sort of thing on successive models.
Any advice/explanation is greatly appreciated.
| To summarize your setting first:
loss = alpha1 * loss1 + alpha2 * loss2
When computing the gradients for backpropagation, we compute back through this formular. By backpropagating through our error function we get the gradient:
dError/dLoss
To continue our propagation downwards, we now want to compute dError/dLoss1 and dError/dLoss2.
dError/dLoss1 can be expanded to dError/dLoss * dLoss/dLoss1 via the chain rule (https://en.wikipedia.org/wiki/Chain_rule).
We already computed dError/dLoss so we only need to compute dLoss derived with respect to dLoss1, which is
dLoss/dLoss1 = alpha1
The backpropagation now continues until we reach our weights (dLoss1/dWeight). The gradient our weight receives is:
dError/dWeight = dError/dLoss * dLoss/dLoss1 * dLoss1/dWeight = dError/dLoss * alpha1 * dLoss1/dWeight
As you can see, the gradient used to update our weight does now depend on alpha1, the factor we use to scale Loss1.
If we increase alpha1 while keeping alpha2 fixed, the gradients flowing through Loss1 will have a larger impact than the gradients of Loss2, thereby changing the optimization of our model.
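A tiny sketch makes this concrete (toy scalar losses, nothing from the actual VAE):

import torch

w = torch.tensor(2.0, requires_grad=True)

(1.0 * w**2 + 1.0 * (3 * w)).backward()
print(w.grad)  # tensor(7.)  = 2*w + 3

w.grad = None
(20.0 * w**2 + 1.0 * (3 * w)).backward()
print(w.grad)  # tensor(83.) = 20*2*w + 3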
| https://stackoverflow.com/questions/72993856/ |
pytorch error in multiplying matrices in neural network | I was trying to make a Neural Network in PyTorch, however I ran into the error below. I'm still new to this topic so I am not able to understand how I should go about solving this.
Code:
class ANN_Model(nn.Module):
def __init__(self,input_features=8,hidden1=8,hidden2=200,hidden3=200,hidden4=300,hidden5=300,hidden6=400,hidden7=400,hidden8=300,hidden9=300,out_features=2):
super().__init__()
self.f_connected1=nn.Linear(input_features,hidden1)
self.f_connected2=nn.Linear(hidden1,hidden2)
self.f_connected2=nn.Linear(hidden2,hidden3)
self.f_connected2=nn.Linear(hidden3,hidden4)
self.f_connected2=nn.Linear(hidden4,hidden5)
self.f_connected2=nn.Linear(hidden5,hidden6)
self.f_connected2=nn.Linear(hidden6,hidden7)
self.f_connected2=nn.Linear(hidden7,hidden8)
self.f_connected2=nn.Linear(hidden8,hidden9)
self.out=nn.Linear(hidden9,out_features)
def forward(self,x):
x=F.relu(self.f_connected1(x))
x=F.relu(self.f_connected2(x))
x=F.relu(self.f_connected3(x))
x=F.relu(self.f_connected4(x))
x=F.relu(self.f_connected5(x))
x=F.relu(self.f_connected6(x))
x=F.relu(self.f_connected7(x))
x=F.relu(self.f_connected8(x))
x=F.relu(self.f_connected9(x))
x=self.out(x)
return x
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr = 0.01)
epochs = 500
final_losses = []
for i in range(epochs):
i = i + 1
y_pred = model.forward(X_train)
loss=loss_function(y_pred, y_train)
final_losses.append(loss.item())
if i%10==1:
print("Epoch number: {} and the loss: {}".format(i, loss.item()))
optimizer.zero_grad()
loss.backward()
optimizer.step()
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Input In [13], in <cell line: 3>()
3 for i in range(epochs):
4 i = i + 1
----> 5 y_pred = model.forward(X_train)
6 loss=loss_function(y_pred, y_train)
7 final_losses.append(loss.item())
Input In [8], in ANN_Model.forward(self, x)
14 def forward(self,x):
15 x=F.relu(self.f_connected1(x))
---> 16 x=F.relu(self.f_connected2(x))
17 x=F.relu(self.f_connected3(x))
18 x=F.relu(self.f_connected4(x))
File ~/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/miniconda3/lib/python3.9/site-packages/torch/nn/modules/linear.py:114, in Linear.forward(self, input)
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (691x8 and 300x300)
| I found it: in your model's constructor __init__, every hidden layer is assigned to the same name, self.f_connected2, so only the last assignment (nn.Linear(hidden8, hidden9), i.e. Linear(300, 300)) survives. It therefore expects an input of shape (batch_size, 300), but receives (batch_size, 8) from the first layer.
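For completeness, a sketch of the corrected constructor body with distinct layer names (matching the names the forward pass already uses):

self.f_connected1 = nn.Linear(input_features, hidden1)
self.f_connected2 = nn.Linear(hidden1, hidden2)
self.f_connected3 = nn.Linear(hidden2, hidden3)
self.f_connected4 = nn.Linear(hidden3, hidden4)
self.f_connected5 = nn.Linear(hidden4, hidden5)
self.f_connected6 = nn.Linear(hidden5, hidden6)
self.f_connected7 = nn.Linear(hidden6, hidden7)
self.f_connected8 = nn.Linear(hidden7, hidden8)
self.f_connected9 = nn.Linear(hidden8, hidden9)
self.out = nn.Linear(hidden9, out_features)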
| https://stackoverflow.com/questions/72995001/ |
Can't import VecFrameStackFrame from Stable-baselines3 - importing problem | I have a problem when importing some dependencies from stable baselines 3 library, I installed it with this command
pip install stable-baselines3[extra]
But When I import my dependencies
import gym
from stable_baselines3 import A2C
from stable_baselines3.common.vec_env import VecFrameStackFrame
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.env_util import make_atari_env
import os
I face this error
ImportError: cannot import name 'VecFrameStackFrame' from 'stable_baselines3.common.vec_env' (C:\Users\User\anaconda3\envs\rl_learning\lib\site-packages\stable_baselines3\common\vec_env\__init__.py)
Any advice?
| It turns out that the new version of stable-baselines3 has changed the name from
from stable_baselines3.common.vec_env import VecFrameStackFrame
To
from stable_baselines3.common.vec_env import vec_frame_stack
and it worked for me
| https://stackoverflow.com/questions/73006909/ |
Is it more beneficial to read many small files or fewer large files of the exact same data? | I am working on a project where I am combining 300,000 small files together to form a dataset to be used for training a machine learning model. Because each of these files do not represent a single sample, but rather a variable number of samples, the dataset I require can only be formed by iterating through each of these files and concatenating/appending them to a single, unified array. With this being said, I unfortunately cannot avoid having to iterate through such files in order to form the dataset I require. As such, the process of data loading prior to model training is very slow.
Therefore my question is this: would it be better to merge these small files together into relatively larger files, e.g., reducing the 300,000 files to 300 (merged) files? I assume that iterating through less (but larger) files would be faster than iterating through many (but smaller) files. Can someone confirm if this is actually the case?
For context, my programs are written in Python and I am using PyTorch as the ML framework.
Thanks!
| Usually working with one bigger file is faster than working with many small files.
It needs less open, read, close, etc. functions which need time to
check if file exists,
check if you have privilege to access this file,
get file's information from disk (where is beginning of file on disk, what is its size, etc.),
search beginning of file on disk (when it has to read data),
create system's buffer for data from disk (system reads more data to buffer and later function read() can read partially from buffer instead of reading partially from disk).
Using many files it has to do this for every file and disk is much slower than buffer in memory.
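If you want to verify this on your own storage, a rough timing sketch (the paths are hypothetical):

import glob, time

t0 = time.perf_counter()
data = [open(p, 'rb').read() for p in glob.glob('small_files/*.bin')]
print('many small files:', time.perf_counter() - t0)

t0 = time.perf_counter()
blob = open('merged.bin', 'rb').read()
print('one big file:', time.perf_counter() - t0)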
| https://stackoverflow.com/questions/73006936/ |
Why does this custom function cost too much time while backward in pytorch? | I'm revising a baseline method in pytorch. But when I add a custom function in the training phase, the cost time of backward increases 4x on a single V100. Here is an example of the custom function:
def batch_function(M, kernel_size=21, sf=2):
'''
Input:
M: b x (h*w) x 2 x 2 torch tensor
sf: scale factor
Output:
kernel: b x (h*w) x k x k torch tensor
'''
M_t = M.permute(0,1,3,2) # b x (h*w) x 2 x 2
INV_SIGMA = torch.matmul(M_t, M).unsqueeze(2).unsqueeze(2) # b x (h*w) x 1 x 1 x 2 x 2
X, Y = torch.meshgrid(torch.arange(kernel_size), torch.arange(kernel_size))
Z = torch.stack((Y, X), dim=2).unsqueeze(3).to(M.device) # k x k x 2 x 1
Z = Z.unsqueeze(0).unsqueeze(0) # 1 x 1 x k x k x 2 x 1
Z_t = Z.permute(0,1,2,3,5,4) # 1 x 1 x k x k x 1 x 2
raw_kernel = torch.exp(-0.5 * torch.squeeze(Z_t.matmul(INV_SIGMA).matmul(Z))) # b x (h*w) x k x k
# Normalize
kernel = raw_kernel / torch.sum(raw_kernel, dim=(2,3)).unsqueeze(-1).unsqueeze(-1) # b x (h*w) x k x k
return kernel
where b is the batch size, 16; h and w are the spatial dimensions, 100; and k is equal to 21. I'm not sure whether the large dimensions of M are what make it so slow.
Why does it take so much longer? And are there ways to rewrite this code to make it faster?
I'm new here, so if the problem is not clearly described, please let me know!
| You might be able to get a performance boost on the double tensor multiplication by using torch.einsum:
>>> o = torch.einsum('acdefg,bshigj,kldejm->bsdefm', Z_t, INV_SIGMA, Z)
The resulting tensor o will be shaped (b, h*w, k, k, 1, 1)
For details on the subscript notation:
b: batch dimension.
s: 's' for spatial, i.e. the h*w dimension.
d and e: the two k dimensions which are paired across Z_t and Z.
A simple 2D matrix multiplication applying matmul with ij,jk->ik.
Keeping that in mind, we have in your case:
A first multiplication: r = Z_t@INV_SIGMA, which does something like *fg,*gj->*fj (the asterisk sign * refers to the leading dimensions).
A second matrix multiplication: r@Z, which comes down to *fj,*jm->*fm.
Overall, if we combine both, we get directly: *fg,*gj,*jm->*fm.
Finally, I have assigned all other dimensions to random but different subscript letters:
a, c, f, h, i, k, l
Replacing the asterisk above with those notations, we get the following subscript input:
# * fg, * gj, * jm-> * fm
# acdefg,bshigj,kldejm->bsdefm
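For a quick sanity check, a comparison against the original matmul chain with small random tensors shaped like the question's (b=2, h*w=4, k=3 are made-up sizes):

import torch

b, s, k = 2, 4, 3
INV_SIGMA = torch.rand(b, s, 1, 1, 2, 2)
Z = torch.rand(1, 1, k, k, 2, 1)
Z_t = Z.permute(0, 1, 2, 3, 5, 4)

ref = Z_t.matmul(INV_SIGMA).matmul(Z)  # the broadcasted matmul chain
alt = torch.einsum('acdefg,bshigj,kldejm->bsdefm', Z_t, INV_SIGMA, Z)
print(torch.allclose(ref, alt))        # True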
| https://stackoverflow.com/questions/73010388/ |
Pytorch error mat1 and mat2 shapes cannot be multiplied in Autoencoder to compress images | I receive the error below. The size of my input image is 3x120x120, and I flatten it with the following code, yet I still get this error:
mat1 and mat2 shapes cannot be multiplied (720x120 and 43200x512)
I have to use an autoencoder to compress my images by a factor of 360 (so I start from a 3x120x120 input down to 120 in the encoder).
My code:
class AE(torch.nn.Module):
def __init__(self):
super().__init__()
self.encoder = torch.nn.Sequential(
torch.nn.Linear(3*120*120, 512),
torch.nn.ReLU(),
torch.nn.Linear(512, 256),
torch.nn.ReLU(),
torch.nn.Linear(256, 128),
torch.nn.ReLU(),
torch.nn.Linear(128, 120)
)
self.decoder = torch.nn.Sequential(
torch.nn.Linear(120, 128),
torch.nn.ReLU(),
torch.nn.Linear(128, 256),
torch.nn.ReLU(),
torch.nn.Linear(256, 512),
torch.nn.ReLU(),
torch.nn.Linear(512, 3*120*120),
torch.nn.Sigmoid()
)
def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
| I think you missed a basic characteristic of the nn.Linear function: its constructor arguments are the input and output feature dimensions, and it operates on the last dimension of its input. Your image tensor's last dimension is 120, not 3*120*120, which is why the shapes cannot be multiplied. You should therefore first reshape your x so the feature dimension matches.
class AE(torch.nn.Module):
def __init__(self):
super().__init__()
self.encoder = torch.nn.Sequential(
torch.nn.Linear(120*120, 512),
torch.nn.ReLU(),
torch.nn.Linear(512, 256),
torch.nn.ReLU(),
torch.nn.Linear(256, 128),
torch.nn.ReLU(),
torch.nn.Linear(128, 120)
)
self.decoder = torch.nn.Sequential(
torch.nn.Linear(120, 128),
torch.nn.ReLU(),
torch.nn.Linear(128, 256),
torch.nn.ReLU(),
torch.nn.Linear(256, 512),
torch.nn.ReLU(),
torch.nn.Linear(512, 120*120),
torch.nn.Sigmoid()
)
def forward(self, x):
x = x.reshape(3, -1)
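# note: this reshape treats the 3 channels as a batch of three flattened 120*120 vectors,
# so the decoder's last layer must output 120*120 features as well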
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
I recommend carefully studying the structure and dimensional changes of common neural networks. It will help you a lot, for sure.
| https://stackoverflow.com/questions/73012301/ |
PyTorch CUDA : the provided PTX was compiled with an unsupported toolchain | I am using Nvidia V100 with the following specs:
(pytorch) [s.1915438@cl1 aneurysm]$ srun nvidia-smi
Sun Jul 17 16:17:27 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05 Driver Version: 495.29.05 CUDA Version: 11.5 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-PCIE... On | 00000000:D8:00.0 Off | 0 |
| N/A 31C P0 25W / 250W | 0MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
The Python, Pytorch and CUDA version is as follows:
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.12.0+cu113'
When I run a python file, containing a machine learning model, I get the following error.
(pytorch) [s.1915438@cl1 aneurysm]$ srun python aneurysm.py
terminate called after throwing an instance of 'std::runtime_error'
what(): the provided PTX was compiled with an unsupported toolchain.
srun: error: ccs2114: task 0: Aborted
Is it some kind of compatibility issue? Should I fallback to CUDA 10
.2 as the V100 is very old GPU?
| Anyone using an old GPU from an HPC cluster is probably out of luck. In my case, I had Nvidia Driver 495 which is not very old. In fact, for CUDA 11.5 they recommend Nvidia Driver 470.
This is the official reply from Nvidia for a similar problem. They also recommend updating the driver. And most of the time HPC centres won't update the driver on personal requests.
| https://stackoverflow.com/questions/73013020/ |
Why my dataloader output structure differs from structure in pytorch official guide? | I was following DCGAN tutorial(https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html) and I've got problem with format of output they get from their dataloader. So, in short, if they need any data from dataloader they call it like that:
real_batch = next(iter(dataloader))
real_batch[0].to(device)[:64]
or
for i, data in enumerate(dataloader, 0):
real_cpu = data[0].to(device)
and that data[0] they are calling has batch size (set up to be 128), in the first example they need 64 data samples, so they use [:64] to cut them.
The problem is that my dataloader doesn't follow this behaviour: calling data[0] on my dataloader returns just one sample, not the entire batch as in the example. I found this extremely weird, because just by removing [0] from each data load my code runs without any errors, but I'm afraid I'm missing some important step of shaping the data, which could cause errors later.
This is how their dataloader has been set up:
dataset = dset.ImageFolder(root=dataroot,
transform=transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
shuffle=True, num_workers=workers)
My dataloader set up is a bit tricky, I have a simple custom dataset with data being a list of one channel images (I'm creating this list myself with augmentation function called on images stored on disk, and I'm not sure if this is a good way to store such data...), and basically more or less the same dataloader params.
class MyDataset(torch.utils.data.Dataset):
def __init__(self, dataset, image_size):
super(MyDataset, self).__init__()
# dataset is [list] of PIL images with 1 channel
self.dataset = dataset
self.image_size = image_size
self.transform=transforms.Compose([transforms.Resize(self.image_size),
transforms.ToTensor(),
transforms.Normalize((0.5), (0.5))])
def __getitem__(self, idx):
x = self.dataset[idx]
return self.transform(x)
def __len__(self):
return len(self.dataset)
and dataloader itself:
train_set = MyDataset(data, image_size=image_size)
data_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
| The difference is that typically the implemented datasets will return both the image AND the corresponding label, i.e. the implementation of the __getitem__ method is something like:
def __getitem__(self, idx):
return self.image[idx], self.target[idx]
Then, the dataloader returns a tuple: data = (images, targets), both of the same batch size. They access images by taking data[0].
In your case, your __getitem__ returns only one output, and the dataloader will collate it into simple data = images.
So, removing [0], as you tried, is actually a correct thing to do! For compatibility with other already implemented datasets, I sometimes return a dummy label together with the sample in __getitem__, e.g. return self.transform(x), 0 (you can try it -- then calling data[0] will work).
| https://stackoverflow.com/questions/73014013/ |
Target size (torch.Size([32, 9])) must be the same as input size (torch.Size([32, 10])) | I have 10 classes. I have a model such as:
from brevitas.nn import QuantLinear, QuantReLU
import torch.nn as nn
# Setting seeds for reproducibility
torch.manual_seed(0)
model = nn.Sequential(
QuantLinear(input_size, hidden1, bias=True, weight_bit_width=weight_bit_width),
nn.BatchNorm1d(hidden1),
nn.Dropout(0.5),
QuantReLU(bit_width=act_bit_width),
QuantLinear(hidden1, hidden2, bias=True, weight_bit_width=weight_bit_width),
nn.BatchNorm1d(hidden2),
nn.Dropout(0.5),
QuantReLU(bit_width=act_bit_width),
QuantLinear(hidden2, hidden3, bias=True, weight_bit_width=weight_bit_width),
nn.BatchNorm1d(hidden3),
nn.Dropout(0.5),
QuantReLU(bit_width=act_bit_width),
QuantLinear(hidden3, num_classes, bias=True, weight_bit_width=weight_bit_width)
)
model.to(device)
and I have defined my training phase as:
def train(model, train_loader, optimizer, criterion):
losses = []
# ensure model is in training mode
model.train()
for i, data in enumerate(train_loader, 0):
inputs, target = data['pointcloud'].to(device).float(), data['category'].to(device)
target = torch.nn.functional.one_hot(target)
optimizer.zero_grad()
# forward pass
output = model(inputs)
loss = criterion(output, target.float())
# backward pass + run optimizer to update weights
loss.backward()
optimizer.step()
# keep track of loss value
losses.append(loss.data.cpu().numpy())
return losses
As I run the training code:
import numpy as np
from sklearn.metrics import accuracy_score
from tqdm import tqdm, trange
# Setting seeds for reproducibility
torch.manual_seed(0)
np.random.seed(0)
running_loss = []
running_test_acc = []
t = trange(num_epochs, desc="Training loss", leave=True)
for epoch in t:
loss_epoch = train(model, train_loader, optimizer,criterion)
test_acc = test(model, valid_loader)
t.set_description("Training loss = %f test accuracy = %f" % (np.mean(loss_epoch), test_acc))
t.refresh() # to show immediately the update
running_loss.append(loss_epoch)
running_test_acc.append(test_acc)
I get an error as:
Target size (torch.Size([32, 9])) must be the same as input size
(torch.Size([32, 10]))
Please help me figure out what the solution could be. I added one-hot encoding because I have seen solutions like that before.
| The code error is pretty straightforward - the criterion (that you didn't show here in the code) expects both the input and the target arguments to be the same size, but they're not.
The problem is that you're using torch.nn.functional.one_hot(target) without telling it how many classes you need for one-hot encoding; the number of classes is then inferred as the largest value in target + 1 (see: https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html). You should change it to torch.nn.functional.one_hot(target, num_classes=10)
| https://stackoverflow.com/questions/73014435/ |
modify image to black text on white background | I have an image that need to do OCR (Optical Character Recognition) to extract all data.
First I want to convert color image to black text on white background in order to improve OCR accuracy.
I try below code
from PIL import Image
img = Image.open("data7.png")
img.convert("1").save("result.jpg")
it gave me the unclear image below
I expect to have this image
Then, I will use pytesseract to get a dataframe
import pytesseract as tess
file = Image.open("data7.png")
text = tess.image_to_data(file,lang="eng",output_type='data.frame')
text
Finally, the dataframe I want to get looks like the one below
| Here's a vanilla Pillow solution. Just grayscaling the image gives us okay results, but the green text is too faint.
So, we first scale the green channel up (sure, it might clip, but that's not a problem here), then grayscale, invert and auto-contrast the image.
from PIL import Image, ImageOps
img = Image.open('rqDRe.png').convert('RGB')
r, g, b = img.split()
img = Image.merge('RGB', (
r,
g.point(lambda i: i * 3), # brighten green channel
b,
))
img = ImageOps.autocontrast(ImageOps.invert(ImageOps.grayscale(img)), 5)
img.save('rqDRe_processed.png')
(output: the processed image, now dark text on a light background)
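The result can then be fed to pytesseract exactly as in the question (a sketch reusing the question's call on the processed image):

import pytesseract as tess
text = tess.image_to_data(img, lang='eng', output_type='data.frame')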
| https://stackoverflow.com/questions/73017670/ |
torch.optim.LBFGS() does not change parameters | I'm trying to optimize the coordinates of the corners of an image. A similar technique works fine in Ceres Solver. But in torch.optim I'm having some issues. In particular, the optimizer for some reason does not change the parameters being optimized. I don't have much experience with pytorch, so I'm pretty sure the error is trivial. Unfortunately, reading the documentation did not help me much.
Optimization model class:
class OptimizeCorners(torch.nn.Module):
def __init__(self, real_corners):
super().__init__()
self._real_corners = torch.nn.Parameter(real_corners)
def forward(self, real_image, synt_image, synt_corners, _threshold):
# Find homography
if visualize_warp_interpolate:
real_image_before_processing = real_image
synt_image_before_processing = synt_image
homography_matrix = kornia.geometry.homography.find_homography_dlt(synt_corners,
self._real_corners,
weights=None)
# Warp and resize synt image
synt_image = kornia.geometry.transform.warp_perspective(synt_image.float(),
homography_matrix,
dsize=(int(real_image.shape[2]),
int(real_image.shape[3])),
mode='bilinear',
padding_mode='zeros',
align_corners=True,
fill_value=torch.zeros(3))
# Interpolate images
real_image = torch.nn.functional.interpolate(real_image.float(),
scale_factor=5,
mode='bicubic',
align_corners=None,
recompute_scale_factor=None,
antialias=False)
synt_image = torch.nn.functional.interpolate(synt_image.float(),
scale_factor=5,
mode='bicubic',
align_corners=None,
recompute_scale_factor=None,
antialias=False)
# Calculate loss
loss_map = torch.sub(real_image, synt_image, alpha=1)
# if element > _threshold: element = 0
loss_map = torch.nn.Threshold(_threshold, 0)(loss_map)
cumulative_loss = torch.sqrt(torch.sum(torch.pow(loss_map, 2)) /
(loss_map.size(dim=2) * loss_map.size(dim=3)))
return torch.autograd.Variable(cumulative_loss.data, requires_grad=True)
The way, how I am trying to execute optimization:
# Convert corresponding images to PyTorch tensors
_image = kornia.utils.image_to_tensor(_image, keepdim=False)
_synt_image = kornia.utils.image_to_tensor(_synt_image, keepdim=False)
_corners = torch.from_numpy(_corners)
_synt_corners = torch.from_numpy(_synt_corners)
# Optimizer L-BFGS
n_iters = 100
h_lbfgs = []
lr = 1
optimize_corners = OptimizeCorners(_corners)
optimizer = torch.optim.LBFGS(optimize_corners.parameters(),
lr=lr)
for it in tqdm(range(n_iters), desc='Fitting corners',
leave=False, position=1):
loss = optimize_corners(_image, _synt_image, _synt_corners, _threshold)
optimizer.zero_grad()
loss.backward()
optimizer.step(lambda: optimize_corners(_image, _synt_image, _synt_corners, _threshold))
h_lbfgs.append(loss.item())
print(h_lbfgs)
Output from console:
[screenshot: console output showing the loss staying the same across iterations]
So, as you can see, parameters to be optimized do not change.
UPD:
I changed return torch.autograd.Variable(cumulative_loss.data, requires_grad=True) to return cumulative_loss.requires_grad_(), and it actually works, but now I get this error after a few iterations:
[screenshot: console output with a RuntimeError traceback]
UPD: this happens because the parameters being optimized turn into NaN after a few iterations.
| After some time spent with the debugger, I found out that the main problem is that after a few iterations the backward() method starts to compute the gradient incorrectly and outputs NaNs. Thus, the parameters being optimized are also calculated as NaNs. I didn't have a chance to find out exactly why this happens, because all the traces (I used the torch.autograd.set_detect_anomaly(True) method) pointed to the error occurring on the side of the C++ Torch engine, in the POW and SVD functions.
In the end, in my case, the problem was solved by casting all parameters from float32 to float64 and reducing the learning rate.
Here is the final updated code:
# Convert corresponding images to PyTorch tensors
_image = kornia.utils.image_to_tensor(_image, keepdim=False).double()
_synt_image = kornia.utils.image_to_tensor(_synt_image, keepdim=False).double()
_corners = torch.from_numpy(_corners).double()
_synt_corners = torch.from_numpy(_synt_corners).double()
# Optimizer L-BFGS
optimize_corners = OptimizeCorners(_corners)
optimizer = torch.optim.LBFGS(optimize_corners.parameters(),
max_iter=20,
lr=0.01)
torch.autograd.set_detect_anomaly(True)
def closure():
optimizer.zero_grad()
loss = optimize_corners(_image, _synt_image, _synt_corners, _threshold)
loss.backward()
return loss
for it in tqdm(range(100), desc="Fitting corners", leave=False, position=1):
optimizer.step(closure)
def forward(self, real_image, synt_image, synt_corners, _threshold):
# Find homography
if visualize_warp_interpolate:
real_image_before_processing = real_image
synt_image_before_processing = synt_image
homography_matrix = kornia.geometry.homography.find_homography_dlt(synt_corners,
self._real_corners,
weights=None)
# Warp and resize synt image
synt_image = kornia.geometry.transform.warp_perspective(synt_image,
homography_matrix,
dsize=(int(real_image.shape[2]),
int(real_image.shape[3])),
mode='bilinear',
padding_mode='zeros',
align_corners=True,
fill_value=torch.zeros(3))
# Interpolate images
real_image = torch.nn.functional.interpolate(real_image,
scale_factor=10,
mode='bicubic',
align_corners=None,
recompute_scale_factor=None,
antialias=False)
synt_image = torch.nn.functional.interpolate(synt_image,
scale_factor=10,
mode='bicubic',
align_corners=None,
recompute_scale_factor=None,
antialias=False)
# Calculate loss
loss_map = torch.sub(real_image, synt_image, alpha=1)
# if element > _threshold: element = 0
loss_map = torch.nn.Threshold(_threshold, 0)(loss_map)
cumulative_loss = torch.sqrt(torch.sum(torch.pow(loss_map, 2)) /
(loss_map.size(dim=2) * loss_map.size(dim=3)))
return cumulative_loss.requires_grad_()
| https://stackoverflow.com/questions/73019486/ |
Finding the maximum elements whose sum is less than a given value in PyTorch | Given any PyTorch 2D tensor, what will be the most efficient way to compute the number of top-K values for each row whose sum is less than a given value?
Input:
tensor([[0.6607, 0.1165, 0.0278, 0.1950],
[0.0529, 0.4607, 0.2729, 0.2135],
[0.3267, 0.0902, 0.4578, 0.1253]])
Required Output for the given value 0.8:
tensor([[1], #as 0.6607+0.1950 > 0.8
[2], #as 0.4607+0.2729+0.2135 > 0.8
[2]]) #as 0.4578+0.3267+0.1253 > 0.8
| You can manage such operation by using a combination of sorting, cumulative sum, and max pooling.
First sort the values by row with torch.Tensor.sort
>>> v = x.sort(dim=1, descending=True).values
tensor([[0.6607, 0.1950, 0.1165, 0.0278],
[0.4607, 0.2729, 0.2135, 0.0529],
[0.4578, 0.3267, 0.1253, 0.0902]])
Then construct a mask on the cumulative sorted values that you get from applying torch.cumsum:
>>> mask = torch.cumsum(v, dim=1) > .8
>>> mask
tensor([[False, True, True, True],
[False, False, True, True],
[False, False, True, True]])
Applying a torch.Tensor.max on that mask will return the index of the first occurring True value, i.e. the location of the cumulative element which is above the threshold value 0.8:
>>> mask.max(1, True).indices
tensor([[1],
[2],
[2]])
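Putting it together, a minimal sketch of the whole pipeline as one function (the threshold is a parameter; as above, the returned index marks the first cumulative element exceeding it):
def count_topk_below(x, threshold):
    v = x.sort(dim=1, descending=True).values
    mask = torch.cumsum(v, dim=1) > threshold
    # index of the first True per row, i.e. where the running sum first exceeds the threshold
    return mask.max(dim=1, keepdim=True).indices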
| https://stackoverflow.com/questions/73022356/ |
Mask the top k elements in a tensor in PyTorch (different k for each row) | For any 2D tensor X, how to get the mask for top K elements for each row where K is a tensor (not restricted to an int)?
Input:
tensor([[0.6607, 0.1165, 0.0278, 0.1950],
[0.0529, 0.4607, 0.2729, 0.2135],
[0.3267, 0.0902, 0.4578, 0.1253]])
Desired output: for K = torch.tensor([2,3,1])
tensor([[ True, False, False, True],
[ False, True, True, True],
[ False, False, True, False]])
I have tried these [1], [2], but can not succeed.
| You can use the torch.topk and torch.Tensor.scatter_ methods for this. Building the mask on a separate boolean tensor leaves x itself untouched:
K = torch.tensor([2, 3, 1])
mask = torch.zeros_like(x, dtype=torch.bool)
for idx, k in enumerate(K):
    top_k = torch.topk(x[idx], int(k))
    mask[idx].scatter_(0, top_k.indices, True)
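A loop-free sketch of the same idea: sort each row, mark the first k slots per row, then scatter the marks back to the original column positions:
vals, idx = x.sort(dim=1, descending=True)
ranks = torch.arange(x.size(1)).expand_as(x)
sorted_mask = ranks < K.unsqueeze(1)   # True in the top-k slots of each sorted row
mask = torch.zeros_like(x, dtype=torch.bool).scatter_(1, idx, sorted_mask)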
| https://stackoverflow.com/questions/73024439/ |
Is there a way to send location of pytorch tensor in gpu memory between docker containers and build them in different containers | To quickly sum up the problem, I need to transfer images (size is (1920,1200,3)) between PyTorch docker containers and process them. Containers are located in the same system. Speed is very important and transfer should not take more than 2-3ms one way. Two containers will be shared via IPC so I find no problem transferring NumPy arrays via shared memory using buffers (example https://docs.python.org/3/library/multiprocessing.shared_memory.html). I am curious is there a similar way to do that with PyTorch tensors allocated on GPU?
From what I've learned, CUDA Tensors are already in the shared memory. I tried transferring them and Pytorch Tensor Storage objects via socket but it takes around 50-60ms one way, which is way too slow. For testing purposes, I just run 2 programs in separate terminals.
Container 1 code:
import torch
import zmq
def main():
ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect('tcp://0.0.0.0:6000')
x = torch.randn((1, 1920, 1200, 3), device='cuda')
storage = x.storage()
while True:
sock.send_pyobj(storage)
sock.recv()
if __name__ == "__main__":
main()
Container 2 code:
import torch
import zmq
import time
def main():
ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind('tcp://*:6000')
for i in range(10):
before = time.time()
storage = sock.recv_pyobj()
tensor = torch.tensor((), device=storage.device)
tensor.set_(storage)
after = time.time()
print(after - before)
sock.send_string('')
if __name__ == "__main__":
main()
Edit:
I found a similar topic discussed 4 years ago. There, a person extracts additional information from the storage using the _share_cuda_() function, which gives a cudaIpcMemHandle_t.
Is there a way to reconstruct a Storage/Tensor using cudaIpcMemHandle_t, or the information extracted from the _share_cuda_() function, with PyTorch functionality? Or is there a better way to achieve the same result?
| I found a function in torch.multiprocessing.reductions that rebuilds tensors from the output generated by _share_cuda_(). Now my code looks something like this:
Container 1 code:
import torch
import zmq
def main():
ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
sock.connect('tcp://0.0.0.0:6000')
image = torch.randn((1, 1920, 1200, 3), dtype=torch.float, device='cuda:0')
storage = image.storage()
(storage_device, storage_handle, storage_size_bytes, storage_offset_bytes,
ref_counter_handle, ref_counter_offset, event_handle, event_sync_required) = storage._share_cuda_()
while True:
sock.send_pyobj({
"dtype": image.dtype,
"tensor_size": (1920, 1200, 3),
"tensor_stride": image.stride(),
"tensor_offset": image.storage_offset(), # !Not sure about this one.
"storage_cls": type(storage),
"storage_device": storage_device,
"storage_handle": storage_handle,
"storage_size_bytes": storage_size_bytes,
"storage_offset_bytes":storage_offset_bytes,
"requires_grad": False,
"ref_counter_handle": ref_counter_handle,
"ref_counter_offset": ref_counter_offset,
"event_handle": event_handle,
"event_sync_required": event_sync_required,
})
sock.recv_string()
if __name__ == "__main__":
main()
Container 2 code:
import torch
import zmq
import time
from torch.multiprocessing.reductions import rebuild_cuda_tensor
def main():
ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind('tcp://*:6000')
for i in range(10):
before = time.time()
cuda_tensor_info = sock.recv_pyobj()
rebuilt_tensor = rebuild_cuda_tensor(torch.Tensor, **cuda_tensor_info)
after = time.time()
print(after - before)
sock.send_string('')
if __name__ == "__main__":
main()
| https://stackoverflow.com/questions/73024975/ |
transforms.ToTensor() and Numpy Issue | I'm working on the MNIST dataset using PyTorch; while trying to scale the images I ran into a problem associated with NumPy
train_dataset = datasets.MNIST(root='data',
train=True,
transform=transforms.ToTensor(),
download=True)
And here's my error, really confused on how to solve this
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-ef2899ce3492> in <module>
6 train=True,
7 transform=transforms.ToTensor(),
----> 8 download=True)
9
10 test_dataset = datasets.MNIST(root='data',
~\miniconda3\envs\6.86x\lib\site-packages\torchvision\datasets\mnist.py in __init__(self, root, train, transform, target_transform, download)
91 ' You can use download=True to download it')
92
---> 93 self.data, self.targets = self._load_data()
94
95 def _check_legacy_exist(self):
~\miniconda3\envs\6.86x\lib\site-packages\torchvision\datasets\mnist.py in _load_data(self)
110 def _load_data(self):
111 image_file = f"{'train' if self.train else 't10k'}-images-idx3-ubyte"
--> 112 data = read_image_file(os.path.join(self.raw_folder, image_file))
113
114 label_file = f"{'train' if self.train else 't10k'}-labels-idx1-ubyte"
~\miniconda3\envs\6.86x\lib\site-packages\torchvision\datasets\mnist.py in read_image_file(path)
507
508 def read_image_file(path: str) -> torch.Tensor:
--> 509 x = read_sn3_pascalvincent_tensor(path, strict=False)
510 assert(x.dtype == torch.uint8)
511 assert(x.ndimension() == 3)
~\miniconda3\envs\6.86x\lib\site-packages\torchvision\datasets\mnist.py in read_sn3_pascalvincent_tensor(path, strict)
496 parsed = np.frombuffer(data, dtype=m[1], offset=(4 * (nd + 1)))
497 assert parsed.shape[0] == np.prod(s) or not strict
--> 498 return torch.from_numpy(parsed.astype(m[2])).view(*s)
499
500
RuntimeError: Numpy is not available
| Figured it out: my NumPy version was outdated relative to the PyTorch build. Upgrading NumPy fixed it!
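For reference, the upgrade is typically just (assuming a pip-managed environment; in a conda environment, conda update numpy):
pip install --upgrade numpy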
| https://stackoverflow.com/questions/73028716/ |
Does PyTorch allocate GPU memory eagerly? | Consider the following script:
import torch
def unnecessary_compute():
x = torch.randn(1000,1000, device='cuda')
l = []
for i in range(5):
print(i,torch.cuda.memory_allocated())
l.append(x**i)
unnecessary_compute()
Running this script with PyTorch (1.11) generates the following output:
0 4000256
1 8000512
2 12000768
3 16001024
4 20971520
Given that PyTorch uses asynchronous computation and we never evaluated the contents of l or of a tensor that depends on l, why did PyTorch eagerly allocate GPU memory to the new tensors? Is there a way of invoking these tensors in an utterly lazy way (i.e., without triggering GPU memory allocation before it is required)?
| torch.cuda.memory_allocated() returns the memory that has been allocated, not the memory that has been "used".
In a typical GPU compute pipeline, you would record operations in a queue along with whatever synchronization primitives your API offers. The GPU will then dequeue and execute those operations, respecting the enqueued synchronization primitives. However, GPU memory allocation is not usually an operation which even goes on the queue. Rather, there's usually some sort of fundamental instruction that the CPU can issue to the GPU in order to allocate memory, just as recording operations is another fundamental instruction. This means that the memory necessary for a GPU operation has to be allocated before the operation has even been enqueued; there is no "allocate memory" operation in the queue to synchronize with.
Consider Vulkan as a simple example. Rendering operations are enqueued on a graphics queue. However, memory is typically allocated via calls to vkAllocateMemory(), which does not accept any sort of queue at all; it only accepts the device handle and information about the allocation (size, memory type, etc). From my understanding, the allocation is done "immediately" / synchronously (the memory is safe to use by the time the function call returns on the CPU).
I don't know enough about GPUs to explain why this is the case, but I'm sure there's a good reason. And perhaps the limitations vary from device to device. But if I were to guess, memory allocation probably has to be a fairly centralized operation; it can't be done by just any core executing recorded operations on a queue. This would make sense, at least; the space of GPU memory is usually shared across cores.
Let's apply this knowledge to answer your question: When you call l.append(x**i), you're trying to record a compute operation. That operation will require memory to store the result, and so PyTorch is likely allocating the memory prior to enqueuing the operation. This explains the behavior you're seeing.
However, this doesn't invalidate PyTorch's claims about asynchronous compute. The memory might be allocated synchronously, but it won't be populated with the result of the operation until the operation has been dequeued and completed by the GPU, which indeed happens asynchronously.
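A minimal sketch to observe both effects: the result's memory is allocated synchronously at enqueue time, while the computation itself completes asynchronously and only blocks at an explicit synchronization point.
import time
import torch

x = torch.randn(4000, 4000, device='cuda')
torch.cuda.synchronize()                  # start from an idle device

t0 = time.time()
y = x @ x                                 # enqueued; memory for y is allocated now
t1 = time.time()
print(torch.cuda.memory_allocated())      # already includes y's ~64 MB
torch.cuda.synchronize()                  # block until the GPU has actually finished
t2 = time.time()
print(f"enqueue: {t1 - t0:.4f}s, compute: {t2 - t1:.4f}s")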
| https://stackoverflow.com/questions/73030553/ |
How to use transforms.FiveCrop() outside the DataLoader? | I have a dataloader that returns a batch of shape torch.Size([bs, c, h, w]) where bs=4, c=1, and h = w = 128. Now I want to apply some custom transformations to the returned batch. Note that I cannot apply transformations in the Dataloader, as I need to feed the returned batch as-is to one network and a transformed one to another network.
More specifically, I want to apply the following transformations to the returned batch:
1. CenterCrop(100)
2. FiveCrop(16)
3. Resize(128)
4. ToTensor()
5. Normalize([0.5], [0.5])
I have created a function to achieve the following task as follows:
# DataLoader code
#
#
orig_img = next(iter(DataLoader))
patches = get_patches(orig_img)
def get_patches(orig_img):
# orig_img.shape = torch.Size([4, 1, 128, 128])
images = [TF.to_pil_image(x) for x in orig_img.cpu()]
resized_imgs = []
for img in images:
img = transforms.CenterCrop(100)(img)
five_crop = transforms.FiveCrop(64)(img)
f_crops = transforms.Lambda(lambda crops: torch.stack([transforms.Normalize([0.5], [0.5])(transforms.ToTensor()(transforms.Resize(128)(crop))) for crop in crops]))(five_crop)
resized_imgs.append(f_crops)
return resized_imgs
The problem right now is that when I get the resized_imgs list, every tensor inside it loses the batch size dimension, i.e. resized_imgs[0].shape = torch.Size([ncrops, c, h, w]) (4d), whereas I expect the shape to be torch.Size([bs, ncrops, c, h, w]) (5d).
| Your data loader will return a tensor of shape (bs, c, h, w). Therefore orig_img is shaped the same way and iterating through it will provide you with a tensor img shaped as (c, h, w). Applying FiveCrop will create an additional dimension such that five_crop is shaped (5, c, h, w). Then f_crops will be shaped (5, c, 128, 128). Finally, the tensor is appended with the others in resized_imgs (the list containing the different patched images). All in all resized_imgs contains bs elements since orig_img.size(0) = bs, and each element is a tensor shaped (5, c, 128, 128) (five patches per image) as we've described above.
Another way of writing this function would be:
import torchvision.transforms as T  # alias used below

def get_patches(orig_img):
# orig_img.shape = (4, 1, 128, 128)
img_t = T.Compose([T.ToPILImage(),
T.CenterCrop(100),
T.FiveCrop(64)])
patch_t = T.Compose([T.Resize(128),
T.ToTensor(),
T.Normalize([0.5], [0.5])])
resized_imgs = []
for img in orig_img:
five_crop = img_t(img)
f_crops = torch.stack(list(map(patch_t, five_crop)))
resized_imgs.append(f_crops)
return torch.stack(resized_imgs)
The last line will stack all image patches into a single tensor of shape (bs, 5, c, 128, 128).
| https://stackoverflow.com/questions/73034372/ |
constant loss and accuracy in Pytorch model | I have created a binary classification model. While training the model, both the loss and the accuracy stay constant. I tried the same code on other datasets and it gives the same result in every case. How do I fix this?
Here is the code
model=Model()
lossfn=nn.BCEWithLogitsLoss()
optimizer=torch.optim.Adam(model.parameters(),lr=0.01)
#training model
epochs=10
train_acc=[]
test_acc=[]
losses=torch.zeros(epochs)
#training over loops
for i in range(epochs):
model.train()
batchAcc=[]
batchLoss=[]
for X,y in train_loader:
#forward pass
yHat=model(X)
#lossfunction
loss=lossfn(y,yHat)
#backprop
optimizer.zero_grad()
loss.backward()
optimizer.step()
batchLoss.append(loss.item())
# predictions=(torch.sigmoid(yHat)>0.5).float()
predictions=yHat>0
batchLoss.append(loss.item())
batchAcc.append(100*torch.mean(((yHat>0) == y).float()))
losses[i]=np.mean(batchLoss)
train_acc.append(np.mean(batchAcc))
#Evaluation mode
model.eval()
X,y=next(iter(test_loader))
with torch.no_grad():
yHat=model(X)
test_acc.append( 100*torch.mean(((yHat>0) == y).float()))
print(f'Epoch:{i+1} loss:{loss.item()} train accuracy:{train_acc[-1]} test accuracy:{test_acc[-1]}')
Output of the code
Epoch:1 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:2 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:3 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:4 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:5 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:6 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:7 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:8 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:9 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
Epoch:10 loss:0.6931471824645996 train accuracy:33.33646011352539 test accuracy:0.0
| The other answers are wrong, the order of
optimizer.zero_grad() # Clears the gradient for all parameters
loss.backward() # Populates the gradient for all parameters
optimizer.step() # Uses the gradient to update the parameters
is correct!
Likely culprit is this line:
loss=lossfn(y,yHat)
In your case, y is the ground truth and yHat is the prediction, but they are passed to the loss in the wrong order; hence the mistake.
PyTorch losses expect the prediction as the first argument and the target as the second; with the arguments swapped, the gradient is not propagated correctly back through the prediction.
Therefore, you should change it to
loss=lossfn(yHat,y)
Or, even better, rename the variables to something unambiguous, e.g. pred for the model output and target for the ground truth, so the argument order is obvious.
Is there a way to specify the output dimension of pytorch least square solution? | With a 3 by n by k tensor A and a 1 by k by m tensor x we can have Ax = B, where B has shape [3, n, m].
torch.linalg.lstsq(A, B) returns a 3 x k x m tensor as the solution. Is there a way to find the 1 by k by m tensor x?
| The difference between torch.linalg.lstsq and torch.matmul is that torch.linalg.lstsq computes its answer batch-wise, giving one solution per batch element.
Your desired 1 by k by m solution is a single, global solution applied across the whole batch. In that case, you can simply collapse the batch dimension and obtain your least-squares solution:
A_re = A.reshape(1,-1, k)
B_re = B.reshape(1, -1, m) # or torch.matmul(A_re, x)
x = torch.linalg.lstsq(A_re, B_re).solution  # lstsq returns a named tuple
x.size()
> torch.Size([1, k, m])
| https://stackoverflow.com/questions/73043185/ |
Pytorch matrix size issue does not multiply | This may have been answered before, so I'd be happy about any links. I am new to pytorch and do not understand why my Conv2d pipeline is failing with
mat1 and mat2 shapes cannot be multiplied (64x49 and 3136x512)
self.net = nn.Sequential(
nn.Conv2d(in_channels=c, out_channels=32, kernel_size=8, stride=4),
nn.ReLU(),
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=2),
nn.ReLU(),
nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1),
nn.ReLU(),
nn.Flatten(),
nn.Linear(3136, 512),
nn.ReLU(),
nn.Linear(512, output_size)
)
with input shape 1x84x84.
I did the calculation and this is the size that breaks down over the different steps with the kernel and size settings per layer.
84 -> K:8 , S:4 => 20
20 -> K:3 , S:2 => 9
9 -> K:3 , S:1 => 7
7^2 * 64 => 3136 for the flattened layer
I am not sure where the 64x49 is coming from.
| I have tried your model and your calculation is totally correct. The problem lies in your input. If your input shape is 1x84x84, a 3D tensor, you should actually feed the network a 4D tensor, where the first dimension is the batch size. (Batching is widely used to improve computation speed.) The 64x49 in the error comes from nn.Flatten(), which flattens from dim 1 by default: with an unbatched input, the final (64, 7, 7) feature map is treated as a batch of 64 samples with 7*7 = 49 features each.
If you just want to test on a single sample, you can add a dimension with x = x[None, :] to make it a 4D tensor. This is a quick fix to your problem.
| https://stackoverflow.com/questions/73043406/ |
Is it necessary to use with torch.no_grad() for feature extraction? | I'm attempting feature extraction in an unorthodox way. I extract features in eval() mode to switch off the batch norm and dropout layers and use the running means and std provided by ImageNet.
I use a feature extractor to extract features from two related images and concatenate the two tensors stackwise before passing through a linear dense classifier model for training. I'm wondering whether I can avoid using with torch.no_grad() as the two models are unrelated.
Here is a simplified version:
num_classes = 2
num_epochs = 10
densenet = DenseNetConv()
# set densenet to eval to switch off batch norm and dropout layers and use ImageNet running means/ std devs
densenet.eval()
densenet.to(device)
classifier = nn.Linear(4416, num_classes)
classifier.to(device)
criterion = nn.CrossEntropyLoss().to(device)
optimizer = torch.optim.Adam(classifier.parameters(), lr=0.001)  # defined after classifier exists
for epoch in range(num_epochs):
classifier.train()
for i, (inputs_1, inputs_2, labels) in enumerate(dataloaders_dict['train']):
inputs_1= inputs_1.to(device)
inputs_2 = inputs_2.to(device)
labels = labels.to(device)
features_1 = densenet(inputs_1) # extract features 1
features_2 = densenet(inputs_2) # extract features 2
combined = torch.cat([features_1, features_2], dim=1) # combine features
combined = combined.view(-1, 4416) # reshape
optimizer.zero_grad()
# Forward pass to get output/logits
outputs = classifier(combined)
# Calculate Loss: softmax --> cross entropy loss
loss = criterion(outputs, labels)
_, pred = torch.max(outputs, 1)
equality_check = (labels.data == pred)
# Getting gradients w.r.t. parameters
loss.backward()
optimizer.step()
As you can see, I do not call with torch.no_grad(), despite having densenet.eval() as my separate feature extractor. Is there an issue with the way this is implemented or can I assume that this will not interfere with the classifier model?
| If you are doing inference on a model, applying torch.no_grad() won't have any effect on the resulting output. As you've said only nn.Module.eval will since it modifies how the forward operation is performed (namely which statistics to use to normalize the batch elements).
It is recommended to switch off gradient computation when backpropagation is not necessary. This avoids caching activations on forward call resulting in faster inference time.
In your case, you can either wrap your inference call on densenet with torch.no_grad:
with torch.no_grad():
    features_1 = densenet(inputs_1) # extract features 1
    features_2 = densenet(inputs_2) # extract features 2
Or alternatively, switch off the requires_grad flag on your module's parameter tensors using nn.Module.requires_grad_:
densenet.eval()
densenet.requires_grad_(False)
| https://stackoverflow.com/questions/73045594/ |
(Pytorch) Why conv2d results are all different. Their data type is all integer, no float | I've tried and compared three different methods for convolution computation with a custom kernel in Pytorch. Their results are different but I don't understand why that is.
Setup code:
import torch
import torch.nn.functional as F
inp = torch.arange(3*500*700).reshape(1,3,500,700).to(dtype=torch.float32)
wgt = torch.ones((1,3,3,3)).to(dtype=torch.float32)
stride = 1
padding = 0
h = inp.shape[2] - wgt.shape[2] + 1
w = inp.shape[3] - wgt.shape[3] + 1
Method 1
out1 = torch.zeros((1,h,w)).to(dtype=torch.float32)
for o in range(1):
for i in range(3):
for j in range(h):
for k in range(w):
out1[o,j,k] = out1[o,j,k] + (inp[0, i, j*stride:j*stride+3, k*stride:k*stride+3] * wgt[0,i]).sum()
out1 = out1.to(dtype=torch.int)
Method 2
inp_unf = F.unfold(inp, (3,3))
out_unf = inp_unf.transpose(1,2).matmul(wgt.view(1,-1).t()).transpose(1,2)
out2 = F.fold(out_unf, (h,w), (1,1))
out2 = out2.to(dtype=torch.int)
Method 3
out3 = F.conv2d(inp, wgt, bias=None, stride=1, padding=0)
out3 = out3.to(dtype=torch.int)
And here are the results comparison:
>>> h*w
347604
>>> (out1==out2).sum().item()
327338
>>> (out2 == out3).sum().item()
344026
>>> (out1 == out3).sum().item()
330797
>>> out1.shape
(1, 498, 698)
>>> out2.shape
(1, 1, 498, 698)
>>> out3.shape
(1, 1, 498, 698)
Their data types are all int, so floating point shouldn't affect the result. When I use a square input such as h=500 and w=500, all three results match, but not for non-square inputs, such as the one above with h=500 and w=700. Any insight?
| All three results are cast to integer data types, but keep in mind their computation is done in float32, and the three methods accumulate the products in different orders. The resulting tiny floating-point discrepancies can flip values across integer truncation boundaries, which is why the element-wise equality counts differ. It is therefore preferred to check equality between two tensors using torch.isclose:
>>> torch.isclose(out1, out2).float().mean()
tensor(1.)
>>> torch.isclose(out2, out3).float().mean()
tensor(1.)
>>> torch.isclose(out1, out3).float().mean()
tensor(1.)
| https://stackoverflow.com/questions/73045968/ |
Pytorch: How to prepare 1d dataset from pandas dataframe? | I am trying to make a 1d Dataset from a pandas data frame; however, the output is weird.
I wrote the following code to convert the dataset from a pandas dataframe (size 8000x512):
# create dataset
class carte_dataset(Dataset):
def __init__(self,root):
self.root = root
self.df = pd.read_csv(root,index_col=0)
self.X = torch.tensor(self.df.iloc[:,1:].values)
self.regi_no = self.df.iloc[:,0].values
def __len__(self):
return len(self.regi_no)
def __getitem__(self,idx):
return self.X[idx],self.regi_no[idx]
Then, I confirmed the tensor size
dataset = carte_dataset(root)
data,_ = dataset.__getitem__(0)
data.size()
I expected the size to be torch.Size([1,512]), but the shape was torch.Size([512]).
Is this the appropriate way to make a 1d dataset from a pandas dataframe? Also, if this way is incorrect, how should I revise this code?
| What you need to do is wrap the dataset with a dataloader, which has the effect of
retrieving the individual element tuple pairs from the underlying dataset: self.X[idx] and self.regi_no[idx], shaped (512,) and scalar respectively,
and collating them to form two batches of inputs/labels shaped (bs, 512) and (bs,), where bs is the batch size.
The standard dataloader utility in PyTorch is torch.utils.data.DataLoader:
>>> from torch.utils import data
>>> dataloader = data.DataLoader(dataset, batch_size=1, shuffle=False)
Then you can iterate through the dataset via the dataloader:
>>> for x, y in dataloader:
... # x shaped (1, 512), corresponds to [X[0]]
...     # y shaped (1,), corresponds to [regi_no[0]]
| https://stackoverflow.com/questions/73048023/ |
Pytorch, get the index of the first 0 in as mask? | I have a tensor that looks like: (1, 1, 1, 1, 1, 1, 1, 1, 0, 0).
I want to get the index where the first zero appears.
What would be the best way to do this?
| Not the best usage of argmin but it should work here I think:
>>> torch.tensor([1, 1, 1, 1, 1, 1, 1, 1, 0, 0]).argmin()
tensor(8)
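If 0 is not guaranteed to be the minimum (e.g. the tensor may contain negative values), a sketch of a more general approach:
t = torch.tensor([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
idx = (t == 0).nonzero(as_tuple=True)[0][0]  # tensor(8), index of the first zero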
| https://stackoverflow.com/questions/73049694/ |
Pytorch: Set indexes in a tensor based on a list of tensor indices | Is there a way to efficiently set the values of a tensor based on a tensor of indices and a tensor of values?
tensor_to_change = tensor([[-36.9127, -45.6596, -47.1595],
[-36.9409, -45.7024, -47.2050],
[-36.9865, -45.7665, -47.2711],
[-36.3202, -36.9561, -47.2066],
[-36.2929, -36.9333, -47.1702]]
tensor_of_indices = tensor([[0],
[0],
[0],
[1],
[1]])
tensor_of_values = tensor([[-37.9409],
[-38.4865],
[-36.9561],
[-34.9561],
[-38.7562]])
I can accomplish this with a for loop, but this step then becomes really slow:
for i, a in enumerate(tensor_of_indices):
tensor_to_change[i][a] = tensor_of_values[i]
Is there a torch function which can do this faster?
| Try this:
rows = torch.arange(tensor_to_change.size(0))
cols = tensor_of_indices.squeeze()
tensor_to_change[rows, cols] = tensor_of_values.squeeze()
Output:
tensor_to_change
>tensor([[-37.9409, -45.6596, -47.1595],
[-38.4865, -45.7024, -47.2050],
[-36.9561, -45.7665, -47.2711],
[-36.3202, -34.9561, -47.2066],
[-36.2929, -38.7562, -47.1702]])
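Equivalently, since the index and value tensors already share the (5, 1) shape, torch.Tensor.scatter_ does it in a single call:
tensor_to_change.scatter_(1, tensor_of_indices, tensor_of_values)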
| https://stackoverflow.com/questions/73049958/ |
When training with pytorch, debugger hangs, even though running works fine | Trying to train with pytorch hangs in debug mode, but works in run mode.
sampler_train = WeightedRandomSampler(
sample_weights_train,
num_samples=len(sample_weights_train),
replacement=True
)
train_loader = torch.utils.data.DataLoader(
train_set,
sampler=sampler_train,
batch_size=32,
num_workers=2
)
for epoch in range(10):
for i, data in enumerate(train_loader, 0):
model.train()
print("something")
After placing a breakpoint on model.train(), then moving on to the next line, "something" is never printed in debug mode, but is printed in run mode in Pycharm.
How to debug my code?
| After a long search, I found the answer here, which led to here
Setting Gevent Compatible in Preferences | Build, Execution,
Deployment | Python Debugger solves the issue.
| https://stackoverflow.com/questions/73051400/ |
pytorch batchwise indexing | I am searching for a way to do some batchwise indexing for tensors.
If I have a variable Q of size 1000, I can get the elements I want by
Q[index], where index is a vector of the wanted elements.
Now I would like to do the same for more dimensional tensors.
So suppose Q is of shape n x m and I have a index matrix of shape n x p.
My goal is to get for each of the n rows the specific p elements out of the m elements.
But Q[index] is not working for this situation.
Do you have any thoughts on how to handle this?
| This seems to be a simple application of torch.gather, which doesn't require any additional reshaping of the data or the index tensor:
>>> Q = torch.rand(5, 4)
tensor([[0.8462, 0.3064, 0.2549, 0.2149],
[0.6801, 0.5483, 0.5522, 0.6852],
[0.1587, 0.4144, 0.8843, 0.6108],
[0.5265, 0.8269, 0.8417, 0.6623],
[0.8549, 0.6437, 0.4282, 0.2792]])
>>> index
tensor([[0, 1, 2],
[2, 3, 1],
[0, 1, 2],
[2, 2, 2],
[1, 1, 2]])
The following gather operation applied on dim=1 return a tensor out, such that:
out[i, j] = Q[i, index[i,j]]
This is done with the following call of torch.Tensor.gather on Q:
>>> Q.gather(dim=1, index=index)
tensor([[0.8462, 0.3064, 0.2549],
[0.5522, 0.6852, 0.5483],
[0.1587, 0.4144, 0.8843],
[0.8417, 0.8417, 0.8417],
[0.6437, 0.6437, 0.4282]])
| https://stackoverflow.com/questions/73051895/ |
Does using queue in a multiprocessing Process use pickling? | Consider the following example from Python documentation. Does multiprocessing.Process use serialization (pickle) to put items in the shared queue?
from multiprocessing import Process, Queue
def f(q):
q.put([42, None, 'hello'])
if __name__ == '__main__':
q = Queue()
p = Process(target=f, args=(q,))
p.start()
print(q.get()) # prints "[42, None, 'hello']"
p.join()
I understand multiprocessing.Process uses pickle to serialize / deserialize data to communicate with the main process. And threading.Thread does not need serialization, so it does not use pickle. But I'm not sure how communication with Queue happens in a multiprocessing.Process.
Additional Context
I want multiple workers to fetch data from a database (or local storage) to fill the shared queue where the items are consumed by the main process sequentially. Each record that is fetched is large (1-1.5 mb). The problem with using multiprocessing.Process is that serialization / deserialization of the data takes a long time. Pytorch's DataLoader makes use of this and is therefore unsuitable for my use case.
Is multi-threading the best alternative for such a use case?
| Yes, multiprocessing's queues do use pickle internally. This can be seen in multiprocessing/queues.py of the CPython implementation. In fact, AFAIK CPython uses pickle for transferring any object between interpreter processes. The only way to avoid this is to use shared memory, but it introduces strong limitations and basically cannot be used for arbitrary object types.
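For raw numeric buffers, a minimal sketch of that shared-memory route (the array shape is illustrative): the payload is written once into a named block, and only the small metadata needs to travel through a queue, so nothing large is pickled.
import numpy as np
from multiprocessing import shared_memory

# producer: copy the record into a named shared-memory block
record = np.random.rand(400, 400)                    # ~1.2 MB of float64
shm = shared_memory.SharedMemory(create=True, size=record.nbytes)
buf = np.ndarray(record.shape, dtype=record.dtype, buffer=shm.buf)
buf[:] = record                                      # one memcpy, no pickling
# put only (shm.name, record.shape, record.dtype) on the queue

# consumer: attach to the block by name and view it as an array
shm2 = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(record.shape, dtype=record.dtype, buffer=shm2.buf)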
Multithreading is limited by the Global Interpreter Lock (GIL), which basically prevents any parallel speed-up except for operations releasing the GIL (e.g. some NumPy functions) and IO-bound ones.
Python (and especially CPython) is not the best language for parallel computing (nor for high performance). It was not designed with that in mind, and this is nowadays a pretty strong limitation given the sharp increase in the number of cores per processor.
| https://stackoverflow.com/questions/73058545/ |
Reading .pyth filetype in PyTorch | There is a pretrained model in a repository whose file type is .pyth. I searched the web to find out about this file type and which language is able to read it, but I could not find anything. Since I am working with PyTorch, is it possible to read such a file in PyTorch? Moreover, how is such a file normally read and generated?
To be clearer, in the repository of the TimeSformer model, the pretrained models are of this filetype and as an example, you can find the following commands in that repository:
import torch
from timesformer.models.vit import TimeSformer
model = TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time', pretrained_model='/path/to/pretrained/model.pyth')
dummy_video = torch.randn(2, 3, 8, 224, 224) # (batch x channels x frames x height x width)
pred = model(dummy_video,) # (2, 400)
| The file extension can literally be anything, it doesn't change the file contents. If you run torch.load("file.pyth") it will load a weight dictionary. You can find this in the code in the repo you included. They save the model using this code:
path_to_checkpoint = get_path_to_checkpoint(path_to_job, epoch + 1)
with PathManager.open(path_to_checkpoint, "wb") as f:
torch.save(checkpoint, f)
and the get_path_to_checkpoint function can be found here:
def get_path_to_checkpoint(path_to_job, epoch):
"""
Get the full path to a checkpoint file.
Args:
path_to_job (string): the path to the folder of the current job.
epoch (int): the number of epoch for the checkpoint.
"""
name = "checkpoint_epoch_{:05d}.pyth".format(epoch)
return os.path.join(get_checkpoint_dir(path_to_job), name)
So, they just pass a filename with an extension .pyth to torch.save.
| https://stackoverflow.com/questions/73065101/ |
How to bound the output of a layer in pytorch | I want my model to output a single value, how can I constrain the value to (a, b)?
for example, my code is:
class ActorCritic(nn.Module):
def __init__(self, num_state_features):
super(ActorCritic, self).__init__()
# value
self.critic_net = nn.Sequential(
nn.Linear(num_state_features, 64),
nn.ReLU(),
nn.Linear(64, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 1)
)
# policy
self.actor_net = nn.Sequential(
nn.Linear(num_state_features, 64),
nn.ReLU(),
nn.Linear(64, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 1),
)
def forward(self, state):
value = self.critic_net(state)
policy_mean = self.actor_net(state)
return value, policy_mean
and I want the policy output to be in the range (500, 3000), how can I do this?
(I have tried torch.clamp(); this does not work well since the policy always stays the same once it is near the limit: for example, the output goes to -1000000 and then stays at 500 forever, or takes a really long time to change. The same is true for functions like nn.Sigmoid().)
| Use an activation function on the final layer that bounds the outputs in some range, then normalize to your desired range. For instance, sigmoid function bound the output in the range [0,1].
output = torch.sigmoid(previous_layer_output) # in range [0,1]
output_normalized = output*(b-a) + a # in range [a,b]
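In the model from the question, that could look like the following sketch with the (500, 3000) bounds:
def forward(self, state):
    value = self.critic_net(state)
    raw = self.actor_net(state)
    policy_mean = torch.sigmoid(raw) * (3000 - 500) + 500  # bounded to (500, 3000)
    return value, policy_mean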
| https://stackoverflow.com/questions/73071399/ |
Cannot import Pytorch Dataset class written in another py file | I am trying to make my main.py file concise so I wrote my Dataset class in another py file.
from torch.utils.data import DataLoader, Dataset
from sklearn.model_selection import train_test_split
from torch.nn.utils.rnn import pad_sequence
class mydata(Dataset):
def __init__(self, X, y):
self.X = torch.FloatTensor(X)
self.y = torch.FloatTensor(y)
def __len__(self):
return len(self.X)
def __getitem__(self, index):
y = self.y[index]
X = self.X[index]
return X, y
When I try to import it, I got an import error.
ImportError: cannot import name 'mydata' from 'tools' (d:\A\Pycodes\tools.py)
| The problem is solved: the name tools.py apparently clashed with something that already existed in the environment, so Python was importing the wrong module. Using a different module name avoids the error.
| https://stackoverflow.com/questions/73071814/ |
Does model.eval() placement matter in the code? | Might be a bit silly but I need to make sure this is correct. Does it matter if I place my code like this?:
model.eval()
with torch.no_grad():
Or can I get the same behaviour like this:
with torch.no_grad():
model.eval()
I'm just wondering because I have a function that has model.eval() inside of it which goes inside of a loop where with torch.no_grad(): is before it ...
| Both of them are correct; model.eval() and torch.no_grad() are independent of each other, so their relative order doesn't matter. You just need to call model.eval() before you evaluate: you should put the model in eval mode, both in general and so that batch norm doesn't cause you issues and uses its eval statistics.
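The usual pattern, for reference:
model.eval()               # switch dropout/batch norm to eval behaviour
with torch.no_grad():      # stop autograd from tracking operations
    output = model(inputs)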
| https://stackoverflow.com/questions/73074362/ |
Conda is installing a very old version of pytorch-lightning | I tried installing pytorch lightning by runnning:
conda install -c conda-forge pytorch-lightning
as described here: https://anaconda.org/conda-forge/pytorch-lightning
This link seems updated to version 1.6.5
However, when I run this command, an old version of pytorch-lightning is installed, as can be seen here:
> The following NEW packages will be INSTALLED:
>
> absl-py pkgs/main/noarch::absl-py-0.15.0-pyhd3eb1b0_0
> blinker pkgs/main/linux-64::blinker-1.4-py39h06a4308_0
> google-auth-oauth~ pkgs/main/noarch::google-auth-oauthlib-0.4.1-py_2
> oauthlib pkgs/main/noarch::oauthlib-3.2.0-pyhd3eb1b0_0
> pytorch-lightning conda-forge/noarch::pytorch-lightning-0.8.5-py_0
> requests-oauthlib pkgs/main/noarch::requests-oauthlib-1.3.0-py_0
> tensorboard pkgs/main/noarch::tensorboard-2.6.0-py_1
> tensorboard-data-~
> pkgs/main/linux-64::tensorboard-data-server-0.6.0-py39hca6d32c_0
> tensorboard-plugi~ pkgs/main/noarch::tensorboard-plugin-wit-1.6.0-py_0
As you can see, version 0.8.5 is being installed. Is there a way for me to use conda and get a newer version of pytorch-lightning?
Things I have tried:
updating conda
using both linux and windows
| The problem was solved by creating a fresh conda environment. Apparently there were some conflicting pre-installed dependencies in the old one.
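For reference, a minimal sketch of that (the environment name and Python version are arbitrary):
conda create -n pl-env python=3.9
conda activate pl-env
conda install -c conda-forge pytorch-lightning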
| https://stackoverflow.com/questions/73084185/ |
CUDA error: invalid device ordinal when using python 3.9 | I'm trying to execute some code, but I keep getting this error when running this piece of code:
import tensorflow as tf
from xba import XBA
import torch
torch.tensor([1, 2, 3, 4]).to(device="cuda:2")
torch.tensor([1, 2, 3, 4]).to(device="cuda:2") generates this error:
RuntimeError: CUDA error: invalid device ordinal
CUDA kernel errors might be asynchronously reported at some other API call,so the
stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Any idea about the origin of the bug? It's just the first line of code!
| "cuda:2"
selects the third GPU in your system. If you don't have 3 GPUs (at least) in your system, you'll get this error.
Assuming you have at least 1 properly installed and set up CUDA GPU available, try:
"cuda:0"
| https://stackoverflow.com/questions/73085720/ |
Normalized Cross Entropy Loss Implementation Tensorflow/Keras | I am trying to implement a normalized cross entropy loss as described in this publication
The math given is:
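Reading it off the implementation below, with q the one-hot label distribution, p the model's softmax output and K the number of classes:

$$\mathrm{NCE} = \frac{-\sum_{k=1}^{K} q(k \mid x)\,\log p(k \mid x)}{-\sum_{k=1}^{K} \log p(k \mid x)}$$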
This paper provided a PyTorch implementation:
@mlconfig.register
class NormalizedCrossEntropy(torch.nn.Module):
def __init__(self, num_classes, scale=1.0):
super(NormalizedCrossEntropy, self).__init__()
self.device = device
self.num_classes = num_classes
self.scale = scale
def forward(self, pred, labels):
pred = F.log_softmax(pred, dim=1)
label_one_hot = torch.nn.functional.one_hot(labels, self.num_classes).float().to(self.device)
nce = -1 * torch.sum(label_one_hot * pred, dim=1) / (- pred.sum(dim=1))
return self.scale * nce.mean()
But I need this to be translated to tensorflow for my ongoing project. Can anyone help me implement this normalized crossentropy loss in tensorflow?
| I think it is just a matter of translating the method names:
# given y_true as the one-hot labels and y_pred as the predicted (log-)probabilities
def NCE(y_true, y_pred):
num = - tf.math.reduce_sum(tf.multiply(y_true, y_pred), axis=1)
denom = -tf.math.reduce_sum(y_pred, axis=1)
return tf.reduce_mean(num / denom)
t = tf.constant([[1,0,0], [0,0,1]], dtype=tf.float64)
y = tf.constant([[0.3,0.6,0.1], [0.1,0.1,0.8]], dtype=tf.float64)
NCE(t,y)
# <tf.Tensor: shape=(), dtype=float64, numpy=0.55>
Just check if the resulting loss is the same since I've not tested it
| https://stackoverflow.com/questions/73095123/ |
Loading a modified pretrained model using strict=False in PyTorch | I want to use a pretrained model as the encoder part in my model. You can find a version of my model:
class MyClass(nn.Module):
def __init__(self, pretrained=False):
super(MyClass, self).__init__()
self.encoder=S3D_featureExtractor_multi_output()
if pretrained:
weight_dict=torch.load(os.path.join('models','weights.pt'))
model_dict=self.encoder.state_dict()
list_weight_dict=list(weight_dict.items())
list_model_dict=list(model_dict.items())
for i in range(len(list_model_dict)):
assert list_model_dict[i][1].shape==list_weight_dict[i][1].shape
model_dict[list_model_dict[i][0]].copy_(weight_dict[list_weight_dict[i][0]])
for i in range(len(list_model_dict)):
assert torch.all(torch.eq(model_dict[list_model_dict[i][0]],weight_dict[list_weight_dict[i][0]].to('cpu')))
print('Loading finished!')
def forward(self, x):
a, b = self.encoder(x)
return a, b
Because I modified some parts of the code of this pretrained model, based on this post I need to apply strict=False to avoid facing error, but based on the scenario that I load the pretrained weights, I cannot find a place in the code to apply strict=False. How can I apply that or how can I change the scenario of loading the pretrained model taht makes it possible to apply strict=False?
| strict=False is an argument you specify when you use the load_state_dict() method. A state_dict is just a Python dictionary that helps you save and load model weights.
(for more details, see https://pytorch.org/tutorials/recipes/recipes/what_is_state_dict.html)
If you use strict=False in load_state_dict, you inform PyTorch that the target model and the original model are not identical, so it only loads the weights of layers which are present in both and ignores the rest.
(see https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=load_state_dict#torch.nn.Module.load_state_dict)
So, you will need to specify the strict argument when you load the pretrained model weights. load_state_dict can be called at this step.
If the model for which weights must be loaded is self.encoder
and if state_dict can be retrieved from the model you just loaded, you can just do this
loaded_weights = torch.load(os.path.join('models','weights.pt'))
self.encoder.load_state_dict(loaded_weights, strict=False)
for more details and a tutorial, see https://pytorch.org/tutorials/beginner/saving_loading_models.html .
| https://stackoverflow.com/questions/73102413/ |
Accessing a specific layer in a pretrained model in PyTorch | I want to extract the features from certain blocks of the TimeSformer model and also want to remove the last two layers.
import torch
from timesformer.models.vit import TimeSformer
model = TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time', pretrained_model='/path/to/pretrained/model.pyth')
The print of the model is as follows:
TimeSformer(
(model): VisionTransformer(
(dropout): Dropout(p=0.0, inplace=False)
(patch_embed): PatchEmbed(
(proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
)
(pos_drop): Dropout(p=0.0, inplace=False)
(time_drop): Dropout(p=0.0, inplace=False)
(blocks): ModuleList( #************
(0): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): Identity()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): DropPath()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
.
.
.
.
.
.
(11): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): DropPath()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
(norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True) **** I want to remove this layer*****
(head): Linear(in_features=768, out_features=400, bias=True) **** I want to remove this layer*****
)
)
Specifically, I want to extract the outputs of the 4th, 8th and 11th blocks of the model and remove the last two layers. How can I do this? I tried using TimeSformer.blocks[0] but that was not working.
Update:
I have a Class and I need to access the aforementioned blocks of the TimeSformer as the output of this class. The input of this class is a 5D tensor. This is the non-modified code that I use for extracting the outputs of the aforementioned blocks:
class Model(nn.Module):
def __init__(self, pretrained=False):
super(Model, self).__init__()
self.model =TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time',
pretrained_model='/home/user/models/TimeSformer_divST_16x16_448_K400.pyth')
self.activation = {}
def get_activation(name):
def hook(model, input, output):
self.activation[name] = output.detach()
return hook
self.model.model.blocks[4].register_forward_hook(get_activation('block4'))
self.model.model.blocks[8].register_forward_hook(get_activation('block8'))
self.model.model.blocks[11].register_forward_hook(get_activation('block11'))
block4_output = self.activation['block4']
block8_output = self.activation['block8']
block11_output = self.activation['block11']
def forward(self, x, out_consp = False):
features2, features3, features4 = self.model(x)
| To extract the intermediate output from specific layers, you can register it as a hook, the example is showed by the snipcode below:
import torch
from timesformer.models.vit import TimeSformer
model = TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time', pretrained_model='/path/to/pretrained/model.pyth')
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
model.model.blocks[4].register_forward_hook(get_activation('block4'))
model.model.blocks[8].register_forward_hook(get_activation('block8'))
model.model.blocks[11].register_forward_hook(get_activation('block11'))
x = torch.randn(2, 3, 8, 224, 224)  # (batch, channels, frames, height, width)
output = model(x)
block4_output = activation['block4']
block8_output = activation['block8']
block11_output = activation['block11']
To remove the last two layers, you can replace them with Identity:
model.model.norm = torch.nn.Identity()
model.model.head = torch.nn.Identity()
| https://stackoverflow.com/questions/73102541/ |
Supress UserWarning from torchmetrics | When I train a neural network using PyTorch, I get the following warning caused by the torchmetrics library:
/Users/dev/miniconda/envs/pytorch/lib/python3.10/site-packages/torchmetrics/utilities/prints.py:36:
UserWarning: Torchmetrics v0.9 introduced a new argument class
property called full_state_update that has not been set for this
class (SMAPE). The property determines if update by default needs
access to the full metric state. If this is not the case, significant
speedups can be achieved and we recommend setting this to False. We
provide an checking function from torchmetrics.utilities import check_forward_no_full_state that can be used to check if the
full_state_update=True (old and potential slower behaviour, default
for now) or if full_state_update=False can be used safely.
I tried to suppress this warning by using the warnings package in my script:
with warnings.catch_warnings():
warnings.simplefilter("ignore")
However, the warning is still shown which is probably due to a function in prints.py of torchmetrics:
def _warn(*args: Any, **kwargs: Any) -> None:
warnings.warn(*args, **kwargs)
Is it possible to get rid of this warning from my script without changing the library code?
| Use the -W argument to control how Python deals with warnings. Consider the following simple example; let the content of dowarn.py be
import warnings
warnings.warn("I am UserWarning", UserWarning)
warnings.warn("I am FutureWarning", FutureWarning)
then
python dowarn.py
gives
dowarn.py:2: UserWarning: I am UserWarning
warnings.warn("I am UserWarning", UserWarning)
dowarn.py:3: FutureWarning: I am FutureWarning
warnings.warn("I am FutureWarning", FutureWarning)
and
python -W ignore dowarn.py
gives empty output and
python -W ignore::UserWarning dowarn.py
gives
dowarn.py:3: FutureWarning: I am FutureWarning
warnings.warn("I am FutureWarning", FutureWarning)
See the python man page for a discussion of -W values.
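If you can't control how the script is launched, the in-script equivalent is a module-level filter installed before the warning is triggered (e.g. before instantiating the torchmetrics metric):
import warnings
warnings.filterwarnings("ignore", category=UserWarning, module="torchmetrics")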
| https://stackoverflow.com/questions/73106528/ |
Accessing a module inside of a block in a pretrained model | How can I access the output of a specific layer in a specific block of a pretrained model? To be clearer, the print of the TimeSformer model is as follows:
TimeSformer(
(model): VisionTransformer(
(dropout): Dropout(p=0.0, inplace=False)
(patch_embed): PatchEmbed(
(proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
)
(pos_drop): Dropout(p=0.0, inplace=False)
(time_drop): Dropout(p=0.0, inplace=False)
(blocks): ModuleList(
(0): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): Identity()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention( # *********
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True) # @@@@@@@
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): DropPath()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
.
.
.
.
.
.
(11): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): DropPath()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
(norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(head): Linear(in_features=768, out_features=400, bias=True)
)
Based on the answer that was proposed in this post it is possible to have the access to the output of a block:
import torch
from timesformer.models.vit import TimeSformer
model = TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time', pretrained_model='/path/to/pretrained/model.pyth')
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
model.model.blocks[4].register_forward_hook(get_activation('block4'))
model.model.blocks[8].register_forward_hook(get_activation('block8'))
model.model.blocks[11].register_forward_hook(get_activation('block11'))
x = torch.randn(3,3,224,224)
output = model(x)
block4_output = activation['block4']
block8_output = activation['block8']
block11_output = activation['block11']
My question is: how can I access a module inside a block, or a layer inside a module of a block, such as the ones marked with the * and @ signs above? To be clearer, how can I access the output of (temporal_attn), and also the output of the (proj) layer inside (temporal_attn)?
| Having access to those blocks, you can easily proceed with accessing the sub-modules via the dot notation, assuming those blocks are custom nn.Module (i.e. they are not subscriptable and the bracket notation can't be used). For instance, with block n°4:
>>> model.model.blocks[4].temporal_attn \
.register_forward_hook(get_activation('attn_block4'))
>>> model.model.blocks[4].temporal_attn.proj \
.register_forward_hook(get_activation('attn_proj_block4'))
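Assuming those inner modules return plain tensors (which is what the question's hook already expects), the captured activations can then be read after a forward pass, e.g.:
>>> x = torch.randn(3, 3, 224, 224)
>>> out = model(x)
>>> activation['attn_block4'].shape       # output of block 4's temporal_attn
>>> activation['attn_proj_block4'].shape  # output of the proj layer inside it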
| https://stackoverflow.com/questions/73107859/ |
Convert a tensorflow script to pytorch (TransformedDistribution) | I am trying to rewrite a tensorflow script in pytorch. I have a problem finding the equivalent part in torch for the following line from this script:
import tensorflow_probability as tfp
tfd = tfp.distributions
a_distribution = tfd.TransformedDistribution(
distribution=tfd.Normal(loc=0.0, scale=1.0),
bijector=tfp.bijectors.Chain([
tfp.bijectors.AffineScalar(shift=self._means,
scale=self._mags),
tfp.bijectors.Tanh(),
tfp.bijectors.AffineScalar(shift=mean, scale=std),
]),
event_shape=[mean.shape[-1]],
batch_shape=[mean.shape[0]])
In particular, I am having trouble replacing the tfp.bijectors.Chain component.
I wrote the following lines in torch, but I am wondering whether these PyTorch lines are compatible with the above TensorFlow code, and whether I can specify the batch_shape somewhere.
base_distribution = torch.normal(0.0, 1.0)
transforms = torch.distributions.transforms.ComposeTransform([
    torch.distributions.transforms.AffineTransform(loc=self._action_means, scale=self._action_mag, event_dim=mean.shape[-1]),
    torch.nn.Tanh(),
    torch.distributions.transforms.AffineTransform(loc=mean, scale=std, event_dim=mean.shape[-1]),
])
a_distribution = torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms)
Any solution?
| In Pytorch, the base distribution class Distribution expects both a batch_shape and a event_shape parameter. Now notice that the subclass TransformedDistribution does not take such parameters (src code). That's because they are inferred from the base distribution class provided on initialization: see here and here.
You already found out about AffineTransform and ComposeTransform. Keep in mind you must stick with classes from the torch.distributions.
This holds for torch.normal which should be replaced with Normal. With this class, the shape is inferred from the provided loc and scale tensors.
And nn.Tanh which should be replaced with TanhTransform.
Here is a minimal example using your transformation pipeline:
Imports:
from torch.distributions.normal import Normal
from torch.distributions import transforms as tT
from torch.distributions.transformed_distribution import TransformedDistribution
Parameters:
mean = torch.rand(2,2)
std = 1
_action_means, _action_mag = 0, 1
event_dim=mean.shape[-1]
Distribution definition:
a_distribution = TransformedDistribution(
base_distribution=Normal(loc=torch.full_like(mean, 0),
scale=torch.full_like(mean, 1)),
transforms=tT.ComposeTransform([
tT.AffineTransform(loc=_action_means, scale=_action_mag, event_dim=event_dim),
tT.TanhTransform(),
tT.AffineTransform(loc=mean, scale=std, event_dim=event_dim)]))
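A quick sanity check (with the illustrative parameters above, the transforms have event_dim=1, so the last dimension is treated as the event dimension):
sample = a_distribution.rsample()
print(sample.shape)  # torch.Size([2, 2]) -- batch_shape (2,), event_shape (2,)
log_p = a_distribution.log_prob(sample)
print(log_p.shape)   # torch.Size([2]) -- log_prob is reduced over the event dimension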
| https://stackoverflow.com/questions/73110443/ |
The essence of learnable positional embedding? Does embedding improve outcomes better? | I was recently reading the bert source code from the hugging face project. I noticed that the so-called "learnable position encoding" seems to refer to a specific nn.Parameter layer when it comes to implementation.
def __init__(self):
    super().__init__()
    self.positional_encoding = nn.Parameter(...)

def forward(self, x):
    x = x + self.positional_encoding
Is this roughly how learnable position encoding is done? I'm not sure whether it is really that simple, or whether I understand it correctly, so I'd like to ask someone with experience.
In addition, I noticed that in the classic BERT structure, position is actually encoded only once, at the initial input. Does this mean that the subsequent BERT layers lose the ability to capture position information?
BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(...)
...
(pooler): BertPooler(...)
Would I get better results if the results of the previous layer were re-positional encoded before the next BERT layer?
| What is the purpose of positional embeddings?
In transformers (BERT included) the only interaction between the different tokens is done via self-attention layers. If you look closely at the mathematical operation implemented by these layers you will notice that these layers are permutation equivariant: That is, the representation of
"I do like coding"
and
"Do I like coding"
is the same, because the words (=tokens) are the same in both sentences, only their order is different.
As you can see, this "permutation equivariance" is not a desired property in many cases.
To break this symmetry/equivariance one can simply "code" the actual position of each word/token in the sentence. For example:
"I_1 do_2 like_3 coding_4"
is no longer identical to
"Do_1 I_2 like_3 coding_4"
This is the purpose of positional encoding/embeddings -- to make self-attention layers sensitive to the order of the tokens.
Now to your questions:
learnable position encoding is indeed implemented with a simple single nn.Parameter. The position encoding is just a "code" added to each token marking its position in the sequence. Therefore, all it requires is a tensor of the same size as the input sequence with different values per position.
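For example, a minimal sketch of such a module (names and sizes are illustrative, not BERT's exact implementation):
import torch
import torch.nn as nn

class LearnablePositionalEncoding(nn.Module):
    def __init__(self, max_len=512, dim=768):
        super().__init__()
        # one trainable vector per position, broadcast over the batch
        self.pos_embed = nn.Parameter(torch.zeros(1, max_len, dim))

    def forward(self, x):  # x: (batch, seq_len, dim)
        return x + self.pos_embed[:, :x.size(1)]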
Is it enough to introduce position encoding once in a transformer architecture? Yes! Since transformers stack multiple self-attention layers it is enough to add positional embeddings once at the beginning of the processing. The position information is "fused" into the semantic representation learned per token.
A nice visualization of this effect in Vision Transformers (ViT) can be found in this work:
Shir Amir, Yossi Gandelsman, Shai Bagon and Tali Dekel Deep ViT Features as Dense Visual Descriptors (arXiv 2021).
In sec. 3.1 and fig. 3 they show how the position information dominates the representation of tokens at early layers, but as you go deeper in a transformer, semantic information takes over.
| https://stackoverflow.com/questions/73113261/ |
Order-independent Deep Learning Model | I have a dataset with parallel time series. The column 'A' depends on columns 'B' and 'C'. The order (and the number) of dependent columns can change. For example:
A B C
2022-07-23 1 10 100
2022-07-24 2 20 200
2022-07-25 3 30 300
How should I transform this data, or how should I build the model, so that the order of columns 'B' and 'C' ('A', 'B', 'C' vs 'A', 'C', 'B') doesn't change the result? I know about GCN, but I don't know how to implement it. Maybe there are other ways to achieve it.
UPDATE:
I want to generalize my question and make one more example. Let's say we have a matrix as a singe observation (no time series data):
col1 col2 target
0 1 a 20
1 2 a 30
2 3 b 30
3 4 b 40
I would like to predict one value 'target' per each row/instance. Each instance depends on other instances. The order of rows is irrelevant, and the number of rows in each observation can change.
| You are looking for a permutation invariant operation on the columns.
One way of achieving this would be to apply column-wise operation, followed by a global pooling operation.
How that achieves your goal:
column-wise operations are permutation equivariant; that is, applying the operation on the columns and permuting the output, is the same as permuting the columns and then applying the operation.
A global pooling operation (e.g., max-pool, avg-pool) across the columns is permutation invariant: the result of an average pool does not depend on the order of the columns.
Applying a permutation invariant operation on top of a permutation equivariant one results in an overall permutation invariant function.
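A minimal Deep-Sets-style sketch of this recipe (names and sizes are illustrative):
import torch
import torch.nn as nn

class PermutationInvariantNet(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        # applied to each column independently -> permutation equivariant
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.rho = nn.Linear(hidden, 1)

    def forward(self, x):       # x: (batch, n_columns, in_dim), column order arbitrary
        h = self.phi(x)
        pooled = h.mean(dim=1)  # global average pool -> permutation invariant
        return self.rho(pooled)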
Additionally, you should look at self-attention layers, which are also permutation equivariant.
What I would try is:
Learn a representation (RNN/Transformer) for a single time series. Apply this representation to A, B and C.
Learn a transformer between the representation of A to those of B and C: that is, use the representation of A as "query" and those of B and C as "keys" and "values".
This will give you a representation of A that is permutation invariant in B and C.
Update (Aug 3rd, 2022):
For the case of "observations" with varying number of rows, and fixed number of columns:
I think you can treat each row as a "token" (with a fixed dimension = number of columns), and apply a Transformer encoder to predict the target for each "token", from the encoded tokens.
| https://stackoverflow.com/questions/73115170/ |
What is the difference between ... and : in Pytorch tensors and numpy indexing | I am trying to get used to Pytorch indexing. However, I couldn't understand the difference between tensor[:,-1] (which should print the last column) and tensor[...,-1], which is printing a different output (output2).
import torch
tensor = torch.rand([3,3,3,3])
print('Output1')
print(tensor[:,-1])
print('Output2')
print(tensor[...,-1])
| It looks like the following indices are equivalent
tensor[:, -1] == tensor[:, -1, :, :]
tensor[..., -1] == tensor[:, :, :, -1]
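A quick shape check makes this explicit:
import torch
tensor = torch.rand(3, 3, 3, 3)
print(tensor[:, -1].shape)    # torch.Size([3, 3, 3]) -- indexes dimension 1
print(tensor[..., -1].shape)  # torch.Size([3, 3, 3]) -- indexes the last dimension
print(torch.equal(tensor[:, -1], tensor[:, -1, :, :]))    # True
print(torch.equal(tensor[..., -1], tensor[:, :, :, -1]))  # True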
| https://stackoverflow.com/questions/73115407/ |
Printing the size of the input and output of all the layers of a pretrained model | I want to print the sizes of the inputs and outputs of all the layers of a pretrained model. I use this pretrained model as self.feature in my class.
The print of this pretrained model is as follows:
TimeSformer(
(model): VisionTransformer(
(dropout): Dropout(p=0.0, inplace=False)
(patch_embed): PatchEmbed(
(proj): Conv2d(3, 768, kernel_size=(16, 16), stride=(16, 16))
)
(pos_drop): Dropout(p=0.0, inplace=False)
(time_drop): Dropout(p=0.0, inplace=False)
(blocks): ModuleList(
(0): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): Identity()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
(1): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention( # *********
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True) # @@@@@@@
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): DropPath()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
.
.
.
.
.
.
(11): Block(
(norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_norm1): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(temporal_attn): Attention(
(qkv): Linear(in_features=768, out_features=2304, bias=True)
(proj): Linear(in_features=768, out_features=768, bias=True)
(proj_drop): Dropout(p=0.0, inplace=False)
(attn_drop): Dropout(p=0.0, inplace=False)
)
(temporal_fc): Linear(in_features=768, out_features=768, bias=True)
(drop_path): DropPath()
(norm2): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(mlp): Mlp(
(fc1): Linear(in_features=768, out_features=3072, bias=True)
(act): GELU()
(fc2): Linear(in_features=3072, out_features=768, bias=True)
(drop): Dropout(p=0.0, inplace=False)
)
)
)
(norm): LayerNorm((768,), eps=1e-06, elementwise_affine=True)
(head): Linear(in_features=768, out_features=400, bias=True)
)
This is the code of my class and my method for printing the sizes of the layers:
class Class(nn.Module):
def __init__(self, pretrained=False):
super(Class, self).__init__()
self.feature =TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time',
pretrained_model='path/to/the/weight.pyth')
def forward(self, x):
for layer in self.feature:
x = layer(x)
print(x.size())
return x
I'm using the following approach for printing
But I am facing this error:
TypeError: 'TimeSformer' object is not iterable
How can I print the sizes of all the layers?
Update:
Using the following code produces the error mentioned in the comment:
def forward(self, x, out_consp = False):
layers=list(self.featureExtractor.children())
for layer in layers:
x = layer(x)
print(x.size())
return x
| You can use hooks to print the shape of the input and the output of each layer. You can use this code to do what you want.
def hook_function(module, input, output):
print(f'{module.name} :')
print(module)
#print(module)
if isinstance(input[0], tuple):
print('input shapes:')
for elem in input[0]:
print(elem.shape)
else:
print(f'input shape: {input[0].shape}')
if isinstance(output, tuple):
print('output shapes:')
for elem in output:
print(elem.shape)
else:
print(f'output shape: {output.shape}')
print('')
def set_names(net):
def recurs(net,parent_name=None):
for name, mod in net.named_children():
if parent_name is not None:
name = '_'.join([parent_name, name])
recurs(mod, name)
setattr(mod,'name',name)
recurs(net)
def print_shapes(network, dummy_input_shape, device='cuda', eval=True):
network = network.to(device)
if eval:
network.eval()
else:
network.train()
assert dummy_input_shape[0] > 1
#print(network)
dummy = torch.randn(dummy_input_shape, device=device)
set_names(network)
handles = []
def attach_hooks(net):
leaf_layers = 0
for mod in net.children():
leaf_layers += 1
attach_hooks(mod)
if leaf_layers == 0:
handles.append(net.register_forward_hook(hook_function))
attach_hooks(network)
network(dummy)
# if needed
for handle in handles:
handle.remove()
Example:
network = TimeSformer(img_size=224, num_classes=400, num_frames=8, attention_type='divided_space_time',
pretrained_model='path/to/the/weight.pyth')
# The behaviour of a forward function could be different during training
print_shapes(network,(1,3,224,224),'cpu', eval=True)
print_shapes(network,(2,3,224,224),'cpu', eval=False)
A snippet of the output is shown below. Note that it includes a layer (norm1) that is defined before the 'temporal_norm1' layer in the 'Block' module but is called/executed later:
model_blocks_11_temporal_fc :
Linear(in_features=768, out_features=768, bias=True)
input shape: torch.Size([2, 1568, 768])
output shape: torch.Size([2, 1568, 768])
model_blocks_11_norm1 :
LayerNorm((768,), eps=1e-06, elementwise_affine=True)
input shape: torch.Size([16, 197, 768])
output shape: torch.Size([16, 197, 768])
| https://stackoverflow.com/questions/73121935/ |
Reshaping the dimension of a tensor in PyTorch | There is a tensor with the shape of [b, nt*nh*nw, dim]. The values of nt, nh, and nw are known. How can I reshape this tensor to the form of [b, dim, nt, nh, nw]? For example, how is it possible to reshape [2, 3x2x4, 512] to [2,512,3,2,4]?
| It all depends on your data layout in memory.
However, assuming nt, nh, and nw are in the correct ordering in your underlying data tensor then you can do so by permuting and reshaping your tensor.
First swap dimensions to place dim as the 2nd axis using torch.transpose or torch.permute. Then reshape the tensor to the desired shape with Tensor.view or torch.reshape:
>>> x.transpose(1,2).view(b, dim, nt, nh, nw)
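A concrete check with the numbers from the question:
>>> b, nt, nh, nw, dim = 2, 3, 2, 4, 512
>>> x = torch.rand(b, nt * nh * nw, dim)
>>> x.transpose(1, 2).view(b, dim, nt, nh, nw).shape
torch.Size([2, 512, 3, 2, 4])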
| https://stackoverflow.com/questions/73123850/ |
Pytorch dataloaders : Bad file descriptor and EOF for workers>0 | Description of the problem
I am encountering a strange behavior during a neural network training with Pytorch dataloaders made from a custom dataset. The dataloaders are set with workers=4, pin_memory=False.
Most of the time, the training finished with no problems.
Sometimes, the training stopped at a random moment with the following errors:
OSError: [Errno 9] Bad file descriptor
EOFError
It looks like the error occurs during socket creation to access dataloader elements.
The error disappears when I set the number of workers to 0, but I need to accelerate my training with multiprocessing.
What could be the source of the error ? Thank you !
The versions of python and libraries
Python 3.9.12, PyTorch 1.11.0+cu102
EDIT: The error is occurring only on clusters
Output of error file
Traceback (most recent call last):
File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 145, in _serve
Epoch 17: 52%|██████    | 253/486 [01:00<00:55, 4.18it/s, loss=1.73]
Traceback (most recent call last):
File "/my_directory/bench/run_experiments.py", line 251, in <module>
send(conn, destination_pid)
File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 50, in send
reduction.send_handle(conn, new_fd, pid)
File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 183, in send_handle
with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
File "/my_directory/.conda/envs/geoseg/lib/python3.9/socket.py", line 545, in fromfd
return socket(family, type, proto, nfd)
File "/my_directory/.conda/envs/geoseg/lib/python3.9/socket.py", line 232, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 9] Bad file descriptor
main(args)
File "/my_directory/bench/run_experiments.py", line 183, in main
run_experiments(args, save_path)
File "/my_directory/bench/run_experiments.py", line 70, in run_experiments
) = run_algorithm(algorithm_params[j], mp[j], ss, dataset)
File "/my_directorybench/algorithms.py", line 38, in run_algorithm
data = es(mp,search_space, dataset, **ps)
File "/my_directorybench/algorithms.py", line 151, in es
data = ss.generate_random_dataset(mp,
File "/my_directorybench/architectures.py", line 241, in generate_random_dataset
arch_dict = self.query_arch(
File "/my_directory/bench/architectures.py", line 71, in query_arch
train_losses, val_losses, model = meta_net.get_val_loss(
File "/my_directory/bench/meta_neural_net.py", line 50, in get_val_loss
return self.training(
File "/my_directorybench/meta_neural_net.py", line 155, in training
train_loss = self.train_step(model, device, train_loader, epoch)
File "/my_directory/bench/meta_neural_net.py", line 179, in train_step
for batch_idx, mini_batch in enumerate(pbar):
File "/my_directory/.conda/envs/geoseg/lib/python3.9/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
data = self._next_data()
File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1207, in _next_data
idx, data = self._get_data()
File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1173, in _get_data
success, data = self._try_get_data()
File "/my_directory/.local/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1011, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/queues.py", line 122, in get
return _ForkingPickler.loads(res)
File "/my_directory/.local/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 295, in rebuild_storage_fd
fd = df.detach()
File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/resource_sharer.py", line 58, in detach
return reduction.recv_handle(conn)
File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 189, in recv_handle
return recvfds(s, 1)[0]
File "/my_directory/.conda/envs/geoseg/lib/python3.9/multiprocessing/reduction.py", line 159, in recvfds
raise EOFError
EOFError
EDIT : The way data is accessed
from PIL import Image
import torch as t
from torch import nn
from torch.utils.data import Dataset, DataLoader
from transformers import DistilBertModel, DistilBertTokenizer
# extract of code of dataset
class Dataset():
def __init__(self,image_files,mask_files):
self.image_files = image_files
self.mask_files = mask_files
def __getitem__(self, idx):
img = Image.open(self.image_files[idx]).convert('RGB')
mask=Image.open(self.mask_files[idx]).convert('L')
return img, mask
# extract of code of trainloader
train_loader = DataLoader(
dataset=train_dataset,
batch_size=4,
num_workers=4,
pin_memory=False,
shuffle=True,
drop_last=True,
persistent_workers=False,
)
| I have finally found a solution. Adding this configuration to the dataset script works:
import torch.multiprocessing
torch.multiprocessing.set_sharing_strategy('file_system')
By default, the sharing strategy is set to 'file_descriptor'.
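You can check which strategies are available on your platform and which one is currently active:
import torch.multiprocessing
print(torch.multiprocessing.get_all_sharing_strategies())  # e.g. {'file_system', 'file_descriptor'} on Linux
print(torch.multiprocessing.get_sharing_strategy())        # 'file_descriptor' unless changed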
I have tried some solutions explained in :
this issue (increase shared memory, increase max number of opened file descriptors, torch.cuda.empty_cache() at the end of each epoch, ...)
and this other issue, that turns out to solve the problem
As suggested by @AlexMeredith, the error may be linked to the distributed filesystem (Lustre) that some clusters use. The error may also come from distributed shared memory.
| https://stackoverflow.com/questions/73125231/ |
The output of the PyTorch DL network doesn't match the last layer provided in the network | I am trying to build a PyTorch model that can predict the rank of a text, where the output is a float number between 0 and 1.
My input details are
My batch size is 32.
Max length for the tokenizer is 116
In addition to the masks and Ids generated from the tokenizer, I am adding 11 values that were generated through preprocessing to the input text.
So the entire input shape would be 32 for the batch and an array with 127 items for each sample text provided.
My layers are as follows:
a DistilBERT uncased transformer, and I am using the DistilBERT tokenizer over the text.
The following layer is a CNN that takes the output of the DistilBERT (127 channel) as input and provide 64 channels as output, with kernel=1
After this, 6 CNN layers each input is 64 and output is 64 with a kernel size of 3 and dilation increasing from 2 to 32. On top of each CNN, there is a relu and a maxpooling with 2 as kernal size.
My last CNN layer (and where the issue is happening) have 64 input channels and 32 output channels with a kernel size of 1 and a relu with AdaptiveMaxPool1d with size of 32 on top of it
Linear layer takes 32 and output 16
Linear layer takes 16 and output 1
Below is my code:
class Dataset(Dataset):
def __init__(self, df, max_len, bert_model_name, multi=1):
super().__init__()
self.df = df.reset_index(drop=True)
self.max_len = max_len
self.tokenizer = DistilBertTokenizer.from_pretrained(
bert_model_name,
do_lower_case=True,
strip_accents=True,
wordpieces_prefix=None,
use_fast=True
)
self.multiplier = multi
def __getitem__(self, index):
row = self.df.iloc[index]
inputs = self.tokenizer.encode_plus(
row.source,
None,
add_special_tokens=True,
max_length=self.max_len,
padding="max_length",
return_token_type_ids=True,
truncation=True
)
return (
t.LongTensor(t.cat([
t.LongTensor([
row.n_total_cells * self.multiplier,
row.n_code_cells * self.multiplier,
row.n_markdown_cells * self.multiplier,
row.word_counts * self.multiplier,
row.line_counts * self.multiplier,
row.empty_line_counts * self.multiplier,
row.full_lines_count * self.multiplier,
row.text_lines_count * self.multiplier,
row.tag_lines_count * self.multiplier,
row.weight * self.multiplier,
row.weight_counts * self.multiplier,
]),
t.LongTensor(inputs['input_ids']),
], 0)),
t.LongTensor(t.cat([
t.ones(11, dtype=t.long),
t.LongTensor(inputs['attention_mask']),
], 0)),
)
class BModel(nn.Module):
def __init__(self, bert_model_name):
super(BModel, self).__init__()
self.distill_bert = DistilBertModel.from_pretrained(bert_model_name)
self.hidden_size = self.distill_bert.config.hidden_size
print(self.hidden_size) # 768
self.relu = nn.ReLU()
self.dropout = nn.Dropout(0.3)
self.cnn_layers()
def forward(self, inputs):
dbert = self.cnn_forward(inputs[0], inputs[1])
return dbert
def cnn_layers(self):
self.layers = 4
kernel_size = 3
inp = 127
out = 32
grades = [2, 4, 8, 16, 32, 64, ]
self.convs = nn.ModuleList()
self.relus = nn.ModuleList()
self.maxs = nn.ModuleList()
self.norms = nn.ModuleList()
self.start_conv = nn.Conv1d(
in_channels=inp,
out_channels=64,
kernel_size=1,
bias=True
)
for i in range(self.layers):
# dilated convolutions
self.convs.append(nn.Conv1d(
in_channels=64,
out_channels=64,
kernel_size = kernel_size,
bias=False,
dilation=grades[i]
))
self.relus.append(nn.ReLU())
self.maxs.append(nn.MaxPool1d(
kernel_size=kernel_size-1,
))
self.norms.append(nn.BatchNorm1d(
num_features=64,
))
self.end_conv = nn.Conv1d(
in_channels=64,
out_channels=out,
kernel_size=1,
bias=True
)
self.max_pool = nn.AdaptiveMaxPool1d(out)
self.top1 = nn.Linear(out, 16)
self.top2 = nn.Linear(16, 1)
def cnn_forward(self, ids, masks):
x = self.distill_bert(ids, masks)[0]
x = self.relu(x)
x = self.dropout(x)
print(f"X size after BERT:", x.size())
x = self.start_conv(x)
print(f"X size after First Conv:", x.size())
for i in range(self.layers):
x = self.norms[i](self.maxs[i](self.relus[i](self.convs[i](x))))
print(f"X size after {i} CNN dilation:", x.size())
x = self.max_pool(t.abs(self.end_conv(x)))
print("X size after AdaptiveMaxPool1d:", x.size())
x = self.top1(x)
print("X size after before-last linear:", x.size())
x = self.top2(x)
print("X size after last linear:", x.size())
return x
Printing the output size after each layer would be as below
X size after First Conv: torch.Size([32, 64, 768])
X size after 0 CNN dilation: torch.Size([32, 64, 382])
X size after 1 CNN dilation: torch.Size([32, 64, 187])
X size after 2 CNN dilation: torch.Size([32, 64, 85])
X size after 3 CNN dilation: torch.Size([32, 64, 26])
X size after AdaptiveMaxPool1d: torch.Size([32, 32, 32])
X size after before-last linear: torch.Size([32, 32, 16])
X size after last linear: torch.Size([32, 32, 1])
The issue I am facing is that, after the AdaptiveMaxPool1d, the output of this layer is supposed to have 2 dimensions instead of 3: [32, 32] instead of [32, 32, 32]
The output of AdaptiveMaxPool1d fits into the linear layer, but it carries one extra dimension, causing the predicted output to differ in size from the true targets
When I check the pred size vs the true size, I get:
y_pred shape (12480,)
y_val shape (390,)
and the code fails with this error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [13], in <cell line: 21>()
17 # print(mkdn_train_loader, mkdn_val_loader)
18
19 ########################################################################################################################
File E:\KAGGLE_COMP\pt_model.py:796, in train(model, train_loader, val_loader, epochs, patience, path)
793 print('y_val shape', y_val.shape)
794 print(y_pred[:10])
--> 796 print("Validation MSE:", np.round(mean_squared_error(y_val, y_pred), 4))
797 print()
799 early_stopping(np.round(mean_squared_error(y_val, y_pred), 4), model)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\metrics\_regression.py:438, in mean_squared_error(y_true, y_pred, sample_weight, multioutput, squared)
378 def mean_squared_error(
379 y_true, y_pred, *, sample_weight=None, multioutput="uniform_average", squared=True
380 ):
381 """Mean squared error regression loss.
382
383 Read more in the :ref:`User Guide <mean_squared_error>`.
(...)
436 0.825...
437 """
--> 438 y_type, y_true, y_pred, multioutput = _check_reg_targets(
439 y_true, y_pred, multioutput
440 )
441 check_consistent_length(y_true, y_pred, sample_weight)
442 output_errors = np.average((y_true - y_pred) ** 2, axis=0, weights=sample_weight)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\metrics\_regression.py:94, in _check_reg_targets(y_true, y_pred, multioutput, dtype)
60 def _check_reg_targets(y_true, y_pred, multioutput, dtype="numeric"):
61 """Check that y_true and y_pred belong to the same regression task.
62
63 Parameters
(...)
92 the dtype argument passed to check_array.
93 """
---> 94 check_consistent_length(y_true, y_pred)
95 y_true = check_array(y_true, ensure_2d=False, dtype=dtype)
96 y_pred = check_array(y_pred, ensure_2d=False, dtype=dtype)
File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\utils\validation.py:332, in check_consistent_length(*arrays)
330 uniques = np.unique(lengths)
331 if len(uniques) > 1:
--> 332 raise ValueError(
333 "Found input variables with inconsistent numbers of samples: %r"
334 % [int(l) for l in lengths]
335 )
ValueError: Found input variables with inconsistent numbers of samples: [390, 12480]
I need to know what I must change to make this run and the size is passed with correct shape.
| From AdaptiveMaxPool1d Documentation:
If the Input is in Shape of (N, C, L_in), then your output would be in the shape of (N, C, L_out).
Since your input shape to the AdaptiveMaxPool1d is in the shape of (32, 32, 26) and you've set the output_size to 32 ( value of "out" variable ), your output shape comes out as (32, 32, 32).
I suggest to set output_size as 1 and use squeeze(2) to squish down the dimension.
Something like this:
# For initialization of maxpool layer.
nn.AdaptiveMaxPool1d(1)
# ---------
# In forward add squeeze(2) after max_pool like this:
x = self.max_pool(t.abs(self.end_conv(x))).squeeze(2)
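A quick shape check with the sizes from your network:
import torch
import torch.nn as nn

x = torch.rand(32, 32, 26)                          # (N, C, L_in) after end_conv
print(nn.AdaptiveMaxPool1d(32)(x).shape)            # torch.Size([32, 32, 32]) -- current behaviour
print(nn.AdaptiveMaxPool1d(1)(x).squeeze(2).shape)  # torch.Size([32, 32]) -- suggested fix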
| https://stackoverflow.com/questions/73126202/ |
Equivalent to tokenizer() in Transformers 2.5.0? | I am trying to convert the following code to work with Transformers 2.5.0. As written, it works in version 4.18.0, but not 2.5.0.
# Converting pretrained BERT classification model to regression model
# i.e. extracting base model and swapping out heads
from transformers import BertTokenizer, BertModel, BertConfig, BertForMaskedLM, BertForSequenceClassification, AutoConfig, AutoModelForTokenClassification
import torch
import numpy as np
old_model = BertForSequenceClassification.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)
model.bert = old_model.bert
# Ensure that model parameters are equivalent except for classifier head layer
for param_name in model.state_dict():
if 'classifier' not in param_name:
sub_param, full_param = model.state_dict()[param_name], old_model.state_dict()[param_name] # type: torch.Tensor, torch.Tensor
assert (sub_param.cpu().numpy() == full_param.cpu().numpy()).all(), param_name
tokenizer = BertTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**inputs).logits
output_value = np.array(logits)[0][0]
print(output_value)
tokenizer is not callable with transformers 2.5.0, resulting in the following:
TypeError Traceback (most recent call last)
<ipython-input-1-d83f0d613f4b> in <module>
19
20
---> 21 inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
22
23 with torch.no_grad():
TypeError: 'BertTokenizer' object is not callable
However, attempting to replace tokenizer() with tokenizer.tokenize() results in the following:
TypeError Traceback (most recent call last)
<ipython-input-2-1d431131eb87> in <module>
21
22 with torch.no_grad():
---> 23 logits = model(**inputs).logits
24
25 output_value = np.array(logits)[0][0]
TypeError: BertForSequenceClassification object argument after ** must be a mapping, not list
Any help would be greatly appreciated.
Solution
Using tokenizer.encode_plus() as suggested by @cronoik:
tokenized = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
with torch.no_grad():
logits = model(**tokenized)
output_value = np.array(logits)[0]
print(output_value)
| Sadly their documentation for the old versions is broken, but you can use encode_plus as shown in the following (the oldest available documentation of encode_plus is from 2.10.0):
import torch
from transformers import BertTokenizer
t = BertTokenizer.from_pretrained("textattack/bert-base-uncased-yelp-polarity")
tokenized = t.encode_plus("Hello, my dog is cute", return_tensors='pt')
print(tokenized)
Output:
{'input_ids': tensor([[ 101, 7592, 1010, 2026, 3899, 2003, 10140, 102]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1]])}
| https://stackoverflow.com/questions/73127139/ |
When is `stage is None` in pytorch lightning? | Some official pytorch lightning docs have code that refers to stage as Optional[str], with for example the following code
from typing import Optional
import pytorch_lightning as pl
from torch.utils.data import random_split, DataLoader
# Note - you must have torchvision installed for this example
from torchvision.datasets import MNIST
from torchvision import transforms
class MNISTDataModule(pl.LightningDataModule):
def __init__(self, data_dir: str = "./"):
super().__init__()
self.data_dir = data_dir
self.transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
def prepare_data(self):
# download
MNIST(self.data_dir, train=True, download=True)
MNIST(self.data_dir, train=False, download=True)
def setup(self, stage: Optional[str] = None):
# Assign train/val datasets for use in dataloaders
if stage == "fit" or stage is None:
mnist_full = MNIST(self.data_dir, train=True, transform=self.transform)
self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])
# Assign test dataset for use in dataloader(s)
if stage == "test" or stage is None:
self.mnist_test = MNIST(self.data_dir, train=False, transform=self.transform)
if stage == "predict" or stage is None:
self.mnist_predict = MNIST(self.data_dir, train=False, transform=self.transform)
def train_dataloader(self):
return DataLoader(self.mnist_train, batch_size=32)
def val_dataloader(self):
return DataLoader(self.mnist_val, batch_size=32)
def test_dataloader(self):
return DataLoader(self.mnist_test, batch_size=32)
def predict_dataloader(self):
return DataLoader(self.mnist_predict, batch_size=32)
When does stage take the value of None? I could find no docs describing this.
| The Trainer will never send stage=None to the setup hook, or any of the other hooks that take this argument. The type is annotated optional and the default value is None for historical reasons. The values will always be one of "fit", "validate", "test", "predict".
There is an RFC to change this to a required argument to avoid confusion. The link provides some more context on why it has been like this historically.
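In practice, then, a setup hook only needs to branch on those four strings (a sketch):
from typing import Optional
import pytorch_lightning as pl

class MyDataModule(pl.LightningDataModule):
    def setup(self, stage: Optional[str] = None):
        # the Trainer always passes one of these four strings
        if stage == "fit":
            ...  # build train + val datasets
        elif stage == "validate":
            ...  # build val dataset
        elif stage == "test":
            ...  # build test dataset
        elif stage == "predict":
            ...  # build predict dataset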
| https://stackoverflow.com/questions/73130005/ |
How can I stop pytorch model from downloading vgg .pth pretrained file every time I do inference? | I'm trying to use the u-net segmentation model at https://github.com/khanhha/crack_segmentation, and incorporate it into my pipeline. However, I noticed that whenever I use 'inference_unet.py', for the first time in the session, it downloads a .pth file for vgg.
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to C:\Users\hedey/.cache\torch\hub\checkpoints\vgg16-397923af.pth
It's not practical to download that file every time I make an inference, especially since this will be part of an application. How can I avoid having to download that file every time?
Here is the code at 'inference_unet.py':
import sys
import os
import numpy as np
from pathlib import Path
import cv2 as cv
import torch
import torch.nn.functional as F
from torch.autograd import Variable
import torchvision.transforms as transforms
from unet.unet_transfer import UNet16, input_size
import matplotlib.pyplot as plt
import argparse
from os.path import join
from PIL import Image
import gc
from utils import load_unet_vgg16, load_unet_resnet_101, load_unet_resnet_34
from tqdm import tqdm
def evaluate_img(model, img):
input_width, input_height = input_size[0], input_size[1]
img_1 = cv.resize(img, (input_width, input_height), cv.INTER_AREA)
X = train_tfms(Image.fromarray(img_1))
X = Variable(X.unsqueeze(0)).cuda() # [N, 1, H, W]
mask = model(X)
mask = F.sigmoid(mask[0, 0]).data.cpu().numpy()
mask = cv.resize(mask, (img_width, img_height), cv.INTER_AREA)
return mask
def evaluate_img_patch(model, img):
input_width, input_height = input_size[0], input_size[1]
img_height, img_width, img_channels = img.shape
if img_width < input_width or img_height < input_height:
return evaluate_img(model, img)
stride_ratio = 0.1
stride = int(input_width * stride_ratio)
normalization_map = np.zeros((img_height, img_width), dtype=np.int16)
patches = []
patch_locs = []
for y in range(0, img_height - input_height + 1, stride):
for x in range(0, img_width - input_width + 1, stride):
segment = img[y:y + input_height, x:x + input_width]
normalization_map[y:y + input_height, x:x + input_width] += 1
patches.append(segment)
patch_locs.append((x, y))
patches = np.array(patches)
if len(patch_locs) <= 0:
return None
preds = []
for i, patch in enumerate(patches):
patch_n = train_tfms(Image.fromarray(patch))
X = Variable(patch_n.unsqueeze(0)).cuda() # [N, 1, H, W]
masks_pred = model(X)
mask = F.sigmoid(masks_pred[0, 0]).data.cpu().numpy()
preds.append(mask)
probability_map = np.zeros((img_height, img_width), dtype=float)
for i, response in enumerate(preds):
coords = patch_locs[i]
probability_map[coords[1]:coords[1] + input_height, coords[0]:coords[0] + input_width] += response
return probability_map
def disable_axis():
plt.axis('off')
plt.gca().axes.get_xaxis().set_visible(False)
plt.gca().axes.get_yaxis().set_visible(False)
plt.gca().axes.get_xaxis().set_ticklabels([])
plt.gca().axes.get_yaxis().set_ticklabels([])
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('-img_dir',type=str, help='input dataset directory')
parser.add_argument('-model_path', type=str, help='trained model path')
parser.add_argument('-model_type', type=str, choices=['vgg16', 'resnet101', 'resnet34'])
parser.add_argument('-out_viz_dir', type=str, default='', required=False, help='visualization output dir')
parser.add_argument('-out_pred_dir', type=str, default='', required=False, help='prediction output dir')
parser.add_argument('-threshold', type=float, default=0.2 , help='threshold to cut off crack response')
args = parser.parse_args()
if args.out_viz_dir != '':
os.makedirs(args.out_viz_dir, exist_ok=True)
for path in Path(args.out_viz_dir).glob('*.*'):
os.remove(str(path))
if args.out_pred_dir != '':
os.makedirs(args.out_pred_dir, exist_ok=True)
for path in Path(args.out_pred_dir).glob('*.*'):
os.remove(str(path))
if args.model_type == 'vgg16':
model = load_unet_vgg16(args.model_path)
elif args.model_type == 'resnet101':
model = load_unet_resnet_101(args.model_path)
elif args.model_type == 'resnet34':
model = load_unet_resnet_34(args.model_path)
print(model)
else:
print('undefind model name pattern')
exit()
channel_means = [0.485, 0.456, 0.406]
channel_stds = [0.229, 0.224, 0.225]
paths = [path for path in Path(args.img_dir).glob('*.*')]
for path in tqdm(paths):
#print(str(path))
#train_tfms = transforms.Compose([transforms.ToTensor(), transforms.Normalize(channel_means, channel_stds)])
train_tfms = transforms.Compose([transforms.ToTensor()])
img_0 = Image.open(str(path))
img_0 = np.asarray(img_0)
if len(img_0.shape) != 3:
print(f'incorrect image shape: {path.name}{img_0.shape}')
continue
img_0 = img_0[:,:,:3]
img_height, img_width, img_channels = img_0.shape
#img_height, img_width = img_0.shape
prob_map_full = evaluate_img(model, img_0)
if args.out_pred_dir != '':
#cv.imwrite(filename=join(args.out_pred_dir, f'{path.stem}.jpg'), img=(prob_map_full * 255).astype(np.uint8))
cv.imwrite(filename=join(args.out_pred_dir, f'{path.stem}.jpg'), img=(prob_map_full).astype(np.uint8))
if args.out_viz_dir != '':
# plt.subplot(121)
# plt.imshow(img_0), plt.title(f'{img_0.shape}')
if img_0.shape[0] > 2000 or img_0.shape[1] > 2000:
img_1 = cv.resize(img_0, None, fx=0.2, fy=0.2, interpolation=cv.INTER_AREA)
else:
img_1 = img_0
# plt.subplot(122)
# plt.imshow(img_0), plt.title(f'{img_0.shape}')
# plt.show()
prob_map_patch = evaluate_img_patch(model, img_1)
#plt.title(f'name={path.stem}. \n cut-off threshold = {args.threshold}', fontsize=4)
prob_map_viz_patch = prob_map_patch.copy()
prob_map_viz_patch = prob_map_viz_patch/ prob_map_viz_patch.max()
prob_map_viz_patch[prob_map_viz_patch < args.threshold] = 0.0
fig = plt.figure()
st = fig.suptitle(f'name={path.stem} \n cut-off threshold = {args.threshold}', fontsize="x-large")
ax = fig.add_subplot(231)
ax.imshow(img_1)
ax = fig.add_subplot(232)
ax.imshow(prob_map_viz_patch)
ax = fig.add_subplot(233)
ax.imshow(img_1)
ax.imshow(prob_map_viz_patch, alpha=0.4)
prob_map_viz_full = prob_map_full.copy()
prob_map_viz_full[prob_map_viz_full < args.threshold] = 0.0
ax = fig.add_subplot(234)
ax.imshow(img_0)
ax = fig.add_subplot(235)
ax.imshow(prob_map_viz_full)
ax = fig.add_subplot(236)
ax.imshow(img_0)
ax.imshow(prob_map_viz_full, alpha=0.4)
plt.savefig(join(args.out_viz_dir, f'{path.stem}.jpg'), dpi=500)
plt.close('all')
gc.collect()
Here is the code at 'utils.py':
import json
from datetime import datetime
from pathlib import Path
import random
import numpy as np
import torch
import tqdm
from unet.unet_transfer import UNet16, UNetResNet
class AverageMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def cuda(x):
#return x.cuda(async=True) if torch.cuda.is_available() else x
return x.cuda(non_blocking=True) if torch.cuda.is_available() else x
def write_event(log, step, **data):
data['step'] = step
data['dt'] = datetime.now().isoformat()
log.write(json.dumps(data, sort_keys=True))
log.write('\n')
log.flush()
def check_crop_size(image_height, image_width):
"""Checks if image size divisible by 32.
Args:
image_height:
image_width:
Returns:
True if both height and width divisible by 32 and False otherwise.
"""
return image_height % 32 == 0 and image_width % 32 == 0
def create_model(device, type ='vgg16'):
assert type == 'vgg16' or type == 'resnet101'
if type == 'vgg16':
model = UNet16(pretrained=True)
elif type == 'resnet101':
model = UNetResNet(pretrained=True, encoder_depth=101, num_classes=1)
else:
assert False
model.eval()
return model.to(device)
def load_unet_vgg16(model_path):
model = UNet16(pretrained=True)
#model = UNet16(pretrained=False)
checkpoint = torch.load(model_path)
if 'model' in checkpoint:
model.load_state_dict(checkpoint['model'])
elif 'state_dict' in checkpoint:
model.load_state_dict(checkpoint['check_point'])
else:
raise Exception('undefind model format')
model.cuda()
model.eval()
return model
def load_unet_resnet_101(model_path):
#model = UNetResNet(pretrained=True, encoder_depth=101, num_classes=1)
model = UNetResNet(pretrained=True, encoder_depth=101, num_classes=8)
checkpoint = torch.load(model_path)
if 'model' in checkpoint:
model.load_state_dict(checkpoint['model'])
elif 'state_dict' in checkpoint:
model.load_state_dict(checkpoint['check_point'])
else:
raise Exception('undefind model format')
model.cuda()
model.eval()
return model
def load_unet_resnet_34(model_path):
model = UNetResNet(pretrained=True, encoder_depth=34, num_classes=1)
checkpoint = torch.load(model_path)
if 'model' in checkpoint:
model.load_state_dict(checkpoint['model'])
elif 'state_dict' in checkpoint:
model.load_state_dict(checkpoint['check_point'])
else:
raise Exception('undefind model format')
model.cuda()
model.eval()
return model
def train(args, model, criterion, train_loader, valid_loader, validation, init_optimizer, n_epochs=None, fold=None,
num_classes=None):
lr = args.lr
n_epochs = n_epochs or args.n_epochs
optimizer = init_optimizer(lr)
root = Path(args.model_path)
model_path = root / 'model_{fold}.pt'.format(fold=fold)
if model_path.exists():
state = torch.load(str(model_path))
epoch = state['epoch']
step = state['step']
model.load_state_dict(state['model'])
print('Restored model, epoch {}, step {:,}'.format(epoch, step))
else:
epoch = 1
step = 0
save = lambda ep: torch.save({
'model': model.state_dict(),
'epoch': ep,
'step': step,
}, str(model_path))
report_each = 10
log = root.joinpath('train_{fold}.log'.format(fold=fold)).open('at', encoding='utf8')
valid_losses = []
for epoch in range(epoch, n_epochs + 1):
model.train()
random.seed()
tq = tqdm.tqdm(total=(len(train_loader) * args.batch_size))
tq.set_description('Epoch {}, lr {}'.format(epoch, lr))
losses = []
tl = train_loader
try:
mean_loss = 0
for i, (inputs, targets) in enumerate(tl):
inputs = cuda(inputs)
with torch.no_grad():
targets = cuda(targets)
outputs = model(inputs)
#print(outputs.shape, targets.shape)
loss = criterion(outputs, targets)
optimizer.zero_grad()
batch_size = inputs.size(0)
loss.backward()
optimizer.step()
step += 1
tq.update(batch_size)
losses.append(loss.item())
mean_loss = np.mean(losses[-report_each:])
tq.set_postfix(loss='{:.5f}'.format(mean_loss))
if i and i % report_each == 0:
write_event(log, step, loss=mean_loss)
write_event(log, step, loss=mean_loss)
tq.close()
save(epoch + 1)
valid_metrics = validation(model, criterion, valid_loader, num_classes)
write_event(log, step, **valid_metrics)
valid_loss = valid_metrics['valid_loss']
valid_losses.append(valid_loss)
except KeyboardInterrupt:
tq.close()
print('Ctrl+C, saving snapshot')
save(epoch)
print('done.')
return
Here is the code at 'unet_transfer.py':
from torch import nn
from torch.nn import functional as F
import torch
from torchvision import models
import torchvision
input_size = (448, 448)
class Interpolate(nn.Module):
def __init__(self, size=None, scale_factor=None, mode='nearest', align_corners=False):
super(Interpolate, self).__init__()
self.interp = nn.functional.interpolate
self.size = size
self.mode = mode
self.scale_factor = scale_factor
self.align_corners = align_corners
def forward(self, x):
x = self.interp(x, size=self.size, scale_factor=self.scale_factor,
mode=self.mode, align_corners=self.align_corners)
return x
def conv3x3(in_, out):
return nn.Conv2d(in_, out, 3, padding=1)
class ConvRelu(nn.Module):
def __init__(self, in_, out):
super().__init__()
self.conv = conv3x3(in_, out)
self.activation = nn.ReLU(inplace=True)
def forward(self, x):
x = self.conv(x)
x = self.activation(x)
return x
class DecoderBlockV2(nn.Module):
def __init__(self, in_channels, middle_channels, out_channels, is_deconv=True):
super(DecoderBlockV2, self).__init__()
self.in_channels = in_channels
if is_deconv:
"""
Paramaters for Deconvolution were chosen to avoid artifacts, following
link https://distill.pub/2016/deconv-checkerboard/
"""
#self.block = nn.ModuleList(
self.block = nn.Sequential(
ConvRelu(in_channels, middle_channels),
nn.ConvTranspose2d(middle_channels, out_channels, kernel_size=4, stride=2,
padding=1),
nn.ReLU(inplace=True)
)
else:
self.block = nn.Sequential(
Interpolate(scale_factor=2, mode='bilinear'),
ConvRelu(in_channels, middle_channels),
ConvRelu(middle_channels, out_channels),
)
def forward(self, x):
return self.block(x)
class UNet16(nn.Module):
def __init__(self, num_classes=1, num_filters=32, pretrained=False, is_deconv=False):
#def __init__(self, num_classes=8, num_filters=32, pretrained=False, is_deconv=False):
"""
:param num_classes:
:param num_filters:
:param pretrained:
False - no pre-trained network used
True - encoder pre-trained with VGG16
:is_deconv:
False: bilinear interpolation is used in decoder
True: deconvolution is used in decoder
"""
super().__init__()
self.num_classes = num_classes
self.pool = nn.MaxPool2d(2, 2)
#print(torchvision.models.vgg16(pretrained=pretrained))
self.encoder = torchvision.models.vgg16(pretrained=pretrained).features
#self.encoder = torchvision.models.vgg16(pretrained=False).features
self.relu = nn.ReLU(inplace=True)
self.conv1 = nn.Sequential(self.encoder[0],
self.relu,
self.encoder[2],
self.relu)
self.conv2 = nn.Sequential(self.encoder[5],
self.relu,
self.encoder[7],
self.relu)
self.conv3 = nn.Sequential(self.encoder[10],
self.relu,
self.encoder[12],
self.relu,
self.encoder[14],
self.relu)
self.conv4 = nn.Sequential(self.encoder[17],
self.relu,
self.encoder[19],
self.relu,
self.encoder[21],
self.relu)
self.conv5 = nn.Sequential(self.encoder[24],
self.relu,
self.encoder[26],
self.relu,
self.encoder[28],
self.relu)
self.center = DecoderBlockV2(512, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec5 = DecoderBlockV2(512 + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec4 = DecoderBlockV2(512 + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec3 = DecoderBlockV2(256 + num_filters * 8, num_filters * 4 * 2, num_filters * 2, is_deconv)
self.dec2 = DecoderBlockV2(128 + num_filters * 2, num_filters * 2 * 2, num_filters, is_deconv)
self.dec1 = ConvRelu(64 + num_filters, num_filters)
self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1)
def forward(self, x):
conv1 = self.conv1(x)
conv2 = self.conv2(self.pool(conv1))
conv3 = self.conv3(self.pool(conv2))
conv4 = self.conv4(self.pool(conv3))
conv5 = self.conv5(self.pool(conv4))
center = self.center(self.pool(conv5))
dec5 = self.dec5(torch.cat([center, conv5], 1))
dec4 = self.dec4(torch.cat([dec5, conv4], 1))
dec3 = self.dec3(torch.cat([dec4, conv3], 1))
dec2 = self.dec2(torch.cat([dec3, conv2], 1))
dec1 = self.dec1(torch.cat([dec2, conv1], 1))
if self.num_classes > 1:
x_out = F.log_softmax(self.final(dec1), dim=1)
else:
x_out = self.final(dec1)
#x_out = F.sigmoid(x_out)
return x_out
class UNetResNet(nn.Module):
def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2,
pretrained=False, is_deconv=False):
super().__init__()
self.num_classes = num_classes
self.dropout_2d = dropout_2d
if encoder_depth == 34:
self.encoder = torchvision.models.resnet34(pretrained=pretrained)
bottom_channel_nr = 512
elif encoder_depth == 101:
self.encoder = torchvision.models.resnet101(pretrained=pretrained)
bottom_channel_nr = 2048
elif encoder_depth == 152:
self.encoder = torchvision.models.resnet152(pretrained=pretrained)
bottom_channel_nr = 2048
else:
raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented')
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU(inplace=True)
#self.conv1 = nn.Sequential(self.encoder.conv1,
# self.encoder.bn1,
# self.encoder.relu,
# self.pool)
self.conv1 = nn.Sequential(nn.Conv2d(1,64,kernel_size=(7,7),stride=(2,2),padding=(3,3),bias=False), # 1 Here is for grayscale images, replace by 3 if you need RGB/BGR
nn.BatchNorm2d(64),
nn.ReLU(),
self.pool
)
self.conv2 = self.encoder.layer1
self.conv3 = self.encoder.layer2
self.conv4 = self.encoder.layer3
self.conv5 = self.encoder.layer4
self.center = DecoderBlockV2(bottom_channel_nr, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec5 = DecoderBlockV2(bottom_channel_nr + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec4 = DecoderBlockV2(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8,
is_deconv)
self.dec3 = DecoderBlockV2(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2,
is_deconv)
self.dec2 = DecoderBlockV2(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2,
is_deconv)
self.dec1 = DecoderBlockV2(num_filters * 2 * 2, num_filters * 2 * 2, num_filters, is_deconv)
self.dec0 = ConvRelu(num_filters, num_filters)
self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1)
#self.final = nn.Conv2d(num_filters, 1, kernel_size=1)
def forward(self, x):
conv1 = self.conv1(x)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
conv4 = self.conv4(conv3)
conv5 = self.conv5(conv4)
pool = self.pool(conv5)
center = self.center(pool)
dec5 = self.dec5(torch.cat([center, conv5], 1))
dec4 = self.dec4(torch.cat([dec5, conv4], 1))
dec3 = self.dec3(torch.cat([dec4, conv3], 1))
dec2 = self.dec2(torch.cat([dec3, conv2], 1))
dec1 = self.dec1(dec2)
dec0 = self.dec0(dec1)
return self.final(F.dropout2d(dec0, p=self.dropout_2d))
'''
class UNetResNet(nn.Module):
"""PyTorch U-Net model using ResNet(34, 101 or 152) encoder.
UNet: https://arxiv.org/abs/1505.04597
ResNet: https://arxiv.org/abs/1512.03385
Proposed by Alexander Buslaev: https://www.linkedin.com/in/al-buslaev/
Args:
encoder_depth (int): Depth of a ResNet encoder (34, 101 or 152).
num_classes (int): Number of output classes.
num_filters (int, optional): Number of filters in the last layer of decoder. Defaults to 32.
dropout_2d (float, optional): Probability factor of dropout layer before output layer. Defaults to 0.2.
pretrained (bool, optional):
False - no pre-trained weights are being used.
True - ResNet encoder is pre-trained on ImageNet.
Defaults to False.
is_deconv (bool, optional):
False: bilinear interpolation is used in decoder.
True: deconvolution is used in decoder.
Defaults to False.
"""
def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2,
pretrained=False, is_deconv=False):
super().__init__()
self.num_classes = num_classes
self.dropout_2d = dropout_2d
if encoder_depth == 34:
self.encoder = torchvision.models.resnet34(pretrained=pretrained)
bottom_channel_nr = 512
elif encoder_depth == 101:
self.encoder = torchvision.models.resnet101(pretrained=pretrained)
bottom_channel_nr = 2048
elif encoder_depth == 152:
self.encoder = torchvision.models.resnet152(pretrained=pretrained)
bottom_channel_nr = 2048
else:
raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented')
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU(inplace=True)
self.conv1 = nn.Sequential(self.encoder.conv1,
self.encoder.bn1,
self.encoder.relu,
self.pool)
self.conv2 = self.encoder.layer1
self.conv3 = self.encoder.layer2
self.conv4 = self.encoder.layer3
self.conv5 = self.encoder.layer4
self.center = DecoderBlockV2(bottom_channel_nr, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec5 = DecoderBlockV2(bottom_channel_nr + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec4 = DecoderBlockV2(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8,
is_deconv)
self.dec3 = DecoderBlockV2(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2,
is_deconv)
self.dec2 = DecoderBlockV2(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2,
is_deconv)
self.dec1 = DecoderBlockV2(num_filters * 2 * 2, num_filters * 2 * 2, num_filters, is_deconv)
self.dec0 = ConvRelu(num_filters, num_filters)
self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1)
def forward(self, x):
self.conv1 = torch.nn.Conv2d(1, 64, (7, 7), (2, 2), (3, 3), bias=False)
conv1 = self.conv1(x)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
conv4 = self.conv4(conv3)
conv5 = self.conv5(conv4)
pool = self.pool(conv5)
center = self.center(pool)
dec5 = self.dec5(torch.cat([center, conv5], 1))
dec4 = self.dec4(torch.cat([dec5, conv4], 1))
dec3 = self.dec3(torch.cat([dec4, conv3], 1))
dec2 = self.dec2(torch.cat([dec3, conv2], 1))
dec1 = self.dec1(dec2)
dec0 = self.dec0(dec1)
return self.final(F.dropout2d(dec0, p=self.dropout_2d))
'''
| In 'utils.py', the 'load_unet_vgg16' function receives a path to a checkpoint to load the model from, so the weight-initialization step (which happens before loading from the checkpoint) is unnecessary.
The function 'load_unet_vgg16' could be as follows:
def load_unet_vgg16(model_path, pretrained=True):
model = UNet16(pretrained=pretrained)
checkpoint = torch.load(model_path)
if 'model' in checkpoint:
model.load_state_dict(checkpoint['model'])
elif 'state_dict' in checkpoint:
model.load_state_dict(checkpoint['check_point'])
else:
raise Exception('undefind model format')
model.cuda()
model.eval()
return model
Then you pass a path to a checkpoint and 'pretrained=False' since you are making an inference.
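So the call at inference time becomes (the checkpoint path is whatever you already pass as model_path):
# no ImageNet VGG16 download: all weights come from the checkpoint
model = load_unet_vgg16(args.model_path, pretrained=False)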
| https://stackoverflow.com/questions/73130095/ |
Masking dimensions outside of a window | I have a Pytorch tensor t of shape (n, x, y), and I'd like to apply a mask such that, for all y > x + k (with k being a constant), t[n, x, y] = -inf.
I believe I can do this with advanced indexing, but can't figure out how.
If not, a simple way to do this is to construct a mask like that with loops (slow, but do it one time and cache it), and then t += mask, since -inf + z == -inf for all z.
Is there a better way to do this?
| Notice that the condition y ≥ x corresponds to the upper triangle, while y > x is the strict upper triangle. Therefore y > x + k is the upper triangle part with a shift equal to 1 + k.
You can construct a triangle mask using torch.triu which actually allows for a shift argument named diagonal, referring to the position of the diagonal. Assign the desired value, here -torch.inf, using this mask will allow you to obtain the desired result.
Overall it comes down to:
>>> m = torch.ones_like(t, dtype=bool).triu(1+k)
>>> t[m] = -torch.inf
Alternatively, a one-liner is possible using torch.where:
>>> torch.where(torch.ones_like(t).bool().triu(1+k), -torch.inf, t)
Since the mask is equal for all batch elements, you can get away with creating a single 2D mask and masking t on its 2nd and 3rd axes:
>>> m = torch.ones_like(t[0], dtype=bool).triu(1+k)
>>> t[:,m] = -torch.inf
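For intuition, here is what the 2D mask looks like on a small example with k=1, i.e. masking the entries where y > x + 1:
>>> t = torch.zeros(2, 4, 4)
>>> k = 1
>>> torch.ones_like(t[0], dtype=bool).triu(1+k)
tensor([[False, False,  True,  True],
        [False, False, False,  True],
        [False, False, False, False],
        [False, False, False, False]])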
| https://stackoverflow.com/questions/73130577/ |
PyTorch Lightning - Display metrics after validation epoch | I've implemented validation_epoch_end to produce and log metrics, and when I run trainer.validate, the metrics appear in my notebook.
However, when I run trainer.fit, only the training metrics appear; not the validation ones.
The validation step is still being run (because the validation code calls a print statement, which does appear), but the validation metrics don't appear, even though they're logged. Or, if they do appear, the next epoch immediately erases them, so that I can't see them.
(Likewise, tensorboard sees the validation metrics)
How can I see the validation epoch end metrics in a notebook, as each epoch occurs?
| You could do the following. Let's say you have the following LightningModule:
class MNISTModel(LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
# prog_bar=True will display the value on the progress bar statically for the last complete train epoch
self.log("train_loss", loss, on_step=False, on_epoch=True, prog_bar=True)
return loss
def validation_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
# prog_bar=True will display the value on the progress bar statically for the last complete validation epoch
self.log("val_loss", loss, on_step=False, on_epoch=True, prog_bar=True)
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
The trick is to use prog_bar=True in combination with on_step and on_epoch depending on when you want the update on the progress bar. So, in this case, when training:
# Train the model
trainer.fit(mnist_model, MNIST_dm)
you will see:
Epoch 4: 100% -------------------------- 939/939 [00:09<00:00, 94.51it/s, loss=0.636, v_num=4, val_loss=0.743, train_loss=0.726]
Here, loss updates every batch, as it is the step loss. However, val_loss and train_loss will be static values that only change after each validation or training epoch, respectively.
| https://stackoverflow.com/questions/73131597/ |
How to compute partial derivatives of a component of a vector-valued function? | Let's say I have a function Psi with a 4-dimensional vector output that takes a 3-dimensional vector u as input. I would like to compute the gradient of the first three components of Psi w.r.t. the respective three components of u:
import torch
u = torch.tensor([1.,2.,3.], requires_grad=True)
psi = torch.zeros(4)
psi[0] = 2*u[0]
psi[1] = 2*u[1]
psi[2] = 2*u[2]
psi[3] = torch.dot(u,u)
grad_Psi_0 = torch.autograd.grad(psi[0], u[0])
grad_Psi_1 = torch.autograd.grad(psi[1], u[1])
grad_Psi_2 = torch.autograd.grad(psi[2], u[2])
And I get the error that u[0],u[1], and u[2] are not used in the graph:
---> 19 grad_Psi_0 = torch.autograd.grad(psi[0], u[0])
20 grad_Psi_1 = torch.autograd.grad(psi[1], u[1])
21 grad_Psi_2 = torch.autograd.grad(psi[2], u[2])
File ~/.local/lib/python3.10/site-packages/torch/autograd/__init__.py:275, in grad(outputs, inputs, grad_outputs, retain_graph, create_graph, only_inputs, allow_unused, is_grads_batched)
273 return _vmap_internals._vmap(vjp, 0, 0, allow_none_pass_through=True)(grad_outputs)
274 else:
--> 275 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
276 outputs, grad_outputs_, retain_graph, create_graph, inputs,
277 allow_unused, accumulate_grad=False)
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
This is strange, as components of u are used to compute components of psi, so it should be possible to compute derivatives of the components of psi w.r.t. components of u. How do I fix this?
Answer: based on the answer from @Ivan, for higher-order derivatives, component-wise calls to grad require create_graph=True; otherwise the same error happens as described above.
import torch
# u = grad(Phi) = [2*u0, 2*u1, 2*u2]
# Phi = u0**2 + u1**2 + u2**2 = dot(u,u)
u = torch.tensor([1.,2.,3.], requires_grad=True)
psi = torch.zeros(4)
psi[0] = 2*u[0]
psi[1] = 2*u[1]
psi[2] = 2*u[2]
psi[3] = torch.dot(u,u)
print("u = ",u)
print("psi = ",psi)
grad_v_x = torch.autograd.grad(psi[0], u, retain_graph=True)[0]
print(grad_v_x)
grad_v_y = torch.autograd.grad(psi[1], u, retain_graph=True)[0]
print(grad_v_y)
grad_v_z = torch.autograd.grad(psi[2], u, retain_graph=True)[0]
print(grad_v_z)
div_v = grad_v_x[0] + grad_v_y[1] + grad_v_z[2]
# Divergence of the vector phi[0:3]=2u0 + 2u1 + 2u2 w.r.t [u0,u1,u2] = 2+2+2=6
print (div_v)
# laplace(psi[3]) = \partial_u0^2 psi[3] + \partial_u1^2 psi[3] + \partial_u2^2 psi[3]
# = \partial_u0 2x + \partial_u1 2u1 + \partial_u2 2u2 = 2 + 2 + 2 = 6
d_phi_du = torch.autograd.grad(psi[3], u, create_graph=True, retain_graph=True)[0]
print(d_phi_du)
dd_phi_d2u0 = torch.autograd.grad(d_phi_du[0], u, retain_graph=True)[0]
dd_phi_d2u1 = torch.autograd.grad(d_phi_du[1], u, retain_graph=True)[0]
dd_phi_d2u2 = torch.autograd.grad(d_phi_du[2], u, retain_graph=True)[0]
laplace_phi = torch.dot(dd_phi_d2u0 + dd_phi_d2u1 + dd_phi_d2u2, torch.ones(3))
print(laplace_phi)
| The reason is that indexing u[0] actually creates a copy, so the one used on the following line:
psi[0] = 2*u[0]
is different from the one used here:
grad_Psi_0 = torch.autograd.grad(psi[0], u[0])
which means they are not linked in the computation graph.
A possible solution is to assign u[0] to a separate variable such that it can be used on both calls:
>>> u0 = u[0]
>>> psi[0] = 2*u0
>>> torch.autograd.grad(psi[0], u0)
(tensor(2.),)
Alternatively, you can call autograd.grad directly on u:
>>> psi[0] = 2*u[0]
>>> torch.autograd.grad(psi[0], u)
(tensor([2., 0., 0.]),)
| https://stackoverflow.com/questions/73133352/ |
How to fix the input dimension from convolution flatten to feed forward layer? | I am using the nni framework in Python to do Neural Architecture Search. In it I have defined the model as:
from nni.nas.pytorch import mutables
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = mutables.LayerChoice([
nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),
nn.Conv2d(3, 32, kernel_size=5, stride=1, padding=1)
]) # try 3x3 kernel and 5x5 kernel
self.conv2 = nn.Conv2d(32, 64, 3, 1)
self.dropout1 = nn.Dropout2d(0.25)
self.dropout2 = nn.Dropout2d(0.5)
self.fc1 = nn.Linear(14400, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x) #Here is error coming
x = F.relu(x)
x = self.dropout2(x)
x = self.fc2(x)
output = F.log_softmax(x, dim=1)
return output
Apart from building the model, the above code also gives the search algorithm below a choice between two options for the first convolution layer: either a layer with a 3x3 kernel or one with a 5x5 kernel.
Also, I am new to PyTorch, so let me know if you can already spot a mistake in the above.
Moving on, it is coupled by below code:
dataset_train = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
dataset_valid = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), 0.05, momentum=0.9, weight_decay=1.0E-4)
# use NAS here
def top1_accuracy(output, target):
# this is the function that computes the reward, as required by ENAS algorithm
batch_size = target.size(0)
_, predicted = torch.max(output.data, 1)
return (predicted == target).sum().item() / batch_size
def metrics_fn(output, target):
# metrics function receives output and target and computes a dict of metrics
return {"acc1": top1_accuracy(output, target)}
from nni.algorithms.nas.pytorch import enas
trainer = enas.EnasTrainer(model,
loss=criterion,
metrics=metrics_fn,
reward_function=top1_accuracy,
optimizer=optimizer,
batch_size=128,
num_epochs=10, # 10 epochs
dataset_train=dataset_train,
dataset_valid=dataset_valid,
log_frequency=10) # print log every 10 steps
trainer.train() # training
trainer.export(file="model_dir/final_architecture.json") # export the final architecture to file
The above downloads the CIFAR-10 dataset, uses the model generated above to train on it, and finds which model performs best (based on the two layer choices; you can have more choices as well). But it raises an error:
22 x = self.dropout1(x)
23 x = torch.flatten(x, 1)
---> 24 x = self.fc1(x)
25 x = F.relu(x)
26 x = self.dropout2(x)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)
112
113 def forward(self, input: Tensor) -> Tensor:
--> 114 return F.linear(input, self.weight, self.bias)
115
116 def extra_repr(self) -> str:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x12544 and 14400x128)
I know this happens because the flatten layer produces a dimension that is not what the first fully connected layer expects. When I change it to what the error says, I get the error below:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (128x14400 and 12544x128)
I believe it happens because of the choice in the first convolution layer. My question is: how do I fix this? (If the nni part is hard to follow: in Keras there is the option of specifying a fully connected layer just by its number of hidden units, without mentioning the input dimension.) But I suppose PyTorch requires the input dimension to be given correctly. Is there a way, after the flatten, to ask for a hidden fully connected layer with just the number of units and not the input shape, which I believe is what is causing the problems?
| For a conv with kernel_size=5 you need padding=2, not 1.
Fix:
self.conv1 = mutables.LayerChoice([
nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),
nn.Conv2d(3, 32, kernel_size=5, stride=1, padding=1)
])
to
self.conv1 = mutables.LayerChoice([
nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1),
nn.Conv2d(3, 32, kernel_size=5, stride=1, padding=2) # match padding size to kernel size
])
Update:
Recent versions of PyTorch allow you to specify padding='same' and avoid the need to come up with the correct value for padding.
However, I strongly urge you to use the formula for computing the output shape of a convolution layer (found here) and manually compute the correct value for padding. This is a good sanity check to ensure you understand what you are doing.
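As a quick sanity check with that formula (plain arithmetic, not part of the nni API), for the 32x32 CIFAR-10 inputs:
# out = (in + 2*padding - kernel_size) // stride + 1
# kernel_size=3, padding=1: (32 + 2 - 3) // 1 + 1 = 32  -> spatial size preserved
# kernel_size=5, padding=1: (32 + 2 - 5) // 1 + 1 = 30  -> the mismatch that broke fc1
# kernel_size=5, padding=2: (32 + 4 - 5) // 1 + 1 = 32  -> spatial size preserved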
| https://stackoverflow.com/questions/73136384/ |
How to train a network with two datasets and two output heads? | I am trying to train on two datasets at the same time to get good results on both datasets.
data_loader_iterator = iter(data_loader_second)
for batch_idx, (image1, label1) in enumerate(data_loader):
image1 = image1.to(args.local_rank)
label1 = label1.to(args.local_rank)
label1 = label1.squeeze()
try:
image2, label2 = next(data_loader_iterator)
except StopIteration:
data_loader_iterator = iter(data_loader_second)
image2, label2 = next(data_loader_iterator)
image2 = image2.to(args.local_rank)
label2 = label2.to(args.local_rank)
label2 = label2.squeeze()
embedding1 = backbone.forward(image1)
embedding2 = backbone.forward(image2)
output1 = head1.forward(embedding1, label1)
output2 = head2.forward(embedding2, label2)
loss1 = criterion(output1, label1)
loss2 = criterion(output2, label2)
loss = loss1 + loss2
optimizer.zero_grad()
loss.backward()
optimizer.step()
...
head1 = HeadFactory(args.head_type, args.head_conf_file, 751577).get_head()
head2 = HeadFactory(args.head_type, args.head_conf_file, 253788).get_head()
...
optimizer = torch.optim.AdamW(params=[{"params": backbone.parameters()},{"params": head1.parameters()},{"params": head2.parameters()}], lr = args.lr, weight_decay=0.05)
...
criterion = torch.nn.CrossEntropyLoss().to(args.local_rank)
This doesn't work correctly (I get WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2919 closing signal SIGTERM) and I wonder how to declare the optimizer properly.
| It's unclear from your error what exactly went wrong and where.
Let's try a few things:
How do you merge two datasets with different labels into the same DataLoader? How can you ensure that the first label belongs to head1 and the second to head2?
I would recommend two Datasets with two DataLoaders to be iterated using a zip function:
for batch_idx, ((image1, label1), (image2, label2)) in enumerate(zip(data_loader1, data_loader2)):
# ...
I wonder if the way you pass all the parameters to the optimizer is causing trouble.
How about wrapping everything into a single container module (e.g. nn.ModuleList)?
container = nn.ModuleList([backbone, head1, head2])
optimizer = torch.optim.AdamW(container.parameters(), lr=...)
If you need to wrap the model for parallelism, you can wrap the container.
Additionally, are you training in a multi-GPU setting? In that case, you need to add a few pieces to the code:
Wrap both the backbone and the heads with a distributed module.
Your data loaders need to be aware of parallelism. This is usually done via DistributedSampler, as sketched below.
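A minimal sketch of the data-loading side for the multi-GPU case (the dataset names and batch size are assumptions, and it presumes the default process group has already been initialized):
from torch.utils.data import DataLoader, DistributedSampler

# each process sees a distinct shard of each dataset
sampler1 = DistributedSampler(dataset1)
sampler2 = DistributedSampler(dataset2)
data_loader1 = DataLoader(dataset1, batch_size=64, sampler=sampler1)
data_loader2 = DataLoader(dataset2, batch_size=64, sampler=sampler2)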
| https://stackoverflow.com/questions/73146927/ |
How to use `self` as a function, like `self(**data)` in the example code | Here is part of the code from mmcv (mmcv/runner/base_module.py).
I don't understand how self(**data) works.
class BaseDetector(BaseModule, metaclass=ABCMeta):
"""Base class for detectors."""
# ...
def train_step(self, data, optimizer):
losses = self(**data)
loss, log_vars = self._parse_losses(losses)
outputs = dict(
loss=loss, log_vars=log_vars, num_samples=len(data['img_metas']))
return outputs
class BaseModule(nn.Module, metaclass=ABCMeta):
def __init__(self, init_cfg: Optional[dict] = None):
# ...
I tried to find a function like __call__ in the parent class, but it didn't seem like the right direction to go.
Could anyone explain why self can be used as a function?
| This is nothing specific to PyTorch but rather to how Python works.
"Calling" an object is equivalent to calling a special Python function named __call__. Essentially, this is the function which implements the call behaviour. In your case self(**data) will call BaseDetector's __call__ function which has most probably been implemented by one of the parent classes.
Here is a minimal example to understand this behaviour:
class A():
def __call__(self):
return 'hello'
class B(A):
def foo(self):
return self() + ' world'
Initialize a new object from B:
b = B()
Then let us call the foo method on b:
>>> b.foo()
'hello world'
This function has been calling b() i.e. b.__call__():
>>> b() # same as b.__call__()
'hello'
As an additional note, you can explicitly call the implementation of the parent class. This is useful when you need to overload the original implementation in your child's class.
It is done by calling super:
class A():
def __call__(self):
return 'hello'
class B(A):
def __call__(self):
# calling self() here would create an infinite recursion loop
return super().__call__() + ' world'
Then you can call an instance of B the same way:
>>> b = B()
>>> b()
'hello world'
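Tying this back to the mmcv snippet: nn.Module defines __call__ to dispatch to forward (plus hooks), which is why self(**data) ends up invoking the detector's forward(**data). A minimal sketch with a made-up module:
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    def forward(self, img, label):
        return {'loss': (img - label).abs().mean()}

    def train_step(self, data):
        return self(**data)  # nn.Module.__call__ -> self.forward(**data)

out = TinyDetector().train_step({'img': torch.rand(2, 3), 'label': torch.rand(2, 3)})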
| https://stackoverflow.com/questions/73148885/ |
Plot Pytorch vectors with TSNE | I am using the ESM-1b model, training it on some protein sequences. I already have the vectors and now I want to plot them using TSNE. However, when I try to pass the vectors to the TSNE model I get:
'list' object has no attribute 'shape'
How should I plot the PyTorch vectors (they are PyTorch tensors, actually)?
The code I have so far:
sequence_representations = []
for i, (_, seq) in enumerate(new_list):
sequence_representations.append(token_representations[i, 1 : len(seq) + 1].mean(0))
This is an example of the Pytorch tensors I have (sequence_representations):
[tensor([-0.0054, 0.1090, -0.0046, ..., 0.0465, 0.0426, -0.0675]),
tensor([-0.0025, 0.0228, -0.0521, ..., -0.0611, 0.1010, -0.0103]),
tensor([ 0.1168, -0.0189, -0.0121, ..., -0.0388, 0.0586, -0.0285]),......
TSNE:
X_embedded = TSNE(n_components=2, learning_rate='auto', init='random').fit_transform(sequence_representations) #Where I get the error
| Assuming you are using scikit-learn's TSNE, you'll need sequence_representations to be
ndarray of shape (n_samples, n_features)
Right now you have a list of PyTorch tensors.
To convert sequence_representations to a numpy ndarray you'll need:
seq_np = torch.stack(sequence_representations) # from list of 1d tensors to a 2d tensor
seq_np = seq_np.numpy() # convert to numpy
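Putting it together with the TSNE call from your question plus a scatter plot (a sketch; the matplotlib part is an assumption about how you want to visualize the result):
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

X_embedded = TSNE(n_components=2, learning_rate='auto',
                  init='random').fit_transform(seq_np)
plt.scatter(X_embedded[:, 0], X_embedded[:, 1])
plt.show()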
| https://stackoverflow.com/questions/73151473/ |
Cannot load pretrained model to continue training | I trained my own model and decided to continue training it.
When I use the code below, my model shows a high BCELoss, as if it were an untrained model.
Where is the problem?
Thank you
model_1 = SimpleCnn(n_classes) # model class
model_1.load_state_dict(torch.load('./model.pth', map_location='cuda:0'))
model_1.to(DEVICE) # torch cuda device
history = train(train_dataset, val_dataset, model=model_1, epochs=8, batch_size=16) # train function
torch.save(model_1.state_dict(), 'model_1.pth')
| In order to continue training, you need to save not only the state_dict of the model, but the optimizer's as well.
That is, during training, you need to save not only the trained weights of the model, but some other parameters. For example:
def train_function(...):
for e in range(num_epochs):
...
# when done training
torch.save({'model': model.state_dict(),
'opt': optimizer.state_dict(),
'lr': lr_sched.state_dict(), # you also need to save the state of the learning rate scheduler
# ... there might be other things that define the "state" of your training
}, PATH)
Then, if you resume training:
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['opt'])
lr_sched.load_state_dict(checkpoint['lr'])
... # might need to restore other things
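One more detail worth checking when resuming: make sure the model is put back into training mode before continuing, since evaluation mode changes dropout and batch-norm behaviour:
model.train()  # re-enable dropout / batch-norm updates before resuming training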
| https://stackoverflow.com/questions/73151895/ |
No `predict_dataloader()` method defined to run `Trainer.predict` | I am trying to get predictions from my model based on the test set dataloader (I will want to save both x and y^hat, which I need for testing later on).
I tried:
my_results = trainer.predict(model = model, datamodule=dm)
With the following items present in my code:
class TimeseriesDataset(Dataset):
'''
Custom Dataset subclass.
Serves as input to DataLoader to transform X
into sequence data using rolling window.
DataLoader using this dataset will output batches
of `(batch_size, seq_len, n_features)` shape.
Suitable as an input to RNNs.
'''
def __init__(self, X: np.ndarray, y: np.ndarray, seq_len: int = 1):
self.X = torch.tensor(X).float()
self.y = torch.tensor(y).float()
self.seq_len = seq_len
def __len__(self):
return self.X.__len__() - (self.seq_len-1)
def __getitem__(self, index):
return (self.X[index:index+self.seq_len], self.y[index+self.seq_len-1])
class LSTMRegressor(pl.LightningModule):
'''
Standard PyTorch Lightning module:
https://pytorch-lightning.readthedocs.io/en/latest/lightning_module.html
'''
def __init__(self,
n_features,
hidden_size,
seq_len,
batch_size,
num_layers,
dropout,
learning_rate,
criterion):
super(LSTMRegressor, self).__init__()
self.n_features = n_features
self.hidden_size = hidden_size
self.seq_len = seq_len
self.batch_size = batch_size
self.num_layers = num_layers
self.dropout = dropout
self.criterion = criterion
self.learning_rate = learning_rate
self.lstm = nn.LSTM(input_size=n_features,
hidden_size=hidden_size,
num_layers=num_layers,
dropout=dropout,
batch_first=True)
self.linear = nn.Linear(hidden_size, 2)
def forward(self, x):
# lstm_out = (batch_size, seq_len, hidden_size)
lstm_out, _ = self.lstm(x)
y_pred = self.linear(lstm_out[:,-1])
return y_pred
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=self.learning_rate)
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = self.criterion(y_hat, y)
# result = pl.TrainResult(loss)
self.log('train_loss', loss)
return loss
def predict_step(self, batch, batch_idx):
with torch.no_grad():
x, y = batch
y_hat = self(x)
return x, y_hat
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = self.criterion(y_hat, y)
# result = pl.EvalResult(checkpoint_on=loss)
self.log('val_loss', loss)
enable_checkpointing = True, #ModelCheckpoint(monitor='val_loss')
# checkpoint_callback = ModelCheckpoint(
# monitor='val_loss',
# dirpath='./lstm',
# filename='lstm{epoch:02d}-val_loss{val/loss:.2f}',
# auto_insert_metric_name=False
# )
return loss
def test_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = self.criterion(y_hat, y)
# result = pl.EvalResult()
self.log('test_loss', loss)
enable_checkpointing = True, #ModelCheckpoint(monitor='test_loss') #TODO check if loss is the thing to return in this function
return loss
and:
class CryptoDataModule(pl.LightningDataModule):
'''
PyTorch Lighting DataModule subclass:
https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
Serves the purpose of aggregating all data loading
and processing work in one place.
'''
def __init__(self, seq_len = 1, batch_size = 128, num_workers=0):
super().__init__()
self.seq_len = seq_len
self.batch_size = batch_size
self.num_workers = num_workers
self.X_train = None
self.y_train = None
self.X_val = None
self.y_val = None
self.X_test = None
self.X_test = None
self.columns = None
self.preprocessing = None
def setup(self, stage=None):
'''
Data is resampled to hourly intervals.
Both 'np.nan' and '?' are converted to 'np.nan'
'Date' and 'Time' columns are merged into 'dt' index
'''
if stage == 'fit' and self.X_train is not None:
return
if stage == 'test' or stage == 'predict' and self.X_test is not None:
return
if stage is None and self.X_train is not None and self.X_test is not None:
return
path = './eth_data_1d.csv'
df = pd.read_csv(
path,
sep=',',
infer_datetime_format=True,
low_memory=False,
na_values=['nan','?'],
index_col='Time'
)
y = pd.concat([df['Top'], df['Btm']], axis=1, keys=['Top', 'Btm'])
X = df.dropna().copy()
self.columns = X.columns
X_cv, X_test, y_cv, y_test = train_test_split(
X, y, test_size=0.2, shuffle=False
)
X_train, X_val, y_train, y_val = train_test_split(
X_cv, y_cv, test_size=0.25, shuffle=False
)
preprocessing = StandardScaler()
preprocessing.fit(X_train)
self.X_train = preprocessing.transform(X_train)
self.y_train = y_train.values.reshape((-1, 2))
self.X_val = preprocessing.transform(X_val)
self.y_val = y_val.values.reshape((-1, 2))
self.X_test = preprocessing.transform(X_test)
self.y_test = y_test.values.reshape((-1, 2))
def train_dataloader(self):
train_dataset = TimeseriesDataset(self.X_train,
self.y_train,
seq_len=self.seq_len)
train_loader = DataLoader(train_dataset,
batch_size = self.batch_size,
shuffle = False,
num_workers = self.num_workers)
return train_loader
def val_dataloader(self):
val_dataset = TimeseriesDataset(self.X_val,
self.y_val,
seq_len=self.seq_len)
val_loader = DataLoader(val_dataset,
batch_size = self.batch_size,
shuffle = False,
num_workers = self.num_workers)
return val_loader
def test_dataloader(self):
test_dataset = TimeseriesDataset(self.X_test,
self.y_test,
seq_len=self.seq_len)
test_loader = DataLoader(test_dataset,
batch_size = self.batch_size,
shuffle = False,
num_workers = self.num_workers)
return test_loader
Gives me the following error:
MisconfigurationException Traceback (most recent call last)
/Users/xxx/ai_bt/model.ipynb Cell 22 in <cell line: 34>()
1 # train on test set too! : see below
2 # trainer.test(dataloaders=test_dataloaders)
3
(...)
30 # with torch.no_grad():
31 # predictions = trainer.predict(model, dm)
---> 34 my_results = trainer.predict(model = model, datamodule=dm)
File /opt/homebrew/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:1025, in Trainer.predict(self, model, dataloaders, datamodule, return_predictions, ckpt_path)
1000 r"""
1001 Run inference on your data.
1002 This will call the model forward function to compute predictions. Useful to perform distributed
(...)
1022 Returns a list of dictionaries, one for each provided dataloader containing their respective predictions.
1023 """
1024 self.strategy.model = model or self.lightning_module
-> 1025 return self._call_and_handle_interrupt(
1026 self._predict_impl, model, dataloaders, datamodule, return_predictions, ckpt_path
1027 )
File /opt/homebrew/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py:723, in Trainer._call_and_handle_interrupt(self, trainer_fn, *args, **kwargs)
721 return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
...
--> 197 raise MisconfigurationException(f"No `{loader_name}()` method defined to run `Trainer.{trainer_method}`.")
199 # predict_step is not required to be overridden
200 if stage == "predict":
MisconfigurationException: No `predict_dataloader()` method defined to run `Trainer.predict`.
It must be something silly, because I can't figure out which dataloader this refers to. The dm argument works fine during training...
Update: based on @Mikel B's answer I added:
def predict_dataloader(self):
predict_dataset = TimeseriesDataset(self.X_test,
self.y_test,
seq_len=self.seq_len)
predict_loader = DataLoader(predict_dataset,
batch_size = self.batch_size,
shuffle = False,
num_workers = self.num_workers)
return predict_loader
Which results in:
---> 44 output = model(batch)
45 output = model.proba(output) # if not part of forward already
46 prediction_list.append(output)
File /opt/homebrew/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
/Users/user/ai_bt/model.ipynb Cell 22 in LSTMRegressor.forward(self, x)
32 def forward(self, x):
33 # lstm_out = (batch_size, seq_len, hidden_size)
---> 34 lstm_out, _ = self.lstm(x)
35 y_pred = self.linear(lstm_out[:,-1])
36 return y_pred
...
--> 731 is_batched = input.dim() == 3
732 batch_dim = 0 if self.batch_first else 1
733 if not is_batched:
AttributeError: 'list' object has no attribute 'dim'
| You have not defined predict_dataloader() in your LightningDataModule:
class MNISTDataModule(pl.LightningDataModule):
def __init__(self, data_dir: str = "path/to/dir", batch_size: int = 32):
super().__init__()
self.data_dir = data_dir
self.batch_size = batch_size
def setup(self, stage: Optional[str] = None):
self.mnist_test = MNIST(self.data_dir, train=False)
self.mnist_predict = MNIST(self.data_dir, train=False)
mnist_full = MNIST(self.data_dir, train=True)
self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])
def train_dataloader(self):
return DataLoader(self.mnist_train, batch_size=self.batch_size)
def val_dataloader(self):
return DataLoader(self.mnist_val, batch_size=self.batch_size)
def test_dataloader(self):
return DataLoader(self.mnist_test, batch_size=self.batch_size)
# THIS IS WHAT YOU ARE MISSING
def predict_dataloader(self):
return DataLoader(self.mnist_predict, batch_size=self.batch_size)
def teardown(self, stage: Optional[str] = None):
# Used to clean-up when the run is finished
...
Without this method the trainer does not know which data to load for the predict_step
| https://stackoverflow.com/questions/73154831/ |
How to get the inference compute graph of the pytorch model? | I want to hand-write a framework to perform inference for a given neural network. The network is quite complicated, so to make sure my implementation is correct, I need to know exactly how the inference process is done on the device.
I tried to use torchviz to visualize the network, but what I got seems to be the backpropagation compute graph, which is really hard to understand.
Then I tried to convert the PyTorch model to ONNX format, following the instructions enter link description here, but when I tried to visualize it, it seems that the original layers of the model had been separated into very small operators.
I just want to get a result like this
How can I get this? Thanks!
| Have you tried saving the model with torch.save (https://pytorch.org/tutorials/beginner/saving_loading_models.html) and opening it with Netron? The last view you showed is a view of the Netron app.
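A minimal sketch of that workflow (the file name is a placeholder; saving the whole model object rather than just the state_dict gives Netron more structure to display, and exporting to ONNX/TorchScript is an alternative if the pickled file does not render well):
import torch

torch.save(model, 'model.pth')  # save the full model object, then open model.pth in the Netron app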
| https://stackoverflow.com/questions/73156669/ |
Label Smoothing in PyTorch - Using BCE loss -> doing it with the data itself | I am doing a binary classification task in PyTorch, so with labels 0 and 1.
Now I want to introduce label smoothing as another regularization technique.
Because I use the BCE loss, there is no built-in way to apply label smoothing as
there is in the cross-entropy loss (for more classes than just 0 and 1).
Now I am considering implementing it not in the loss but in the data itself.
Would it be right to just replace my y_true values, for example 0 -> 0.1 and 1 -> 0.9,
before they go into the loss?
| You can replace the 0 with 0.1 and 1 with 0.9 if label smoothing is 0.1
criterion(disc_fake_pred, torch.zeros_like(disc_fake_pred)+0.1) #0.1
criterion(disc_real_pred, torch.ones_like(disc_real_pred)-0.1) #0.9
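Equivalently, you can smooth the whole target tensor in one expression. A small sketch matching the mapping above, assuming eps=0.1 and binary targets y_true in {0, 1}:
eps = 0.1
y_smooth = y_true * (1 - 2 * eps) + eps  # maps 0 -> 0.1 and 1 -> 0.9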
| https://stackoverflow.com/questions/73157016/ |
Multi-channel, 2D mask weights using BCEWithLogitsLoss in Pytorch | I have a set of 256x256 images that are each labeled with nine, binary 256x256 masks. I am trying to calculate the pos_weight in order to weight the BCEWithLogitsLoss using Pytorch.
The shape of my masks tensor is tensor([1000, 9, 256, 256]) where 1000 is the number of training images, 9 is the number of mask channels (all encoded to 0/1), and 256 is the size of each image side.
To calculate pos_weight, I have summed the zeros in each mask, and divided that number by the sum of all of the ones in each mask (following the advice suggested here.):
(masks[:,channel,:,:]==0).sum()/masks[:,channel,:,:].sum()
Calculating the weight for every mask channel provides a tensor with the shape of tensor([9]), which seems intuitive to me, since I want a pos_weight value for each of the nine mask channels. However when I try to fit my model, I get the following error message:
RuntimeError: The size of tensor a (9) must match the size of
tensor b (256) at non-singleton dimension 3
This error message is surprising because it suggests that the weights need to be the size of one of the image sides, but not the number of mask channels. What shape should pos_weight be and how do I specify that it should be providing weights for the mask channels instead of the image pixels?
| TLDR; This is a broadcasting issue which is surprisingly not handled by PyTorch's nn.BCEWithLogitsLoss namely F.binary_cross_entropy_with_logits. It might actually be worth putting out a Github issue linking to this SO thread to notify the developers of this undesirable behaviour.
In the documentation page of nn.BCEWithLogitsLoss, it is stated that the provided positive weights tensor pos_weight:
Must be a vector with length equal to the number of classes.
This is of course what you were expecting (rightly so) since positive weights refer to the weight given to the positive instances for every single class. Since your prediction and target tensors are multi-dimensional this seems to not be handled properly by PyTorch.
Anyhow, here is a minimal example showing how you can bypass this error, also showing the manual computation of the binary cross-entropy as a reference.
Here is the setup of the prediction and target tensors pred and label respectively:
>>> c=2;b=5;h=3;w=3
>>> pred = torch.rand(b,c,h,w)
>>> label = torch.randint(0,2, (b,c,h,w), dtype=float)
Now for the definition of the positive weight, notice the leading singletons dimensions:
>>> pos_weight = torch.rand(c,1,1)
In your case, with your existing 1D tensor of length c, you would simply have to unsqueeze two extra dimensions for the height and width dimensions. This means doing something like: pos_weight = pos_weight[:,None,None].
Calling the bce with logits function or its oop equivalent:
>>> F.binary_cross_entropy_with_logits(pred, label, pos_weight=pos_weight).mean()
Which is equivalent, in plain code to:
>>> z = torch.sigmoid(pred)
>>> bce = -(pos_weight*label*torch.log(z) + (1-label)*torch.log(1-z))
Note, that the built-in function would have the desired behaviour (i.e. no error message) if the class dimension was last in your prediction and target tensors.
>>> pos_weight = torch.rand(c)
>>> F.binary_cross_entropy_with_logits(
... pred.transpose(1,-1),
... label.transpose(1,-1),
... pos_weight=pos_weight)
In other words, we are applying the function with format NHWC which means the pos_weight of format C can be multiplied properly. So the result above effectively yields the same result as:
>>> F.binary_cross_entropy_with_logits(
... pred,
... label,
... pos_weight=pos_weight[:,None,None])
You can read more about the pos_weight in BCEWithLogitsLoss in another thread here
| https://stackoverflow.com/questions/73159568/ |
How to index an array/tensor on the "rest" of the indices given some indices? | Given some array (or tensor):
x = np.array([[0, 1, 0, 0, 0],
[0, 0, 0, 1, 0],
[1, 0, 0, 0, 0]])
and some indices of dimension equaling the number of rows in x:
idx = np.array([3, 1, 0]) # Index values range from (0: number of columns) in "x"!
Now if I wanted to add a certain value c to the columns of x depending on the indices idx, I would do the following:
x[range(3), idx] += c
To get:
x = np.array([[ 0, 1, 0, c, 0],
[ 0, c, 0, 1, 0],
[1+c, 0, 0, 0, 0]])
But what if I wanted to add the value c to all the other column indices in x, rather than to the exact indices in idx?
The desired outcome (based on the above example) should be:
x = np.array([[c, 1+c, c, 0, c],
[c, 0, c, 1+c, c],
[1, c, c, c, c]])
| Create a boolean array to use as mask:
# set up default mask
m = np.ones(x.shape, dtype=bool)
# update mask
m[np.arange(m.shape[0]), idx] = False
# perform boolean indexing
x[m] += c
Output (c=9):
array([[ 9, 10, 9, 0, 9],
[ 9, 0, 9, 10, 9],
[ 1, 9, 9, 9, 9]])
m:
array([[ True, True, True, False, True],
[ True, False, True, True, True],
[False, True, True, True, True]])
| https://stackoverflow.com/questions/73161693/ |
CNN training loss is so unstable | my CNN network
Above is my config of the network.
I am training a CNN on images of size 192*192.
My target is a classification into 11 classes.
However, the loss and the accuracy on the test dataset appear to be very unstable. I have to run 15+ epochs to get a stable accuracy and loss. The maximum accuracy is only 50%.
What can I do to improve the performance?
| I would recommend you first refer to widely known models like VGG-16, LeNet or VGG-19 and check out the way the Conv2D and max-pooling layers are placed.
Start with a very basic model without any batch normalization and Leaky ReLU layers. You just keep the conv2D and max pooling layers and train your model for a few epochs.
Next, try other activations, e.g. switching from ReLU to TanH. Try changing the max pooling to average pooling.
If you are solving a classification problem then use the softmax layer at the end. Also, introduce Dense layer(s) after flattening.
Your dataset should be large and also the target should be one-hot encoded if you wish to use the softmax layer.
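For concreteness, a minimal baseline in the spirit of the advice above (a sketch: the layer widths are assumptions, sized for 192x192 RGB inputs and 11 classes):
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 192 -> 96
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 96 -> 48
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48 -> 24
    nn.Flatten(),
    nn.Linear(64 * 24 * 24, 128), nn.ReLU(),
    nn.Linear(128, 11),  # raw logits; pair with nn.CrossEntropyLoss
)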
| https://stackoverflow.com/questions/73162489/ |
RuntimeError: Expected number of channels in input to be divisible by num_groups, but got input of shape [64, 16, 32, 32] and num_groups=32 | I have EfficientNet working fine on my dataset. Now I have changed all the batch norm layers into group norm layers. I have already done this with other networks like VGG16 and ResNet18 and everything was OK. On EfficientNet I get this error: RuntimeError: Expected number of channels in input to be divisible by num_groups, but got input of shape [64, 16, 32, 32] and num_groups=32
Basically I have done this:
efficientnet_b0 = torchvision.models.efficientnet_b0(pretrained=False)
efficientnet_b0.classifier = nn.Linear(in_features=1280, out_features=10, bias=True)
efficientnet_b0.features[0][1] = nn.GroupNorm(32, 32)
efficientnet_b0.features[1][0].block[0][1] = nn.GroupNorm(32, 32)
efficientnet_b0.features[1][0].block[2][1] = nn.GroupNorm(32, 16)
efficientnet_b0.features[2][0].block[0][1] = nn.GroupNorm(32, 96)
efficientnet_b0.features[2][0].block[1][1] = nn.GroupNorm(32, 96)
efficientnet_b0.features[2][0].block[3][1] = nn.GroupNorm(32, 24)
efficientnet_b0.features[2][1].block[0][1] = nn.GroupNorm(32, 144)
efficientnet_b0.features[2][1].block[1][1] = nn.GroupNorm(32, 144)
efficientnet_b0.features[2][1].block[3][1] = nn.GroupNorm(32, 24)
efficientnet_b0.features[3][0].block[0][1] = nn.GroupNorm(32, 144)
efficientnet_b0.features[3][0].block[1][1] = nn.GroupNorm(32, 144)
efficientnet_b0.features[3][0].block[3][1] = nn.GroupNorm(32, 40)
efficientnet_b0.features[3][1].block[0][1] = nn.GroupNorm(32, 240)
efficientnet_b0.features[3][1].block[1][1] = nn.GroupNorm(32, 240)
efficientnet_b0.features[3][1].block[3][1] = nn.GroupNorm(32, 40)
efficientnet_b0.features[4][0].block[0][1] = nn.GroupNorm(32, 240)
efficientnet_b0.features[4][0].block[1][1] = nn.GroupNorm(32, 240)
efficientnet_b0.features[4][0].block[3][1] = nn.GroupNorm(32, 80)
efficientnet_b0.features[4][1].block[0][1] = nn.GroupNorm(32, 480)
efficientnet_b0.features[4][1].block[1][1] = nn.GroupNorm(32, 480)
efficientnet_b0.features[4][1].block[3][1] = nn.GroupNorm(32, 80)
efficientnet_b0.features[4][2].block[0][1] = nn.GroupNorm(32, 480)
efficientnet_b0.features[4][2].block[1][1] = nn.GroupNorm(32, 480)
efficientnet_b0.features[4][2].block[3][1] = nn.GroupNorm(32, 80)
efficientnet_b0.features[5][0].block[0][1] = nn.GroupNorm(32, 480)
efficientnet_b0.features[5][0].block[1][1] = nn.GroupNorm(32, 480)
efficientnet_b0.features[5][0].block[3][1] = nn.GroupNorm(32, 112)
efficientnet_b0.features[5][1].block[0][1] = nn.GroupNorm(32, 672)
efficientnet_b0.features[5][1].block[1][1] = nn.GroupNorm(32, 672)
efficientnet_b0.features[5][1].block[3][1] = nn.GroupNorm(32, 112)
efficientnet_b0.features[5][2].block[0][1] = nn.GroupNorm(32, 672)
efficientnet_b0.features[5][2].block[1][1] = nn.GroupNorm(32, 672)
efficientnet_b0.features[5][2].block[3][1] = nn.GroupNorm(32, 112)
efficientnet_b0.features[6][0].block[0][1] = nn.GroupNorm(32, 672)
efficientnet_b0.features[6][0].block[1][1] = nn.GroupNorm(32, 672)
efficientnet_b0.features[6][0].block[3][1] = nn.GroupNorm(32, 192)
efficientnet_b0.features[6][1].block[0][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[6][1].block[1][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[6][1].block[3][1] = nn.GroupNorm(32, 192)
efficientnet_b0.features[6][2].block[0][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[6][2].block[1][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[6][2].block[3][1] = nn.GroupNorm(32, 192)
efficientnet_b0.features[6][3].block[0][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[6][3].block[1][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[6][3].block[3][1] = nn.GroupNorm(32, 192)
efficientnet_b0.features[7][0].block[0][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[7][0].block[1][1] = nn.GroupNorm(32, 1152)
efficientnet_b0.features[7][0].block[3][1] = nn.GroupNorm(32, 320)
efficientnet_b0.features[8][1] = nn.GroupNorm(32, 1280)
The original efficientnet is this:
EfficientNet(
(features): Sequential(
(0): ConvNormActivation(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): Sequential(
(0): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(32, 8, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(8, 32, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(2): ConvNormActivation(
(0): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.0, mode=row)
)
)
(2): Sequential(
(0): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(96, 4, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(4, 96, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.0125, mode=row)
)
(1): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(144, 144, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=144, bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(144, 6, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(6, 144, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(144, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.025, mode=row)
)
)
(3): Sequential(
(0): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(24, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(144, 144, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=144, bias=False)
(1): BatchNorm2d(144, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(144, 6, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(6, 144, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(144, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.037500000000000006, mode=row)
)
(1): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(240, 240, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=240, bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(240, 10, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(10, 240, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(240, 40, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(40, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.05, mode=row)
)
)
(4): Sequential(
(0): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(40, 240, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(240, 240, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=240, bias=False)
(1): BatchNorm2d(240, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(240, 10, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(10, 240, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(240, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.0625, mode=row)
)
(1): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(480, 20, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(20, 480, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.07500000000000001, mode=row)
)
(2): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(480, 480, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=480, bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(480, 20, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(20, 480, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(480, 80, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(80, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.08750000000000001, mode=row)
)
)
(5): Sequential(
(0): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(80, 480, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(480, 480, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=480, bias=False)
(1): BatchNorm2d(480, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(480, 20, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(20, 480, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(480, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1, mode=row)
)
(1): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(672, 672, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=672, bias=False)
(1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(672, 28, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(28, 672, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1125, mode=row)
)
(2): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(672, 672, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=672, bias=False)
(1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(672, 28, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(28, 672, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(672, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(112, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.125, mode=row)
)
)
(6): Sequential(
(0): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(112, 672, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(672, 672, kernel_size=(5, 5), stride=(2, 2), padding=(2, 2), groups=672, bias=False)
(1): BatchNorm2d(672, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(672, 28, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(28, 672, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(672, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1375, mode=row)
)
(1): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(1152, 1152, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=1152, bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1152, 48, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(48, 1152, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.15000000000000002, mode=row)
)
(2): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(1152, 1152, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=1152, bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1152, 48, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(48, 1152, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1625, mode=row)
)
(3): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(1152, 1152, kernel_size=(5, 5), stride=(1, 1), padding=(2, 2), groups=1152, bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1152, 48, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(48, 1152, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(1152, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(192, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.17500000000000002, mode=row)
)
)
(7): Sequential(
(0): MBConv(
(block): Sequential(
(0): ConvNormActivation(
(0): Conv2d(192, 1152, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(1): ConvNormActivation(
(0): Conv2d(1152, 1152, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1152, bias=False)
(1): BatchNorm2d(1152, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
(2): SqueezeExcitation(
(avgpool): AdaptiveAvgPool2d(output_size=1)
(fc1): Conv2d(1152, 48, kernel_size=(1, 1), stride=(1, 1))
(fc2): Conv2d(48, 1152, kernel_size=(1, 1), stride=(1, 1))
(activation): SiLU(inplace=True)
(scale_activation): Sigmoid()
)
(3): ConvNormActivation(
(0): Conv2d(1152, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(stochastic_depth): StochasticDepth(p=0.1875, mode=row)
)
)
(8): ConvNormActivation(
(0): Conv2d(320, 1280, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(1280, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): SiLU(inplace=True)
)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(classifier): Sequential(
(0): Dropout(p=0.2, inplace=True)
(1): Linear(in_features=1280, out_features=1000, bias=True)
)
)
So, basically I changed all the batch norm layers into group norm layers. Each GN layer has 32 as num_groups, and the number of channels is exactly the same as in the corresponding batch norm layer.
| I solved it: num_channels must be divisible by num_groups, so I used 8 as num_groups in each layer rather than 32 (not every layer's channel count is divisible by 32).
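For reference, a minimal sketch of the conversion (the helper name bn_to_gn and the variable name model are mine, not from the original code):

import torch.nn as nn

def bn_to_gn(module, num_groups=8):
    # Recursively swap every BatchNorm2d for a GroupNorm over the same
    # number of channels. num_channels must be divisible by num_groups:
    # 8 divides every BatchNorm channel count in this model, while 32
    # does not divide all of them.
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, nn.GroupNorm(num_groups, child.num_features))
        else:
            bn_to_gn(child, num_groups)

bn_to_gn(model)  # model being the EfficientNet printed above (name assumed)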
| https://stackoverflow.com/questions/73165767/ |
Pytorch torchvision.transforms execute randomly? | I am doing this transformation:
self.transform = transforms.Compose( {
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
} )
and then
image = Image.open(img_name)
if self.transform:
image = self.transform(image)
This works for the first epoch, so why does it crash on the second epoch?
Why is Normalize receiving a PIL image and not a torch.Tensor? Is the execution order of the items in transforms.Compose random?
Traceback (most recent call last):
  File "/home/ubuntu/projects/ssl/src/train_supervised.py", line 63, in <module>
    main()
  File "/home/ubuntu/projects/ssl/src/train_supervised.py", line 60, in main
    train()
  File "/home/ubuntu/projects/ssl/src/train_supervised.py", line 45, in train
    for i, data in enumerate(tqdm_):
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
    data = self._next_data()
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1224, in _next_data
    return self._process_data(data)
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1250, in _process_data
    data.reraise()
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/_utils.py", line 457, in reraise
    raise exception
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ubuntu/projects/ssl/src/data_loader.py", line 44, in __getitem__
    image = self.transform(image)
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 95, in __call__
    img = t(img)
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 270, in forward
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/ubuntu/anaconda3/envs/pytorch-1.11.0/lib/python3.9/site-packages/torchvision/transforms/functional.py", line 341, in normalize
    raise TypeError(f"Input tensor should be a torch tensor. Got {type(tensor)}.")
TypeError: Input tensor should be a torch tensor. Got <class 'PIL.Image.Image'>.
| Python set iteration order is not deterministic. Use a list instead ([] rather than {}).
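For example, the fixed pipeline (a sketch of the same transforms, just inside a list so the order is guaranteed):

self.transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))
])

With a list, ToTensor always runs before Normalize, so Normalize receives a torch.Tensor; with a set literal the order is arbitrary, so Normalize can end up in front of ToTensor and be handed a PIL image, which matches the traceback above.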
| https://stackoverflow.com/questions/73167368/ |
converting tf.data.Dataset.from_tensor_slices to pytorch | I am trying to convert this model from TensorFlow to PyTorch. Unfortunately, I don't know TensorFlow very well. I have a problem transferring the data loader from here, and more generally with converting this function to PyTorch:
def _components_train_step(self, importance_weights, old_means, old_chol_precisions):
for i in range(self._model.num_components):
dt = (self._train_contexts, importance_weights[:, i], old_means, old_chol_precisions)
data = tf.data.Dataset.from_tensor_slices(dt)
data = data.shuffle(self._train_contexts.shape[0]).batch(self.c.components_batch_size)
for context_batch, iw_batch, old_means_batch, old_chol_precisions_batch in data:
iw_batch = iw_batch / tf.reduce_sum(iw_batch)
with tf.GradientTape() as tape:
samples = self._model.components[i].sample(context_batch)
losses = - tf.squeeze(self._dre(tf.concat([context_batch, samples], axis=-1)))
kls = self._model.components[i].kls_other_chol_inv(context_batch, old_means_batch[:, i],
old_chol_precisions_batch[:, i])
loss = tf.reduce_mean(iw_batch * (losses + kls))
gradients = tape.gradient(loss, self._model.components[i].trainable_variables)
self._c_opts[i].apply_gradients(zip(gradients, self._model.components[i].trainable_variables))
I implemented this function as follows
def _components_train_step(self, importance_weights, old_means, old_chol_precisions):
self._c_opts = [torch.optim.Adam(self._model.components[i].trainable_variables, lr=self.c.components_learning_rate, betas=(0.5, 0.999)) for i in range(self._model.num_components)]
for i in range(self._model.num_components):
dataset = torch.utils.data.TensorDataset(self._train_contexts, importance_weights[:, i], old_means, old_chol_precisions)
loader = torch.utils.data.DataLoader(dataset, shuffle=True, batch_size=self.c.components_batch_size)
for batch_idx, (context_batch, iw_batch, old_means_batch, old_chol_precisions_batch) in enumerate(loader):
iw_batch = iw_batch / torch.sum(iw_batch)
samples = self._model.components[i].sample(context_batch)
losses = - torch.squeeze(self._dre(torch.cat([context_batch, samples], dim=-1)))
kls = self._model.components[i].kls_other_chol_inv(context_batch, old_means_batch[:, i],
old_chol_precisions_batch[:, i])
loss = torch.mean(iw_batch * (losses + kls))
self._c_opts[i].zero_grad()
loss.backward()
self._c_opts[i].step()
Any suggestions or help?
| I believe you can achieve a comparable result to tf.data.Dataset.from_tensor_slices using PyTorch's torch.utils.data.TensorDataset, which takes any number of tensors as arguments. This has the effect of zipping the elements together into a single dataset that yields tuples with one entry per input tensor.
Here is a minimal example:
import torch
import tensorflow as tf
from torch.utils import data
Dataset = tf.data.Dataset  # alias used in the snippets below
feats = torch.tensor([[[1, 3], [2, 3]], [[2, 1], [1, 2]], [[3, 3], [3, 2]]])
tf_feats = tf.convert_to_tensor(feats.numpy())
labels = torch.tensor([[10, 10], [20, 20], [10, 20]])
tf_labels = tf.convert_to_tensor(labels.numpy())
Using TensorFlow:
>>> dataset = Dataset.from_tensor_slices((tf_feats, tf_labels))
>>> for x in dataset.as_numpy_iterator():
... print(x)
(array([[1, 3],
[2, 3]]), array([10, 10]))
(array([[2, 1],
[1, 2]]), array([20, 20]))
(array([[3, 3],
[3, 2]]), array([10, 20]))
Using PyTorch:
>>> dataset = data.TensorDataset(feats, labels)
>>> for x in dataset:
... print(x)
(tensor([[1, 3],
[2, 3]]), tensor([10, 10]))
(tensor([[2, 1],
[1, 2]]), tensor([20, 20]))
(tensor([[3, 3],
[3, 2]]), tensor([10, 20]))
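To also mirror the .shuffle(...).batch(...) calls from the TensorFlow snippet, you can wrap the dataset in a DataLoader (a sketch with an arbitrary batch size):

>>> loader = data.DataLoader(dataset, batch_size=2, shuffle=True)
>>> for feat_batch, label_batch in loader:
...     print(feat_batch.shape, label_batch.shape)
torch.Size([2, 2, 2]) torch.Size([2, 2])
torch.Size([1, 2, 2]) torch.Size([1, 2])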
| https://stackoverflow.com/questions/73169705/ |
How can I apply cuda to custom model in pytorch? | The inputs are a dictionary of tensors, so during training I move them to cuda to use the GPU. My custom model is shown below.
class EmbeddingLayer(nn.Module):
def __init__(self):
super(EmbeddingLayer, self).__init__()
# other features
self.other_features_embedding = []
for feature_name in OTHER_FEATURES:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
embedding_dims = int(math.sqrt(len(vocabulary)))
embedding = nn.Embedding(len(vocabulary)+1, embedding_dims)
self.other_features_embedding.append(embedding)
# transformer features
item_vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY['item']
self.item_embedding_dims = int(math.sqrt(len(item_vocabulary)))
self.item_embedding = nn.Embedding(len(item_vocabulary)+1, self.item_embedding_dims)
def forward(self, inputs):
# other features
encoded_other_features = []
for i, feature_name in enumerate(OTHER_FEATURES):
embedding = self.other_features_embedding[i](inputs[feature_name])
encoded_other_features.append(embedding)
encoded_other_features = torch.cat(encoded_other_features, -1)
# transformer features
encoded_sequence_item = self.item_embedding(inputs['sequence_item'])
encoded_target_item = self.item_embedding(inputs['target_item'])
positions = inputs['target_timestamp'].repeat(sequence_length-1, 1).transpose(0, 1) - inputs['sequence_timestamp']
encoded_positions = positions.repeat(1, self.item_embedding_dims).reshape(-1, self.item_embedding_dims, sequence_length-1).transpose(1,2)
encoded_sequence_item_with_position = encoded_sequence_item + encoded_positions
encoded_transformer_features = torch.cat((encoded_sequence_item_with_position, encoded_target_item.reshape(-1, 1, self.item_embedding_dims)), 1)
return encoded_other_features, encoded_transformer_features
class BST(nn.Module):
def __init__(self, hidden_units, dropout, num_heads):
super(BST, self).__init__()
...
self.embedding_layer = EmbeddingLayer()
...
def forward(self, inputs):
other_features, transformer_features = self.embedding_layer(inputs)
...
return self.output(features)
model = BST([256, 128], 0.3, 1)
model.to(device)
def train(model, optimizer, dataloader):
model.train()
for inputs in tqdm(dataloader, total=len(dataloader)):
for k, v in inputs.items():
inputs[k] = v.to(device)
model.zero_grad()
pred = model(inputs)
...
But the following error occurs:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
I think the error occurs at the embedding lookup in EmbeddingLayer. How can I fix this error so I can use the GPU while training?
| Your plain Python list of nn.Module instances does not register the embedding layers as submodules of your model, so model.to(device) never moves them to the GPU. To properly register a list of modules you should use nn.ModuleList. Therefore, replace the loop in your __init__ with the following:
embeddings = []
for feature_name in OTHER_FEATURES:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
embedding_dims = int(math.sqrt(len(vocabulary)))
embedding = nn.Embedding(len(vocabulary)+1, embedding_dims)
embeddings.append(embedding)
self.other_features_embedding = nn.ModuleList(embeddings)
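Because nn.ModuleList registers the embeddings as submodules, model.to(device) now moves their weights to the GPU together with the rest of the model, and they also show up in model.parameters() for the optimizer. The indexing in forward (self.other_features_embedding[i]) keeps working unchanged. A quick sanity check (a sketch):

print(next(model.embedding_layer.other_features_embedding[0].parameters()).device)
# expected: cuda:0 after model.to(device)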
| https://stackoverflow.com/questions/73176266/ |