instruction | input | output | source
---|---|---|---|
Train multiple connected neural networks with a single optimizer | How can I jointly optimize the parameters of a model comprising two distinct neural networks with a single optimizer? What I've tried is the following, after having initialized an optimizer:
optim_global = optim.Adam(zip(model1.parameters(), model2.parameters()))
but I get this error
TypeError: optimizer can only optimize Tensors, but one of the params is tuple
| These are generators; you can combine them either with the unpacking operator *:
>>> optim.Adam([*model1.parameters(), *model2.parameters()])
Or using itertools.chain (imported via from itertools import chain):
>>> optim.Adam(chain(model1.parameters(), model2.parameters()))
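A complete, self-contained sketch (the two placeholder models and the learning rate are assumptions, not part of the original answer):
import torch.nn as nn
import torch.optim as optim
from itertools import chain

model1 = nn.Linear(10, 10)  # placeholder models, just for illustration
model2 = nn.Linear(10, 2)

# a single optimizer that updates the parameters of both networks
optim_global = optim.Adam(chain(model1.parameters(), model2.parameters()), lr=1e-3)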
| https://stackoverflow.com/questions/72463796/ |
Concatenate N pytorch tensors (of the same shape) generated from within loop | Tensors of the same shape are being returned from within a loop and I want to concatenate them succinctly and as pythonically / pytorchly as possible.
Current solution:
import torch
for object_id in object_ids:
dataset = Dataset(object_id)
image_tensor = dataset.get_random_image_tensor()
if 'concatenated_image_tensors' in locals():
concatenated_image_tensors = torch.cat((concatenated_image_tensors, image_tensor))
else:
concatenated_image_tensors = image_tensor
Is there a better way?
| A good approach is to first append to a Python list, then concatenate the whole list at the end. Otherwise you'll end up moving data around in memory each time torch.cat is called.
all_img = []
for object_id in object_ids:
dataset = Dataset(object_id)
image_tensor = dataset.get_random_image_tensor()
all_img.append(image_tensor)
all_img = torch.cat(all_img)
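One shape caveat (an assumption about the data, not stated in the original answer): torch.cat joins along an existing dimension, so this works if each image_tensor already has a leading batch dimension; for plain (C, H, W) images, torch.stack creates the batch dimension instead:
import torch

all_img = [torch.rand(3, 32, 32) for _ in range(4)]  # hypothetical (C, H, W) images
batch = torch.stack(all_img)  # shape (4, 3, 32, 32); torch.cat would give (12, 32, 32)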
| https://stackoverflow.com/questions/72463818/ |
PyTorch: How to sample from a tensor where each value in the tensor has a different likelihood of being selected? | Given tensor
A = torch.tensor([0.0316, 0.2338, 0.2338, 0.2338, 0.0316, 0.0316, 0.0860, 0.0316, 0.0860]) containing probabilities which sum to 1 (I removed some decimals but it's safe to assume it'll always sum to 1), I want to sample a value from A where the value itself is the likelihood of getting sampled. For instance, the likelihood of sampling 0.0316 from A is 0.0316. The output of the value sampled should still be a tensor.
I tried using WeightedRandomSampler but it doesn't allow the value selected to be a tensor anymore, instead it detaches.
One caveat that makes this tricky is that I want to also know the index of the sampled value as it appears in the tensor. That is, say I sample 0.2338, I want to know if it's index 1, 2 or 3 of tensor A.
| Selecting with the expected probabilities can be achieved by accumulating the weights and selecting the insertion index of a random float in [0, 1). The example array A is slightly adjusted to sum up to 1.
import torch
A = torch.tensor([0.0316, 0.2338, 0.2338, 0.2338, 0.0316, 0.0316, 0.0860, 0.0316, 0.0862], requires_grad=True)
p = A.cumsum(0)
#tensor([0.0316, 0.2654, 0.4992, 0.7330, 0.7646, 0.7962, 0.8822, 0.9138, 1.0000], grad_fn=<CumsumBackward0>)
idx = torch.searchsorted(p, torch.rand(1))
A[idx], idx
Output
(tensor([0.2338], grad_fn=<IndexBackward0>), tensor([3]))
This is faster than the more common approach with A.multinomial(1).
Sampling one element 10000 times to check that the distribution conforms to the probabilities:
from collections import Counter
Counter(int(A.multinomial(1)) for _ in range(10000))
#1 loop, best of 5: 233 ms per loop
# vs @HatemAli's solution
dist=torch.distributions.categorical.Categorical(probs=A)
Counter(int(dist.sample()) for _ in range(10000))
# 10 loops, best of 5: 107 ms per loop
Counter(int(torch.searchsorted(p, torch.rand(1))) for _ in range(10000))
# 10 loops, best of 5: 53.2 ms per loop
Output
Counter({0: 319,
1: 2360,
2: 2321,
3: 2319,
4: 330,
5: 299,
6: 903,
7: 298,
8: 851})
| https://stackoverflow.com/questions/72467096/ |
Retrain facenet neural network | I want to use the pretrained MTCNN model, train it on a subset of images, and keep the layers that are important for extracting image features. Also, I want to use another loss function.
import torch
from facenet_pytorch import InceptionResnetV1, MTCNN
from torch.utils.data import DataLoader
from torchvision import datasets
import numpy as np
import pandas as pd
import os
workers = 0 if os.name == 'nt' else 4
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Running on device: {}'.format(device))
mtcnn = MTCNN(
image_size=160, margin=0, min_face_size=20,
thresholds=[0.6, 0.7, 0.7], factor=0.709, post_process=True,
device=device
)
The problem is that I don't know how to "freeze" the layers of the neural network that I don't want to tune, so that only the last layers are tuned. Also, how do I apply the new criterion to this pretrained network?
| You can first print out or visualize the MTCNN model, so you will know the name (key) of the layer you want to keep trainable. Assuming the name of that layer is feature_layer, you can then simply do:
for name, param in model.named_parameters():
if not 'feature_layer' in name:
param.requires_grad = False
So it will freeze every layer except feature_layer.
Then pass only the parameters that still require gradients to the optimizer:
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(params, lr=1e-3)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
Now, to change the criterion, you change it here to one of the default ones, or use a custom one:
loss_fn = torch.nn.CrossEntropyLoss()
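Putting the pieces together, a minimal fine-tuning loop could look like this (a sketch: num_epochs and dataloader are assumed names, not part of the original answer):
for epoch in range(num_epochs):          # num_epochs is an assumed variable
    for inputs, targets in dataloader:   # dataloader is an assumed DataLoader
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
    lr_scheduler.step()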
| https://stackoverflow.com/questions/72469207/ |
how does optimizer.step() takes the recent loss of the model? | I am looking at the example from pytorch of a model:
https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
running_loss = 0.0
print('Finished Training')
And I have a very basic question - the optimizer was never inserted into or defined as part of the model (unlike model.compile in keras). Nor did it receive the loss or labels of the last batch or epoch.
How does it "know" to perform the optimization step?
| Rather than thinking about how loss and parameters are related, you should consider them as separate events which are not linked. Indeed, there are two distinct elements that have an effect on parameters and their cached gradient.
The autograd mechanism (the process in charge of performing gradient computation) allows you to call backward on a torch.Tensor (your loss) and which will in turn backpropagate through all the nodes tensors that are allowed to compute this final tensor value. Doing so, it will navigate through what's called the computation graph, updating each of the parameters' gradients by changing their grad attribute. This means that at the end of a backward call the network's learned parameters that were used to compute this output will have a grad attribute containing the gradient of the loss with respect to that parameter.
loss.backward()
The optimizer is independent of the backward pass since it doesn't rely on it. You can call backward on your graph once, multiple times, or on different loss terms depending on your use case. The optimizer's task is to take the parameters of the model independently (that is, irrespective of the network architecture or its computation graph) and update them using a given optimization routine (for example via Stochastic Gradient Descent, Root Mean Squared Propagation, etc.). It goes through all parameters it was initialized with and updates them using their respective gradient value (which is supposed to have been stored in the grad attribute by at least one backpropagation).
optimizer.step()
Important notes:
Keep in mind though that the backward process and the actual update call using the optimizer are linked implicitly, only by the fact that the optimizer will use the results computed by the preceding backward call.
In PyTorch parameter gradients are kept in memory so you have to clear them out before performing a new backward call. This is done using the optimizer's zero_grad function. In practice, it clears the grad attribute of the tensors it has registered as parameters.
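A tiny self-contained sketch (not from the original answer) makes the implicit link visible - the optimizer holds a reference to the very same parameter tensor, so step() can read the grad attribute that backward() filled in:
import torch

w = torch.nn.Parameter(torch.tensor([1.0]))
optimizer = torch.optim.SGD([w], lr=0.1)  # the optimizer stores a reference to w
loss = (2 * w).sum()
loss.backward()           # fills w.grad with dloss/dw = 2.0
print(w.grad)             # tensor([2.])
optimizer.step()          # reads w.grad and updates w in place: 1.0 - 0.1 * 2.0
print(w.data)             # tensor([0.8000])
optimizer.zero_grad()     # clears w.grad before the next backward call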
| https://stackoverflow.com/questions/72472774/ |
Add padding based on partial sum | I have four given variables:
group size
total of groups
partial sum
1-D tensor
and I want to add zeros when the sum within a group reached the partial sum. For example:
groupsize = 4
totalgroups = 3
partialsum = 15
d1tensor = torch.tensor([ 3, 12, 5, 5, 5, 4, 11])
The expected result is:
[ 3, 12, 0, 0, 5, 5, 5, 0, 4, 11, 0, 0]
I have no clue how I can achieve that in pure PyTorch. In Python it would be something like this:
target = [0]*(groupsize*totalgroups)
cursor = 0
current_count = 0
d1tensor = [ 3, 12, 5, 5, 5, 4, 11]
for idx, ele in enumerate(target):
subgroup_start = (idx//groupsize) *groupsize
subgroup_end = subgroup_start + groupsize
if sum(target[subgroup_start:subgroup_end]) < partialsum:
target[idx] = d1tensor[cursor]
cursor +=1
Can anyone help me with that? I have already googled it but couldn't find anything.
| Some logic, Numpy and list comprehensions are sufficient here.
I will break it down step by step, you can make it slimmer and prettier afterwards:
import numpy as np
my_val = 15
block_size = 4
total_groups = 3
d1 = [3, 12, 5, 5, 5, 4, 11]
d2 = np.cumsum(d1)
d3 = d2 % my_val == 0 #find where sum of elements is 15 or multiple
split_points= [i+1 for i, x in enumerate(d3) if x] # find index where cumsum == my_val
#### Option 1
split_array = np.split(d1, split_points, axis=0)
padded_arrays = [np.pad(array, (0, block_size - len(array)), mode='constant') for array in split_array] #pad arrays
padded_d1 = np.concatenate(padded_arrays[:total_groups]) #put them together, discard extra group if present
#### Option 2
split_points = [el for el in split_points if el <len(d1)] #make sure we are not splitting on the last element of d1
split_array = np.split(d1, split_points, axis=0)
padded_arrays = [np.pad(array, (0, block_size - len(array)), mode='constant') for array in split_array] #pad arrays
padded_d1 = np.concatenate(padded_arrays)
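Since the question asks for PyTorch, the NumPy result can be converted back to a tensor at the end (a small addition, not in the original answer):
import torch
padded_tensor = torch.from_numpy(padded_d1)
# tensor([ 3, 12,  0,  0,  5,  5,  5,  0,  4, 11,  0,  0])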
| https://stackoverflow.com/questions/72477827/ |
Get 2D output from the embedding layer in pytorch | I have an X_train of size (2, 100). I want to use the first 50 columns of the data as-is, and feed the second 50 columns of this matrix into an embedding layer, converting them to a matrix of size 2*3.
I read a lot about the embedding layer in pytorch, however I did not understand it well. I don't know how to get a 2*3 as the output of the embedding layer. Could you please help me with that? Here is a simple example.
import numpy as np
import torch
import torch.nn as nn
X_train = np.random.randint(10, size = (2, 100))
X_train_notmbedding = X_train[:, 0:50] # not embedding (2,50)
X_train_mbedding = X_train[:, 50:100] #embedding (2, 50)
X_train_mbedding = torch.LongTensor([X_train_mbedding])
embedding = nn.Embedding(50, 3)
embeding_output = embedding(X_train_mbedding) # I want to get a embedding output as (2,3)
#X_train_new = torch.cat([X_train_notmbedding, embeding_output], 1) # here I want to build a matrix with size (2, 53)
| From the discussion, it looks like your understanding of Embeddings is not accurate.
Only use 1 Embedding for 1 feature. In your example you are combining dates, ids etc. in 1 Embedding. Even in the medium article, they are using separate embeddings.
Think of Embedding as one-hot encoding on steroids (less memory, data correlation, etc.). If you do not understand one-hot encoding, I would start there first.
KWH is already a real value not categorical. Use it as a linear input to the network (after normalization).
ID: I do not know what ID denotes in your data, if it is a unique ID for each datapoint, it is not useful and should be excluded.
If the above does not make sense, I would start with a simple network using LSTM and make it work first before using an advanced architecture.
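To make the "one-hot encoding on steroids" point concrete, here is a minimal sketch (with assumed sizes) of embedding a single categorical feature:
import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=10, embedding_dim=3)  # 10 categories -> 3-dim vectors
ids = torch.LongTensor([2, 7])  # a batch of two category ids
out = emb(ids)                  # shape (2, 3): one learned vector per id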
| https://stackoverflow.com/questions/72481548/ |
Installing cudatoolkit works with conda install but not with conda create -f | I have a PyTorch environment file:
name: torch
channels:
- defaults
- conda-forge
dependencies:
- python=3.7
- pytorch::pytorch
- pytorch::torchvision
- pytorch::torchaudio
- pytorch::cudatoolkit
- numpy
- scipy
- scikit-learn
- matplotlib
- pillow
- tqdm
- joblib
- visdom
- jsonpatch
- pip
- pip:
- torchsummary
- opencv-python==4.1.1.26
Trying to create a conda environment from it with conda env create -f torch.yml fails:
(base) prompt@PC:~$ conda env create -f environment.yml
Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- pytorch::cudatoolkit
The environment is created without issues if I remove cudatoolkit from the list of dependencies.
However, conda install cudatoolkit -c pytorch finds and installs the package without issues. The same happens if I replace cudatoolkit with cudatoolkit=11.3 (the current most recent version listed on the PyTorch website) in both cases.
| I managed to resolve the issue by installing cudatoolkit from the nvidia channel, rather than pytorch. I'm still not sure why cudatoolkit is available from pytorch with one method but not the other, but this solves my issue (although the nvidia version seems to be larger, so it's probably a superset package of pytorch's cudatoolkit). My YAML file now looks like this:
name: ritnet
channels:
- defaults
- conda-forge
dependencies:
- python=3.7
- pytorch::pytorch
- pytorch::torchvision
- pytorch::torchaudio
- nvidia::cudatoolkit=11.3
- numpy
- scipy
- scikit-learn
- matplotlib
- pillow
- tqdm
- joblib
- visdom
- jsonpatch
- pip
- pip:
- torchsummary
- opencv-python==4.1.1.26
| https://stackoverflow.com/questions/72481690/ |
Working with large multiple datasets where each dataset contains multiple values - Pytorch | I'm training a neural network and have > 15GB of data overall inside a folder; the folder has multiple pickle files, and each file contains two lists that each hold multiple values.
This looks like the following:
dataset_folder:\
file.pickle
file_2.pickle
...
...
file_n.pickle
Each file_*.pickle contains a variable length list (list x and list y).
How can I load all the data to train the model without running into memory issues?
| By implementing a custom Dataset class (subclassing the one provided by PyTorch), we need to implement three methods so the PyTorch loader can work with your data:
__len__
__getitem__
__init__
Let's go through how to implement each one of them separately.
__init__
def __init__(self):
# Original Data has the following format
"""
dict_object =
{
"x":[],
"y":[]
}
"""
DIRECTORY = "data/raw"
self.directory = DIRECTORY # keep the path for later file reads
self.dataset_file_name = os.listdir(DIRECTORY)
self.dataset_file_name_index = 0
self.dataset_length =0
self.prefix_sum_idx = list()
# Loop over each file and calculate the length of overall dataset
# you might need to check if file_name is file
for file_name in os.listdir(DIRECTORY):
with (open(f'{DIRECTORY}/{file_name}', "rb")) as openfile:
dict_object = pickle.load(openfile)
curr_page_sum = len(dict_object["x"]) + len(dict_object["y"])
self.prefix_sum_idx.append(curr_page_sum)
self.dataset_length += curr_page_sum
# prefix sum so we have an idea of where each index appeared in which file.
for i in range (1,len(self.prefix_sum_idx)):
self.prefix_sum_idx[i] = self.prefix_sum_idx[i] + self.prefix_sum_idx[i-1]
assert self.prefix_sum_idx[-1] == self.dataset_length
self.x = []
self.y = []
As you can see above, the main idea is to use a prefix sum to "treat" the whole dataset as one contiguous index space, so whenever you need to access a specific index later, you simply look into prefix_sum_idx to see in which file this idx appears.
For example, say we need to access index 150. Thanks to the prefix sum, we now know that 150 lives in the second .pickle file. We still need a fast mechanism to find where that idx falls within prefix_sum_idx. This will be explained in __getitem__.
__getitem__
def read_pickle_file(self, idx):
file_name = self.dataset_file_name[idx]
with open(f'{self.directory}/{file_name}', "rb") as openfile:
dict_object = pickle.load(openfile)
self.x = dict_object['x']
self.y = dict_object['y'] # or apply some custom logic here
def __getitem__(self,idx):
# Similar to C++ std::upper_bound - O(log n)
temp = bisect.bisect_right(self.prefix_sum_idx, idx)
self.read_pickle_file(temp)
# subtract the cumulative count of all previous files to get the in-file offset
local_idx = idx - (self.prefix_sum_idx[temp - 1] if temp > 0 else 0)
return self.x[local_idx], self.y[local_idx]
Check the bisect_right() docs for details on how it works, but simply put, it returns the rightmost position in the sorted list at which the given element could be inserted while keeping the list sorted. In our approach, we're interested only in the following question: "which file should I access in order to get the appropriate data?". More importantly, it answers that in O(log n).
__len__
def __len__(self):
return self.dataset_length
In order to get the length of our dataset, we loop through each file and accumulate the results, as shown in __init__.
The full code sample goes like this:
import pickle
import torch
import torch.nn as nn
import numpy
import os
import bisect
from torch.utils.data import Dataset, DataLoader
from torch.nn import functional as F
class dataset(Dataset):
def __init__(self):
# Original Data has the following format
"""
dict_object =
{
"x":[],
"y":[]
}
"""
DIRECTORY = "data/raw"
self.directory = DIRECTORY # keep the path for later file reads
self.dataset_file_name = os.listdir(DIRECTORY)
self.dataset_file_name_index = 0
self.dataset_length =0
self.prefix_sum_idx = list()
# Loop over each file and calculate the length of overall dataset
# you might need to check if file_name is file
for file_name in os.listdir(DIRECTORY):
with (open(f'{DIRECTORY}/{file_name}', "rb")) as openfile:
dict_object = pickle.load(openfile)
curr_page_sum = len(dict_object["x"]) + len(dict_object["y"])
self.prefix_sum_idx.append(curr_page_sum)
self.dataset_length += curr_page_sum
# prefix sum so we have an idea of where each index appeared in which file.
for i in range (1,len(self.prefix_sum_idx)):
self.prefix_sum_idx[i] = self.prefix_sum_idx[i] + self.prefix_sum_idx[i-1]
assert self.prefix_sum_idx[-1] == self.dataset_length
self.x = []
self.y = []
def read_pickle_file(self, idx):
file_name = self.dataset_file_name[idx]
with open(f'{self.directory}/{file_name}', "rb") as openfile:
dict_object = pickle.load(openfile)
self.x = dict_object['x']
self.y = dict_object['y'] # or apply some custom logic here
def __getitem__(self,idx):
# Similar to C++ std::upper_bound - O(log n)
temp = bisect.bisect_right(self.prefix_sum_idx, idx)
self.read_pickle_file(temp)
# subtract the cumulative count of all previous files to get the in-file offset
local_idx = idx - (self.prefix_sum_idx[temp - 1] if temp > 0 else 0)
return self.x[local_idx], self.y[local_idx]
def __len__(self):
return self.dataset_length
large_dataset = dataset()
train_size = int (0.8 * len(large_dataset))
validation_size = len(large_dataset) - train_size
train_dataset, validation_dataset = torch.utils.data.random_split(large_dataset, [train_size, validation_size])
validation_loader = DataLoader(validation_dataset, batch_size=64, num_workers=4, shuffle=False)
train_loader = DataLoader(train_dataset,batch_size=64, num_workers=4,shuffle=False)
| https://stackoverflow.com/questions/72487788/ |
Getting random output every time on running Next Sentence Prediction code using BERT | Based on the code provided below, I am trying to run NSP (Next Sentence Prediction) on a custom dataset. The loss after training the model is different every time, and the model gives different accuracies every time. What am I missing or doing wrong?
pip install transformers[torch]
from transformers import BertTokenizer, BertForNextSentencePrediction
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')
with open('clean.txt', 'r') as fp:
text = fp.read().split('\n')
bag = [item for sentence in text for item in sentence.split('.') if item != '']
bag_size = len(bag)
import random
sentence_a = []
sentence_b = []
label = []
for paragraph in text:
sentences = [
sentence for sentence in paragraph.split('.') if sentence != ''
]
num_sentences = len(sentences)
if num_sentences > 1:
start = random.randint(0, num_sentences-2)
# 50/50 whether is IsNextSentence or NotNextSentence
if random.random() >= 0.5:
# this is IsNextSentence
sentence_a.append(sentences[start])
sentence_b.append(sentences[start+1])
label.append(0)
else:
index = random.randint(0, bag_size-1)
# this is NotNextSentence
sentence_a.append(sentences[start])
sentence_b.append(bag[index])
label.append(1)
inputs = tokenizer(sentence_a, sentence_b, return_tensors='pt', max_length=512, truncation=True, padding='max_length')
inputs['labels'] = torch.LongTensor([label]).T
class MeditationsDataset(torch.utils.data.Dataset):
def __init__(self, encodings):
self.encodings = encodings
def __getitem__(self, idx):
return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
def __len__(self):
return len(self.encodings.input_ids)
dataset = MeditationsDataset(inputs)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
from transformers import AdamW
# activate training mode
model.train()
# initialize optimizer
optim = AdamW(model.parameters(), lr=5e-6)
from tqdm import tqdm # for our progress bar
epochs = 2
for epoch in range(epochs):
# setup loop with TQDM and dataloader
loop = tqdm(loader, leave=True)
for batch in loop:
# initialize calculated gradients (from prev step)
optim.zero_grad()
# pull all tensor batches required for training
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
token_type_ids = batch['token_type_ids'].to(device)
labels = batch['labels'].to(device)
# process
outputs = model(input_ids, attention_mask=attention_mask,
token_type_ids=token_type_ids,
labels=labels)
# extract loss
loss = outputs.loss
# calculate loss for every parameter that needs grad update
loss.backward()
# update parameters
optim.step()
# print relevant info to progress bar
loop.set_description(f'Epoch {epoch}')
loop.set_postfix(loss=loss.item())
In the code below I am testing the model on unseen data:
from torch.nn import functional as f
from torch.nn.functional import softmax
prompt = "sentence 1 text"
prompt2 = "sentence 2 text"
output = tokenizer.encode_plus(prompt,prompt2, return_tensors="pt")
result = model(**output)[0]
prob = softmax(result, dim=1)
print(prob)
So, the values of prob and loss are different every single time for the same unseen data, which to the best of my knowledge should be the same.
| You need to put the model in evaluation mode: if you use layers such as dropout, they should be turned off while testing the model.
You can do this with
model.eval()
If you don't do this, you will get a different output and loss value each time, because the dropout in your model will drop different neurons each time.
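A minimal evaluation sketch (reusing the names from the question; torch.no_grad is an optional extra, not part of the original answer):
model.eval()               # disable dropout -> deterministic outputs
with torch.no_grad():      # no gradients needed at test time
    output = tokenizer.encode_plus(prompt, prompt2, return_tensors="pt")
    result = model(**output)[0]  # move tensors to the model's device if using CUDA
    prob = softmax(result, dim=1)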
| https://stackoverflow.com/questions/72489570/ |
How to keep track of original images after using transforms in a pytorch model? | I am working on an AI-related problem where I need to track several human body parts in videos. I create a DataLoader with my images and apply several transforms when calling my Dataset class.
Here is a code sample:
transform = transforms.Compose(
[
transforms.Resize(img_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
]
)
dataset = NamedClassDataset(annotation_folder_path=path, transform=transform, img_size=img_size, normalized=normalize)
train_set, validation_set = torch.utils.data.random_split(dataset, get_train_test_size(dataset,train_percent))
train_loader = DataLoader(dataset=train_set, shuffle=shuffle, batch_size=batch_size,num_workers=num_workers,pin_memory=pin_memory)
validation_loader = DataLoader(dataset=validation_set, shuffle=shuffle, batch_size=batch_size,num_workers=num_workers, pin_memory=pin_memory)
The problem is: after running my model, I display images with the predicted points in order to check their quality. But since the images are resized and normalized, I cannot recover their original quality and color. I would like to display the points on the original images instead of the transformed ones, and I want to know the usual way to do this.
I already have thought of two solutions with their respective disadvantages :
Reverting the transformations, but this is impossible once resize is called, since we lose information.
Returning an index as a third argument in the __getitem__ method of the NamedClassDataset (along with the image and labels). But pytorch methods expect only two outputs from __getitem__, which are (image, associated labels).
EDIT : Here is the getitem of my NamedClassDataset class :
def __getitem__(self, index):
(img_path, coords) = self.annotations.iloc[index].values
img = Image.open(img_path).convert("RGB")
w,h = img.size
# Normalize by img size
if self.img_size is not None:
if self.normalized:
coords = coords/(w,h) # Normalized
else:
n_h,n_w = self.img_size
coords = coords/(w,h)*(n_w,n_h) # Not normalized
y_coords = torch.flatten(torch.tensor(coords)).float() # Flatten outputs and convert from double to float32
if self.transform is not None:
img = self.transform(img)
return (img, y_coords)
| I managed to do the trick by declaring another dataset with the original images.
# Create the same dataset with untransformed images for visualization purposes
org_dataset = NamedClassDataset(annotation_folder_path="./12_labels/extracted_swimmers", transform=None, img_size=None, normalized=False)
viz_train_set, viz_validation_set = random_split(org_dataset, get_train_test_size(org_dataset,train_percent,_print_size=False), generator=torch.Generator().manual_seed(seed))
And here is what I do in the __getitem__ when transform=None :
if self.transform is not None:
tr_img = self.transform(org_img)
return (tr_img, y_coords)
return (org_img, y_coords)
I then have access to the original images by passing the viz sets as parameters. Note that this is a Dataset and not a DataLoader, so you need to take your batch size into account in order to match the predictions.
e.g. :
plot_predictions(viz_set[0+i*batch_size][0], preds[0])
I'll leave the question open, since I strongly believe that a more efficient answer can be provided.
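One sketch of a more efficient direction (an assumption on my part, not the original author's code): a Dataset is in fact allowed to return more than two values, so a thin wrapper can hand back the sample index alongside each item, and the untransformed image can then be looked up only for the samples being visualized:
from torch.utils.data import Dataset

class IndexedDataset(Dataset):  # hypothetical wrapper
    def __init__(self, base):
        self.base = base
    def __len__(self):
        return len(self.base)
    def __getitem__(self, index):
        img, y = self.base[index]
        return img, y, index  # the default collate function handles 3-tuples

# at visualization time, fetch the original image by the returned index:
# org_img, _ = org_dataset[index]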
| https://stackoverflow.com/questions/72490869/ |
Is SGD optimizer in PyTorch actually does Gradient Descent algorithm? | I'm working on trying to compare the converge rate of SGD and GD algorithms for the neural networks. In PyTorch, we often use SGD optimizer as follows.
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
for epoch in range(epochs):
running_loss = 0
for input_batch, labels_batch in train_dataloader:
input = input_batch
y_hat = model(input)
y = labels_batch
L = loss(y_hat, y)
optimizer.zero_grad()
L.backward()
optimizer.step()
running_loss += L.item()
My understanding of the optimizer here is that the SGD optimizer actually performs the mini-batch gradient descent algorithm, because we feed the optimizer one batch of data at a time. So, if we set the batch_size parameter to the size of all the data, the code actually performs gradient descent for the neural network.
Is my understanding correct?
| Your understanding is correct. SGD is just updating weights based on the gradient computed by backpropagation. The flavor of gradient descent that it performs is therefore determined by the data loader.
Gradient descent (aka batch gradient descent): Batch size equal to the size of the entire training dataset.
Stochastic gradient descent: Batch size equal to one and shuffle=True.
Mini-batch gradient descent: Any other batch size and shuffle=True. By far the most common in practical applications.
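In code, the three flavors differ only in how the DataLoader is configured (a short sketch reusing train_dataset from the question):
from torch.utils.data import DataLoader

# batch gradient descent: one batch = the entire training set
gd_loader = DataLoader(train_dataset, batch_size=len(train_dataset))

# stochastic gradient descent: one shuffled sample at a time
sgd_loader = DataLoader(train_dataset, batch_size=1, shuffle=True)

# mini-batch gradient descent: anything in between
mb_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)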
| https://stackoverflow.com/questions/72496224/ |
Pytorch Model trained with Lightining has loss stuck at a baseline | I am building a model for the CIFAR10 dataset.
My model starts with a high loss but only goes down to 2.3. Given there are 10 classes and cross-entropy uses the natural log, this corresponds to ~10% accuracy, i.e. random guessing (ln(10) ≈ 2.3). I am switching from keras/tensorflow and am lost as to what I am doing wrong. Any help/advice/resources would be appreciated.
Model
class Net(pl.LightningModule):
def __init__(self):
super().__init__()
self.conv1 = nn.Sequential(nn.Conv2d(3, 6, 3), nn.ReLU())
self.conv2 = nn.Sequential(nn.Conv2d(6, 12, 3), nn.ReLU())
self.conv3 = nn.Sequential(nn.Conv2d(12, 24, 3), nn.ReLU())
self.conv4 = nn.Sequential(nn.Conv2d(24, 128, 3), nn.ReLU())
self.conv5 = nn.Sequential(nn.Conv2d(128, 256, 3), nn.ReLU())
self.conv6 = nn.Sequential(nn.Conv2d(256, 256, 3), nn.ReLU())
self.conv7 = nn.Sequential(nn.Conv2d(256, 512, 3), nn.ReLU())
self.conv8 = nn.Sequential(nn.Conv2d(512, 512, 3), nn.ReLU())
self.conv9 = nn.Sequential(nn.Conv2d(512, 512, 3), nn.ReLU(), nn.MaxPool2d(2, 2))
self.fc1 = nn.Linear(25088, 1024)
self.fc2 = nn.Linear(1024, 512)
self.fc3 = nn.Linear(512, 512)
self.fc4 = nn.Linear(512, 84)
self.last = nn.Linear(84, 10)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = self.conv4(x)
x = self.conv5(x)
x = self.conv6(x)
x = self.conv7(x)
x = self.conv8(x)
x = self.conv9(x)
x = torch.flatten(x, 1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.relu(self.fc4(x))
x = self.last(x)
return x
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
return loss
def test_step(self,batch,batch_nb):
x,y = batch
loss = F.cross_entropy(self(x),y)
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
Then I simply do
trainer = pl.Trainer(max_epochs=15, gpus=1)
trainer.fit(net,train_dataloaders=trainloader)
Additionally, when printing out predictions, it always prints "truck".
My data loading is nearly identical to the one in the PyTorch tutorial.
transform = transforms.Compose([
transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
batch_size = 128
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=4)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
| I changed some of your code, and after 15 epochs I get a loss of 0.866.
For the CIFAR10 dataset, you do not need to create such a large network. (A larger network needs more time to train; maybe that is why the loss of your network decreases so slowly.)
# !pip install pytorch_lightning
import torch
from torch import nn
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
from torchvision import transforms
import pytorch_lightning as pl
class Net(pl.LightningModule):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
return optimizer
def training_step(self, batch, batch_idx):
x, y = batch
loss = F.cross_entropy(self(x), y)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
loss = F.cross_entropy(self(x),y)
return loss
# data
transform = transforms.Compose([
transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
batch_size = 128
trainset = CIFAR10(root='./data', train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=2)
testset = CIFAR10(root='./data', train=False,download=True, transform=transform)
testloader = DataLoader(testset, batch_size=batch_size, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat','deer', 'dog', 'frog', 'horse', 'ship', 'truck')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
# model
model = Net()
# training
trainer = pl.Trainer(max_epochs=15, gpus=1)
trainer.fit(model, trainloader, testloader)
Output:
Files already downloaded and verified
Files already downloaded and verified
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
------------------------------------
0 | conv1 | Conv2d | 456
1 | pool | MaxPool2d | 0
2 | conv2 | Conv2d | 2.4 K
3 | fc1 | Linear | 48.1 K
4 | fc2 | Linear | 10.2 K
5 | fc3 | Linear | 850
------------------------------------
62.0 K Trainable params
0 Non-trainable params
62.0 K Total params
0.248 Total estimated model params size (MB)
cuda:0
Epoch 14: 100%...470/470 [00:14<00:00, 32.84it/s, loss=0.864, v_num=1]
| https://stackoverflow.com/questions/72498161/ |
What each parameter returned by cuda.get_device_properties represents | _CudaDeviceProperties(name='NVIDIA GeForce GTX 1050 Ti', major=6, minor=1, total_memory=4095MB, multi_processor_count=6)
|
Name of the card: NVIDIA GeForce GTX 1050 Ti
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 4GB
Multiprocessors: 6
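The same fields can also be read programmatically (requires a visible CUDA device):
import torch

props = torch.cuda.get_device_properties(0)
print(props.name)                   # 'NVIDIA GeForce GTX 1050 Ti'
print(props.major, props.minor)     # CUDA capability: 6 1
print(props.total_memory)           # total global memory in bytes
print(props.multi_processor_count)  # 6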
| https://stackoverflow.com/questions/72500230/ |
Why is REINFORCE loss differentiable? | Here is some sample REINFORCE code found in the PyTorch distributions docs:
probs = policy_network(state)
m = Categorical(probs)
action = m.sample()
next_state, reward = env.step(action)
loss = -m.log_prob(action) * reward
loss.backward()
I don't understand why this loss is differentiable. In particular, how does m.log_prob(action) maintain the computational path of the network output probs? How are m.log_prob(action) and probs 'connected'?
Edit: I looked at the implementation of log_prob, and it doesn't even seem to reference self.probs anywhere; only self.logits.
| As @lejlot noted in the comments, if a Categorical object is constructed with probs rather than logits, then logits is later defined in terms of probs. Hence, when logits is used in log_prob, the gradients from probs are propagated. I missed this connection between logits and probs because it doesn't occur in __init__, but instead, logits is a lazy_property.
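A quick self-contained check (not from the original answer) confirming that gradients flow from log_prob back to probs:
import torch
from torch.distributions import Categorical

probs = torch.tensor([0.25, 0.75], requires_grad=True)
m = Categorical(probs)            # logits are derived lazily from probs
action = m.sample()
(-m.log_prob(action)).backward()  # backpropagates through logits into probs
print(probs.grad)                 # non-None: probs is part of the graph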
| https://stackoverflow.com/questions/72501444/ |
Save a Bert model with custom forward function and heads on Huggingface | I have created my own BertClassifier model, starting from a pretrained model and then adding my own classification head composed of different layers. After fine-tuning, I want to save the model using model.save_pretrained(), but when I load it back with from_pretrained and print it, I don't see my classifier head.
The code is the following. How can I save the whole structure of my model and make it fully accessible with AutoModel.from_pretrained('folder_path')? Thanks!
class BertClassifier(PreTrainedModel):
"""Bert Model for Classification Tasks."""
config_class = AutoConfig
def __init__(self,config, freeze_bert=True): #tuning only the head
"""
@param bert: a BertModel object
@param classifier: a torch.nn.Module classifier
@param freeze_bert (bool): Set `False` to fine-tune the BERT model
"""
#super(BertClassifier, self).__init__()
super().__init__(config)
# Instantiate BERT model
# Specify hidden size of BERT, hidden size of our classifier, and number of labels
self.D_in = 1024 #hidden size of Bert
self.H = 512
self.D_out = 2
# Instantiate the classifier head with some one-layer feed-forward classifier
self.classifier = nn.Sequential(
nn.Linear(self.D_in, 512),
nn.Tanh(),
nn.Linear(512, self.D_out),
nn.Tanh()
)
def forward(self, input_ids, attention_mask):
# Feed input to BERT
outputs = self.bert(input_ids=input_ids,
attention_mask=attention_mask)
# Extract the last hidden state of the token `[CLS]` for classification task
last_hidden_state_cls = outputs[0][:, 0, :]
# Feed input to classifier to compute logits
logits = self.classifier(last_hidden_state_cls)
return logits
configuration=AutoConfig.from_pretrained('Rostlab/prot_bert_bfd')
model = BertClassifier(config=configuration,freeze_bert=False)
Saving the model after fine-tuning
model.save_pretrained('path')
Loading the fine-tuned model
model = AutoModel.from_pretrained('path')
Printing the model after loading shows the following as the last layers, and my 2 linear layers are missing:
(output): BertOutput(
(dense): Linear(in_features=4096, out_features=1024, bias=True)
(LayerNorm): LayerNorm((1024,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.0, inplace=False)
(adapters): ModuleDict()
(adapter_fusion_layer): ModuleDict()
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=1024, out_features=1024, bias=True)
(activation): Tanh()
)
(prefix_tuning): PrefixTuningPool(
(prefix_tunings): ModuleDict()
)
)
| Maybe something is wrong with the config_class attribute inside your BertClassifier class. According to the documentation you need to create an additional config class which inherits from PretrainedConfig and initialises the model_type attribute with the name of your custom model.
The BertClassifier's config_class has to be consistent with your custom config class type.
Afterwards you can register your config and model with the following calls:
AutoConfig.register('CustomModelName', CustomModelConfigClass)
AutoModel.register(CustomModelConfigClass, CustomModelClass)
And load your finetuned model with AutoModel.from_pretrained('YourCustomModelName')
An incomplete example based on your code could look like this:
class BertClassifierConfig(PretrainedConfig):
model_type="BertClassifier"
class BertClassifier(PreTrainedModel):
config_class = BertClassifierConfig
# ...
configuration = BertClassifierConfig()
bert_classifier = BertClassifier(configuration)
# do your finetuning and save your custom model
bert_classifier.save_pretrained("CustomModels/BertClassifier")
# register your config and your model
AutoConfig.register("BertClassifier", BertClassifierConfig)
AutoModel.register(BertClassifierConfig, BertClassifier)
# load your model with AutoModel
bert_classifier_model = AutoModel.from_pretrained("CustomModels/BertClassifier")
Printing the model output should be similar to this:
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(classifier): Sequential(
(0): Linear(in_features=1024, out_features=512, bias=True)
(1): Tanh()
(2): Linear(in_features=512, out_features=2, bias=True)
(3): Tanh()
(4): Linear(in_features=2, out_features=512, bias=True)
(5): Tanh()
)
Hope this helps.
https://huggingface.co/docs/transformers/custom_models#registering-a-model-with-custom-code-to-the-auto-classes
| https://stackoverflow.com/questions/72503309/ |
How to modify inherited class for additional parameters? | I am working on a problem statement related to python classes:
I have two classes:
class MCC(object):
def __init__(self, problem_type, batch_size, dataset):
self.problem_type = problem_type
self.batch_size = batch_size
self.dataset = dataset
self.cls_weights = self.weights_calculation()
def weights_calculation(self):
class_weights = (1 - (self.dataset['labels'].value_counts().sort_index()/len(self.dataset))).values
return class_weights
second class
from transformers import Trainer
class WeightedTrainer(Trainer):
def compute_loss(self, model, inputs, return_outputs=False):
outputs = model(**inputs)
logits = outputs.get('logits')
labels = inputs.get('labels')
loss_func = nn.CrossEntropyLoss(weight = self.class_weights)
loss = loss_func(logits, labels)
return (loss, outputs) if return_outputs else loss
In the second class, I have to pass weight in nn.CrossEntropyLoss as in code loss_func = nn.CrossEntropyLoss(weight = self.class_weights)
I want to modify the inherited Trainer class to pass a new parameter custom_class_weight in MCC class.
What I have tried:
class MCC(object):
def __init__(self, problem_type, batch_size, dataset, model):
self.problem_type = problem_type
self.batch_size = batch_size
self.dataset = dataset
self.model = model
self.cls_weights = self.weights_calculation()
self.WeightedTrainer = WeightedTrainer(self.cls_weights)
self.trainer = self.WeightedTrainer(self.model)
def weights_calculation(self):
class_weights = (1 - (self.dataset['labels'].value_counts().sort_index()/len(self.dataset))).values
return class_weights
class WeightedTrainer(Trainer):
def __init__(self, custom_class_weight):
self.custom_class_weight = custom_class_weight
def compute_loss(self, model, inputs, return_outputs=False):
outputs = model(**inputs)
logits = outputs.get('logits')
labels = inputs.get('labels')
loss_func = nn.CrossEntropyLoss(weight = self.custom_class_weight)
loss = loss_func(logits, labels)
return (loss, outputs) if return_outputs else loss
This disturbs the inherited Trainer class's properties and thus gives me an error:
TypeError: 'WeightedTrainer' object is not callable
I also tried:
class WeightedTrainer(Trainer):
def __init__(self, custom_class_weight, **kwargs,):
self.custom_class_weight = custom_class_weight
super(WeightedTrainer, self).__init__(**kwargs)
def compute_loss(self, model, inputs, return_outputs=False):
outputs = model(**inputs)
logits = outputs.get('logits')
labels = inputs.get('labels')
loss_func = nn.CrossEntropyLoss(weight = self.custom_class_weight)
loss = loss_func(logits, labels)
return (loss, outputs) if return_outputs else loss
How can I create a child class using the Trainer class as a parent, and modify the inherited class while keeping all the default properties of the super class?
| Thanks to @JMA's valuable comment, here is the fix.
I was calling self.WeightedTrainer, which is not an attribute of MCC (and hence not a callable object). Full working code:
class MCC(object):
def __init__(self, problem_type, batch_size, dataset, model):
self.problem_type = problem_type
self.batch_size = batch_size
self.dataset = dataset
self.model = model
self.cls_weights = self.weights_calculation()
self.custom_trainer = WeightedTrainer
self.trainer = self.custom_trainer(custom_class_weight=self.cls_weights, model=self.model)
def weights_calculation(self):
class_weights = (1 - (self.dataset['labels'].value_counts().sort_index()/len(self.dataset))).values
return class_weights
class WeightedTrainer(Trainer):
def __init__(self, custom_class_weight, **kwargs,):
super().__init__(**kwargs)
self.custom_class_weight = custom_class_weight
def compute_loss(self, model, inputs, return_outputs=False):
outputs = model(**inputs)
logits = outputs.get('logits')
labels = inputs.get('labels')
loss_func = nn.CrossEntropyLoss(weight = self.custom_class_weight)
loss = loss_func(logits, labels)
return (loss, outputs) if return_outputs else loss
| https://stackoverflow.com/questions/72503765/ |
The equivalent of torch.nn.Parameter for LibTorch | I am trying to port a python PyTorch model to LibTorch in C++.
In python the line of code within a subclass of a torch.Module object
self.A = nn.Parameter(A) where A is a torch.tensor object with requires_grad=True.
What would be the equivalent of this for a torch::Tensor in a torch::nn::Module class in C++ ?
The autocomplete in my editor shows the classes ParameterDict, ParameterList,
ParameterDictImpl, ParameterListImpl, but no Parameter. Do I need to wrap it in a list of size 1, or is there something else I'm missing? I wasn't able to find what I needed from a google search or the documentation, but I wasn't sure precisely what to search for, to be honest.
| To register a parameter (or tensor which requires gradients) to a module, you could use:
m.register_parameter("A", torch::ones({20, 1, 5, 5}), true);
in libtorch.
| https://stackoverflow.com/questions/72503923/ |
Why do we need to inherit from nn.Module in PyTorch? | I am going through Udacity's Intro to Deep Learning with PyTorch course. In the neural network part, the instructor says that "in the init method, we need to call super; we need to do that because then PyTorch will know to register all the different layers and operations. If you don't do this part, it won't be able to track the things that you are adding to your network and it won't work." Could you kindly elaborate on the exact role of the super keyword here, and on what nn.Module provides that helps in "keeping track" of changes?
Further the Jupyter notebooks says the following about use of super,
class Network(nn.Module):
Here we're inheriting from nn.Module. Combined with
super().__init__() this creates a class that tracks the architecture
and provides a lot of useful methods and attributes. It is mandatory
to inherit from nn.Module when you're creating a class for your
network. The name of the class itself can be anything.
| class Network(nn.Module) means that you defined a Network class that inherits all the methods and properties from nn.Module (Network is the child class and nn.Module is the parent class).
By calling super().__init__(), Network's __init__ first runs nn.Module's __init__, which sets up the internal machinery (the registries for parameters, buffers and submodules) that lets PyTorch track everything you later assign to the module.
class Network(nn.Module):
def __init__(self):
super(Network, self).__init__()
For example when you make a model:
model = Network()
By calling this model, nn.Module will actually handle the call (through its __call__ machinery) and track the registered layers and parameters.
Visit this link if you want to see the nn.Module source code.
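A tiny sketch of what that tracking buys you - submodules assigned in __init__ are discovered automatically (if super().__init__() were skipped, the assignment itself would raise an error):
import torch.nn as nn

class Network(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)  # registered as a submodule automatically

net = Network()
print(list(net.named_parameters()))  # fc.weight and fc.bias are tracked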
| https://stackoverflow.com/questions/72505199/ |
Pytorch custom criterion depending on target | I'm doing a research project where I want to create a custom loss function depending on the targets, i.e. I want to penalize with BCEWithLogitsLoss plus a hyperparameter lambda. I only want to apply this hyperparameter if the model is not correctly detecting a class.
With more detail, I have a pretrained model that I want to retrain freezing some of the layers. This model detects faces in images with some probability. I want to penalize certain kind of images if they are incorrectly classified with a factor lambda (suppose that the images that need that penalization have a special character in the name or so)
From the source code of pytorch:
import torch.nn.modules.loss as l
class CustomBCEWithLogitsLoss(l._Loss):
def __init__(self, weight: Optional[Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean',
pos_weight: Optional[Tensor] = None) -> None:
super(BCEWithLogitsLoss, self).__init__(size_average, reduce, reduction)
self.register_buffer('weight', weight)
self.register_buffer('pos_weight', pos_weight)
self.weight: Optional[Tensor]
self.pos_weight: Optional[Tensor]
def forward(self, input: Tensor, target: Tensor) -> Tensor:
return F.binary_cross_entropy_with_logits(input, target,
self.weight,
pos_weight=self.pos_weight,
reduction=self.reduction)
Here, forward takes two tensors as inputs, so I don't know how to pass in the class of the images that I want to penalize with lambda. Adding lambda to the constructor is OK, but how do I do the forward pass if it only accepts tensors?
Edit:
To clarify the question, Suppose that I have a training/testing folder with the images. The files with the character @ in the filename are the ones that I want to classify correctly way more than the files without the character, with a factor lambda.
How can I tell, within the regular fashion of training a model in pytorch, that those files have to use a lambda penalization (let's say the loss function is lambda * BCEWithLogitsLoss) while the others do not? I'm using DataLoader.
| You can create a custom class for your dataset or instead build on top of an existing built-in dataset. For instance, you can use datasets.ImageFolder as a base class. The logic added on top is to identify if the filename contains the special token, for example @ and provide this information in the element returned by __getitem__. Looking at the parent __getitem__ function from datasets.DatasetFolder, a minimal working implementation could be:
class Dataset(datasets.ImageFolder):
def __init__(self, token, *args, **kwargs):
super().__init__(*args, **kwargs)
self.token = token
def __getitem__(self, index):
path, _ = self.samples[index] # retrieve path of instance
match = self.token in path # determine if there is a match
x, y = super().__getitem__(index) # call parent to get input and label
return x, int(match), y
Each dataset element consists of the input image tensor and a label 0, or 1 designating whether that input has the token in its filename.
>>> ds = Dataset(token='@', root='root_to_dataset')
Which you would use like any other dataset: with a DataLoader wrapper
>>> dl = DataLoader(ds, batch_size=2)
Now when iterating over this dataloader, you will have:
>>> for x, m, y in dl:
... # x is the batch of images (b, c, h, w)
... # m is the batch of {0,1} whether inputs have the pattern in their path (b,)
... # y is the batch of labels (b,)
Now that we have this, we need to apply the lambda factor on the loss terms. However, we can't assume here that all elements in a given minibatch will follow the criteria (i.e. have the pattern in their filename), therefore we need to handle this element-wise and not in reduced form.
If you take a look at the source file for built-in loss functions nn.modules.loss you will notice all loss functions are based on a class named _Loss which expects a reduction parameter. This will be useful for us.
First, consider switching off reduction on your loss function:
>>> bce = nn.BCEWithLogitsLoss(reduction='none') # provide additional args if necessary
Considering we have the mask of "matches" m which contains a 1 when the input has the token in their filename, and 0 otherwise. And given a lambda factor lamb with which we want to weigh the elements where m=1, we can provide the following coefficient to our loss term to perform the desired operation:
>>> coeff = lamb*m + 1-m
# if m=0 => coeff=1;
# if m=1 => coeff=lamb;
To apply the loss strategy properly we simple point-wise multiply coeff with the unreduced loss term (which is shaped (b,)).
>>> weighted = coeff*bce_loss
All in all, this would look like this:
>>> for x, m, y in dl:
... y_pred = model(x)
... bce_loss = bce(y_pred, y)
... coeff = lamb*m + 1-m
... bce_weighted = torch.mean(coeff*bce_loss)
... bce_weighted.backward()
| https://stackoverflow.com/questions/72510225/ |
Pytorch: assigns values to a tensor by index | How to assign values to a Tensor by index like Numpy in python?
In numpy, we can fill values to an array by index:
array = np.zeros((10, 8, 3), dtype=np.float32)
for n in range(10):
for k in range(4):
array[n, k, :] = x, y, -2 # x, y are different values in every loop
array[n, 4 + k, :] = x, y, 0.4
If there is a zeros tensor using torch.zeros, how to fill values to it in Pytorch by the indexes?
| Group the values to a tensor and then assign:
import torch
array = torch.zeros((10, 8, 3), dtype=torch.float32)
for n in range(10):
for k in range(4):
x, y = 1, -1
array[n, k, :] = torch.tensor([x, y, -2]) # x, y are different values in every loop
array[n, 4 + k, :] = torch.tensor([x, y, 0.4])
| https://stackoverflow.com/questions/72513114/ |
how to improve rotation in a spatial transformation network | I am applying a spatial transformation network to a dataset I created.
The dataset consists of boots and shoes that are slightly rotated (random rotation between 10° and 30°), as shown in the dataset images figure. I trained my model on the FashionMNIST dataset and also tested it. I expected to get images that are aligned, but I got something like this:
This is how my CNN and STN look:
class STN_CNN(nn.Module):
def __init__(self):
super(STN_CNN, self).__init__()
self.cnn = nn.Sequential(
nn.Conv2d(1, 10, kernel_size=3, stride=1, padding=0),
nn.MaxPool2d(2, stride=2),
nn.ReLU(),
nn.Conv2d(10, 16, kernel_size=3, stride=1, padding=0),
nn.MaxPool2d(2, stride=2),
nn.ReLU()
)
self.classifier = nn.Sequential(
nn.Linear(16*2*2, 32),
nn.ReLU(),
nn.Linear(32, 10)
)
self.localization = nn.Sequential(
nn.Conv2d(1, 20, kernel_size=5, stride=1, padding=0),
nn.MaxPool2d(2, stride=2),
nn.ReLU(),
nn.Conv2d(20, 20, kernel_size=5, stride=1, padding=0),
nn.ReLU()
)
self.fc_loc = nn.Sequential(
nn.Linear(20*8*8, 20),
nn.ReLU(),
nn.Linear(20, 6)
)
self.AvgPool = nn.AvgPool2d(2, stride=2)
self.fc_loc[2].weight.data.zero_()
self.fc_loc[2].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
def stn(self, x):
x_loc = self.localization(x)
x_loc = x_loc.view(-1, 20*8*8)
theta = self.fc_loc(x_loc)
theta = theta.view(-1, 2, 3)
grid = F.affine_grid(theta, x.size())
x = F.grid_sample(x, grid)
x = self.AvgPool(x)
return x
def forward(self, x):
x = self.stn(x)
x = self.cnn(x)
x = x.view(-1, 16*2*2)
x = self.classifier(x)
return x
I even trained on my dataset for 100 epochs, but I'm not getting any improvement.
Could someone tell me how to improve the rotation part of my STN? Or, if I'm doing something wrong, just let me know. I'll be really happy if someone could help.
Thank you in advance.
| The problem was that I trained the model with one pixel format and tested with another. For example, imagine that the pixels in each training image are normalized between 0 and 1, while the test images are not normalized (e.g. values between 0 and 255). matplotlib can plot both formats, so the mismatch was not visible, but it badly affected the results.
| https://stackoverflow.com/questions/72516757/ |
How to inspect values in binarized FairSeq datasets? | Running the fairseq-preprocess script produces binary files with integer indices corresponding to token ids in a dictionary.
When I no longer have the original tokenized texts, what is the simplest way to explore the binarized dataset? The documentation does not say much about how a dataset can be loaded for debugging purposes.
| I worked around this by loading the trained model and using it to decode the binarized sentences back to strings:
from fairseq.models.transformer import TransformerModel
model_dir = ???
data_dir = ???
model = TransformerModel.from_pretrained(
model_dir,
checkpoint_file='checkpoint_best.pt',
data_name_or_path=data_dir,
bpe='sentencepiece',
sentencepiece_model=model_dir + '/sentencepiece.joint.bpe.model'
)
model.task.load_dataset('train')
data_bin = model.task.datasets['train']
train_pairs = [
(model.decode(item['source']), model.decode(item['target']))
for item in data_bin
]
| https://stackoverflow.com/questions/72517066/ |
Need clear concept of the dimensions of output and hidden from LSTM layers | I know that the output carries the hidden states of the last layer for all time steps, and that hidden holds the last time step's hidden states for all layers.
In this setting, each document has 850 tokens. Each token is embedded into 100 dimensions. I use a 2-layer LSTM with a 100-dim hidden state.
I thought it would take one token per time step and produce a 100-dim hidden state. For 850 tokens in a document it would produce output = [1, 850, 100], hidden = [1, 2, 100] and cell = [1, 2, 100]. But the hidden and cell are [2, 850, 100].
input_dim = len(tok2indx) # size of the vocabulary
emb_dim = 100 # Embedding of each word
hid_dim = 100 # The dimention of each hiddenstate comming out from a time step
n_layers = 2 # LSTM layers
class Encoder(nn.Module):
def __init__(self):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout, device=device)
self.dropout = nn.Dropout(dropout)
def forward(self, X):
embedded = self.embedding(X).to(device)
outputs, (hidden, cell) = self.rnn(embedded)
return outputs, hidden, cell
If the encoder is passed a single document
enc = Encoder()
encd = enc.forward(train_x[:1])
print(encd[0].shape, encd[1].shape, encd[2].shape)
Output:
torch.Size([1, 850, 100]) torch.Size([2, 850, 100]) torch.Size([2, 850, 100])
With ten documents
encd = enc.forward(train_x[:10])
print(encd[0].shape, encd[1].shape, encd[2].shape)
Output:
torch.Size([10, 850, 100]) torch.Size([2, 850, 100]) torch.Size([2, 850, 100])
| What's tripping you up is the input format to LSTM. The default input shape to an LSTM layer is sequence (L), batch (N), features (H), while in your code you are sending input as NLH (batch, sequence, features). To use this correctly, set the parameter batch_first=True (on the LSTM layer); then the input and output will be as you expect.
But there is a catch here too. Only the output (1st of the outputs) will be NLH while both hidden and cell (2nd and 3rd of the outputs) will still be LNH format.
The second thing to note here is that the hidden and cell tensors have a first dimension equal to the number of layers, i.e. 2 in your example (each layer keeps its own hidden state), hence the shape [2, 850, 100] instead of [1, 850, 100].
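A minimal sketch of that fix (only the LSTM construction changes; batch_first is a standard nn.LSTM argument):
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout, batch_first=True, device=device)
# outputs: (N, L, H) = (1, 850, 100) for a single document
# hidden, cell: still (num_layers, N, H) = (2, 1, 100)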
| https://stackoverflow.com/questions/72518461/ |
generate unique row index in a 2D tensor as an output 1D tensor with PyTorch | When I implement the target for in-batch multi-class classification in PyTorch (version 1.6), I have the following problem.
I have a variable D <class 'torch.Tensor'> (related to label description) of size torch.Size([16, 128]), i.e. [data_size, token_id_size].
The original idea was to generate a target tensor of torch.Size([16]), each value unique and corresponding to a row in D, from 0 to 15 as [0,1,2,...,15], for in-batch multi-class classification.
This can be done using target = torch.LongTensor(torch.arange(16))
But there may be repeated, non-unique rows in D, so I would like identical rows in D to share the same index in target. For example, if rows 0, 1 and 8 of D have the same token_ids (vector) and all other rows differ from each other, then target should be [0,0,2,3,4,5,6,0,8,9,10,11,12,13,14,15] or [0,0,1,2,3,4,5,0,6,7,8,9,10,11,12,13], where the former still uses indices in 0-15 (but without 1 and 7) and the latter uses all indices in 0-13.
How can I implement this?
| See answers of the simplified question (i) generate 1D tensor as unique index of rows of an 2D tensor and (ii) generate 1D tensor as unique index of rows of an 2D tensor (keeping the order and the original index), which address the problem of this question.
But in the end these did not seem useful for improving the contrastive multi-class classification.
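For reference, a minimal sketch of the relabeling used in those linked answers (note that torch.unique with dim=0 sorts the unique rows, so the labels are consistent across identical rows but need not match the original row indices):
_, target = D.unique(dim=0, return_inverse=True)
# identical rows of D receive identical labels in `target` (shape [16])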
| https://stackoverflow.com/questions/72518965/ |
What replacement variables are available to filename in the ModelCheckpoint callback in Pytorch Lightning | What replacement variables {replace_me} are available to populate the filename attribute on the ModelCheckpoint callback in PyTorch Lightning?
I want to pass the object_id hparam and the checkpoint version (the same as used to create the parent folder) ver like so:
pl.callbacks.ModelCheckpoint(filename='weights_{object_id:03}_{ver}_{epoch}-{step}')
# Saves to file:
# weights_object_id=000_ver=0_epoch=3-step=27
| Any metrics which have been logged, plus {epoch} and {step}, can be used, as shown in the ModelCheckpoint documentation.
Metrics can be logged during training (in your model class which extends LightningModule) like:
def training_step(self, batch, batch_idx):
loss, logs = self.step(batch, batch_idx)
self.log_dict(
{f"train_{k}": v for k, v in logs.items()},
on_step=True,
on_epoch=True
)
return loss
If you want additional variables, which aren't logged, extend ModelCheckpoint and override the format_checkpoint_name() method.
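For static values such as hyperparameters, a simple workaround (a sketch, assuming object_id and ver are plain Python values available when the callback is constructed) is to bake them into the filename with an f-string and leave only the logged placeholders for Lightning to fill in:
ckpt = pl.callbacks.ModelCheckpoint(
    filename=f'weights_{object_id:03}_{ver}_' + '{epoch}-{step}'  # {epoch}/{step} filled by Lightning
)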
| https://stackoverflow.com/questions/72519784/ |
AttributeError: module 'torch.utils' has no attribute 'make_grid' | I tried following along with the PyTorch transfer learning tutorial and found it raised this error when I called make_grid.
| It's because it's out of date. As of 1.11.0 it has been moved to torchvision.utils.make_grid instead of torch.utils.make_grid
https://pytorch.org/vision/main/generated/torchvision.utils.make_grid.html?highlight=make_grid#torchvision.utils.make_grid
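A minimal usage sketch (here images is assumed to be a batch tensor of shape (B, C, H, W)):
from torchvision.utils import make_grid
grid = make_grid(images)  # a single (C, H', W') image tiling the batch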
| https://stackoverflow.com/questions/72523160/ |
Dropout in custom LSTM in pytorch | I have built a custom peephole LSTM, and I want to imitate the dropout part of the built-in nn.LSTM. So, how do I add dropout the way this initialization, nn.LSTM(input_size, hidden_size, dropout=0.3), does? My idea is to just apply a normal dropout right before returning the output, like this:
# init method
self.dropout = nn.Dropout(0.3)
# forward method
hidden_seq = self.dropout(hidden_seq)
return hidden_seq, (h_t, c_t)
I just want to make sure that this is the right way. If not, what should I do?
| nn.LSTM(... dropout=0.3) applies a Dropout layer on the outputs of each LSTM layer except the last layer. You can have multiple stacked layers by passing parameter num_layers > 1. If you want to add a dropout to the final layer (or if LSTM has only one layer), you have to add it as you are doing now.
If you want to replicate what LSTM dropout does (which is only in case of multiple layers), you can stack LSTM layers manually and add a dropout layer in between.
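A minimal sketch of that stacking idea (PeepholeLSTM here is a hypothetical name for your custom single-layer module, assumed to process a full sequence and return the outputs plus the final states):
# in __init__
self.lstm1 = PeepholeLSTM(input_size, hidden_size)
self.drop = nn.Dropout(0.3)
self.lstm2 = PeepholeLSTM(hidden_size, hidden_size)
# in forward
out, _ = self.lstm1(x)
out = self.drop(out)  # dropout between the stacked layers, as nn.LSTM does
out, (h_t, c_t) = self.lstm2(out)
return out, (h_t, c_t)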
| https://stackoverflow.com/questions/72524303/ |
Tensorboard: How to view pytorch model summary? | I have the following network.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter
class Net(nn.Module):
def __init__(self,input_shape, num_classes):
super(Net, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=(4,4)),
nn.Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=(4,4)),
)
x = self.conv(torch.rand(input_shape))
in_features = np.prod(x.shape)
self.classifier = nn.Sequential(
nn.Linear(in_features=in_features, out_features=num_classes),
)
def forward(self, x):
x = self.conv(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
net = Net(input_shape=(1,64,1292), num_classes=4)
print(net)
This prints the following:-
Net(
(conv): Sequential(
(0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): ReLU(inplace=True)
(5): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=320, out_features=4, bias=True)
)
)
However, I am trying various experiments and I want to keep track of network architecture on Tensorboard. I know there is a function writer.add_graph(model, input_to_model) but it requires input, or at least its shape should be known.
So, I tried writer.add_text("model", str(model)), but formatting is screwed up in tensorboard.
My question is, is there a way to at least visualize the way I can see by using print function in the tensorboard?
| I can see everything is going right; there is just a formatting issue. TensorBoard understands markdown, so you can replace \n with <br/> and spaces with &nbsp;.
Here is a detailed walkthrough. Suppose you have the following model:-
import numpy as np
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter
class Net(nn.Module):
def __init__(self,input_shape, num_classes):
super(Net, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=(4,4)),
nn.Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=(4,4)),
)
x = self.conv(torch.rand(input_shape))
in_features = np.prod(x.shape)
self.classifier = nn.Sequential(
nn.Linear(in_features=in_features, out_features=num_classes),
)
def forward(self, x):
x = self.conv(x)
x = x.view(x.size(0), -1)
x = self.classifier(x)
return x
net = Net(input_shape=(1,64,1292), num_classes=4)
print(net)
This prints the following and if can actually show it in the Tensorboard.
Net(
(conv): Sequential(
(0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): ReLU(inplace=True)
(2): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False)
(3): Conv2d(64, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): ReLU(inplace=True)
(5): MaxPool2d(kernel_size=(4, 4), stride=(4, 4), padding=0, dilation=1, ceil_mode=False)
)
(classifier): Sequential(
(0): Linear(in_features=320, out_features=4, bias=True)
)
)
There is an add_graph(model, input) function in SummaryWriter, but you must create a dummy input, and in some cases it is difficult to know its shape. Instead, do the following:
writer = SummaryWriter()
model_summary = str(model).replace('\n', '<br/>').replace(' ', '&nbsp;')
writer.add_text("model", model_summary)
writer.close()
The above produces the following text in TensorBoard:
| https://stackoverflow.com/questions/72526514/ |
How padding=zeros works in pytorch in functional.conv1d | The following code gives an output of shape (1, 1, 3) when the shape of xodd is (1, 1, 2). The given kernel shape is (112, 1, 1).
from torch.nn import functional as F
output = F.conv1d(xodd, kernel, padding=zeros)
How does padding=zeros work?
Also, how can I write equivalent code in TensorFlow so that the output is the same as the above output?
| What is padding=zeros?
If we set padding=zeros (i.e. padding=0), no extra values are added at the right and the left of the tensor.
Padding=0:
from torch.nn import functional as F
import torch
inputs = torch.randn(33, 16, 6) # (minibatch,in_channels,features)
filters = torch.randn(20, 16, 5) # (out_channels, in_channels, kernel_size)
out_tns = F.conv1d(inputs, filters, stride=1, padding=0)
print(out_tns.shape)
# torch.Size([33, 20, 2]) # (minibatch,out_channels,(features-kernel_size+1))
Padding=2:(We want to add two numbers at the right and the left of the tensor)
inputs = torch.randn(33, 16, 6) # (minibatch,in_channels,features)
filters = torch.randn(20, 16, 5) # (out_channels, in_channels, kernel_size)
out_tns = F.conv1d(inputs, filters, stride=1, padding=2)
print(out_tns.shape)
# torch.Size([33, 20, 6]) # (minibatch,out_channels,(features-kernel_size+1+2+2))
How can I write an equivalent code in tensorflow:
import tensorflow as tf
input_shape = (33, 6, 16)
x = tf.random.normal(input_shape)
out_tf = tf.keras.layers.Conv1D(filters = 20,
kernel_size = 5,
strides = 1,
input_shape=input_shape[1:])(x)
print(out_tf.shape)
# TensorShape([33, 2, 20])
# If you want that tensor have shape exactly like pytorch you can transpose
tf.transpose(out_tf, [0, 2, 1]).shape
# TensorShape([33, 20, 2])
| https://stackoverflow.com/questions/72527642/ |
generate 1D tensor as unique index of rows of an 2D tensor | Let's say we transform a 2D tensor to a 1D tensor by giving each, different row a different index, from 0 to the number of rows - 1.
[[1,2],[1,3],[1,4]] -> [0,1,2]
But if there are identical rows, then we repeat the index, like below.
[[1,2],[1,2],[1,4]] -> [0,0,2]
[[1,2],[1,3],[1,2]] -> [0,1,0]
How to implement this on PyTorch?
| You can do so using torch.Tensor.unique and returning the inverse which provides the indices out of the box:
>>> _, i = x.unique(dim=0, return_inverse=True)
>>> i # first example
tensor([0, 1, 2])
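And for the duplicate-row cases (the values below reflect the sorted order torch.unique uses, so identical rows get identical indices even though the numbering differs from the question's examples):
>>> x = torch.tensor([[1, 2], [1, 2], [1, 4]])
>>> _, i = x.unique(dim=0, return_inverse=True)
>>> i
tensor([0, 0, 1])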
| https://stackoverflow.com/questions/72529669/ |
Is AI Gym's action and state data normalized? | I am trying to implement a DDPG agent to control the Gym's Pendulum.
Since I am new to gym, I was wondering if the state data collected via env.step(action) is already normalized or I should do that manually. Also, should action be normalized or in the [-2, 2] range?
Thanks
| env.step(action) returns the tuple (observation, reward, done, info). If you're referring to the data in observation, then the answer is no, it's not normalized (all in accordance with the observation space section: three coordinates, with values in [-1; 1] for the first two and [-8; 8] for the last one). action should be in the [-2; 2] range, though it'll additionally be clipped to this range.
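A minimal sketch of normalizing by hand (the bounds come from the Pendulum observation and action spaces mentioned above; obs and action are assumed to be NumPy arrays from your agent loop):
import numpy as np
obs_high = np.array([1.0, 1.0, 8.0])
norm_obs = obs / obs_high                # each component now roughly in [-1, 1]
action = np.clip(action, -2.0, 2.0)      # keep the action inside the valid range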
| https://stackoverflow.com/questions/72529895/ |
Using matrix inversion in Torch versus Numpy in Gaussian Process Regression (previous question solved) | I am building a Torch-based Gaussian Process model which allows me to use custom kernels and take advantage of automatic derivatives. However, I find that even in the simplest case, the NumPy-based implementation gave a very different result from the Torch-based one.
Allow me to attach the code:
import torch
import matplotlib.pyplot as plt
import numpy as np
length_scale, sigma_s, sigma_n = (0.8, 1.0, 0.1)
def secret_function(x, noise=0.0):
return torch.sin(x) + noise * torch.randn(x.shape)
X = 10 * torch.rand(40, 1) - 4
X = X.reshape(-1,1)
Y = secret_function(X, noise=1e-1)
x = torch.linspace(-8, 8, 100).reshape(-1, 1)
y = secret_function(x)
x_all = torch.cat([X, x],0)
# The following cdist computes the 2-norm distance, not the squared one
K = ((-0.5)*torch.cdist(x_all, x_all, p =2)/length_scale**2).exp()
K = K * sigma_s**2
L1 = X.shape[0]
K_X = K[:L1,:L1]
K_x = K[L1:,L1:]
K_xX = K[L1:,:L1]
K_Xx = K[:L1,L1:]
K_inv = torch.linalg.inv(K_X + sigma_n**2*torch.eye(L1))
tmp = torch.matmul(K_xX, K_inv)
mu = torch.matmul(tmp, Y)
covar = K_x - torch.matmul(tmp, K_Xx)
var = torch.diagonal(covar, 0)
std = torch.sqrt(var).reshape(-1,1)
plt.figure(figsize=(12, 6))
plt.plot(x.numpy(), mu.numpy(),'b-.')
plt.plot(X.numpy(), Y.numpy(), 'r.')
plt.fill_between(x.numpy().flat, (mu - 2 * std).numpy().flat, (mu + 2 * std).numpy().flat, color="#dddddd")
And here is what I get for the regression result
An exponential (non-squared) kernel is effectively being used, which makes the predictive mean non-smooth. Below I try the same regression, but using the NumPy library.
X = X.numpy()
Y = Y.numpy()
x = x.numpy()
y = y.numpy()
x_all = np.vstack([X,x])
dist2 = (x_all**2).sum(1)[:,None] + (x_all**2).sum(1) - 2*x_all.dot(x_all.T)
K_np = sigma_s**2 * np.exp((-0.5)*dist2/length_scale**2)
K_Xnp = K_np[:L1,:L1]
K_xnp = K_np[L1:,L1:]
K_xXnp = K_np[L1:,:L1]
K_Xxnp = K_np[:L1,L1:]
K_invnp = np.linalg.inv(K_Xnp + sigma_n**2*np.eye(L1))
tmp_np = np.matmul(K_xXnp, K_invnp)
mu_np = np.matmul(tmp_np, Y)
covar_np = K_xnp - np.matmul(tmp_np, K_Xxnp)
var_np = np.diagonal(covar_np, 0)
std_np = np.sqrt(var_np).reshape(-1,1)
plt.figure(figsize=(12, 6))
plt.plot(x, mu_np,'b-.')
#plt.plot(x, y,'k-')
plt.plot(X, Y, 'r.')
plt.fill_between(x.flat, (mu_np - 2 * std_np).flat, (mu_np + 2 * std_np).flat, color="#dddddd")
Now I got a much smoother prediction from the numpy-based regression.
What can I do in the torch version to get the same smooth result as NumPy's? (Lesson learned: cdist computes the p-norm distance, not the squared distance.)
| It looks like one time you are computing pairwise distances
torch.cdist(x_all, x_all, p =2)
while the other time you're using squared distances
dist2 = (x_all**2).sum(1)[:,None] + (x_all**2).sum(1) - 2*x_all.dot(x_all.T)
But note that the latter is about the most expensive and unstable way of computing it; I'd recommend using scipy.spatial.distance.cdist or at least doing something like
dist2 = ((x_all[None, :, :] - x_all[:, None, :])**2).sum(axis=-1)
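Correspondingly, a sketch of the torch-side fix is to square the pairwise distances before exponentiating, so both versions use the same squared-exponential kernel:
K = ((-0.5) * torch.cdist(x_all, x_all, p=2).pow(2) / length_scale**2).exp()
K = K * sigma_s**2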
| https://stackoverflow.com/questions/72539195/ |
Pythonic squeeze and unsqueeze data dimensions | To give the data the right dimensions for a PyTorch model, I use the squeeze and unsqueeze functions like this:
inps = torch.FloatTensor(data[0])
tgts = torch.FloatTensor(data[1])
tgts = torch.unsqueeze(tgts, -1)
tgts = torch.unsqueeze(tgts, -1)
tgts = torch.unsqueeze(tgts, -1)
inps = torch.unsqueeze(inps, -1)
inps = torch.unsqueeze(inps, -1)
inps = torch.unsqueeze(inps, -1)
and this:
inps = torch.FloatTensor(data[0])
tgts = torch.FloatTensor(data[1])
tgts = torch.unsqueeze(tgts, 1)
tgts = torch.unsqueeze(tgts, 1)
tgts = torch.unsqueeze(tgts, 1)
inps = torch.unsqueeze(inps, 1)
inps = torch.unsqueeze(inps, 1)
inps = torch.unsqueeze(inps, 1)
But of course, I'm kinda embarrassed to have this repetitive part in my code. Is there another way, more pythonic and clean, to write this code, please?
| You can use torch.Tensor.view like below:
how_many_unsqueeze = 3
extra_dims = (1,) * how_many_unsqueeze
# extra_dims -> (1,1,1)
inps.view(-1, *extra_dims) # -> (-1,1,1,1)
tgts.view(-1, *extra_dims) # -> (-1,1,1,1)
You can also use torch.reshape as below.
Note that after reshaping as in your question, you will need to reshape back to the original shape afterwards.
Instead of unsqueeze:
inps = torch.reshape(inps, (len(data[0]),1,1,1))
tgts = torch.reshape(tgts, (len(data[1]),1,1,1))
Instead of squeeze:
inps = torch.reshape(inps, (len(data[0]),))
tgts = torch.reshape(tgts, (len(data[1]),))
| https://stackoverflow.com/questions/72543572/ |
Pytorch: Optimizer got an empty parameter list | I am new to deep learning with PyTorch and am trying to build a binary classifier model. I have tried some of the solutions here on Stack Overflow but I can't seem to solve it. Maybe it is due to the nature of my code. Can someone figure out what could be the cause of this error in my code?
Here is my code
import torch
import torch.nn as nn
import numpy as np
from sklearn.datasets import make_blobs
import matplotlib.pyplot as pyp
# creating a dummy dataset from the make_blobs dataset
number_of_samples=5000
#divide the dataset into training(80%) and testing(20%)
training_number=int(number_of_samples*0.8)
#creating the dummy datasest
x,y=make_blobs(n_samples=number_of_samples,centers=2,n_features=64,cluster_std=10,random_state=2020)
y=y.reshape(-1,1)
#converting the numpy arrays into torch tensors
x,y=torch.from_numpy(x),torch.from_numpy(y)
x,y=x.float(),y.float()
#splitting the datasets into training and testing
x_train,x_test=x[:training_number],x[training_number:]
y_train,y_test=y[:training_number],y[training_number:]
#printing the shapes of each dataset
print("x_train shape:",x_train.shape)
print("x_test shape:",x_test.shape)
print("y_train shape:",y_train.shape)
print("y_test shape:",y_test.shape)
#a class to define the neural network us torch nn module
#neural network will have 3 hidden layers and 1 output layer
#hidden layers will have 64,256 and 1024 neurons
#output layer will have a single neuron
class neuralnetwork(nn.Module):
def _init_(self):
super().__init__()
torch.manual_seed(2020)
self.fc1 = nn.Linear(64, 256)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(256, 1024)
self.relu2 = nn.ReLU()
self.out = nn.Linear(1024, 1)
self.final = nn.Sigmoid()
def forward(self, x):
op = self.fc1(x)
op = self.relu1(op)
op = self.fc2(op)
op = self.relu2(op)
op = self.out(op)
y = self.final(op)
return y
#defining the loss,optimizer and training function for the neural network
def train_network(model,optimizer,loss_function,num_epochs,batch_size,x_train,y_train):
#start model training
model.train()
loss_for_every_epoch=nn.ModuleList()
for epoch in range(num_epochs):
train_loss=0.0
for i in range(0,x_train.shape[0],batch_size):
#extract train batch from x and y
input_data=x_train[i:min(x_train.shape[0]),i+batch_size]
labels=y_train[i:min(y_train.shape[0]),i+batch_size]
#set gradients to zero before beginning optimization
optimizer.zero_grad()
#forwad pass
output_data=model(input_data)
#calculate loss
loss=loss_function(output_data,labels)
#backpropagate
loss.backward()
#update weights
optimizer.step()
train_loss+=loss.item()*batch_size
print("Epoch: {} - Loss:{:.4f}".format(epoch+1,train_loss ))
loss_for_every_epoch.extend([train_loss])
#predict
y_test_prediction=model(x_test)
a=np.where(y_test_prediction>0.5,1,0)
return loss_for_every_epoch
#create an object of the class
model=neuralnetwork()
#define the loss function
loss_function = nn.BCELoss()#binary cross entropy loss function
#define optimizer
adam_optimizer=torch.optim.Adam(params=model.parameters(),lr=0.001)
#define epochs and batch size
number_of_epochs=100
batch_size=16
#Calling the function for training and pass model, optimizer, loss and related paramters
adam_loss=train_network(model,adam_optimizer,loss_function,number_of_epochs,batch_size,x_train,y_train)
I get the error:
ValueError: optimizer got an empty parameter list
The error is mainly generated from this section of code
#create an object of the class
model=neuralnetwork()
#define the loss function
loss_function = nn.BCELoss()#binary cross entropy loss function
#define optimizer
adam_optimizer=torch.optim.Adam(params=model.parameters(),lr=0.001)
#define epochs and batch size
number_of_epochs=100
batch_size=16
#Calling the function for training and pass model, optimizer, loss and related paramters
adam_loss=train_network(model,adam_optimizer,loss_function,number_of_epochs,batch_size,x_train,y_train)
What could be the cause in my code?
Here is the full stack trace:
Traceback (most recent call last)
g:\My Drive\CODE\pythondatascience\simpleneuralnetwork.ipynb Cell 7' in <cell line: 6>()
4 loss_function = nn.BCELoss()#binary cross entropy loss function
5 #define optimizer
----> 6 adam_optimizer=torch.optim.Adam(params=model.parameters(),lr=0.001)
7 #define epochs and batch size
8 number_of_epochs=100
File c:\Users\DAVE\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\optim\adam.py:81, in Adam.__init__(self, params, lr, betas, eps, weight_decay, amsgrad, maximize)
78 raise ValueError("Invalid weight_decay value: {}".format(weight_decay))
79 defaults = dict(lr=lr, betas=betas, eps=eps,
80 weight_decay=weight_decay, amsgrad=amsgrad, maximize=maximize)
---> 81 super(Adam, self).__init__(params, defaults)
File c:\Users\DAVE\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\optim\optimizer.py:49, in Optimizer.__init__(self, params, defaults)
47 param_groups = list(params)
48 if len(param_groups) == 0:
---> 49 raise ValueError("optimizer got an empty parameter list")
50 if not isinstance(param_groups[0], dict):
51 param_groups = [{'params': param_groups}]
ValueError: optimizer got an empty parameter list
| It should be def __init__, not def _init_, in the neuralnetwork class. Because of the typo your __init__ is never run, so the model object is never properly initialized and thus has no parameters.
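A minimal sketch of the fix (double underscores on both sides):
class neuralnetwork(nn.Module):
    def __init__(self):  # was: def _init_(self)
        super().__init__()
        torch.manual_seed(2020)
        self.fc1 = nn.Linear(64, 256)
        # ... rest of the layers as before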
| https://stackoverflow.com/questions/72545656/ |
Initializing weights of each module in a Sequential Module in LibTorch | I am trying to port some code from PyTorch to LibTorch.
Supposing in a struct inheriting from torch::nn::Module I have a registered sequential module like
branch1 = register_module("branch1", torch::nn::Sequential(torch::nn::Conv2d(torch::nn::Conv2dOptions(in_channels, branch_channels, kernel_size).padding(0)),
torch::nn::BatchNorm2d(torch::nn::BatchNorm2dOptions(branch_channels)),
torch::nn::ReLU()));
I am interested in applying a weight initialization function to each component separately (ideally with a different initialization algorithm per module type), say a function that takes in a torch::nn::Module or a pointer to one. What is the simplest way to achieve this?
Edit: My current attempt.
#include <torch/torch.h>
using namespace std;
void init_conv(torch::nn::Conv2d& conv) {
torch::NoGradGuard noGrad;
torch::nn::init::kaiming_normal_(conv->weight, 0.0, torch::kFanOut, torch::kReLU);
torch::nn::init::constant_(conv->bias, 0);
}
void init_bn_2d(torch::nn::BatchNorm2d& bn_2d) {
torch::NoGradGuard noGrad;
torch::nn::init::constant_(bn_2d->weight, 1);
torch::nn::init::constant_(bn_2d->bias, 0);
}
void initialize_sequential(torch::nn::Sequential& seq) {
torch::NoGradGuard noGrad;
vector<shared_ptr<torch::nn::Module>> mods = seq->modules();
for (auto mod = std::begin(mods); mod != end(mods); ++mod) {
shared_ptr<torch::nn::Module> m = *mod;
torch::nn::Module* m_ = m.get();
if (typeid(*m_) == typeid(torch::nn::Conv2dImpl*)) {
torch::nn::Conv2d* c = dynamic_cast<torch::nn::Conv2d*>(m_);
init_conv(*c);
}
if (typeid(*m_) == typeid(torch::nn::BatchNorm2dImpl*)) {
torch::nn::BatchNorm2d* bn = dynamic_cast<torch::nn::BatchNorm2d*>(m_);
init_bn_2d(*bn);
}
}
}
| I can use the apply() function on the sequential object like this:
#include <torch/torch.h>
void sequential_init_weights(torch::nn::Module& m){
if ((typeid(m) == typeid(torch::nn::Conv2dImpl))) {
auto p = m.named_parameters(false);
auto w = p.find("weight");
auto b = p.find("bias");
if (w != nullptr) torch::nn::init::kaiming_normal_(*w, 0.0,
torch::kFanOut, torch::kReLU);
if (b != nullptr) torch::nn::init::constant_(*b, 0.0);
}
if ((typeid(m) == typeid(torch::nn::BatchNorm2dImpl))) {
auto p = m.named_parameters(false);
auto w = p.find("weight");
auto b = p.find("bias");
if (w != nullptr) torch::nn::init::constant_(*w, 1.0);
if (b != nullptr) torch::nn::init::constant_(*b, 0.0);
}
}
struct example_mod : torch::nn::Module {
example_mod(int64_t in_channels, int64_t out_channels) {
m = register_module("m", torch::nn::Sequential(torch::nn::Conv2d(torch::nn::Conv2dOptions(in_channels, out_channels, 1)),
torch::nn::BatchNorm2d(torch::nn::BatchNorm2dOptions(out_channels)),
torch::nn::ReLU()));
m->apply(sequential_init_weights);
}
torch::nn::Sequential m = nullptr;
};
Basically, just write a function that dispatches on the module's typeid, then use the named parameters to get what you need and pass those to an init function; it seems to work pretty well.
| https://stackoverflow.com/questions/72546742/ |
Int object is not iterable when looping through a dataset | I am trying to extract data in batches from a dataset to train a model
Here is part of the code
#defining the loss,optimizer and training function for the neural network
def train_network(model,optimizer,loss_function,num_epochs,batch_size,x_train,y_train):
#start model training
model.train()
loss_for_every_epoch=nn.ModuleList()
for epoch in range(num_epochs):
train_loss=0.0
for i in range(0,x_train.shape[0],batch_size):
#extract train batch from x and y
input_data=x_train[i:min(x_train.shape[0]),i+batch_size]
labels=y_train[i:min(y_train.shape[0]),i+batch_size]
#set gradients to zero before beginning optimization
optimizer.zero_grad()
#forwad pass
output_data=model(input_data)
#calculate loss
loss=loss_function(output_data,labels)
#backpropagate
loss.backward()
#update weights
optimizer.step()
train_loss+=loss.item()*batch_size
print("Epoch: {} - Loss:{:.4f}".format(epoch+1,train_loss ))
loss_for_every_epoch.extend([train_loss])
#predict
y_test_prediction=model(x_test)
a=np.where(y_test_prediction>0.5,1,0)
return loss_for_every_epoch
#create an object of the class
model=neuralnetwork()
#define the loss function
loss_function = nn.BCELoss()#binary cross entropy loss function
#define optimizer
adam_optimizer=torch.optim.Adam(params=model.parameters(),lr=0.001)
#define epochs and batch size
number_of_epochs=100
batch_size=16
#Calling the function for training and pass model, optimizer, loss and related paramters
adam_loss=train_network(model,adam_optimizer,loss_function,number_of_epochs,batch_size,x_train,y_train)
But i get the following error
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
g:\My Drive\CODE\pythondatascience\simpleneuralnetwork.ipynb Cell 7' in <cell line: 11>()
9 batch_size=16
10 #Calling the function for training and pass model, optimizer, loss and related paramters
---> 11 adam_loss=train_network(model,adam_optimizer,loss_function,number_of_epochs,batch_size,x_train,y_train)
g:\My Drive\CODE\pythondatascience\simpleneuralnetwork.ipynb Cell 5' in train_network(model, optimizer, loss_function, num_epochs, batch_size, x_train, y_train)
7 train_loss=0.0
8 for i in range(0,4000,batch_size):
9 #extract train batch from x and y
---> 10 input_data=x_train[i:min(x_train.shape[0]),i+batch_size]
11 labels=y_train[i:min(y_train.shape[0]),i+batch_size]
12 #set gradients to zero before beginning optimization
TypeError: 'int' object is not iterable
What could be the cause of the error? The source that I am using to write the program did it in the exact same way.
In addition, can someone explain to me specifically what this line means:
input_data=x_train[i:min(x_train.shape[0]),i+batch_size]
x_train is a dataset
| input_data=... is taking a slice of your data to use it as an input to your training algorithm.
The call is invalid, though: x_train.shape returns a tuple of ints, so x_train.shape[0] is a single int, and min() cannot be applied to it without a second value to compare against (hence 'int' object is not iterable). My guess is that you should have input_data=x_train[i:min(x_train.shape[0], i+batch_size)]. You have the same issue for y_train as well.
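So the corrected batch extraction looks like this (the min() simply guards the final, possibly shorter batch):
input_data = x_train[i : min(x_train.shape[0], i + batch_size)]
labels = y_train[i : min(y_train.shape[0], i + batch_size)]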
| https://stackoverflow.com/questions/72547849/ |
PyTorch and Neural Networks: How many parameters in a layer? | I've seen many sources talk about the number of parameters in a neural network and mention that it is calculated as:
num parameters = ((filter width * filter height * number of filters in the previous layer + 1) * number of filters)
but I've been having trouble understanding how that applies to networks created using nn from torch
for example how many parameters would this network have?
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.linear_relu_stack = nn.Sequential(
nn.Linear(28*28, 512),
nn.ReLU(),
nn.Linear(512, 512),
nn.ReLU(),
nn.Linear(512, 10)
)
def forward(self, x):
x = self.flatten(x)
logits = self.linear_relu_stack(x)
return logits
| The object nn.Linear represents a matrix of dimension [m, n].
For example, nn.Linear(28*28, 512) has (28*28)*512 weight parameters (plus 512 bias terms, since bias=True by default).
Check here for more information about it.
The object nn.Flatten() and nn.ReLU() do not contain parameters.
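To count them all directly rather than by hand, a quick sketch:
model = NeuralNetwork()
total = sum(p.numel() for p in model.parameters())
print(total)  # (784*512 + 512) + (512*512 + 512) + (512*10 + 10) = 669706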
| https://stackoverflow.com/questions/72554288/ |
How to save/load a model checkpoint with several losses in Pytorch? | Using Ubuntu 20.04, Pytorch 1.10.1.
I am trying to solve a music generation task with a transformer architecture and multi-embeddings, for processing tokens with several characteristics.
In each training iteration, I have to calculate the loss of each token characteristic and store it in a vector, then I suppose that I should store in a checkpoint a vector containing all of them (or something similar), instead of what I'm doing now which is saving the total loss. I would like to know how to store all losses in the checkpoint (be able to keep training when loading it), or if it isn't needed at all.
The epochs loop:
for epoch in range(0, epochs):
print('Epoch: ', epoch)
loss = trfrmr.train(epoch+1, model, train_loader, train_loss_func, opt, lr_scheduler, num_iters=-1)
loss_train.append(loss)
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': opt.state_dict(),
'loss': loss,
}, "model_pop909_checkpoint.pth")
The training loop:
for batch_num, batch in enumerate(dataloader):
time_before = time.time()
opt.zero_grad()
x = batch[0].to(get_device())
tgt = batch[1].to(get_device())
# x is the input sequence (N,T,Z), that should be input into the transformer forward function as (T,N,Z)
y = model(x.permute(1, 0, 2))
# tgt is the real output sequence, of shape (N,T,Z), T is sequence length, N batch size, Z the different token types
# y are the output logits, is a list of Z tensors of shape (T,N,C*) where C is the vocabulary size, and will vary depending on the token type (pitch, velocity etc...)
losses = []
for j in range(LEN_VOCAB):
aux_loss = loss.forward(y[j].permute(1, 2, 0),
tgt[..., j]) # shapes (N,C,T) and (N,T), see Pytorch cross-entropy for details
losses.append(aux_loss)
losses_sum = sum(losses) # here we sum, but we could also have mean for instance
losses_sum.backward()
opt.step()
if lr_scheduler is not None:
lr_scheduler.step()
lr = opt.param_groups[0]['lr']
loss_hist.append(losses_sum)
if batch_num == num_iters:
break
Thanks in advance.
HOURS LATER EDIT: SOLUTION TO MY SPECIFIC PROBLEM
The problem was that when loading again the model I wasn't doing it properly (not loading optimizer parameters, but only model ones). Now in my code, at the beginning of the loop I do:
if loaded:
print('Loading model and optimizer...')
model.load_state_dict(checkpoint['model_state_dict'], strict=False)
opt.load_state_dict(checkpoint['optimizer_state_dict'])
print('Loaded succesfully!')
And I also load the epoch:
epoch = 0
if loaded:
print('Loading epoch value...')
epoch = checkpoint['epoch']
print('Loaded succesfully!')
| As far as I can tell from your code, your loss function has no custom learnable parameters; it's just recalculated every time your model iterates. Thus there is no need to save its value other than keeping a history of it; it is not required to continue training from a checkpoint.
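If you do want the history for plotting later, a minimal sketch is to drop the running list into the same checkpoint dict (loss_train is the list from your epoch loop; store plain floats, e.g. via .item(), rather than tensors):
torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': opt.state_dict(),
    'loss_history': loss_train,  # per-epoch losses
}, "model_pop909_checkpoint.pth")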
| https://stackoverflow.com/questions/72556640/ |
How do I set the dimensions of Conv1d correctly? | This is a toy example as I'm learning PyTorch and using it on one-dimensional time series, in this case a sine wave.
I'm trying to use Conv1d, but I get the following error:
RuntimeError: Given groups=1, weight of size [5, 1, 2], expected input[1, 994, 5] to have 1 channels, but got 994 channels instead
My 'lookback' is 5 time steps, and the shape of my data batch is [994, 5].
What am I doing wrong?
import torch;from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F;import pytorch_lightning as pl
from torch import nn, tensor
class TsDs(torch.utils.data.Dataset):
def __init__(self, s, l=5): super().__init__();self.l,self.s=l,s
def __len__(self): return self.s.shape[0] - 1 - self.l
def __getitem__(self, i): return self.s[i:i+self.l], torch.log(self.s[i+self.l+1]/self.s[i+self.l])
def plt(self): plt.plot(self.s)
class TsDm(pl.LightningDataModule):
def __init__(self, length=5000, batch_size=1000): super().__init__();self.batch_size=batch_size;self.s = torch.sin(torch.arange(length)*0.2) + 5
def train_dataloader(self): return DataLoader(TsDs(self.s[:3999]), batch_size=self.batch_size, shuffle=False)
def val_dataloader(self): return DataLoader(TsDs(self.s[4000:]), batch_size=self.batch_size)
dm = TsDm()
class MyModel(pl.LightningModule):
def __init__(self, learning_rate=0.01):
super().__init__();self.learning_rate = learning_rate
self.network = nn.Sequential(nn.Conv1d(1,5,2),nn.ReLU(),nn.Linear(5,3),nn.ReLU(),nn.Linear(3,1), nn.Tanh())
# self.network = nn.Sequential(nn.Linear(5,5),nn.ReLU(),nn.Linear(5,3),nn.ReLU(),nn.Linear(3,1), nn.Tanh())
def forward(self, x): return self.network(x)
def step(self, batch, batch_idx, stage):
x, y = batch
loss = -torch.mean(self(x)*y)
print(loss)
return loss
def training_step(self, batch, batch_idx): return self.step(batch, batch_idx, "train")
def validation_step(self, batch, batch_idx): return self.step(batch, batch_idx, "val")
def configure_optimizers(self): return torch.optim.SGD(self.parameters(), lr=self.learning_rate)
mm = MyModel(0.01);trainer = pl.Trainer(max_epochs=10)
trainer.fit(mm, datamodule=dm)
| There are two issues in your code:
Looking at the documentation of nn.Conv1d, your input shape should be (B, C, L). In your default case, you have L=5, the sequence length, but you need to create that extra dimension representing the feature size of a sequence element, here C=1. You can do so by changing TsDs's __getitem__ function to:
def __getitem__(self, i):
x = self.s[i:i+self.l] # minibatch x shaped (1, self.l)
y = torch.log(self.s[i+self.l+1]/self.s[i+self.l]) # minibatch y shaped (1,)
return x, y
Your convolutional layer has a stride of 1 and a kernel size of 2, which means its output will be shaped (B, 5, L-1=4). The following layer is a fully connected layer instantiated as nn.Linear(5, 3), which means it expects (*, H_in=5) and will output (*, H_out). You can either:
You can flatten the conv1d output with nn.Flatten and feed it to a bigger fully connected layer, for instance nn.Linear(20, 3) (see the sketch after this list).
You can use a convolutional layer with a wider kernel: with a kernel of 5 (your sequence length) you will end up with a tensor of (B, 5, 1), which you feed to an nn.Linear(5, 3). This approach doesn't really scale when L is changed, though.
You could apply a nn.AvgPool1d to get an average representation of the sequence after the convolutional layers have been applied.
Those are just a few directions...
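A minimal sketch of the first (flattening) option, keeping the kernel size of 2 (shapes assume the default lookback L=5; the input must be unsqueezed to (B, 1, 5) first, as in your forward):
self.network = nn.Sequential(
    nn.Conv1d(1, 5, 2),  # (B, 1, 5) -> (B, 5, 4)
    nn.ReLU(),
    nn.Flatten(),        # (B, 5, 4) -> (B, 20)
    nn.Linear(20, 3),
    nn.ReLU(),
    nn.Linear(3, 1),
    nn.Tanh(),
)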
| https://stackoverflow.com/questions/72557032/ |
Where can one get the coco utils modules? (unable to find them in hf datasets) | I have been following this tutorial which is a walk-through of fine-tuning a pre-trained Detr model
While trying to evaluate the model, I should be using from datasets import get_coco_api_from_dataset. However, I am getting an ImportError while doing that:
ImportError: cannot import name 'get_coco_api_from_dataset'
I can't find the coco_eval module either. So, this line fails: from datasets.coco_eval import CocoEvaluator. Were the coco-related modules moved somewhere, or do I need to import them from a different package?
Can someone please help me understand how can I import this module?
| It seems the detr repository gets cloned in the step before in that notebook, so datasets there refers to the cloned repo's local datasets folder rather than the Hugging Face datasets package. See https://github.com/facebookresearch/detr/tree/main/datasets.
| https://stackoverflow.com/questions/72557835/ |
Python - AttributeError: 'numpy.ndarray' object has no attribute 'to' | I now have the updated code as follows:
# Hyperparameters
random_seed = 123
learning_rate = 0.01
num_epochs = 10
batch_size = 128
device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
for epoch in range(num_epochs):
model = resnet34.train()
for batch_idx, (features, targets) in enumerate(train_generator):
features = features.to(device)
targets = targets.to(device)
### FORWARD AND BACK PROP
logits = model(features)
cost = torch.nn.functional.cross_entropy(logits, targets)
optimizer.zero_grad()
cost.backward()
### UPDATE MODEL PARAMETERS
optimizer.step()
### LOGGING
if not batch_idx % 50:
print ('Epoch: %03d/%03d | Batch %03d/%03d | Cost: %.4f'
%(epoch+1, num_epochs, batch_idx,
len(datagen)//batch_size, cost))
model = model.eval() # eval mode to prevent upd. batchnorm params during inference
with torch.set_grad_enabled(False): # save memory during inference
print('Epoch: %03d/%03d training accuracy: %.2f%%' % (
epoch+1, num_epochs,
compute_accuracy(model, train_generator)))
When having only one image, the code runs fine. But, when I add another image or more, I get the following:
features = features.to(device)
targets = targets.to(device)
AttributeError: 'numpy.ndarray' object has no attribute 'to'
| It would be nice to see your train_generator code for clarity, but it does not seem to be a torch DataLoader. In this case, you should probably convert your arrays to tensors manually. There are several ways to do so:
torch.from_numpy(numpy_array) - for numpy arrays;
torch.as_tensor(list) - for common lists and tuples;
torch.tensor(array) should also work but the above ways will avoid copying the data when possible.
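For instance, inside your loop (a sketch; torch.as_tensor avoids a copy when the dtype and device already match):
features = torch.as_tensor(features).to(device)
targets = torch.as_tensor(targets).to(device)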
| https://stackoverflow.com/questions/72558809/ |
How to explicitly include decision variables in pytorch | I’m implementing a neural network from a paper in PyTorch. Here is the screenshot of the paper:
Here N_Psi is a neural network, and K is a decision matrix. The way I came up with is to include an extra linear layer for K, but wondering if there's any chance to explicitly define K as decision variables in a more direct way?
Any hint would be very helpful. Thanks in advance!
| You can define your decision matrix as a fully connected layer with no bias, using nn.Linear. Then you have to add this additional layer to your optimizer parameter list. Given N your neural network, K you linear layer, and optim your torch.optim.Optimizer class, you can:
optimizer = optim(list(N.parameters()) + list(K.parameters()))
Then in the inference stage, given x_n+1 and x_n, do something like:
mse = F.mse_loss(N(x_n+1), K(N(x_n)))
reg_1 = K.weight.pow(2).sum()
reg_2 = p2v(N.parameters()).sum()
loss = mse + lamb_1*reg_1 + lamb_2*reg_2
Where we imported:
torch.nn.functional as F
torch.nn.utils.parameters_to_vector as p2v
| https://stackoverflow.com/questions/72561471/ |
How to change the output of the concrete function in Tensorflow 2.x? | I have trained a TensorFlow model and saved it to a local disk. When I load it and do inference, how can I get the output of an intermediate layer?
I use the example in the tutorial as a demo.
The model is:
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10)
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
I save the model to local and load it in another place.
# save model to local
tf.saved_model.save(model, export_dir="./saved_model")
# load model from local
loaded = tf.saved_model.load("./saved_model")
concrete_fun = loaded.signatures["serving_default"]
# do reference
out = concrete_fun(tf.zeros((2, 28, 28, 1)))
out["output_1"].shape
As far as I know, the concrete function is tied to the model's input and output.
How can I get the output, weights and biases of intermediate layers, for example self.conv1?
| You can try running:
print([var for var in concrete_fun.trainable_variables])
to get each layer's weights and biases.
To access the output of intermediate layers, it would be easiest to save the model like this:
model.save('your_model', save_format='tf')
and then load it:
model = tf.keras.models.load_model('your_model')
conv_layer = model.get_layer(index=0)
print(conv_layer(tf.random.normal((1, 28, 28, 1))).shape)
| https://stackoverflow.com/questions/72562107/ |
What should I think about when writing a custom loss function? | I'm trying to get my toy network to learn a sine wave.
I output (via tanh) a number between -1 and 1, and I want the network to minimise the following loss, where self(x) are the predictions.
loss = -torch.mean(self(x)*y)
This should be equivalent to trading a stock with a sinusoidal price, where self(x) is our desired position, and y are the returns of the next time step.
The issue I'm having is that the network doesn't learn anything. It does work if I change the loss function to be torch.mean((self(x)-y)**2) (MSE), but this isn't what I want. I'm trying to focus the network on 'making a profit', not making a prediction.
I think the issue may be related to the convexity of the loss function, but I'm not sure, and I'm not certain how to proceed. I've experimented with differing learning rates, but alas nothing works.
What should I be thinking about?
Actual code:
%load_ext tensorboard
import matplotlib.pyplot as plt; plt.rcParams["figure.figsize"] = (30,8)
import torch;from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F;import pytorch_lightning as pl
from torch import nn, tensor
def piecewise(x): return 2*(x>0)-1
class TsDs(torch.utils.data.Dataset):
def __init__(self, s, l=5): super().__init__();self.l,self.s=l,s
def __len__(self): return self.s.shape[0] - 1 - self.l
def __getitem__(self, i): return self.s[i:i+self.l], torch.log(self.s[i+self.l+1]/self.s[i+self.l])
def plt(self): plt.plot(self.s)
class TsDm(pl.LightningDataModule):
def __init__(self, length=5000, batch_size=1000): super().__init__();self.batch_size=batch_size;self.s = torch.sin(torch.arange(length)*0.2) + 5 + 0*torch.rand(length)
def train_dataloader(self): return DataLoader(TsDs(self.s[:3999]), batch_size=self.batch_size, shuffle=True)
def val_dataloader(self): return DataLoader(TsDs(self.s[4000:]), batch_size=self.batch_size)
dm = TsDm()
class MyModel(pl.LightningModule):
def __init__(self, learning_rate=0.01):
super().__init__();self.learning_rate = learning_rate
self.conv1 = nn.Conv1d(1,5,2)
self.lin1 = nn.Linear(20,3);self.lin2 = nn.Linear(3,1)
# self.network = nn.Sequential(nn.Conv1d(1,5,2),nn.ReLU(),nn.Linear(20,3),nn.ReLU(),nn.Linear(3,1), nn.Tanh())
# self.network = nn.Sequential(nn.Linear(5,5),nn.ReLU(),nn.Linear(5,3),nn.ReLU(),nn.Linear(3,1), nn.Tanh())
def forward(self, x):
out = x.unsqueeze(1)
out = self.conv1(out)
out = out.reshape(-1,20)
out = nn.ReLU()(out)
out = self.lin1(out)
out = nn.ReLU()(out)
out = self.lin2(out)
return nn.Tanh()(out)
def step(self, batch, batch_idx, stage):
x, y = batch
loss = -torch.mean(self(x)*y)
# loss = torch.mean((self(x)-y)**2)
print(loss)
self.log("loss", loss, prog_bar=True)
return loss
def training_step(self, batch, batch_idx): return self.step(batch, batch_idx, "train")
def validation_step(self, batch, batch_idx): return self.step(batch, batch_idx, "val")
def configure_optimizers(self): return torch.optim.SGD(self.parameters(), lr=self.learning_rate)
#logger = pl.loggers.TensorBoardLogger(save_dir="/content/")
mm = MyModel(0.1);trainer = pl.Trainer(max_epochs=10)
# trainer.tune(mm, dm)
trainer.fit(mm, datamodule=dm)
#
| If I understand you correctly, I think that you were trying to maximize the unnormalized correlation between the network's prediction, self(x), and the target value y.
As you mention, the problem is the convexity of the loss wrt the model weights. One way to see the problem is to consider that the model is a simple linear predictor w'*x, where w is the model weight vector, w' its transpose, and x the input feature vector (assume a scalar prediction for now). Then, if you look at the derivative of the loss wrt the weight vector (i.e., the gradient), you'll find that it no longer depends on w!
One way to fix this is change the loss to,
loss = -torch.mean(torch.square(self(x)*y))
or
loss = -torch.mean(torch.abs(self(x)*y))
You will have another big problem, however: these loss functions encourage unbounded growth of the model weights. In the linear case, one solves this by a Lagrangian relaxation of a hard constraint on, for example, the norm of the model weight vector. I'm not sure how this would be done with neural networks, as each layer would need its own Lagrangian parameter...
| https://stackoverflow.com/questions/72562855/ |
How to free GPU from CUDA (using Pytorch)? | I'm using spark/face-alignment to generate faces that are almost the same.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False) # try to use GPU with Pytorch dependencies.
imageVector.append( convertImagefa(image, fa))
del fa
gc.collect()
torch.cuda.empty_cache() # trying to clean up cuda.
return imageVector
I'm on one machine with 4 threads that all try to access the GPU. As such, I have worked out a strategy where every 4th request uses the GPU. This seems to fit in memory.
My issue is that when I clean up after cuda it never actually fully cleans. I'll see the load move around the threads and some space free up but CUDA never lets go of the last 624MiB. Is there a way to clean it all the way up?
nvidia-smi
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 17132 C .../face-the-same/bin/python 624MiB |
| 0 N/A N/A 17260 C .../face-the-same/bin/python 1028MiB |
| 0 N/A N/A 17263 C .../face-the-same/bin/python 624MiB |
| 0 N/A N/A 17264 C .../face-the-same/bin/python 624MiB |
FYI: I ended up using a distributed lock to pin the GPU computation to one executor/process id. This was the outcome derived from the comment made by @Jan.
| According to https://discuss.pytorch.org/t/pytorch-do-not-clear-gpu-memory-when-return-to-another-function/125944/3 this is due to the CUDA context remaining in place unless you end the script. They recommend calling torch.cuda.empty_cache() to clear the cache; however, there will always be a remainder. To get rid of that you could switch to processes instead of threads, so that the process can actually be killed without killing your program (but that'll be quite some effort, I suppose).
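A minimal sketch of the process-based approach (run_on_gpu and convert_image are hypothetical names wrapping the face-alignment call; the CUDA context is released when the child process exits):
import torch.multiprocessing as mp
def run_on_gpu(image, result_queue):
    # build fa, process the image, then exit, freeing this process's CUDA context
    result_queue.put(convert_image(image))
if __name__ == "__main__":
    mp.set_start_method("spawn", force=True)  # required for CUDA in subprocesses
    q = mp.Queue()
    p = mp.Process(target=run_on_gpu, args=(image, q))
    p.start()
    out = q.get()
    p.join()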
| https://stackoverflow.com/questions/72565981/ |
How to create RNN layer on top of BERT multilingual in pytorch | I am working on a classification problem. I want to pass the BERT embedding to an RNN layer and then an FC layer at the end for classification. But I am facing some issues; has anyone worked on the same problem?
I created this class as below:
class BERTClass(torch.nn.Module):
def __init__(self):
super(BERTClass, self).__init__()
self.l1 = BertModel.from_pretrained('bert-base-multilingual-cased', return_dict=False)
# for param in self.l1.parameters():
# param.requires_grad = False
self.l2 = torch.nn.Dropout(0.4)
self.l3 = torch.nn.RNN(768, 1028)
self.activation = torch.nn.ReLU()
self.l4 = torch.nn.Dropout(0.2)
self.l5 = torch.nn.Linear(1028, 128)
self.activation2 = torch.nn.ReLU()
self.l6 = torch.nn.Linear(128, 10)
def forward(self, ids, mask, token_type_ids):
_, output_1= self.l1(ids, attention_mask = mask, token_type_ids = token_type_ids)
output_2 = self.l2(output_1)
output3 = self.l3(output_2)
act = self.activation(output3)
output4 = self.l4(act)
output5 = self.l5(output4)
act2 = self.activation2(output5)
output6 = self.l6(act2)
return output6
model = BERTClass()
but I am getting an error
<ipython-input-23-bbe09bd88901> in forward(self, ids, mask, token_type_ids)
22 output_2 = self.l2(output_1)
23 output3 = self.l3(output_2)
---> 24 act = self.activation(output3)
25 output4 = self.l4(act)
26 output5 = self.l5(output4)
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/activation.py in forward(self, input)
96
97 def forward(self, input: Tensor) -> Tensor:
---> 98 return F.relu(input, inplace=self.inplace)
99
100 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in relu(input, inplace)
1440 result = torch.relu_(input)
1441 else:
-> 1442 result = torch.relu(input)
1443 return result
1444
TypeError: relu(): argument 'input' (position 1) must be Tensor, not tuple
| The output of torch.nn.RNN is a tuple (output, h_n) (for more info about this layer, see the nn.RNN documentation).
So the input of the activation layer in your code should be just the first element of the RNN output (output3[0]).
Final code would be:
def forward(self, ids, mask, token_type_ids):
_, output_1= self.l1(ids, attention_mask = mask, token_type_ids = token_type_ids)
output_2 = self.l2(output_1)
output3 = self.l3(output_2)
act = self.activation(output3[0])
output4 = self.l4(act)
output5 = self.l5(output4)
act2 = self.activation2(output5)
output6 = self.l6(act2)
return output6
| https://stackoverflow.com/questions/72566262/ |
RuntimeError: Given groups=1, weight of size [64, 64, 1, 1], expected input[4, 1, 1080, 1920] to have 64 channels, but got 1 channels instead | I want to train a U-net segmentation model on the German Asphalt Pavement Distress (GAPs) dataset using U-Net. I'm trying to modify the model at https://github.com/khanhha/crack_segmentation to train on that dataset.
Here is the folder containing all the related files and folders:
https://drive.google.com/drive/folders/14NQdtMXokIixBJ5XizexVECn23Jh9aTM?usp=sharing
I modified the training file, and renamed it as "train_unet_GAPs.py". When I try to train on Colab using the following command:
!python /content/drive/Othercomputers/My\ Laptop/crack_segmentation_khanhha/crack_segmentation-master/train_unet_GAPs.py -data_dir "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/GAPs/" -model_dir /content/drive/Othercomputers/My\ Laptop/crack_segmentation_khanhha/crack_segmentation-master/model/ -model_type resnet101
I get the following error:
total images = 2410
create resnet101 model
Downloading: "https://download.pytorch.org/models/resnet101-63fe2227.pth" to /root/.cache/torch/hub/checkpoints/resnet101-63fe2227.pth
100% 171M/171M [00:00<00:00, 212MB/s]
Started training model from epoch 0
Epoch 0: 0% 0/2048 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/train_unet_GAPs.py", line 259, in <module>
train(train_loader, model, criterion, optimizer, validate, args)
File "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/train_unet_GAPs.py", line 118, in train
masks_pred = model(input_var)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/content/drive/Othercomputers/My Laptop/crack_segmentation_khanhha/crack_segmentation-master/unet/unet_transfer.py", line 224, in forward
conv2 = self.conv2(x)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py", line 144, in forward
out = self.conv1(x)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 447, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py", line 444, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: Given groups=1, weight of size [64, 64, 1, 1], expected input[4, 1, 1080, 1920] to have 64 channels, but got 1 channels instead
Epoch 0: 0% 0/2048 [00:08<?, ?it/s]
I think that this is because the images of GAPs dataset are grayscale images (with one channel), while Resnet expects to receive RGB images with 3 channels.
How can I solve this issue? How can I modify the model to receive grayscale images instead of RGB images? I need help with that. I have no experience with torch, and I think this implementation uses built-in Resnet model.
| I figured out a few things with your code.
According to the traceback, you are using a ResNet-based U-Net model.
Your current model's forward method is defined as:
def forward(self, x):
#conv1 = self.conv1(x)
#conv2 = self.conv2(conv1)
conv2 = self.conv2(x)
conv3 = self.conv3(conv2)
conv4 = self.conv4(conv3)
conv5 = self.conv5(conv4)
...
Your error comes from self.conv2(x), because conv2 expects an input with 64 channels. It means something is missing, or commented out :)
By changing
#conv1 = self.conv1(x)
#conv2 = self.conv2(conv1)
conv2 = self.conv2(x)
into
conv1 = self.conv1(x)
conv2 = self.conv2(conv1)
will fix the problem of 64 channels as input. But there is another problem:
Using an input of (B,1,H,W), no matter what B, H and W are, won't be possible with your current architecture. Why? Because of this:
resnet34 = torchvision.models.resnet34(pretrained=False)
resnet101 = torchvision.models.resnet101(pretrained=False)
resnet152 = torchvision.models.resnet152(pretrained=False)
print(resnet34.conv1)
-> Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
print(resnet101.conv1)
-> Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
print(resnet152.conv1)
-> Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
In any case, the conv1 layer of ResNet takes a 3-channel input.
Once you have made those modifications, you should also try your network with a dummy example like:
model = UNetResNet(34,num_classes=2)
out = model(torch.rand(4,3,1920,1920))
print(out.shape)
-> (4,2,1920,1920) | (batch_size, num_classes, H, W)
Why are the width and height the same here? Because your current architecture only supports square images.
For example :
-> (1080,1920) = dim mismatching during concatenation part
-> (1920,1920) = success
-> (108,192) = dim mismatching during concatenation part
-> (192,192) = success
Conclusion:
Modify your network to accept grayscale images if your dataset is made of grayscale images.
Preprocess your images to make Width=Height.
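For the second point, a quick sketch of padding an image to a square before feeding it to the network (img is assumed to be a (C, H, W) tensor):
import torch.nn.functional as F
def pad_to_square(img):
    c, h, w = img.shape
    size = max(h, w)
    # F.pad's tuple is (left, right, top, bottom) for the last two dims
    return F.pad(img, (0, size - w, 0, size - h))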
Edit (device mismatch):
class UNetResNet(nn.Module):
def __init__(self, encoder_depth, num_classes, num_filters=32, dropout_2d=0.2,
pretrained=False, is_deconv=False):
super().__init__()
self.num_classes = num_classes
self.dropout_2d = dropout_2d
if encoder_depth == 34:
self.encoder = torchvision.models.resnet34(pretrained=pretrained)
bottom_channel_nr = 512
elif encoder_depth == 101:
self.encoder = torchvision.models.resnet101(pretrained=pretrained)
bottom_channel_nr = 2048
elif encoder_depth == 152:
self.encoder = torchvision.models.resnet152(pretrained=pretrained)
bottom_channel_nr = 2048
else:
raise NotImplementedError('only 34, 101, 152 version of Resnet are implemented')
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU(inplace=True)
#self.conv1 = nn.Sequential(self.encoder.conv1,
# self.encoder.bn1,
# self.encoder.relu,
# self.pool)
self.conv1 = nn.Sequential(nn.Conv2d(1,64,kernel_size=(7,7),stride=(2,2),padding=(3,3),bias=False), # 1 Here is for grayscale images, replace by 3 if you need RGB/BGR
nn.BatchNorm2d(64),
nn.ReLU(),
self.pool
)
self.conv2 = self.encoder.layer1
self.conv3 = self.encoder.layer2
self.conv4 = self.encoder.layer3
self.conv5 = self.encoder.layer4
self.center = DecoderBlockV2(bottom_channel_nr, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec5 = DecoderBlockV2(bottom_channel_nr + num_filters * 8, num_filters * 8 * 2, num_filters * 8, is_deconv)
self.dec4 = DecoderBlockV2(bottom_channel_nr // 2 + num_filters * 8, num_filters * 8 * 2, num_filters * 8,
is_deconv)
self.dec3 = DecoderBlockV2(bottom_channel_nr // 4 + num_filters * 8, num_filters * 4 * 2, num_filters * 2,
is_deconv)
self.dec2 = DecoderBlockV2(bottom_channel_nr // 8 + num_filters * 2, num_filters * 2 * 2, num_filters * 2 * 2,
is_deconv)
self.dec1 = DecoderBlockV2(num_filters * 2 * 2, num_filters * 2 * 2, num_filters, is_deconv)
self.dec0 = ConvRelu(num_filters, num_filters)
self.final = nn.Conv2d(num_filters, num_classes, kernel_size=1)
def forward(self, x):
conv1 = self.conv1(x)
conv2 = self.conv2(conv1)
conv3 = self.conv3(conv2)
conv4 = self.conv4(conv3)
conv5 = self.conv5(conv4)
pool = self.pool(conv5)
center = self.center(pool)
dec5 = self.dec5(torch.cat([center, conv5], 1))
dec4 = self.dec4(torch.cat([dec5, conv4], 1))
dec3 = self.dec3(torch.cat([dec4, conv3], 1))
dec2 = self.dec2(torch.cat([dec3, conv2], 1))
dec1 = self.dec1(dec2)
dec0 = self.dec0(dec1)
return self.final(F.dropout2d(dec0, p=self.dropout_2d))
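As a quick sanity check, the same dummy test as above, now with a single-channel input (assuming the helper blocks DecoderBlockV2 and ConvRelu from your original code are in scope):
model = UNetResNet(34, num_classes=2)
out = model(torch.rand(4, 1, 1920, 1920))   # 1 channel, square image
print(out.shape)
-> (4, 2, 1920, 1920) | (batch_size, num_classes, H, W)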
| https://stackoverflow.com/questions/72567402/ |
How do you design an LSTM to recognize images after extracting features with a CNN? | I am creating a captcha image recognition system. It first extracts the features of the images with ResNet and then uses LSTM to recognize the words and letter in the image. An fc layer is supposed to connect the two. I have not designed a LSTM model before and am very new to machine learning, so I am pretty confused and overwhelmed by this.
I am confused enough that I am not even totally sure what questions I should ask. But here are a couple things that stand out to me:
What is the purpose of embedding the captions if the captcha images are all randomized?
Is the linear fc layer in the first part of the for loop the correct way to connect the CNN feature vectors to the LSTM?
Is this a correct use of the LSTM cell in the LSTM?
And, in general, if there are any suggestions of general directions to look into, that would be really appreciated.
So far, I have:
class LSTM(nn.Module):
def __init__(self, cnn_dim, hidden_size, vocab_size, num_layers=1):
super(LSTM, self).__init__()
self.cnn_dim = cnn_dim #i think this is the input size
self.hidden_size = hidden_size
self.vocab_size = vocab_size #i think this should be the output size
# Building your LSTM cell
self.lstm_cell = nn.LSTMCell(input_size=self.vocab_size, hidden_size=hidden_size)
'''Connect CNN model to LSTM model'''
# output fully connected layer
# CNN does not necessarily need the FCC layers, in this example it is just extracting the features, that gets set to the LSTM which does the actual processing of the features
self.fc_in = nn.Linear(cnn_dim, vocab_size) #this takes the input from the CNN takes the features from the cnn #cnn_dim = 512, hidden_size = 128
self.fc_out = nn.Linear(hidden_size, vocab_size) # this is the looper in the LSTM #I think this is correct?
# embedding layer
self.embed = nn.Embedding(num_embeddings=self.vocab_size, embedding_dim=self.vocab_size)
# activations
self.softmax = nn.Softmax(dim=1)
def forward(self, features, captions):
#features: extracted features from ResNet
#captions: label of images
batch_size = features.size(0)
cnn_dim = features.size(1)
hidden_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize hidden state with zeros
cell_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize cell state with zeros
outputs = torch.empty((batch_size, captions.size(1), self.vocab_size)).cuda()
captions_embed = self.embed(captions)
'''Design LSTM model for captcha image recognition'''
# Pass the caption word by word for each time step
# It receives an input(x), makes an output(y), and receives this output as an input again recurrently
'''Defined hidden state, cell state, outputs, embedded captions'''
# can be designed to be word by word or character by character
for t in range(captions).size(1):
# for the first time step the input is the feature vector
if t == 0:
# probably have to get the output from the ResNet layer
# use the LSTM cells in here i presume
x = self.fc_in(features)
hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
x = self.fc_out(hidden_state)
outputs.append(hidden_state)
# for the 2nd+ time steps
else:
hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
x = self.fc_out(hidden_state)
outputs.append(hidden_state)
# build the output tensor
outputs = torch.stack(outputs,dim=0)
return outputs
|
nn.Embedding() is usually used to transform a sparse one-hot vector into a dense vector (e.g. mapping 'a' to [0.1,0.2,...]) for practical computation. I do not understand why you try to embed captions, which look like ground-truth. If you want to compute a loss with that, try nn.CTCLoss().
If you are going to send a string to the LSTM, it is recommended to first embed the characters in the string with nn.Embedding(), which makes them dense and computationally practical. But if the inputs of the LSTM are something extracted from a CNN (or other modules), they are already dense and computationally practical, and in my view it is not necessary to project them with fc_in.
I often use nn.LSTM() instead of nn.LSTMCell(), for the latter is troublesome.
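For illustration, a minimal nn.LSTM() sketch (all sizes hypothetical; batch_first=True expects [batch, steps, features]):
import torch
from torch import nn

lstm = nn.LSTM(input_size=512, hidden_size=128, batch_first=True)
fc_out = nn.Linear(128, 10)            # 10 = vocab_size here

features = torch.randn(2, 5, 512)      # [batch, time steps, feature dim]
out, (h, c) = lstm(features)           # out: [2, 5, 128], one hidden state per step
logits = fc_out(out)                   # [2, 5, 10]
This processes the whole sequence in one call, with no manual time-step loop.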
There are some bugs in your code and I fixed them:
import torch
from torch import nn
class LSTM(nn.Module):
def __init__(self, cnn_dim, hidden_size, vocab_size, num_layers=1):
super(LSTM, self).__init__()
self.cnn_dim = cnn_dim # i think this is the input size
self.hidden_size = hidden_size
self.vocab_size = vocab_size # i think this should be the output size
# Building your LSTM cell
self.lstm_cell = nn.LSTMCell(input_size=self.vocab_size, hidden_size=hidden_size)
'''Connect CNN model to LSTM model'''
# output fully connected layer
# CNN does not necessarily need the FCC layers, in this example it is just extracting the features, that gets set to the LSTM which does the actual processing of the features
self.fc_in = nn.Linear(cnn_dim,
vocab_size) # this takes the input from the CNN takes the features from the cnn #cnn_dim = 512, hidden_size = 128
self.fc_out = nn.Linear(hidden_size,
vocab_size) # this is the looper in the LSTM #I think this is correct?
# embedding layer
self.embed = nn.Embedding(num_embeddings=self.vocab_size, embedding_dim=self.vocab_size)
# activations
self.softmax = nn.Softmax(dim=1)
def forward(self, features, captions):
# features: extracted features from ResNet
# captions: label of images
batch_size = features.size(0)
cnn_dim = features.size(1)
hidden_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize hidden state with zeros
cell_state = torch.zeros((batch_size, self.hidden_size)).cuda() # Initialize cell state with zeros
# outputs = torch.empty((batch_size, captions.size(1), self.vocab_size)).cuda()
outputs = torch.Tensor([]).cuda()
captions_embed = self.embed(captions)
'''Design LSTM model for captcha image recognition'''
# Pass the caption word by word for each time step
# It receives an input(x), makes an output(y), and receives this output as an input again recurrently
'''Defined hidden state, cell state, outputs, embedded captions'''
# can be designed to be word by word or character by character
# for t in range(captions).size(1):
for t in range(captions.size(1)):
# for the first time step the input is the feature vector
if t == 0:
# probably have to get the output from the ResNet layer
# use the LSTM cells in here i presume
x = self.fc_in(features)
# hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
hidden_state, cell_state = self.lstm_cell(x, (hidden_state, cell_state))
x = self.fc_out(hidden_state)
# outputs.append(hidden_state)
outputs = torch.cat([outputs, hidden_state])
# for the 2nd+ time steps
else:
# hidden_state, cell_state = self.lstm_cell(x[t], (hidden_state, cell_state))
hidden_state, cell_state = self.lstm_cell(x, (hidden_state, cell_state))
x = self.fc_out(hidden_state)
# outputs.append(hidden_state)
outputs = torch.cat([outputs, hidden_state])
# build the output tensor
# outputs = torch.stack(outputs, dim=0)
return outputs
m = LSTM(16, 32, 10)
m = m.cuda()
features = torch.randn((2, 16))
features = features.cuda()
captions = torch.randn((2, 10))
captions = torch.clip(captions, 0, 9)
captions = captions.long()
captions = captions.cuda()
m(features, captions)
This paper may help you somewhat: https://arxiv.org/abs/1904.01906
| https://stackoverflow.com/questions/72569340/ |
Pytorch: How to get 2D data into a DataLoader? | I have a data set like this:
edge_origins = np.array([[0,1,2,3,4],[6,7,8]])
edge_destinations = np.array([[1,2,3,4,5],[7,8,9]])
target = np.array([0,1])
x = [[np.array([0.1,0.5,0.2]),np.array([0.5,0.6,0.23]),
np.array([0.1,0.5,0.5]),np.array([0.1,0.6,0.23]),
np.array([0.1,0.4,0.4]),np.array([0.52,0.6,0.23])],
[np.array([0.1,0.3,0.3]),np.array([0.3,0.6,0.23]),
np.array([0.1,0.1,0.2]),np.array([0.4,0.6,0.23])]]
This is a list of two networks. The first network has 6 nodes with 5 edges and class 0; the second has 4 nodes with 3 edges and class 1.
I want to develop a model in Pytorch that will classify each network into it's class, and then i'll give it a new set of networks to classify.
So ultimately, I want to be able to shuffle these lists (simultaneously, i.e. maintaining the order between the data and the classes), split into train and test, and then read the train and test data into two data loaders, and feed these into a PyTorch network.
I wrote this:
edge_origins = np.array([[0,1,2,3,4],[6,7,8]])
edge_destinations = np.array([[1,2,3,4,5],[7,8,9]])
target = np.array([0,1])
x = [[np.array([0.1,0.5,0.2]),np.array([0.5,0.6,0.23]),
np.array([0.1,0.5,0.5]),np.array([0.1,0.6,0.23]),
np.array([0.1,0.4,0.4]),np.array([0.52,0.6,0.23])],
[np.array([0.1,0.3,0.3]),np.array([0.3,0.6,0.23]),
np.array([0.1,0.1,0.2]),np.array([0.4,0.6,0.23])]]
edge_index = torch.tensor([edge_origins, edge_destinations], dtype=torch.long)
dataset = Data(x=x, edge_index=edge_index, y=y, num_classes = len(set(target)))
print(dataset)
And the error is:
edge_index = torch.tensor([edge_origins, edge_destinations], dtype=torch.long)
ValueError: expected sequence of length 5 at dim 2 (got 3)
But then once that is fixed I think the next step is:
torch.manual_seed(12345)
dataset = dataset.shuffle()
train_dataset = dataset[:1] #for toy example
test_dataset = dataset[1:]
print(f'Number of training graphs: {len(train_dataset)}')
print(f'Number of test graphs: {len(test_dataset)}')
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)
class GCN(torch.nn.Module):
def __init__(self, hidden_channels):
super(GCN, self).__init__()
torch.manual_seed(12345)
self.conv1 = GCNConv(dataset.num_node_features, hidden_channels)
self.conv2 = GCNConv(hidden_channels, hidden_channels)
self.conv3 = GCNConv(hidden_channels, hidden_channels)
self.lin = Linear(hidden_channels, dataset.num_classes)
def forward(self, x, edge_index, batch):
# 1. Obtain node embeddings
x = self.conv1(x, edge_index)
x = x.relu()
x = self.conv2(x, edge_index)
x = x.relu()
x = self.conv3(x, edge_index)
# 2. Readout layer
x = global_mean_pool(x, batch) # [batch_size, hidden_channels]
# 3. Apply a final classifier
x = F.dropout(x, p=0.5, training=self.training)
x = self.lin(x)
return x
model = GCN(hidden_channels=64)
print(model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()
def train():
model.train()
for data in train_loader: # Iterate in batches over the training dataset.
out = model(data.x, data.edge_index, data.batch) # Perform a single forward pass.
loss = criterion(out, data.y) # Compute the loss.
loss.backward() # Derive gradients.
optimizer.step() # Update parameters based on gradients.
optimizer.zero_grad() # Clear gradients.
def test(loader):
model.eval()
correct = 0
for data in loader: # Iterate in batches over the training/test dataset.
out = model(data.x, data.edge_index, data.batch)
pred = out.argmax(dim=1) # Use the class with highest probability.
correct += int((pred == data.y).sum()) # Check against ground-truth labels.
return correct / len(loader.dataset) # Derive ratio of correct predictions.
for epoch in range(1, 171):
train()
train_acc = test(train_loader)
test_acc = test(test_loader)
print(f'Epoch: {epoch:03d}, Train Acc: {train_acc:.4f}, Test Acc: {test_acc:.4f}')
Could someone demonstrate to me how to get my data running into the Pytorch network above?
| In Pytorch Geometric the Data object is used to contain only one graph. So you could iterate through all your arrays like so:
data_list = []
for i in range(2):
    edge_index_curr = torch.tensor([edge_origins[i],
                                    edge_destinations[i]],
                                   dtype=torch.long)
    data = Data(x=torch.tensor(x[i]), edge_index=edge_index_curr, y=torch.tensor(target[i]))
    data_list.append(data)
You can then use this list of Data to create your own Dataloader:
loader = DataLoader(data_list, batch_size=32)
If you need to split into train/val/test (I would advise having more than 2 samples for this case) you can do it manually or using sklearn.model_selection.
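For example, with sklearn (here splitting the toy list in half; random_state is arbitrary):
from sklearn.model_selection import train_test_split

train_list, test_list = train_test_split(data_list, test_size=0.5, random_state=12345)
train_loader = DataLoader(train_list, batch_size=32, shuffle=True)
test_loader = DataLoader(test_list, batch_size=32, shuffle=False)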
For data augmentation, if you really do have very little data, pytorch-geometric comes with transforms.
| https://stackoverflow.com/questions/72575944/ |
How does custom loss function in pyTorch work? | I see pytorch provides support to write custom loss functions. Consider following hinge loss.
class MarginRankingLossExp(nn.Module):
def __init__(self) -> None:
super(MarginRankingLossExp, self).__init__( )
def forward(self,input1,input2,target):
# loss_without_reduction = max(0, −target * (input1 − input2) + margin)
neg_target = -target
input_diff = input2-input1
mul_target_input = neg_target*input_diff
add_margin = mul_target_input
zeros=torch.zeros_like(add_margin)
loss = torch.max(add_margin, zeros)
return loss.mean()
This has only forward and constructor function defined. How does pytorch calculate gradient for custom functions? Does it differentiate it somehow?
Also, This function is non differentiable at y=margin but it didn't throw any error.
| Your function will be differentiable by PyTorch's autograd as long as all the operators used in your function's logic are differentiable. That is, as long as you use torch.Tensor and built-in torch operators that implement a backward function, your custom function will be differentiable out of the box.
In a few words, during the forward pass, a computational graph is constructed on the fly. That is, for every operation you make, the tensors necessary to compute the gradients are recorded for a later backward pass. Assuming that you use only differentiable operators (most operators are mathematically differentiable, and as such PyTorch provides the backward functionality for them), you will be able to perform backpropagation on the graph: from its end at the loss term, up to its leaves at the parameters and inputs.
A very easy way to tell if your function is differentiable by Autograd is to infer its output with inputs which require gradient computation, then check for a grad_fn attribute on the output:
>>> x1 = torch.rand(1,10,2,2, requires_grad=True)
>>> x2 = torch.rand(1,10,2,2, requires_grad=True)
>>> y = torch.rand(1,10,2,2)
Here we can check with:
>>> MarginRankingLossExp()(x1, x2, y)
tensor(0.1045, grad_fn=<MeanBackward0>)
Notice MeanBackward0, which refers to torch.Tensor.mean, the very last operator applied in MarginRankingLossExp.forward.
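Since the graph was recorded, backpropagation works end to end. Continuing the example above:
>>> loss = MarginRankingLossExp()(x1, x2, y)
>>> loss.backward()
>>> x1.grad.shape
torch.Size([1, 10, 2, 2])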
| https://stackoverflow.com/questions/72577858/ |
How to convert a matrix to a list using List Comprehension | dataset[i][j] is a 100x30 matrix.
And I want to convert to a list using list comprehension.
I tried
train_dataset = [dataset[i][j] for i,j in [range(30), range(100)]]
but there was an error : ValueError: too many values to unpack (expected 2)
How could I assign the conversion, so len(train_dataset)=3000 ?
| If I got this right, dataset is a matrix, with size 100x30, and you are trying to get a list from it?
If this is the case, you can do:
dataset = [[x for x in range(30)] for j in range(100)]
train_dataset = [dataset[i][j] for i in range(100) for j in range(30)]
print(train_dataset)
print(len(train_dataset))
dataset will be:
[0, ..., 29]
[0, ..., 29]
x100
[0, ..., 29]
and your output will be:
[0, ..., 29, 0, ..., 29... x100]
resulting in an array of size 3000.
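If you'd rather not track indices at all, iterating the rows directly is an equivalent alternative:
train_dataset = [x for row in dataset for x in row]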
| https://stackoverflow.com/questions/72581764/ |
Why ab of L*a*b is divided by 110 to normalize it between -1 and 1? | This article tries to explain image colorizing strategy of pix2pix research.
This question references this article only.
In the dataset 'Making Dataset and DataLoaders' section, inside __getitem__ method, after converting the image to L*a*b, they divide L part of the tensor by 50 and subtract 1. This will surely bring the L values between -1 and 1 as L ranges from 0 to 100.
But ab values are divided by 110, which is strange, as ab values range from -128 to 128. As of my understanding, ab should be divided by 128 to bring its values between -1 and 1.
If anyone understands the logic behind this, please take the time to tell me.
| Theoretically, a and b are not bounded, but they are often clamped for practical reasons.
from wikipedia
The a* and b* axes are unbounded, and depending on the reference white
they can easily exceed ±150 to cover the human gamut. Nevertheless,
software implementations often clamp these values for practical
reasons. For instance, if integer math is being used it is common to
clamp a* and b* in the range of −128 to 127.
I think that the source of the 110 is this Matlab implementation: http://ai.stanford.edu/~ruzon/software/rgblab.html
However, I assume that this DOESN'T hold for the skimage.color implementation, so it may be a mistake.
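If you want to check empirically what range skimage actually produces for sRGB inputs, a quick sketch (the exact extremes depend on the gamut):
import numpy as np
from skimage.color import rgb2lab

rgb = np.random.rand(256, 256, 3)          # sRGB image with values in [0, 1]
lab = rgb2lab(rgb)
a, b = lab[..., 1], lab[..., 2]
print(a.min(), a.max(), b.min(), b.max())  # stays within roughly +/-110 for sRGB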
| https://stackoverflow.com/questions/72583057/ |
Reducing the dimensions of a 4D feature tensor from ResNet to fit into a 2D LSTM model | I am designing a machine learning model that takes a feature tensor from ResNet and uses an LSTM to identify the sequences of letters in the image. The feature tensor that's from ResNet is 4-D , however, LSTM_cell wants inputs that are 2-D. I know about other methods such as .view() and .squeeze() that are able to reduce dimensions. However, it seems as if I do this, it changes the size of the dimensions of the feature vectors. At first the vector is [128, 2, 5, 512] but it needs to be [128, 512]. However, calling .view(-1,512) multiplies the dimensions to get [1280, 512]. How would you change dimensions without multiplying?
| Outputs of the CNN should be a 3-D Tensor (e.g. [128, x, 512]) so that they can be treated as a sequence. Then you can feed them into nn.LSTMCell() with an x-iteration for-loop.
However, a 4-D Tensor still retains spatial structure and is not appropriate to feed into an LSTM directly. A typical practice is to redesign your CNN architecture to make sure that it produces a 3-D Tensor. For example, you can add an nn.Conv2d() or something else at the end of the CNN network, or flatten the spatial dimensions, to make the outputs have shape [128, x, 512].
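For instance, a minimal sketch (shapes taken from your question) that merges the two spatial dimensions into a sequence dimension without multiplying anything away:
import torch

feats = torch.randn(128, 2, 5, 512)   # [batch, H', W', channels] from ResNet
seq = feats.flatten(1, 2)             # [128, 10, 512]: 10 feature vectors of size 512 per sample
print(seq.shape)
# per time step, seq[:, t, :] has shape [128, 512], which is what the LSTM cell expects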
| https://stackoverflow.com/questions/72583957/ |
Pytorch uses cuda despite disabling it. How to force it to use CPU-only | I tried disabling cuda for pytorch following this stackoverflow question and a few others.
At OS level, before initializing python -> set CUDA_VISIBLE_DEVICES ''
But when I enter the python prompt, I still see Cuda is available
>>> import torch
>>> torch.cuda.is_available()
True
A subsequent operation confirms it.
>>> a = torch.ones(5)
>>> b = a
>>> a.add_(1)
tensor([2., 2., 2., 2., 2.])
>>> print(b)
tensor([2., 2., 2., 2., 2.])
I can see that b also got updated. How do I force pytorch not to use GPU ?
| You could use a.to("cpu") to move a tensor to a particular device, in this case the CPU. Then a.is_cuda should be False (note it is an attribute, not a method), confirming that the tensor is not on the GPU.
The same can be done with entire models, etc., e.g. model.to("cpu")
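A minimal sketch:
import torch

a = torch.ones(5).to("cpu")
print(a.is_cuda)                          # False

model = torch.nn.Linear(5, 1).to("cpu")   # works the same for modules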
| https://stackoverflow.com/questions/72584071/ |
How to detect objects with a custom YOLOv5 model? | I trained a YOLOv5 model from a custom dataset with the provided training routine on github (from inside tutorial.ipynb).
Using this model for detecting objects in unseen images gets me decent results when executing:
!python detect.py --weights custom_weights.pt --img 224 --conf 0.5 --source data/images
Now I want to use my model in a small project. Using the following approach does not lead to good results. It either detects complete nonsense or nothing at all on the same images used above.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='custom_weights.pt', force_reload=True)
img = cv2.imread('test.jpg')
model.eval()
pred = model(img)
bboxes = pred.xyxy
Am I forced to use detect.py and hence cloning the whole YOLO repository into my project?
Or is there something I am missing when calling the model like I do right now?
| It seems like calling the model with a numpy array is not working, at least not straightforwardly. When the image is transformed to a torch.tensor, you can apply non-maximum suppression (from yolov5/utils/general.py) to the output and receive a valid result:
import cv2
import torch
from utils.general import non_max_suppression  # from the yolov5 repo

model = torch.hub.load('ultralytics/yolov5', 'custom', path='custom_weights.pt', force_reload=True)
img = cv2.imread('image.jpg')  # note: cv2 loads BGR; convert with cv2.cvtColor if your model expects RGB
img = torch.from_numpy(img)/255.0
img = img.unsqueeze(0)
img = torch.permute(img, (0, 3, 1, 2))
pred = model(img)
pred = non_max_suppression(pred)
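Each element of pred after NMS is then one tensor per image with rows (x1, y1, x2, y2, confidence, class), following the yolov5 convention:
boxes = pred[0]   # detections for the first (only) image, shape [num_detections, 6]
print(boxes)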
| https://stackoverflow.com/questions/72584233/ |
KeyError when iterating through pytorch dataloader | I am trying to build a model with pytorch, and I want to use a customized dataset. So, I have a dataset.py which defines a class, MyData, which is a subclass of torch.utils.data.Dataset. Here's the file.
# dataset.py
import torch
from tqdm import tqdm
import numpy as np
import re
from torch.utils.data import Dataset
from pathlib import Path
class MyDataset(Dataset):
def __init__(self, path, size=10000):
if not Path(path).exists():
raise FileNotFoundError
self.data = []
self.load_data(path, size)
def __len__(self):
return len(self.data)
def __getitem__(self, index):
return self.data[index]
def load_data(self, path, size):
# Loading data from csv files and some preparation
# Each sample is in the format of (int_tag1, int_tag2, feature_dictionary),
# then the sample is appended to self.data
pass
Then I tried to test this dataset using a DataLoader in the test file dataset_test.py
from torch.utils.data import DataLoader
from dataset import MyDataset
path = 'dataset/sample_train.csv'
size = 1000
dataset = MyDataset(path, size)
dataloader = DataLoader(dataset, batch_size=1000)
for v in dataloader:
print(v)
I got the following output
730600it [11:08, 1093.11it/s]
1000it [00:00, 20325.47it/s]
Traceback (most recent call last):
File "dataset_test.py", line 12, in <module>
for v in dataloader:
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 84, in default_collate
return [default_collate(samples) for samples in transposed]
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 84, in <listcomp>
return [default_collate(samples) for samples in transposed]
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in default_collate
return {key: default_collate([d[key] for d in batch]) for key in elem}
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in <dictcomp>
return {key: default_collate([d[key] for d in batch]) for key in elem}
File "/home/usr/.local/lib/python3.6/site-packages/torch/utils/data/_utils/collate.py", line 74, in <listcomp>
return {key: default_collate([d[key] for d in batch]) for key in elem}
KeyError: '210'
The first two lines might be the output when loading data. (I'm not sure because I didn't write any output. But I am using tqdm to load data, so I assume it's tqdm's output?)
Then, I got this key error. I'm wondering which part should be modified? I think the dataset class is well-written, since there's no error when reading the data from file. Is it because the format of samples is not right, so the dataloader cannot load data from dataset properly? Is there any requirement for the format? I've read other people's code, but I didn't find any info mentioning that there's any requirement of the format of samples in Dataset class.
EDIT: A single sample looks like this
('0', '0', {'210': '9093445', '216': '9154780', '301': '9351665', '205': '4186222', '206': '8316799', '207': '8416205', '508': '9355039', '121': '3438658', '122': '3438762', '101': '31390', '124': '3438769', '125': '3438774', '127': '3438782', '128': '3864885', '129': '3864887', '150_14': '3941161', '127_14': '3812616', '109_14': '449068', '110_14': '569621'})
The first two '0's are labels, and the following dictionary contains features.
| As @Shai mentioned, if the keys in feature_dictionary are not the same across a batch, then you get this error from the default collate_fn of DataLoader. As a solution, you can write a custom collate_fn as follows, and it works:
class MyDataset(Dataset):
# ... your code ...
def collate_fn(self, batch):
tag1_batch = []
tag2_batch = []
feat_dict_batch = []
for tag1, tag2, feat_dict in batch:
tag1_batch.append(tag1)
tag2_batch.append(tag2)
feat_dict_batch.append(feat_dict)
return tag1_batch, tag2_batch, feat_dict_batch
path = 'dataset/sample_train.csv'
size = 1000
dataset = MyDataset(path, size)
dataloader = DataLoader(dataset, batch_size=3, collate_fn=dataset.collate_fn)
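With this collate_fn, each batch is three plain Python lists, so mismatched dictionary keys never go through default_collate:
for tag1_batch, tag2_batch, feat_dict_batch in dataloader:
    print(tag1_batch)            # e.g. ['0', '0', '1'] -- a list of 3 labels
    print(feat_dict_batch[0])    # the original dict, keys untouched
    break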
| https://stackoverflow.com/questions/72585740/ |
Re-setting learning rate while training in Pytorch | I am training a model using a learning rate scheduler in PyTorch to decrease the learning rate. Using the scheduler, I reduced the learning rate from 0.0001 to 1e-5, and saved all the weights, parameters, learning rate values, etc. at a particular checkpoint. Now, I want to resume training the model, but with a different value of the learning rate, while keeping all other values. How can I do this?
This is the code for saving checkpoint. I used Adam optimizer
checkpoint = {
'epoch': epoch + 1,
'val_loss_min': val_loss['total'].avg,
'state_dict': model.state_dict(),
'optimizer': optimizer.state_dict(),
'scheduler': scheduler.state_dict(),
}
When loading checkpoint, I used this code:
checkpoint = torch.load(args.SAVED_MODEL)
# Load current epoch from checkpoint
epochs = checkpoint['epoch']
# Load state_dict from checkpoint to model
model.load_state_dict(checkpoint['state_dict'])
# Load optimizer from checkpoint to optimizer
optimizer.load_state_dict(checkpoint['optimizer'])
# Load valid_loss_min from checkpoint to valid_loss_min
val_loss_min = checkpoint['val_loss_min']
# Load scheduler from checkpoint to scheduler
scheduler.load_state_dict(checkpoint['scheduler'])
| You can change the learning rate of your optimizer by accessing its param_groups attribute. Depending on whether you have multiple groups or not, you can do the following (after having loaded the checkpoint onto it):
for g in optimizer.param_groups:
g['lr'] = new_lr
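Applied to your checkpoint-loading code, that looks like this (new_lr is a placeholder for whatever value you choose). Note that if you also reload the scheduler state, it may keep adjusting the rate from where it left off:
checkpoint = torch.load(args.SAVED_MODEL)
optimizer.load_state_dict(checkpoint['optimizer'])

new_lr = 1e-4   # hypothetical new value
for g in optimizer.param_groups:
    g['lr'] = new_lr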
| https://stackoverflow.com/questions/72589798/ |
the derivative for 'target' is not implemented | I added two VAEs to the original model, so I need to add optimizer and loss. However, the following errors are reported. How can I modify them?
Traceback (most recent call last):
File "train.py", line 320, in <module>
main()
File "train.py", line 315, in main
ImgCla.TrainingData()
File "train.py", line 201, in TrainingData
lossv1 = self.loss_function(recon_audio, audio1, mean1, logstd1)
File "train.py", line 135, in loss_function
BCE = F.binary_cross_entropy(recon_x, x, reduction='sum')
File "/home/user1/.conda/envs/tyz/lib/python3.6/site-packages/torch/nn/functional.py", line 2762, in binary_cross_entropy
return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
RuntimeError: the derivative for 'target' is not implemented
The train.py is as follows:
from torch.utils.data import DataLoader
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import dataloader
import pandas
import os
import imp
import model
import math
import time
import matplotlib.pyplot as plt
import seaborn as sn
from tqdm import tqdm
from sklearn.metrics import classification_report,accuracy_score
import training_plot
from sklearn.metrics import confusion_matrix
import torch.nn.functional as F
from model import VAE1,VAE2
config = imp.load_source("config","config/Resnet50.py").config
device_ids = config["device_ids"]
data_train_opt = config['data_train_opt']
device = torch.device("cuda:0" if torch.cuda.is_available() else 'cpu')
print("======================================")
print("Device: {}".format(device_ids))
def fix_bn(m):
classname = m.__class__.__name__
if classname.find('BatchNorm') != -1:
m.eval()
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self, name, fmt=':f'):
self.name = name
self.fmt = fmt
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def __str__(self):
fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
return fmtstr.format(**self.__dict__)
class ProgressMeter(object):
def __init__(self, num_batches, meters, prefix=""):
self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
self.meters = meters
self.prefix = prefix
def display(self, batch):
entries = [self.prefix + self.batch_fmtstr.format(batch)]
entries += [str(meter) for meter in self.meters]
print('\t'.join(entries))
def _get_batch_fmtstr(self, num_batches):
num_digits = len(str(num_batches // 1))
fmt = '{:' + str(num_digits) + 'd}'
return '[' + fmt + '/' + fmt.format(num_batches) + ']'
def adjust_learning_rate(optimizer, epoch, args):
"""Decay the learning rate based on schedule"""
lr = args.lr
if args.cos: # cosine lr schedule
lr *= 0.5 * (1. + math.cos(math.pi * epoch / args.epochs))
else: # stepwise lr schedule
for milestone in args.schedule:
lr *= 0.1 if epoch >= milestone else 1.
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def accuracy(output, target, topk=(1,)):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k]
correct_k = torch.sum(correct_k).float()
res.append(correct_k.mul_(100.0 / batch_size))
return res
class ImageClassify(object):
def __init__(self):
self.name_list = []
self.model = model.Mixed_model(data_train_opt["dim"])
self.model = torch.nn.DataParallel(self.model, device_ids=device_ids)
self.model = self.model.cuda(device=device_ids[0])
self.save = data_train_opt["final_model_file"]
self.training_save = data_train_opt["feat_training_file"]
self.training_log = data_train_opt["training_log"]
self.loss = 9999
self.best = 0
self.train_dataset = dataloader.Load_Data(config["data_dir"],"train")
self.trainloader = DataLoader(self.train_dataset, batch_size=data_train_opt['batch_size']*len(device_ids),num_workers=8,shuffle=True,drop_last=False)
self.valid_dataset = dataloader.Load_Data(config["data_dir"],"val")
self.validloader = DataLoader(self.valid_dataset,batch_size=data_train_opt['batch_size']*len(device_ids),num_workers=8,shuffle=True)
self.LossFun()
print("Trainloader: {}".format(len(self.trainloader)))
print("Validloader: {}".format(len(self.validloader)))
self.vae1 = VAE1().cuda()
self.vae2 = VAE2().cuda()
def loss_function(self,recon_x, x, mean, std):
BCE = F.binary_cross_entropy(recon_x, x, reduction='sum')
var = torch.pow(torch.exp(std), 2)
KLD = -0.5 * torch.sum(1 + torch.log(var) - torch.pow(mean, 2) - var)
return BCE+KLD
def loss_function2(self,recon_x, x, mean, std):
BCE = F.binary_cross_entropy(recon_x, x, reduction='sum')
var = torch.pow(torch.exp(std), 2)
KLD = -0.5 * torch.sum(1 + torch.log(var) - torch.pow(mean, 2) - var)
return BCE + KLD
def LossFun(self):
print("lossing...")
self.criterion = nn.CrossEntropyLoss()
self.optimizer = optim.Adam(self.model.parameters(), lr=data_train_opt['lr'])
The VAE needs to introduce a reconstruction error, which is added to my previous model. So I first update the parameters of the model and train the previous model, then update the parameters of the VAE and train the VAE. When training the VAE, I want to freeze the parameters of the other parts, so I add this part:
for name,param in model.Mixed_model().named_parameters():
if 'video' in name:
param.requires_grad=False
if 'audio_net' in name:
param.requires_grad=False
if 'classifier' in name:
param.requires_grad=False
self.optimizer2 = optim.Adam(filter(lambda param:param.requires_grad,model.Mixed_model().parameters()), lr=data_train_opt['lr'])
def TrainingData(self):
self.model.train()
log = []
for epoch in range(data_train_opt['epoch']):
if (epoch+1) % data_train_opt["decay_epoch"] == 0 :
for param_group in self.optimizer.param_groups:
param_group['lr'] = param_group['lr']*data_train_opt["decay_rate"]
batch_time = AverageMeter('Time', ':6.3f')
data_time = AverageMeter('Data', ':6.3f')
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
progress = ProgressMeter(
len(self.trainloader),
[batch_time, data_time, losses,top1],
prefix="Epoch: [{}]".format(epoch+1))
# switch to train mode
self.model.train()
end = time.time()
for i, (img,audio, class_id) in enumerate(self.trainloader):
# measure data loading time
data_time.update(time.time() - end)
img,audio,class_id = img.cuda(device=device_ids[0]),audio.cuda(device=device_ids[0]),class_id.cuda(device=device_ids[0])
predict,audio1,img1= self.model(img,audio)
loss = self.criterion(predict, class_id)
# acc1/acc5 are (K+1)-way contrast classifier accuracy
# measure accuracy and record loss
acc1= accuracy(predict, class_id, topk=(1,))
losses.update(loss.item(), img.size(0))
top1.update(acc1[0], img.size(0))
self.optimizer.zero_grad()
loss.backward(retain_graph=True)
self.optimizer.step()
z1, logstd1, mean1, eps1,recon_audio = self.vae1(audio1)
z2, logstd2, mean2, eps2,recon_img = self.vae2(img1)
lossv1 = self.loss_function(recon_audio, audio1, mean1, logstd1)
lossv2 = self.loss_function2(recon_img, img1, mean2, logstd2)
lossv = lossv2 + lossv1
lossv.backward()
self.optimizer2.zero_grad()
lossv.backward()
self.optimizer2.step()
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if (i+1) % data_train_opt["log_step"] == 0:
loss_avg = losses.avg
acc_avg = top1.avg
log.append([epoch, i + 1, loss.item(), acc1[0], loss_avg, acc_avg])
progress.display(i+1)
if (epoch+1) % data_train_opt["save_epoch"] == 0:
acc, a = self.ValidingData(epoch+1)
if losses.avg <self.loss:
self.loss = losses.avg
a = 1
np.save(data_train_opt["training_log"], log)
if a == 1:
self.save_checkpoint({
'epoch': epoch + 1,
'state_dict': self.model.state_dict(),
'optimizer' : self.optimizer.state_dict(),
'acc':acc
}, filename=os.path.join(data_train_opt["feat_training_file"],'Epoch_{}_acc_{}_loss_{}.pth'.format(epoch+1,acc,losses.avg)))
# }, filename=os.path.join(data_train_opt["feat_training_file"],'checkpoint_{:04d}.pth'.format(epoch+1)))
# }, filename=os.path.join(data_train_opt["feat_training_file"],'best.pth'))
def save_checkpoint(self,state,filename='checkpoint.pth.tar'):
torch.save(state, filename)
def ValidingData(self,epoch):
self.model.eval()
a = 0
with torch.no_grad():
y_pre = []
y_true = []
with tqdm(total=len(self.validloader), desc='Example', leave=True, ncols=100, unit='batch', unit_scale=True) as pbar:
for i, (img,audio,class_id) in enumerate(self.validloader):
img,audio, class_id = img.cuda(device=device_ids[0]),audio.cuda(device=device_ids[0]), class_id.cuda(device=device_ids[0])
predict = self.model(img, audio)
_, pre = torch.max(predict,dim=1)
y_pre.append(pre.cpu())
y_true.append(class_id.cpu())
pbar.update(1)
y_pre = torch.cat(y_pre).cpu().detach().numpy()
y_true = torch.cat(y_true).cpu().detach().numpy()
cm = confusion_matrix(y_true, y_pre)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
cm.diagonal()
keys = ['airport',
'bus',
'metro',
'metro_station',
'park',
'public_square',
'shopping_mall',
'street_pedestrian',
'street_traffic',
'tram']
values = [np.round(i, decimals=3) for i in list(cm.diagonal())]
df_cm = pandas.DataFrame(cm, index=[i for i in keys], columns=[i for i in keys])
plt.figure(figsize=(15, 12))
sn.heatmap(df_cm, annot=True)
plt.savefig('confusion.png')
report = classification_report(y_true, y_pre, target_names=
['airport',
'bus',
'metro',
'metro_station',
'park',
'public_square',
'shopping_mall',
'street_pedestrian',
'street_traffic',
'tram'], digits=4)
acc = accuracy_score(y_true, y_pre)
if acc>self.best:
a = 1
self.best=acc
print(report)
print("==================")
with open(data_train_opt["txt"],"a") as f:
f.write("========= {} =======\n".format(epoch))
f.write("classification_report".format(epoch))
f.write(report)
f.write("\n")
self.model.train()
if a ==1:
with open(data_train_opt["best"], "a") as f:
f.write("========= {} =======\n".format(epoch))
f.write("classification_report".format(epoch))
f.write(report)
f.write("================\n")
return acc,a
def main():
ImgCla = ImageClassify()
ImgCla.TrainingData()
training_plot.draw(data_train_opt["training_log"])
acc, a = ImgCla.ValidingData(epoch=0)
if __name__ == '__main__':
main()
| The error message refers to the fact you are requiring gradient computation on the target tensor which is not supported by nn.functional.binary_cross_entropy. In other words you need to detach the target before computing the loss term:
BCE = F.binary_cross_entropy(recon_x, x.detach(), reduction='sum')
In both loss_function and loss_function2.
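A minimal reproduction of the error and the fix, on PyTorch versions where binary_cross_entropy does not implement the target derivative (as in your traceback):
import torch
import torch.nn.functional as F

x = torch.rand(4, requires_grad=True)
t = torch.rand(4, requires_grad=True)

# F.binary_cross_entropy(x, t)                  # RuntimeError: the derivative for 'target' is not implemented
loss = F.binary_cross_entropy(x, t.detach())    # works: the target is taken out of the graph
loss.backward()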
| https://stackoverflow.com/questions/72590591/ |
Error while reading csv file: converting a column from string to float | I am trying to read a csv file that contains a column, SpType, in which there are String values. My variable is being converted into an object, but I need it to be float type.
Here's the snippet:
data = pd.read_csv("/content/Star3642_balanced.csv")
X_orig = data[["Vmag", "Plx", "e_Plx", "B-V", "SpType", "Amag"]].to_numpy()
Here's what's giving me the error:
X = torch.tensor(X_orig, dtype=torch.float32)
The error reads "can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool."
I tried doing this after reading the csv file, but it didn't help:
data["SpType"] = data.SpType.astype(float)
Can someone please tell me what can be done about this?
| Strings should be encoded into numeric values. The easiest way would be using Pandas one-hot encoding (that will create lots of extra columns in this case, but a neural network should process those without much effort):
ohe = pd.get_dummies(data["SpType"], drop_first=True)
data[ohe.columns] = ohe
data = data.drop(["SpType"], axis=1)
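After encoding, every column is numeric, so the conversion to a tensor goes through (a sketch; re-select your feature columns as before if the frame also contains the target):
import numpy as np
import torch

X = torch.tensor(data.to_numpy(dtype=np.float32))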
Alternatively, you may use sklearn encoders or category_encoders library - more complex encoding might require to process the test set separately to avoid the target leakage.
| https://stackoverflow.com/questions/72592026/ |
Retraining pytorch model (augmented learning) | I have the following code:
import torch
from facenet_pytorch import InceptionResnetV1, MTCNN
from torch.utils.data import DataLoader
from torchvision import datasets
import numpy as np
import pandas as pd
import os
workers = 0 if os.name == 'nt' else 4
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Running on device: {}'.format(device))
mtcnn = MTCNN(
image_size=160, margin=0, min_face_size=20,
thresholds=[0.6, 0.7, 0.7], factor=0.709, post_process=True,
device=device
)
def collate_fn(x):
return x[0]
dataset = datasets.ImageFolder('data/images/')
dataset.idx_to_class = {i:c for c, i in dataset.class_to_idx.items()}
loader = DataLoader(dataset, collate_fn=collate_fn, num_workers=workers)
#print(dataset.idx_to_class)
aligned = []
names = []
i = 0
for x, y in loader:
x_aligned, prob = mtcnn(x, return_prob=True)
if x_aligned is not None:
print('Face detected with probability: {:8f}'.format(prob))
aligned.append(x_aligned)
names.append(dataset.idx_to_class[y])
i += 1
#print(i)
for name, param in mtcnn.named_parameters(): #Freezing everything but last layer
#print(name)
if name != "onet.dense6_3.bias":
param.require_grad = False
else:
param.require_grad = True
And now I would like to retrain this model to predict three classes (Now it only predicts the probability of a face). Let say that I have inside data/images/ three folders, faces1, faces2 and faces3. How could I retrain this model with these three folders? I would like to have a tensor like [prob1, prob2, prob3] with the probability of an image for each class. Thanks.
| MTCNN: This class loads pretrained P-, R-, and O-nets and returns images cropped to include the face only, given raw input images.
I am assuming that you are trying to use InceptionResnetV1 for classification on your dataset. To retrain the Inception model you just load the model with the number of classes you need and then train it.
resnet = InceptionResnetV1(
classify=True,
pretrained='vggface2',
num_classes=3
)
Complete finetuning example is here https://github.com/timesler/facenet-pytorch/blob/master/examples/finetune.ipynb
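As a minimal sketch of the training loop itself (assuming a loader that yields (images, labels) batches), after which softmax gives you the [prob1, prob2, prob3] tensor you asked for:
import torch

optimizer = torch.optim.Adam(resnet.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

resnet.train()
for images, labels in loader:
    optimizer.zero_grad()
    logits = resnet(images)             # shape [batch, 3]
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()

probs = torch.softmax(logits, dim=1)    # per-class probabilities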
| https://stackoverflow.com/questions/72593257/ |
How to train network on images of different sizes Pytorch | I am trying to feed the neural network a dataset of images and I am getting this error
I don't know what might be the cause as all the images have different sizes
I have also tried to change batch sizes and kernels but I had no success with this.
File "c:\Users\david\Desktop\cs_agent\main.py", line 49, in <module>
for i, data in enumerate(train_loader, 0):
File "C:\Users\david\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\dataloader.py", line 530, in __next__
data = self._next_data()
File "C:\Users\david\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\dataloader.py", line 570, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\david\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\_utils\fetch.py", line 52, in fetch
return self.collate_fn(data)
File "C:\Users\david\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\_utils\collate.py", line 172, in default_collate
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "C:\Users\david\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\_utils\collate.py", line 172, in <listcomp>
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "C:\Users\david\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\_utils\collate.py", line 138, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 300, 535] at entry 0 and [3, 1080, 1920] at entry 23
this is my main file
import numpy as np
import matplotlib.pyplot as plt
import torch
import dataset
import os
from torch.utils.data import DataLoader
import torch.nn as nn
import torchvision
import check_device
import neural_network
import torch.optim as optim
EPS = 1.e-7
LR=0.5
WEIGHT_DECAY=0.5
batch_size =50
#DATA LOADING ###################################################################################################################
test_dataset =dataset.csHeadBody(csv_file="images\\test_labels.csv",root_dir="images\\test")
train_dataset =dataset.csHeadBody(csv_file="images\\train_labels.csv",root_dir="images\\train")
train_loader =DataLoader(dataset =train_dataset,batch_size=batch_size,shuffle=True)
test_loader =DataLoader(dataset=test_dataset,batch_size=batch_size,shuffle=True)
#DATA LOADING ###################################################################################################################END
#NEURAL NET #####################################################################################################################################################
net=neural_network.Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
#NEURAL NET END ######################################################################################
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(train_loader, 0):
# get the inputs; data is a list of [inputs, labels]
print(data)
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print(f'[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}')
running_loss = 0.0
print('Finished Training')
and this is my dataset file
class csHeadBody(Dataset):
def __init__(self, csv_file, root_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.root_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
this is my neural network architecture
import torch.nn.functional as F
import torch.nn as nn
import torch
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 535, 535)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
| You need to adjust the parameters of your convolutional and linear layers. The first argument is the number of input channels (3 for standard RGB images in conv1), then the number of output channels and then the convolution kernel size. To clarify, I've used named arguments in the code below. The code works for images of a square input size of 224x224 pixels (standard imagenet size, adjust if needed). If you want image size agnostic code you could use something like global average pooling (mean of each channel in the last conv layer). The net below supports both:
class Net(nn.Module):
def __init__(self, use_global_average_pooling: bool = False):
super().__init__()
self.use_global_average_pooling = use_global_average_pooling
self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)
self.pool = nn.MaxPool2d(kernel_size=(2, 2))
self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3)
if use_global_average_pooling:
self.fc_gap = nn.Linear(64, 10)
else:
self.fc_1 = nn.Linear(54 * 54 * 64, 84) # 54 img side times 64 out channels from conv2
self.fc_2 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x))) # img side: (224 - 2) // 2 = 111
x = self.pool(F.relu(self.conv2(x))) # img side: (111 - 2) // 2 = 54
        if self.use_global_average_pooling:
            # global average pooling: mean over the spatial dimensions
            x = x.mean(dim=(-1, -2))
            x = self.fc_gap(x)  # no activation on the final logits
else: # use all features
x = torch.flatten(x, 1)
x = F.relu(self.fc_1(x))
x = self.fc_2(x)
return x
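A quick way to confirm the GAP variant really is size-agnostic:
net = Net(use_global_average_pooling=True)
print(net(torch.rand(2, 3, 224, 224)).shape)   # torch.Size([2, 10])
print(net(torch.rand(2, 3, 300, 535)).shape)   # torch.Size([2, 10]) as well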
Additionally, the torchvision.io.read_image function used in your Dataset returns a uint8 tensor with integer values from 0 to 255. You'll want floating point values for your network, so you have to divide the result by 255 to get values in the [0, 1] range. Furthermore, neural networks work best with normalized inputs (subtracting the mean and then dividing by the standard deviation of your training dataset). I've added normalization to the image transforms below. For convenience, it is using the imagenet mean and standard deviation, which should work fine if your images are similar to imagenet images (otherwise you can calculate them on your own images).
Note that the resizing might distort your images (doesn't keep the original aspect ratio). Often this is no problem, but if it is you might want to pad your images with a constant color (e.g. black) to resize them to the required dimensions (there are also transforms for this in the torchvision library).
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]
transforms = torchvision.transforms.Compose([
torchvision.transforms.Lambda(lambda x: x / 255.),
torchvision.transforms.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
torchvision.transforms.Resize((224, 224)),
])
You might also need to adjust the code in your Dataset to load images as an RGB image (if they also have an alpha channel). This can be done like this:
image = read_image(img_path, mode=torchvision.io.image.ImageReadMode.RGB)
You can then initialise your Dataset using:
test_dataset = dataset.csHeadBody(csv_file="images\\test_labels.csv", root_dir="images\\test", transform=transforms)
train_dataset = dataset.csHeadBody(csv_file="images\\train_labels.csv", root_dir="images\\train", transform=transforms)
I haven't tested the code, let me know if it doesn't work!
| https://stackoverflow.com/questions/72595995/ |
why Heroku: slug size so large after installing Pytorch? | I'm trying to deploy a Python app to Heroku, but the slug is too big to deploy.
I've been getting the slug size too large warning (Compiled slug size: 789.8M is too large (max is 500M)) from Heroku
Compiled slug size: 826.6M is too large (max is 500M).
remote: ! See: http://devcenter.heroku.com/articles/slug-size
remote:
remote: ! Push failed
remote: Verifying deploy....
remote:
fastapi==0.78.0
gunicorn==20.1.0
numpy==1.22.4
opencv-python==4.5.4.58
Pillow==8.4.0
retina-face==0.0.12
uvicorn==0.17.6
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu; python_full_version >= "3.6.2"
torchvision==0.12.0+cpu
tensorflow-cpu == 2.8.0
remote: Compressing source files... done.
remote: Building source:
remote:
remote: -----> Building on the Heroku-20 stack
remote: -----> Determining which buildpack to use for this app
remote: -----> Python app detected
remote: -----> No Python version was specified. Using the buildpack default: python-3.10.5
remote: To use a different version, see: https://devcenter.heroku.com/articles/python-runtimes
remote: -----> Installing python-3.10.5
remote: -----> Installing pip 22.1.2, setuptools 60.10.0 and wheel 0.37.1
remote: -----> Installing SQLite3
remote: -----> Installing requirements with pip
remote: Looking in links: https://download.pytorch.org/whl/torch_stable.html
remote: Collecting fastapi==0.78.0
remote: Downloading fastapi-0.78.0-py3-none-any.whl (54 kB)
remote: Collecting gunicorn==20.1.0
remote: Downloading gunicorn-20.1.0-py3-none-any.whl (79 kB)
remote: Collecting numpy==1.22.4
remote: Downloading numpy-1.22.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
remote: Collecting opencv-python==4.5.4.58
remote: Downloading opencv_python-4.5.4.58-cp310-cp310-manylinux2014_x86_64.whl (60.3 MB)
remote: Collecting Pillow==8.4.0
remote: Downloading Pillow-8.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB)
remote: Collecting retina-face==0.0.12
remote: Downloading retina_face-0.0.12-py3-none-any.whl (15 kB)
remote: Collecting uvicorn==0.17.6
remote: Downloading uvicorn-0.17.6-py3-none-any.whl (53 kB)
remote: Collecting torch==1.11.0+cpu
remote: Downloading https://download.pytorch.org/whl/cpu/torch-1.11.0%2Bcpu-cp310-cp310-linux_x86_64.whl (169.2 MB)
remote: Collecting torchvision==0.12.0+cpu
remote: Downloading https://download.pytorch.org/whl/cpu/torchvision-0.12.0%2Bcpu-cp310-cp310-linux_x86_64.whl (14.7 MB)
remote: Collecting tensorflow-cpu==2.8.0
remote: Downloading tensorflow_cpu-2.8.0-cp310-cp310-manylinux2010_x86_64.whl (190.6 MB)
remote: Collecting pydantic!=1.7,!=1.7.1,!=1.7.2,!=1.7.3,!=1.8,!=1.8.1,<2.0.0,>=1.6.2
remote: Downloading pydantic-1.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.0 MB)
remote: Collecting starlette==0.19.1
remote: Downloading starlette-0.19.1-py3-none-any.whl (63 kB)
remote: Collecting gdown>=3.10.1
remote: Downloading gdown-4.4.0.tar.gz (14 kB)
remote: Installing build dependencies: started
remote: Installing build dependencies: finished with status 'done'
remote: Getting requirements to build wheel: started
remote: Getting requirements to build wheel: finished with status 'done'
remote: Preparing metadata (pyproject.toml): started
remote: Preparing metadata (pyproject.toml): finished with status 'done'
remote: Collecting tensorflow>=1.9.0
remote: Downloading tensorflow-2.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (511.7 MB)
remote: Collecting h11>=0.8
remote: Downloading h11-0.13.0-py3-none-any.whl (58 kB)
remote: Collecting click>=7.0
remote: Downloading click-8.1.3-py3-none-any.whl (96 kB)
remote: Collecting asgiref>=3.4.0
remote: Downloading asgiref-3.5.2-py3-none-any.whl (22 kB)
remote: Collecting typing-extensions
remote: Downloading typing_extensions-4.2.0-py3-none-any.whl (24 kB)
remote: Collecting requests
remote: Downloading requests-2.28.0-py3-none-any.whl (62 kB)
remote: Collecting opt-einsum>=2.3.2
remote: Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
remote: Collecting flatbuffers>=1.12
remote: Downloading flatbuffers-2.0-py2.py3-none-any.whl (26 kB)
remote: Collecting gast>=0.2.1
remote: Downloading gast-0.5.3-py3-none-any.whl (19 kB)
remote: Collecting keras-preprocessing>=1.1.1
remote: Downloading Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
remote: Collecting libclang>=9.0.1
remote: Downloading libclang-14.0.1-py2.py3-none-manylinux1_x86_64.whl (14.5 MB)
remote: Collecting grpcio<2.0,>=1.24.3
remote: Downloading grpcio-1.46.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.4 MB)
remote: Collecting tensorboard<2.9,>=2.8
remote: Downloading tensorboard-2.8.0-py3-none-any.whl (5.8 MB)
remote: Collecting astunparse>=1.6.0
remote: Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
remote: Collecting protobuf>=3.9.2
remote: Downloading protobuf-4.21.1-cp37-abi3-manylinux2014_x86_64.whl (407 kB)
remote: Collecting keras<2.9,>=2.8.0rc0
remote: Downloading keras-2.8.0-py2.py3-none-any.whl (1.4 MB)
remote: Collecting six>=1.12.0
remote: Downloading six-1.16.0-py2.py3-none-any.whl (11 kB)
remote: Collecting termcolor>=1.1.0
remote: Downloading termcolor-1.1.0.tar.gz (3.9 kB)
remote: Preparing metadata (setup.py): started
remote: Preparing metadata (setup.py): finished with status 'done'
remote: Collecting absl-py>=0.4.0
remote: Downloading absl_py-1.1.0-py3-none-any.whl (123 kB)
remote: Collecting google-pasta>=0.1.1
remote: Downloading google_pasta-0.2.0-py3-none-any.whl (57 kB)
remote: Collecting tensorflow-io-gcs-filesystem>=0.23.1
remote: Downloading tensorflow_io_gcs_filesystem-0.26.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (2.4 MB)
remote: Collecting h5py>=2.9.0
remote: Downloading h5py-3.7.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.5 MB)
remote: Collecting tf-estimator-nightly==2.8.0.dev2021122109
remote: Downloading tf_estimator_nightly-2.8.0.dev2021122109-py2.py3-none-any.whl (462 kB)
remote: Collecting wrapt>=1.11.0
remote: Downloading wrapt-1.14.1-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (77 kB)
remote: Collecting anyio<5,>=3.4.0
remote: Downloading anyio-3.6.1-py3-none-any.whl (80 kB)
remote: Collecting tqdm
remote: Downloading tqdm-4.64.0-py2.py3-none-any.whl (78 kB)
remote: Collecting filelock
remote: Downloading filelock-3.7.1-py3-none-any.whl (10 kB)
remote: Collecting beautifulsoup4
remote: Downloading beautifulsoup4-4.11.1-py3-none-any.whl (128 kB)
remote: Collecting werkzeug>=0.11.15
remote: Downloading Werkzeug-2.1.2-py3-none-any.whl (224 kB)
remote: Collecting google-auth-oauthlib<0.5,>=0.4.1
remote: Downloading google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
remote: Collecting markdown>=2.6.8
remote: Downloading Markdown-3.3.7-py3-none-any.whl (97 kB)
remote: Collecting google-auth<3,>=1.6.3
remote: Downloading google_auth-2.7.0-py2.py3-none-any.whl (160 kB)
remote: Collecting tensorboard-data-server<0.7.0,>=0.6.0
remote: Downloading tensorboard_data_server-0.6.1-py3-none-manylinux2010_x86_64.whl (4.9 MB)
remote: Collecting tensorboard-plugin-wit>=1.6.0
remote: Downloading tensorboard_plugin_wit-1.8.1-py3-none-any.whl (781 kB)
remote: Collecting charset-normalizer~=2.0.0
remote: Downloading charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
remote: Collecting urllib3<1.27,>=1.21.1
remote: Downloading urllib3-1.26.9-py2.py3-none-any.whl (138 kB)
remote: Collecting idna<4,>=2.5
remote: Downloading idna-3.3-py3-none-any.whl (61 kB)
remote: Collecting certifi>=2017.4.17
remote: Downloading certifi-2022.5.18.1-py3-none-any.whl (155 kB)
remote: Collecting gast>=0.2.1
remote: Downloading gast-0.4.0-py3-none-any.whl (9.8 kB)
remote: Collecting tensorflow-estimator<2.10.0,>=2.9.0rc0
remote: Downloading tensorflow_estimator-2.9.0-py2.py3-none-any.whl (438 kB)
remote: Collecting tensorflow>=1.9.0
remote: Downloading tensorflow-2.9.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (511.7 MB)
remote: Downloading tensorflow-2.8.2-cp310-cp310-manylinux2010_x86_64.whl (498.0 MB)
remote: Collecting tensorflow-estimator<2.9,>=2.8
remote: Downloading tensorflow_estimator-2.8.0-py2.py3-none-any.whl (462 kB)
remote: Collecting protobuf>=3.9.2
remote: Downloading protobuf-3.19.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.1 MB)
remote: Collecting sniffio>=1.1
remote: Downloading sniffio-1.2.0-py3-none-any.whl (10 kB)
remote: Collecting rsa<5,>=3.1.4
remote: Downloading rsa-4.8-py3-none-any.whl (39 kB)
remote: Collecting cachetools<6.0,>=2.0.0
remote: Downloading cachetools-5.2.0-py3-none-any.whl (9.3 kB)
remote: Collecting pyasn1-modules>=0.2.1
remote: Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
remote: Collecting requests-oauthlib>=0.7.0
remote: Downloading requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
remote: Collecting soupsieve>1.2
remote: Downloading soupsieve-2.3.2.post1-py3-none-any.whl (37 kB)
remote: Collecting PySocks!=1.5.7,>=1.5.6
remote: Downloading PySocks-1.7.1-py3-none-any.whl (16 kB)
remote: Collecting pyasn1<0.5.0,>=0.4.6
remote: Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
remote: Collecting oauthlib>=3.0.0
remote: Downloading oauthlib-3.2.0-py3-none-any.whl (151 kB)
remote: Building wheels for collected packages: gdown, termcolor
remote: Building wheel for gdown (pyproject.toml): started
remote: Building wheel for gdown (pyproject.toml): finished with status 'done'
remote: Created wheel for gdown: filename=gdown-4.4.0-py3-none-any.whl size=14759 sha256=7708c4f7156a089f3fc5fc4488a952096a5a55641ba3eb077fbe30557bad88b9
remote: Stored in directory: /tmp/pip-ephem-wheel-cache-gs7e9o2q/wheels/03/0b/3f/6ddf67a417a5b400b213b0bb772a50276c199a386b12c06bfc
remote: Building wheel for termcolor (setup.py): started
remote: Building wheel for termcolor (setup.py): finished with status 'done'
remote: Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=1d47aa236c647ae87a51f34c01fb6c879088c2d0af3e04cd7a949a4b8b0ae977
remote: Stored in directory: /tmp/pip-ephem-wheel-cache-gs7e9o2q/wheels/a1/49/46/1b13a65d8da11238af9616b00fdde6d45b0f95d9291bac8452
remote: Successfully built gdown termcolor
remote: Installing collected packages: tf-estimator-nightly, termcolor, tensorflow-estimator, tensorboard-plugin-wit, pyasn1, libclang, keras, flatbuffers, wrapt, werkzeug, urllib3, typing-extensions, tqdm, tensorflow-io-gcs-filesystem, tensorboard-data-server, soupsieve, sniffio, six, rsa, PySocks, pyasn1-modules, protobuf, Pillow, oauthlib, numpy, markdown, idna, h11, gunicorn, gast, filelock, click, charset-normalizer, certifi, cachetools, asgiref, absl-py, uvicorn, torch, requests, pydantic, opt-einsum, opencv-python, keras-preprocessing, h5py, grpcio, google-pasta, google-auth, beautifulsoup4, astunparse, anyio, torchvision, starlette, requests-oauthlib, google-auth-oauthlib, gdown, fastapi, tensorboard, tensorflow-cpu, tensorflow, retina-face
remote: Successfully installed Pillow-8.4.0 PySocks-1.7.1 absl-py-1.1.0 anyio-3.6.1 asgiref-3.5.2 astunparse-1.6.3 beautifulsoup4-4.11.1 cachetools-5.2.0 certifi-2022.5.18.1 charset-normalizer-2.0.12 click-8.1.3 fastapi-0.78.0 filelock-3.7.1 flatbuffers-2.0 gast-0.5.3 gdown-4.4.0 google-auth-2.7.0 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 grpcio-1.46.3 gunicorn-20.1.0 h11-0.13.0 h5py-3.7.0 idna-3.3 keras-2.8.0 keras-preprocessing-1.1.2 libclang-14.0.1 markdown-3.3.7 numpy-1.22.4 oauthlib-3.2.0 opencv-python-4.5.4.58 opt-einsum-3.3.0 protobuf-3.19.4 pyasn1-0.4.8 pyasn1-modules-0.2.8 pydantic-1.9.1 requests-2.28.0 requests-oauthlib-1.3.1 retina-face-0.0.12 rsa-4.8 six-1.16.0 sniffio-1.2.0 soupsieve-2.3.2.post1 starlette-0.19.1 tensorboard-2.8.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-2.8.2 tensorflow-cpu-2.8.0 tensorflow-estimator-2.8.0 tensorflow-io-gcs-filesystem-0.26.0 termcolor-1.1.0 tf-estimator-nightly-2.8.0.dev2021122109 torch-1.11.0+cpu torchvision-0.12.0+cpu tqdm-4.64.0 typing-extensions-4.2.0 urllib3-1.26.9 uvicorn-0.17.6 werkzeug-2.1.2 wrapt-1.14.1
remote: -----> Discovering process types
remote: Procfile declares types -> web
remote:
remote: -----> Compressing...
remote: ! Compiled slug size: 826.6M is too large (max is 500M).
remote: ! See: http://devcenter.heroku.com/articles/slug-size
remote:
remote: ! Push failed
remote: Verifying deploy....
remote:
remote: ! Push rejected to lit-castle-78959.
remote:
| You are installing both PyTorch and TensorFlow in the same slug.
Both libraries are known to be large, and they are rarely deployed in the same slug.
Since they both aim to solve the same problem (neural networks / tensor computation), do you really need both? If yes, try to split your application and create two different slugs.
I usually deploy slugs with either torch==1.11.0+cpu or tensorflow-cpu==2.1.0.
| https://stackoverflow.com/questions/72598691/ |
AttributeError: 'MaskedLMOutput' object has no attribute 'view' | Sorry to bother you: I got this error when evaluating some models, and I couldn't find a good way to fix it.
What does 'MaskedLMOutput' mean? Could somebody tell me how to fix this, please? Thank you.
(AttributeError: 'MaskedLMOutput' object has no attribute 'view')
import torch
import torch.nn as nn
from transformers import BertForMaskedLM
class BertPunc(nn.Module):
def __init__(self, segment_size, output_size, dropout):
super(BertPunc, self).__init__()
self.bert = BertForMaskedLM.from_pretrained('cl-tohoku/bert-base-japanese')
self.bert_vocab_size = 32000
self.bn = nn.BatchNorm1d(segment_size*self.bert_vocab_size)
self.fc = nn.Linear(segment_size*self.bert_vocab_size, output_size)
self.dropout = nn.Dropout(dropout)
def forward(self, input):
x = self.bert(input)
x = x.view(x.shape[0], -1) # wrong thing here
x = self.fc(self.dropout(self.bn(x)))
return x
I ran this in a Jupyter notebook:
def predictions(data_loader):
y_pred = []
y_true = []
for inputs, labels in tqdm(data_loader, total=len(data_loader)):
with torch.no_grad():
inputs, labels = inputs.cuda(), labels.cuda()
output = bert_punc(inputs)
y_pred += list(output.argmax(dim=1).cpu().data.numpy().flatten())
y_true += list(labels.cpu().data.numpy().flatten())
return y_pred, y_true
def evaluation(y_pred, y_test):
precision, recall, f1, _ = metrics.precision_recall_fscore_support(
y_test, y_pred, average=None, labels=[1, 2, 3])
overall = metrics.precision_recall_fscore_support(
y_test, y_pred, average='macro', labels=[1, 2, 3])
result = pd.DataFrame(
np.array([precision, recall, f1]),
columns=list(punctuation_enc.keys())[1:],
index=['Precision', 'Recall', 'F1']
)
result['OVERALL'] = overall[:3]
return result
y_pred_test, y_true_test = predictions(data_loader_test)
eval_test = evaluation(y_pred_test, y_true_test)
eval_test
The error occurs here:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [12], in <cell line: 1>()
----> 1 y_pred_test, y_true_test = predictions(data_loader_test)
2 eval_test = evaluation(y_pred_test, y_true_test)
3 eval_test
Input In [10], in predictions(data_loader)
5 with torch.no_grad():
6 inputs, labels = inputs.cuda(), labels.cuda()
----> 7 output = bert_punc(inputs)
8 y_pred += list(output.argmax(dim=1).cpu().data.numpy().flatten())
9 y_true += list(labels.cpu().data.numpy().flatten())
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py:166, in DataParallel.forward(self, *inputs, **kwargs)
163 kwargs = ({},)
165 if len(self.device_ids) == 1:
--> 166 return self.module(*inputs[0], **kwargs[0])
167 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
168 outputs = self.parallel_apply(replicas, inputs, kwargs)
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File F:\BertPunc-master\model.py:19, in BertPunc.forward(self, input)
16 def forward(self, input):
18 x = self.bert(input)
---> 19 x = x.view(x.shape[0], -1)
20 x = self.fc(self.dropout(self.bn(x)))
21 return x
AttributeError: 'MaskedLMOutput' object has no attribute 'view'
| You can refer to the documentation of MaskedLMOutput. Basically, it is an object holding the loss, logits, hidden_states and attentions. It is not a tensor, so you are getting this error. I think you are interested in the logits, i.e., the score for each token before applying softmax. Then, in the forward function, you can access the logits tensor like this:
x = self.bert(input).logits
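So, in the question's model, the forward would become (a sketch; the rest of BertPunc is unchanged):
def forward(self, input):
    x = self.bert(input).logits           # MaskedLMOutput -> raw per-token scores
    x = x.view(x.shape[0], -1)            # now a plain tensor, so .view works
    x = self.fc(self.dropout(self.bn(x)))
    return x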
| https://stackoverflow.com/questions/72601057/ |
PyTorch CNN Different Input Size | Hello guys, I have a question about different input sizes.
My training set and validation dataset have an input size of 256, while my prediction (on an unseen test dataset) has an input size of 496.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
def __init__(self, shape):
super(Net,self).__init__()
self.conv1 = nn.Conv1d(shape,1,1)
self.batch1 = nn.BatchNorm1d(1)
self.avgpl1 = nn.AvgPool1d(1, stride=1)
self.fc1 = nn.Linear(1,3)
#forward method
def forward(self,x):
x = self.conv1(x)
x = self.batch1(x)
x = F.relu(x)
x = self.avgpl1(x)
x = torch.flatten(x,1)
x = F.log_softmax(self.fc1(x))
return x
I saved the model and want to use it for my prediction as well.
The error message is:
Input In [244], in predict_data(prediction_data, model_path, data_config, context)
25 new_model = Net(shape_preprocessed_data)
26 # load the previously saved state_dict
---> 27 new_model.load_state_dict(torch.load("NetModel.pth"))
29 # check if predictions of models are equal
30
31 # generate random input of size (N,C,H,W)
32
33 # switch to eval mode for both models
34 model = model.eval()
RuntimeError: Error(s) in loading state_dict for Net:
size mismatch for conv1.weight: copying a param with shape
torch.Size([1, 256, 1]) from checkpoint, the shape in current model is torch.Size([1, 494, 1]).
How can I solve this?
| You could reshape/downsample the input as the first step of the forward pass in your model. This can be done using the torch.nn.functional.interpolate function.
For example:
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
def __init__(self, shape):
super(Net,self).__init__()
self.input_shape = shape
self.conv1 = nn.Conv1d(shape,1,1)
self.batch1 = nn.BatchNorm1d(1)
self.avgpl1 = nn.AvgPool1d(1, stride=1)
self.fc1 = nn.Linear(1,3)
#forward method
def forward(self,x):
x = torch.nn.functional.interpolate(x, size=self.input_shape)
x = self.conv1(x)
x = self.batch1(x)
x = F.relu(x)
x = self.avgpl1(x)
x = torch.flatten(x,1)
        x = F.log_softmax(self.fc1(x), dim=1)
return x
Your test images would then be downsampled to size 256 in order to be compatible with the model.
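As a quick sanity check of what F.interpolate does for a 3D input (it resamples the trailing, length dimension; the shapes here are assumptions):
import torch
import torch.nn.functional as F

x = torch.rand(8, 1, 496)           # hypothetical test-time input
x = F.interpolate(x, size=256)      # resample the trailing dimension to 256
print(x.shape)                      # torch.Size([8, 1, 256])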
| https://stackoverflow.com/questions/72601248/ |
Running Detectron2 locally - windows - [Pytorch Config error] | I am trying to run this code locally:
https://gist.github.com/shashank524/74d8f46d5de633b84e2265fcc34774de#file-tabledetection-ipynb
After installing required packages, when I am trying to run this line:
import layoutparser as lp
# PubLayNet
model = lp.Detectron2LayoutModel('lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config', extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.81], label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"})
I receive this error:
OSError: [Errno 22] Invalid argument: 'C:\\Users\\username/.torch/iopath_cache\\s/f3b12qc4hc0yh4m\\config.yml?dl=1.lock'
I looked into the directory and there was no config file available.
I tried to download the config file from here (https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html) and put it in the directory but it didn't solve the issue!
| I got a similar error too, and worked around it manually on Windows.
I am using your case as example: OSError: [Errno 22] Invalid argument: 'C:\Users\username/.torch/iopath_cache\s/f3b12qc4hc0yh4m\config.yml?dl=1.lock'
Please follow this process:
Navigate to C:\Users\username/.torch/iopath_cache\s/f3b12qc4hc0yh4m\config.yml
Open that config.yaml file
Scroll down to WEIGHTS: https://www.dropbox.com/s/h7th27jfv19rxiy/model_final.pth?dl=1; it should be around line 265.
Copy that link and paste it in your browser, a 'model_final.pth' will be downloaded. Copy this file to your desired folder.
Now replace the path to WEIGHTS: your_desired_folder\model_final.pth
Save it and run the code; it works!
There is also a small iopath workaround you may need before doing this (if you have not done it already):
https://github.com/Layout-Parser/layout-parser/issues/15 (Github link to the issue)
Hope this helps!
| https://stackoverflow.com/questions/72603852/ |
How to check if any of the gradients in a PyTorch model is nan? | I have a toy model:
import torch
import torch.nn as nn
import torch.optim as optim
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.fc1 = nn.Linear(1, 2)
self.fc2 = nn.Linear(2, 3)
self.fc3 = nn.Linear(3, 1)
def forward(self, x):
x = self.fc1(x)
x = torch.relu(x)
x = torch.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Model()
opt = optim.Adam(net.parameters())
The training loop is
features = torch.rand((3,1))
for i in range(10):
opt.zero_grad()
out = net(features)
loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out)))
loss.backward()
opt.step()
How can I check if any of the gradients is NaN? That is, if just one of the gradients is NaN, print something / break.
pseudocode:
for i in range(10):
opt.zero_grad()
out = net(features)
loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out)))
loss.backward()
if_gradients_nan:
print("NAN")
opt.step()
| You can check as below. This approach only checks the gradients with respect to the model parameters; it does not look at intermediate gradients. In fact, those intermediate gradients do not exist after loss.backward() is called without the retain_graph=True argument. For demonstration purposes, I have multiplied the output of the first torch.relu(x) by float("inf") so that some of the gradients become NaN.
...
def forward(self, x):
x = self.fc1(x)
x = torch.relu(x) * float("inf")
x = torch.relu(self.fc2(x))
x = self.fc3(x)
return x
...
loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out)))
loss.backward()
for name, param in net.named_parameters():
print(name, torch.isnan(param.grad))
opt.step()
This prints
fc1.weight tensor([[False],
[False]])
fc1.bias tensor([False, False])
fc2.weight tensor([[True, True],
[True, True],
[True, True]])
fc2.bias tensor([True, True, True])
fc3.weight tensor([[True, True, True]])
fc3.bias tensor([True])
fc1.weight tensor([[False],
[False]])
fc1.bias tensor([False, False])
fc2.weight tensor([[True, True],
[True, True],
[True, True]])
...
To check if any of the gradients is nan, you can use
for name, param in net.named_parameters():
if torch.isnan(param.grad).any():
print("nan gradient found")
raise SystemExit
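If you only need a single boolean for the whole model, the same check collapses to a one-liner:
if any(p.grad is not None and torch.isnan(p.grad).any() for p in net.parameters()):
    print("NAN")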
| https://stackoverflow.com/questions/72605066/ |
MobileViT binary classification ValueError: `logits` and `labels` must have the same shape, received ((None, 2) vs (None, 1)) | I am using the Colab notebook (https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/mobilevit.ipynb) for MobileViT to train on a dataset of 25k pictures with 2 classes. Since it's binary classification, I have used keras.losses.BinaryCrossentropy and sigmoid as the activation function at the last layer:
def create_mobilevit(num_classes=2):
inputs = keras.Input((image_size, image_size, 3))
x = layers.Rescaling(scale=1.0 / 255)(inputs)
# Initial conv-stem -> MV2 block.
x = conv_block(x, filters=16)
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=16
)
# Downsampling with MV2 block.
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=24, strides=2
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
# First MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=48, strides=2
)
x = mobilevit_block(x, num_blocks=2, projection_dim=64)
# Second MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=64 * expansion_factor, output_channels=64, strides=2
)
x = mobilevit_block(x, num_blocks=4, projection_dim=80)
# Third MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=80 * expansion_factor, output_channels=80, strides=2
)
x = mobilevit_block(x, num_blocks=3, projection_dim=96)
x = conv_block(x, filters=320, kernel_size=1, strides=1)
# Classification head.
x = layers.GlobalAvgPool2D()(x)
outputs = layers.Dense(num_classes, activation="sigmoid")(x)
return keras.Model(inputs, outputs)
And here's my dataset preparation cell:
batch_size = 64
auto = tf.data.AUTOTUNE
resize_bigger = 512
num_classes = 2
def preprocess_dataset(is_training=True):
def _pp(image, label):
if is_training:
# Resize to a bigger spatial resolution and take the random
# crops.
image = tf.image.resize(image, (resize_bigger, resize_bigger))
image = tf.image.random_crop(image, (image_size, image_size, 3))
image = tf.image.random_flip_left_right(image)
else:
image = tf.image.resize(image, (image_size, image_size))
label = tf.one_hot(label, depth=num_classes)
return image, label
return _pp
def prepare_dataset(dataset, is_training=True):
if is_training:
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.map(preprocess_dataset(is_training), num_parallel_calls=auto)
return dataset.batch(batch_size).prefetch(auto)
And this is the cell for training the model:
learning_rate = 0.002
label_smoothing_factor = 0.1
epochs = 30
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
loss_fn = keras.losses.BinaryCrossentropy(label_smoothing=label_smoothing_factor)
def run_experiment(epochs=epochs):
mobilevit_xxs = create_mobilevit(num_classes=num_classes)
mobilevit_xxs.compile(optimizer=optimizer, loss=loss_fn, metrics=["accuracy"])
checkpoint_filepath = "/tmp/checkpoint"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath,
monitor="val_accuracy",
save_best_only=True,
save_weights_only=True,
)
mobilevit_xxs.fit(
train_ds,
validation_data=val_ds,
epochs=epochs,
callbacks=[checkpoint_callback],
)
mobilevit_xxs.load_weights(checkpoint_filepath)
_, accuracy = mobilevit_xxs.evaluate(val_ds)
print(f"Validation accuracy: {round(accuracy * 100, 2)}%")
return mobilevit_xxs
mobilevit_xxs = run_experiment()
Basically the code is identical to https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/mobilevit.ipynb except for the change to BinaryCrossentropy loss and sigmoid as the activation function. I don't understand why I am getting this even though I explicitly one-hot-encoded my class labels -
ValueError: `logits` and `labels` must have the same shape, received ((None, 2) vs (None, 1)).
| You need to change num_classes = 2 to num_classes = 1, since you used the sigmoid activation function, which returns a single value between 0 and 1 for binary classification (0, 1).
Values < 0.5 are then treated as class 0, and values > 0.5 as class 1.
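A minimal self-contained sketch of the shape logic (toy inputs and layer sizes are assumptions, not the MobileViT model): with a single sigmoid unit the model outputs (None, 1), which matches plain 0/1 labels, so the tf.one_hot step in preprocess_dataset is no longer needed.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input((8,))
outputs = layers.Dense(1, activation="sigmoid")(inputs)  # num_classes = 1
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss=keras.losses.BinaryCrossentropy())

x = tf.random.normal((4, 8))
y = tf.constant([[0.0], [1.0], [1.0], [0.0]])  # plain 0/1 labels, no one-hot
model.fit(x, y, epochs=1, verbose=0)           # (None, 1) logits vs (None, 1) labels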
Please refer to the replicated gist for your reference.
| https://stackoverflow.com/questions/72605644/ |
RuntimeError: Trying to backward through the graph a second time. Saved intermediate values of the graph are freed when you call .backward() | I am trying to train SRGAN from scratch. I have read solutions for this type of problem, but it would be great if someone could help me debug my code. The exact error is: "RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad()" Here is the snippet I am trying to train:
gen_model = Generator().to(device, non_blocking=True)
disc_model = Discriminator().to(device, non_blocking=True)
opt_gen = optim.Adam(gen_model.parameters(), lr=0.01)
opt_disc = optim.Adam(disc_model.parameters(), lr=0.01)
from torch.nn.modules.loss import BCELoss
def train_model(gen, disc):
for epoch in range(20):
run_loss_disc = 0
run_loss_gen = 0
for data in train:
low_res, high_res = data[0].to(device, non_blocking=True, dtype=torch.float).permute(0, 3, 1, 2),data[1].to(device, non_blocking=True, dtype=torch.float).permute(0, 3, 1, 2)
#--------Discriminator-----------------
gen_image = gen(low_res)
gen_image = gen_image.detach()
disc_gen = disc(gen_image)
disc_real = disc(high_res)
p=nn.BCEWithLogitsLoss()
loss_gen = p(disc_real, torch.ones_like(disc_real))
loss_real = p(disc_gen, torch.zeros_like(disc_gen))
loss_disc = loss_gen + loss_real
opt_disc.zero_grad()
loss_disc.backward()
run_loss_disc+=loss_disc
#---------Generator--------------------
cont_loss = vgg_loss(high_res, gen_image)
adv_loss = 1e-3*p(disc_gen, torch.ones_like(disc_gen))
gen_loss = cont_loss+(10^-3)*adv_loss
opt_gen.zero_grad()
gen_loss.backward()
opt_disc.step()
opt_gen.step()
run_loss_gen+=gen_loss
print("Run Loss Discriminator: %d", run_loss_disc)
print("Run Loss Generator: %d", run_loss_gen)
train_model(gen_model, disc_model)
| Apparently the graph behind your disc_gen value was freed by the first backward() call, as the error says.
It should work if you change the discriminator part a bit:
gen_image = gen(low_res)
disc_gen = disc(gen_image.detach())
and add this at the start of the generator part:
disc_gen = disc(gen_image)
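Putting both changes together, the relevant part of the training loop would look like this (a sketch; variable names as in the question):
# --- Discriminator ---
gen_image = gen(low_res)               # keep the graph alive for the generator step
disc_gen = disc(gen_image.detach())    # detach only the copy fed to the discriminator
disc_real = disc(high_res)
# ... discriminator losses, opt_disc.zero_grad(), loss_disc.backward() ...

# --- Generator ---
disc_gen = disc(gen_image)             # fresh forward pass on the non-detached image
# ... generator losses, opt_gen.zero_grad(), gen_loss.backward() ...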
| https://stackoverflow.com/questions/72607593/ |
In the latest version of PyTorch, what is best practice to get all tensors to use a particular device by default? | In pytorch, if I do something like
import torch
x = torch.randn(3)
y = x + 5
all tensors correspond to the "cpu" device by default. Is there some way to make it so that, by default, all tensors are on another device (e.g. "cuda:0")?
I know I can always be careful to add .cuda() or specify cuda whenever creating a tensor, but it would be great if I could just change the default device directly at the beginning of the program and be done with it, so that torch.randn(3) comes from the desired device without having to specify it every time.
Or would that be a bad thing to do for some reason? E.g. is there any reason I wouldn't want every tensor/operation to be done on cuda by default?
| PyTorch has an optional function to change the default tensor type: set_default_tensor_type. Applying the default type in the main script:
>>> import torch
>>>
>>> if __name__ == '__main__':
... cuda = torch.cuda.is_available()
... if cuda:
... torch.set_default_tensor_type('torch.cuda.FloatTensor')
... a = torch.randn(3,3)
... print(a.device)
...
cuda:0
Or would that be a bad thing to do for some reason? E.g. is there any reason I wouldn't want every tensor/operation to be done on cuda by default?
I couldn't find any reference or document that answers this question directly. However, in my opinion, it's to avoid memory fragmentation in GPU memory.
I'm not an expert, but data in memory should be arranged efficiently; if not, the redundant space will cause an OOM. That's why, by default, TensorFlow takes all of your GPU's memory no matter how many parameters your model has. You can also improve space and speed just by making tensor shapes multiples of 8, as noted in the AMP documents:
In practice, higher performance is achieved when A and B dimensions are multiples of 8.
In conclusion, I think it's better to control the device of each tensor manually instead of setting the GPU as the default.
| https://stackoverflow.com/questions/72610665/ |
How to reshape a dataset in order to make it sequential? | I have a classic dataset of images and labels.
Here is a simple representation of the __getitem__ function :
def __getitem__(self, index):
(img_path, label) = df.iloc[index].values
img = Image.open(img_path).convert("RGB")
    y = torch.tensor(label)
return (img, y)
I have :
dataset = ClassDataset()
train_set, validation_set = random_split(dataset)
train_loader = DataLoader(dataset=train_set)
The size of one batch of the train loader would be : [32,3,256,256]
With 32 being the batch size, 3 the number of channels and 256 the width and height of my image.
I want to modify the shape of one batch so that it is sequential [8,4,3,256,256] with 8 being the batch size and 4 the length of one sequence.
I know that it could be easily done with torch.view() or torch.reshape() knowing that my data are already in the right order (they can be grouped directly into sequences).
But I want to know where the most intelligent place to make this change is: in the dataset class, in the dataloader class, or in the training loop.
I already tried passing sequences into __getitem__:
(img_path, coords) = df.iloc[4*(index-1):4*index].values
(assuming that sequence length is 4), but it didn't work.
| It is more relevant to do this kind of processing in the dataset layer. Indeed, what you are looking to implement there is "given a dataset index, return the corresponding input and its label". In your case you are dealing with a sequence as input, so something like this makes sense for your __getitem__ to return a sequence of images.
The data loader will automatically collate the data such that you get (batch_size, seq_len, channel, height, width) for your input, and (batch_size, seq_len) for your label (or (batch_size,) if there is meant to be a single label per sequence).
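A hedged sketch of such a dataset (the class name and the fixed sequence length of 4 are assumptions; all images are assumed to share the same size):
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor

class SequenceDataset(torch.utils.data.Dataset):
    def __init__(self, df, seq_len=4):
        self.df = df
        self.seq_len = seq_len

    def __len__(self):
        # one item per full window of seq_len consecutive rows
        return len(self.df) // self.seq_len

    def __getitem__(self, index):
        rows = self.df.iloc[index * self.seq_len:(index + 1) * self.seq_len]
        imgs, labels = [], []
        for img_path, label in rows.values:
            imgs.append(to_tensor(Image.open(img_path).convert("RGB")))
            labels.append(label)
        # shapes: (seq_len, C, H, W) and (seq_len,); the DataLoader adds the batch dim
        return torch.stack(imgs), torch.tensor(labels)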
| https://stackoverflow.com/questions/72618833/ |
Why do the TensorFlow and PyTorch cross-entropy losses return different values for the same example | I have tried comparing TensorFlow's and PyTorch's cross-entropy losses, but they return different values for the same example and I don't know why. Please find the code and results below. Thanks for your input and help.
import tensorflow as tf
import numpy as np
y_true = [3, 3, 1]
y_pred = [
[0.3377, 0.4867, 0.8842, 0.0854, 0.2147],
[0.4853, 0.0468, 0.6769, 0.5482, 0.1570],
[0.0976, 0.9899, 0.6903, 0.0828, 0.0647]
]
scce3 = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.AUTO)
loss3 = scce3(y_true, y_pred).numpy()
print(loss3)
The result for above is : 1.69
Pytorch loss:
from torch import nn
import torch
loss = nn.CrossEntropyLoss()
y_true = torch.Tensor([3, 3, 1]).long()
y_pred = torch.Tensor([
[0.3377, 0.4867, 0.8842, 0.0854, 0.2147],
[0.4853, 0.0468, 0.6769, 0.5482, 0.1570],
[0.0976, 0.9899, 0.6903, 0.0828, 0.0647]
])
loss2 = loss(y_pred, y_true)
print(loss2)
The loss value for above is: 1.5
| TensorFlow's cross-entropy expects probabilities as inputs (i.e. values after a tf.nn.softmax operation), whereas PyTorch's CrossEntropyLoss expects raw inputs, more commonly called logits. If you apply the softmax operation first, the values should be the same:
import tensorflow as tf
import numpy as np
y_true = [3, 3, 1]
y_pred = [
[0.3377, 0.4867, 0.8842, 0.0854, 0.2147],
[0.4853, 0.0468, 0.6769, 0.5482, 0.1570],
[0.0976, 0.9899, 0.6903, 0.0828, 0.0647]
]
scce3 = tf.keras.losses.SparseCategoricalCrossentropy(reduction=tf.keras.losses.Reduction.AUTO)
loss3 = scce3(y_true, tf.nn.softmax(y_pred)).numpy()
print(loss3)
>>> 1.5067214
from torch import nn
import torch
loss = nn.CrossEntropyLoss()
y_true = torch.Tensor([3, 3, 1]).long()
y_pred = torch.Tensor([
[0.3377, 0.4867, 0.8842, 0.0854, 0.2147],
[0.4853, 0.0468, 0.6769, 0.5482, 0.1570],
[0.0976, 0.9899, 0.6903, 0.0828, 0.0647]
])
loss2 = loss(y_pred, y_true)
print(loss2)
>>> tensor(1.5067)
Using the raw inputs (logits) is usually advised due to the LogSumExp trick for numerical stability. If you are using Tensorflow, I'd suggest using the tf.nn.softmax_cross_entropy_with_logits function instead, or its sparse counterpart. Edit: The SparseCategoricalCrossentropy class also has a keyword argument from_logits=False that can be set to True to the same effect.
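Equivalently, you can keep the raw predictions on the TensorFlow side and tell the loss they are logits, which reproduces the PyTorch value:
scce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(scce(y_true, y_pred).numpy())
>>> 1.5067214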
| https://stackoverflow.com/questions/72622202/ |
PyTorch TransformerEncoderLayer: different input order gets different results | Before I start: I'm very new to Transformers, so sorry for my bad sentence structure; I have a fever right now.
Any time I use nn.TransformerEncoderLayer in any way with a saved model, I get different results if the data is in a different order. Is there a way to save the encoding table (or whatever this would be)? This would be in the MultiheadAttention part of the TransformerEncoderLayer, right?
I just use TransformerEncoderLayer, save the model, and then use np.random.permutation() to shuffle the input data, running it through the model after calling model.eval(). This always gives me different results unless I use the same order every time.
I have this layer in my model like this
self.transformer = nn.TransformerEncoderLayer()
I save the model like so
torch.save(model, path)
Does this not save the nn.TransformerEncoderLayer() or something?
| Got it. What is the shape of your input, and are you using batch_first=True or not? Basically, one thing to make sure of is that you don't have your batch and sequence dimensions mixed up in your implementation.
batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False (seq, batch, feature).
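A quick sketch to verify the point: with batch_first=True and the module in eval mode (so dropout is off), permuting samples along the batch dimension only permutes the outputs:
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True).eval()
x = torch.rand(8, 5, 16)                 # (batch, seq, feature)
perm = torch.randperm(8)
with torch.no_grad():
    out1 = layer(x)
    out2 = layer(x[perm])
print(torch.allclose(out1[perm], out2, atol=1e-6))  # True: batch order doesn't matter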
| https://stackoverflow.com/questions/72623363/ |
timm: TypeError: 'builtin_function_or_method' object is not subscriptable | I am implementing the timm tutorials on data augmentation to increase the number of images in my dataset. Following their tutorials, I implemented the same code, but it did not work.
Code
import numpy as np
import torch
from PIL import Image
from timm.data.transforms_factory import create_transform
a = create_transform(224, is_training=True)
print(a)
pets_image_paths = './download.png'
image = Image.open(pets_image_paths)
# We can convert this into a tensor, and transpose the channels into the format that PyTorch expects:
np_image = np.array(image, dtype=np.float32)
image = torch.as_tensor(np_image).transpose(2, 0)[None]
from timm.data.transforms import RandomResizedCropAndInterpolation
tfm = RandomResizedCropAndInterpolation(size=350, interpolation='random')
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2, 4, figsize=(10, 5))
for idx, im in enumerate([tfm(image) for i in range(4)]):
ax[0, idx].imshow(im)
for idx, im in enumerate([tfm(image) for i in range(4)]):
ax[1, idx].imshow(im)
fig.tight_layout()
plt.show()
Traceback
Traceback (most recent call last):
File "/home/cvpr/PycharmProjects/timm_tutorials/9_augmentation.py", line 24, in <module>
for idx, im in enumerate([tfm(image) for i in range(4)]):
File "/home/cvpr/PycharmProjects/timm_tutorials/9_augmentation.py", line 24, in <listcomp>
for idx, im in enumerate([tfm(image) for i in range(4)]):
File "/home/cvpr/anaconda3/envs/timm_tutorials/lib/python3.8/site-packages/timm/data/transforms.py", line 181, in __call__
i, j, h, w = self.get_params(img, self.scale, self.ratio)
File "/home/cvpr/anaconda3/envs/timm_tutorials/lib/python3.8/site-packages/timm/data/transforms.py", line 143, in get_params
area = img.size[0] * img.size[1]
TypeError: 'builtin_function_or_method' object is not subscriptable
| As the previous answer mentions, RandomResizedCropAndInterpolation expects a PIL.Image.
You can look at timm docs in the Note:
Note: RandomResizedCropAndInterpolation expects the input to be an instance of PIL.Image and not torch.tensor.
So you can remove the tensor conversion:
import numpy as np
import torch
from PIL import Image
from timm.data.transforms_factory import create_transform
a = create_transform(224, is_training=True)
print(a)
pets_image_paths = './download.png'
image = Image.open(pets_image_paths)
from timm.data.transforms import RandomResizedCropAndInterpolation
tfm = RandomResizedCropAndInterpolation(size=350, interpolation='random')
import matplotlib.pyplot as plt
fig, ax = plt.subplots(2, 4, figsize=(10, 5))
for idx, im in enumerate([tfm(image) for i in range(4)]):
ax[0, idx].imshow(im)
for idx, im in enumerate([tfm(image) for i in range(4)]):
ax[1, idx].imshow(im)
fig.tight_layout()
plt.show()
| https://stackoverflow.com/questions/72627697/ |
PyTorch indexing by argmax | Dear community, I have a challenge regarding tensor indexing in PyTorch. The problem is very simple: given a tensor, create an index tensor to index its maximum values per column.
x = T.tensor([[0, 3, 0, 5, 9, 8, 2, 0],
[0, 4, 9, 6, 7, 9, 1, 0]])
Given this tensor, I would like to build a boolean mask for indexing its maximum values per column. To be specific, I do not need its maximum values, torch.max(x, dim=0), nor its indices, torch.argmax(x, dim=0), but a boolean mask for indexing another tensor based on this tensor's max values. My ideal output would be:
# Input tensor
x
tensor([[0, 3, 0, 5, 9, 8, 2, 0],
[0, 4, 9, 6, 7, 9, 1, 0]])
# Ideal output bool mask tensor
idx
tensor([[1, 0, 0, 0, 1, 0, 1, 1],
[0, 1, 1, 1, 0, 1, 0, 0]])
I know that values_max = x[idx] and values_max = x.max(dim=0) are equivalent but I am not looking for values_max but for idx.
I have built a solution around it, but it just seems too complex, and I am sure torch has an optimized way to do this. I tried to use torch.index_select with the output of x.argmax(dim=0) but failed, so I built a custom solution that seems too cumbersome to me, so I am asking for help to do this in a vectorized / tensorial / torch way.
| You can perform this operation by first extracting the column-wise index of the maximum value of your tensor with torch.argmax, setting keepdim to True:
>>> x.argmax(0, keepdim=True)
tensor([[0, 1, 1, 1, 0, 1, 0, 0]])
Then you can use torch.scatter to place 1s in a zero tensor at the designated indices:
>>> torch.zeros_like(x).scatter(0, x.argmax(0,True), value=1)
tensor([[1, 0, 0, 0, 1, 0, 1, 1],
[0, 1, 1, 1, 0, 1, 0, 0]])
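An equivalent one-liner uses torch.nn.functional.one_hot, transposed back to the column layout:
>>> torch.nn.functional.one_hot(x.argmax(0), num_classes=x.size(0)).T
tensor([[1, 0, 0, 0, 1, 0, 1, 1],
        [0, 1, 1, 1, 0, 1, 0, 0]])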
| https://stackoverflow.com/questions/72628000/ |
'MaskedLMOutput' object has no attribute 'view' | I wrote this:
def forward(self, x):
x = self.bert(x)
x = x.view(x.shape[0], -1)
x = self.fc(self.dropout(self.bn(x)))
return x
but it doesn't work well, and the error is 'MaskedLMOutput' object has no attribute 'view'.
I thought the input might not be a tensor, so I changed it as below:
def forward(self, x):
x = torch.tensor(x) # this part
x = self.bert(x)
x = x.view(x.shape[0], -1)
x = self.fc(self.dropout(self.bn(x)))
return x
but it still fails with the same error, 'MaskedLMOutput' object has no attribute 'view'.
Could someone tell me how to fix this? Much thanks.
Whole error information here:
------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [5], in <cell line: 8>()
6 optimizer = optim.Adam(bert_punc.parameters(), lr=learning_rate_top)
7 criterion = nn.CrossEntropyLoss()
----> 8 bert_punc, optimizer, best_val_loss = train(bert_punc, optimizer, criterion, epochs_top,
9 data_loader_train, data_loader_valid, save_path, punctuation_enc, iterations_top, best_val_loss=1e9)
Input In [3], in train(model, optimizer, criterion, epochs, data_loader_train, data_loader_valid, save_path, punctuation_enc, iterations, best_val_loss)
17 inputs.requires_grad = False
18 labels.requires_grad = False
---> 19 output = model(inputs)
20 loss = criterion(output, labels)
21 loss.backward()
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\lib\site-packages\torch\nn\parallel\data_parallel.py:166, in DataParallel.forward(self, *inputs, **kwargs)
163 kwargs = ({},)
165 if len(self.device_ids) == 1:
--> 166 return self.module(*inputs[0], **kwargs[0])
167 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
168 outputs = self.parallel_apply(replicas, inputs, kwargs)
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1110, in Module._call_impl(self, *input, **kwargs)
1106 # If we don't have any hooks, we want to skip the rest of the logic in
1107 # this function, and just call forward.
1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1109 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110 return forward_call(*input, **kwargs)
1111 # Do not call functions when jit is used
1112 full_backward_hooks, non_full_backward_hooks = [], []
File D:\BertPunc-original\model.py:21, in BertPunc.forward(self, x)
18 x = torch.tensor(x)
19 x = self.bert(x)
---> 21 x = x.view(x.shape[0], -1)
22 x = self.fc(self.dropout(self.bn(x)))
23 return x
AttributeError: 'MaskedLMOutput' object has no attribute 'view'
| I think this should help you solve the error: https://stackoverflow.com/a/72601533/13748930
The output after self.bert(x) is an object of the class MaskedLMOutput.
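Concretely, taking the logits tensor out of the returned object before calling view should fix it (the same approach as in the linked answer):
def forward(self, x):
    x = self.bert(x).logits              # MaskedLMOutput -> plain tensor
    x = x.view(x.shape[0], -1)
    x = self.fc(self.dropout(self.bn(x)))
    return x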
| https://stackoverflow.com/questions/72628556/ |
Error in PyTorch data loader batch cycles | I keep getting this error in SageMaker when iterating through PyTorch DataLoader batch cycles:
Traceback (most recent call last):
File "main.py", line 371, in <module>
g_scaler=g_scaler, d_scaler=d_scaler, runtime_log_folder=runtime_log_folder, runtime_log_file_name=runtime_log_file_name)
File "main.py", line 78, in train_fn
for idx, (x, y) in enumerate(loop):
File "/opt/conda/lib/python3.6/site-packages/tqdm/std.py", line 1171, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 525, in __next__
(data, worker_id) = self._next_data()
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1252, in _next_data
return (self._process_data(data), w_id)
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1299, in _process_data
data.reraise()
File "/opt/conda/lib/python3.6/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
File "/opt/conda/lib/python3.6/site-packages/botocore/exceptions.py", line 84, in __init__
super(HTTPClientError, self).__init__(**kwargs)
File "/opt/conda/lib/python3.6/site-packages/botocore/exceptions.py", line 40, in __init__
msg = self.fmt.format(**kwargs)
KeyError: 'error'
---------------------------------------------------------------------------
UnexpectedStatusException Traceback (most recent call last)
<ipython-input-1-81655136a841> in <module>
58 py_version='py3')
59
---> 60 pytorch_estimator.fit({'train': Runtime.dataset_path}, job_name=Runtime.job_name)
61
62 #print(pytorch_estimator.latest_job_tensorboard_artifacts_path())
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config)
955 self.jobs.append(self.latest_training_job)
956 if wait:
--> 957 self.latest_training_job.wait(logs=logs)
958
959 def _compilation_job_name(self):
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/estimator.py in wait(self, logs)
1954 # If logs are requested, call logs_for_jobs.
1955 if logs != "None":
-> 1956 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
1957 else:
1958 self.sagemaker_session.wait_for_job(self.job_name)
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/session.py in logs_for_job(self, job_name, wait, poll, log_type)
3751
3752 if wait:
-> 3753 self._check_job_status(job_name, description, "TrainingJobStatus")
3754 if dot:
3755 print()
~/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/sagemaker/session.py in _check_job_status(self, job, desc, status_key_name)
3304 ),
3305 allowed_statuses=["Completed", "Stopped"],
-> 3306 actual_status=status,
3307 )
3308
UnexpectedStatusException: Error for Training job 2022-06-03-05-16-49-pix2pix-U12239-2022-05-09-14-39-18-training: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "/opt/conda/bin/python3.6 main.py --runtime_var dataset_name=U12239-2022-05-09-14-39-18,job_name=2022-06-03-05-16-49-pix2pix-U12239-2022-05-09-14-39-18-training,model_name=pix2pix"
0%| | 0/248 [00:00<?, ?it/s]
0%| | 1/248 [00:30<2:07:28, 30.97s/it]
0%| | 1/248 [00:30<2:07:28, 30.97s/it]
Traceback (most recent call last):
File "main.py", line 371, in <module>
g_scaler=g_scaler, d_scaler=d_scaler, runtime_log_folder=runtime_log_folder, runtime_log_file_name=runtime_log_file_name)
File "main.py", line 78, in train_fn
for idx, (x, y) in enumerate(loop):
File "/opt/conda/lib/python3.6/site-packages/tqdm/std.py", line 1171, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 525, in __next__
(data, worker_id) = self._next_data()
File "/opt/conda/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 1252, in _next_data
return (self
Here is the code which results in the error:
def train_fn(disc, gen, loader, opt_disc, opt_gen, l1, bce, g_scaler, d_scaler,runtime_log_folder,runtime_log_file_name):
total_output=''
loop = tqdm(loader, leave=True)
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Loop")
print(loop)
print("Length loop")
print(len(loop))
for idx, (x, y) in enumerate(loop): #<--error happens here
print("Loop index")
print(idx)
print("Loop item")
print(x,y)
x = x.to(device)
y = y.to(device)
# train discriminator
with torch.cuda.amp.autocast():
y_fake = gen(x)
D_real = disc(x, y)
D_fake = disc(x, y_fake.detach())
# use detach so as to avoid breaking computational graph when do optimizer.step on discriminator
# can use detach, or when do loss.backward put loss.backward(retain_graph = True)
D_real_loss = bce(D_real, torch.ones_like(D_real))
D_fake_loss = bce(D_fake, torch.ones_like(D_fake))
D_loss = (D_real_loss + D_fake_loss) / 2
# log tensorboard
disc.zero_grad()
d_scaler.scale(D_loss).backward()
d_scaler.step(opt_disc)
d_scaler.update()
# train generator
with torch.cuda.amp.autocast():
D_fake = disc(x, y_fake)
# compute fake loss
# trick discriminator to believe these are real, hence send in torch.oneslikedfake
G_fake_loss = bce(D_fake, torch.ones_like(D_fake))
# compute L1 loss
L1 = l1(y_fake, y) * args.l1_lambda
G_loss = G_fake_loss + L1
# log tensorboard
opt_gen.zero_grad()
g_scaler.scale(G_loss).backward()
g_scaler.step(opt_gen)
g_scaler.update()
# print epoch, generator loss, discriminator loss
print(f'[Epoch {epoch}/{args.num_epochs} (b: {idx})] [D loss: {D_loss}, D real loss: {D_real_loss}, D fake loss: {D_fake_loss}] [G loss: ##{G_loss}, G fake loss: {G_fake_loss}, L1 loss: {L1}]')
output = f'[Epoch {epoch}/{args.num_epochs} (b: {idx})] [D loss: {D_loss}, D real loss: {D_real_loss}, D fake loss: {D_fake_loss}] [G loss: ##{G_loss}, G fake loss: {G_fake_loss}, L1 loss: {L1}]\n'
total_output+=output
runtime_log = get_json_file_from_s3(runtime_log_folder, runtime_log_file_name)
runtime_log += total_output
upload_json_file_to_s3(runtime_log_folder,runtime_log_file_name,json.dumps(runtime_log))
def __getitem__(self, index):
print("Index ",index)
pair_key = self.list_files[index]
print("Pair key ",pair_key)
pair = Boto.s3_client.list_objects(Bucket=Boto.bucket_name, Prefix=pair_key, Delimiter='/')
input_image_key = pair.get('Contents')[1].get('Key')
input_image_path = f's3://{Boto.bucket_name}/{input_image_key}'
print("Input image path ",input_image_path)
input_image_s3_source = get_file_from_filepath(input_image_path)
input_image = np.array(Image.open(input_image_s3_source))
target_image_key = pair.get('Contents')[0].get('Key')
target_image_path = f's3://{Boto.bucket_name}/{target_image_key}'
print("Target image path ",target_image_path)
target_image_s3_source = get_file_from_filepath(target_image_path)
target_image = np.array(Image.open(target_image_s3_source))
augmentations = config.both_transform(image=input_image, image0=target_image)
# get input image and target image by doing augmentations of images
input_image, target_image = augmentations['image'], augmentations['image0']
input_image = config.transform_only_input(image=input_image)['image']
target_image = config.transform_only_mask(image=target_image)['image']
print("Input image size ",input_image.size())
print("Target image size ",target_image.size())
return input_image, target_image
I did multiple runs and here are the traces of the failure points
i) 2022-06-03-05-00-04-pix2pix-U12239-2022-05-09-14-39-18-training
No index shown
[Epoch 0/100 (b: 0)]
ii) 2022-06-03-05-16-49-pix2pix-U12239-2022-05-09-14-39-18-training
Index 160
[Epoch 0/100 (b: 0)]
iii) 2022-06-03-05-44-46-pix2pix-U12239-2022-05-09-14-39-18-training
Index 160
[Epoch 0/100 (b: 0)]
iv) 2022-06-03-06-08-33-pix2pix-U12239-2022-05-09-14-39-18-training
Index 160
[Epoch 1/100 (b: 0)]
v) 2022-06-15-02-49-20-pix2pix-U12239-2022-05-09-14-39-18-training
Index 160
Pair key datasets/training-data/testing/2022-05-09-14-39-18/match-raws-finals/U12239/P423712/Pair_71/
[Epoch 0/100 (b: 0)
vi) 2022-06-15-02-59-43-pix2pix-U12239-2022-05-09-14-39-18-training
Index 64
Pair key datasets/training-data/testing/2022-05-09-14-39-18/match-raws-finals/U12239/P425642/Pair_27/
[Epoch 0/100 (b: 247)]
vii) 2022-06-15-04-49-33-pix2pix-U12239-2022-05-09-14-39-18-training
Index 64
Pair key datasets/training-data/testing/2022-05-09-14-39-18/match-raws-finals/U12239/P415414/Pair_124/
No specific epoch
My batch size is 248, so as you can see it seems to fail either at the start of a batch (0) or at the end (247). Also, there are some common indexes in __getitem__ that seem to cause the failure, namely index 64 and index 160. However, there doesn't seem to be a common data point in the dataset that causes it: as can be seen from the pair keys, all 3 data points are different.
Does anyone have any idea why this error happens?
| Try to run the same training script outside of a SageMaker training job and see what happens.
If the error doesn't happen in a standalone script, try running it as a local SageMaker training job, so you can reproduce it in seconds instead of minutes, and potentially use a debugger to figure out what the problem is.
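For local mode it is usually just a matter of switching the instance type on the estimator (a sketch; the remaining arguments stay as in your notebook, and the role and framework version here are placeholders):
from sagemaker.pytorch import PyTorch

pytorch_estimator = PyTorch(
    entry_point="main.py",
    role=role,                      # your existing IAM role
    framework_version="1.8.1",      # match what you use remotely
    py_version="py3",
    instance_count=1,
    instance_type="local_gpu",      # or "local" for CPU-only debugging
)
pytorch_estimator.fit({"train": Runtime.dataset_path})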
| https://stackoverflow.com/questions/72633246/ |
Is there any shuffle mode in training a PyTorch neural network model? | I'm using the code below to train a simple neural net to learn a harmonic wave with PyTorch. But I want to turn shuffling on to improve the model. Is there any syntax for this?
model = FCN(1,1,50,4)
optimizer = torch.optim.Adam(model.parameters(),lr=15e-3, weight_decay=15e-3/4000)
for i in range(4000):
optimizer.zero_grad()
yhh = model(x_data)
loss = torch.mean((yhh-y_data)**2)
loss.backward()
optimizer.step()
Alternatively, I also used the code below to randomly reorder the training set, but the result was awful.
yhh = model(x_data[[np.random.choice(range(len(x_data)), len(x_data), replace=False)]])
| Assuming your x_data is a plain torch tensor of size, say, [100], you can use torch.utils.data.DataLoader with shuffle=True to reshuffle x_data after each epoch:
dataset = torch.utils.data.TensorDataset(x_data) # first create a dataset wrapping your tensor
dataloader = torch.utils.data.DataLoader(dataset, batch_size=bs, shuffle=True) # specify your required batch size
dataloader object is now an iterable and can be used as:
for data in dataloader:
model(data[0]) #data[0] is a tensor of size(bs)
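If the targets should stay paired with the inputs (as in the question's loop), wrap both tensors; a sketch reusing the question's names:
dataset = torch.utils.data.TensorDataset(x_data, y_data)   # keeps (x, y) pairs aligned
dataloader = torch.utils.data.DataLoader(dataset, batch_size=bs, shuffle=True)

for i in range(4000):
    for xb, yb in dataloader:      # order is reshuffled every epoch
        optimizer.zero_grad()
        yhh = model(xb)
        loss = torch.mean((yhh - yb) ** 2)
        loss.backward()
        optimizer.step()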
| https://stackoverflow.com/questions/72634017/ |
PyTorch training script on Slurm doesn't write to stdout | I recently moved from TensorFlow to PyTorch, and today I had a little problem.
I run my scripts with Slurm on an HPC, and all stdout is redirected to a text file. The problem is that this afternoon it stopped writing to that file while training (it should write training progress), but the training is still going on: the Slurm job is executing, and it saves model checkpoints every epoch.
The code of the training loop is the following:
class TrainingParams:
def __init__(self, model_params):
self.BATCH_SIZE = 320
self.LEARNING_RATE = 6e-5
self.EPOCHS = 8
with open('weights/full2021/classes/classes_weights.json', 'r') as fp:
classes_weights = json.load(fp)
classes_weights = torch.as_tensor(list(classes_weights.values()), dtype=torch.float)
self.loss = nn.CrossEntropyLoss(weight=classes_weights)
self.optimizer = torch.optim.Adamax(model_params, lr=self.LEARNING_RATE)
def get_pbar(progress):
trattini = []
    progress = math.ceil(progress)
for i in range(0, math.floor(progress / 4)):
trattini.append('-')
for i in range(0, 25 - (math.floor(progress / 4))):
trattini.append(' ')
pbar = '[' + ''.join(trattini) + ']'
return pbar
def train(model: nn.Module, train_paths: dict, val_paths: dict,
resume=False, units=None, starting_epoch=1):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
model.cuda()
if resume:
checkpoint = torch.load(f"weights/full2021/unfreezed_cp_{units}/checkpoint_epoch-{starting_epoch - 1}")
model.load_state_dict(checkpoint['model'])
training_params = TrainingParams(model.parameters())
optimizer = training_params.optimizer
optimizer.load_state_dict(checkpoint['optimizer'])
scaler = torch.cuda.amp.GradScaler()
scaler.load_state_dict(checkpoint['scaler'])
print(f'Correctly restored checkpoint of network with units {units} at epoch {starting_epoch - 1}')
else:
training_params = TrainingParams(model.parameters())
optimizer = training_params.optimizer
scaler = torch.cuda.amp.GradScaler()
loss = training_params.loss.cuda()
train_data = Dataset(train_paths['ids'], train_paths['attention'], train_paths['labels'])
val_data = Dataset(val_paths['ids'], val_paths['attention'], val_paths['labels'])
train_len = len(train_data)
val_len = len(val_data)
train_data = torch.utils.data.DataLoader(train_data, batch_size=training_params.BATCH_SIZE, shuffle=True)
val_data = torch.utils.data.DataLoader(val_data, batch_size=training_params.BATCH_SIZE)
history = {
'epoch': [],
'loss': [],
'accuracy': [],
'val_loss': [],
'val_accuracy': []
}
print(f'Starting training from epoch {starting_epoch}...')
for epoch in range(0, training_params.EPOCHS):
start_time = time.time()
train_accuracy = 0
train_loss = 0
val_accuracy = 0
val_loss = 0
for i, data in enumerate(train_data, 0):
input, label = data
model.zero_grad(set_to_none=True)
with torch.cuda.amp.autocast():
label = label.long().to(device)
input_ids = input['input_ids'].squeeze(1).to(device)
attention_mask = input['attention_mask'].to(device)
output = model(input_ids, attention_mask)
batch_loss = loss(output, label)
scaler.scale(batch_loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=3.0)
scaler.step(optimizer)
scaler.update()
train_loss += batch_loss.item()
batch_accuracy = (output.argmax(dim=1) == label).sum().item()
train_accuracy += batch_accuracy
progress = 100 * i / (train_len // training_params.BATCH_SIZE)
print(f'Epoch: {starting_epoch + epoch} | '
f'It: {i: 5d}/{train_len // training_params.BATCH_SIZE} | '
f'Elapsed: {time.time() - start_time: .2f}s | '
f'Progress: {get_pbar(progress)} {progress: .2f}% | '
f'Loss: {train_loss / (i + 1): .3f} | '
f'Accuracy: {train_accuracy / (training_params.BATCH_SIZE * (i + 1)): .3f} |'
)
with torch.no_grad():
for input, label in val_data:
with torch.cuda.amp.autocast():
label = label.long().cuda()
input_ids = input['input_ids'].squeeze(1).cuda()
attention_mask = input['attention_mask'].cuda()
output = model(input_ids, attention_mask)
batch_loss = loss(output, label)
val_loss += batch_loss.item()
batch_accuracy = (output.argmax(dim=1) == label).sum().item()
val_accuracy += batch_accuracy
print_train_loss = train_loss / (train_len // training_params.BATCH_SIZE)
print_train_accuracy = train_accuracy / train_len
print_val_loss = val_loss / (val_len // training_params.BATCH_SIZE)
print_val_accuracy = val_accuracy / val_len
print(f'-----> Epoch {starting_epoch + epoch} ended. '
f'Total elapsed time: {(time.time() - start_time) * 60: .2f}m | '
f'| Training Loss: {print_train_loss: .3f} '
f'| Training Accuracy: {print_train_accuracy: .3f} '
f'| Validation Loss: {print_val_loss: .3f} '
f'| Validation Accuracy: {print_val_accuracy: .3f}'
)
history['epoch'].append(starting_epoch + epoch)
history['loss'].append(print_train_loss)
history['accuracy'].append(print_train_accuracy)
history['val_loss'].append(print_val_loss)
history['val_accuracy'].append(print_val_accuracy)
checkpoint = {"model": model.state_dict(),
"optimizer": optimizer.state_dict(),
"scaler": scaler.state_dict()}
torch.save(checkpoint, f"weights/full2021/unfreezed_cp_{units}/checkpoint_epoch-{starting_epoch + epoch}")
history = pd.DataFrame.from_dict(history)
history.to_csv(f'history_log-dir/full2021/history_{units}_epoch-{starting_epoch + training_params.EPOCHS - 1}',
index=False)
Basically it skips all print statements, since every time I only see these lines:
2022-06-15 12:08:28.989315: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Some weights of the model checkpoint at Bert/ were not used when initializing BertModel: ['cls.seq_relationship.weight', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.dense.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
I can't understand why this is happening; it had always worked correctly until this afternoon.
PS: it just updated one of the output files (I'm training on two different nodes with two models), and it printed ~2000 iterations' worth of output in one shot, so it now only writes output after a certain number of iterations. This is strange because until this afternoon it was updating the text file at every iteration (e.g. if I kept pressing F5, I could see the file constantly updating).
What can be the cause?
 | The print statements are being written into a buffer, which is only flushed to the output once it fills up. To force prints to appear immediately, try running your code with the -u flag, e.g. python -u my_code.py. Alternatively, adding flush=True to the print statements accomplishes the same.
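For example, a minimal sketch (the script name and variables are just placeholders):
print(f'Epoch: {epoch} | Loss: {loss: .3f}', flush=True)  # flushed immediately
or run the whole script unbuffered instead of touching the code:
python -u my_code.py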
| https://stackoverflow.com/questions/72636360/ |
How to avoid "RuntimeError: CUDA out of memory." during inference of one single image? | I am facing the famous "CUDA out of memory" error.
File "DATA\instance-mask-r-cnn-torch\venv\lib\site-packages\torchvision\models\detection\roi_heads.py", line 416, in paste_mask_in_image
im_mask = torch.zeros((im_h, im_w), dtype=mask.dtype, device=mask.device)
RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 2.00 GiB total capacity; 1.66 GiB already allocated; 0 bytes free; 1.73 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Windows 10, CUDA 11.3, torch 1.11.0+cu113, torchvision 0.12.0+cu113
In the environment I played with PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb set to 32, 128, 8, 24, 32... without success.
An image of size 640x512 (1.5 MB on disk) works; another one of size 3264x1840 (1.75 MB on disk) leads to an OOM error.
import torchvision.transforms
from torchvision.models.detection import mask_rcnn
import torch
from PIL import Image
import gc
if torch.cuda.is_available():
print(f'GPU: {torch.cuda.get_device_name(0)}')
device = torch.device('cuda')
torch.cuda.empty_cache()
else:
device = torch.device('cpu')
print(f'Device: {device}')
model = mask_rcnn.maskrcnn_resnet50_fpn(pretrained=True)
print(model.eval())
model.to(device)
img_path = 'images/tv_image05.png'
img_path = 'images/DJI_20220519110029_0001_W.JPG'
img_path = 'images/DJI_20220519110143_0021_T.JPG'
img_path = 'images/WP_20160104_09_52_53_Pro.jpg'
img = Image.open(img_path).convert("RGB")
img_tensor = torchvision.transforms.functional.to_tensor(img)
with torch.no_grad():
predictions = model([img_tensor.cuda()])
print(predictions)
gc.collect()
torch.cuda.empty_cache()
So far I have found lots of hints about reducing the batch size, but I am not in training mode. What else can I do to be able to process images of sizes up to 7 MB?
| The 3264x1840 image is going to be 72MB in float32. Since it works for your 640x512 image, I'd suggest resizing it.
Simply add torchvision.transforms.functional.resize(img,512)
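A minimal sketch of how that fits into your inference code (assuming torchvision >= 0.8, where functional.resize also accepts tensors; an integer size targets the shorter side):
img_tensor = torchvision.transforms.functional.to_tensor(img)
img_tensor = torchvision.transforms.functional.resize(img_tensor, 512)  # shorter side -> 512 px
with torch.no_grad():
    predictions = model([img_tensor.cuda()])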
Another common trick is to quantize the model and the image to float16 but this may degrade the model accuracy depending on what you're doing.
| https://stackoverflow.com/questions/72637215/ |
Will using image augmentation techniques in pytorch increase the dataset size on local machine also | I was training a custom model in pytorch and the dataset was very uneven: there are 10 classes, of which some have only 800 images while others have 4000. I found that image augmentation was a solution to avoid overfitting, but I got confused partway through implementing it. The code below was used to alter the features of the images:
loader_transform = transforms.Compose([
transforms.RandomRotation(30),
transforms.RandomResizedCrop(140),
transforms.RandomHorizontalFlip()
])
But while training it shows the same original number of images; where did the newly created augmented dataset go? And if I want to save it on my local machine and make all classes even, what can be done?
 | It looks like you are using online augmentations. If you would like to use offline augmentation instead, add a pre-processing step that saves the augmented images and then use them in the training step (see the sketch after this explanation).
Please make sure you understand the difference between online and offline augmentations:
Offline or pre-processing augmentation
To increase the size of the data set, augmentation is applied as a pre-processing step. Usually we do this when we want to expand a small training data set.
When applying it to larger data sets, we have to consider disk space.
Online or real-time augmentation
The augmentation is applied in real time through random transforms.
Since the augmented images do not need to be saved on disk, this method is usually applied to large data sets.
At each epoch, the model will see a different version of each image.
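A minimal sketch of the offline approach, reusing your loader_transform (the folder paths and the number of copies per image are placeholders):
import os
from PIL import Image

src_dir, dst_dir, n_copies = 'data/train/class_a', 'data/train_aug/class_a', 4
os.makedirs(dst_dir, exist_ok=True)
for fname in os.listdir(src_dir):
    img = Image.open(os.path.join(src_dir, fname)).convert('RGB')
    for k in range(n_copies):
        aug = loader_transform(img)  # returns a PIL image, since there is no ToTensor in the pipeline
        aug.save(os.path.join(dst_dir, f'{k}_{fname}'))
By picking a different n_copies per class (more for the 800-image classes, fewer or none for the 4000-image ones), you can balance the class sizes on disk.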
| https://stackoverflow.com/questions/72640633/ |
PyTorch model always returns zeros while Keras model works fine | I am implementing the same (simple) model in tf.keras and PyTorch. My Keras model does fine, but my Torch model always predicts zeros and does poorly as a result.
My Keras model is defined as:
model = Sequential([
Dense(5, activation='relu', name='layer1'),
Dense(5, activation='relu', name='layer2'),
Dense(1, activation='sigmoid', name='output')
])
model.compile(
optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy']
)
history = model.fit(
data.x_train,
data.y_train,
batch_size=128,
epochs=30,
validation_data=(data.x_test, data.y_test)
)
Meanwhile, my Torch model is defined like so:
class Model(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(19, 5),
nn.ReLU(),
nn.Linear(5, 5),
nn.ReLU(),
nn.Linear(5, 1),
nn.Sigmoid()
)
def forward(self, x):
return self.model(x)
def train(train_ds, test_ds, model):
train_dl = DataLoader(train_ds, batch_size=128, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=128, shuffle=True)
opt = optim.Adam(model.parameters())
loss_func = F.binary_cross_entropy
# 30 epochs
for epoch in range(30):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred.reshape((pred.shape[0])), yb)
loss.backward()
opt.step()
opt.zero_grad()
Both use default initialization, which is Glorot Uniform; the data passed to both is the same too. As far as I can tell, I've implemented the same model, with the same optimizer and hyper-parameters in both frameworks, but the PyTorch model returns all zeros when I print it out. What am I doing wrong?
| Fixing the optimizer steps
For the optimizer to work in your PyTorch model, these are the steps you should take. It might be counter-intuitive at first, but you need to zero the gradients first:
opt.zero_grad()
loss.backward()
opt.step()
Evaluating different models
As a side note: if you want the models to be exactly the same, you should validate that your basic learning rate and optimizer parameters match. The same goes for the weight-initialization parameters: the method might be the same (Glorot), but it may depend on internal parameters that you can fix in your initialization. Applied to your loop, the corrected training step looks like the sketch below.
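Here is your inner loop with only the optimizer calls reordered:
for epoch in range(30):
    for xb, yb in train_dl:
        pred = model(xb)
        loss = loss_func(pred.reshape((pred.shape[0])), yb)
        opt.zero_grad()  # clear old gradients first
        loss.backward()  # compute fresh gradients
        opt.step()       # update the weights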
| https://stackoverflow.com/questions/72640932/ |
Is the waveform the "raw" audio data? (Pytorch) | I am using PyTorch in an audio deep learning project. I am using the torchaudio.load method to load the waveform and sample rate. Now, my question is, is the waveform considered the "raw" audio data? Is it the PCM data? If not, then how can I get PCM data from .ogg format?
| Solution
Yes, it's raw data.
For the explanation read below. If you know about sampling theory and how sound is generated, skip to the last paragraph.
PCM
PCM is a fancy way to explain the process by which a continuous-time wave is represented inside a computer. You can learn more in any introductory course/book on digital signal processing, such as chapter 3 of The Scientist and Engineer's Guide to Digital Signal Processing.
Briefly, in a computer you can only represent finite quantities, so you need to take discrete samples in time (sampling) whose amplitudes are restricted to a finite set of levels (quantization).
When loading any audio file this process has been already done for you.
RAW DATA
If you connect a speaker and you play a wave, the membrane will oscillate as the amplitude of such wave at every instant. This is the "raw" audio, a signal that contains the amplitude at each "time" instant. If you can "see" the wave changing with no discontinuity from left to right when plotting your data, it is very likely a raw vector.
What is non-raw data then? Every compression algorithm modifies the input vector with any sort of mathematical function, so that it occupies less space, but also is not understandable anymore by just looking at it. This is because the samples don't represent anymore an amplitude over time. If you'd play the compressed wave through a speaker you wouldn't get any sound, only noise.
Pytorch
In the example you provided from the pytorch documentation we can clearly see that the plot represents raw data, sampled at 16kHz.
To exclude the possibility that torchaudio.load could still give a sort of compressed object, the raw data is generated and plotted by plot_waveform.
We can see that the waveform variable is 54400 samples long and sampled at 16 kHz. This means it represents 54400*(1/16000) seconds, which is exactly 3.4 s.
The plot shows 3.4 seconds, telling us that what is stored in the waveform variable returned by the load function is indeed the raw data. A quick way to verify this yourself is sketched below.
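A minimal check (the file name is a placeholder):
import torchaudio
waveform, sample_rate = torchaudio.load('my_clip.ogg')  # decoded to raw samples
print(waveform.shape, sample_rate)                      # e.g. torch.Size([1, 54400]) 16000
print(waveform.shape[1] / sample_rate)                  # duration in seconds, e.g. 3.4
If the printed duration matches the real length of the clip, the tensor holds the raw, uncompressed signal.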
| https://stackoverflow.com/questions/72644539/ |
issue terminate called after throwing an instance of 'c10::CUDAError' | I am having issues with running yolov5 on Colab. I was able to run the code fine when I had more classes and a slightly smaller dataset, but now I have decreased the number of classes, and one class has only 70 instances while the overall dataset has 3400. Now I am getting this error.
terminate called after throwing an instance of 'c10::CUDAError'
Other times I will get
cuda assertion index >= -sizes[i] && index < sizes[i] && index out of bounds
any idea what could be causing this and how to fix it?
 | The issue was that I was not outputting any instances of some labels that I said existed. For example, I claimed that there would be labels "0", "1", "2" in the training dataset, but there were no instances of label "2".
| https://stackoverflow.com/questions/72645027/ |
Convert function to exploit parallelization of the GPU | I have a function that uses values stored in one array to operate on another array. This behaves similarly to the numpy.histogram function. For example:
import numpy as np
from numba import jit
@jit(nopython=True)
def array_func(x, y, output_counts, output_weights):
for row in range(x.size):
col = int(x[row] * 10)
output_counts[col] += 1
output_weights[col] += y[row]
return (output_counts, output_weights)
# in the current code these arrays exist as pytorch tensors
# on the GPU and get converted to numpy arrays on the CPU before
# being passed to "array_func"
x = np.random.randint(0, 11, (1000)) / 10
y = np.random.randint(0, 100, (10000))
output_counts, output_weights = array_func(x, y, np.zeros(y.size), np.zeros(y.size))
While this works for arrays it does not work for torch tensors that are on the GPU. This is close to what histogram functions do, but I also need the summation of binned values (i.e., the output_weights array/tensor). The current function requires me to continually pass the data from GPU to CPU, followed by the CPU function being run in series.
Can this function be converted to run in parallel on the GPU?
##EDIT##
The challenge is caused by the following line:
output_weights[col] += y[row]
If it weren't for that line I could just use the torch.histc function.
Here's my thought: GPUs are "fast" because they have hundreds/thousands of threads available and can run parts of a big job (or many smaller jobs) on these threads. However, if I convert the function above to work on torch tensors then there is no benefit to running on the GPU (it actually kills the performance). I wonder if there is a way I can break of x so each value gets sent to different threads (similar to how apply_async does within multiprocessing)?
I'm open to other options.
In it's current form the function is fast, but the GPU-to-CPU data transfer is killing me.
 | Your computation is indeed a general histogram operation. There are multiple ways to compute this on a GPU depending on the number of items to scan, the size of the histogram and the distribution of the values.
For example, one solution consists in building local histograms in separate kernel blocks and then performing a reduction. However, this solution is not well suited to your case since len(x)/len(y) are relatively small.
An alternative solution is to perform atomic updates of the histogram in parallel. This solution only scales well if there are few atomic conflicts, which depends on the actual input data. Indeed, if all values of x are equal, then all updates will be serialized, which is slower than doing the accumulation sequentially on a CPU (due to the overhead of the atomic operations). Such a case is frequent with small histograms, but assuming the distribution is close to uniform, this can be fine.
This operation can be done with Numba using CUDA (targeting Nvidia GPUs). Here is an example of a kernel solving your problem:
@cuda.jit
def array_func(x, y, output_counts, output_weights):
tx = cuda.threadIdx.x # Thread id in a 1D block
ty = cuda.blockIdx.x # Block id in a 1D grid
bw = cuda.blockDim.x # Block width, i.e. number of threads per block
pos = tx + ty * bw # Compute flattened index inside the array
if pos < x.size:
col = int(x[pos] * 10)
cuda.atomic.add(output_counts, col, 1)
cuda.atomic.add(output_weights, col, y[pos])
For more information about how to run this kernel, please read the documentation. Note that the arrays output_counts and output_weights can be created directly on the GPU so as to avoid transfers. x and y should be on the GPU for better performance (otherwise a CPU reduction will certainly be faster). Also note that the kernel should be pretty fast, so the overhead of launching/waiting for it and allocating/freeing temporary arrays may be significant, and possibly even slower than the kernel itself (but certainly faster than doing a double transfer from/to the CPU so as to compute things on the CPU, assuming the data was on the GPU). Note also that such atomic accesses are only fast on fairly recent Nvidia GPUs that have dedicated hardware units for atomic operations.
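A minimal launch sketch (the 11 bins match the example data; 256 threads per block is just a typical choice):
threads_per_block = 256
blocks_per_grid = (x.size + threads_per_block - 1) // threads_per_block
d_x = cuda.to_device(x)
d_y = cuda.to_device(y)
d_counts = cuda.to_device(np.zeros(11, dtype=np.int64))
d_weights = cuda.to_device(np.zeros(11, dtype=np.float64))
array_func[blocks_per_grid, threads_per_block](d_x, d_y, d_counts, d_weights)
output_counts = d_counts.copy_to_host()
output_weights = d_weights.copy_to_host()
Note that PyTorch CUDA tensors can also be passed to the kernel directly (Numba understands the CUDA array interface), which avoids the CPU round-trip entirely.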
| https://stackoverflow.com/questions/72648310/ |
extracting tensor values given tensor index values torch | I have a tensor of values val with shape (b,n) and a tensor of indexes ind with shape (b,m) (where n>m). My goal is to take the values in val that correspond to the indexes in ind. I've tried using val[ind], but it only expands the dimensions of val, rather than taking only the relevant items.
val = torch.tensor([[1,2,3],
[4,5,6],
[7,8,9],
[10,11,12],
[13,14,15]])
ind = torch.tensor([[1,2],
[0,2],
[0,1],
[1,2],
[0,1]])
val[ind] # shaped (5,2,3), I need (5,2)
the wanted output is
torch.tensor([[2,3],
[4,6],
[7,8],
[11,12],
[13,14]])
| You can perform such operation using torch.gather:
>>> val.gather(dim=1, index=ind)
tensor([[ 2, 3],
[ 4, 6],
[ 7, 8],
[11, 12],
[13, 14]])
Essentially indexing val's 2nd dimension using ind's values. The returned tensor out follows:
out[i][j] = val[i][ind[i][j]]
| https://stackoverflow.com/questions/72648537/ |
How to sum the second column's values according to the first column's value in a pytorch tensor? | I have a tensor in pytorch whose first column can only take a limited set of values, while its second column's value is freely chosen, e.g.:
val = torch.tensor([[1,233],
[1,222],
[2,333],
[2,3234],
[2,3242],
[2,3234],
[3,234],
[3,234],
[4,323]])
Now I want to sum all values in the second column whose corresponding first-column values are the same; the output should be as follows:
output_val=torch.tensor([[1,455],
[2,10043],
[3,468],
[4,323]])
I want to use pytorch's tensor APIs to handle this task instead of python for/while loops, because I have billions of records to process and the looping code would take several days. Any suggestion is welcome.
Thanks!
 | Thanks to suggestions from @Shai and @Alexander-guyer, I finally have a full solution that utilizes pytorch's parallel computing power (via its APIs) for this kind of task. The following is my final solution:
Input value tensor is:
val = torch.tensor([[1,233],
[1,222],
[2,333],
[2,3234],
[2,3242],
[2,3234],
[3,234],
[3,234],
[4,323]])
Now get its first and second columns into val0 and val1:
val0=val[:,0]
val1=val[:,1]
Now use torch.unique() to get the first column's unique values into uniq_val0 and the inverse indices into index0, then prepare a zero accumulator of the same shape:
uniq_val0, index0=torch.unique(val0, return_inverse=True)
zero_sum=torch.zeros(uniq_val0.shape, dtype=torch.int64)
Now we can use index_add_() to get the sums we want, using the index0 from the previous step:
output_val1=zero_sum.index_add_(0, index0, val1)
Now we can stack uniq_val0 and output_val1 together; this is just what we want:
output_val=torch.stack((uniq_val0, output_val1),-1)
Now, check the value, it's just what we want:
print(output_val)
tensor([[ 1, 455],
[ 2, 10043],
[ 3, 468],
[ 4, 323]])
| https://stackoverflow.com/questions/72649314/ |
Change of Tensor Dimension Causes an Error | I'm trying to test how transforms.Resize works, and I found a confusing point. When I run the code below:
import numpy as np
import torch
from torchvision import transforms
tim = np.array([[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]],
[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]]) # (2, 3, 3)
tim = torch.from_numpy(tim)
tf = transforms.Compose([ # Principle?
transforms.ToPILImage(),
transforms.Resize((6, 6)), # HW
transforms.ToTensor()
])
mask = tf(tim)
squ = mask.squeeze()
An error occurs:
Traceback (most recent call last):
File "C:/Users/Tim/Desktop/U-Net/test.py", line 62, in <module>
mask = tf(tim)
File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\transforms.py", line 95, in __call__
img = t(img)
File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\transforms.py", line 227, in __call__
return F.to_pil_image(pic, self.mode)
File "C:\Users\Tim\.conda\envs\Segment\lib\site-packages\torchvision\transforms\functional.py", line 315, in to_pil_image
raise TypeError(f"Input type {npimg.dtype} is not supported")
TypeError: Input type int32 is not supported
However, when I change the size of the tensor, the problem is solved:
tim = np.array([[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]]) # (1, 3, 3)
I'm wondering why this happens, as the error message is about the type, not the size. If anyone has any idea of the reason, please let me know. Thanks for your time!
| Change the datatype to float ...
tim = np.array([[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]],
[[1, 2, 3],
[1, 2, 3],
[1, 2, 3]]], dtype=np.float32) # (2, 3, 3)
Make sure you know what the input data is: ToPILImage can map a single-channel int32 array to PIL mode 'I', but there is no PIL mode for a multi-channel int32 image, which is why the (1, 3, 3) tensor works while the (2, 3, 3) one raises the type error. Converting to float (or uint8) gives a type that is supported for multi-channel images.
| https://stackoverflow.com/questions/72650013/ |
Pytorch - What is the proper way to minimize and maximize the same loss? | I have an objective function where I am looking to maximize a loss w.r.t. f1 and f2, which are encoders, while at the same time minimizing it w.r.t. g, which is a bijective convolution; X is just an image.
Here is how I assume it's supposed to be done.
obs_ch1, obs_ch23 = self.infomin.split_RGB_into_R_GB(obs=obs_RGB)
obs_ch1, obs_ch23 = self.infomin.R_GB_to_frame_stacked_R_GB(
obs_R=obs_ch1, obs_GB=obs_ch23)
# minimize wrt g
logits = self.infomin.compute_logits(anchor=obs_ch1, pos=obs_ch23)
loss = self.cross_entropy_loss(logits, labels)
self.infomin_discrim_optimizer.zero_grad()
loss.backward()
self.infomin_discrim_optimizer.step()
# maximize wrt f1, f2
logits = self.infomin.compute_logits(
anchor=obs_ch1.detach(), pos=obs_ch23.detach())
labels = torch.arange(logits.shape[0]).long().to(self.device)
loss = self.cross_entropy_loss(logits, labels)
self.infomin_encoders_optimizer.zero_grad()
(-loss).backward()
self.infomin_encoders_optimizer.step()
So I have both f1 and f2 using the same optimizer, infomin_encoders_optimizer. I calculate the NCE loss w.r.t. just g, then I detach that tensor and calculate the NCE loss w.r.t. f1 and f2. I take the opposite of that value and backpropagate them separately. The reason I do it this way is that I can't do them together, since they have opposing objective directions. I also have to detach what comes out of g, because otherwise it gives an error about multiple gradient updates to g.
How are you supposed to minimize and maximize an objective at the same time w.r.t. different parameters?
 | This is actually the proper way to proceed in order to optimize a min-max objective. You can't solve this kind of problem with a single optimization step, simply because both loss functions (for f1/f2 and for g) are based on a result computed with I_NCE. This means you are required to infer twice: first to compute the objective for g, then a second time to compute the objective for f1/f2.
Note this is a very similar procedure if not identical to training a generative adversarial network.
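As a generic illustration, a minimal sketch with placeholder names that mirrors the structure in the question:
# pass 1: minimize w.r.t. g
logits = g(f1(x1), f2(x2))
loss = criterion(logits, labels)
opt_g.zero_grad()
loss.backward()      # f1/f2 also receive gradients here, but they are cleared below
opt_g.step()

# pass 2: maximize w.r.t. f1/f2, on a second forward pass
logits = g(f1(x1), f2(x2))
loss = criterion(logits, labels)
opt_f.zero_grad()    # discards the stale gradients from pass 1
(-loss).backward()
opt_f.step()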
| https://stackoverflow.com/questions/72650995/ |
TypeError with Dataloader | I used a very large dataset for testing my model. To make the testing fast, I would like to construct a data loader, but I'm getting errors. I couldn't solve it for two days. Here is my code:
PRE_TRAINED_MODEL_NAME = 'bert-base-cased'
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)
class GPReviewDataset(Dataset):
def __init__(self, Paragraph, target, tokenizer, max_len):
self.Paragraph = Paragraph
self.target= target
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.Paragraph)
def __getitem__(self, item):
Paragraph = str(self.Paragraph[item])
target = self.target[item]
encoding = self.tokenizer.encode_plus(
Paragraph,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
)
return {
'review_text': Paragraph,
'input_ids': encoding['input_ids'].flatten(),
'attention_mask': encoding['attention_mask'].flatten(),
'targets': torch.tensor(target, dtype=torch.long)
}
def create_data_loader(df, tokenizer, max_len, batch_size):
ds = GPReviewDataset(
Paragraph=df.Paragraph.to_numpy(),
target=df.target.to_numpy(),
tokenizer=tokenizer,
max_len=max_len
)
return DataLoader(
ds,
batch_size=batch_size,
num_workers=4
)
# Main function
paragraph=['Image to PDF Converter. ', 'Test Test']
target=['0','1']
df = pd.DataFrame({'Paragraph': paragraph, 'target': target})
MAX_LEN='512'
BATCH_SIZE = 1
train_data_loader1 = create_data_loader(df, tokenizer, MAX_LEN, BATCH_SIZE)
for d in train_data_loader1:
print(d)
When I iterate over the dataloader I got this error:
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py",
line 178, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "<ipython-input-3-c4f87a4dbb48>", line 20, in __getitem__
return_tensors='pt',
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 1069, in encode_plus
return_special_tokens_mask=return_special_tokens_mask,
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 1365, in prepare_for_model
if max_length and total_len > max_length:
TypeError: '>' not supported between instances of 'int' and 'str'
Can anyone help me? Also, can you give tips on how I can test my model on a large dataset? I mean, what is the fastest way to test my model on 3M samples of data?
 | The error is as stated:
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils.py", line 1365, in prepare_for_model
if max_length and total_len > max_length:
TypeError: '>' not supported between instances of 'int' and 'str'
You should change your MAX_LEN from string to int:
# MAX_LEN='512'
MAX_LEN=512
| https://stackoverflow.com/questions/72652399/ |
train stable baselines 3 with examples? | For my basic evaluation of learning algorithms I defined a custom environment. In the standard examples for stable-baselines, the learning always seems to be driven by stable-baselines automatically (stable-baselines chooses random actions itself and evaluates the rewards).
The standard learning seems to be done like this:
model.learn(total_timesteps=10000)
and this will try out different actions and optimize the action-observation relations while learning.
I would like to try a really basic approach: for my custom environment I would generate lists of examples of which actions should be taken in certain relevant situations (so there is a list of predefined observation-action-reward triples), and I would like to train the model with this list.
What would be the most appropriate way to implement this with stable-baselines3 (using pytorch)?
Additional information:
Maybe the sense of the question can be compared to the Atari-game case: not always training on a whole game sequence at once (from start to end of the game, then restarting until training ends), but instead training the agent only on some more specific, representative situations of importance.
Or in chess: there seems to be a huge difference between letting an agent select its own (randomly chosen) moves and letting it follow moves played by masters in particularly interesting situations.
Maybe one could make the lists the main part of the environment's reaction (e.g. train the agent with environment 1 for 1000 steps, then with environment 2 for 1000 steps, and so on). This could be a solution, but the problem would be that stable-baselines would still choose actions itself, so it could not learn a complete sequence of "correct" (or, as in chess, masterfully chosen) steps.
So again, the practical question is: is it possible to make stable-baselines train on predefined actions instead of self-chosen ones while training/learning?
| Imitation Learning is essentially what you are looking for. There is an imitation library that sits on top of baselines that you can use to achieve this.
See this example on how to create a policy that mimics expert behavior to train the network. The behavior in this case comes from a set of action sequences, or rollouts. In the example the rollouts come from an expertly trained policy, but you can probably create hand-written ones. See this on how to create a rollout.
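As a rough sketch of the behavioral-cloning entry point (based on the imitation library; exact argument names vary between versions, so treat this as an outline rather than exact code; env and transitions are placeholders):
import numpy as np
from imitation.algorithms import bc

bc_trainer = bc.BC(
    observation_space=env.observation_space,
    action_space=env.action_space,
    demonstrations=transitions,  # built from your predefined observation-action lists
    rng=np.random.default_rng(0),
)
bc_trainer.train(n_epochs=10)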
| https://stackoverflow.com/questions/72656770/ |
What do these alphabets stand for? | Sorry if this question is trivial, but I couldn't find the answer on the internet.
In the pytorch documentation I saw these capital letters in formulas, like:
Input: (N, C, H_in, W_in)
link: https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html
So what do these stand for, and where can I find the meaning of these letters?
| Those are typical axis notations used in the PyTorch documentation to refer to the dimension sizes:
N: batch size
C: number of channels
W: the width of the tensor
H: the height of the tensor
The suffixes _in and _out stand for the corresponding axis' input and output size, respectively.
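A quick way to see the convention in action (a minimal sketch):
import torch
import torch.nn as nn
x = torch.rand(8, 3, 32, 32)        # (N, C, H_in, W_in)
y = nn.MaxPool2d(kernel_size=2)(x)
print(y.shape)                       # torch.Size([8, 3, 16, 16]) -> (N, C, H_out, W_out)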
| https://stackoverflow.com/questions/72661511/ |
Adding my custom loss messes up autograd in Pytorch? | I'm trying to use two different losses, MSELoss for some of my labels and a custom loss for the other labels. I'm then trying to sum these losses together before backprop. My model prints out the same loss after every epoch so I must be doing something wrong. Any help is appreciated! I suspect my implementation is messing up Pytorch's autograd. See code below:
mse_loss = torch.nn.MSELoss()
...
loss1 = mse_loss(preds[:,(0,1,3)], label[:,(0,1,3)])
print("loss1", loss1)
loss2 = my_custom_loss(preds[:,2], label[:,2])
print("loss2", loss2)
print("summing losses")
loss = sum([loss1, loss2]) # tensor + float = tensor
print("loss sum", loss)
loss = torch.autograd.Variable(loss, requires_grad=True)
print("loss after Variable(loss, requires_grad=True)", loss)
These print statements yield:
loss1 tensor(4946.1221, device='cuda:0', grad_fn=<MseLossBackward0>)
loss2 34.6672
summing losses
loss sum tensor(4980.7891, device='cuda:0', grad_fn=<AddBackward0>)
loss after Variable() tensor(4980.7891, device='cuda:0', requires_grad=True)
My custom loss function is below:
def my_custom_loss(preds, label):
angle_diff = preds - label
# /2 to bring angle diff between -180<theta<180
half_angle_diff = angle_diff.detach().cpu().numpy()/2
sine_diff = np.sin(half_angle_diff)
square_sum = np.nansum(sine_diff**2)
return square_sum
| The reason why you are not backpropagating through your second loss is that you haven't defined it as a differentiable operator. You should stick with PyTorch operators without switching to NumPy.
Something like this will work:
def my_custom_loss(preds, label):
half_angle_diff = (preds - label)/2
sine_diff = torch.sin(half_angle_diff)
square_sum = torch.nansum(sine_diff**2)
return square_sum
You can check that your custom loss is differentiable with dummy inputs:
>>> preds = torch.rand(1,3,10,10, requires_grad=True)
>>> label = torch.rand(1,3,10,10)
>>> my_custom_loss(preds, label)
tensor(11.7584, grad_fn=<NansumBackward0>)
Notice the grad_fn attribute on it which shows the output tensor is indeed attached to a computational graph, and you can therefore perform back propagation from it.
Additionally, you should not use Variable, as it is now deprecated. In fact, wrapping the summed loss in Variable(loss, requires_grad=True) creates a brand-new leaf tensor disconnected from your model's parameters, so even the MSE part stops contributing gradients; with a differentiable custom loss you can drop that line entirely.
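With the custom loss made differentiable, your combination step reduces to (same variable names as in your snippet):
loss1 = mse_loss(preds[:, (0, 1, 3)], label[:, (0, 1, 3)])
loss2 = my_custom_loss(preds[:, 2], label[:, 2])
loss = loss1 + loss2  # both tensors stay attached to the graph
loss.backward()       # no Variable(...) wrapper needed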
| https://stackoverflow.com/questions/72661641/ |
How to test my trained huggingface model on the test dataset? | I was following the huggingface tutorial on training a multiple choice QA model and trained my model with
training_args = TrainingArguments(
output_dir="./results",
evaluation_strategy="epoch",
learning_rate=5e-5,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=1,
weight_decay=0.01,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_qa["train"],
eval_dataset=tokenized_qa["validation"],
tokenizer=tokenizer,
data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
compute_metrics=compute_metrics
)
trainer.train()
Afterwards I can load the model with:
# load trained model for testing
model = AutoModelForMultipleChoice.from_pretrained('results/checkpoint-1000')
But how can I test it on the testing dataset?
The dataset looks like:
DatasetDict({
train: Dataset({
features: ['id', 'sent1', 'sent2', 'ending0', 'ending1', 'ending2', 'ending3', 'label', 'input_ids', 'attention_mask'],
num_rows: 10178
})
test: Dataset({
features: ['id', 'sent1', 'sent2', 'ending0', 'ending1', 'ending2', 'ending3', 'label', 'input_ids', 'attention_mask'],
num_rows: 1273
})
validation: Dataset({
features: ['id', 'sent1', 'sent2', 'ending0', 'ending1', 'ending2', 'ending3', 'label', 'input_ids', 'attention_mask'],
num_rows: 1272
})
})
I have quite a bit of code so if there's more information needed please do let me know.
 | Okay, figured it out and adding an answer for completeness. It seems the training arguments for the Trainer class are not needed:
trainer = Trainer(
model=model,
tokenizer=tokenizer,
data_collator=DataCollatorForMultipleChoice(tokenizer=tokenizer),
compute_metrics=compute_metrics
)
Put in evaluation mode:
model.eval() # put in testing mode (dropout modules are deactivated)
And then call:
trainer.predict(tokenized_qa["test"])
PredictionOutput(predictions=array([[-1.284791 , -1.2848296, -1.2848794, -1.2848705],
[-1.2848867, -1.2849237, -1.2848233, -1.2848446],
[-1.284851 , -1.2847253, -1.2849066, -1.2848204],
...,
[-1.284877 , -1.2848783, -1.284853 , -1.284804 ],
[-1.2848401, -1.2848557, -1.2847972, -1.2848665],
[-1.2848748, -1.2848799, -1.2848252, -1.2848618]], dtype=float32), label_ids=array([1, 3, 1, ..., 1, 2, 2]), metrics={'test_loss': 1.386292576789856, 'test_accuracy': 0.25727773406766324, 'test_runtime': 16.0096, 'test_samples_per_second': 79.39, 'test_steps_per_second': 9.932})
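If you then want hard class predictions, the logits are in the predictions field of the returned PredictionOutput, so something like this should work:
output = trainer.predict(tokenized_qa["test"])
preds = output.predictions.argmax(axis=-1)  # predicted answer index per example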
| https://stackoverflow.com/questions/72664868/ |
Pytorch - If you detach a nn.module in the middle of a network do all the modules prior to that one not get their gradient calculated? | So let's say I have an input X and a sequential network of net A, net B and net C. If I detach net B and I put X through A->B->C, because B is detached do I lose gradient information from A? I would assume no? I'm assuming it would just treat B like a constant to be added to the output of A rather than something differentiable.
| TLDR; Preventing gradient computation on B won't stop computing gradients for the upstream network A.
I think there is some confusion on what you consider "detaching a model". In my opinion, there are three things to keep in mind with this kind of thing:
You can detach a tensor which effectively detaches it from the computational graph, i.e. if this tensor is used to compute another tensor requiring gradient, the backpropagation step will not propagate past this "detached" tensor.
In your way of describing "detaching a model", you can disable gradient computation on given layers of your network by switching the requires_grad to False on its parameters. This can done in a single line at the module level with nn.Module.requires_grad_. So in your case doing B.requires_grad_(False) will freeze the parameters of B such that they can't be updated. In other words, the gradients of the parameters of B won't be computed however the intermediate gradients used to propagate to A will! Here is a minimal example:
>>> A = nn.Linear(10,10)
>>> B = nn.Linear(10,10)
>>> C = nn.Linear(10,10)
# disable gradient computation on B
>>> B.requires_grad_(False)
# dummy input, inference, and backpropagation
>>> x = torch.rand(1,10, requires_grad=True)
>>> C(B(A(x))).mean().backward()
We can now check that gradients of C and A have indeed be filled properly:
>>> A.weight.grad.sum()
tensor(0.3281)
>>> C.weight.grad.sum()
tensor(-1.6335)
However of course, B.weight.grad returns None.
Lastly, yet another behaviour is when using the no_grad context manager. This effectively kills the gradient. If you do something like:
>>> yA = A(x)
>>> with torch.no_grad():
... yB = B(yA)
>>> yC = C(yB)
Here the gradient cannot flow back past yB: backpropagating from yC will compute gradients for C, but nothing upstream of the no_grad block (neither B nor A) will receive any.
| https://stackoverflow.com/questions/72665429/ |
How to connect a LSTM layer to a Linear layer in Pytorch | I am doing a classification task of MFCC (time-series data) using LSTM.
I have input (16,60,40) (Batch,step,features)
class model(nn.Module):
def __init__(self,ninp,num_layers,class_num,nhid=128):
super().__init__()
self.lstm_nets = nn.LSTM(input_size=ninp,hidden_size=nhid,num_layers=num_layers,
batch_first=True,dropout=0.2,bidirectional=False)
self.FC = nn.Linear(nhid,class_num)
self.tanh = nn.Tanh()
self.softmax = nn.LogSoftmax(1)
def forward(self,X):
device = 'cuda:0'
out, (ht, ct) = self.lstm_nets(X)
# out = ht.contiguous().view(16,-1)
out = self.tanh(out)
out = self.FC(out)
Out = self.softmax(out)
return Out
model = model(ninp=X.shape[2],num_layers=1,class_num=32,nhid=128)
loss_function = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.5e-4)
If I use out = ht.contiguous().view(16,-1) to flatten the LSTM output, I get this error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-96-a7e2ba68dcd9> in <module>()
11
12 optimizer.zero_grad()
---> 13 y_pred = model(X)
14 # calculate loss function
15 loss = loss_function(y_pred, y)
3 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input)
101
102 def forward(self, input: Tensor) -> Tensor:
--> 103 return F.linear(input, self.weight, self.bias)
104
105 def extra_repr(self) -> str:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x32 and 128x32)
If I use out = out.contiguous().view(16,-1) to flatten the LSTM output, I get the error RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x7680 and 128x32)
If I remove the flatten step, I get the error RuntimeError: Expected target size [16, 32], got [16]
In addition, I found that examples online do not flatten the output of the LSTM.
Thanks for any help.
| In each timestep of an LSTM the input goes through a simple neural network and the output gets passed to the next timestep.
The output out of the call
out, (ht, ct) = self.lstm_nets(X)
contains the outputs of ALL timesteps (i.e. the output of the network at every timestep). Yet, in classification you usually only care about the LAST output. You can get it like this:
out = out[:, -1]
This output now has the shape (batch_size, hidden_size), i.e. (16, 128) in your case, which is what the nn.Linear(nhid, class_num) layer expects.
So in your case your forward function should look like this:
def forward(self,X):
device = 'cuda:0'
out, (ht, ct) = self.lstm_nets(X)
out = out[: ,-1]
out = self.tanh(out)
out = self.FC(out)
Out = self.softmax(out)
return Out
| https://stackoverflow.com/questions/72667646/ |
how to do KFold cross validation with TXT files | For example, I have 10 .txt files that I want to divide into test and train data.
(test_rate = 0.2, which means 2 test files and 8 train files)
In that case, the whole KFold cross validation should run 45 times (C(10,2)).
How can I do this in python, using sklearn's KFold function (code below) or other methods?
Many thanks for your reply.
import pandas as pd
import numpy as np
from sklearn.model_selection import KFold
KFold(n_splits=2, random_state=None, shuffle=False)
 | Yes, you can use sklearn. You should use 5-fold cross validation if you want your test data to be 0.2 of the whole dataset, because in 5-fold CV you divide your data into 5 splits and each time use 4 of them for training and the remaining 1 for testing. So n_splits should be 5.
fnames = np.array([
"1.txt",
"2.txt",
"3.txt",
"4.txt",
"5.txt",
"6.txt",
"7.txt",
"8.txt",
"9.txt",
"10.txt"
])
kfold = KFold(n_splits=5)
for i, (train_idx, test_idx) in enumerate(kfold.split(fnames)):
print(f"Fold {i}")
train_fold, test_fold = fnames[train_idx], fnames[test_idx]
print(f"\tlen train fold: {len(train_fold)}")
print(f"\tTrain fold: {train_fold}")
print(f"\tlen test fold: {len(test_fold)}")
print(f"\tTest fold: {test_fold}")
This prints
Fold 0
len train fold: 8
Train fold: ['3.txt' '4.txt' '5.txt' '6.txt' '7.txt' '8.txt' '9.txt' '10.txt']
len test fold: 2
Test fold: ['1.txt' '2.txt']
Fold 1
len train fold: 8
Train fold: ['1.txt' '2.txt' '5.txt' '6.txt' '7.txt' '8.txt' '9.txt' '10.txt']
len test fold: 2
Test fold: ['3.txt' '4.txt']
Fold 2
len train fold: 8
Train fold: ['1.txt' '2.txt' '3.txt' '4.txt' '7.txt' '8.txt' '9.txt' '10.txt']
len test fold: 2
Test fold: ['5.txt' '6.txt']
Fold 3
len train fold: 8
Train fold: ['1.txt' '2.txt' '3.txt' '4.txt' '5.txt' '6.txt' '9.txt' '10.txt']
len test fold: 2
Test fold: ['7.txt' '8.txt']
Fold 4
len train fold: 8
Train fold: ['1.txt' '2.txt' '3.txt' '4.txt' '5.txt' '6.txt' '7.txt' '8.txt']
len test fold: 2
Test fold: ['9.txt' '10.txt']
You may want to pass shuffle=True and a random_state to KFold for reproducibility.
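For example:
kfold = KFold(n_splits=5, shuffle=True, random_state=42)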
| https://stackoverflow.com/questions/72675762/ |
"can't optimize a non-leaf Tensor" on torch Parameter | I'm getting "can't optimize a non-leaf Tensor" on this bit of code
self.W_ch1 = nn.Parameter(
torch.rand(encoder_feature_dim, encoder_feature_dim), requires_grad=True
).to(self.device)
self.W_ch1_optimizer = torch.optim.Adam([self.W_ch1], lr=encoder_lr)
I don't know why it's happening; that should be a leaf tensor, because it has no children connected to it. It's just a torch.rand inside an nn.Parameter. It throws the error at the initialization of self.W_ch1_optimizer.
 | The reason it throws an error is that torch.Tensor.cuda is itself a differentiable operation: it copies the data to the GPU and registers a new node in the graph. In other words, your parameter W_ch1 is no longer a leaf node, since you already have this "computation" tree:
nn.Parameter -> cuda:parameter = W_ch1
You can compare the following two results:
>>> p = nn.Parameter(torch.rand(1)).cuda()
>>> p.is_leaf
False
What you need to do is make the parameter a leaf on the target device before handing it to the optimizer. The simplest fix is to create it on the device directly:
>>> p = nn.Parameter(torch.rand(1, device="cuda"))
>>> p.is_leaf
True
>>> optimizer = optim.Adam([p], lr=lr)
For a full module, call model.to(device) (which moves the parameters in place, so they remain leaves) before constructing the optimizer.
| https://stackoverflow.com/questions/72679858/ |
Best way to convert Tensor from [24, 512, 768, 1] to [24, 512, 14, 14] | I had an incompatibility issue with PyTorch tensor shape. Hence, I need to convert one tensor from shape [24, 512, 768, 1] to [24, 512, 14, 14].
What is the best way while trying to preserve as much info as possible about the original tensor representation?
 | I think this is a workable solution, although note that since 14*14 = 196 < 768, only the first 196 values per position can be kept and the rest are necessarily discarded:
import tensorflow as tf
t = tf.constant([[[[1]]*768]*512]*24)
t = tf.reshape(tf.constant(t.numpy()[:, :, :196]), (24, 512, 14, 14))
print(t.shape)
Output:
(24, 512, 14, 14)
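Since the question is about PyTorch, here is an equivalent sketch in torch (same assumption: keep the first 14*14 = 196 of the 768 values per position):
import torch
t = torch.ones(24, 512, 768, 1)
t = t[:, :, :196, 0].reshape(24, 512, 14, 14)
print(t.shape)  # torch.Size([24, 512, 14, 14])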
| https://stackoverflow.com/questions/72684076/ |
extract blocks of columns (as separated subarrays) indicated by 1D binary array | Based on a 1D binary mask, for example, np.array([0,0,0,1,1,1,0,0,1,1,0]), I would like to extract the columns of another array, indicated by the 1's in the binary mask, as sub-arrays/separate blocks, like [9, 3.5, 7] and [2.8, 9.1] (I am just making up the numbers to illustrate the point).
So far what I have (again just as a demo to illustrate what my goal is, not the data where this operation will be performed):
arr = torch.from_numpy(np.array([0,0,0,1,1,1,0,0,1,1,0]))
split_idx = torch.where(torch.diff(arr) == 1)[0]+1
torch.tensor_split(arr, split_idx.tolist())
The output is:
(tensor([0, 0, 0]),
tensor([1, 1, 1]),
tensor([0, 0]),
tensor([1, 1]),
tensor([0]))
What I would like to have in the end is:
(tensor([1, 1, 1]),
tensor([1, 1]))
Do you know how to implement it, preferably in pytorch, but numpy functions are also fine. A million thanks in advance!!
 | You can construct your tensor of slice indices with your approach. The only thing is that you were missing the indices for the end of each slice. You can do something like:
>>> slices = arr.diff().abs().nonzero().flatten()+1
tensor([ 3, 6, 8, 10])
Then apply tensor_split and slice to only keep every other element:
>>> torch.tensor_split(arr, slices)[1::2]
(tensor([1, 1, 1]), tensor([1, 1]))
Note that the [1::2] parity assumes the mask starts with a 0; if it can start with a 1, the first split already belongs to a block of ones and you would slice with [0::2] instead.
| https://stackoverflow.com/questions/72686613/ |