Weight tensor should be defined either for all 1000 classes or no classes but got weight tensor of shape: [5]
I'm trying to use VGG16 for a **5-class dataset**. I've already added 5 new layers to adjust the output so that the logits have size 5. model = models.vgg16(pretrained=True) #Downloads the vgg16 model, pretrained on the ImageNet dataset. #Replace the final layer of the pretrained vgg16 with 5 new layers. model.fc = nn.Sequential(nn.Linear(1000,512), nn.ReLU(inplace=True), nn.Linear(512,256), nn.ReLU(inplace=True), nn.Linear(256,128), nn.ReLU(inplace=True), nn.Linear(128,64), nn.ReLU(inplace=True), nn.Linear(64,5), ) And my loss function is as follows: loss_fn = nn.CrossEntropyLoss(weight=class_weights) #CrossEntropyLoss with class_weights. where class_weights is defined as follows: from sklearn.utils import class_weight #For calculating weights for each class. class_weights = class_weight.compute_class_weight(class_weight='balanced',classes=np.array([0,1,2,3,4]),y=train_df['level'].values) class_weights = torch.tensor(class_weights,dtype=torch.float).to(device) print(class_weights) #Prints the calculated weights for the classes. output: tensor([0.2556, 4.6000, 1.5333, 9.2000, 9.2000], device='cuda:0') After the first epoch I get the error given below. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Input In [15], in <cell line: 5>() 3 nb_epochs = 3 4 #Call the optimize function. ----> 5 train_losses, valid_losses = optimize(train_dataloader,valid_dataloader,model,loss_fn,optimizer,nb_epochs) Input In [14], in optimize(train_dataloader, valid_dataloader, model, loss_fn, optimizer, nb_epochs) 21 print(f'\nEpoch {epoch+1}/{nb_epochs}') 22 print('-------------------------------') ---> 23 train_loss = train(train_dataloader,model,loss_fn,optimizer, epoch) #Calls the train function. 24 train_losses.append(train_loss) 25 valid_loss = validate(valid_dataloader,model,loss_fn) #Calls the validate function. Input In [12], in train(dataloader, model, loss_fn, optimizer, epoch) 24 for batch,(x,y) in enumerate(dataloader): #Iterates through the batches. 26 output = model(x.to(device)) #model's predictions. ---> 27 loss = loss_fn(output,y.to(device)) #loss calculation. 29 running_loss += loss.item() 31 total += y.size(0) File ~/anaconda3/envs/Ammar/lib/python3.9/site-packages/torch/nn/modules/module.py:1113, in Module._call_impl(self, *input, **kwargs) 1109 # If we don't have any hooks, we want to skip the rest of the logic in 1110 # this function, and just call forward.
1111 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1112 or _global_forward_hooks or _global_forward_pre_hooks): -> 1113 return forward_call(*input, **kwargs) 1114 # Do not call functions when jit is used 1115 full_backward_hooks, non_full_backward_hooks = [], [] File ~/anaconda3/envs/Ammar/lib/python3.9/site-packages/torch/nn/modules/loss.py:1163, in CrossEntropyLoss.forward(self, input, target) 1162 def forward(self, input: Tensor, target: Tensor) -> Tensor: -> 1163 return F.cross_entropy(input, target, weight=self.weight, 1164 ignore_index=self.ignore_index, reduction=self.reduction, 1165 label_smoothing=self.label_smoothing) File ~/anaconda3/envs/Ammar/lib/python3.9/site-packages/torch/nn/functional.py:2961, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing) 2959 if size_average is not None or reduce is not None: 2960 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2961 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) RuntimeError: weight tensor should be defined either for all 1000 classes or no classes but got weight tensor of shape: [5]
I faced the same problem as you. I started by changing the size of my final classifier layer (I copied the code from here): model = models.mobilenet_v2(pretrained=True) last_item_index = len(model.classifier) - 1 old_fc = model.classifier.__getitem__(last_item_index) new_fc = nn.Linear(in_features=old_fc.in_features, out_features=129, bias=True) model.classifier.__setitem__(last_item_index, new_fc) After changing this, I printed the model architecture using the following code: from torchsummary import summary summary(model, (3, 224, 224)) And it's working (the number of classes in my dataset is 129): (classifier): Sequential( (0): Dropout(p=0.2, inplace=False) (1): Linear(in_features=1280, out_features=129, bias=True) )
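The same idea applied to the VGG16 from the question might look like the sketch below. In torchvision's VGG16 the final fully connected layer lives at classifier[6]; assigning to model.fc only creates a new, unused attribute, so the original 1000-class head keeps running, which is why the [5]-element weight tensor mismatches 1000 classes.

import torch.nn as nn
from torchvision import models

model = models.vgg16(pretrained=True)
# replace the real 1000-class head at classifier[6], not model.fc
in_features = model.classifier[6].in_features  # 4096
model.classifier[6] = nn.Linear(in_features, 5)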
https://stackoverflow.com/questions/71538030/
AttributeError in Classification report
The thing is, I want to output the precision, recall, and f1-score using a classification report. But when I run the code below, that error occurs. How can I fix the AttributeError? print(classification_report(test.targets.cpu().numpy(), File "C:\Users\Admin\PycharmProjects\ImageRotation\venv\lib\site-packages\torch\utils\data\dataset.py", line 83, in __getattr__ raise AttributeError AttributeError This is where I load the data from my directory. data_loader = ImageFolder(data_dir,transform = transformer) lab = data_loader.classes num_classes = int(len(lab)) print("Number of Classes: ", num_classes) print("The classes are as follows : \n",data_loader.classes) batch_size = 128 train_size = int(len(data_loader) * 0.8) test_size = len(data_loader) - train_size train,test = random_split(data_loader,[train_size,test_size]) train_size = int(len(train) * 0.8) val_size = len(train) - train_size train_data, val_data = random_split(train,[train_size,val_size]) #load the train and validation into batches. print(f"Length of Train Data : {len(train_data)}") print(f"Length of Validation Data : {len(val_data)}") print(f"Length of Test Data : {len(test)}") train_dl = DataLoader(train_data, batch_size, shuffle = True) val_dl = DataLoader(val_data, batch_size*2) test_dl = DataLoader(test, batch_size, shuffle=True) model.eval() code with torch.no_grad(): # set the model in evaluation mode model.eval() # initialize a list to store our predictions preds = [] # loop over the test set for (x, y) in test_dl: # send the input to the device x = x.to(device) # make the predictions and add them to the list pred = model(x) preds.extend(pred.argmax(axis=1).cpu().numpy()) # generate a classification report print(classification_report(test.targets.cpu().numpy(), np.array(preds), target_names=test.classes))
It seems the problem is that the object you call .targets on does not have that attribute: random_split returns torch.utils.data.Subset objects rather than the original ImageFolder, and when an attribute lookup fails, torch's Dataset base class falls into its __getattr__, which raises the AttributeError you are seeing. One option is to wrap your data in a custom torch Dataset that exposes exactly what you need. For further library details: torch doc. Practical implementation: ast_dataloader. Also, you can run inference under torch.no_grad() [without backpropagation bookkeeping] to speed up the inference process. with torch.no_grad(): # make the predictions and add them to the list pred = model(x) Update> sample torch dataset: from torch.utils.data import Dataset class Dataset_train(Dataset): def __init__(self, list_IDs, labels, base_dir): """self.list_IDs : list of strings (each string: utt key), self.labels : dictionary (key: utt key, value: label integer)""" self.list_IDs = list_IDs self.labels = labels self.base_dir = base_dir def __len__(self): return len(self.list_IDs) def __getitem__(self, index): key = self.list_IDs[index] X, _ = get_sample(f"{self.base_dir}/{key}") y = self.labels[index] return X, y [Note] get_sample is a custom function for reading .wav files; you can replace it with any loading function. torch example-1 torch example-2 medium example
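For the specific classification_report call in the question, a sketch that avoids touching .targets at all is to collect the ground-truth labels inside the evaluation loop; this also matters because test_dl shuffles, so labels gathered any other way would not line up with the predictions. Here data_loader, test_dl, model and device are the objects from the question.

import numpy as np
import torch
from sklearn.metrics import classification_report

model.eval()
preds, labels = [], []
with torch.no_grad():
    for x, y in test_dl:
        out = model(x.to(device))
        preds.extend(out.argmax(dim=1).cpu().numpy())
        labels.extend(y.numpy())  # collected here so order matches preds

# class names come from the underlying ImageFolder, not the Subset
print(classification_report(np.array(labels), np.array(preds),
                            target_names=data_loader.classes))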
https://stackoverflow.com/questions/71538490/
Get mean squared error between every element in two tensors
I have two single dimensional tensors, y_pred and y_true, where: >>> y_pred.shape torch.Size([2730441, 1]) >>> y_true.shape torch.Size([2730441, 1]) To get the Mean Squared Error between the two tensors I can use torch.nn.MSELoss(). However, I want to get the loss between each row/element in the tensors y_pred & y_true, i.e. I want to run some function elementWiseMSE(y_pred, y_true) which will return a loss_tensor of shape [2730441, 1] containing the elementwise squared error of every prediction.
The "function" you are looking for is literally loss_tensor = (y_pred - y_true) ** 2
https://stackoverflow.com/questions/71539108/
Length of train image dataset and train image loader are different?
image_datasets is a dictionary containing both train and test data. Code below: transforms= transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]) image_datasets = {'train': datasets.CIFAR10(root=data_dir, train=True, download=True, transform=transforms), 'test': datasets.CIFAR10(root=data_dir, train=False, download=True, transform=transforms) } image_datasets OUTPUT: {'test': Dataset CIFAR10 Number of datapoints: 10000 Root location: ../Data Split: Test StandardTransform Transform: Compose( ToTensor() Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)) ), 'train': Dataset CIFAR10 Number of datapoints: 50000 Root location: ../Data Split: Train StandardTransform Transform: Compose( ToTensor() Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)) )} #CREATING DATA LOADER data_loaders = { 'train': DataLoader(image_datasets['train'], 10, shuffle=True), 'test': DataLoader(image_datasets['test'], 10)} When I call len(data_loaders['train']) it returns 5000. When defining my data loader I am using batch_size=10. Is the length of my data_loader being divided by my batch_size? New to coding and just wanted to double check.
In brief, len(data_loaders['train'].dataset) gives you the number of instances in the dataset, e.g., 50000 in CIFAR10. len(data_loaders['train']) gives you the number of batches in the dataloader, e.g., 5000 in CIFAR10 with batch_size=10. The number of batches is len(dataset) / batch_size, rounded up unless drop_last=True. Therefore, when you calculate accuracy per epoch, divide the number of correct predictions by len(data_loaders['train'].dataset), not by len(data_loaders['train']); dividing by the latter is a common bug that can push reported accuracy beyond 100%.
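Concretely, with the loaders defined above:

print(len(data_loaders['train'].dataset))  # 50000 instances
print(len(data_loaders['train']))          # 5000 batches = 50000 / 10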
https://stackoverflow.com/questions/71539378/
rearrange an array element based on list in python
I have got a 2D array a with shape (2, 1403) and a list b which contains 2 lists. a.shape = (2, 1403) # a is a 2D array; each row has unique elements. len(b) = 2 # b is a list len(b[0]), len(b[1]) = 415, 452 # both lists inside b also contain unique elements. All the elements present in b[0] and b[1] are present in a[0] and a[1] respectively. Now I want to rearrange the elements of a based on the elements of b. I want all the elements in b[0] which are also present in a[0] to move to the end of a[0], meaning the new a should be such that a[0][-len(b[0]):] equals b[0], and similarly a[1][-len(b[1]):] equals b[1]. Toy Example a has got elements like [[1,2,3,4,5,6,7,8,9,10,11,12],[1,2,3,4,5,6,7,8,9,10,11,12]] b has got elements like [[5, 9, 10], [2, 6, 8, 9, 11]] new_a becomes [[1,2,3,4,6,7,8,11,12,5,9,10], [1,3,4,5,7,10,12,2,6,8,9,11]] I have written code which loops over every element and is very slow; it's shown below. a_temp = [] remove_temp = [] for i, array in enumerate(a): a_temp_inner = [] remove_temp_inner = [] for element in array: if element not in b[i]: a_temp_inner.append(element) # get all elements first which are not present in b else: remove_temp_inner.append(element) #if any element is present in b, remove it from the main array a_temp.append(a_temp_inner) remove_temp.append(remove_temp_inner) a_temp = torch.tensor(a_temp) remove_temp = torch.tensor(remove_temp) a = torch.cat((a_temp, remove_temp), dim = 1) Can anyone please help me with a faster implementation that works better than this?
Assuming a is a np.array, b is a list you can use np.array([np.concatenate((i[~np.in1d(i, j)], j)) for i, j in zip(a,b)]) Output array([[ 1, 2, 3, 4, 6, 7, 8, 11, 12, 5, 9, 10], [ 1, 3, 4, 5, 7, 10, 12, 2, 6, 8, 9, 11]]) Can be micro-optimized if b contains empty lists np.array([np.concatenate((i[~np.in1d(i, j)], j)) if j else i for i, j in zip(a,b)]) In my benchmarks, for np.arrays with less than ~100 elements converting .tolist() is faster than np.concatenate np.array([i[~np.in1d(i, j)].tolist() + j for i, j in zip(a,b)]) Data example and imports for this solution import numpy as np a = np.array([ [1,2,3,4,5,6,7,8,9,10,11,12], [1,2,3,4,5,6,7,8,9,10,11,12] ]) b = [[5, 9, 10], [2, 6, 8, 9, 11]]
https://stackoverflow.com/questions/71542505/
Numpy: Filter segments in array based on how much overlap there is with a second array with different segmentation type
I am looking for the most computationally efficient way to filter out arrays from one array, based on segment overlap in a second array, where that array has a different segmentation type. This is my first array iob = np.array( [0, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 1, 2, 2, 2, 2, 0, 0, 1, 2, 1, 2, 2, 0] ) The number 1 is the start of each segment, the number 2 is the rest of the segment, and the number 0 indicates no segment. So for this array, the segments are 1, 2, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1, 2, and 1, 2, 2. This is my second array output = np.array( [0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0] ) The segments are only defined by the number 1, and 0 indicates no segment. So the segments for this array are 1, 1, 1, 1, 1, 1, and 1, 1, 1. In the first array, I want to filter out segments whose contents do not overlap by at least 50% with a segment in the 2nd array. In other words, if at least half of the contents in a segment in the first array overlaps with a segment in the 2nd array, I want to keep that segment. So this is the desired result array([0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]) I am looking for the most computationally efficient method to calculate this solution. Current solution I can get the segment indices using the technique described here https://stackoverflow.com/a/71297663/3259896 output_zero = np.concatenate((output, [0])) zero_output = np.concatenate(([0], output)) iob_zero = np.concatenate((iob, [0])) iob_starts = np.where(iob_zero == 1)[0] iob_ends = np.where((iob_zero[:-1] != 0) & (iob_zero[1:] != 2))[0] iob_pairs = np.column_stack((iob_starts, iob_ends)) output_starts = np.where((zero_output[:-1] == 0) & (zero_output[1:] == 1))[0] output_ends = np.where((output_zero[:-1] == 1) & (output_zero[1:] == 0))[0] output_pairs = np.column_stack((output_starts, output_ends)) Next, I directly compare all possible combinations of segments to see which ones have at least a 50% overlap, and only keep those segments. valid_pairs = [] for o_p in output_pairs: for i_p in iob_pairs: overlap = 1 + min(o_p[1], i_p[1]) - max(o_p[0], i_p[0]) if overlap > np.diff(i_p)[0]/2: valid_pairs.append(i_p) valid_pairs = np.array(valid_pairs) Finally, I use the filtered indices to create the desired array final = np.zeros_like(output) for block in valid_pairs: final[block[0]:block[1]+1] = 1 final array([0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]) I suspect that this may not be the most computationally efficient solution. It uses quite a few lines of code, uses a nested for loop to do all possible comparisons, and uses another loop to create the desired array. I don't have a mastery of all of numpy's functions, and I am wondering if there is a more computationally efficient way to calculate this.
Here's a 4-liner: output = np.concatenate((output, [0])) iob = np.concatenate((iob, [0])) idx = np.dstack(np.where(iob == 1) + tuple(np.array(np.where(np.diff(iob) < 0)) + 1)).flatten() desired_array = np.concatenate([np.full(len(x), 1 if ((np.sum(x==2) >= np.sum(x==1)) and x.sum()>0) else 0) for x in np.split(np.where(iob>0, 1, 0) + output, idx)])[0:-1] Output: >>> desired_array array([0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]) And here is a more verbose version output_0 = np.concatenate((output, [0])) iob_0 = np.concatenate((iob, [0])) overlap = np.where(iob_0>0, 1, 0) + output_0 iob_segment_starts = np.where(iob_0 == 1) iob_segment_ends = tuple(np.array(np.where(np.diff(iob_0) < 0)) + 1) iob_segment_indxs = np.dstack(iob_segment_starts + iob_segment_ends).flatten() overlap_segments = np.split(overlap, iob_segment_indxs) filtered_segment_params = ( ( len(x), np.sum(x == 2) >= np.sum(x == 1) and x.sum() > 0, ) for x in overlap_segments ) filtered_segment_pieces = [ np.full(length, value, dtype=int) for length, value in filtered_segment_params ] filtered_array = np.concatenate(filtered_segment_pieces)[:-1] filtered_array Pytorch Version import torch output = torch.tensor( [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1] ) iob = torch.tensor( [0, 1, 2, 2, 2, 2, 2, 2, 2, 0, 0, 1, 2, 2, 2, 2, 0, 0, 1, 2, 1, 2, 2, 1] ) output_0 = torch.cat((output, torch.tensor([0]))) iob_0 = torch.cat((iob, torch.tensor([0]))) overlap = torch.where(iob_0>0, 1, 0) + output_0 iob_segment_starts = torch.where(iob_0 == 1)[0] iob_segment_ends = torch.where(torch.diff(iob_0) < 0)[0] + 1 iob_segment_cuts = torch.cat([ iob_segment_starts, iob_segment_ends, torch.tensor([0, overlap.shape[0]]) ]).sort()[0] iob_segment_sizes = torch.diff(iob_segment_cuts) overlap_segments = torch.split(overlap, iob_segment_sizes.tolist(), dim=0) filtered_segment_params = ( ( len(x), torch.sum(x == 2) >= torch.sum(x == 1) and x.sum() > 0, ) for x in overlap_segments ) filtered_segment_pieces = [ torch.full((1, length), value, dtype=int).flatten(0) for length, value in filtered_segment_params ] filtered_array = torch.cat(filtered_segment_pieces)[:-1] filtered_array
https://stackoverflow.com/questions/71543151/
reorder columns in a tensor according to a dictionary
I don't know how to explain it correctly, so the title might be misleading. What I want to do is to move columns from a 3d tensor t1 to another 3d tensor t2 according to the indices. There's a dictionary td, and a (k,v) pair in td means that the kth column of t1 will become the vth column of t2. Currently, I'm doing it this way: for k,v in td.items(): t2[:,:,v] = torch.select(t1, 2, k) but yes, it's super slow, as there are millions of entries. What would be the best way to do the work?
Assuming there are no repeated values, you can use t2[:,:,list(td.values())] = t1[:,:,list(td.keys())]
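A minimal sketch with a hypothetical mapping td; this performs the whole reordering in a single indexing operation instead of one copy per dictionary entry:

import torch

t1 = torch.randn(4, 4, 5)
td = {0: 2, 1: 0, 2: 1, 3: 4, 4: 3}  # hypothetical column mapping
t2 = torch.empty_like(t1)
# column k of t1 lands at column td[k] of t2, all in one assignment
t2[:, :, list(td.values())] = t1[:, :, list(td.keys())]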
https://stackoverflow.com/questions/71543191/
Using a target size (torch.Size([16])) that is different to the input size (torch.Size([16, 2])) is deprecated
I am trying to build a multiclass text classifier using PyTorch and torchtext, but I am receiving this error whenever the output dimension of the last layer is 2; it runs fine with an output dimension of 1. I know there is a problem with the batch size and data shape, but I don't know the fix. Constructing the iterator: #set batch size BATCH_SIZE = 16 train_iterator, valid_iterator = BucketIterator.splits( (train_data, valid_data), batch_size = BATCH_SIZE, sort_key = lambda x: len(x.text), sort_within_batch=True, device = device) Model class: class classifier(nn.Module): def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout): super(classifier,self).__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim) self.gru = nn.GRU(embedding_dim, hidden_dim, num_layers=n_layers, bidirectional=bidirectional, dropout=dropout, batch_first=True) self.fc1 = nn.Linear(hidden_dim * 2, 128) self.relu1 = nn.ReLU() self.fc2 = nn.Linear(128, 64) self.relu2 = nn.ReLU() self.fc3 = nn.Linear(64, 16) self.relu3 = nn.ReLU() self.fc4 = nn.Linear(16, output_dim) self.act = nn.Sigmoid() def forward(self, text, text_lengths): embedded = self.embedding(text) #embedded = [batch size, sent_len, emb dim] packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths.to('cpu'),batch_first=True) packed_output, hidden = self.gru(packed_embedded) hidden = torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1) dense_1=self.fc1(hidden) x = self.relu1(dense_1) x = self.fc2(x) x = self.relu2(x) x = self.fc3(x) x = self.relu3(x) dense_outputs = self.fc4(x) #Final activation function outputs=self.act(dense_outputs) return outputs instantiating the model: size_of_vocab = len(TEXT.vocab) embedding_dim = 300 num_hidden_nodes = 256 num_output_nodes = 2 num_layers = 4 bidirection = True dropout = 0.2 model = classifier(size_of_vocab, embedding_dim, num_hidden_nodes,num_output_nodes, num_layers, bidirectional = True, dropout = dropout).to(device) def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters') pretrained_embeddings = TEXT.vocab.vectors model.embedding.weight.data.copy_(pretrained_embeddings) print(pretrained_embeddings.shape) Optimizer and criterion used: optimizer = optim.Adam(model.parameters()) criterion = nn.BCELoss() model = model.to(device) criterion = criterion.to(device) Training function: import torchmetrics as tm metrics = tm.Accuracy() def train(model, iterator, optimizer, criterion): #initialize every epoch epoch_loss = 0 epoch_acc = 0 #set the model in training phase model.train() for batch in iterator: #resets the gradients after every batch optimizer.zero_grad() #retrieve text and no. of words text, text_lengths = batch.text #convert to 1D tensor predictions = model(text, text_lengths).squeeze() #compute the loss loss = criterion(predictions, batch.label) #compute the binary accuracy # acc = binary_accuracy(predictions, batch.label) acc = metrics(predictions,batch.label) #backpropagate the loss and compute the gradients loss.backward() #update the weights optimizer.step() #loss and accuracy epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss / len(iterator), epoch_acc / len(iterator) Full error --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-60-eeabf5bacadf> in <module>() 5 6 #train the model ----> 7 train_loss, train_acc = train(model, train_iterator, optimizer, criterion) 8 9 #evaluate the model 3 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction) 2906 raise ValueError( 2907 "Using a target size ({}) that is different to the input size ({}) is deprecated. " -> 2908 "Please ensure they have the same size.".format(target.size(), input.size()) 2909 ) 2910 ValueError: Using a target size (torch.Size([16])) that is different to the input size (torch.Size([16, 2])) is deprecated. Please ensure they have the same size.
What you want is CrossEntropyLoss instead of BCELoss. BCELoss requires the input and target to have the same shape, whereas CrossEntropyLoss expects inputs of shape [batch, n_classes] and integer class targets of shape [batch], which is exactly the pair of shapes in your error. Also drop the final Sigmoid from the model, since CrossEntropyLoss applies log-softmax internally.
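A minimal sketch of the matching shapes; the targets here are assumed to be integer class indices:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(16, 2)           # raw model outputs, no final Sigmoid
targets = torch.randint(0, 2, (16,))  # class indices, shape [16]
loss = criterion(logits, targets)
print(loss.item())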
https://stackoverflow.com/questions/71547452/
Is it possible to use PyTorch's `BatchNorm1d` with `BCEWithLogitsLoss`?
I am attempting to normalize the outputs of my classifier, which uses BCEWithLogitsLoss as part of its loss function. As far as I know, this applies the Sigmoid function internally and outputs the loss. I want to normalize the output of the sigmoid prior to calculating the loss. Is it possible to use BatchNorm1d with BCEWithLogitsLoss? Or is passing the output tensor through torch.sigmoid, then BatchNorm1d, and separately calculating BCELoss the only possible solution? Thanks.
You can use BCELoss instead of BCEWithLogitsLoss; the latter is described as: "This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss." Because the sigmoid is fused into BCEWithLogitsLoss, there is no way to insert a BatchNorm1d into it; instead, apply the batch norm, the sigmoid, and a plain BCELoss as separate steps. For example, m = nn.Sigmoid() bn = nn.BatchNorm1d(3) loss = nn.BCELoss() input = torch.randn((2, 3), requires_grad=True) target = torch.empty(2, 3).random_(2) output = loss(m(bn(input)), target) output.backward()
https://stackoverflow.com/questions/71547905/
Will the gradients be recorded when finding the model's accuracy in pytorch?
I starting to learn PyTorch and I am confused about something. From what I understand, if we set .requires_grad_() to our parameters then the necessary calculations to find the gradients of those parameters will be recorded. This way, we can perform gradient descent. However, the gradient values will be added on top of the previous gradient values, so after we perform a gradient descent step we should reset our gradients using param.grad.zero_(), where param is a weight or a bias term. I have a model which has just the input layer and one output neuron, so really simple stuff (since I have just one output neuron you can tell that I only have 2 possible classes). I have my parameters called weights and bias, it is on these 2 variables that I set requires_grad_(). Also I put my training data in a DataLoader called train_dl and my validation data in valid_dl. I use a subset of the MNIST dataset, but that is really not important to this question. These are the functions I use: def forward_propagation(xb): z = xb @ weights + bias a = z.sigmoid() return a def mse_loss(predictions, targets): loss = ((predictions - targets) ** 2).mean() return loss def backward_propagation(loss): loss.backward() weights.data -= lr * weights.grad.data weights.grad.zero_() bias.data -= lr * bias.grad.data bias.grad.zero_() def train_epoch(): for xb, yb in train_dl: a = forward_propagation(xb) loss = mse_loss(a, yb) backward_propagation(loss) As you can see I use the function train_epoch() to perform: forward propagation (where some of the calculations for the gradient will be recorded since that is where our parameters are first used), calculate the loss (this step will also be used to calculate the gradients), and then backward propagation where I update my parameters and then reset the gradients to 0, so that they don't accumulate. I used this code to train my model and it worked fine, I am satisfied with the accuracy I got. So I assume that it works, at least somewhat. But I also use this code to find the validation data accuracy for my model: def valid_accuracy(): accuracies = [] for xb, yb in valid_dl: a = forward_propagation(xb) correct = (a > 0.5) == yb accuracies.append(correct.float().mean()) return round(torch.stack(accuracies).mean().item(), 4) As you can see, in finding the model's accuracy I perform forward propagation (the above function, where I multiply the weights by the data and add the bias). My question is: will the gradients also be recorded here? So the next time when I use .backward() on loss will the gradients be influenced by the steps taken in finding the accuracy? I think that as it is right now, the gradient values will be added each time I find the accuracy of the model (which I do not want and doesn't make sense), but I am not sure. Should I have somewhere in the function valid_accuracy() another 2 lines with weights.grad.zero_() and bias.grad.zero_() so that this doesn't happen? Or is it the case that this doesn't happen automatically, so I get the desired behavior by default and I simply misunderstood something?
There are two things to consider: one is the gradients themselves, and the other is the computational graph that is built in each forward pass. To compute the gradient after a forward pass, we need to record what operations have been done to what tensors in what order, that is, the computation graph. So whenever we compute a new tensor from other tensors that have requires_grad==True, the new tensor has an attribute .grad_fn that points to the previous operation and the involved tensors. This is basically how backward() "knows" where to go. If you call backward() it will consider this .grad_fn and recursively do the backward pass. So the way you currently do it will actually build this computation graph, even when computing the accuracy. But if this graph is never accessed, the garbage collector will eventually destroy it. The key thing to notice is that each separate evaluation will produce a new computation graph (depending on your model, maybe sharing some parts), but the backward pass will only start from the "node" on which you called .backward, so in your snippets you won't ever get a gradient from the accuracy computation, as you never call a.backward(), only loss.backward(). The recording of the computation graph does incur some overhead though, but this can be disabled using the torch.no_grad() context manager, which is made with this exact use case in mind. Unfortunately the name (as well as the documentation) mentions the gradient, but it really is just about recording the (forward) computation graph. Obviously, if you disable that, you won't be able to compute a backward pass from those operations either.
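Applied to the helper from the question, that looks like this:

def valid_accuracy():
    accuracies = []
    with torch.no_grad():  # skip recording the computation graph
        for xb, yb in valid_dl:
            a = forward_propagation(xb)
            correct = (a > 0.5) == yb
            accuracies.append(correct.float().mean())
    return round(torch.stack(accuracies).mean().item(), 4)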
https://stackoverflow.com/questions/71550601/
I have the code below which I want to translate into pytorch. I'm looking for a way to translate np.vectorize to any pytorch way in this case
I need to translate this code to pytorch. The code given below uses np.vectorize. I am looking for a pytorch equivalent for this. class SimplexPotentialProjection(object): def __init__(self, potential, inversePotential, strong_convexity_const, precision = 1e-10): self.inversePotential = inversePotential self.gradPsi = np.vectorize(potential) self.gradPsiInverse = np.vectorize(inversePotential) self.precision = precision self.strong_convexity_const = strong_convexity_const
The doc for numpy.vectorize clearly states that: The vectorize function is provided primarily for convenience, not for performance. The implementation is essentially a for loop. Therefore, in order to convert your numpy code to pytorch you would simply need to apply potential and inversePotential in a loop over their tensor arguments. However, that might be very inefficient. You would do better to re-implement your functions to act "natively" in a vectorized manner on tensors.
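As an illustration (with a hypothetical potential; the real one is not shown in the question): a function built from elementwise tensor operations already maps over whole tensors without any np.vectorize wrapper.

import torch

def potential(x: torch.Tensor) -> torch.Tensor:
    # hypothetical potential built from elementwise ops
    return x * torch.log(x)

x = torch.rand(5) + 0.1   # shifted away from 0 to keep log finite
print(potential(x))       # applied elementwise in one vectorized call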
https://stackoverflow.com/questions/71553309/
I do not split well in pytorch
I would like to do a tensor split in pytorch. However, I get an error message because I can't get the splitting to work. The behavior I want is to split the input data into two Fully Connected layers. I then want to create a model that combines the two Fully Connected layers into one. I believe the error is due to wrong code in x1, x2 = torch.tensor_split(x,2) import torch from torch import nn, optim import numpy as np from matplotlib import pyplot as plt class Regression(nn.Module): def __init__(self): super().__init__() self.linear1 = nn.Linear(1, 32) self.linear2 = nn.Linear(32, 16) self.linear3 = nn.Linear(16*2, 1) def forward(self, x): x1, x2 = torch.tensor_split(x,2) x1 = nn.functional.relu(self.linear1(x1)) x1 = nn.functional.relu(self.linear2(x1)) x2 = nn.functional.relu(self.linear1(x2)) x2 = nn.functional.relu(self.linear2(x2)) cat_x = torch.cat([x1, x2], dim=1) cat_x = self.linear3(cat_x) return cat_x def train(model, optimizer, E, iteration, x, y): losses = [] for i in range(iteration): optimizer.zero_grad() # reset gradients to zero y_pred = model(x) # prediction loss = E(y_pred.reshape(y.shape), y) # compute the loss (align shapes) loss.backward() # compute gradients optimizer.step() # update parameters losses.append(loss.item()) # accumulate loss values print('epoch=', i+1, 'loss=', loss) return model, losses x = np.random.uniform(0, 10, 100) # create random x values y = np.random.uniform(0.9, 1.1, 100) * np.sin(2 * np.pi * 0.1 * x) # create a sine wave x = torch.from_numpy(x.astype(np.float32)).float() # convert x to a tensor y = torch.from_numpy(y.astype(np.float32)).float() # convert y to a tensor X = torch.stack([torch.ones(100), x], 1) net = Regression() optimizer = optim.RMSprop(net.parameters(), lr=0.01) # use RMSprop as the optimizer E = nn.MSELoss() # use MSE as the loss function net, losses = train(model=net, optimizer=optimizer, E=E, iteration=5000, x=X, y=y) error message /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1846 if has_torch_function_variadic(input, weight, bias): 1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias) -> 1848 return torch._C._nn.linear(input, weight, bias) 1849 1850 RuntimeError: mat1 and mat2 shapes cannot be multiplied (50x2 and 1x32)
Tl;dr Specify dim=1 in torch.tensor_split(x,2). Explanation The x comes from two tensors with the shape [100,1] stacked at dim 1, so its shape is [100, 2]. After applying tensor_split, you get two tensors, both with shape [50, 2]. print(x.shape) # torch.Size([100, 2]) print(torch.tensor_split(X,2)[0].shape) # torch.Size([50, 2]) The error occurred because linear1 only accepts tensors with shape [BATCH_SIZE,1] as input, but a tensor with shape [50, 2] was passed in. If your intention was to split the array of random numbers and the array of all ones, change torch.tensor_split(x,2) to torch.tensor_split(x,2,dim=1), which produces two tensors with the shape [100,1].
https://stackoverflow.com/questions/71554131/
I can't import torchvision
I installed torchvision 0.12.0 with Python 3.8, and my OS is Windows. I succeeded in importing torch, but I couldn't import torchvision and got this error. ImportError DLL load failed while importing _imaging: File "C:\Users'MyName'\Documents\GitHub\pytorch-cifar\main.py", line 8, in import torchvision Is there someone who can solve this problem?
This may be due to incompatible versions of torch and torchvision. You can get the information you want through the following link: the corresponding torchvision versions and supported Python versions
https://stackoverflow.com/questions/71554974/
How to know the trained model is correct?
I use PyTorch Lightning for model training, during which I use ModelCheckpoint to save checkpoints. Finally, I would like to know whether the model is loaded correctly. Let me know if you require further information. checkpoint_callback = ModelCheckpoint( filename='tb1000_{epoch: 02d}-{step}', monitor='val/acc@1', save_top_k=5, mode='max') wandb_logger = pl.loggers.wandb.WandbLogger( name=run_name, project=args.project, entity=args.entity, offline=args.offline, log_model='all') model = BYOL(**args.__dict__, num_classes=dm.num_classes) trainer = pl.Trainer.from_argparse_args(args, logger=wandb_logger, callbacks=[checkpoint_callback]) trainer.fit(model, dm) # Loading and testing model_test = BYOL(**args.__dict__, num_classes=dm.num_classes) path = "/tb100_epoch= 819-step=39359.ckpt" model_test.load_from_checkpoint(path)
load_from_checkpoint() will return a model with trained weights, so you need to assign it to a new variable. model_test = model_test.load_from_checkpoint(path) or model_test = BYOL.load_from_checkpoint(path)
https://stackoverflow.com/questions/71558917/
What does Tensor[batch_mask, ...] do?
I saw this line of code in an implementation of BiLSTM: batch_output = batch_output[batch_mask, ...] I assume this is some kind of "masking" operation, but found little information on Google about the meaning of .... Please help:). Original Code: class BiLSTM(nn.Module): def __init__(self, vocab_size, tagset, embedding_dim, hidden_dim, num_layers, bidirectional, dropout, pretrained=None): # irrelevant code .......... def forward(self, batch_input, batch_input_lens, batch_mask): batch_size, padding_length = batch_input.size() batch_input = self.word_embeds(batch_input) # size: #batch * padding_length * embedding_dim batch_input = rnn_utils.pack_padded_sequence( batch_input, batch_input_lens, batch_first=True) batch_output, self.hidden = self.lstm(batch_input, self.hidden) self.repackage_hidden(self.hidden) batch_output, _ = rnn_utils.pad_packed_sequence(batch_output, batch_first=True) batch_output = batch_output.contiguous().view(batch_size * padding_length, -1) ####### HERE ########## batch_output = batch_output[batch_mask, ...] ######################### out = self.hidden2tag(batch_output) return out
I assume that batch_mask is a boolean tensor. In that case, batch_output[batch_mask] performs boolean indexing that selects the elements corresponding to True in batch_mask. ... is usually referred to as Ellipsis, and in the case of PyTorch (but also other NumPy-like libraries), it is a shorthand for avoiding repeating the colon operator (:) multiple times. For example, given a tensor v, with v.shape equal to (2, 3, 4), the expression v[1, :, :] can be rewritten as v[1, ...]. I performed some tests, and using either batch_output[batch_mask, ...] or batch_output[batch_mask] seems to work identically: t = torch.arange(24).reshape(2, 3, 4) # mask.shape == (2, 3) mask = torch.tensor([[False, True, True], [True, False, False]]) print(torch.all(t[mask] == t[mask, ...])) # returns True
https://stackoverflow.com/questions/71559100/
How to load a fine tuned pytorch huggingface bert model from a checkpoint file?
I fine-tuned a BERT model in PyTorch and saved its checkpoint via torch.save(model.state_dict(), 'model.pt') Now when I want to reload the model, I have to define the whole network again, reload the weights, and push it to the device. Can anyone tell me how I can save the BERT model directly and load it directly for use in production/deployment? Following is the training code; you can try running it in Colab itself! After training completes, you will notice a checkpoint file in the file system. But I want to save the model itself. LINK TO COLAB NOTEBOOK FOR SAMPLE TRAINING Following is the current inferencing code I wrote. import torch import torch.nn as nn from transformers import AutoModel, BertTokenizerFast import numpy as np import json tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') device = torch.device("cpu") class BERT_Arch(nn.Module): def __init__(self, bert): super(BERT_Arch, self).__init__() self.bert = bert # dropout layer self.dropout = nn.Dropout(0.1) # relu activation function self.relu = nn.ReLU() # dense layer 1 self.fc1 = nn.Linear(768, 512) # dense layer 2 (Output layer) self.fc2 = nn.Linear(512, 2) # softmax activation function self.softmax = nn.LogSoftmax(dim=1) # define the forward pass def forward(self, sent_id, mask): # pass the inputs to the model _, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False) x = self.fc1(cls_hs) x = self.relu(x) x = self.dropout(x) # output layer x = self.fc2(x) # apply softmax activation x = self.softmax(x) return x bert = AutoModel.from_pretrained('bert-base-uncased') model = BERT_Arch(bert) path = './models/saved_weights_new_data.pt' model.load_state_dict(torch.load(path, map_location=device)) model.to(device) def inference(comment): tokens_test = tokenizer.batch_encode_plus( list([comment]), max_length=75, pad_to_max_length=True, truncation=True, return_token_type_ids=False ) test_seq = torch.tensor(tokens_test['input_ids']) test_mask = torch.tensor(tokens_test['attention_mask']) predictions = model(test_seq.to(device), test_mask.to(device)) predictions = predictions.detach().cpu().numpy() predictions = np.argmax(predictions, axis=1) return predictions I simply want to save a model from this notebook in a way such that I can use it for inferencing anywhere.
Just save your model using model.save_pretrained, here is an example: model.save_pretrained("<path_to_dummy_folder>") You can download the model from colab, save it on your gdrive or at any other location of your choice. While doing inference, you can just give path to this model (you may have to upload it) and start with inference. To load the model model = AutoModel.from_pretrained("<path_to_saved_pretrained_model>") #Note: Instead of AutoModel class, you may use the task specific class as well.
https://stackoverflow.com/questions/71561761/
What's an elegant way to compare one-hot label results?
I'm trying to calculate the 'accuracy' of one-hot label encoded tensors, such that for the following example, I'd get 0.5. tensor([[0,0,1], [1,0,0]]) == tensor([[0,0,1], [0,1,0]]) I want to know what proportion of the predictions are correctly labelled. What's the most elegant way to achieve this with a pytorch tensor?
If I understand correctly, you want all values in a row to match for that row to be considered a correct prediction. In that case it should be something like this: (tensor([[0,0,1], [1,0,0]]) == tensor([[0,0,1], [0,1,0]])).all(dim=1).float().mean()
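Worked through on the example from the question:

import torch

preds  = torch.tensor([[0, 0, 1], [1, 0, 0]])
labels = torch.tensor([[0, 0, 1], [0, 1, 0]])
accuracy = (preds == labels).all(dim=1).float().mean()
print(accuracy)  # tensor(0.5000): row 0 matches, row 1 does not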
https://stackoverflow.com/questions/71562120/
invalid syntax when copying weight in pytorch
I tried this: model2.down1.maxpool_conv.1.double_conv.0.weight.copy_(state_dict['layer1.0.conv1.weight']) But I got the error message below: model2.down1.maxpool_conv.1.double_conv.0.weight.copy_(state_dict['layer1.0.conv1.weight']) ^ SyntaxError: invalid syntax But printing my model shows it has "down1.maxpool_conv.1.double_conv.0.weight"
Python does not allow numbers to be attributes, so if you somehow create an object that has such attributes (it can be done only by low-level function calls) you will not have access to them through dot notation (and this is why you get an invalid syntax error, rather than something like an attribute-not-found error). You should instead take the param reference from named_parameters() and do the copy there (wrapped in torch.no_grad(), since an in-place copy into a leaf parameter that requires grad would otherwise raise a RuntimeError): for name, param in model2.named_parameters(): if name == 'down1.maxpool_conv.1.double_conv.0.weight': with torch.no_grad(): param.copy_(state_dict['layer1.0.conv1.weight'])
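Equivalently, without the loop, you can look the parameter up through the dict built from named_parameters() (a sketch):

import torch

with torch.no_grad():  # avoid autograd complaints about in-place copy
    params = dict(model2.named_parameters())
    params['down1.maxpool_conv.1.double_conv.0.weight'].copy_(
        state_dict['layer1.0.conv1.weight'])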
https://stackoverflow.com/questions/71564677/
For loop optimization to create an adjacency matrix
I am currently working with graphs with labeled edges. The original adjacency matrix is a matrix with shape [n_nodes, n_nodes, n_edges] where each cell [i,j,k] is 1 if nodes i and j are connected via edge k. I need to create a reverse of the original graph, where nodes become edges and edges become nodes, so I need a new matrix with shape [n_edges, n_edges, n_nodes], where each cell [i,j,k] is 1 if edges i and j have k as a common vertex. The following code correctly completes the task, but the use of 5 nested for-loops is too slow; processing the number of graphs I have to work with seems to take about 700 hours. Is there a better way to implement this? n_nodes = extended_adj.shape[0] n_edges = extended_adj.shape[2] reversed_graph = torch.zeros(n_edges, n_edges, n_nodes, 1) for i in range(n_nodes): for j in range(n_nodes): for k in range(n_edges): #If extended_adj[i][j][k] == 1, nodes i and j are connected with edge k #For this reason the edge k must be connected via node j to every outgoing edge of j if extended_adj[i][j][k] == 1: #Given node j, we need to loop through every other possible node (l) for l in range(n_nodes): #For every other node, we need to check if they are connected by an edge (m) for m in range(n_edges): if extended_adj[j][l][m] == 1: reversed_graph[k][m][j] = 1 Thanks in advance.
Echoing the comments above, this graph representation is almost certainly cumbersome and inefficient. But that notwithstanding, let's define a vectorized solution without loops that uses tensor views whenever possible, which should be fairly efficient to compute for larger graphs. For clarity let's use [i,j,k] to index G (original graph) and [i',j',k'] to index G' (new graph). And let's shorten n_edges to e and n_nodes to n. Consider the 2D matrix slice = torch.max(G, dim=1).values (note the .values: torch.max with a dim argument returns a (values, indices) namedtuple). At each coordinate [a,b] of this slice, a 1 indicates that node a is connected by edge b to some other node (we don't care which). slice = torch.max(G, dim=1).values # dimension [n,e] We're well on our way to the solution, but we need an expression that tells us whether a is connected to edge b and another edge c, for all edges c. We can map all combinations b,c by expanding slice, copying it and transposing it, and looking for intersections between the two. expanded_dim = [slice.shape[0],slice.shape[1],slice.shape[1]] # value [n,e,e] # two copies of slice, expanded on different dimensions expanded_slice = slice.unsqueeze(1).expand(expanded_dim) # dimension [n,e,e] transpose_slice = slice.unsqueeze(2).expand(expanded_dim) # dimension [n,e,e] G = torch.bitwise_and(expanded_slice,transpose_slice).int() # dimension [n,e,e] G[i',j',k'] now equals 1 iff node i' is connected by edge j' to some other node, AND node i' is connected by edge k' to some other node. If j' = k' the value is 1 as long as one of the endpoints of that edge is i'. Lastly, we reorder dimensions to get to your desired form. G = torch.permute(G,(1,2,0)) # dimension [e,e,n]
https://stackoverflow.com/questions/71569202/
config a neural network with a JSON file
I want to configure a neural network model (number of layers, number of neurons per layer, activation functions, ...) from a JSON file. But honestly, I have no idea how to do it. When I search the internet for "hyperparameters config PyTorch" or "hyperparameters tuning PyTorch" I can't find anything useful; the results are about hyperparameter optimization, not configuration from JSON. Does anyone have an idea how to do this (JSON file config), or know any useful tutorials I can watch/read? That would be a great help! Thank you in advance
Write your parameters to a JSON file with appropriate names, for example: { "number_layers":1, "number_neurons":2, "activation_function":"relu", "training":{ "learning_rate":0.01 } } Then read the JSON file: import json with open('xxx.json', 'r', encoding='utf-8') as f: config = json.loads(f.read()) and access the parameters you want with config['number_layers'] config['training']['learning_rate']
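From there, one possible way to turn such a config into a model (a sketch, assuming the JSON keys above and a hypothetical input size of 10):

import json
import torch.nn as nn

with open('xxx.json', 'r', encoding='utf-8') as f:
    config = json.load(f)

# map config strings to activation modules
activations = {'relu': nn.ReLU, 'tanh': nn.Tanh}

layers, in_features = [], 10  # 10 is a hypothetical input size
for _ in range(config['number_layers']):
    layers.append(nn.Linear(in_features, config['number_neurons']))
    layers.append(activations[config['activation_function']]())
    in_features = config['number_neurons']

model = nn.Sequential(*layers)
print(model)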
https://stackoverflow.com/questions/71572817/
torch model.load_state_dict *** AttributeError: 'ModelName' object has no attribute 'copy'
I previously saved a model like this: trainedmodelpath = "model.th" torch.save({'model': model, 'scaler': scaler, 'encoder': label_encoder, 'config': config_parameters}, trainedmodelpath) But when I try to load it like this: PreviousModelPath = "model.th" TorchLoadedState = torch.load(PreviousModelPath) TorchLoadedState_Model = TorchLoadedState['model'] model.load_state_dict(TorchLoadedState_Model) I received this error: *** AttributeError: 'ModelName' object has no attribute 'copy'
Save the model weights with model.state_dict() instead: torch.save({'model': model.state_dict(), 'scaler': scaler, 'encoder': label_encoder, 'config': config_parameters}, trainedmodelpath)
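The loading side then becomes (a sketch, assuming the model architecture has already been instantiated):

checkpoint = torch.load(trainedmodelpath)
model.load_state_dict(checkpoint['model'])  # restore weights into the model
scaler = checkpoint['scaler']               # the other saved objects load as-is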
https://stackoverflow.com/questions/71574676/
How to reorder tensor based on indexes tensor from the same size
Say I have a tensor A and an indexes tensor: A = [1, 2, 3, 4], indexes = [1, 0, 3, 2] I want to create a new tensor from these two with the following result: [2, 1, 4, 3] Each element of the result is an element from A, and the order is defined by the indexes tensor. Is there a way to do it with PyTorch tensor ops without loops? My goal is to do it for a 2D tensor, but I don't think there is a way to do it without loops, so I thought to project it to 1D, do the work, and project it back to 2D.
You can use scatter: A = torch.tensor([1, 2, 3, 4]) indices = torch.tensor([1, 0, 3, 2]) result = torch.tensor([0, 0, 0, 0]) print(result.scatter_(0, indices, A))
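For the 2D goal mentioned at the end of the question, torch.gather does the same thing row-wise without projecting to 1D. Note the direction: gather reads result[i][j] = A[i][indices[i][j]], whereas scatter_ writes A[i] to position indices[i]; for the permutation in the question the two happen to coincide.

import torch

A = torch.tensor([[1, 2, 3, 4], [5, 6, 7, 8]])
indices = torch.tensor([[1, 0, 3, 2], [2, 3, 0, 1]])
print(torch.gather(A, 1, indices))
# tensor([[2, 1, 4, 3],
#         [7, 8, 5, 6]])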
https://stackoverflow.com/questions/71575211/
DiffSharp sliding window implementation example
According to this issue, Check view operations correspond to torch #199, it seems like it is not hard to implement a sliding window function with a DiffSharp Tensor. However, I cannot find a hint by searching the DiffSharp official website. In PyTorch, unfold works like this: x = torch.arange(1., 20) x.unfold(0,4,2) tensor([[ 1., 2., 3., 4.], [ 3., 4., 5., 6.], [ 5., 6., 7., 8.], [ 7., 8., 9., 10.], [ 9., 10., 11., 12.], [11., 12., 13., 14.], [13., 14., 15., 16.], [15., 16., 17., 18.]]) How do I correctly implement the unfold operator with a DiffSharp Tensor?
Using the unfold provided by TensorSharp instead works fine. See: Implement miscellaneous utility functions
https://stackoverflow.com/questions/71575527/
In PyTorch, how can I shuffle a DataLoader?
I have a dataset with 10000 samples, where the classes are present in an ordered manner. First I loaded the data into an ImageFolder, then into a DataLoader, and I want to split this dataset into a train-val-test set. I know the DataLoader class has a shuffle parameter, but that's not good for me, because it only shuffles the data when enumeration happens on it. I know about the RandomSampler function, but with it I can only take n amount of data randomly from the dataset, and I have no control over what is taken out, so one sample might be present in the train, test and val sets at the same time. Is there a way to shuffle the data in a DataLoader? The only thing I need is the shuffle; after that I can subset the data.
The Subset dataset class takes indices (https://pytorch.org/docs/stable/data.html#torch.utils.data.Subset). You can exploit that to get this functionality, as below. Essentially, you shuffle the indices once and then pick subsets of the dataset from them. import numpy as np from torch.utils.data import Subset # suppose `dataset` is the variable pointing to the whole dataset N = len(dataset) # generate & shuffle indices indices = np.random.permutation(np.arange(N)) # there are many ways to do the above; np.random.choice could be used here too # select train/val/test; for the demo I am using a 70/15/15 split train_indices = indices[:int(0.7*N)] val_indices = indices[int(0.7*N):int(0.85*N)] test_indices = indices[int(0.85*N):] train_dataset = Subset(dataset, train_indices) val_dataset = Subset(dataset, val_indices) test_dataset = Subset(dataset, test_indices)
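The same shuffling can also be done with torch alone (a sketch):

import torch
from torch.utils.data import Subset

N = len(dataset)
indices = torch.randperm(N).tolist()  # a random permutation of 0..N-1
train_dataset = Subset(dataset, indices[:int(0.7 * N)])
val_dataset   = Subset(dataset, indices[int(0.7 * N):int(0.85 * N)])
test_dataset  = Subset(dataset, indices[int(0.85 * N):])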
https://stackoverflow.com/questions/71576668/
Saving the weight of one layer in Pytorch
I would like to save the weights of a model, but not the whole model like this: torch.save(model, 'model.pth') Rather, just one layer. For example, suppose I have defined one layer like this: self.conv_up3 = convrelu(256 + 512, 512, 3, 1) How do I save the weights of only this layer? And how do I load them back into this layer?
You can do the following to save/get the parameters of the specific layer: specific_params = self.conv_up3.state_dict() # save/manipulate `specific_params` as you want And similarly, to load the params back into that specific layer: self.conv_up3.load_state_dict(specific_params) You can do this because each layer is a neural network (nn.Module instance) in itself.
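To persist it to disk rather than keep it in memory (a sketch):

torch.save(self.conv_up3.state_dict(), 'conv_up3.pth')   # save just this layer
self.conv_up3.load_state_dict(torch.load('conv_up3.pth'))  # restore it later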
https://stackoverflow.com/questions/71577239/
huggingface sequence classification unfreezing layers
I am using longformer for sequence classification - a binary problem. I have downloaded the required files:

# load model and tokenizer and define length of the text sequence
model = LongformerForSequenceClassification.from_pretrained('allenai/longformer-base-4096',
                                                            gradient_checkpointing=False,
                                                            attention_window = 512)
tokenizer = LongformerTokenizerFast.from_pretrained('allenai/longformer-base-4096', max_length = 1024)

Then, as shown here, I ran the code below:

for name, param in model.named_parameters():
    print(name, param.requires_grad)

longformer.embeddings.word_embeddings.weight True
longformer.embeddings.position_embeddings.weight True
longformer.embeddings.token_type_embeddings.weight True
longformer.embeddings.LayerNorm.weight True
longformer.embeddings.LayerNorm.bias True
longformer.encoder.layer.0.attention.self.query.weight True
longformer.encoder.layer.0.attention.self.query.bias True
longformer.encoder.layer.0.attention.self.key.weight True
longformer.encoder.layer.0.attention.self.key.bias True
longformer.encoder.layer.0.attention.self.value.weight True
longformer.encoder.layer.0.attention.self.value.bias True
longformer.encoder.layer.0.attention.self.query_global.weight True
longformer.encoder.layer.0.attention.self.query_global.bias True
longformer.encoder.layer.0.attention.self.key_global.weight True
longformer.encoder.layer.0.attention.self.key_global.bias True
longformer.encoder.layer.0.attention.self.value_global.weight True
longformer.encoder.layer.0.attention.self.value_global.bias True
longformer.encoder.layer.0.attention.output.dense.weight True
longformer.encoder.layer.0.attention.output.dense.bias True
longformer.encoder.layer.0.attention.output.LayerNorm.weight True
longformer.encoder.layer.0.attention.output.LayerNorm.bias True
longformer.encoder.layer.0.intermediate.dense.weight True
longformer.encoder.layer.0.intermediate.dense.bias True
longformer.encoder.layer.0.output.dense.weight True
longformer.encoder.layer.0.output.dense.bias True
longformer.encoder.layer.0.output.LayerNorm.weight True
longformer.encoder.layer.0.output.LayerNorm.bias True
... (layers 1 through 11 repeat the same pattern, every parameter with requires_grad True) ...
classifier.dense.weight True
classifier.dense.bias True
classifier.out_proj.weight True
classifier.out_proj.bias True

My questions:

Why is param.requires_grad True for all layers? Shouldn't it be False at least for the classifier. layers? Aren't we training them? Does param.requires_grad==True mean that a particular layer is frozen? I am confused by the wording requires_grad. Does it mean frozen?

If I want to train only some of the layers, as shown here, should I use the code below?

for name, param in model.named_parameters():
    if name.startswith("..."): # choose whatever you like here
        param.requires_grad = False

Considering it takes a lot of time to train, is there a specific recommendation regarding the layers that I should train? To begin with, I am planning to train all layers starting with longformer.encoder.layer.11
and `classifier.dense.weight`, `classifier.dense.bias`, `classifier.out_proj.weight`, `classifier.out_proj.bias`.

Do I need to add any additional layers such as dropout, or is that already taken care of by LongformerForSequenceClassification.from_pretrained? I am not seeing any dropout layers in the output above, which is why I am asking.

#------------------ update 1

How can I tell which layers are frozen when using the code below, from the answer given by @joe32140? My guess is that everything except the last 4 entries in the output shown in my original question gets frozen, but is there an easier way to check?

for param in model.base_model.parameters():
    param.requires_grad = False
requires_grad==True means that we will compute the gradient of this tensor, so the default setting is to train/finetune all layers. You can train only the output layer by freezing the encoder with

for param in model.base_model.parameters():
    param.requires_grad = False

Yes, dropout is used in the Hugging Face output-layer implementation. See here: https://github.com/huggingface/transformers/blob/198c335d219a5eb4d3f124fdd1ce1a9cd9f78a9b/src/transformers/models/longformer/modeling_longformer.py#L1938

As for update 1: yes, base_model refers to the layers excluding the output classification head. However, it's actually two layers rather than four, since each layer has both a weight and a bias tensor.
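A quick sanity check (a sketch; it assumes the model variable from the question) is to freeze the base model and then list which parameters are still trainable:

for param in model.base_model.parameters():
    param.requires_grad = False

# Only the classification head should remain trainable
trainable = [name for name, param in model.named_parameters() if param.requires_grad]
print(trainable)
# ['classifier.dense.weight', 'classifier.dense.bias',
#  'classifier.out_proj.weight', 'classifier.out_proj.bias']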
https://stackoverflow.com/questions/71577525/
Keras Upsampling2d vs PyTorch Upsampling
I am trying to convert a Keras model to PyTorch. It involves UpSampling2D from Keras. When I used torch.nn.UpsamplingNearest2d in PyTorch (since the default interpolation of UpSampling2D in Keras is nearest), I got inconsistent results. The example is as follows:

Keras behaviour

In [3]: t1 = tf.random_normal([32, 8, 8, 512]) # as we have channels last in keras
In [4]: u_s = tf.keras.layers.UpSampling2D(2)(t1)
In [5]: u_s.shape
Out[5]: TensorShape([Dimension(32), Dimension(16), Dimension(16), Dimension(512)])

So the output shape is (32,16,16,512). Now let's do the same thing with PyTorch.

PyTorch behaviour

In [2]: t1 = torch.randn([32,512,8,8]) # as channels first in pytorch
In [3]: u_s = torch.nn.UpsamplingNearest2d(2)(t1)
In [4]: u_s.shape
Out[4]: torch.Size([32, 512, 2, 2])

Here the output shape is (32,512,2,2), compared to the expected (32,512,16,16) matching Keras. So how do I get results equivalent to Keras in PyTorch? Thanks
In Keras, UpSampling2D uses a scaling factor to upsample. SOURCE.

tf.keras.layers.UpSampling2D(size, interpolation='nearest')

size: Int, or tuple of 2 integers. The upsampling factors for rows and columns.

PyTorch, on the other hand, provides both a direct output size and a scaling factor. SOURCE.

torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)

To specify the scale, it takes either the size or the scale_factor as its constructor argument. So, in your case:

# scaling factor in keras
t1 = tf.random.normal([32, 8, 8, 512])
tf.keras.layers.UpSampling2D(2)(t1).shape
TensorShape([32, 16, 16, 512])

# direct output size in pytorch
t1 = torch.randn([32,512,8,8]) # as channels first in pytorch
torch.nn.UpsamplingNearest2d(size=(16, 16))(t1).shape
# or torch.nn.UpsamplingNearest2d(size=16)(t1).shape
torch.Size([32, 512, 16, 16])

# scaling factor in pytorch
torch.nn.UpsamplingNearest2d(scale_factor=2)(t1).shape
torch.Size([32, 512, 16, 16])
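For what it's worth, the same result can also be obtained with the functional interpolation API (a short sketch using the tensor from the question):

import torch
import torch.nn.functional as F

t1 = torch.randn([32, 512, 8, 8])
out = F.interpolate(t1, scale_factor=2, mode='nearest')
print(out.shape)  # torch.Size([32, 512, 16, 16])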
https://stackoverflow.com/questions/71585394/
How to combine an image tensor (4D) and a depth tensor (4D) to create a 5D tensor [batch size, channels, depth, height, width] in PyTorch?
During training, I load image and disparity data. The image tensor has shape [2, 3, 256, 256] and the disparity/depth tensor has shape [2, 1, 256, 256] (batch size, channels, height, width). I want to use Conv3D, so I need to combine these two tensors and create a new tensor of shape [2, 3, 256, 256, 256] (batch size, channels, depth, height, width). The depth values range from 0-400, and a possibility is to divide that into intervals, e.g., 4 intervals of 100. I want the resulting tensor to look like a voxel grid, similarly to the technique used in this paper. The training loop that iterates over the data is below:

for batch_id, sample in enumerate(train_loader):
    sample = {name: tensor.cuda() for name, tensor in sample.items()}
    # image tensor [2, 3, 256, 256]
    rgb_image = transforms.Lambda(lambda x: x.mul(255))(sample["frame"])
    # translate disparity to depth
    depth_from_disparity_frame = 132.28 / sample["disparity_frame"]
    # depth tensor [2, 1, 256, 256]
    depth_image = depth_from_disparity_frame.unsqueeze(1)
From the article you linked:

We create a 3D voxel representation, with the same height and width as the original image, and with a depth determined by the difference between the maximum and minimum depth values found in the images. Each RGB-D pixel of an image is then placed at the same position in the voxel grid but at its corresponding depth.

This is what Ivan suggested, more or less. If you know that your depth will always be 0-400, I imagine you can skip the part about "depth determined by the difference between the maximum and minimum depth values". This could always be normalized beforehand or later.

Code using dummy data:

import torch
import torch.nn.functional as F

# Declarations (dummy tensors)
rgb_im = torch.randint(0, 255, [1, 3, 256, 256])
depth = torch.randint(0, 400, [1, 1, 256, 256])

# Calculations
depth_ohe = F.one_hot(depth, num_classes=400)       # of shape (batch, 1, height, width, depth)
bchwd_tensor = rgb_im.unsqueeze(-1)*depth_ohe       # of shape (batch, channel, height, width, depth)
bcdhw_tensor = bchwd_tensor.permute(0, 1, 4, 2, 3)  # of shape (batch, channel, depth, height, width)
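If you do want the coarser binning mentioned in the question (e.g. 4 intervals of 100) instead of 400 depth slices, a minimal sketch of the same idea:

import torch
import torch.nn.functional as F

depth = torch.randint(0, 400, [1, 1, 256, 256])

# Bucket the 0-400 range into 4 intervals of 100 each
depth_bins = (depth // 100).clamp(max=3)          # values in {0, 1, 2, 3}
depth_ohe = F.one_hot(depth_bins, num_classes=4)  # (batch, 1, height, width, 4)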
https://stackoverflow.com/questions/71585968/
torch matrix equality sum operation
I want to do an operation similar to matrix multiplication, except instead of multiplying I want to check equality. The effect that I want to achieve is similar to the following:

a = torch.Tensor([[1, 2, 3], [4, 5, 6]]).to(torch.uint8)
b = torch.Tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]]).to(torch.uint8)
result = [[sum(a[i] == b[j]) for j in range(len(b))] for i in range(len(a))]

Is there a way that I can use einsum, or any other function in pytorch, to achieve the above efficiently?
You can make use of broadcasting to do the same, for instance with

result = (a[:, None, :] == b[None, :, :]).sum(dim=2)

Here None just introduces a dummy dimension - alternatively you can use the less visual .unsqueeze() instead.
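A short self-contained check using the tensors from the question:

import torch

a = torch.tensor([[1, 2, 3], [4, 5, 6]], dtype=torch.uint8)
b = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.uint8)

# result[i, j] counts how many positions of a[i] and b[j] are equal
result = (a[:, None, :] == b[None, :, :]).sum(dim=2)
print(result)
# tensor([[3, 0, 0],
#         [0, 3, 0]])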
https://stackoverflow.com/questions/71586860/
Restructure code to avoid for loops in training loop?
I am defining a train function to which I pass a data_loader as a dict: data_loader['train'] consists of the train data and data_loader['val'] consists of the validation data. I created a loop which checks which phase I am in (either train or val) and sets the model to either model.train() or model.eval() accordingly. However, I feel I have too many nested for loops here, making it computationally expensive. Could anyone recommend a better way of constructing my train function? Should I create a separate function for validating instead? Below is what I have so far:

#Make train function (simple at first)
def train_network(model, optimizer, data_loader, no_epochs):
    total_epochs = notebook.tqdm(range(no_epochs))
    for epoch in total_epochs:
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()
            else:
                model.eval()
            for i, (images, g_truth) in enumerate(data_loader[phase]):
                images = images.to(device)
                g_truth = g_truth.to(device)
The outer-most and inner-most for loops are common when writing training scripts. The most common pattern I see is:

total_epochs = notebook.tqdm(range(no_epochs))
for epoch in total_epochs:
    # Training
    model.train()
    for i, (images, g_truth) in enumerate(train_data_loader):
        images = images.to(device)
        g_truth = g_truth.to(device)
        ...

    # Validating
    model.eval()
    for i, (images, g_truth) in enumerate(val_data_loader):
        images = images.to(device)
        g_truth = g_truth.to(device)
        ...

If you need to use your previous data_loader variable, you can replace train_data_loader with data_loader["train"] and val_data_loader with data_loader["val"].

This layout is common because we generally want to do some things differently when validating as opposed to training. It also structures the code better and avoids a lot of if phase == "train" checks at different parts of your inner-most loop. It does, however, mean that you might need to duplicate some code. That trade-off is generally accepted, and your original structure might be worth considering if there were 3 or more phases, like multiple validation phases or an evaluation phase as well.
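One concrete difference worth adding in the validation branch (a sketch, building on the loop above): gradients are not needed while validating, so wrapping that loop in torch.no_grad() saves memory and time.

model.eval()
with torch.no_grad():  # disable gradient tracking during validation
    for i, (images, g_truth) in enumerate(val_data_loader):
        images = images.to(device)
        g_truth = g_truth.to(device)
        ...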
https://stackoverflow.com/questions/71589440/
"Dataloader() takes no arguments" error while executing code to read a file in Python using OOP methodology
Hi guys, I am trying to create training, validation and test datasets in Python for the bike-sharing dataset using object-oriented programming. I first created a class called "Dataloader" to read the file and then split the data into train, validation and test sets. However, I am facing some challenges while executing the code. Pasting the code above and the error response below. Need some help with that.

Error Message:

--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-29-29ec681918b0> in <module> ----> 1 dataloader = Dataloader('C:/Users/pbhal/Downloads/hour.csv/hour.csv') 2 train, val, test = dataloader.getData() 3 fullData = dataloader.getFullData() 4 5 category_features = ['season', 'holiday', 'mnth', 'hr', 'weekday', 'workingday', 'weathersit'] TypeError: Dataloader() takes no arguments

I was trying to create train, validation and test sets out of the hour.csv data file. However, it did not work out.
You have a typo: you should use __init__ (two underscores on each side) instead of _init_. As written, your class does not have an initialization method defined, and the fallback (inherited from Python's base object class) does not accept any arguments.

You can validate that this is the issue with a small empty class:

class Example:
    pass

a = Example()   # Works
b = Example(1)  # Fails with "TypeError: Example() takes no arguments"
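Applied to the question, the fix is just the method name (a sketch; the attribute name is an assumption, since the full class wasn't posted):

class Dataloader:
    def __init__(self, path):  # two underscores on each side
        self.path = path

dataloader = Dataloader('C:/Users/pbhal/Downloads/hour.csv/hour.csv')  # now works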
https://stackoverflow.com/questions/71596515/
Error Training Custom COCO Dataset with Detectron2
I'm trying to train a custom COCO-format dataset with Detectron2 on PyTorch. My datasets are json files with the aforementioned COCO-format, with each item in the "annotations" section looking like this: The code for setting up Detectron2 and registering the training & validation datasets are as follows: from detectron2.data.datasets import register_coco_instances for d in ["train", "validation"]: register_coco_instances(f"segmentation_{d}", {}, f"/content/drive/MyDrive/Segmentation Annotations/{d}.json", f"/content/drive/MyDrive/Segmentation Annotations/imgs") from detectron2.engine import DefaultTrainer from detectron2.config import get_cfg cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")) cfg.DATASETS.TRAIN = ("segmentation_train",) cfg.DATASETS.TEST = () cfg.DATALOADER.NUM_WORKERS = 2 cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") cfg.SOLVER.IMS_PER_BATCH = 2 cfg.SOLVER.BASE_LR = 0.00025 cfg.SOLVER.MAX_ITER = 1000 cfg.SOLVER.STEPS = [] cfg.MODEL.ROI_HEADS.NUM_CLASSES = 20 os.makedirs(cfg.OUTPUT_DIR, exist_ok=True) trainer = DefaultTrainer(cfg) trainer.resume_or_load(resume=False) trainer.train() However, when I run the training, I get the following error after the first iteration: KeyError Traceback (most recent call last) <ipython-input-12-2aaec108c313> in <module>() 17 trainer = DefaultTrainer(cfg) 18 trainer.resume_or_load(resume=False) ---> 19 trainer.train() 8 frames /usr/local/lib/python3.7/dist-packages/detectron2/engine/defaults.py in train(self) 482 OrderedDict of results, if evaluation is enabled. Otherwise None. 483 """ --> 484 super().train(self.start_iter, self.max_iter) 485 if len(self.cfg.TEST.EXPECTED_RESULTS) and comm.is_main_process(): 486 assert hasattr( /usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py in train(self, start_iter, max_iter) 147 for self.iter in range(start_iter, max_iter): 148 self.before_step() --> 149 self.run_step() 150 self.after_step() 151 # self.iter == max_iter can be used by `after_train` to /usr/local/lib/python3.7/dist-packages/detectron2/engine/defaults.py in run_step(self) 492 def run_step(self): 493 self._trainer.iter = self.iter --> 494 self._trainer.run_step() 495 496 @classmethod /usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py in run_step(self) 265 If you want to do something with the data, you can wrap the dataloader. 
266 """ --> 267 data = next(self._data_loader_iter) 268 data_time = time.perf_counter() - start 269 /usr/local/lib/python3.7/dist-packages/detectron2/data/common.py in __iter__(self) 232 233 def __iter__(self): --> 234 for d in self.dataset: 235 w, h = d["width"], d["height"] 236 bucket_id = 0 if w > h else 1 /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self) 519 if self._sampler_iter is None: 520 self._reset() --> 521 data = self._next_data() 522 self._num_yielded += 1 523 if self._dataset_kind == _DatasetKind.Iterable and \ /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self) 1181 if len(self._task_info[self._rcvd_idx]) == 2: 1182 data = self._task_info.pop(self._rcvd_idx)[1] -> 1183 return self._process_data(data) 1184 1185 assert not self._shutdown and self._tasks_outstanding > 0 /usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data) 1227 self._try_put_index() 1228 if isinstance(data, ExceptionWrapper): -> 1229 data.reraise() 1230 return data 1231 /usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self) 423 # have message field 424 raise self.exc_type(message=msg) --> 425 raise self.exc_type(msg) 426 427 KeyError: Caught KeyError in DataLoader worker process 1. Original Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch data.append(next(self.dataset_iter)) File "/usr/local/lib/python3.7/dist-packages/detectron2/data/common.py", line 201, in __iter__ yield self.dataset[idx] File "/usr/local/lib/python3.7/dist-packages/detectron2/data/common.py", line 90, in __getitem__ data = self._map_func(self._dataset[cur_idx]) File "/usr/local/lib/python3.7/dist-packages/detectron2/utils/serialize.py", line 26, in __call__ return self._obj(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 189, in __call__ self._transform_annotations(dataset_dict, transforms, image_shape) File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 132, in _transform_annotations annos, image_shape, mask_format=self.instance_mask_format File "/usr/local/lib/python3.7/dist-packages/detectron2/data/detection_utils.py", line 400, in annotations_to_instances segms = [obj["segmentation"] for obj in annos] File "/usr/local/lib/python3.7/dist-packages/detectron2/data/detection_utils.py", line 400, in <listcomp> segms = [obj["segmentation"] for obj in annos] KeyError: 'segmentation' You all have any idea why this might be happening, and if so, what can be done to fix it? Any input is appreciated. Thanks!
It's difficult to give a concrete answer without looking at the full annotation file, but a KeyError exception is raised when trying to access a key that is not in a dictionary. From the error message you've posted, this key seems to be 'segmentation'.

This is not in your code snippet, but before even getting into network training, have you done any exploration/inspections using the registered datasets? Doing some basic exploration or inspections would expose any problems with your dataset so you can fix them early in your development process (as opposed to letting the trainer catch them, in which case the error messages could get long and confounding).

In any case, for your specific issue, you can take the registered training dataset and check if all annotations have the 'segmentation' field. A simple code snippet to do this below.

# Register datasets
from detectron2.data.datasets import register_coco_instances
for d in ["train", "validation"]:
    register_coco_instances(f"segmentation_{d}", {}, f"/content/drive/MyDrive/Segmentation Annotations/{d}.json", f"/content/drive/MyDrive/Segmentation Annotations/imgs")

# Check if all annotations in the registered training set have the segmentation field
from detectron2.data import DatasetCatalog
dataset_dicts_train = DatasetCatalog.get('segmentation_train')
for d in dataset_dicts_train:
    for obj in d['annotations']:
        if 'segmentation' not in obj:
            print(f'{d["file_name"]} has an annotation with no segmentation field')

It would be strange if some images have annotations with no 'segmentation' fields in them, but it would indicate that there's some problem in your upstream annotation process.
https://stackoverflow.com/questions/71601691/
For loops in a dictionary, pytorch
Hi guys, I have a question: for the variable "image_datasets" there is a for loop over ['train', 'val']. I have never seen a for loop used inside a dict like this before.

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(mean, std)
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean, std)
    ]),
}

data_dir = 'data/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=0) for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
This is called dictionary comprehension, and it is iterating over a list. The code

dataloaders = {
    x: torch.utils.data.DataLoader(
        image_datasets[x], batch_size=4, shuffle=True, num_workers=0
    )
    for x in ['train', 'val']
}

is equivalent to

dataloaders = {
    'train': torch.utils.data.DataLoader(
        image_datasets['train'], batch_size=4, shuffle=True, num_workers=0
    ),
    'val': torch.utils.data.DataLoader(
        image_datasets['val'], batch_size=4, shuffle=True, num_workers=0
    )
}
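For intuition, the general form is {key_expression: value_expression for item in iterable}; a toy example with nothing PyTorch-specific about it:

squares = {x: x ** 2 for x in [1, 2, 3]}
print(squares)  # {1: 1, 2: 4, 3: 9}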
https://stackoverflow.com/questions/71602569/
PyTorch clip_grad_norm vs clip_grad_norm_, what is the differece when it has underline?
While working with PyTorch, I see two functions in torch.nn.utils: clip_grad_norm and clip_grad_norm_. I wanted to know the difference, so I went to check the documentation, but when I searched I only found clip_grad_norm_ and not clip_grad_norm. So I'm here to ask if anyone knows the difference.
Pytorch uses the trailing underscore convention for in-place operations. So the difference is that the one with an underscore modifies the tensor in place and the other one leaves the original tensor unmodified and returns a new tensor.
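A minimal usage sketch of the in-place version (and, if I recall correctly, the non-underscore clip_grad_norm has since been deprecated in favor of clip_grad_norm_, which is likely why only the latter appears in the documentation):

import torch
from torch import nn
from torch.nn.utils import clip_grad_norm_

net = nn.Linear(10, 1)
loss = net(torch.randn(4, 10)).sum()
loss.backward()

# Clips the gradients of all parameters in place (note the trailing underscore)
total_norm = clip_grad_norm_(net.parameters(), max_norm=1.0)
print(total_norm)  # the norm of the gradients before clipping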
https://stackoverflow.com/questions/71608032/
Python assigning attribute changing length
I have a class that accepts a parameter X. This parameter X is a numpy array, containing lists that contain ints.

array([[ 101, 2002, 8542, ..., 0, 0, 0],
       [ 101, 2002, 8974, ..., 0, 0, 0],
       [ 101, 5076, 2743, ..., 0, 0, 0],
       ...,
       [ 101, 4302, 2253, ..., 0, 0, 0],
       [ 101, 13875, 2003, ..., 0, 0, 0],
       [ 101, 1045, 2031, ..., 0, 0, 0]])

I have a class that takes this X and assigns it to an attribute.

class TaskADataset(Dataset):
    def __init__(self, X, y):
        self.X = X,
        self.y = y

But the parameter X and the attribute X now have different lengths.

dataset = TaskADataset(X, y)
print(len(dataset.X), len(X))
1 10000

Why is this occurring? Thank you for the help.
As shriakhilc pointed out, I included a trailing "," in self.X = X, which turned the attribute into a tuple with one element.
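For anyone hitting the same thing, a tiny sketch of why the trailing comma changes the length:

X = list(range(10000))

a = X,   # trailing comma builds a one-element tuple: (X,)
b = X    # plain assignment keeps the list

print(len(a), len(b))  # 1 10000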
https://stackoverflow.com/questions/71608931/
Validation accuracy and loss are the same after each epoch
My validation accuracy is the same after every epoch, and I'm not sure what I'm doing wrong here. I have added my CNN network and my training function below. I initialise the CNN once. The training function, however, works perfectly fine: the loss goes down and the accuracy increases per epoch. I made a test function with the same structure as my validation function and the same thing happens. My train/val split is 40000/10000, and I am using CIFAR-10. Below is my code:

#Make train function (simple at first)
def train_network(model, optimizer, train_loader, num_epochs=10):
    total_epochs = notebook.tqdm(range(num_epochs))
    model.train()
    for epoch in total_epochs:
        train_acc = 0.0
        running_loss = 0.0
        for i, (x_train, y_train) in enumerate(train_loader):
            x_train, y_train = x_train.to(device), y_train.to(device)
            y_pred = model(x_train)
            loss = criterion(y_pred, y_train)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            running_loss += loss.item()
            train_acc += accuracy(y_pred, y_train)
        running_loss /= len(train_loader)
        train_acc /= len(train_loader)
        print('Evaluation Loss: %.3f | Evaluation Accuracy: %.3f'%(running_loss, train_acc))

@torch.no_grad()
def validate_network(model, optimizer, val_loader, num_epochs=10):
    model.eval()
    total_epochs = notebook.tqdm(range(num_epochs))
    for epoch in total_epochs:
        accu = 0.0
        running_loss = 0.0
        for i, (x_val, y_val) in enumerate(val_loader):
            x_val, y_val = x_val.to(device), y_val.to(device)
            val_pred = model(x_val)
            loss = criterion(val_pred, y_val)
            running_loss += loss.item()
            accu += accuracy(val_pred, y_val)
        running_loss /= len(val_loader)
        accu /= len(val_loader)
        print('Val Loss: %.3f | Val Accuracy: %.3f'%(running_loss,accu))

OUTPUT:

Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786
Val Loss: 0.623 | Val Accuracy: 0.786

So I guess my question is: how do I get a representative output for my accuracy and loss per epoch when validating?
What happens here is that you run a loop for num_epochs where you just test the same network multiple times. I would recommend calling the validation function during training, at the end of each epoch, to test how each epoch improves the model's performance. This means that the training function should look something like:

def train_network(model, optimizer, train_loader, val_loader, num_epochs=10):
    total_epochs = notebook.tqdm(range(num_epochs))
    model.train()
    for epoch in total_epochs:
        train_acc = 0.0
        running_loss = 0.0
        for i, (x_train, y_train) in enumerate(train_loader):
            x_train, y_train = x_train.to(device), y_train.to(device)
            y_pred = model(x_train)
            loss = criterion(y_pred, y_train)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
            running_loss += loss.item()
            train_acc += accuracy(y_pred, y_train)
        running_loss /= len(train_loader)
        train_acc /= len(train_loader)
        print('Evaluation Loss: %.3f | Evaluation Accuracy: %.3f'%(running_loss, train_acc))
        validate_network(model, optimizer, val_loader, num_epochs=1)

Notice that I added the validation loader as an input and called the validation function at the end of each epoch, setting the validation number of epochs to 1. A small additional change would be to remove the epochs loop from the validation function.
https://stackoverflow.com/questions/71610728/
How to place the dataset for training YOLOv5?
I’m currently working on object detection using YOLOv5. I trained a model with a custom dataset which has 3 classes = [‘Car’,‘Motorcycle’,‘Person’]. I have many questions related to YOLOv5. All the custom images are labelled using Roboflow.

question 1: As you can see from the table, my dataset has a mix of images with different sizes. Will this be a problem in training? And also, assume that I’ve trained the model and got ‘best.pt’. Will that model work efficiently on any dimensions of images/videos?

question 2: Is this directory model correct for training? I also have a ‘test’ directory, but it seems that this directory is not used at all. The images in the ‘test’ folder are useless. (I know that I’m asking dumb questions, please bear with me.) Is it ok if I place all my images like this? And do I need a ‘test’ folder?

question 3: What is the ‘imgsz’ in detect.py? Is it downsampling the input source?

I’ve spent more than 3 weeks on YOLO. I love it but I find some parts difficult to grasp. Kindly provide suggestions for these questions. Thanks in advance.
"question1 : As you can see from the table that my dataset has mix of images with different sizes. Will this be a problem in training? And also assume that i’ve trained the model and got ‘best.pt’. Will that model work efficiently in any dimensions of images/videos." As long as you've resized/normalized all of your images to be the same square size, then you should be fine. YOLO trains on square images. You can use a platform like Roboflow to process your images so they not only come out in the right structure (for your images and annotation files) but also resize them while generating your dataset so they are all the same size. http://roboflow.com/ - you just need to make a public workspace to upload your images to and you can use the platform free. Here's a video that covers custom training with YOLOv5: https://www.youtube.com/watch?v=x0ThXHbtqCQ Roboflow's python package can also be used to extract your images programmatically: https://docs.roboflow.com/python "Is this directory model correct for training. Even i have ‘test’ directory but it seems that the directory is not at all used. The images in the ‘test’ folder is useless. ( I know that i’m asking dumb questions, please bare with me.)" Yes that directory model is correct from training. Its what I have whenever I run YOLOv5 training too. You do need a test folder if you want to run inference against the test folder images to learn more about your model's performance. The 'imgsz' parameter in detect.py is for setting the height/width of the images for inference. You set it at the value you used for --img when you ran train.py. For example: Resized images to 640 by 640 when generating your images for training? Use (640, 640) for the 'imgsz' parameter (that is the default value). And that would also mean you set --img to 640 when you ran train.py detect.py parameters (YOLOv5 Github repo) train.py parameters (YOLOv5 Github repo) YOLOv5's Github: Tips for Best Training Results https://github.com/ultralytics/yolov5/wiki/Tips-for-Best-Training-Results Roboflow's Model Production Tips: https://docs.roboflow.com/model-tips
https://stackoverflow.com/questions/71612175/
Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend
I am trying to save a PyTorch model to a .ptl file and load it in Android, but it keeps throwing this error, which is driving me nuts:

Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Vulkan, BackendSelect, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC].

But the thing is that I am transferring my model to the CPU before saving it, so the error doesn't even make sense.

example = torch.rand(1, 3, 224, 224)
model_conv = model_conv.to("cpu")

for param in model_conv.parameters():
    if param.is_cuda:
        print("Tensor on cuda")
        break
else:
    print("No tensor on cuda.")

# move model back to cpu, do tracing, and optimize
traced_script_module = torch.jit.trace(model_conv, example)
torchscript_model_optimized = optimize_for_mobile(traced_script_module)

# save optimized model for mobile
PATH = 'model.ptl'
torchscript_model_optimized._save_for_lite_interpreter(PATH)
print(f"optimized model saved to {PATH}")

The output of the for loop is No tensor on cuda.

This is how I am loading the model in Android. I loaded a sample model from their GitHub and it works, so I doubt there is an issue with the Android code.

module = LiteModuleLoader.load(MainActivity.assetFilePath(getApplicationContext(), "model.ptl"));

Side note: there are so many ways to save a model. Why is there not good documentation for PyTorch Mobile? TFLite has better documentation than this.
It turned out to be an Android issue. Android does not update the asset files even if they are changed, so for me it was using an old model which had not been converted to CPU. If you are having similar issues, or your accuracy is not improving, try clearing the app data, either in the simulator or in the app settings (Phone/emulator settings -> apps menu). This is the GitHub issue that ended up helping me: https://github.com/pytorch/pytorch/issues/53650
https://stackoverflow.com/questions/71612205/
PyTorch reads frames from a video for image classification, all frames predicted as the same category
I have 3 categories of fish images, each category has 1000 images, when I trained the model, the classification accuracy was 97%, and when I used the folder images for prediction, the accuracy was no problem. But when I replace the picture with a video, and cut out each frame from the video for image classification, no matter which category of video it is, all frames are predicted to be category 1:"stingray".Why? #!/usr/bin/env python # coding: utf-8 import torch from torchvision import transforms import torchvision.models as models import cv2 import torch.nn.functional as F CLASSES = {0:"goldfish", 1:"stingray", 2:"tench"} BATCH_SIZE = 4 IMG_SIZE = (400, 400) TRANSFORM_IMG = transforms.Compose([ transforms.ToTensor(), transforms.Resize(IMG_SIZE), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] ) ]) # model device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = models.vgg19_bn(pretrained=False, num_classes=3) model.to(device) model.load_state_dict(torch.load('checkpoint.pt')) model.eval() videoCapture = cv2.VideoCapture(r'D:/video/Goldfish.mp4') fps = videoCapture.get(cv2.CAP_PROP_FPS) size = (int(videoCapture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(videoCapture.get(cv2.CAP_PROP_FRAME_HEIGHT))) ps = 25 fourcc = cv2.VideoWriter_fourcc(*'DIVX') videoWriter = cv2.VideoWriter("D:/goldfish.mp4", fourcc, fps, size) with torch.no_grad(): success, frame = videoCapture.read() while success: # frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) image_tensor = TRANSFORM_IMG(frame) image_tensor = image_tensor.unsqueeze(0) test_input = image_tensor.to(device) outputs = model(test_input) _, predicted = torch.max(outputs, 1) probability = F.softmax(outputs, dim=1) top_probability, top_class = probability.topk(1, dim=1) predicted = predicted.cpu().detach().numpy() predicted = predicted.tolist()[0] label = CLASSES[predicted] top_probability = top_probability.cpu().detach().numpy() top_probability = top_probability.tolist()[0][0] top_probability = '%.2f%%' % (top_probability * 100) print(top_probability) print(label) #all the label is stingray############################################ frame = cv2.putText(frame, label+': '+top_probability, (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 255), 2) videoWriter.write(frame) success, frame = videoCapture.read() videoWriter.release()
When I use this code, there is no problem. I convert the image channels from BGR to RGB for PyTorch.

import torch
from torchvision import transforms
import torchvision.models as models
import cv2
import torch.nn.functional as F
import copy

CLASSES = {0:"goldfish", 1:"stingray", 2:"tench"}
BATCH_SIZE = 4
IMG_SIZE = (400, 400)

TRANSFORM_IMG = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize(IMG_SIZE),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])

# model
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.vgg19_bn(pretrained=False, num_classes=3)
model.to(device)
model.load_state_dict(torch.load('checkpoint.pt'))
model.eval()

videoCapture = cv2.VideoCapture(r'video/Goldfish.mp4')
fps = videoCapture.get(cv2.CAP_PROP_FPS)
size = (int(videoCapture.get(cv2.CAP_PROP_FRAME_WIDTH)), int(videoCapture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
ps = 25
fourcc = cv2.VideoWriter_fourcc(*'DIVX')
videoWriter = cv2.VideoWriter(r"D:/goldfish.mp4", fourcc, fps, size)

with torch.no_grad():
    success, frame = videoCapture.read()
    while success:
        frame_copy = copy.deepcopy(frame)
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_BGR2RGB)  # BGR -> RGB, the key fix
        image_tensor = TRANSFORM_IMG(frame_copy)
        image_tensor = image_tensor.unsqueeze(0)
        test_input = image_tensor.to(device)
        outputs = model(test_input)
        _, predicted = torch.max(outputs, 1)
        probability = F.softmax(outputs, dim=1)
        top_probability, top_class = probability.topk(1, dim=1)
        predicted = predicted.cpu().detach().numpy()
        predicted = predicted.tolist()[0]
        label = CLASSES[predicted]
        top_probability = top_probability.cpu().detach().numpy()
        top_probability = top_probability.tolist()[0][0]
        top_probability = '%.2f%%' % (top_probability * 100)
        print(top_probability)
        print(label)
        frame = cv2.putText(frame, label+': '+top_probability, (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 255), 2)
        videoWriter.write(frame)
        success, frame = videoCapture.read()

videoWriter.release()
https://stackoverflow.com/questions/71614072/
Replace every element in each channel if the channel has any value greater than 0 without looping
I have a batch tensor of size (4, 100, 56, 56), where some channels have certain values in them and some have all zeros. I want every element of a channel to be set to 100 if that channel has any value greater than 0, whereas if the channel is all zeros, every element should be set to 1. Any idea how to achieve this without looping?

t = torch.zeros((4, 100, 56, 56))
t[:, 5, 15:20, 15:20] = 0.07

new_t = torch.ones((4, 100, 56, 56))
for b in range(t.size(0)):
    for c in range(t.size(1)):
        if t[b, c, :, :].max() > 0:
            new_t[b, c, :, :] = 100

My code above is inefficient for large batches and channels, and it creates memory overhead due to new_t. Is there a way to use view() or similar functions to achieve this?
You can perform the following: mask = torch.any(t.flatten(2, 3) > 0., dim=2) t[mask] = 100. # or t[mask] *= 100. for differentiability
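Note that this covers the "set to 100" half of the requirement; to also turn the all-zero channels into ones, as the question asks, the inverted mask can be used on the same tensor (a sketch building on the answer's code):

import torch

t = torch.zeros((4, 100, 56, 56))
t[:, 5, 15:20, 15:20] = 0.07

# True for every (batch, channel) pair whose channel contains a positive value
mask = torch.any(t.flatten(2, 3) > 0., dim=2)

t[mask] = 100.   # channels with any positive value -> all 100
t[~mask] = 1.    # all-zero channels -> all 1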
https://stackoverflow.com/questions/71614672/
Couldn't convert MobileNet V2 PyTorch to mlmodel using CoreML tools
I want to convert PyTorch MobileNet V2 pre-trained model to .mlmodel using coremltools. here is my code: import torchvision import torch import coremltools as ct # Load a pre-trained version of MobileNetV2 torch_model = torchvision.models.mobilenet_v2(pretrained=True) # Set the model in evaluation mode. torch_model.eval() # Trace the model with random data. example_input = torch.rand(1, 3, 224, 224) traced_model = torch.jit.trace(torch_model, example_input) out = traced_model(example_input) # Using image_input in the inputs parameter: # Convert to Core ML using the Unified Conversion API. model = ct.convert( traced_model, inputs=[ct.TensorType(shape=example_input.shape)] ) # Save the converted model. model.save("mobilenet_v2.mlmodel") It worked well on Google Colab, but when I run it my local machine (MacBook), I've got the following error Converting Frontend ==> MIL Ops: 100%|▉| 390/391 [00:00<00:00, 647.56 Running MIL Common passes: 0%| | 0/34 [00:00<?, ? passes/s]anaconda3/lib/python3.8/site-packages/coremltools/converters/mil/mil/passes/name_sanitization_utils.py:101: UserWarning: Input, 'input.1', of the source model, has been renamed to 'input_1' in the Core ML model. warnings.warn(msg.format(var.name, new_name)) anaconda3/lib/python3.8/site-packages/coremltools/converters/mil/mil/passes/name_sanitization_utils.py:129: UserWarning: Output, '830', of the source model, has been renamed to 'var_830' in the Core ML model. warnings.warn(msg.format(var.name, new_name)) Running MIL Common passes: 100%|█| 34/34 [00:00<00:00, 41.87 passes/s Running MIL Clean up passes: 100%|█| 9/9 [00:00<00:00, 80.15 passes/s Translating MIL ==> NeuralNetwork Ops: 100%|█| 495/495 [00:00<00:00, Traceback (most recent call last): File "convert_models.py", line 24, in <module> model = ct.convert( File "anaconda3/lib/python3.8/site-packages/coremltools/converters/_converters_entry.py", line 352, in convert mlmodel = mil_convert( File "anaconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 183, in mil_convert return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs) File "anaconda3/lib/python3.8/site-packages/coremltools/converters/mil/converter.py", line 231, in _mil_convert return modelClass(proto, File "anaconda3/lib/python3.8/site-packages/coremltools/models/model.py", line 346, in __init__ self.__proxy__, self._spec, self._framework_error = _get_proxy_and_spec( File "anaconda3/lib/python3.8/site-packages/coremltools/models/model.py", line 123, in _get_proxy_and_spec specification = _load_spec(filename) File "/anaconda3/lib/python3.8/site-packages/coremltools/models/utils.py", line 210, in load_spec raise Exception( Exception: Unable to load libmodelpackage. Cannot make save spec. I am using the following versions of libraries: torchvision: '0.9.0' torch: '1.8.0' coremltools: '5.2.0'
I solved this problem by updating macOS from 10.14.6 Mojave to 11.5.2 Big Sur. In the GitHub issue that I created, it was explained to me that version 5.2 of coremltools is not supported on my previous macOS version. So updating your macOS should make it work.
https://stackoverflow.com/questions/71614808/
Save PyTorch model for conversion to ONNX
I'm brand new to PyTorch (and Python). I've followed this guide, which trains a model and then saves the weights into a pth file: https://medium.com/@alexppppp/how-to-create-synthetic-dataset-for-computer-vision-keypoint-detection-78ba481cdafd

My understanding is that to convert a model to ONNX, you need to save the entire thing and not just the weights. I think the relevant code is this:

for epoch in range(num_epochs):
    train_one_epoch(model, optimizer, data_loader_train, device, epoch, print_freq=1000)
    lr_scheduler.step()
    evaluate(model, data_loader_test, device)

# Save model weights after training
torch.save(model.state_dict(), 'keypointsrcnn_weights.pth')

Is there a simple way to save the "entire" model rather than just the weights? I've seen this in the docs, but it looks like it would need to go within the epoch loop rather than after it's trained:

torch.save({
    'epoch': epoch,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'loss': loss,
    ...
}, PATH)

Please forgive my total lack of understanding. My intention is to try to convert a PyTorch model to ONNX.
Use torch.onnx.export. It should look something like:

import torch
from torchvision import models

arch = models.alexnet()
pic_x = 227
dummy_input = torch.zeros((1, 3, pic_x, pic_x))
torch.onnx.export(arch, dummy_input, "alexnet.onnx", verbose=True, export_params=True)

graph(%input.1 : Float(1, 3, 227, 227, strides=[154587, 51529, 227, 1], requires_grad=0, device=cpu), %features.0.weight : Float(64, 3, 11, 11, strides=[363, 121, 11, 1], requires_grad=1, device=cpu), %features.0.bias : Float(64, strides=[1], requires_grad=1, device=cpu), %features.3.weight : Float(192, 64, 5, 5, strides=[1600, 25, 5, 1], requires_grad=1, device=cpu), %features.3.bias : Float(192, strides=[1], requires_grad=1, device=cpu), ... %classifier.6.weight : Float(1000, 4096, strides=[4096, 1], requires_grad=1, device=cpu), %classifier.6.bias : Float(1000, strides=[1], requires_grad=1, device=cpu)): %17 : Float(1, 64, 56, 56, strides=[200704, 3136, 56, 1], requires_grad=1, device=cpu) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[11, 11], pads=[2, 2, 2, 2], strides=[4, 4]](%input.1, %features.0.weight, %features.0.bias) # c:\python39\lib\site-packages\torch\nn\modules\conv.py:442:0 %18 : Float(1, 64, 56, 56, strides=[200704, 3136, 56, 1], requires_grad=1, device=cpu) = onnx::Relu(%17) # c:\python39\lib\site-packages\torch\nn\functional.py:1297:0 ...
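To sanity-check the exported file afterwards (a sketch; it assumes the onnx Python package is installed):

import onnx

onnx_model = onnx.load("alexnet.onnx")
onnx.checker.check_model(onnx_model)  # raises an exception if the model is malformed
print(onnx.helper.printable_graph(onnx_model.graph))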
https://stackoverflow.com/questions/71615771/
Multiply a [3, 2, 3] by a [3, 2] tensor in pytorch (dot product along dimension)
Given the following tensors x and y with shapes [3,2,3] and [3,2], I want to multiply the tensors along the 2nd dimension; this is expected to be a kind of dot product and scaling along the axis, returning a [3,2,3] tensor.

import torch

a = [[[0.2,0.3,0.5],[-0.5,0.02,1.0]],[[0.01,0.13,0.06],[0.35,0.12,0.0]], [[1.0,-0.3,1.0],[1.0,0.02, 0.03]] ]
b = [[1,2],[1,3],[0,2]]

x = torch.FloatTensor(a) # shape [3,2,3]
y = torch.FloatTensor(b) # shape [3,2]

The expected output:

# Expected output shape should be [3,2,3]
#output = [[[0.2,0.3,0.5],[-1.0,0.04,2.0]],[[0.01,0.13,0.06],[1.05,0.36,0.0]], [[0.0,0.0,0.0],[2.0,0.04, 0.06]] ]

I have tried the two below, but neither of them gives the desired output and output shape:

torch.matmul(x,y)
torch.matmul(x,y.unsqueeze(1).shape)

What is the best way to fix this?
This is just a broadcasted multiply. So you can insert a unitary dimension on the end of y to make it a [3,2,1] tensor and then multiply by x. There are multiple ways to insert unitary dimensions.

# all equivalent
x * y.unsqueeze(2)
x * y[..., None]
x * y[:, :, None]
x * y.reshape(3, 2, 1)

You could also use torch.einsum.

torch.einsum('abc,ab->abc', x, y)
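A quick check with the tensors from the question, confirming the shape and that both forms agree:

import torch

a = [[[0.2, 0.3, 0.5], [-0.5, 0.02, 1.0]],
     [[0.01, 0.13, 0.06], [0.35, 0.12, 0.0]],
     [[1.0, -0.3, 1.0], [1.0, 0.02, 0.03]]]
b = [[1, 2], [1, 3], [0, 2]]
x = torch.FloatTensor(a)  # [3, 2, 3]
y = torch.FloatTensor(b)  # [3, 2]

out = x * y[..., None]
print(out.shape)  # torch.Size([3, 2, 3])
print(torch.allclose(out, torch.einsum('abc,ab->abc', x, y)))  # True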
https://stackoverflow.com/questions/71618611/
Simple Neural Network in Pytorch with 3 inputs (Numerical Values)
I'm having a hard time setting up a neural network; most of the examples are for images. My problem has 3 inputs, each of size N x M, where N is the number of samples and M is the number of features. I have a separate file (CSV) with a 1 x N binary target (0,1). The network I'm trying to configure should have two hidden layers with 100 and 50 neurons, respectively, a sigmoid activation function, and cross-entropy to check performance. The result should just be a single probability output. Please help?

EDIT:

import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import torch.nn.functional as F
#from torch.autograd import Variable
import pandas as pd

# Import Data
Input1 = pd.read_csv(r'...')
Input2 = pd.read_csv(r'...')
Input3 = pd.read_csv(r'...')
Target = pd.read_csv(r'...')

# Convert to Tensor
Input1_tensor = torch.tensor(Input1.to_numpy()).float()
Input2_tensor = torch.tensor(Input2.to_numpy()).float()
Input3_tensor = torch.tensor(Input3.to_numpy()).float()
Target_tensor = torch.tensor(Target.to_numpy()).float()

# Transpose to have signal as columns instead of rows
input1 = Input1_tensor
input2 = Input2_tensor
input3 = Input3_tensor
y = Target_tensor

# Define the model
class Net(nn.Module):
    def __init__(self, num_inputs, hidden1_size, hidden2_size, num_classes):
        # Initialize super class
        super(Net, self).__init__()
        #self.criterion = nn.CrossEntropyLoss()
        # Add hidden layer
        self.layer1 = nn.Linear(num_inputs, hidden1_size)
        # Activation
        self.sigmoid = torch.nn.Sigmoid()
        # Add output layer
        self.layer2 = nn.Linear(hidden1_size, hidden2_size)
        # Activation
        self.sigmoid2 = torch.nn.Sigmoid()
        self.layer3 = nn.Linear(hidden2_size, num_classes)

    def forward(self, x1, x2, x3):
        # implement the forward pass
        in1 = self.layer1(x1)
        in2 = self.layer1(x2)
        in3 = self.layer1(x3)
        xyz = torch.cat((in1, in2, in3), 1)
        return xyz

# Define loss function
loss_function = nn.CrossEntropyLoss()

# Define optimizer
optimizer = optim.SGD(model.parameters(), lr=1e-4)

for t in range(num_epochs):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(input1, input2, input3)

    # Compute and print loss
    loss = loss_function(y_pred, y)
    print(t, loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    # Calculate gradient using backward pass
    loss.backward()
    # Update model parameters (weights)
    optimizer.step()

Here I am getting the error "RuntimeError: 0D or 1D target tensor expected, multi-target not supported" for the line "loss = loss_function(y_pred, y)", where y_pred is [20000,375] and y is [20000,1].
You can refer to PyTorch, a Python library for deep learning and neural networks, and you can use code like the network definition below:

import torch
from torch import nn

class network(nn.Module):
    def __init__(self, M):
        # M is the dimension of the input features
        super(network, self).__init__()
        self.layer1 = nn.Linear(M, 100)
        self.layer2 = nn.Linear(100, 50)
        self.out = nn.Linear(50, 1)

    def forward(self, x):
        x = torch.sigmoid(self.layer1(x))
        x = torch.sigmoid(self.layer2(x))
        return torch.sigmoid(self.out(x))

You can then refer to the PyTorch documentation and finish the rest of the training code.

Edit: As for the RuntimeError, you can squeeze the target tensor with y.squeeze(). This will remove redundant dimensions in your tensor, e.g. [20000,1] -> [20000].
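Since the network above ends in a single sigmoid probability, a binary cross-entropy criterion is the natural fit for training it; note that nn.CrossEntropyLoss would instead expect one raw logit per class. A sketch with dummy data (the feature count M=20 is just an example):

import torch
from torch import nn

model = network(M=20)                    # 20 features per sample, as an example
criterion = nn.BCELoss()                 # binary cross-entropy on probabilities
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(8, 20)                   # batch of 8 samples
y = torch.randint(0, 2, (8, 1)).float()  # binary targets in {0, 1}

y_pred = model(x)                        # (8, 1) probabilities
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()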
https://stackoverflow.com/questions/71625580/
Embedding Layer - torch.nn.Embedding in Pytorch
I'm quite new to NNs, and sorry if my question is quite dumb. I was just reading code on GitHub and found that the pros use embeddings (in that case not a word embedding), but may I please just ask in general:

Does the embedding layer have trainable variables that learn over time so as to improve the embedding? May you please provide some intuition on it, and in what circumstances to use it (for example, would house-price regression benefit from it)? If so (that it learns), what is the difference from just using linear layers?

>>> embedding = nn.Embedding(10, 3)
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> input
tensor([[1, 2, 4, 5],
        [4, 3, 2, 9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902,  0.7172],
         [-0.6431,  0.0748,  0.6969],
         [ 1.4970,  1.3448, -0.9685],
         [-0.3677, -2.7265, -0.1685]],

        [[ 1.4970,  1.3448, -0.9685],
         [ 0.4362, -0.4004,  0.9400],
         [-0.6431,  0.0748,  0.6969],
         [ 0.9124, -2.3616,  1.1151]]])
In short, the embedding layer has learnable parameters, and the usefulness of the layer depends on what inductive bias you want on the data.

"Does the embedding layer have trainable variables that learn over time so as to improve the embedding?"

Yes, as stated in the docs under the Variables section, it has an embedding weight that is altered during the training process.

"May you please provide some intuition on it, and in what circumstances to use it (for example, would house-price regression benefit from it)?"

An embedding layer is commonly used in NLP tasks where the input is tokenized. This means that the input is discrete in a sense and can be used for indexing the weight (which is basically what the embedding layer is in forward mode). This discrete attribution implies that inputs like 1, 2, 42 are entirely different (until the semantic correlation has been learnt). House-price regression has a continuous input space, and values such as 1.0 and 1.1 might be more correlated than the values 1.0 and 42.0. This kind of assumption about the hypothesis space is called an inductive bias, and pretty much every machine learning architecture conforms to some sort of inductive bias. I believe it is possible to use embedding layers for regression problems, which would require some kind of discretization, but it would not benefit from it.

"If so (that it learns), what is the difference from just using linear layers?"

There is a big difference: the linear layer performs matrix multiplication with the weight, as opposed to using it as a lookup table. During backpropagation for the embedding layer, the gradients will only propagate through the corresponding indices used in the lookup, and duplicate indices are accumulated.
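A small sketch that makes both points concrete: the weight is trainable, and only looked-up rows receive gradients.

import torch
from torch import nn

embedding = nn.Embedding(10, 3)
print(embedding.weight.requires_grad)  # True -> the weight is a learnable parameter

out = embedding(torch.LongTensor([1, 2, 2]))
out.sum().backward()

print(embedding.weight.grad[0])  # zeros: index 0 was never looked up
print(embedding.weight.grad[2])  # twos: index 2 was looked up twice (accumulated)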
https://stackoverflow.com/questions/71625755/
How to alternatively concatenate pytorch tensors?
PyTorch provides APIs to concatenate tensors, like cat and stack. But does it provide any API to concatenate tensors in an alternating (interleaved) fashion? For example, suppose input1.shape = C*H*W, a1.shape = H*W, and output.shape = (3C)*H*W. This can be achieved using a loop, but I am wondering if any PyTorch API can do this.
I will try to do it with a small example:

input1 = torch.full((3, 3), 1)
input2 = torch.full((3, 3), 2)
input3 = torch.full((3, 3), 3)

out = torch.concat((input1, input2, input3)).T.flatten()
torch.stack(torch.split(out, 3), dim=1).reshape(3, -1)

#output
tensor([[1, 2, 3, 1, 2, 3, 1, 2, 3],
        [1, 2, 3, 1, 2, 3, 1, 2, 3],
        [1, 2, 3, 1, 2, 3, 1, 2, 3]])
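For what it's worth, the same interleaving can arguably be written in one step by stacking along a new trailing dimension and flattening it back; a sketch using the same toy inputs:

out = torch.stack((input1, input2, input3), dim=2).reshape(3, -1)
# tensor([[1, 2, 3, 1, 2, 3, 1, 2, 3],
#         [1, 2, 3, 1, 2, 3, 1, 2, 3],
#         [1, 2, 3, 1, 2, 3, 1, 2, 3]])

Stacking on dim=2 places the three source values for each position next to each other, so the reshape reads them out already interleaved.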
https://stackoverflow.com/questions/71628542/
Gradio - Pytorch MNIST Digit Recognizer
I watched the following video on YouTube https://www.youtube.com/watch?v=jx9iyQZhSwI where it was shown that it is possible to use Gradio with a model trained on the MNIST dataset in TensorFlow. I have read that it is also possible to use PyTorch with Gradio, but I have problems with the implementation. Does anyone have an idea how to do this? My PyTorch code for the CNN:

import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(
                in_channels=1,
                out_channels=16,
                kernel_size=5,
                stride=1,
                padding=2,
            ),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(16, 32, 5, 1, 2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # fully connected layer, output 10 classes
        self.out = nn.Linear(32 * 7 * 7, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        # flatten the output of conv2 to (batch_size, 32 * 7 * 7)
        x = x.view(x.size(0), -1)
        output = self.out(x)
        return output, x  # return x for visualization

From watching the video, I find that I need to change the function that Gradio uses:

def predict_image(img):
    img_3d = img.reshape(-1, 28, 28)
    im_resize = img_3d / 255.0
    prediction = CNN(im_resize)
    pred = np.argmax(prediction)
    return pred
I'm sorry if I got your question wrong, but from what I understand you are getting an error when trying to predict the digit using your predict_image function. So here are two possible hints. Maybe you have implemented them already, but I can't tell from the very small code snippet.

First of all: have you set your model into evaluation mode using CNN.eval()? Do this after you have finished training your model and want to evaluate inputs without training it.

Second of all, maybe you need to add a fourth dimension to your input tensor im_resize. Normally your model expects a dimension for the batch size, the number of channels, the height, and the width of your input. In addition, I cannot tell if your input is of the datatype torch.Tensor; if not, transform your array into a tensor first. You can add a batch dimension to your input tensor by using

im_resize = im_resize.unsqueeze(0)

I hope that I understood your question correctly and was able to help you.
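Putting those hints together, a predict function could look roughly like the sketch below. Note that model is assumed to be an already-trained instance of the CNN class (the original snippet calls the class CNN itself, which is a bug), and the 28x28 input shape is taken from the question:

import numpy as np
import torch

model = CNN()   # assumed to be trained / loaded elsewhere
model.eval()    # switch off training-specific behaviour

def predict_image(img):
    # img: a 28x28 numpy array coming from the Gradio sketchpad
    x = torch.from_numpy(img.reshape(1, 1, 28, 28)).float() / 255.0
    with torch.no_grad():
        output, _ = model(x)         # this CNN returns (logits, features)
    return int(output.argmax(dim=1))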
https://stackoverflow.com/questions/71629683/
Syntax for making objects callable in python
I understand that in python user-defined objects can be made callable by defining a __call__() method in the class definition. For example, class MyClass: def __init__(self): pass def __call__(self, input1): self.my_function(input1) def my_function(self, input1): print(f"MyClass - print {input1}") my_obj = MyClass() # same as calling my_obj.my_function("haha") my_obj("haha") # prints "MyClass - print haha" I was looking at how pytorch makes the forward() method of a nn.Module object be called implicitly when the object is called and saw some syntax I didn't understand. In the line that supposedly defines the __call__ method the syntax used is, __call__ : Callable[..., Any] = _call_impl This seemed like a combination of an annotation (keyword Callable[ following : ignored by python) and a value of _call_impl which we want to be called when __call__ is invoked, and my guess is that this is a shorthand for, def __call__(self, *args, **kwargs): return self._call_impl(*args, **kwargs) but wanted to understand clearly how this method of defining functions worked. My question is: When would we want to use such a definition of callable attributes of a class instead of the usual def myfunc(self, *args, **kwargs)
Functions are normal first-class objects in Python. The name with which you define a function object, e.g. with a def statement, is not set in stone, any more than it would be for an int or list. Just as you can do

a = [1, 2, 3]
b = a

to access the elements of a through the name b, you can do the same with functions. In your first example, you could replace

def __call__(self, input1):
    self.my_function(input1)

with the much simpler

__call__ = my_function

You would need to put this line after the definition of my_function. The key difference between the two implementations is that def __call__(... creates a new function, while __call__ = ... simply binds the name __call__ to the same object as my_function. The noticeable difference is that if you check __call__.__name__, the first version will show __call__, while the second will show my_function, since that's what gets assigned by a def statement.
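A short self-contained sketch of that last point; the two variants behave the same when called, but report different names:

class A:
    def my_function(self):
        print("hello")
    __call__ = my_function          # bind the name; no new function created

class B:
    def my_function(self):
        print("hello")
    def __call__(self):             # a brand-new function that delegates
        return self.my_function()

print(A.__call__.__name__)  # my_function
print(B.__call__.__name__)  # __call__
A()()                       # hello
B()()                       # hello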
https://stackoverflow.com/questions/71630563/
send torch tensor or image via localhost as JSON to another app
I have a basic flask server with a Generator model loaded. I'm sending an input vector via JSON which is hitting the Generator, which spits out a prediction. This works. I want to then send this image (I would settle for sending it as any kind of data I can reconstruct on the other end) to another application running on the same machine. From what I gather, it might be best to encode the image as base64, but all of my attempts have failed. Any guidance is appreciated. @app.route("/json", methods=['GET', 'POST', 'PUT']) def getjsondata(): if request.method=='POST': print("received POST") data = request.get_json() #print(format(data['z'])) jzf = [float(i) for i in data['z']] jzft = torch.FloatTensor(jzf) jzftr = jzft.reshape([1, 512]) z = jzftr.cuda() c = None # class labels (not used in this example) trunc = 1 img = G(z, c, trunc) img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8) # now what?
I managed to work out a solution. The Python Flask server looks like this and expects to receive a JSON object containing a z array of 512 floats and a single float truncation value.

from flask import Flask, jsonify, request, send_file # initialize our Flask application
import json
from flask_cors import CORS
import base64
from torchvision import transforms
import dnnlib
import torch
import PIL.Image
from io import BytesIO
from datetime import datetime
import legacy

device = torch.device('cuda')
with dnnlib.util.open_url("snapshot.pkl") as f:
    G = legacy.load_network_pkl(f)['G_ema'].to(device) # type: ignore

app = Flask(__name__)
CORS(app)

@app.route("/test", methods=['GET', 'POST', 'PUT'])
def test():
    return "OK"

@app.route("/query", methods=['GET', 'POST', 'PUT'])
def getjsondata():
    if request.method=='POST':
        # print("received POST")
        data = request.get_json()
        #print(format(data['z']))
        jzf = [float(i) for i in data['z']]
        jzft = torch.FloatTensor(jzf)
        jzftr = jzft.reshape([1, 512])
        z = jzftr.cuda()
        c = None # class labels (not used in this example)
        trunc = data['truncation']
        img = G(z, c, trunc)
        #img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8)
        img = (img * 127.5 + 128).clamp(0, 255).to(torch.uint8)
        # turn into PIL image
        pil_img = transforms.ToPILImage()(img[0]).convert("RGB")
        #pil_img = PIL.Image.fromarray(img[0].cpu().numpy(), 'RGB')

        # SAVING...
        #fn = datetime.today().strftime('%Y-%m-%d-%H:%M:%S')
        #pil_img.save('saved_images/' + fn + '.jpg')

        response = serve_pil_image64(pil_img)
        response.headers.add('Access-Control-Allow-Origin', '*')
        # response.headers.add('Content-Transfer-Encoding', 'base64')
        return response
    return 'OK'

def serve_pil_image64(pil_img):
    img_io = BytesIO()
    pil_img.save(img_io, 'JPEG', quality=70)
    img_str = base64.b64encode(img_io.getvalue()).decode("utf-8")
    return jsonify({'status': True, 'image': img_str})

if __name__ == '__main__':
    app.run(host='localhost', port=9000, debug=True)

As it stands, I am sending said JSON array from a simple JavaScript/HTML site. It then listens for the response, also JSON.

// construct an HTTP request
var xhr = new XMLHttpRequest();

// upon successful completion of request...
xhr.onreadystatechange = function() {
    if (xhr.readyState == XMLHttpRequest.DONE) {
        var json = JSON.parse(xhr.responseText);
        // console.log(json);
        document.getElementById("image_output").src = "data:image/jpeg;base64," + json.image;
    }
}

xhr.open("POST", "http://localhost:9000/query"); // must match the Flask route above
xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
https://stackoverflow.com/questions/71639417/
How to rearrange the sample order of a torch dataloader?
I have a "torch.utils.data.DataLoader". I want to rearrange the order of the samples. Is it possible?
Yes, you can use torch.utils.data.Subset and specify the indices. import numpy as np import torch from torch.utils.data import DataLoader, Subset, TensorDataset data = np.arange(5) ** 2 dataset = TensorDataset(torch.tensor(data)) # Subset with entire Dataset in rearranged order dataset_ordered = Subset(dataset, indices=[2, 1, 3, 4, 0]) for x in DataLoader(dataset_ordered): print(x) # [tensor([4])] # [tensor([1])] # [tensor([9])] # [tensor([16])] # [tensor([0])]
https://stackoverflow.com/questions/71640642/
RuntimeError: Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cpu"
Earlier I configured the following project https://github.com/zllrunning/face-makeup.PyTorch using PyTorch with CUDA 10.2. Now PyTorch with CUDA 10.2 support is no longer available for Windows. So, when I configure the same project using PyTorch with CUDA 11.3, I get the following error:

RuntimeError: Attempted to set the storage of a tensor on device "cuda:0" to a storage on different device "cpu". This is no longer allowed; the devices must match.

Please help me solve this problem.
I solved this by adding map_location=lambda storage, loc: storage.cuda() to the model_zoo.load_url call. I think in torch 1.12 they changed the default load location from GPU to CPU (which does not make any sense).

Edit: In the file resnet.py, under the function def init_weight(self):, the line

state_dict = modelzoo.load_url(resnet18_url)

should be changed to

state_dict = modelzoo.load_url(resnet18_url, map_location=lambda storage, loc: storage.cuda())
https://stackoverflow.com/questions/71643035/
Compute metrics/loss every n batches Pytorch Lightning
I'm trying to use PyTorch Lightning, but I don't have all the steps clear yet. Anyway, I'm trying to calculate the train_loss (for example) not only for each step (= batch) but every n batches (e.g. 500), and I'm not sure how to do it (compute, reset, etc.). I tried this approach, but it is not working. Can you help me? Thanks.

def training_step(self, batch: tuple, batch_nb: int, *args, **kwargs) -> dict:
    """
    Runs one training step. This usually consists in the forward function followed
    by the loss function.

    :param batch: The output of your dataloader.
    :param batch_nb: Integer displaying which batch this is

    Returns:
        - dictionary containing the loss and the metrics to be added to the lightning logger.
    """
    inputs, targets = batch
    model_out = self.forward(**inputs)
    loss_val = self.loss(model_out, targets)
    y = targets["labels"]
    y_hat = model_out["logits"]
    labels_hat = torch.argmax(y_hat, dim=1)
    val_acc = self.metric_acc(labels_hat, y)

    tqdm_dict = {"train_loss": loss_val, 'batch_nb': batch_nb}
    self.log('train_loss', loss_val, on_step=True, on_epoch=True, prog_bar=True)
    self.log('train_acc', val_acc, on_step=True, prog_bar=True, on_epoch=True)

    # reset the metric to restart accumulating
    self.loss_val_bn = self.loss(model_out, targets) #accumulate state
    if batch_nb % 500 == 0:
        self.log("x batches test loss_train", self.loss_val_bn.compute(), batch_nb) # perform a compute every 10 batches
        self.loss_val_bn.reset()

    #output = OrderedDict(
    #{"loss": loss_val, "progress_bar": tqdm_dict, "log": tqdm_dict})
    # can also return just a scalar instead of a dict (return loss_val)
    #return output
    return loss_val
Write your custom logger following (https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#make-a-custom-logger). The one I present here stores the values generated in every step in a dict where each metric name is a key. class History_dict(LightningLoggerBase): def __init__(self): super().__init__() self.history = collections.defaultdict(list) # copy not necessary here # The defaultdict in contrast will simply create any items that you try to access @property def name(self): return "Logger_custom_plot" @property def version(self): return "1.0" @property @rank_zero_experiment def experiment(self): # Return the experiment object associated with this logger. pass @rank_zero_only def log_metrics(self, metrics, step): # metrics is a dictionary of metric names and values # your code to record metrics goes here for metric_name, metric_value in metrics.items(): if metric_name != 'epoch': self.history[metric_name].append(metric_value) else: # case epoch. We want to avoid adding multiple times the same. It happens for multiple losses. if (not len(self.history['epoch']) or # len == 0: not self.history['epoch'][-1] == metric_value) : # the last values of epochs is not the one we are currently trying to add. self.history['epoch'].append(metric_value) else: pass return def log_hyperparams(self, params): pass Make the model reduce the stored metrics every n steps. I assume that taking the mean is how you want to reduce here. Empty the list after reducing. class MNISTModel(LightningModule): def __init__(self): super().__init__() self.l1 = torch.nn.Linear(28 * 28, 10) def forward(self, x): return torch.relu(self.l1(x.view(x.size(0), -1))) def training_step(self, batch, batch_nb): x, y = batch loss = F.cross_entropy(self(x), y) self.log('loss_epoch', loss, on_step=False, on_epoch=True) self.log('loss_step', loss, on_step=True, on_epoch=False) print(self.global_step) if batch_nb % 50 == 0 and self.global_step != 0: step_metrics = self.logger.history['loss_step'] # I am assuming that the reduction function you want over the saved step values are mean reduced = sum(step_metrics) / len(step_metrics) print(reduced) # Empty the loss list self.logger.history['loss_step'] = [] return loss def configure_optimizers(self): return torch.optim.Adam(self.parameters(), lr=0.02) Pass this custom logger to your trainer hd = History_dict() # Initialize a trainer trainer = Trainer( accelerator="auto", devices=1 if torch.cuda.is_available() else None, # limiting got iPython runs max_epochs=3, callbacks=[TQDMProgressBar(refresh_rate=20)], log_every_n_steps=10, logger=[hd], ) You should see the following being printed with this exact configuration: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 0.9309097290039062 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 0.9098988473415375 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 0.8920584758122762 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 0.8698084503412247 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 
227 228 ... 249 250
0.8622475385665893
251 252 ... 299 300
0.8433656434218089
301 302 ... 349 350
0.8188161773341043
...
https://stackoverflow.com/questions/71646596/
Adding Intel MKL and MKL-DNN in Docker
I have ML code (e.g. Numpy, Scipy, LightGBM, PyTorch) deployed with Docker. I am using Python with Poetry, installing packages with pip. What should I do in order to use MKL and MKL-DNN? I know that the most standard way is to use Anaconda, but I cannot (large business, without commercial Anaconda license). Will pip install mkl suffice? How to install MKL-DNN, so that PyTorch will use it?
Will pip install mkl suffice?

No, it will not; see this section in the NumPy install docs:

The NumPy wheels on PyPI, which is what pip installs, are built with OpenBLAS. The OpenBLAS libraries are included in the wheel. This makes the wheel larger, and if a user installs (for example) SciPy as well, they will now have two copies of OpenBLAS on disk.

So you will need to build NumPy from source.

I know that the most standard way is to use Anaconda, but I cannot (large business, without commercial Anaconda license).

Have you considered using miniforge and miniconda? IANAL, but I am quite certain that you are just not allowed to use the ana-/miniconda distributions and the anaconda channel in large-scale commercial products, but conda-forge can still be used free of charge. You should be able to set up all the requirements that you mentioned from conda-forge. At least you would probably have an easier time compiling PyTorch from source.
https://stackoverflow.com/questions/71649223/
matmul to every row in pytorch tensor
I currently have a tensor that looks like this (numbers randomly selected), which I will call x:

tensor([[ 1., -5.],
        [ 2., -4.],
        [ 3.,  2.],
        [ 4.,  1.],
        [ 5.,  2.]])

I also have another 2D tensor (call it i):

tensor([[-1.,  1.],
        [ 1., -1.]], requires_grad=True)

I want to torch.matmul i with each row in x. Is there a way for me to achieve this? Below is my attempt:

apply_i = lambda x: torch.matmul(x, i)
final = torch.tensor([apply_i(a) for a in x])

It throws an error saying "only one element tensors can be converted to Python scalars". It does not work even when I remove the square brackets. Any help would be appreciated!
import torch

x = torch.tensor([[ 1., -5.],
                  [ 2., -4.],
                  [ 3.,  2.],
                  [ 4.,  1.],
                  [ 5.,  2.]])

Change your code to:

i = torch.tensor([[-1.,  1.],
                  [ 1., -1.]], requires_grad=True)

apply_i = lambda x: torch.matmul(x, i)
# final = torch.tensor([apply_i(a) for a in x])  # this line caused the error
final = [apply_i(a) for a in x]
final = torch.stack(final)
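Worth noting as a side observation: for these shapes the loop is arguably unnecessary, because torch.matmul applied to the whole 2-D tensor already multiplies every row of x with i in one call:

final_vectorized = torch.matmul(x, i)           # same as x @ i
print(torch.allclose(final_vectorized, final))  # True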
https://stackoverflow.com/questions/71652014/
Pytorch DataLoader is not dividing the dataset into batches
I am trying to load training data into the DataLoader with the following code:

class Dataset(Dataset):
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __getitem__(self, index):
        x = torch.Tensor(self.x[index])
        y = torch.Tensor(self.y[index])
        return (x, y)

    def __len__(self):
        count = self.x.shape[0]
        return count

X_train = np.reshape(X_train, (-1, 1, X_train.shape[0], X_train.shape[1]))
y_train = np.reshape(y_train, (-1, 1, y_train.shape[0], y_train.shape[1]))

train_dataset = Dataset(X_train, y_train)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128, shuffle=True)

Now, when I check the length of the DataLoader, I get 1 every time. The loader is not splitting the dataset into batches. What am I doing wrong here?
After testing your code, it seems to work perfectly if you remove the reshape steps. You're introducing a new dimension, so the new shape of X_train is (1, something, something), but you're indexing your items using self.x[index], so you're always accessing the batch dimension. You make the same mistake when calculating the length of your dataset: it is always 1. Solution: do not reshape.

X_train = np.random.rand(12_000, 1280)
y_train = np.random.rand(12_000, 1)

train_dataset = Dataset(X_train, y_train)
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=128, shuffle=True)

for x, y in train_loader:
    print(x.shape)
    print(y.shape)
    break
https://stackoverflow.com/questions/71661473/
pytorch tensor from pandas columns of vectors
I want to convert a pandas column to a PyTorch tensor. Each cell of the column holds a 300-dim NumPy vector (an embedding). I have tried this:

torch.from_numpy(g_list[1]['sentence_vector'].to_numpy())

but it throws this error:

TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
If you have this dataframe, in which each cell is a vector of 2 numbers:

import torch
import pandas as pd

df = pd.DataFrame({'a': [[ 3,  29], [ 3, 29]],
                   'b': [[94, 170], [ 3, 29]],
                   'c': [[31, 115], [ 3, 29]]})

then to convert it to a PyTorch tensor you only need to convert the values of the dataframe to a list and then to a tensor:

t = torch.Tensor(list(df.values))

#output
tensor([[[  3.,  29.],
         [ 94., 170.],
         [ 31., 115.]],

        [[  3.,  29.],
         [  3.,  29.],
         [  3.,  29.]]])

The shape of t is [2, 3, 2]: 2 rows, 3 columns, and 2 elements inside each list.
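Closer to the original setup, where a single column holds equal-length NumPy vectors of dtype object, one common sketch is to stack the cell arrays into one contiguous array first. The dataframe below is made up; substitute your own column such as g_list[1]['sentence_vector']:

import numpy as np
import pandas as pd
import torch

df = pd.DataFrame({'sentence_vector': [np.random.rand(300) for _ in range(4)]})

stacked = np.stack(df['sentence_vector'].to_numpy())  # shape (4, 300), float64
t = torch.from_numpy(stacked).float()
print(t.shape)  # torch.Size([4, 300])

torch.from_numpy cannot handle the object-dtype array directly, which is what triggered the TypeError; np.stack produces a plain float array it can consume.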
https://stackoverflow.com/questions/71664538/
Plot derivatives of sin(x) using pytorch
I'm unsure why my code does not plot cos(x) (yes, I'm aware PyTorch has a cos(x) function):

import math
import os
import torch
import numpy as np
import matplotlib.pyplot as plt
import random

x = torch.linspace(-math.pi, math.pi, 5000, requires_grad=True)
y = torch.sin(x)
y.backward(x)

x.grad == torch.cos(x) # assert x.grad same as cos(x)

plt.plot(x.detach().numpy(), y.detach().numpy(), label='sin(x)')
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label='cos(x)') # print derivative of sin(x)
You need to feed the upstream gradient (equal to all ones in your case) instead of x as the input to y.backward(). Thus:

import math
import torch
import matplotlib.pyplot as plt

x = torch.linspace(-math.pi, math.pi, 5000, requires_grad=True)
y = torch.sin(x)
y.backward(torch.ones_like(x))

plt.plot(x.detach().numpy(), y.detach().numpy(), label='sin(x)')
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label='cos(x)') # print derivative of sin(x)
plt.show()
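An equivalent route, for what it's worth, is to ask autograd for the derivative of the summed output directly; a small self-contained sketch:

import math
import torch

x = torch.linspace(-math.pi, math.pi, 5000, requires_grad=True)
y = torch.sin(x)

dy_dx, = torch.autograd.grad(y.sum(), x)    # sum() makes the output a scalar
print(torch.allclose(dy_dx, torch.cos(x)))  # True

Summing is harmless here because each y[i] depends only on x[i], so the gradient of the sum recovers the elementwise derivative.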
https://stackoverflow.com/questions/71666770/
Prepare for Binary Masks used for the image segmentation
I am trying to prepare masks for image segmentation with PyTorch. I have three questions about data preparation.

1. What is the appropriate file format for saving a binary mask in general? PNG? JPEG?
2. Does the mask size need to be square, such as (224x224), or can it be a rectangle, such as (224x448)?
3. Are the mask values preserved when the size is converted from rectangle to square? For example, the original mask image size is (600x900) and binary [0, 1]. However, when I applied

import torchvision.transforms as transforms
transforms.Compose([
    transforms.Resize((300, 300)),
    transforms.ToTensor(),
])

to the mask, the output had other values (0.01, 0.0156, 0.22, ...) besides 0 and 1, since the mask was resized. I applied the code below to convert the mask back into binary: if the value is at most 0.3 it becomes 0, otherwise 1.

def __getitem__(self, idx):
    img, mask = self.load_data(idx)
    if self.img_transforms is not None:
        img = self.img_transforms(img)
    if self.mask_transforms is not None:
        mask = self.mask_transforms(mask)
        mask = torch.where(mask<=0.3, 0, 1)
    return img, mask

But I wonder whether this process is a common and efficient approach.
1. PNG, because it is lossless by design.
2. It depends. It is more convenient to use a standard resolution such as (224x224); I would start with that.
3. Resize without interpolating between mask values, i.e. with nearest-neighbour interpolation, so no new values are introduced:

transforms.Resize((300, 300), interpolation=InterpolationMode.NEAREST)

(InterpolationMode is imported from torchvision.transforms.)
https://stackoverflow.com/questions/71669952/
Is there a way to create a train dataset for multiple pytorch files in a folder?
I have a folder with the name processed_data. In this folder I have multiple .pt files, named data_0.pt, data_1.pt, data_2.pt, data_3.pt, data_4.pt, data_5.pt, ..., data_998.pt, data_999.pt, data_1000.pt, data_1001.pt. All these .pt files represent graphs created using pytorch-geometric. My question is: how do I load all these files to create my training dataset, so that I can use them in a DataLoader?
A torch DataLoader needs a Dataset object. When defining your Dataset class, you need to implement __init__, __len__, and __getitem__.

__init__ is easy, but also dependent on your exact use case/context. Assuming the simplest possible situation, I'd define init to take in the data folder and a file which contains the names of the training set files (one per line). Then I'd store each file name in a list as a member of the class. So we'd have:

def __init__(self, data_folder, data_list_filename):
    self.data_folder = data_folder
    with open(data_list_filename, 'r') as f:
        self.data_file_list = f.read().splitlines()

OK, now we have two things stored in your Dataset: 1) the data folder and 2) a list of data filenames. That makes __len__ especially easy:

def __len__(self):
    return len(self.data_file_list)

And lastly, we just need to deal with __getitem__:

def __getitem__(self, idx):
    filename = self.data_file_list[idx]
    data, label = extract_data_from_file(filename) # this is arbitrary because I don't know how you need to do this
    return data, label

Then put all of this together under a class:

class MyDataset(Dataset):
    def __init__(self, data_folder, data_list_filename):
        self.data_folder = data_folder
        with open(data_list_filename, 'r') as f:
            self.data_file_list = f.read().splitlines()

    def __len__(self):
        return len(self.data_file_list)

    def __getitem__(self, idx):
        filename = self.data_file_list[idx]
        data, label = extract_data_from_file(filename) # I don't know how you plan to do this
        return data, label

Obviously, your exact use will look different. But this should get you started.
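Applied to the .pt files from the question specifically, a minimal sketch of such a dataset could simply torch.load each graph on demand. The file-name pattern is taken from the question; everything else is an assumption, and for pytorch-geometric graphs you would typically batch with torch_geometric's own DataLoader rather than the plain torch one:

import os
import torch
from torch.utils.data import Dataset

class GraphFolderDataset(Dataset):
    def __init__(self, data_folder, num_files):
        self.data_folder = data_folder
        self.filenames = [f"data_{i}.pt" for i in range(num_files)]

    def __len__(self):
        return len(self.filenames)

    def __getitem__(self, idx):
        # each file is assumed to hold one saved graph object
        return torch.load(os.path.join(self.data_folder, self.filenames[idx]))

train_dataset = GraphFolderDataset("processed_data", 1002)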
https://stackoverflow.com/questions/71678709/
Is loss.backward() meant to be called on each sample or on each batch?
I have a training dataset which contains features of different sizes. I understand the implications of this in terms of network architecture and have designed my network accordingly to handle these heterogeneous shapes. When it comes to my training loop, though, I'm confused as to the order/placement of optimizer.zero_grad(), loss.backward(), and optimizer.step(). Because of the unequal feature sizes, I cannot do a forward pass on the features of a whole batch at once. So, my training loop loops through the samples of a batch manually, like this:

for epoch in range(NUM_EPOCHS):
    for bidx, batch in enumerate(train_loader):
        optimizer.zero_grad()
        batch_loss = 0
        for sample in batch:
            feature1 = sample['feature1']
            feature2 = sample['feature2']
            label1 = sample['label1']
            label2 = sample['label2']
            pred_l1, pred_l2 = model(feature1, feature2)
            sample_loss = compute_loss(label1, pred_l1)
            sample_loss += compute_loss(label2, pred_l2)
            sample_loss.backward() # CHOICE 1
            batch_loss += sample_loss.item()
        # batch_loss.backward() # CHOICE 2
        optimizer.step()

I'm wondering if it makes sense here that backward is called upon each sample_loss, with the optimizer step called every BATCH_SIZE samples (CHOICE 1). The alternative, I think, would be to call backward upon batch_loss (CHOICE 2), and I'm not so sure which is the right choice.
Differentiation is a linear operation, so in theory it should not matter whether you first differentiate the different losses and add their derivatives, or whether you first add the losses and then compute the derivative of their sum. So for practical purposes both of them should lead to the same results (disregarding the usual floating-point issues). You might get slightly different memory requirements and computation speeds (I'd guess the second version might be slightly faster), but that is hard to predict, and it is something you can easily find out by timing the two versions. One caveat with CHOICE 2 exactly as written: batch_loss accumulates sample_loss.item(), which is a plain Python float detached from the graph, so calling backward() on it would fail; the accumulator has to stay a tensor.
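To make that caveat concrete, here is a sketch of a working CHOICE 2 inner loop, reusing the names from the question and keeping the accumulated loss as a tensor:

optimizer.zero_grad()
batch_loss = 0.0
for sample in batch:
    pred_l1, pred_l2 = model(sample['feature1'], sample['feature2'])
    sample_loss = compute_loss(sample['label1'], pred_l1)
    sample_loss = sample_loss + compute_loss(sample['label2'], pred_l2)
    batch_loss = batch_loss + sample_loss  # tensor addition, stays in the graph

batch_loss.backward()  # one backward pass for the whole batch
optimizer.step()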
https://stackoverflow.com/questions/71681430/
Gradient of Image in PyTorch - for Gradient Penalty calculation in WGAN
I am following this GitHub repo for the WGAN implementation with gradient penalty. I am trying to understand the following method, which does the job of unit-testing the gradient-penalty calculations:

def test_gradient_penalty(image_shape):
    bad_gradient = torch.zeros(*image_shape)
    bad_gradient_penalty = gradient_penalty(bad_gradient)
    assert torch.isclose(bad_gradient_penalty, torch.tensor(1.))

    image_size = torch.prod(torch.Tensor(image_shape[1:]))
    good_gradient = torch.ones(*image_shape) / torch.sqrt(image_size)
    good_gradient_penalty = gradient_penalty(good_gradient)
    assert torch.isclose(good_gradient_penalty, torch.tensor(0.))

    random_gradient = test_get_gradient(image_shape)
    random_gradient_penalty = gradient_penalty(random_gradient)
    assert torch.abs(random_gradient_penalty - 1) < 0.1

# Now pass tuple argument for image dimension of
# (batch_size, channel, height, width)
test_gradient_penalty((256, 1, 28, 28))

I don't understand the line below:

good_gradient = torch.ones(*image_shape) / torch.sqrt(image_size)

In the above, torch.ones(*image_shape) just fills a 4-D tensor with 1s, and torch.sqrt(image_size) just represents the value tensor(28.). So, what I am trying to understand is why I need to divide the 4-D tensor by tensor(28.) to get the good_gradient.

If I print bad_gradient, it is a 4-D tensor as below:

tensor([[[[0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          ...,
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.],
          [0., 0., 0.,  ..., 0., 0., 0.]]],
--- ---

If I print good_gradient, the output is:

tensor([[[[0.0357, 0.0357, 0.0357,  ..., 0.0357, 0.0357, 0.0357],
          [0.0357, 0.0357, 0.0357,  ..., 0.0357, 0.0357, 0.0357],
          [0.0357, 0.0357, 0.0357,  ..., 0.0357, 0.0357, 0.0357],
          ...,
          [0.0357, 0.0357, 0.0357,  ..., 0.0357, 0.0357, 0.0357],
          [0.0357, 0.0357, 0.0357,  ..., 0.0357, 0.0357, 0.0357],
          [0.0357, 0.0357, 0.0357,  ..., 0.0357, 0.0357, 0.0357]]],
--- ---
For the line

good_gradient = torch.ones(*image_shape) / torch.sqrt(image_size)

First, note that the gradient-penalty term in WGAN is

(norm(gradient(interpolated)) - 1)^2

and for the ideal gradient (i.e. a good gradient) this penalty term would be 0. In other words, a good gradient is one whose gradient_penalty is as close to 0 as possible. This means the following should hold, considering the L2 norm of the gradient:

(norm(gradient(x')) - 1)^2 = 0
i.e. norm(gradient(x')) = 1
i.e. sqrt(Sum(gradient_i^2)) = 1

With image_shape = (256, 1, 28, 28), each image has 1 * 28 * 28 = 784 elements; if all of them equal some constant g, this condition becomes sqrt(784 * g^2) = 1, i.e. g = 1/sqrt(784) = 1/28. So if you continue simplifying the expression above (considering how the norm is calculated, see my note below), you end up with

good_gradient = torch.ones(*image_shape) / torch.sqrt(image_size)

Since you are passing image_shape as (256, 1, 28, 28), torch.sqrt(image_size) in your case is tensor(28.). Effectively the above line is dividing each element of a 4-D tensor like [[[[1., 1., ...]]]] by the scalar tensor(28.).

Separately, note how the norm is calculated: torch.norm without extra arguments performs what is called a Frobenius norm, which effectively reshapes the matrix into one long vector and returns the 2-norm of that. Given an M * N matrix, the Frobenius norm is defined as the square root of the sum of the squares of the elements of the matrix:

Input: mat[][] = [[1, 2], [3, 4]]
Output: 5.47723
sqrt(1^2 + 2^2 + 3^2 + 4^2) = sqrt(30) = 5.47723
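This is easy to verify numerically; a quick check using the shapes from the question shows the per-image L2 norm of good_gradient is exactly 1:

import torch

image_shape = (256, 1, 28, 28)
image_size = torch.prod(torch.Tensor(image_shape[1:]))      # tensor(784.)
good_gradient = torch.ones(*image_shape) / torch.sqrt(image_size)

norms = good_gradient.view(image_shape[0], -1).norm(dim=1)  # one norm per image
print(torch.allclose(norms, torch.ones(image_shape[0])))    # True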
https://stackoverflow.com/questions/71685494/
How to solve the pytorch RuntimeError: Numpy is not available without upgrading numpy to the latest version because of other dependencies
I am running a simple CNN using Pytorch for some audio classification on my Raspberry Pi 4 on Python 3.9.2 (64-bit). For the audio manipulation needed I am using librosa. librosa depends on the numba package which is only compatible with numpy version <= 1.20. When running my code, the line spect_tensor = torch.from_numpy(spect).double() throws the RuntimeError: RuntimeError: Numpy is not available Searching the internet for solutions I found upgrading Numpy to the latest version to resolve that specific error, but throwing another error, because Numba only works with Numpy <= 1.20. Is there a solution to this problem which does not include searching for an alternative to using librosa?
Just wanted to give an update on my situation. I downgraded torch to version 0.9.1, which solved the original issue. Now OpenBLAS is throwing a warning because of an OpenMP loop, but for now my code is up and running.
https://stackoverflow.com/questions/71689095/
speeding up 1d convolution in PyTorch
For my project I am using pytorch as a linear algebra backend. For the performance part of my code, I need to do 1D convolutions of 2 small (length between 2 and 9) vectors (1D tensors) a very large number of times. My code allows for batch-processing of inputs, and thus I can stack a couple of input vectors to create matrices that can then all be convolved at the same time. Since torch.conv1d does not allow for convolving along a single dimension for 2D inputs, I had to write my own convolution function called convolve. This new function, however, consists of a double for-loop and is therefore very, very slow.

Question: how can I make the convolve function perform faster through better code design, while keeping it able to deal with batched inputs (= 2D tensors)?

Partial answer: somehow avoid the double for-loop.

Below are three Jupyter notebook cells that recreate a minimal example. Note that you need line_profiler and the %%writefile magic command to make this work!

%%writefile SO_CONVOLVE_QUESTION.py
import torch

def conv1d(a, v):
    padding = v.shape[-1] - 1
    return torch.conv1d(
        input=a.view(1, 1, -1), weight=v.flip(0).view(1, 1, -1), padding=padding, stride=1
    ).squeeze()

def convolve(a, v):
    if a.ndim == 1:
        a = a.view(1, -1)
        v = v.view(1, -1)

    nrows, vcols = v.shape
    acols = a.shape[1]

    expanded = a.view((nrows, acols, 1)) * v.view((nrows, 1, vcols))
    noutdim = vcols + acols - 1
    out = torch.zeros((nrows, noutdim))
    for i in range(acols):
        for j in range(vcols):
            out[:, i+j] += expanded[:, i, j]
    return out.squeeze()

x = torch.randn(5)
y = torch.randn(7)

I write the code to SO_CONVOLVE_QUESTION.py because that is necessary for line_profiler and for use as a setup for timeit.timeit. Now we can evaluate the output and performance of the code above on non-batch input (x, y) and batched input (x_batch, y_batch):

from SO_CONVOLVE_QUESTION import *

# Without batch processing
res1 = conv1d(x, y)
res = convolve(x, y)
print(torch.allclose(res1, res)) # True

# With batch processing, NB first dimension!
x_batch = torch.randn(5, 5)
y_batch = torch.randn(5, 7)

results = []
for i in range(5):
    results.append(conv1d(x_batch[i, :], y_batch[i, :]))
res1 = torch.stack(results)
res = convolve(x_batch, y_batch)
print(torch.allclose(res1, res)) # True

print(timeit.timeit('convolve(x, y)', setup=setup, number=10000)) # 4.83391789999996
print(timeit.timeit('conv1d(x, y)', setup=setup, number=10000))   # 0.2799923000000035

In the block above you can see that performing the convolution 5 times using the conv1d function produces the same result as convolve on batched inputs. We can also see that convolve (= 4.8 s) is much slower than conv1d (= 0.28 s). Below we assess the slow part of the convolve function WITHOUT batch processing using line_profiler:

%load_ext line_profiler
%lprun -f convolve convolve(x, y)  # evaluated without batch-processing!
Output: Timer unit: 1e-07 s Total time: 0.0010383 s File: C:\python_projects\pysumo\SO_CONVOLVE_QUESTION.py Function: convolve at line 9 Line # Hits Time Per Hit % Time Line Contents ============================================================== 9 def convolve(a, v): 10 1 68.0 68.0 0.7 if a.ndim == 1: 11 1 271.0 271.0 2.6 a = a.view(1, -1) 12 1 44.0 44.0 0.4 v = v.view(1, -1) 13 14 1 28.0 28.0 0.3 nrows, vcols = v.shape 15 1 12.0 12.0 0.1 acols = a.shape[1] 16 17 1 4337.0 4337.0 41.8 expanded = a.view((nrows, acols, 1)) * v.view((nrows, 1, vcols)) 18 1 12.0 12.0 0.1 noutdim = vcols + acols - 1 19 1 127.0 127.0 1.2 out = torch.zeros((nrows, noutdim)) 20 6 32.0 5.3 0.3 for i in range(acols): 21 40 209.0 5.2 2.0 for j in range(vcols): 22 35 5194.0 148.4 50.0 out[:, i+j] += expanded[:, i, j] 23 1 49.0 49.0 0.5 return out.squeeze() Obviously a double for-loop and the line creating the expanded tensor are the slowest. Are these parts avoidable with better code-design?
Turns out that there is a way to do it without for-loops via grouping of the inputs along a dimension: out = torch.conv1d(x_batch.unsqueeze(0), y_batch.unsqueeze(1).flip(2), padding=y_batch.size(1)-1, groups=x_batch.size(0)) print(torch.allclose(out, res1)) # True
https://stackoverflow.com/questions/71695862/
Stack expects each tensor to be equal size
I am following the PyTorch tutorial on speech command recognition and trying to implement my own recognition of 22 sentences in the German language. In the tutorial they use padding for the audio tensors, but for the labels they use only torch.stack. Because of that, I get an error as I start training the network:

RuntimeError: stack expects each tensor to be equal size, but got [456] at entry 0 and [470] at entry 1.

I understand what this says, but since I am new to PyTorch I unfortunately can't implement a padding function for sentences from scratch. Therefore I would be happy if you could give me some hints and tips for this.

Here is the code for the collate_fn and pad_sequence functions:

def pad_sequence(batch):
    # Make all tensors in a batch the same length by padding with zeros
    batch = [item.t() for item in batch]
    batch = torch.nn.utils.rnn.pad_sequence(batch, batch_first=True, padding_value=0.)
    return batch.permute(0, 2, 1)

def collate_fn(batch):
    # A data tuple has the form:
    # waveform, label
    tensors, targets = [], []
    # Gather in lists, and encode labels as indices
    for waveform, label in batch:
        tensors += [waveform]
        targets += [label]
    # Group the list of tensors into a batched tensor
    tensors = pad_sequence(tensors)
    targets = torch.stack(targets)
    return tensors, targets
As I started working directly with pad_sequence, I understood how simply it works. In my case I only needed a batch of sequences, which PyTorch automatically pads out to the length of the longest sequence in the batch. My code now looks like this:

def pad_AudioSequence(batch):
    # Make all tensors in a batch the same length by padding with zeros
    batch = [item.t() for item in batch]
    batch = torch.nn.utils.rnn.pad_sequence(batch, batch_first=True, padding_value=0.)
    return batch.permute(0, 2, 1)

def pad_TextSequence(batch):
    return torch.nn.utils.rnn.pad_sequence(batch, batch_first=True, padding_value=0)

def collate_fn(batch):
    # A data tuple has the form:
    # waveform, label
    tensors, targets = [], []
    # Gather in lists, and encode labels as indices
    for waveform, label in batch:
        tensors += [waveform]
        targets += [label]
    # Group the list of tensors into a batched tensor
    tensors = pad_AudioSequence(tensors)
    targets = pad_TextSequence(targets)
    return tensors, targets

For those who still don't understand how that works, here is a little example:

encDecClass2 = dummyEncoderDecoder()

sent1 = audioWorkerClass.sentences[4]   # wie viel Prozent hat der Akku noch?
sent2 = audioWorkerClass.sentences[5]   # Wie spät ist es?
sent3 = audioWorkerClass.sentences[6]   # Mach einen Timer für 5 Sekunden.

# encode sentences into tensors of numbers, representing words, using my own enc-dec class
sent1 = encDecClass2.encode(sent1)  # tensor([11, 94, 21, 94, 22, 94, 23, 94, 24, 94, 25, 94, 26, 94, 15, 94])
sent2 = encDecClass2.encode(sent2)  # tensor([27, 94, 28, 94, 12, 94, 29, 94, 15, 94])
sent3 = encDecClass2.encode(sent3)  # tensor([30, 94, 31, 94, 32, 94, 33, 94, 34, 94, 35, 94, 19, 94])

print(sent1.shape)  # torch.Size([16])
print(sent2.shape)  # torch.Size([10])
print(sent3.shape)  # torch.Size([14])

batch = []
# add sentences to the batch as separate arrays
batch += [sent1]
batch += [sent2]
batch += [sent3]

output = pad_sequence(batch, batch_first=True, padding_value=0)
print(f"{output}\n{output.shape}")

#############################################################################
# output:
# tensor([[11, 94, 21, 94, 22, 94, 23, 94, 24, 94, 25, 94, 26, 94, 15, 94],
#         [27, 94, 28, 94, 12, 94, 29, 94, 15, 94,  0,  0,  0,  0,  0,  0],
#         [30, 94, 31, 94, 32, 94, 33, 94, 34, 94, 35, 94, 19, 94,  0,  0]])
# torch.Size([3, 16])
#############################################################################

As you can see, all arrays were padded with zeros and equalized to the maximum length among the three. The shape of the output is 3x16, because we had three sentences and the longest sequence in the batch had length 16.
https://stackoverflow.com/questions/71710205/
RuntimeError: DataLoader worker (pid(s) 15876, 2756) exited unexpectedly
I am compiling some existing examples from the PyTorch tutorial website. I am working on the CPU only, with no GPU. When running the program, the error below is shown. Is it because I'm working on the CPU, or is it a setup issue?

raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 15876, 2756) exited unexpectedly

How can I solve it?

import torch
import torch.functional as F
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader
from torchvision import datasets

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))]
)

# Store separate training and validation splits in data
training_set = datasets.FashionMNIST(
    root='data',
    train=True,
    download=True,
    transform=transform
)

validation_set = datasets.FashionMNIST(
    root='data',
    train=False,
    download=True,
    transform=transform
)

training_loader = DataLoader(training_set, batch_size=4, shuffle=True, num_workers=2)
validation_loader = DataLoader(validation_set, batch_size=4, shuffle=False, num_workers=2)

classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal',
           'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')

def matplotlib_imshow(img, one_channel=False):
    if one_channel:
        img = img.mean(dim=0)
    img = img/2+0.5 # unnormalize
    npimg = img.numpy()
    if one_channel:
        plt.imshow(npimg, cmap="Greys")
    else:
        plt.imshow(np.transpose(npimg, (1, 2, 0)))

dataiter = iter(training_loader)
images, labels = dataiter.next()
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid, one_channel=True)
You need to first figure out why the DataLoader worker crashed. A common reason is running out of memory; you can check this by running dmesg -T after your script crashes and seeing whether the system killed any Python process. A quick way to isolate worker-related problems is to set num_workers=0 in the DataLoader, which makes data loading run in the main process.
https://stackoverflow.com/questions/71713719/
How do I more efficiently convert a string of slices to slice objects that can then be used to slice arrays & tensors in PyTorch / NumPy?
How can I simplify this function that converts strings of slices for PyTorch / NumPy to slice list objects that can then be used to slice arrays & tensors? The code below works, but it seems rather inefficient in terms of how many lines it takes. def str_to_slice_indices(slicing_str: str): # Convert indices to lists indices = [ [i if i else None for i in indice_set.strip().split(":")] for indice_set in slicing_str.strip("[]").split(",") ] # Handle Ellipsis "..." indices = [ ... if index_slice == ["..."] else index_slice for index_slice in indices ] # Handle "None" values indices = [ None if index_slice == ["None"] else index_slice for index_slice in indices ] # Handle single number values indices = [ int(index_slice[0]) if isinstance(index_slice, list) and len(index_slice) == 1 and index_slice[0].lstrip("-").isdigit() else index_slice for index_slice in indices ] # Create indice slicing list indices = [ slice(*[int(i) if i and i.lstrip("-").isdigit() else None for i in index_slice]) if isinstance(index_slice, list) else index_slice for index_slice in indices ] return indices Running the above function with an example covering the various types of inputs, give this: out = str_to_slice_indices("[None, :1, 3:4, 2, :, 2:, ...]") print(out) # out: # [None, slice(None, 1, None), slice(3, 4, None), 2, slice(None, None, None), slice(2, None, None), Ellipsis]
Iterating multiple times is not necessary. The sample string has been slightly expanded to test more cases.

def str2slices(s):
    d = {True: lambda e: slice(*[int(i) if i else None for i in e.split(':')]),
         'None': lambda e: None,
         '...': lambda e: ...}
    return [d.get(':' in e or e.strip(), lambda e: int(e))(e.strip()) for e in s[1:-1].split(',')]

str2slices('[None, :1, 3:4, 2, :, -10: ,::,:4:2, 1:10:2, -32,...]')

Output

[None, slice(None, 1, None), slice(3, 4, None), 2, slice(None, None, None), slice(-10, None, None), slice(None, None, None), slice(None, 4, 2), slice(1, 10, 2), -32, Ellipsis]

The same errors as in OP's solution are caught. They don't silently change the result, but throw a ValueError for unsupported input.

Breakdown of the solution

Assuming string slicing and the split function are known. With the example

s = '[None, :1, 3:4, 2, :, -10: ,::,:4:2, 1:10:2, -32,...]'

we can find slices with

[':' in e for e in s[1:-1].split(',')]
#[False, True, True, False, True, True, True, True, True, False, False]

Using or short-circuiting we can distinguish the other cases:

[':' in e or e.strip() for e in s[1:-1].split(',')]
#['None', True, True, '2', True, True, True, True, True, '-32', '...']

These values can be used as keys of a dictionary:

d = {True: 'slice', 'None': None, '...': ...}
[d[':' in e or e.strip()] for e in s[1:-1].split(',')]
#KeyError: '2'

To prevent the KeyError we can use the get method with a default value:

d = {True: 'slice', 'None': None, '...': ...}
[d.get(':' in e or e.strip(), 'number') for e in s[1:-1].split(',')]
#[None, 'slice', 'slice', 'number', 'slice', 'slice', 'slice', 'slice', 'slice', 'number', Ellipsis]

In order to process slices, we need to parse additional values at runtime. So we use lambdas as dictionary values to be able to call them with (e.strip()). Finally, we convert values to int if necessary.

d = {True: lambda e: slice(*[int(i) if i else None for i in e.split(':')]),
     'None': lambda e: None,
     '...': lambda e: ...}
[d.get(':' in e or e.strip(), lambda e: int(e))(e.strip()) for e in s[1:-1].split(',')]

Output

[None, slice(None, 1, None), slice(3, 4, None), 2, slice(None, None, None), slice(-10, None, None), slice(None, None, None), slice(None, 4, 2), slice(1, 10, 2), -32, Ellipsis]
https://stackoverflow.com/questions/71718749/
Output of the model depends on the shape of the weights tensor
I want to train a model to sum its three inputs, so it is as simple as possible. First the weights are initialized randomly; this produces a bad error (approx. 0.5). Then I initialize the weights with zeros. There are two options:

the shape of the weights tensor is [1, 3]
the shape of the weights tensor is [3]

When I choose the 1st option the model still performs badly and can't learn this simple formula. When I choose the 2nd option it works perfectly, with an error of 10e-12.

Why does the result depend on the shape of the weights? Why do I need to initialize the model with zeros to solve this simple problem?

import torch
from torch.nn import Sequential as Seq, Linear as Lin
from torch.optim.lr_scheduler import ReduceLROnPlateau

X = torch.rand((1024, 3))
y = (X[:,0] + X[:,1] + X[:,2])

m = Seq(Lin(3, 1, bias=False))

# 1 option
m[0].weight = torch.nn.parameter.Parameter(torch.tensor([[0, 0, 0]], dtype=torch.float))

# 2 option
#m[0].weight = torch.nn.parameter.Parameter(torch.tensor([0, 0, 0], dtype=torch.float))

optim = torch.optim.SGD(m.parameters(), lr=10e-2)
scheduler = ReduceLROnPlateau(optim, 'min', factor=0.5, patience=20, verbose=True)
mse = torch.nn.MSELoss()

for epoch in range(500):
    optim.zero_grad()
    out = m(X)
    loss = mse(out, y)
    loss.backward()
    optim.step()

    if epoch % 20 == 0:
        print(loss.item())
    scheduler.step(loss)
The first option doesn't learn because it trips over broadcasting: out.shape == (1024, 1) while the corresponding target y has shape (1024,). MSELoss, as expected, computes the mean of the tensor (out - y)^2, which in this case has shape (1024, 1024), clearly the wrong objective for this task. With the 2nd option, the tensor (out - y)^2 has size (1024,), and its mean corresponds to the actual MSE. The default approach, without explicitly changing the weights' shape (through options 1 and 2), would work if you set the target shape to (1024, 1), for example with y = y.unsqueeze(-1) after the definition of y.
https://stackoverflow.com/questions/71718929/
Does inplace matter when we return ReLU(x)
Is there a difference between the following two classes? I know what inplace is (you don't need to do x = function(x) but only function(x) to modify x if inplace is True). But here because we return self.conv(x), it should not matter, right? class ConvBlock(nn.Module): def __init__( self, in_channels, out_channels, down=True, use_act=True, **kwargs ): super().__init__() self.conv = nn.Sequential((nn.Conv2d(in_channels, out_channels, padding_mode='reflect', **kwargs) if down else nn.ConvTranspose2d(in_channels, out_channels, **kwargs)), nn.InstanceNorm2d(out_channels), (nn.ReLU() if use_act else nn.Identity())) def forward(self, x): return self.conv(x) class ConvBlockInplace(nn.Module): def __init__( self, in_channels, out_channels, down=True, use_act=True, **kwargs ): super().__init__() self.conv = nn.Sequential((nn.Conv2d(in_channels, out_channels, padding_mode='reflect', **kwargs) if down else nn.ConvTranspose2d(in_channels, out_channels, **kwargs)), nn.InstanceNorm2d(out_channels), (nn.ReLU(inplace=True) if use_act else nn.Identity())) def forward(self, x): return self.conv(x)
Inplace operations do the exact same amount of computation. However, there are fewer memory accesses, so if your task is memory-bound, it would "matter". I used the ptflops flops counter to generate the following statistics:

ConvBlock(
  0.0 M, 100.000% Params, 0.015 GMac, 100.000% MACs,
  (conv): Sequential(
    0.0 M, 100.000% Params, 0.015 GMac, 100.000% MACs,
    (0): Conv2d(0.0 M, 100.000% Params, 0.014 GMac, 93.333% MACs, 3, 10, kernel_size=(3, 3), stride=(1, 1), padding_mode=reflect)
    (1): InstanceNorm2d(0.0 M, 0.000% Params, 0.0 GMac, 3.333% MACs, 10, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
    (2): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 3.333% MACs, )
  )
)
Computational complexity: 0.01 GMac
Number of parameters: 280

Warning: module ConvBlockInplace is treated as a zero-op.
ConvBlockInplace(
  0.0 M, 100.000% Params, 0.015 GMac, 100.000% MACs,
  (conv): Sequential(
    0.0 M, 100.000% Params, 0.015 GMac, 100.000% MACs,
    (0): Conv2d(0.0 M, 100.000% Params, 0.014 GMac, 93.333% MACs, 3, 10, kernel_size=(3, 3), stride=(1, 1), padding_mode=reflect)
    (1): InstanceNorm2d(0.0 M, 0.000% Params, 0.0 GMac, 3.333% MACs, 10, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False)
    (2): ReLU(0.0 M, 0.000% Params, 0.0 GMac, 3.333% MACs, inplace=True)
  )
)
Computational complexity: 0.01 GMac
Number of parameters: 280
https://stackoverflow.com/questions/71719770/
RuntimeError: Boolean value of Tensor with more than one value is ambiguous in python
I'm facing the following error, and I don't know why: This code is on GitHub, I ran it correctly on Collab, but it gives me the following error here: device="cpu" lr=3e-5#1e-3 num_training_steps=int(len(dataset) / TRAIN_BATCH_SIZE * EPOCH) model=Bert_Classification_Model().to(device) optimizer=AdamW(model.parameters(), lr=lr) scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps = 0, num_training_steps = num_training_steps) val_losses=[] batches_losses=[] val_acc=[] for epoch in range(EPOCH): t0 = time.time() print(f"\n=============== EPOCH {epoch+1} / {EPOCH} ===============\n") batches_losses_tmp=train_loop_fun1(train_data_loader, model, optimizer, device) epoch_loss=np.mean(batches_losses_tmp) print(f"\n*** avg_loss : {epoch_loss:.2f}, time : ~{(time.time()-t0)//60} min ({time.time()-t0:.2f} sec) ***\n") t1=time.time() output, target, val_losses_tmp=eval_loop_fun1(valid_data_loader, model, device) print(f"==> evaluation : avg_loss = {np.mean(val_losses_tmp):.2f}, time : {time.time()-t1:.2f} sec\n") tmp_evaluate=evaluate(target.reshape(-1), output) print(f"=====>\t{tmp_evaluate}") val_acc.append(tmp_evaluate['accuracy']) val_losses.append(val_losses_tmp) batches_losses.append(batches_losses_tmp) print("\t§§ model has been saved §§") torch.save(model, f"model1/model_epoch{epoch+1}.pt") Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.bias', 'cls.seq_relationship.weight'] - This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). =============== EPOCH 1 / 3 =============== --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-33-aa98faac385e> in <module>() 14 t0 = time.time() 15 print(f"\n=============== EPOCH {epoch+1} / {EPOCH} ===============\n") ---> 16 batches_losses_tmp=train_loop_fun1(train_data_loader, model, optimizer, device) 17 epoch_loss=np.mean(batches_losses_tmp) 18 print(f"\n*** avg_loss : {epoch_loss:.2f}, time : ~{(time.time()-t0)//60} min ({time.time()-t0:.2f} sec) ***\n") 6 frames /content/RoBERT_Recurrence_over_BERT/Custom_Dataset_Class.py in long_terms_tokenizer(self, data_tokenize, targets) 158 targets_list.append(targets) 159 --> 160 if remain and self.approach != 'head': 161 remain = torch.tensor(remain, dtype=torch.long) 162 idxs = range(len(remain)+self.chunk_len) RuntimeError: Boolean value of Tensor with more than one value is ambiguous This is the file link: https://github.com/helmy-elrais/RoBERT_Recurrence_over_BERT/blob/master/train.ipynb
Your tensor remain (in your Dataset class) is a tensor with more than one element, not a single boolean value. Therefore the truth value of the condition if remain is ambiguous, which is exactly what the error says, and the condition is not well-defined.
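A sketch of a possible rewrite of that condition in Custom_Dataset_Class.py; both variants are assumptions about the intent ("is there anything left over?"), and which one applies depends on the type of remain at that point:

# the next line in the traceback converts `remain` with torch.tensor,
# which suggests it is still a Python list here, so a length check works:
if len(remain) > 0 and self.approach != 'head':
    remain = torch.tensor(remain, dtype=torch.long)

# if it were already a tensor, the element count would be the right test:
# if remain.numel() > 0 and self.approach != 'head': ...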
https://stackoverflow.com/questions/71720038/
RuntimeError: mat1 and mat2 shapes cannot be multiplied (5400x64 and 5400x64)
I'm working on an image classification network and ran into a problem with the right shapes of inputs and outputs in the forward() function. The shapes look the same to me, so I can't figure out what's wrong. The error comes from the line x = F.relu(self.fc1(x)). Can anyone please help me with this problem? That's my code:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 8, kernel_size=2)
        self.conv2 = nn.Conv2d(8, 12, kernel_size=2)
        self.conv3 = nn.Conv2d(12, 18, kernel_size=2)
        self.conv4 = nn.Conv2d(18, 24, kernel_size=2)
        self.fc1 = nn.Linear(5400, 64)
        self.fc2 = nn.Linear(64, 2)

    def forward(self, x):
        print(f'1. {x.size()}')
        x = self.conv1(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'2. {x.size()}')
        x = self.conv2(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'3. {x.size()}')
        x = self.conv3(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'4. {x.size()}')
        x = self.conv4(x)
        x = F.max_pool2d(x, 2)
        x = F.relu(x)
        print(f'5. {x.size()}')
        x = x.view(-1, x.size(0))
        print(f'6. {x.size()}')
        x = F.relu(self.fc1(x))
        print(f'7. {x.size()}')
        x = self.fc2(x)
        print(f'8. {x.size()}')
        return torch.sigmoid(x)

That's the print output:

1. torch.Size([64, 3, 256, 256])
2. torch.Size([64, 8, 127, 127])
3. torch.Size([64, 12, 63, 63])
4. torch.Size([64, 18, 31, 31])
5. torch.Size([64, 24, 15, 15])
6. torch.Size([5400, 64])
I think changing

x = x.view(-1, x.size(0))

to

x = x.view(x.size(0), -1)

will solve your problem. You can see in print 6 (torch.Size([5400, 64])) that the batch size 64 ended up in axis 1 and not in axis 0. The fully connected layer expects 5400 input features per sample, so keeping the batch dimension first and letting -1 infer the 5400 features will likely solve it: you may not know the batch size in advance, but you do know that the input to the fully connected layer is 5400.
https://stackoverflow.com/questions/71724185/
Explain (T,) tensor shape
In the following d2l tutorial: import torch T = 1000 time = torch.arange(1, T + 1, dtype=torch.float32) print(f"time shape: {time.shape}") x = torch.sin(0.01 * time) + torch.normal(0.0, 0.2, size=(T,)) Given that the shape of torch.sin(0.01 * time) is torch.Size([1000]), why is the size argument provided to the normal function (T,) and not (T)?
Because (T) is simply T with type int (the parentheses are just grouping), while torch.normal requires a tuple for its size argument. (T,) is the Python way to write a one-element tuple.
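A quick sketch of the difference in plain Python:
T = 1000
print(type((T)))   # <class 'int'>: the parentheses are just grouping
print(type((T,)))  # <class 'tuple'>: the trailing comma makes a one-element tuple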
https://stackoverflow.com/questions/71732913/
What does \* mean in the function signature of PyTorch?
For example in the randint signature there is a \* as 4th argument. What does it mean ? torch.randint(low=0, high, size, \*, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor I am aware of position only args and keyword only args introduced in Python 3.8 which use \ and *. But here I see \*.
I was pretty sure user2357112 supports Monica is right. To confirm it, I've tried looking for the source of the documentation, which is autogenerated. I've looked at the implementation, which is in C++, where function signatures work differently. I found the type annotation generation code: 'randint': ['def randint(low: _int, high: _int, size: _size, *,' ' generator: Optional[Generator]=None, {}) -> Tensor: ...' .format(FACTORY_PARAMS), 'def randint(high: _int, size: _size, *,' ' generator: Optional[Generator]=None, {}) -> Tensor: ...' .format(FACTORY_PARAMS)], This suggests that the commenter was indeed right, and it should have been * rather than \*. Positional-only parameters in Python are denoted by /, not by \. In Python's syntax \ is only used in two places: as an escape character in string literals, and at the end of a line as an explicit line continuation marker. It looks like this has been an issue since at least May 2020, with no activity since.
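For illustration, a hedged sketch of what a bare * does in a signature (randint_demo is a made-up function, not PyTorch's actual implementation):
def randint_demo(high, size, *, generator=None, dtype=None):
    # every parameter after the bare * is keyword-only
    return high, size, generator, dtype

print(randint_demo(10, (3,)))                 # fine
print(randint_demo(10, (3,), dtype='int64'))  # fine: passed by keyword
# randint_demo(10, (3,), None)  # TypeError: takes 2 positional arguments but 3 were given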
https://stackoverflow.com/questions/71733165/
YOLOv5 get boxes, scores, classes, nums
im trying to bind the Object Tracking with Deep Sort in my Project and i need to get the boxes, scores, classes, nums. Loading Pretrained Yolov5 model: model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True) model.eval() Getting the Prediction: result = model(img) print(result.shape) print(result) torch.Size([8, 6]) tensor([[277.50000, 379.25000, 410.50000, 478.75000, 0.90625, 2.00000], [404.00000, 205.12500, 498.50000, 296.00000, 0.88623, 2.00000], [262.50000, 247.75000, 359.50000, 350.25000, 0.88281, 2.00000], [210.50000, 177.75000, 295.00000, 261.75000, 0.83154, 2.00000], [195.50000, 152.50000, 257.75000, 226.00000, 0.78223, 2.00000], [137.00000, 146.75000, 168.00000, 162.00000, 0.55713, 2.00000], [ 96.00000, 130.12500, 132.50000, 161.12500, 0.54199, 2.00000], [ 43.56250, 89.56250, 87.68750, 161.50000, 0.50146, 5.00000]], device='cuda:0') tensor([[277.50000, 379.25000, 410.50000, 478.75000, 0.90625, 2.00000], [404.00000, 205.12500, 498.50000, 296.00000, 0.88623, 2.00000], [262.50000, 247.75000, 359.50000, 350.25000, 0.88281, 2.00000], [210.50000, 177.75000, 295.00000, 261.75000, 0.83154, 2.00000], [195.50000, 152.50000, 257.75000, 226.00000, 0.78223, 2.00000], [137.00000, 146.75000, 168.00000, 162.00000, 0.55713, 2.00000], [ 96.00000, 130.12500, 132.50000, 161.12500, 0.54199, 2.00000], [ 43.56250, 89.56250, 87.68750, 161.50000, 0.50146, 5.00000]], device='cuda:0') so now my question is how do i get the boxes, scores, classes, nums in each variables? I need that for the Object Tracking I tried it once with the example on Pytorch Documentation: result.xyxy[0] but in my Case I get an Error: Tensor has no attribute xyxy
The output from the model is a torch tensor and has no xyxy method. You need to extract the values manually. Either you can go through each detection one by one: import torch det = torch.rand(8, 6) for *xyxy, conf, cls in det: print(*xyxy) print(conf) print(cls) or you can slice the detections tensor by: xyxy = det[:, 0:4] conf = det[:, 4] cls = det[:, 5] print(xyxy) print(conf) print(cls)
https://stackoverflow.com/questions/71737788/
Filter torch tensor based on another tensor without loops
Suppose I have the following two torch.Tensors: x = torch.tensor([0,0,0,1,1,2,2,2,2], dtype=torch.int64) y = torch.tensor([0,2], dtype=torch.int64) I want to somehow filter x such that only the values that are in y remain: x_filtered = torch.tensor([0,0,0,2,2,2,2]) For another example, if y = torch.tensor([0,1]), then x_filtered = torch.tensor([0,0,0,1,1]). Both x,y are always 1D and int64. y is always sorted, if it makes it simpler, we can assume that x is always sorted as well. I tried to think of various ways to do it without using loops, but failed. I cannot really use loops because my use case involves x in the millions and y in tens of thousands. Any help is appreciated. Just realised what I need is the torch equivalent of numpy.in1d
For filtering a tensor the way you want in your task, you can use the isin function available in torch. It is used as follows: import torch x = torch.tensor([0,0,0,1,1,2,2,2,2,3], dtype=torch.int64) y = torch.tensor([0,2], dtype=torch.int64) # torch.isin(x, y) c=x[torch.isin(x,y)] print(c) After running this code you will get the filtered tensor you asked for.
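If your installed PyTorch predates torch.isin, a broadcast comparison gives the same numpy.in1d-style mask; this is a sketch, not part of the answer above:
import torch

x = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 2], dtype=torch.int64)
y = torch.tensor([0, 2], dtype=torch.int64)
mask = (x.unsqueeze(-1) == y).any(dim=-1)  # compare each x against every y, reduce over y
print(x[mask])  # tensor([0, 0, 0, 2, 2, 2, 2])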
https://stackoverflow.com/questions/71740402/
Trying to recreate BiLSTM model from Adhikari et al. 2019 (LSTM_reg) in Tensorflow
Im trying to recreate this model LSTM_reg from this paper in TensorFlow to use in my problem. I've come up with the following code: def get_model(lr=0.001): model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(nb_words, output_dim=embed_size, weights=[embedding_matrix], input_length = maxlen, trainable=False)) model.add(tf.keras.layers.Dropout(0.2)) # embedding dropouts model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256, return_sequences=True, recurrent_dropout=0.2, activation = 'tanh'))) # weight drop on recurrent layers using recurrent_dropout model.add(tf.keras.layers.MaxPooling1D(pool_size=2, padding = 'valid')) model.add(tf.keras.layers.Flatten()) model.add(tf.keras.layers.Dense(512, activation='relu')) model.add(tf.keras.layers.Dropout(0.5)) model.add(tf.keras.layers.Dense(20)) model.add(tf.keras.layers.Activation('sigmoid')) model.compile(loss = 'categorical_crossentropy' , optimizer = 'adam', metrics = ['accuracy', tfa.metrics.F1Score(num_classes = 20)]) return model Have I gone about this the right way? Got some pretty weird values while training my dataset, hence was wondering about my implementation.. There is a pytorch implementation for this model here. But I'm not sure if I have reproduced this correctly.
One major difference I can see is that the paper uses a global max pooling, whereas you've only used max pooling with a kernel size of 2: def get_model(lr=0.001): model = tf.keras.models.Sequential() model.add(tf.keras.layers.Embedding(nb_words, output_dim=embed_size, weights=[embedding_matrix], input_length = maxlen, trainable=False)) model.add(tf.keras.layers.Dropout(0.2)) # embedding dropouts model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(256, return_sequences=True, recurrent_dropout=0.2, activation = 'tanh'))) # weight drop on recurrent layers using recurrent_dropout model.add(tf.keras.layers.GlobalMaxPooling1D()) model.add(tf.keras.layers.Dense(512, activation='relu')) model.add(tf.keras.layers.Dropout(0.5)) model.add(tf.keras.layers.Dense(20)) model.add(tf.keras.layers.Activation('sigmoid')) model.compile(loss = 'categorical_crossentropy' , optimizer = 'adam', metrics = ['accuracy', tfa.metrics.F1Score(num_classes = 20)]) return model I obviously don't have your data so make sure that the arguments are set correctly: https://keras.io/api/layers/pooling_layers/global_max_pooling1d/. Another change is that the pytorch repo you shared has a ReLU after the LSTM (I don't know why). You could try adding that in and see if it helps.
https://stackoverflow.com/questions/71742385/
Error during training my model with pytorch, stack expects each tensor to be equal size
I am using the MMSegmentation library to train my model for instance image segmentation. During training, I create the model (Vision Transformer), and when I try to train it I get this error:
RuntimeError: Caught RuntimeError in DataLoader worker process 0. Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data)
File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in collate for key in batch[0]
File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 81, in <dictcomp> for key in batch[0]
File "/usr/local/lib/python3.7/dist-packages/mmcv/parallel/collate.py", line 59, in collate stacked.append(default_collate(padded_samples))
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/collate.py", line 56, in default_collate return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [1, 256, 256, 256] at entry 0 and [1, 256, 256] at entry 3
I must also mention that I have tested my own dataset with other models available in their library and all of them work properly. What I tried:
model = build_segmentor(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
train_segmentor(model, datasets, cfg, distributed=False, validate=True, meta=dict())
It seems that the images in your dataset might not have the same size, given that the ViT model (https://arxiv.org/abs/2010.11929) you are using contains an MLP. If that is not the case, it is worth checking whether your labels all have the expected dimensions: presumably, MMSegmentation expects the label to be just the annotation map (a 2D array). It is recommended that you revise your dataset and prepare the annotation maps accordingly.
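A small sanity check along those lines, assuming the annotations are image files (the path below is a placeholder):
import numpy as np
from PIL import Image

ann = np.array(Image.open('path/to/annotation.png'))
# a per-pixel class map should be 2D; (H, W, 3) or (H, W, 4) hints at an RGB(A) image instead
print(ann.shape, ann.dtype)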
https://stackoverflow.com/questions/71742524/
How can generator get input noise z?
Hi, I'm looking at this GAN implementation code. code here My question is: the generator class has no input parameter when it is defined, class Generator (#38 from the link), but when training, the generator gets the input z (#141 from the link). I looked into the nn.Module class, which is the parent of class Generator, but I can't find an input parameter for the noise z. Can anyone help? class Generator(nn.Module): #38 def __init__(self): super(Generator, self).__init__() def block(in_feat, out_feat, normalize=True): layers = [nn.Linear(in_feat, out_feat)] if normalize: layers.append(nn.BatchNorm1d(out_feat, 0.8)) layers.append(nn.LeakyReLU(0.2, inplace=True)) return layers self.model = nn.Sequential( *block(opt.latent_dim, 128, normalize=False), *block(128, 256), *block(256, 512), *block(512, 1024), nn.Linear(1024, int(np.prod(img_shape))), nn.Tanh() ) def forward(self, z): img = self.model(z) img = img.view(img.size(0), *img_shape) return img generator = Generator() #88 gen_imgs = generator(z) #141 I tried looking at nn.Module and variable() in the pytorch docs and still can't figure out what I wanted.
Consider every quoted line (38, 88 and 141): Line 38 is a class definition; putting nn.Module in the brackets declares that Generator inherits from nn.Module (the common way to define your own neural network). On line 88 an instance of the Generator class is created: its parameters are whatever __init__ (line 39) needs besides self, which is why the brackets on line 88 are empty. On line 141 the generator is called; the behavior here is defined by the forward method (line 58), and there is one parameter to be passed, z. Again, line 88 creates an instance, and line 141 calls the forward method of that instance.
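A minimal sketch of the same mechanism, showing that calling the instance dispatches to forward via nn.Module.__call__:
import torch
from torch import nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, z):
        return self.fc(z)

m = Tiny()                 # empty brackets, like line 88
out = m(torch.rand(3, 4))  # nn.Module.__call__ routes the argument to forward(z), like line 141
print(out.shape)           # torch.Size([3, 2])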
https://stackoverflow.com/questions/71746987/
Pytorch: Assign values from one mask to another, masked by itself
I have a mask active that tracks batches that still have not terminated in a recurrent process. It's dimension is [batch_full,], and it's true entries show which elements need to still be used in current step. The recurrent process generates another mask, terminated, which has as many elements as true values in active mask. Now, I want to take values from ~terminated and put them back into active, but at the right indices. Basically I want to do: import torch active = torch.ones([4,], dtype=torch.bool) active[:2] = torch.tensor(False) terminated = torch.tensor([True, False]) active[active] = ~terminated print(active) # expected [F, F, F, T] However, I get error: RuntimeError: unsupported operation: some elements of the input tensor and the written-to tensor refer to a single memory location. Please clone() the tensor before performing the operation. How can I do the described above operation in an effective way?
There are a few solutions; I will also give their speed as measured by timeit (10k repetitions, on a 2021 MacBook Pro). The simplest solution, taking 0.260 s: active[active.clone()] = ~terminated We can use the masked_scatter_ in-place operation for about a 2x speedup (0.136 s): active.masked_scatter_( active, ~terminated, ) The out-of-place operation, taking 0.161 s, would be: active = torch.masked_scatter( active, active, ~terminated, )
https://stackoverflow.com/questions/71747998/
Why does my variational autoencoder only produce positive values?
I copied this example to build a variational autoencoder (VAE). The example uses images, but I use it for a signal that contains negative values. After training, the autoencoder only reconstructs the positive part of the signal, it does not produce negative values. Can anyone spot where the problem is or explain why this is the case?
If you used the exact code shown in the example you linked, then at the end of the decoder you have x = torch.sigmoid(self.decConv2(x)), which takes any real number and outputs a number in [0, 1]. This is why the network is unable to output negative values. If you want the model to output negative values as well, remove the sigmoid function. This of course means that you also have to change the loss function you train with, since the BCE loss is only valid for outputs in the range [0, 1]. As a recommendation, I would suggest using the BCE-with-logits loss and avoiding the sigmoid in the decoder, since that loss incorporates the sigmoid and the BCE in a more numerically stable manner.
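A minimal sketch of that recommended pattern, with made-up tensor shapes standing in for the decoder output:
import torch
from torch import nn

logits = torch.randn(8, 1, 28, 28)  # raw decoder output, no sigmoid applied
target = torch.rand(8, 1, 28, 28)   # BCE targets must lie in [0, 1]
loss = nn.BCEWithLogitsLoss()(logits, target)  # fuses sigmoid + BCE, numerically stable
print(loss.item())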
https://stackoverflow.com/questions/71749870/
How do you implement SVoice?
I'm trying to use Facebook's SVoice to split out different speakers in my audio file using python. I found a library that implemented it here: https://github.com/facebookresearch/svoice However, I'm having trouble running it. The readme discusses how to train my own dataset which I can't really do since I don't have the noises parsed out in my own audio files. It also talks about how I can separate my own file using one of the models in the models folder but I get the following error when I try to follow the readme and create a model from the toy dataset: File "/mnt/c/Users/imrea/PycharmProjects/svoice/svoice/data/audio.py", line 34, in find_audio_files siginfo, _ = torchaudio.info(file) TypeError: cannot unpack non-iterable AudioMetaData object How do I run this to test the output on an audio file of my own? Has anyone used this before? Any guidance would be greatly appreciated!
You need to have torchaudio version 0.6.0 Try: pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 torchaudio==0.6.0 -f https://download.pytorch.org/whl/torch_stable.html This worked for me.
https://stackoverflow.com/questions/71750794/
Viewing Pytorch weights from a *.pth file
I have a .pth file created with Pytorch with weights. How would I be able to view the weights from this file? I tried this code to load and view but it was not working (as a newbie, I might be entirely wrong)- import torch import torchvision.models as models torch.save('weights\kharif_crops_final.pth') models.load_state_dict(torch.load('weights\kharif_crops_final.pth')) models.eval() print(models)
import torch
model = torch.load('path')
print(model)
(Verify and confirm)
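A .pth file can hold either a whole pickled model or just a state_dict, so a slightly more defensive sketch (reusing the path from the question) could be:
import torch

obj = torch.load('weights/kharif_crops_final.pth', map_location='cpu')
if isinstance(obj, dict):  # a state_dict: maps parameter names to weight tensors
    for name, tensor in obj.items():
        print(name, tuple(tensor.shape))
else:                      # a full pickled model
    print(obj)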
https://stackoverflow.com/questions/71754506/
How do I load a fine-tuned AllenNLP BERT-SRL model using BertPreTrainedModel.from_pretrained()?
I have fine-tuned a BERT model for semantic role labeling, using AllenNLP. This produces a model directory (serialization directory, if I recall?) that contains the following: best.th config.json meta.json metrics_epoch_0.json metrics_epoch_10.json metrics_epoch_11.json metrics_epoch_12.json metrics_epoch_13.json metrics_epoch_14.json metrics_epoch_1.json metrics_epoch_2.json metrics_epoch_3.json metrics_epoch_4.json metrics_epoch_5.json metrics_epoch_6.json metrics_epoch_7.json metrics_epoch_8.json metrics_epoch_9.json metrics.json model_state_e14_b0.th model_state_e15_b0.th model.tar.gz out.log training_state_e14_b0.th training_state_e15_b0.th vocabulary Where vocabulary is a folder with labels.txt and non_padded_namespaces.txt. I'd now like to use this fine-tuned model BERT model as the initialization when learning a related task, event extraction, using this library: https://github.com/wilsonlau-uw/BERT-EE (ie I want to exploit some transfer learning). The config.ini file has a line for fine_tuned_path, where I can specify an already-fine-tuned model that I want to use here. I provided the path to the AllenNLP serialization directory, and I got the following error: 2022-04-05 13:07:28,112 - INFO - setting seed 23 2022-04-05 13:07:28,113 - INFO - loading fine tuned model in /data/projects/SRL/ser_pure_clinical_bert-large_thyme_and_ontonotes/ Traceback (most recent call last): File "main.py", line 65, in <module> model = BERT_EE() File "/data/projects/SRL/BERT-EE/model.py", line 88, in __init__ self.__build(self.use_fine_tuned) File "/data/projects/SRL/BERT-EE/model.py", line 118, in __build self.__get_pretrained(self.fine_tuned_path) File "/data/projects/SRL/BERT-EE/model.py", line 110, in __get_pretrained self.__model = BERT_EE_model.from_pretrained(path) File "/home/richier/anaconda3/envs/allennlp/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1109, in from_pretrained f"Error no file named {[WEIGHTS_NAME, TF2_WEIGHTS_NAME, TF_WEIGHTS_NAME + '.index', FLAX_WEIGHTS_NAME]} found in " OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index', 'flax_model.msgpack'] found in directory /data/projects/SRL/ser_pure_clinical_bert-large_thyme_and_ontonotes/ or `from_tf` and `from_flax` set to False. Of course, the serialization directory doesn't have any of those files, hence the error. I tried unzipping model.tar.gz but it only has: config.json weights.th vocabulary/ vocabulary/.lock vocabulary/labels.txt vocabulary/non_padded_namespaces.txt meta.json Digging into the codebase of the GitHub repo I linked above, I can see that BERT_EE_model inherits from BertPreTrainedModel from the transformers library, so the trick would seem to be getting the AllenNLP model into a format that BertPreTrainedModel.from_pretrained() can load...? Any help would be greatly appreciated!
I believe I have figured this out. Basically, I had to re-load my model archive, access the underlying model and tokenizer, and then save those: from allennlp.models.archival import load_archive from allennlp_models.structured_prediction import SemanticRoleLabeler, srl, srl_bert archive = load_archive('ser_pure_clinical_bert-large_thyme_and_ontonotes/model.tar.gz') bert_model = archive.model.bert_model #type is transformers.models.bert.modeling_bert.BertModel bert_model.save_pretrained('ser_pure_clinical_bert-large_thyme_and_ontonotes_save_pretrained/') bert_tokenizer = archive.dataset_reader.bert_tokenizer bert_tokenizer.save_pretrained('ser_pure_clinical_bert-large_thyme_and_ontonotes_save_pretrained/') (This last part is probably less interesting to most folks, but also, in the config.ini I mentioned, the directory 'ser_pure_clinical_bert-large_thyme_and_ontonotes_save_pretrained' needed to be passed to the line pretrained_model_name_or_path not to fine_tuned_path.)
https://stackoverflow.com/questions/71755917/
ANN not training accurately as I am not getting a better loss reduction
Just starting out with regression, and it seems I'm not getting something right. Please, what am I doing wrong here, as my loss is not reducing? import torch from torch import nn import numpy as np import pandas as pd from sklearn.model_selection import train_test_split df = pd.read_excel('Folds5x2_pp.xlsx') x = df.iloc[:,:-1].values y = df.iloc[:,-1].values x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0) class ANN(nn.Module): def __init__(self, input, output): super(ANN, self).__init__() self.fc1 = nn.Linear(input, 6) self.r1 = nn.ReLU() self.fc2 = nn.Linear(6, output) def forward(self, x): return self.fc2(self.r1(self.fc1(x))) f, s = x.shape ann = ANN(s, 1) criterion = nn.MSELoss() optimizer = torch.optim.Adam(ann.parameters(), lr=0.01) x = torch.from_numpy(x_train.astype(np.float32)) y = torch.from_numpy(y_train.astype(np.float32)) for i in range(100): y_pred = ann(x) loss = criterion(y_pred, y) print(f"i: {i}, loss: {loss.item()}") loss.backward() optimizer.step() optimizer.zero_grad()
You should put optimizer.zero_grad() first, because the gradient will be relative to the previous batch of data if you don't zero it out. Like this: for i in range(100): y_pred = ann(x) loss = criterion(y_pred, y) print(f"i: {i}, loss: {loss.item()}") optimizer.zero_grad() loss.backward() optimizer.step()
https://stackoverflow.com/questions/71760412/
Get Model Summary with `torchsummary` pip Package
I am trying to get a good summary of my deep learning model like Keras summary function (can be found in here). For that, what I have found is torch-summary pip package (details can be found here) is the best package I have found from this question. My pytorch model is like this- class DeepLearningModel(Module): # define model elements def __init__(self, n_inputs=34): super(DeepLearningModel, self).__init__() # input to first hidden layer self.hidden1 = Linear(n_inputs, 10) kaiming_uniform_(self.hidden1.weight, nonlinearity='relu') self.act1 = ReLU() # second hidden layer self.hidden2 = Linear(10, 8) kaiming_uniform_(self.hidden2.weight, nonlinearity='relu') self.act2 = ReLU() # third hidden layer and output self.hidden3 = Linear(8, 1) xavier_uniform_(self.hidden3.weight) self.act3 = Sigmoid() # forward propagate input def forward(self, X): # input to the first hidden layer X = self.hidden1(X) X = self.act1(X) # second hidden layer X = self.hidden2(X) X = self.act2(X) # third hidden layer and output X = self.hidden3(X) X = self.act3(X) return X And to view the model's summary with the package, I am using this- model_stats = summary(my_model, input_size=(1, 34, 8)) But for that, I am finding this error- Message=mat1 and mat2 shapes cannot be multiplied (68x8 and 34x10) Source=D:\Education\0. Research\1. Computer Science Knowledge Graph\Code\Terms Extractor\TermExtractor\BinaryClassifier\DeepLearningModel.py StackTrace: File "D:\Education\0. Research\1. Computer Science Knowledge Graph\Code\Terms Extractor\TermExtractor\BinaryClassifier\DeepLearningModel.py", line 25, in forward (Current frame) X = self.hidden1(X) File "D:\Education\0. Research\1. Computer Science Knowledge Graph\Code\Terms Extractor\TermExtractor\BinaryClassifier\DeepLearningClassifier.py", line 170, in printModel model_stats = summary(self.model, (1, 34, 8)) File "D:\Education\0. Research\1. Computer Science Knowledge Graph\Code\Terms Extractor\TermExtractor\BinaryClassifier\main-caller.py", line 11, in <module> model.printModel() So, I am not sure what values should I put in the input_size parameter. Can anyone please help me find the issue or find my actual input shape for the model for getting a summary?
Linear expects the number of channels on the last axis of the input to be in_features. model_stats = summary(my_model, input_size=(1, 8, 34))
https://stackoverflow.com/questions/71761532/
Parametric estimation of a Gaussian Mixture Model
I am trying to train a model to estimate a GMM. However, the means of the GMM are calculated each time based on a mean_placement parameter. I am following the solution provided here, I'll copy and paste the original code: import numpy as np import matplotlib.pyplot as plt import sklearn.datasets as datasets import torch from torch import nn from torch import optim import torch.distributions as D num_layers = 8 weights = torch.ones(8,requires_grad=True) means = torch.tensor(np.random.randn(8,2),requires_grad=True) stdevs = torch.tensor(np.abs(np.random.randn(8,2)),requires_grad=True) parameters = [weights, means, stdevs] optimizer1 = optim.SGD(parameters, lr=0.001, momentum=0.9) num_iter = 10001 for i in range(num_iter): mix = D.Categorical(weights) comp = D.Independent(D.Normal(means,stdevs), 1) gmm = D.MixtureSameFamily(mix, comp) optimizer1.zero_grad() x = torch.randn(5000,2)#this can be an arbitrary x samples loss2 = -gmm.log_prob(x).mean()#-densityflow.log_prob(inputs=x).mean() loss2.backward() optimizer1.step() print(i, loss2) What I would like to do is this: num_layers = 8 weights = torch.ones(8,requires_grad=True) means_coef = torch.tensor(10.,requires_grad=True) means = torch.tensor(torch.dstack([torch.linspace(1,means_coef.detach().item(),8)]*2).squeeze(),requires_grad=True) stdevs = torch.tensor(np.abs(np.random.randn(8,2)),requires_grad=True) parameters = [means_coef] optimizer1 = optim.SGD(parameters, lr=0.001, momentum=0.9) num_iter = 10001 for i in range(num_iter): means = torch.tensor(torch.dstack([torch.linspace(1,means_coef.detach().item(),8)]*2).squeeze(),requires_grad=True) mix = D.Categorical(weights) comp = D.Independent(D.Normal(means,stdevs), 1) gmm = D.MixtureSameFamily(mix, comp) optimizer1.zero_grad() x = torch.randn(5000,2)#this can be an arbitrary x samples loss2 = -gmm.log_prob(x).mean()#-densityflow.log_prob(inputs=x).mean() loss2.backward() optimizer1.step() print(i, means_coef) print(means_coef) However in this case the parameter is not updated and the grad value is always None. Any ideas how to fix this?
According to your instructions I have re-written your model. If you run it you can see that all the parameters are changing after the model is optimized. You can simply modify the GMM class as you need if you want to make a new one. import numpy as np import matplotlib.pyplot as plt import sklearn.datasets as datasets import torch from torch import nn from torch import optim import torch.distributions as D class GMM(nn.Module): def __init__(self, weights, base, scale, n_cell=8, shift=0, dim=2): super(GMM, self).__init__() self.weight = nn.Parameter(weights) self.base = nn.Parameter(base) self.scale = nn.Parameter(scale) self.grid = torch.arange(1, n_cell+1) self.shift = shift self.n_cell = n_cell self.dim = dim def trsf_grid(self): trsf = ( torch.log(self.scale * self.grid + self.shift) / torch.log(self.base) ).reshape(-1, 1) return trsf.expand(self.n_cell, self.dim) def forward(self, x, std): means = self.trsf_grid() mix = D.Categorical(self.weight) comp = D.Independent(D.Normal(means, std), 1) gmm = D.MixtureSameFamily(mix, comp) return -gmm.log_prob(x).mean() if __name__ == "__main__": weight = torch.ones(8) base = torch.tensor(3.) scale = torch.tensor(1.) stds = torch.tensor(np.abs(np.random.randn(8,2)),requires_grad=False) model = GMM(weight, base, scale) print(list(model.parameters())) optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9) for i in range(1000): optimizer.zero_grad() x = torch.randn(5000,2) loss = model(x, stds) loss.backward() optimizer.step() print(list(model.parameters())) In my case it returned the following parameters: [Parameter containing: tensor([1., 1., 1., 1., 1., 1., 1., 1.], requires_grad=True), Parameter containing: tensor(3., requires_grad=True), Parameter containing: tensor(1., requires_grad=True)] [Parameter containing: tensor([0.7872, 1.1010, 1.3390, 1.3757, 0.5122, 0.2884, 1.2597, 0.7597], requires_grad=True), Parameter containing: tensor(3.3207, requires_grad=True), Parameter containing: tensor(0.2814, requires_grad=True)] which indeed shows that the parameters are updating.
https://stackoverflow.com/questions/71765132/
Accuracy 0% for binary classification
I am using the OpenFL framework for doing Federated Learning experiments. I run their tutorial notebooks without problems, so for example I am able to run classification on MNIST and everything is ok. Now I am using 2 clients with 2 different datasets. However, my accuracy is around 0% for a binary classification problem. So, I have 2 classes, "neg" and "pos" for both datasets. Images of the first dataset are 3000x2951 while images of the second are 4892x4020. I resize both to 256x256. My network is a ResNet9 without any sigmoid at the end, because I am using BCEWithLogitsLoss(). Here a bit of code, to check if everything is ok: optimizer_adam = optim.Adam(params_to_update, lr=1e-4) def cross_entropy(output, target): """Binary cross-entropy metric """ target = target.unsqueeze(1) criterion = nn.BCEWithLogitsLoss() loss = criterion(output, target.float()) return loss def train(net_model, train_loader, optimizer, device, loss_fn=cross_entropy, some_parameter=None): torch.manual_seed(0) device='cpu' function_defined_in_notebook(some_parameter) train_loader = tqdm.tqdm(train_loader, desc="train") net_model.train() net_model.to(device) losses = [] for data, target in train_loader: data, target = torch.tensor(data).to(device), torch.tensor( target).to(device, dtype=torch.int64) optimizer.zero_grad() #data = data.type(torch.LongTensor) #target = target.type(torch.LongTensor) output = net_model(data) loss = loss_fn(output=output, target=target) loss.backward() optimizer.step() losses.append(loss.detach().cpu().numpy()) return {'train_loss': np.mean(losses),} @task_interface.register_fl_task(model='net_model', data_loader='val_loader', device='device') def validate(net_model, val_loader, device): torch.manual_seed(0) device = torch.device('cpu') net_model.eval() net_model.to(device) val_loader = tqdm.tqdm(val_loader, desc="validate") val_score = 0 total_samples = 0 with torch.no_grad(): for data, target in val_loader: samples = target.shape[0] total_samples += samples data, target = torch.tensor(data).to(device), \ torch.tensor(target).to(device, dtype=torch.int64) output = net_model(data) pred = (output >= 0.5).long() # Binarize predictions to 0 and 1 val_score = (pred == target).sum().cpu().item()/data.size(0) #val_score += pred.eq(target).sum().cpu().numpy() return {'acc': val_score / total_samples,} I think that all this is correct. So the only part that can be wrong is when I import the data because in this federated learning framework is a bit tricky. Basically my datasets are organized both in this way: /Dataset1(2)/Train(Test)/neg(pos)/images.png. I want to extract x_train, y_train, x_test and y_test because I am following exactly the structure of a tutorial that works. 
So this is my proposed solution: def download_data(self): """Download prepared dataset.""" image_list_train = [] image_list_test = [] x_train = [] y_train = [] x_test = [] y_test = [] base_dir_train = 'Montgomery_real_splitted/TRAIN/' base_dir_test = 'Montgomery_real_splitted/TEST/' for f in sorted(os.listdir(base_dir_train)): if os.path.isdir(base_dir_train+f): print(f"{f} is a target class") for i in sorted(os.listdir(base_dir_train+f)): y_train.append(f) im = Image.open(base_dir_train+f+'/'+i) x_train.append(im) for f in sorted(os.listdir(base_dir_test)): if os.path.isdir(base_dir_test+f): print(f"{f} is a target class") for i in sorted(os.listdir(base_dir_test+f)): y_test.append(f) imt=Image.open(base_dir_test+f+'/'+i) x_test.append(imt) y_train = np.array(y_train) y_test = np.array(y_test) for i in range(len(y_train)): if y_train[i]=="neg": y_train[i]=0 else: y_train[i]=1 y_train = y_train.astype(np.uint8) for i in range(len(y_test)): if y_test[i]=="neg": y_test[i]=0 else: y_test[i]=1 y_test = y_test.astype(np.uint8) print('Mont-china data was loaded!') return (x_train, y_train), (x_test, y_test) This code above is in a python script needed to load the data. Then, inside the Jupyter notebook I have these cells in order to import the dataset: normalize = T.Normalize( mean=[0.1307], std=[0.3081] ) augmentation = T.RandomApply( [T.RandomHorizontalFlip(), T.RandomRotation(10)], p=.8 ) training_transform = T.Compose( [T.Resize((256,256)), augmentation, T.ToTensor()] ) valid_transform = T.Compose( [T.Resize((256,256)), T.ToTensor()] ) class TransformedDataset(Dataset): def __init__(self, dataset, transform=None, target_transform=None): """Initialize Dataset.""" self.dataset = dataset self.transform = transform self.target_transform = target_transform def __len__(self): """Length of dataset.""" return len(self.dataset) def __getitem__(self, index): img, label = self.dataset[index] label = self.target_transform(label) if self.target_transform else label img = self.transform(img) if self.transform else img return img, label class MontChinaDataset(DataInterface): def __init__(self, **kwargs): self.kwargs = kwargs @property def shard_descriptor(self): return self._shard_descriptor @shard_descriptor.setter def shard_descriptor(self, shard_descriptor): """ Describe per-collaborator procedures or sharding. This method will be called during a collaborator initialization. Local shard_descriptor will be set by Envoy. """ self._shard_descriptor = shard_descriptor self.train_set = TransformedDataset( self._shard_descriptor.get_dataset('train'), transform=training_transform ) self.valid_set = TransformedDataset( self._shard_descriptor.get_dataset('val'), transform=valid_transform ) def get_train_loader(self, **kwargs): """ Output of this method will be provided to tasks with optimizer in contract """ generator=torch.Generator() generator.manual_seed(0) return DataLoader( self.train_set, batch_size=self.kwargs['train_bs'], shuffle=True, generator=generator ) def get_valid_loader(self, **kwargs): """ Output of this method will be provided to tasks without optimizer in contract """ return DataLoader(self.valid_set, batch_size=self.kwargs['valid_bs']) def get_train_data_size(self): """ Information for aggregation """ return len(self.train_set) def get_valid_data_size(self): """ Information for aggregation """ return len(self.valid_set) fed_dataset = MontChinaDataset(train_bs=16, valid_bs=16) The strange thing is that the loss decreases, while the accuracy remains 0 or around 0. 
[12:29:44] METRIC Round 0, collaborator env_one train result train_loss: 0.673127 experiment.py:116 [12:29:53] METRIC Round 0, collaborator env_one locally_tuned_model_validate result acc: 0.000000 experiment.py:116 [12:29:56] METRIC Round 0, collaborator env_one aggregated_model_validate result acc: 0.000000 experiment.py:116 [12:30:49] METRIC Round 0, collaborator env_two train result train_loss: 0.562856 experiment.py:116 [12:31:14] METRIC Round 0, collaborator env_two locally_tuned_model_validate result acc: 0.000000 experiment.py:116 [12:31:19] METRIC Round 0, collaborator env_two aggregated_model_validate result acc: 0.000000 experiment.py:116 [12:31:21] METRIC Round 0, collaborator Aggregator train result train_loss: 0.581464 experiment.py:116 METRIC Round 0, collaborator Aggregator locally_tuned_model_validate result acc: 0.000000 experiment.py:116 [12:31:22] METRIC Round 0, collaborator Aggregator aggregated_model_validate result acc: 0.000000 experiment.py:116 [12:31:39] METRIC Round 1, collaborator env_one train result train_loss: 0.637785 experiment.py:116 [12:31:41] METRIC Round 1, collaborator env_one locally_tuned_model_validate result acc: 0.000000 experiment.py:116 [12:31:44] METRIC Round 1, collaborator env_one aggregated_model_validate result acc: 0.000000 experiment.py:116 [12:31:55] METRIC Round 1, collaborator env_two train result train_loss: 0.432979 experiment.py:116 [12:32:00] METRIC Round 1, collaborator env_two locally_tuned_model_validate result acc: 0.000000 experiment.py:116 [12:32:05] METRIC Round 1, collaborator env_two aggregated_model_validate result acc: 0.000000 experiment.py:116 [12:32:08] METRIC Round 1, collaborator Aggregator train result train_loss: 0.467540 experiment.py:116 METRIC Round 1, collaborator Aggregator locally_tuned_model_validate result acc: 0.000000 experiment.py:116 METRIC Round 1, collaborator Aggregator aggregated_model_validate result acc: 0.000000 And this goes on for several rounds
I'm not sure if this will solve your problem, but your validation code has some bugs (two new lines annotated below): @task_interface.register_fl_task(model='net_model', data_loader='val_loader', device='device') def validate(net_model, val_loader, device): torch.manual_seed(0) device = torch.device('cpu') net_model.eval() net_model.to(device) val_loader = tqdm.tqdm(val_loader, desc="validate") val_score = 0 total_samples = 0 with torch.no_grad(): for data, target in val_loader: samples = target.shape[0] total_samples += samples data, target = torch.tensor(data).to(device), \ torch.tensor(target).to(device, dtype=torch.int64) output = net_model(data) ##new line vvv output = torch.sigmoid(output) #compress output into prob distribution pred = (output >= 0.5).long() # Binarize predictions to 0 and 1 ##changed line below val_score += (pred == target).sum().cpu().item() ###/data.size(0) #val_score += pred.eq(target).sum().cpu().numpy() return {'acc': val_score / total_samples,} Essentially there are two issues: You're comparing to 0.5 when you haven't put your output through a sigmoid. I know you said that you didn't do this because of your loss function, and that is correct, however you must use a sigmoid/softmax in eval mode. You're dividing your val_score by both data.size(0) (batch size?) and then also total_samples which is NOT the number of batches, but the count of all of your data. You weren't increasing val_score every iteration, you were resetting it. If you have a lot of batches this would explain why it was 0 or close to 0. Hopefully these fixes should get you closer to your goal!
https://stackoverflow.com/questions/71767784/
numpy how to speed up tanh?
numpy tanh seems much slower than its pytorch equivalence: import torch import numpy as np data=np.random.randn(128,64,32).astype(np.float32) %timeit torch.tanh(torch.tensor(data)) %timeit np.tanh(data) 820 µs ± 24.6 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) 3.89 ms ± 95.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) is there a way to speed up tanh in numpy? Thanks!
You could try with numexpr as follows: pip install numexpr Then: import numexpr as ne import numpy as np data=np.random.randn(128,64,32).astype(np.float32) resne = ne.evaluate("tanh(data)") resnp = np.tanh(data) Then check all close: In [16]: np.allclose(resne,resnp) Out[16]: True And check timings: In [14]: %timeit res = ne.evaluate("tanh(data)") 311 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) In [15]: %timeit np.tanh(data) 1.85 ms ± 7.43 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
https://stackoverflow.com/questions/71767821/
How to fix the error where the target batch size does not match when I use the CrossEntropyLoss function?
I am working on a training task with a CNN. When I created the loss function with CrossEntropyLoss and trained on the dataset, I got an error saying that the batch sizes do not match. This is the main code for training: net = SimpleConvolutionalNetwork() train_history, val_history = train(net, batch_size=32, n_epochs=10, learning_rate=0.001) plot_losses(train_history, val_history) This is the neural network code: class SimpleConvolutionalNetwork(nn.Module): # Q: why the scope of input not changed after relu?? def __init__(self) -> None: super(SimpleConvolutionalNetwork, self).__init__() # define convolutional filting layer(3 grids) and output size(18 channels) self.conv1 = nn.Conv2d(3, 18, kernel_size=3, stride=1, padding=1) # define pooling layer with max-pooling function self.pool = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) # define FCL and output layer by Linear function self.fc1 = nn.Linear(18*16*16, 64) self.fc2 = nn.Linear(64, 10) # Q: where the pooling layer?? def forward(self, x): # input shape: 3(grids) * 32 * 32(32*32 is the scope of each grid) # filted by conv1 defined in the construction function # then relu the filted x x = F.relu(self.conv1(x)) # now let 18*32*32 -> 18*16*16 x = x.view(-1, 18*16*16) # two step for 18*16*16(totally 4608) -> 64 # output by FC firstly, then relu again the output x = F.relu(self.fc1(x)) # 64 -> 10 finally x = self.fc2(x) return x In the train function, the error occurs where the loss function is applied. Because it is a very long context, the main part is shown below: def train(net, batch_size, n_epochs, learning_rate): ... # load the training dataset train_loader = get_train_loader(batch_size) # get validation dataset val_loader = get_val_loader(batch_size) # set batch size n_minibatches = len(train_loader) # set loss function and validation test checking criterion, optimizer = createLossAndOptimizer(net, learning_rate) train_history = [] val_history = [] training_start_time = time.time() best_error = np.inf best_model_path = "best_model_path" # GPU if possible net = net.to(device) for epoch in range(n_epochs): running_loss = 0.0 print_every = n_minibatches start_time = time.time() total_train_loss = 0.0 # step1: training the datasets for i, (inputs, labels) in enumerate(train_loader): inputs, labels = inputs.to(device), labels.to(device) optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() #print statistics running_loss += loss.item() total_train_loss += loss.item() # print every 10th of epoch if (i + 1) % (print_every + 1) == 0: print("Epoch {}, {:d}% \t train_loss: {:.2f} took: {:.2f}s".format( epoch + 1, int(100 * (i + 1) / n_minibatches), running_loss / print_every, time.time() - start_time)) running_loss = 0.0 start_time = time.time() train_history.append(total_train_loss / len(train_loader)) ...
The loss construction function and dataset loading are like this: def createLossAndOptimizer(net, learning_rate=0.001): # define a cross-entropy loss function: criterion = nn.CrossEntropyLoss() # optimizer include three parameters: net, learning rate, and # momentum rate for validate the dataset from over-fitting(default # value is 0.9) optimizer = opt.Adam(net.parameters(), lr=learning_rate) return criterion, optimizer def get_train_loader(batch_size): return th.utils.data.DataLoader(train_set,batch_size=batch_size,sampler=train_sampler, num_workers=num_workers) def get_val_loader(batch_size): return th.utils.data.DataLoader(train_set,batch_size=batch_size,sampler=train_sampler, num_workers=num_workers) However, the error tells me that the input batch size is larger than the target batch size: --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-19-07b692e7a2bb> in <module>() 173 net = SimpleConvolutionalNetwork() 174 --> 175 train_history, val_history = train(net, batch_size=32, n_epochs=10, learning_rate=0.001) 176 177 plot_losses(train_history, val_history) 3 frames /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing) 2844 if size_average is not None or reduce is not None: 2845 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) 2847 2848 ValueError: Expected input batch_size (128) to match target batch_size (32). At first I thought I had mistakenly set incorrect parameters because of the 'labels'. But I don't know how to fix it. Thanks for answering.
In the forward method of SimpleConvolutionalNetwork, after applying conv1 the tensor x has shape (batch_size, 18, 32, 32). So when doing x = x.view(-1, 18 * 16 * 16), the shape of x turns into (batch_size * 4, 18 * 16 * 16), and because the fully-connected layers applied afterwards don't change this new batch size, the output has shape (batch_size * 4, 10). My suggestion would be to use pooling right after the convolution, like: x = F.relu(self.conv1(x)) # after that x will have shape (batch_size, 18, 32, 32) x = self.pool(x) # after that x will have shape (batch_size, 18, 16, 16) That way forward will return a tensor with shape (batch_size, 10) and the batch size mismatch error won't occur.
https://stackoverflow.com/questions/71769754/
Network stops learning once batchsize is set to > 1
I started switching from Keras to Pytorch and played around with some simple feedforward network today. It is supposed to learn the squaring operation, i.e. f(x) = x^2. However, my network only learns reasonably if I set the batchsize to 1. Any other batchsize yields very poor results. I tried also different learning rates between 1 and 0.0001 to see if this somehow fixed it and also tested a few changes to the network but to no avail. Could anyone tell me what I am doing wrong, i.e. why does my network not learn once I set the batchsize to any value above 1? Find a minimal working example below. Thank you for your help! import numpy as np from random import randint import random import time from multiprocessing import Pool import torch from torch import nn from torch.utils.data import Dataset, DataLoader from torchvision import datasets, transforms class SquareDataset(Dataset): def __init__(self, num_samples): super(Dataset, self).__init__() self.num_samples = num_samples self.train = [None] * num_samples self.target = [None] * num_samples for i in range(0, num_samples): self.train[i] = random.random() * randint(1, 10) self.target[i] = self.train[i] ** 2 def __len__(self): return self.num_samples def __getitem__(self, index): return self.train[index], self.target[index] def trainNetwork(epochs=50): data_train = SquareDataset(num_samples=1000) data_train_loader = DataLoader(data_train, batch_size=1, shuffle=False) model = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1)) # Define the loss criterion = nn.MSELoss() # Optimizers require the parameters to optimize and a learning rate optimizer = torch.optim.Adam(model.parameters(), lr=0.001) for e in range(epochs): running_loss = 0 for number, labels in data_train_loader: optimizer.zero_grad() number = number.view(number.size(0), -1) output = model(number.float()) loss = criterion(output, labels.float()) loss.backward() optimizer.step() running_loss += loss.item() else: print(f"Training loss: {running_loss/len(data_train_loader)}") # some test outputs sample = torch.tensor([0.2]) out = model(sample.float()) print("Out:") print(out.item()) sample = torch.tensor([1]) out = model(sample.float()) print("Out:") print(out.item()) trainNetwork()
On the line loss = criterion(output, labels.float()), the first tensor has shape (batch_size, 1) while labels has shape (batch_size,). Hence, when batch_size > 1, broadcasting occurs, and this leads to a wrong objective (a case similar to this). To overcome the issue, rewrite the loss line with equal shapes, like: loss = criterion(output.squeeze(-1), labels.float())
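A small sketch that makes the broadcasting problem visible (PyTorch even emits a warning about it):
import torch
from torch import nn

output = torch.rand(4, 1)  # (batch_size, 1), as returned by the last Linear layer
labels = torch.rand(4)     # (batch_size,)
print((output - labels).shape)  # torch.Size([4, 4]): every prediction paired with every label
loss = nn.MSELoss()(output.squeeze(-1), labels)  # shapes now match: (4,) vs (4,)
print(loss.item())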
https://stackoverflow.com/questions/71770920/
How to use detectron2's augmentation with datasets loaded using register_coco_instances
I've trained a detectron2 model on custom data I labeled and exported in the coco format, but I now want to apply augmentation and train using the augmented data. How can I do that if I'm not using a custom DataLoader, but the register_coco_instances function. cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")) cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") predictor = DefaultPredictor(cfg) outputs = predictor(im) train_annotations_path = "./data/cvat-corn-train-coco-1.0/annotations/instances_default.json" train_images_path = "./data/cvat-corn-train-coco-1.0/images" validation_annotations_path = "./data/cvat-corn-validation-coco-1.0/annotations/instances_default.json" validation_images_path = "./data/cvat-corn-validation-coco-1.0/images" register_coco_instances( "train-corn", {}, train_annotations_path, train_images_path ) register_coco_instances( "validation-corn", {}, validation_annotations_path, validation_images_path ) metadata_train = MetadataCatalog.get("train-corn") dataset_dicts = DatasetCatalog.get("train-corn") cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")) cfg.DATASETS.TRAIN = ("train-corn",) cfg.DATASETS.TEST = ("validation-corn",) cfg.DATALOADER.NUM_WORKERS = 2 cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo cfg.SOLVER.IMS_PER_BATCH = 2 cfg.SOLVER.BASE_LR = 0.00025 cfg.SOLVER.MAX_ITER = 10000 cfg.SOLVER.STEPS = [] cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128 cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4 os.makedirs(cfg.OUTPUT_DIR, exist_ok=True) trainer = DefaultTrainer(cfg) trainer.resume_or_load(resume=False) trainer.train() I saw in the documentation you can load a dataset and apply augmentation like this: dataloader = build_detection_train_loader(cfg, mapper=DatasetMapper(cfg, is_train=True, augmentations=[ T.Resize((800, 800)) ])) But I'm not using a custom dataloader, what is the best approach to do this?
From my experience, how you register your datasets (i.e., tell Detectron2 how to obtain a dataset named "my_dataset") has no bearing on what dataloader to use during training (i.e., how to load information from a registered dataset and process it into a format needed by the model). So, you can register your dataset however you want - either by using the register_coco_instances function or by using the dataset APIs (DatasetCatalog, MetadataCatalog) directly; it doesn't matter. What matters is that you want to apply some transformation(s) during the data loading part. Basically, you want to customise the data loading part which can only be achieved by using a custom dataloader (unless you perform offline augmentation which is likely not what you want). Now, you don't need to define and use a custom dataloader directly in your top-level code. You can just create your own trainer deriving from DefaultTrainer, and override its build_train_loader method. This is as simple as the following. class MyTrainer(DefaultTrainer): @classmethod def build_train_loader(cls, cfg): mapper = DatasetMapper(cfg, is_train=True, augmentations=[T.Resize((800, 800))]) return build_detection_train_loader(cfg, mapper=mapper) In your top-level code then, the only change required would be to use MyTrainer instead of DefaultTrainer. trainer = MyTrainer(cfg) trainer.resume_or_load(resume=False) trainer.train()
https://stackoverflow.com/questions/71774744/
VGG parameters not updating
I am trying to train the VGG network from PyTorch to build a predictive model for the FashionMNIST dataset. But when I print the gradients out, it seems that the parameters are not updating and the gradient is always zero. Here is my implementation ## Specify Batch Size train_batch_size = 32 test_batch_size = 32 ## Specify Image Transforms img_transform = transforms.Compose([ transforms.Resize((64,64)), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)) ]) ## Download Datasets train_data = FashionMNIST('./data', transform=img_transform, download=True, train=True) test_data = FashionMNIST('./data', transform=img_transform, download=True, train=False) ## Initialize Dataloaders training_dataloader = DataLoader(train_data, batch_size=train_batch_size, shuffle=True) test_dataloader = DataLoader(test_data, batch_size=test_batch_size, shuffle=True) vgg16 = models.vgg16() model_a = vgg16 model_a.classifier[6] = nn.Linear(4096, 10) # to match the output dimension FashionMNIST model_a.cuda() # Hyperparameters and weights init num_epochs = 50 batch_size = 196 #64 learning_rate = 1e-3 def init_weights(m): if isinstance(m, nn.Linear): torch.nn.init.xavier_uniform(m.weight) m.bias.data.fill_(0.01) print("Weights initialized using xavier_uniform") model_a.apply(init_weights) optimizer = torch.optim.Adam(model_a.parameters(), lr=learning_rate, weight_decay=1e-5) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1) criterion = nn.CrossEntropyLoss() # Training for epoch in tqdm(range(num_epochs)): for i, (images, labels) in enumerate(training_dataloader): images = torch.cat((images, images, images),1) optimizer.zero_grad() outputs = model_a(images.cuda()) loss = criterion(outputs, labels.cuda()) loss.backward() optimizer.step()
Changing the training loop to this will resolve the issue (device here would be torch.device('cuda')): for i, (images, labels) in enumerate(training_dataloader): images = Variable(images.to(device)) labels = Variable(labels.to(device)) images = torch.cat((images, images, images),1) optimizer.zero_grad() outputs = model_a(images) loss = criterion(outputs, labels) loss.backward() optimizer.step()
https://stackoverflow.com/questions/71776519/
When would I use model.to("cuda:1") as opposed to model.to("cuda:0")?
I have a user with two GPU's; the first one is AMD which can't run CUDA, and the second one is a cuda-capable NVIDIA GPU. I am using the code model.half().to("cuda:0"). I'm not sure if the invocation successfully used the GPU, nor am I able to test it because I don't have any spare computer with more than 1 GPU lying around. In this case, does "cuda:0" mean the first device which can run CUDA, so it would've worked even if their first device was AMD? Or would I need to say "cuda:1" instead? How would I detect which number is the first CUDA-capable device?
I tested it, and as I suspected, model.half().to("cuda:0") will put your model on the first available GPU with CUDA support, i.e. the NVIDIA GPU in your case. The AMD GPU isn't visible as a CUDA device, so you can safely assume cuda:0 is a CUDA-enabled GPU; the AMD GPU won't be seen by your program. Have a good day.
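To verify this on the user's machine, you could enumerate the devices PyTorch actually sees; a short sketch using standard torch.cuda calls:
import torch

print(torch.cuda.is_available())  # True if at least one CUDA device is usable
print(torch.cuda.device_count())  # only CUDA-capable (NVIDIA) GPUs are counted
for i in range(torch.cuda.device_count()):
    print(f'cuda:{i} ->', torch.cuda.get_device_name(i))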
https://stackoverflow.com/questions/71776711/
How does the finetune on transformer (t5) work?
I am using PyTorch Lightning to finetune a T5 transformer on a specific task. However, I was not able to understand how the finetuning works. I always see this code: tokenizer = AutoTokenizer.from_pretrained(hparams.model_name_or_path) model = AutoModelForSeq2SeqLM.from_pretrained(hparams.model_name_or_path) I don't get how the finetuning is done. Are they freezing the whole model and training the head only (if so, how can I change the head), or are they using the pre-trained model as weight initialization? I have been looking for an answer for a couple of days already. Any links or help are appreciated.
If you are using PyTorch Lightning, then it won't freeze anything unless you tell it to do so; by default the pre-trained weights are just the initialization and the whole model is trained. Lightning has a callback which you can use to freeze your backbone and train only the head module. See Backbone Finetuning. Also check out Lightning-Flash: it allows you to quickly build models for various text tasks and uses the Transformers library for the backbone. You can use the Trainer to specify which kind of finetuning you want to apply for your training. Thanks
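A hedged sketch of that callback; it assumes your LightningModule stores the pretrained T5 under an attribute named backbone, which is what BackboneFinetuning looks for:
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import BackboneFinetuning

# keep the T5 backbone frozen for the first 10 epochs, then gradually unfreeze it
finetuning = BackboneFinetuning(unfreeze_backbone_at_epoch=10)
trainer = Trainer(callbacks=[finetuning])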
https://stackoverflow.com/questions/71781813/
Pytorch: How to transform image patches into matrix of feature vectors?
For use as input in a neural network, I want to obtain a matrix of feature vectors from image patches. I'm using the Fashion-MNIST dataset (28x28 images) and have used Tensor.unfold to obtain patches (16 7x7 patches) by doing: #example on one image mnist_train = torchvision.datasets.FashionMNIST( root="../data", train=True, transform=transforms.Compose([transforms.ToTensor()]), download=True) x = mnist_train[0][0][-1, :, :] x = x.unfold(0, 7, 7).unfold(1, 7, 7) x.shape >>> torch.Size([4, 4, 7, 7]) Here I end up with a 4x4 tensor of 7x7 patches, however I want to vectorize each patch to obtain a matrix X with dimensions (16: number of patches x d: dimensions of feature vector). I'm unsure whether flatten() can be used here and how I would go about using it.
To close this out, moving the content of the comments to here: #example on one image mnist_train = torchvision.datasets.FashionMNIST( root="../data", train=True, transform=transforms.Compose([transforms.ToTensor()]), download=True) x = mnist_train[0][0][-1, :, :] x = x.unfold(0, 7, 7).unfold(1, 7, 7) x.shape Output: >>> torch.Size([4, 4, 7, 7]) And then: x = x.reshape(-1, 7, 7) x.shape Output: torch.Size([16, 7, 7])
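Since the question asked for a (num_patches x d) matrix, one more reshape on top of the above gives exactly that; a small sketch:
import torch

x = torch.rand(4, 4, 7, 7)  # the unfolded patches
X = x.reshape(-1, 7 * 7)    # (16, 49): one flattened feature vector per patch
print(X.shape)              # torch.Size([16, 49])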
https://stackoverflow.com/questions/71781971/
Pytorch-> LSTM-> RuntimeError: input must have 3 dimensions, got 2
I'm facing an error with the LSTM input dimensions, with the following model code:
class Model(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, num_keys):
        super(Model, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_keys)
    def forward(self, x):
        h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device)
        out, _ = self.lstm(x, (h0, c0))
        out = self.fc(out[:, -1, :])
        return out
Input sequence shape: 2048, 10, 1 -> (#batch, window_size, #input)
Input label shape: 2048
I have the following code for training the model:
for epoch in range(num_epochs):  # Loop over the dataset multiple times
    train_loss = 0
    for step, (seq, label) in enumerate(dataloader):
        # Forward pass
        seq = seq.clone().detach().view(-1, window_size, input_size).to(device)
        output = model(seq)
        loss = criterion(output, label.to(device))
        print('step: ', step, 'sequence: ', seq.shape, 'Label: ', label.shape, 'model output: ', output.shape)
        # Backward and optimize
        optimizer.zero_grad()
        loss.backward()
        train_loss += loss.item()
        optimizer.step()
        writer.add_graph(model, seq)
I get the following error:
RuntimeError: input must have 3 dimensions, got 2
Can anyone tell me what the problem is and how to fix it? I tried seq.unsqueeze(-1) but it's not working.
I got it: the input data contained -1 values. I used the map function to convert the negative integers into positive ones, as follows:
line = tuple(map(int, line.strip().split()))
Please make sure that for classification your data contains only non-negative class labels!
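If the criterion is nn.CrossEntropyLoss (an assumption -- the loss isn't shown in the question), it expects class indices in the range [0, num_classes), so a quick sanity check catches stray negatives early -- a minimal sketch with a hypothetical labels tensor:
import torch

labels = torch.tensor([0, 3, -1, 2])  # hypothetical labels with a stray negative
assert (labels >= 0).all(), "found negative class labels"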
https://stackoverflow.com/questions/71785855/
How to get the ROC curve of a neural network?
I'm trying to get the ROC curve for my neural network. My network uses PyTorch and I'm using sklearn to get the ROC curve. My model outputs the binary right-or-wrong prediction and also the probability of the output:
output = model(batch_X)
_, pred = torch.max(output, dim=1)
I give the model both samples of input data (am I doing this part right, or should it be only one sample of the input data, not both?). I take the probability (the _) and the labels of what both inputs should be, and feed them to sklearn like so:
nn_fpr, nn_tpr, nn_thresholds = roc_curve( "labels go here" , "probability go here" )
Next I plot it with:
plt.plot(nn_fpr, nn_tpr, marker='.')
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
It comes out reflecting how accurate my model is (0.0167% wrong out of 108,000), but the graph is concave, and I have been told it's normally not supposed to be concave. (attached pictures) I have been using neural nets for a while, but I have never been asked to plot the ROC curve, so my question is: am I doing this right? Also, should it be both labels or just one? All the examples I have seen for neural networks use Keras, which, if I remember right, has a probability function, so I don't know if PyTorch outputs the probability in the way sklearn wants it. All the other examples I can find aren't for neural networks, and they have a probability function built in.
The function roc_curve expects an array with true labels y_true and an array with probabilities for the positive class y_score (which usually means class 1). Therefore what you need is not
_, pred = torch.max(output, dim=1)
but simply (if your model outputs probabilities, which is not the default in PyTorch)
probabilities = output[:, 1]
or (if your model outputs logits, which is the common case in PyTorch)
import torch.nn.functional as F
probabilities = F.softmax(output, dim=1)[:, 1]
After that, assuming that the array with true labels is called labels and has shape (N,), you call roc_curve as:
y_score = probabilities.detach().numpy()
nn_fpr, nn_tpr, nn_thresholds = roc_curve(labels, y_score)
That way you'll get correct results (which wasn't the case with torch.max). As a recommendation -- for binary classification I would suggest using a model with a sigmoid at the end and one output (the probability of the positive class), like:
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
                      nn.ReLU(),
                      nn.Linear(hidden_dim, 1),
                      nn.Sigmoid())
That way you'll train the model with nn.BCELoss, which expects probabilities (unlike nn.CrossEntropyLoss, which expects logits). The code to get the ROC curve also gets simpler:
probabilities = model(batch_X)
y_score = probabilities.squeeze(-1).detach().numpy()
fpr, tpr, threshold = roc_curve(labels, y_score)
Take a look at the gist where a ROC curve is created for a neural network classifier.
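For completeness, a minimal plotting sketch using the labels and y_score values above (roc_auc_score is an optional extra for labeling the curve):
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

fpr, tpr, _ = roc_curve(labels, y_score)
plt.plot(fpr, tpr, marker='.', label=f'AUC = {roc_auc_score(labels, y_score):.3f}')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()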
https://stackoverflow.com/questions/71788074/
How can I find multiple maximum indices of a torch tensor?
If I have a tensor which has multiple maximum values, how can I get all the indices of the maximum value? I have tried torch.argmax(tensor) but it only gives me the first index.
>>> a_list = [3,23,53,32,53]
>>> a_tensor = torch.Tensor(a_list)
>>> a_tensor
tensor([ 3., 23., 53., 32., 53.])
>>> torch.max(a_tensor)
tensor(53.)
>>> torch.argmax(a_tensor)
tensor(2)
I have the following function to do it, but I was wondering if there are more efficient approaches:
def max_tensor_indices(tensor_t, max_value):
    tensor_list = tensor_t[0]
    indices_list = []
    for i in range(len(tensor_list)):
        if tensor_list[i] == max_value:
            indices_list.append(i)
    return indices_list
Find the maximum value, then find all elements with that value. (x == torch.max(x)).nonzero() Note: nonzero may also be called with as_tuple=True, which may be helpful.
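Applied to the tensor from the question -- a minimal sketch:
import torch

a = torch.tensor([3., 23., 53., 32., 53.])
idx = (a == a.max()).nonzero().flatten()  # flatten the (k, 1) index column to 1D
print(idx)  # tensor([2, 4])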
https://stackoverflow.com/questions/71788996/
I am running into a gradient computation inplace error
I am running this code (https://github.com/ayu-22/BPPNet-Back-Projected-Pyramid-Network/blob/master/Single_Image_Dehazing.ipynb) on a custom dataset, but I am running into this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [1, 512, 4, 4]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
Please refer to the code link above for clarification of where the error is occurring. I am running this model on a custom dataset; the data loader part is pasted below.
import torchvision.transforms as transforms

train_transform = transforms.Compose([
    transforms.Resize((256,256)),
    #transforms.RandomResizedCrop(256),
    #transforms.RandomHorizontalFlip(),
    #transforms.ColorJitter(),
    transforms.ToTensor(),
    transforms.Normalize([0.5,0.5,0.5], [0.5,0.5,0.5])
])

class Flare(Dataset):
    def __init__(self, flare_dir, wf_dir, transform=None):
        self.flare_dir = flare_dir
        self.wf_dir = wf_dir
        self.transform = transform
        self.flare_img = os.listdir(flare_dir)
        self.wf_img = os.listdir(wf_dir)
    def __len__(self):
        return len(self.flare_img)
    def __getitem__(self, idx):
        f_img = Image.open(os.path.join(self.flare_dir, self.flare_img[idx])).convert("RGB")
        for i in self.wf_img:
            if (self.flare_img[idx].split('.')[0][4:] == i.split('.')[0]):
                wf_img = Image.open(os.path.join(self.wf_dir, i)).convert("RGB")
                break
        f_img = self.transform(f_img)
        wf_img = self.transform(wf_img)
        return f_img, wf_img

flare_dir = '../input/flaredataset/Flare/Flare_img'
wf_dir = '../input/flaredataset/Flare/Without_Flare_'
flare_img = os.listdir(flare_dir)
wf_img = os.listdir(wf_dir)
wf_img.sort()
flare_img.sort()
print(wf_img[0])
train_ds = Flare(flare_dir, wf_dir, train_transform)
train_loader = torch.utils.data.DataLoader(dataset=train_ds, batch_size=BATCH_SIZE, shuffle=True)
To get a better idea of the dataset class, you can compare my dataset class with the link pasted above.
Your code is getting stuck in what is called the "backpropagation" of your GAN network. The backward pass you have defined follows this order:
def backward(self, unet_loss, dis_loss):
    dis_loss.backward(retain_graph = True)
    self.dis_optimizer.step()
    unet_loss.backward()
    self.unet_optimizer.step()
So in your backward pass you first propagate dis_loss, which is the combination of the discriminator and adversarial losses, and then propagate unet_loss, which is the combination of the UNet, SSIM and content losses -- but unet_loss is connected to the discriminator's output. PyTorch therefore raises this error, because you take the optimizer step for dis_loss (which modifies the discriminator's weights in place) before unet_loss has been backpropagated through them. I would recommend changing the code as follows:
def backward(self, unet_loss, dis_loss):
    dis_loss.backward(retain_graph = True)
    unet_loss.backward()
    self.dis_optimizer.step()
    self.unet_optimizer.step()
This will get your training running! You can also experiment with retain_graph=True. And great work on the BPPNet work.
https://stackoverflow.com/questions/71793678/
Dimensionality problem with PyTorch Conv layers
I'm trying to train a neural network in PyTorch on some input signals. The layers are Conv1d. The shape of my input is [100, 10], meaning 100 signals of length 10. When I execute the training, I get this error:
Given groups=1, weight of size [100, 10, 1], expected input[1, 1, 10] to have 10 channels, but got 1 channels instead
config = [10, 100, 100, 100, 100, 100, 100, 100]
batch_size = 1
epochs = 10
learning_rate = 0.001
kernel_size = 1

class NeuralNet(nn.Module):
    def __init__(self, config, kernel_size=1):
        super().__init__()
        self.config = config
        self.layers = nn.ModuleList([nn.Sequential(
            nn.Conv1d(self.config[i], self.config[i + 1], kernel_size = kernel_size),
            nn.ReLU()) for i in range(len(self.config)-1)])
        self.last_layer = nn.Linear(self.config[-1], 3)
        self.layers.append(nn.Flatten())
        self.layers.append(self.last_layer)
    def forward(self, x):
        for i, l in enumerate(self.layers):
            x = l(x)
        return x

def loader(train_data, batch_size):
    inps = torch.tensor(train_data[0])
    tgts = torch.tensor(train_data[1])
    inps = torch.unsqueeze(inps, 1)
    dataset = TensorDataset(inps, tgts)
    train_dataloader = DataLoader(dataset, batch_size = batch_size)
    return train_dataloader
At first my code was without the unsqueeze(inps) line and I had the exact same error; I then added this line, thinking that I must have an input of size (num_examples, num_channels, length_of_signal), but it didn't resolve the problem at all. Thank you in advance for your answers.
nn.Conv1d expects input of shape (batch_size, num_of_channels, seq_length). Its parameters allow you to directly set the number of output channels (out_channels) and to change the length of the output using, for example, stride. For a conv1d layer to work correctly it should know the number of input channels (in_channels), which is not the case for the first convolution: input.shape == (batch_size, 1, 10), therefore num_of_channels = 1, while the convolution in self.layers[0] expects this value to equal 10 (because in_channels is set by self.config[0] and self.config[0] == 10). Hence, to fix this, prepend one more value to config:
config = [10, 100, 100, 100, 100, 100, 100, 100]  # as in the snippet above
config = [1] + config
At this point the convs should be working fine, but there is another obstacle in self.layers -- the linear layer at the end. If a kernel_size of 1 is used, then after the final convolution the batch will have shape (batch_size, 100, 10), and after flattening (batch_size, 100 * 10), while last_layer expects input of shape (batch_size, 100). So, if the length of the sequence after the final conv layer is known (which is certainly the case if you're using a kernel_size of 1 with the default stride of 1 and the default padding of 0 -- the length stays the same), last_layer should be defined as:
self.last_layer = nn.Linear(final_length * self.config[-1], 3)
and in the snippet above final_length can be set to 10 (since the conditions in the previous parentheses are satisfied). To get an idea of how shapes are transformed by conv1d, take a look at the simple example in the gif below (here batch_size is equal to 1):
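A quick way to sanity-check the two fixes -- a minimal sketch of just the shapes involved:
import torch
import torch.nn as nn

# First conv must take in_channels=1, since the input is (batch, 1, 10)
conv = nn.Conv1d(in_channels=1, out_channels=100, kernel_size=1)
x = torch.randn(1, 1, 10)   # (batch_size, num_of_channels, seq_length)
out = conv(x)
print(out.shape)            # torch.Size([1, 100, 10])
flat = nn.Flatten()(out)
print(flat.shape)           # torch.Size([1, 1000]) -> so Linear(10 * 100, 3)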
https://stackoverflow.com/questions/71796161/
Pad a numpy array to meet a required size
I need to pass an array of a specific shape (4,5) to a function. However, when this array is initially generated, it may be smaller than the required shape, e.g. (2,5) or (1,5). How would I pad this array to meet my required (4,5) shape?
For a 2D array,
np.pad(x, ((num_rows_before, num_rows_after), (num_cols_before, num_cols_after)))
will get you the desired shape. Example:
In [11]: x
Out[11]: array([[8, 3, 5, 1, 5]])

In [12]: np.pad(x, ((3, 0), (0, 0)))
Out[12]:
array([[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [8, 3, 5, 1, 5]])

In [13]: np.pad(x, ((0, 3), (0, 0)))
Out[13]:
array([[8, 3, 5, 1, 5],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]])
In general, you can pass n 2-tuples for an n-dimensional array, where each 2-tuple consists of a before/after pair of integers that dictates how much to pad along each axis and in each direction.
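To hit the exact (4, 5) target from the question for any smaller input, the pad widths can be computed from the shapes -- a minimal sketch, assuming the input never exceeds the target along either axis:
import numpy as np

def pad_to_shape(x, target=(4, 5)):
    # Pad with zeros after the existing data along each axis
    pad = [(0, t - s) for s, t in zip(x.shape, target)]
    return np.pad(x, pad)

x = np.zeros((2, 5))             # hypothetical (2, 5) input
print(pad_to_shape(x).shape)     # (4, 5)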
https://stackoverflow.com/questions/71813372/