PyTorch bool value of tensor with more than one value is ambiguous
I am trying to train a neural network with PyTorch, but I get the error in the title. I followed this tutorial, and I just applied some small changes to meet my needs. Here's the network: class ChordClassificationNetwork(nn.Module): def __init__(self, train_model=False): super(ChordClassificationNetwork, self).__init__() self.train_model = train_model self.flatten = nn.Flatten() self.firstConv = nn.Conv2d(3, 64, (3, 3)) self.secondConv = nn.Conv2d(64, 64, (3, 3)) self.pool = nn.MaxPool2d(2) self.drop = nn.Dropout(0.25) self.fc1 = nn.Linear(33856, 256) self.fc2 = nn.Linear(256, 256) self.outLayer = nn.Linear(256, 7) def forward(self, x): x = self.firstConv(x) x = F.relu(x) x = self.pool(x) x = self.secondConv(x) x = F.relu(x) x = self.pool(x) x = self.drop(x) x = self.flatten(x) x = self.fc1(x) x = F.relu(x) x = self.drop(x) x = self.fc2(x) x = F.relu(x) x = self.drop(x) x = self.outLayer(x) output = F.softmax(x, dim=1) return output and the accuray check part, the one that is causing the error: device = ("cuda" if torch.cuda.is_available() else "cpu") transformations = transforms.Compose([ transforms.Resize((100, 100)) ]) num_epochs = 10 learning_rate = 0.001 train_CNN = False batch_size = 32 shuffle = True pin_memory = True num_workers = 1 dataset = GuitarDataset("../chords_data/cropped_images/train", transform=transformations) train_set, validation_set = torch.utils.data.random_split(dataset, [int(0.8 * len(dataset)), len(dataset) - int(0.8*len(dataset))]) train_loader = DataLoader(dataset=train_set, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory) validation_loader = DataLoader(dataset=validation_set, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory) model = ChordClassificationNetwork().to(device) criterion = nn.BCELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) def check_accuracy(loader, model): if loader == train_loader: print("Checking accuracy on training data") else: print("Checking accuracy on validation data") num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device) y = y.to(device=device) scores = model(x) predictions = torch.tensor([1.0 if i >= 0.5 else 0.0 for i in scores]).to(device) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print( f"Got {num_correct} / {num_samples} with accuracy {float(num_correct) / float(num_samples) * 100:.2f}" ) return f"{float(num_correct) / float(num_samples) * 100:.2f}" def train(): model.train() for epoch in range(num_epochs): loop = tqdm(train_loader, total=len(train_loader), leave=True) if epoch % 2 == 0: loop.set_postfix(val_acc=check_accuracy(validation_loader, model)) for imgs, labels in loop: imgs = imgs.to(device) labels = labels.to(device) outputs = model(imgs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() loop.set_description(f"Epoch [{epoch}/{num_epochs}]") loop.set_postfix(loss=loss.item()) if __name__ == "__main__": train() The error is caused on this line: predictions = torch.tensor([1.0 if i >= 0.5 else 0.0 for i in scores]).to(device) but I don't understand why. I saw some other answers but those could not fix my problem. 
Complete stack trace: 0%| | 0/13 [00:00<?, ?it/s]Checking accuracy on validation data Traceback (most recent call last): File "/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&_Chords_Recognition/ChordsClassification/train_CCN.py", line 80, in <module> train() File "/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&_Chords_Recognition/ChordsClassification/train_CCN.py", line 66, in train loop.set_postfix(val_acc=check_accuracy(validation_loader, model)) File "/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&_Chords_Recognition/ChordsClassification/train_CCN.py", line 52, in check_accuracy predictions = torch.tensor([1.0 if i >= 0.5 else 0.0 for i in scores]).to(device) File "/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&_Chords_Recognition/ChordsClassification/train_CCN.py", line 52, in <listcomp> predictions = torch.tensor([1.0 if i >= 0.5 else 0.0 for i in scores]).to(device) RuntimeError: Boolean value of Tensor with more than one value is ambiguous 0%| | 0/13 [00:02<?, ?it/s]
The output of the model is a discrete distribution over your 7 classes, so each element the list comprehension iterates over is itself a 7-value tensor; i >= 0.5 therefore produces a boolean tensor with several elements, which Python cannot reduce to a single True/False. That is what the "Boolean value of Tensor with more than one value is ambiguous" error means. To retrieve the predicted class you can directly apply an argmax over the class dimension: scores = model(x) predictions = scores.argmax(1)
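For reference, here is a minimal sketch of the corrected accuracy loop (a hedged example; it assumes integer class labels in y, as the rest of the question implies):

```python
import torch

def check_accuracy(loader, model, device):
    num_correct, num_samples = 0, 0
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            scores = model(x)               # shape (batch_size, 7)
            predictions = scores.argmax(1)  # most likely class index per sample
            num_correct += (predictions == y).sum().item()
            num_samples += y.size(0)
    return 100.0 * num_correct / num_samples
```

Note also that nn.BCELoss expects binary targets; for a 7-class problem, nn.CrossEntropyLoss applied to the raw logits (with the final softmax removed) is the usual pairing.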
https://stackoverflow.com/questions/69657133/
Convolutional Neural Network model predicts no cats
I have trained my first CNN model. I took first 100 images of cats and first 100 images of dogs from Kaggle dataset as my custom dataset. After the model is trained I'm trying to feed the same images back to the model to see predictions. As result I get score from 0.5 to 0.6 on all images. While I though it should be <0.5 for cats and >0.5 for dogs. Is it a problem of my model architecture, the training process or my dataset is just too small? Why no images gets below 0.5 at all? Here is my code: First I generate .csv file to be processed: import pandas as pd import os import torch device = ("cuda" if torch.cuda.is_available() else "cpu") train_df = pd.DataFrame(columns=["img_name","label"]) train_df["img_name"] = os.listdir("train/") for idx, i in enumerate(os.listdir("train/")): if "cat" in i: train_df["label"][idx] = 0 if "dog" in i: train_df["label"][idx] = 1 train_df.to_csv (r'train_csv.csv', index = False, header=True) Then I prepare the dataset: from torch.utils.data import Dataset import pandas as pd import os from PIL import Image import torch class CatsAndDogsDataset(Dataset): def __init__(self, root_dir, annotation_file, transform=None): self.root_dir = root_dir self.annotations = pd.read_csv(annotation_file) self.transform = transform def __len__(self): return len(self.annotations) def __getitem__(self, index): img_id = self.annotations.iloc[index, 0] img = Image.open(os.path.join(self.root_dir, img_id)).convert("RGB") y_label = torch.tensor(float(self.annotations.iloc[index, 1])) if self.transform is not None: img = self.transform(img) return (img, y_label) This is my model: import torch.nn as nn import torchvision.models as models class CNN(nn.Module): def __init__(self, train_CNN=False, num_classes=1): super(CNN, self).__init__() self.train_CNN = train_CNN self.inception = models.inception_v3(pretrained=True, aux_logits=False) self.inception.fc = nn.Linear(self.inception.fc.in_features, num_classes) self.relu = nn.ReLU() self.dropout = nn.Dropout(0.5) self.sigmoid = nn.Sigmoid() def forward(self, images): features = self.inception(images) return self.sigmoid(self.dropout(self.relu(features))).squeeze(1) This is my hyper-params, transformations and dataloaders: from torch.utils.data import DataLoader import torchvision.transforms as transforms num_epochs = 10 learning_rate = 0.00001 train_CNN = False batch_size = 32 shuffle = True pin_memory = True num_workers = 0 transform = transforms.Compose( [ transforms.Resize((356, 356)), transforms.CenterCrop((299, 299)), transforms.ToTensor(), transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], ), ] ) dataset = CatsAndDogsDataset("train","train_csv.csv",transform=transform) train_size = int(0.8 * len(dataset)) validation_size = len(dataset) - train_size train_set, validation_set = torch.utils.data.random_split(dataset, [train_size, validation_size]) train_loader = DataLoader(dataset=train_set, shuffle=shuffle, batch_size=batch_size,num_workers=num_workers,pin_memory=pin_memory) validation_loader = DataLoader(dataset=validation_set, shuffle=shuffle, batch_size=batch_size,num_workers=num_workers, pin_memory=pin_memory) model = CNN().to(device) criterion = nn.BCELoss() optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) for name, param in model.inception.named_parameters(): if "fc.weight" in name or "fc.bias" in name: param.requires_grad = True else: param.requires_grad = train_CNN and accuracy check: def check_accuracy(loader, model): num_correct = 0 num_samples = 0 model.eval() with 
torch.no_grad(): for x, y in loader: x = x.to(device=device) y = y.to(device=device) scores = model(x) predictions = torch.tensor([1.0 if i >= 0.5 else 0.0 for i in scores]).to(device) num_correct += (predictions == y).sum() num_samples += predictions.size(0) model.train() return f"{float(num_correct)/float(num_samples)*100:.2f}" And this is my training function: from tqdm import tqdm def train(): model.train() for epoch in range(num_epochs): loop = tqdm(train_loader, total = len(train_loader), leave = True) for imgs, labels in loop: imgs = imgs.to(device) labels = labels.to(device) outputs = model(imgs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() loop.set_description(f"Epoch [{epoch}/{num_epochs}]") loop.set_postfix(loss = loss.item(), val_acc = check_accuracy(validation_loader, model)) if __name__ == "__main__": train() Epoch [0/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [12:00<00:00, 120.10s/it, loss=0.652, val_acc=39.02] Epoch [1/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [11:51<00:00, 118.61s/it, loss=0.497, val_acc=39.02] Epoch [2/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [11:27<00:00, 114.51s/it, loss=0.693, val_acc=39.02] Epoch [3/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [11:04<00:00, 110.77s/it, loss=0.531, val_acc=39.02] Epoch [4/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [10:58<00:00, 109.68s/it, loss=0.693, val_acc=39.02] Epoch [5/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [12:03<00:00, 120.51s/it, loss=0.803, val_acc=39.02] Epoch [6/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [11:33<00:00, 115.62s/it, loss=0.693, val_acc=39.02] Epoch [7/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [11:27<00:00, 114.56s/it, loss=0.675, val_acc=39.02] Epoch [8/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [11:42<00:00, 117.10s/it, loss=0.806, val_acc=39.02] Epoch [9/10]: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6/6 [12:15<00:00, 122.58s/it, loss=0.768, val_acc=39.02] Then I loop through the model checking predictions on each image (the dataset variable is available because it is in the same Jupyter Notebook): import numpy as np with torch.no_grad(): for index in range(len(dataset)): item = dataset[index] image_tensor = item[0] true_target = item[1] img_np = np.array(image_tensor) img_normalized = img_np.transpose(1, 2, 0) image = torch.unsqueeze(image_tensor, 0) prediction = model(image) predicted_class = prediction[0] print("class: " + str(true_target.item()) + " score: " + str(predicted_class.item())) The output: class: 0.0 score: 0.547210156917572 class: 0.0 score: 0.5 class: 0.0 score: 0.5348594188690186 class: 0.0 score: 0.5336627960205078 class: 0.0 score: 0.5178861618041992 class: 0.0 score: 0.5692692995071411 class: 0.0 score: 0.5 class: 0.0 score: 0.5381814241409302 class: 0.0 score: 0.54604572057724 class: 0.0 score: 0.5157472491264343 class: 0.0 score: 0.5257323980331421 class: 0.0 score: 0.5137990713119507 class: 0.0 score: 0.5247158408164978 class: 0.0 score: 0.5320644378662109 class: 0.0 score: 0.5775637626647949 class: 0.0 score: 0.528205156326294 class: 0.0 score: 0.5457945466041565 class: 0.0 score: 0.5301501154899597 class: 0.0 score: 0.5102765560150146 class: 0.0 score: 0.5069065690040588 class: 0.0 score: 0.519408106803894 class: 0.0 score: 0.5414850115776062 class: 0.0 score: 0.5041879415512085 class: 0.0 score: 0.5055546760559082 show more (open the raw output data in a text editor) ... 
class: 1.0 score: 0.5 class: 1.0 score: 0.5 class: 1.0 score: 0.5166758894920349 class: 1.0 score: 0.5343206524848938 class: 1.0 score: 0.5716230869293213 So no cats get predicted
Can you change your model architecture to this (just remove the dropout and the ReLU)? import torch.nn as nn import torchvision.models as models class CNN(nn.Module): def __init__(self, train_CNN=False, num_classes=1): super(CNN, self).__init__() self.train_CNN = train_CNN self.inception = models.inception_v3(pretrained=True, aux_logits=False) self.inception.fc = nn.Linear(self.inception.fc.in_features, num_classes) self.sigmoid = nn.Sigmoid() def forward(self, images): features = self.inception(images) return self.sigmoid(features).squeeze(1) The ReLU is the reason no image ever scores below 0.5: it clamps the logit to be non-negative, and sigmoid(x) >= 0.5 whenever x >= 0. Also call model.eval() before doing inference, since you have used dropout and it keeps randomly zeroing activations in training mode.
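A small sketch of the prediction loop run in evaluation mode (names follow the question's code):

```python
import torch

model.eval()  # disables dropout so scores are deterministic
with torch.no_grad():
    for index in range(len(dataset)):
        image_tensor, true_target = dataset[index]
        image = image_tensor.unsqueeze(0)     # add a batch dimension
        score = model(image).item()           # sigmoid output in [0, 1]
        predicted = 1 if score >= 0.5 else 0  # 0 = cat, 1 = dog per the CSV labels
        print(f"class: {true_target.item()} score: {score:.4f} predicted: {predicted}")
```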
https://stackoverflow.com/questions/69659913/
Creating new observations in pytorch ImageFolder
I am new to PyTorch, and what I would like to do is probably easy, but I have not found anything online about actually increasing the number of observations without adding them to the image folder (in my case). I don't want to add images to the folder because I want to play around with different transformations and see what works best without deleting images all the time. So what I do is: trf = transforms.Compose([ transforms.ToTensor(), transforms.RandomRotation(degrees=45), transforms.Grayscale(num_output_channels=1), transforms.Normalize(0, 1), transforms.functional.invert ]) train_data = torchvision.datasets.ImageFolder(root='./splitted_data/train', transform= trf) print(len(train_data)) train = DataLoader(train_data, batch_size= batch_size, shuffle= True, num_workers= os.cpu_count()) Here the output will be the same as the number of images in all folders, which means that the transformations were applied to the existing observations, but this is not what I want to achieve. I want each transformation to be a separate copy. How can I do that?
You can implement a transform wrapper that applies transforms sequentially and outputs every single transform combination. The issue with Torchvision's random transforms is that the parameters are sampled when the transform is called, which makes it difficult to reproduce identical transformations. One alternative is to stack or concatenate all the images and apply the transform once on that stack. I divided the transformation pipeline into three sections: the preprocessing transforms, the post-processing transforms (the latter should not be stochastic since they are applied separately), and the main transforms, i.e. the list of transforms you want to create combinations from, here RandomRotation and Grayscale. Be aware that this solution has limitations when working with transforms that affect the channel count, such as Grayscale. Generally, you want to keep the same tensor dimensions, otherwise your concatenations and/or stacks will fail. Here is a possible solution: class Combination(nn.Module): def __init__(self, transforms, pre, post): super().__init__() self.transforms = transforms self.pre = T.Compose(pre) self.post = T.Compose(post) def stacked_t(self, t, x): lengths = [len(o) for o in x] return t(torch.cat(x)).split(lengths) def forward(self, x): out = [self.pre(x)[None]] for t in self.transforms: out += self.stacked_t(t, out) # <- for every transform `t` we double # the number of instances in `out` out = [self.post(o)[0] for o in out] return out Initialize the transform combination with an input image img: >>> t = Combination(pre=[T.ToTensor()], ... post=[T.Normalize(0, 1), ... T.functional.invert], ... transforms=[T.RandomRotation(degrees=45), ... T.Grayscale(num_output_channels=1)]) Then img_ = t(img) yields the four transform combinations: img_[0] (no main transform), img_[1] (rotation), img_[2] (grayscale), and img_[3] (rotation + grayscale).
https://stackoverflow.com/questions/69661238/
Pad multiple torch tensor over the last dim
I have multiple torch tensors with the following shapes x1 = torch.Size([1, 512, 177]) x2 = torch.Size([1, 512, 250]) x3 = torch.Size([1, 512, 313]) How can I pad all these tensors with 0 over the last dimension, so that they all have a single shape like ([1, 512, 350])? What I tried to do is to convert them into NumPy arrays and use these two lines of code: if len(x1) < 350: ff = np.pad(f, [(0, self.max_len - f.shape[0]), ], mode='constant') f = ff But unfortunately, it doesn't affect the last dim and the shapes are still not equal. Any help will be appreciated. Thanks
You can simply do: import torch.nn.functional as F x = F.pad(x, (0, self.max_len - x.size(2)), "constant", 0)
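As a concrete sketch, assuming the three tensors from the question and a target length of 350:

```python
import torch
import torch.nn.functional as F

x1 = torch.rand(1, 512, 177)
x2 = torch.rand(1, 512, 250)
x3 = torch.rand(1, 512, 313)
max_len = 350

# F.pad's (left, right) pair applies to the last dimension
padded = [F.pad(x, (0, max_len - x.size(2)), "constant", 0) for x in (x1, x2, x3)]
print([p.shape for p in padded])  # three tensors of shape [1, 512, 350]
```

F.pad reads the padding sizes starting from the last dimension, so a 2-tuple only touches dim -1, which is exactly what is needed here.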
https://stackoverflow.com/questions/69662655/
Using captum with nn.Embedding getting RuntimeError
I am using captum library and getting following error. Here is the complete code to reproduce the error. RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior. import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from captum.attr import IntegratedGradients device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") vocab_size = 1024 embedding_dim = 32 seq_len = 128 num_classes = 5 hidden_dim = 256 class predictor(nn.Module): def __init__(self): super().__init__() self.seq_len = seq_len self.num_classes = num_classes self.hidden_dim = hidden_dim self.vocab_size, self.embedding_dim = vocab_size, embedding_dim self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim) self.linear = nn.Linear(self.seq_len*self.embedding_dim, self.num_classes) def forward(self, x): x = self.embedding(x.long()) x = x.reshape(-1, self.seq_len*self.embedding_dim) x = F.relu(self.linear(x)) return x class wrapper_predictor(nn.Module): def __init__(self, model): super().__init__() self.model = model def forward(self, x): x = self.model(x) x = F.softmax(x, dim=1) return x indexes = torch.Tensor(np.random.randint(0, vocab_size, (seq_len))).to(device) model = predictor().to(device) wrapper_model = wrapper_predictor(model).to(device) ig = IntegratedGradients(wrapper_model) attributions, delta = ig.attribute(inputs=indexes, target=0, n_steps=1, return_convergence_delta=True)
I resolved the issue with LayerIntegratedGradients. Here is a link with more detail and other possible solutions: https://captum.ai/tutorials/IMDB_TorchText_Interpret That tutorial builds an instance of LayerIntegratedGradients from the model's forward function and its embedding layer, as shown in the link. Here is sample code which uses LayerIntegratedGradients with nn.Embedding: import numpy as np import torch import torch.nn as nn import torch.nn.functional as F from captum.attr import IntegratedGradients, LayerIntegratedGradients from torchsummary import summary device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") vocab_size = 1024 embedding_dim = 1 seq_len = 128 num_classes = 5 hidden_dim = 256 class predictor(nn.Module): def __init__(self): super(predictor, self).__init__() self.seq_len = seq_len self.num_classes = num_classes self.hidden_dim = hidden_dim self.vocab_size, self.embedding_dim = vocab_size, embedding_dim self.embedding = nn.Sequential( nn.Embedding(self.vocab_size, self.embedding_dim), ) self.fc = nn.Sequential( nn.Linear(self.seq_len*self.embedding_dim, self.hidden_dim, device=device, bias=False), nn.Linear(self.hidden_dim, self.num_classes, device=device, bias=False), ) def forward(self, x): x = self.embedding(x.long()) x = x.view(-1, self.seq_len*self.embedding_dim) x = self.fc(x) return x class wrapper_predictor(nn.Module): def __init__(self, model): super().__init__() self.model = model def forward(self, x): x = self.model(x) x = F.softmax(x, dim=1) # keep softmax out of forward if the attribution scores are too low. return x model = predictor().to(device) indexes = torch.Tensor(np.random.randint(0, vocab_size, (seq_len))).to(device) input_size = indexes.shape summary(model=model, input_size=input_size, batch_size=-1, device='cuda') wrapper_model = wrapper_predictor(model).to(device) lig = LayerIntegratedGradients(model, model.embedding) attributions, delta = lig.attribute(inputs=indexes, target=0, n_steps=1, return_convergence_delta=True)
https://stackoverflow.com/questions/69664738/
How can I create a loss function that will push the actual NN weights to move?
I have a simple NN: import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 5) self.fc2 = nn.Linear(5, 10) self.fc3 = nn.Linear(10, 1) def forward(self, x): x = self.fc1(x) x = torch.relu(x) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x net = Model() opt = optim.Adam(net.parameters()) I also have some input features: features = torch.rand((3,1)) I can train it normally with a simple MSE loss function: for i in range(10): opt.zero_grad() out = net(features) loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out))) print('loss:', loss) loss.backward() opt.step() However, I'm trying to create a loss function that takes the actual weight values into account: loss = 1 - torch.mean(torch.tensor([torch.sum(w_arr) for w_arr in net.parameters()])) But I'm getting an error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn The goal here is to get each weight's value closest to 1 (or any other value) as possible.
A quick fix for the error itself is to pass requires_grad=True when creating the tensor: loss = 1 - torch.mean(torch.tensor([torch.sum(w_arr) for w_arr in net.parameters()], requires_grad=True)) But building a new tensor from a list of weights detaches it from the computation graph: torch no longer knows that the values came from the network's parameters, so no gradient flows back to them and the loss doesn't decrease. One way to do it properly is: for i in range(500): opt.zero_grad() out = net(features) loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out))) len_w = 0 for w_arr in net.parameters(): loss += torch.mean(torch.abs(1 - w_arr)) len_w += 1 loss /= len_w print('loss:', loss) loss.backward() opt.step() Computing the loss this way penalizes each weight's distance from 1, pushing all the weights toward +1 while keeping them inside the graph.
https://stackoverflow.com/questions/69671268/
Efficient way to get the weights of a PyTorch NN model as a tensor
I have a simple NN: import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 5) self.fc2 = nn.Linear(5, 10) self.fc3 = nn.Linear(10, 1) def forward(self, x): x = self.fc1(x) x = torch.relu(x) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x net = Model() Is there a more efficient way to get the weights of this network (while keeping the gradients) than iterate through every single one like this for w_arr in net.parameters(): or list(net.parameters()) Since the latter doesn't maintain the gradients (it converts it into a list)
You can use the torch.nn.utils.parameters_to_vector utility function. >>> net(torch.rand(1, 1, requires_grad=True)).mean().backward() >>> from torch.nn.utils import parameters_to_vector >>> parameters_to_vector(net.parameters()) tensor([-0.8196, -0.7785, -0.2459, 0.4670, -0.9747, 0.1994, 0.7510, -0.6452, 0.4948, 0.3376, 0.2641, -0.0707, 0.1282, -0.2944, 0.1337, 0.0461, -0.1491, 0.2985, 0.3031, 0.3566, 0.0058, 0.0157, -0.0712, 0.3874, 0.2870, -0.3829, 0.1178, -0.3901, -0.0425, -0.1603, 0.0408, 0.3513, 0.0289, -0.3374, -0.1820, 0.3684, -0.3069, 0.0312, -0.4205, 0.1456, 0.2833, 0.0589, -0.2229, -0.1753, -0.1829, 0.1529, 0.1097, 0.0067, -0.2694, -0.2176, 0.2292, 0.0529, -0.2617, 0.0736, 0.1617, 0.0438, 0.2387, 0.3278, -0.0536, -0.2875, -0.0869, 0.0770, -0.0774, -0.1909, 0.2803, -0.3237, -0.3851, -0.2241, 0.2838, 0.2202, 0.3057, 0.0128, -0.2650, 0.1660, -0.2961, -0.0123, -0.2106, -0.1021, 0.1135, -0.1051, 0.1735], grad_fn=<CatBackward>) It will convert a parameter generator into a flat tensor while retaining gradients, which corresponds to a concatenation of all parameter tensors flattened.
https://stackoverflow.com/questions/69680805/
Using dataloader to sample with replacement in pytorch
I have a dataset defined in the format: class MyDataset(Dataset): def __init__(self, N): self.N = N self.x = torch.rand(self.N, 10) self.y = torch.randint(0, 3, (self.N,)) def __len__(self): return self.N def __getitem__(self, idx): return self.x[idx], self.y[idx] During the training, I would like to sample batches of m training samples, with replacement; e.g. the first iteration includes data indices [1, 5, 6], second iteration includes data points [12, 3, 5], and so on and so forth. So the total number of iterations is an input, rather than N/m Is there a way to use dataloader to handle this? If not, is there any other method than something in the form of for i in range(iter): x = np.random.choice(range(N), m, replace=True) to implement this?
You can use a RandomSampler, this is a utility that slides in between the dataset and dataloader: >>> ds = MyDataset(N) >>> sampler = RandomSampler(ds, replacement=True, num_samples=M) Above, sampler will sample a total of M (replacement is necessary of course if num_samples > len(ds)). In your example M = iter*m. You can then initialize a DataLoader with sampler: >>> dl = DataLoader(ds, sampler=sampler, batch_size=2) Here is a possible result with N = 2, M = 2*len(ds) = 4, and batch_size = 2: >>> for x, y in dl: ... print(x, y) tensor([[0.5541, 0.3596, 0.5180, 0.1511, 0.3523, 0.4001, 0.6977, 0.1218, 0.2458, 0.8735], [0.0407, 0.2081, 0.5510, 0.2063, 0.1499, 0.1266, 0.1928, 0.0589, 0.2789, 0.3531]]) tensor([1, 0]) tensor([[0.5541, 0.3596, 0.5180, 0.1511, 0.3523, 0.4001, 0.6977, 0.1218, 0.2458, 0.8735], [0.0431, 0.0452, 0.3286, 0.5139, 0.4620, 0.4468, 0.3490, 0.4226, 0.3930, 0.2227]]) tensor([1, 0]) tensor([[0.5541, 0.3596, 0.5180, 0.1511, 0.3523, 0.4001, 0.6977, 0.1218, 0.2458, 0.8735], [0.5541, 0.3596, 0.5180, 0.1511, 0.3523, 0.4001, 0.6977, 0.1218, 0.2458, 0.8735]]) tensor([1, 1])
https://stackoverflow.com/questions/69681459/
PyTorch: Checking Model Accuracy Results in "TypeError: 'bool' object is not iterable."
I am training a neural network and would like to check its accuracy. I've used Librosa and SciKitLearn to represent audio in the form of 1D Numpy arrays. Thus x_train, x_test, y_train, and y_test are all 1D Numpy arrays with the x_* arrays containing floats and the y_* arrays containing strings corresponding to classes of data. For example: x_train = [0.235, 1.101, 3.497] y_train = ['happy', 'angry', 'neutral'] I've written a dictionary to represent these classes (strings) as integers: emotions = { '01' : 'neutral', '02' : 'calm', '03' : 'happy', '04' : 'sad', '05' : 'angry', '06' : 'fearful', '07' : 'disgust', '08' : 'surprised'} emotion_list = list(emotions.values()) Next I've defined a class to transform this data such that it can be passed to torch.utils.data.DataLoader(): class MakeDataset(Dataset): def __init__(self, x_train, y_train): self.x_train = torch.FloatTensor(x_train) self.y_train = torch.FloatTensor([emotion_list.index(each) for each in y_train]) def __len__(self): return self.x_train.shape[0] def __getitem__(self, ind): x = self.x_train[ind] y = emotion_list.index(y_train[ind]) return x, y I define a training set, testing set, batch size, and load the data: train_set = MakeDataset(x_train, y_train) test_set = MakeDataset(x_test, y_test) batch_size = 512 train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True) test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False) I define the model, train, and test as follows: class TwoLayerMLP(torch.nn.Module): def __init__(self, D_in, H, D_out): super(TwoLayerMLP, self).__init__() self.linear1 = torch.nn.Linear(D_in, H) self.linear2 = torch.nn.Linear(H, D_out) def forward(self, x): h_relu = self.linear1(x).clamp(min=0) y_pred = self.linear2(h_relu) return y_pred model = TwoLayerMLP(180, 90, 8) optimizer = torch.optim.Adam(model.parameters()) criterion = nn.CrossEntropyLoss() epochs = 5000 total_train = 0 correct_train = 0 for epoch in range(epochs): model.train() running_loss = 0.0 for batch_num, data in enumerate(train_loader): audio , label = data optimizer.zero_grad() outputs = model(audio.float()) loss = criterion(outputs, label) loss.backward() optimizer.step() predicted = torch.max(outputs.data,1) total_train += float(label.size(0)) # Code runs with line below commented # Else returns "TypeError: 'bool' object not iterable." correct_train += sum(predicted == label) Note that this code has been updated, formerly the problematic line was: correct_train += float((predicted == label)).sum() Can anyone explain why this boolean object cannot be iterated as expected? SOLVED Please see the comments in abhiskk's answer below, but for clarity and brevity the following changes solved the problem: pred_values, pred_indices = torch.max(outputs.data,1) total_train += float(label.size(0)) correct_train += (sum(pred_indices == label)).item()
The predicted variable contains both values and indices (torch.max returns both), so you need to unpack it: pred_vals, pred_inds = torch.max(outputs.data, 1) and then you can do correct_train += (sum(pred_inds == label)).item() Also, you don't need to convert to float before summing; you can use: (predicted == label).sum().item() (predicted == label) returns a BoolTensor, which can be summed to obtain the count of correct predictions.
https://stackoverflow.com/questions/69683898/
Pytorch DataParallel with custom model
I want to train a model on multiple GPUs. I'm using the following code: model = load_model(path) if torch.cuda.device_count() > 1: print("Let's use", torch.cuda.device_count(), "GPUs!") # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs model = nn.DataParallel(model) model.to(device) It works well, except that the DataParallel wrapper doesn't expose the functions of the original model. Is there a way around that? Thank you
The nn.Module passed to nn.DataParallel will end up being wrapped by the class to handle data parallelism. You can still access your model with the module attribute. >>> p_model = nn.DataParallel(model) >>> p_model.module # <- model For instance, to access your underlying model's quantize attribute, you would do: >>> p_model.module.quantize
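If you would rather not sprinkle .module everywhere, one common (unofficial) pattern is a small subclass that falls back to the wrapped model's attributes; this is a sketch, not part of the official PyTorch API:

```python
import torch.nn as nn

class DataParallelPassthrough(nn.DataParallel):
    """nn.DataParallel that forwards unknown attribute lookups to the wrapped model."""
    def __getattr__(self, name):
        try:
            return super().__getattr__(name)
        except AttributeError:
            return getattr(self.module, name)

# p_model = DataParallelPassthrough(model)
# p_model.quantize  # resolves to model.quantize
```

This only affects attribute access on the wrapper object; the parallel replicas still run the wrapped model's forward as usual.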
https://stackoverflow.com/questions/69688593/
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 512, 3, 3], but got 2-dimensional input of size [32, 2048] instead
I want to train a classifier based on a pretrained network with PyTorch. What I need to do is to take a pretrained model (I tried with ResNet50), add some layers at the end (I need to do this as it is required by the project specifications) and train only those layers I add. I tried this: import torch import torch.nn as nn from torch.utils.data import DataLoader import torchvision.transforms as transforms from torchvision import models from guitar_dataset import GuitarDataset from tqdm import tqdm device = ("cuda" if torch.cuda.is_available() else "cpu") transformations = transforms.Compose([ transforms.Resize((200, 200)) ]) num_epochs = 10 learning_rate = 0.001 train_CNN = False batch_size = 32 shuffle = True pin_memory = True num_workers = 1 dataset = GuitarDataset(f"../chords_data/cropped/train", transform=transformations) train_set, validation_set = torch.utils.data.random_split(dataset, [int(0.8 * len(dataset)), len(dataset) - int(0.8 * len(dataset))]) train_loader = DataLoader(dataset=train_set, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory) validation_loader = DataLoader(dataset=validation_set, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory) testset = GuitarDataset(f"../chords_data/cropped/test", transform=transformations) test_loader = DataLoader(dataset=testset, shuffle=shuffle, batch_size=batch_size, num_workers=num_workers, pin_memory=pin_memory) model = models.resnet50(pretrained=True) for param in model.parameters(): param.requires_grad = False model.fc = nn.Sequential( nn.Conv2d(512, 64, (3, 3)), nn.ReLU(), nn.MaxPool2d(2), nn.Conv2d(64, 64, (3, 3)), nn.ReLU(), nn.MaxPool2d(2), nn.Dropout(0.5), nn.Flatten(), nn.Linear(147456, 512), nn.ReLU(), nn.Dropout(0.5), nn.Linear(512, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 7) ) model.to(device) criterion = nn.BCELoss() optimizer = torch.optim.Adam(model.fc.parameters(), lr=learning_rate) PATH = f"./saved_models/mynet.pth" def check_accuracy(loader, model): if loader == train_loader: print("Checking accuracy on training data") else: print("Checking accuracy on validation data") num_correct = 0 num_samples = 0 model.eval() with torch.no_grad(): for x, y in loader: x = x.to(device=device) y = y.to(device=device) scores = model(x) # predictions = torch.tensor([1.0 if i >= 0.5 else 0.0 for i in scores]).to(device) predictions = scores.argmax(1) num_correct += (predictions == y).sum() num_samples += predictions.size(0) print( f"Got {num_correct} / {num_samples} with accuracy {float(num_correct) / float(num_samples) * 100:.2f}" ) return f"{float(num_correct) / float(num_samples) * 100:.2f}" def train(): model.train() for epoch in range(num_epochs + 1): loop = tqdm(train_loader, total=len(train_loader), leave=True) # if epoch % 2 == 0: loop.set_postfix(val_acc=check_accuracy(validation_loader, model)) if epoch == num_epochs: break for imgs, labels in loop: labels = torch.nn.functional.one_hot(labels, num_classes=7).float() imgs = imgs.to(device) labels = labels.to(device) outputs = model(imgs) loss = criterion(outputs, labels) optimizer.zero_grad() loss.backward() optimizer.step() loop.set_description(f"Epoch [{epoch + 1}/{num_epochs}]") loop.set_postfix(loss=loss.item()) torch.save(model.state_dict(), PATH) def test(): model.load_state_dict(torch.load(PATH)) correct = 0 total = 0 # since we're not training, we don't need to calculate the gradients for our outputs with torch.no_grad(): for data in test_loader: images, labels = data # calculate outputs by 
running images through the network outputs = model(images) # the class with the highest energy is what we choose as prediction _, predicted = torch.max(outputs.data, 1) total += labels.size(0) correct += (predicted == labels).sum().item() print('Accuracy of the network on the test images: %d %%' % ( 100 * correct / total)) if __name__ == "__main__": print(f"Working on {data_type}") train() test() but I get the error in the title as soon as I start the training phase. Shouldn't the downloaded model be ready-to-use? Full stack trace: Traceback (most recent call last): File "/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&_Chords_Recognition/ChordsClassification/train_ResNetChord.py", line 139, in <module> train() File "/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&_Chords_Recognition/ChordsClassification/train_ResNetChord.py", line 99, in train loop.set_postfix(val_acc=check_accuracy(validation_loader, model)) File "/home/deffo/Documents/Unimore/Magistrale/Computer Vision and Cognitive Systems/Guitar_Fingering_&_Chords_Recognition/ChordsClassification/train_ResNetChord.py", line 83, in check_accuracy scores = model(x) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torchvision/models/resnet.py", line 249, in forward return self._forward_impl(x) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torchvision/models/resnet.py", line 244, in _forward_impl x = self.fc(x) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward input = module(input) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 399, in forward return self._conv_forward(input, self.weight, self.bias) File "/home/deffo/anaconda3/envs/ComputerVision/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 395, in _conv_forward return F.conv2d(input, weight, bias, self.stride, RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 512, 3, 3], but got 2-dimensional input of size [32, 2048] instead
Your network design is wrong. You are not supposed to add convolutional layers at the end of ResNet50's feature extractor: by that point the features have already been pooled and flattened into a 2D (batch_size, 2048) tensor. Use linear layers instead: model.fc = nn.Sequential( # It has to start from 2048 nn.Linear(2048, 1024), nn.ReLU(), nn.Dropout(0.5), nn.Linear(1024, 256), nn.ReLU(), nn.Dropout(0.5), nn.Linear(256, 7) # 7 is the number of classes ) The model.fc has to start from 2048 units because that's what ResNet50's feature extractor produces. The error is basically saying that a 4D input was expected (because of the Conv2d layer at the beginning of model.fc) but a 2D (batch_size, 2048) tensor arrived, because that's what ResNet50 produced.
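A quick way to confirm the width the new head must accept (assuming torchvision's ResNet50, as in the question):

```python
from torchvision import models

backbone = models.resnet50(pretrained=True)
print(backbone.fc.in_features)  # 2048: the first Linear of the replacement head must match
```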
https://stackoverflow.com/questions/69690777/
Prediction for pretrained model on handwritten text(images)-Pytorch
I have a problem making a prediction using a pre-trained model that contains an encoder and decoder for handwritten text recognition. What I did is the following: checkpoint = torch.load("Model/SPAN/SPAN-PT-RA_rimes.pt",map_location=torch.device('cpu')) encoder_state_dict = checkpoint['encoder_state_dict'] decoder_state_dict = checkpoint['decoder_state_dict'] img = torch.LongTensor(img).unsqueeze(1).to(torch.device('cpu')) global_pred = decoder_state_dict(encoder_state_dict(img)) This generates this error: TypeError: 'collections.OrderedDict' object is not callable I would highly appreciate your help! ^_^
encoder_state_dict and decoder_state_dict are not torch models but collections (dictionaries) of tensors holding the pre-trained parameters of the checkpoint you loaded. Feeding inputs (such as your transformed input image) to such a collection of tensors does not make sense. Instead, you should load these state_dicts (i.e., collections of pre-trained tensors) into the parameters of actual model objects that define the network. See the torch.nn.Module class.
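A minimal sketch of the intended flow; the encoder/decoder class names here are hypothetical placeholders, since the actual classes come from the SPAN code base:

```python
import torch

checkpoint = torch.load("Model/SPAN/SPAN-PT-RA_rimes.pt", map_location=torch.device("cpu"))

# Instantiate the actual network modules first (hypothetical class names)
encoder = SpanEncoder()
decoder = SpanDecoder()

# Copy the pre-trained tensors into the modules' parameters
encoder.load_state_dict(checkpoint["encoder_state_dict"])
decoder.load_state_dict(checkpoint["decoder_state_dict"])

encoder.eval(); decoder.eval()
with torch.no_grad():
    global_pred = decoder(encoder(img))  # now calling modules, not dictionaries
```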
https://stackoverflow.com/questions/69692316/
Model Created With Pytorch's *list, .children(), and nn.sequential Produces Different Output Tensors
I’m currently trying to use a pretrained DenseNet in my model. I’m following this tutorial: https://pytorch.org/hub/pytorch_vision_densenet/, and it works well, with an input of [1,3,244,244], it returns a [1,1000] tensor, exactly as expected. However, currently I’m using this code to load a pretrained Densenet into my model, and use it as a β€œfeature extraction” model. This is the code in the init function base_model = torch.hub.load('pytorch/vision:v0.10.0', 'densenet121', pretrained=True) self.base_model = nn.Sequential(*list(base_model.children())[:-1]) And it is being used like this in the forward function x = self.base_model(x) This however, taking the same input, returns a tensor of the size: ([1, 1024, 7, 7]). I can not figure out what is not working, I think it is due to the fact that DenseNet connects all the layers together, but I do not know how to get it to work in the same method. Any tips in how to use pretrained DenseNet in my own model?
Generally, nn.Modules have logic inside their forward definition, which means it won't be preserved by just converting the model to a sequential block. Most notably, you can often find downsampling and/or flattening occurring between the CNN and the classifier layer(s) of the network. This is the case for DenseNet. If you look at Torchvision's forward implementation of DenseNet here you will see: def forward(self, x: Tensor) -> Tensor: features = self.features(x) out = F.relu(features, inplace=True) out = F.adaptive_avg_pool2d(out, (1, 1)) out = torch.flatten(out, 1) out = self.classifier(out) return out You can see how the tensor output by the CNN self.features (shaped (*, 1024, 7, 7)) is processed through a ReLU, an adaptive average pool, and a flatten before being fed to the classifier (the last layer).
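So, if the goal is a flat (1, 1024) feature vector rather than (1, 1024, 7, 7), one option is to reproduce those intermediate steps after the feature block. A sketch, assuming the tutorial's densenet121:

```python
import torch
import torch.nn as nn

base_model = torch.hub.load('pytorch/vision:v0.10.0', 'densenet121', pretrained=True)

feature_extractor = nn.Sequential(
    base_model.features,           # CNN trunk -> (N, 1024, 7, 7)
    nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d((1, 1)),  # -> (N, 1024, 1, 1)
    nn.Flatten(1),                 # -> (N, 1024)
)

x = torch.rand(1, 3, 224, 224)
print(feature_extractor(x).shape)  # torch.Size([1, 1024])
```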
https://stackoverflow.com/questions/69693177/
Pytorch Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm)
I am a beginner to machine learning and trying to train a model on counting the amount of numbers below 0.5 in a 1D Vector with the length of 10. The input vectors contain number between 0 and 1. I generate the input data and the labels in my script instead of having them in a seperate file, because the data is so simple. This is the Code: import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") class MyNet(nn.Module): def __init__(self): super(MyNet, self).__init__() self.lin1 = nn.Linear(10,10) self.lin2 = nn.Linear(10,1) def forward(self,x): x = self.lin1(x) x = F.relu(x) x = self.lin2(x) return x net = MyNet() net.to(device) def train(): criterion = nn.MSELoss() optimizer = optim.SGD(net.parameters(), lr=0.1) for epochs in range(100): target = 0 data = torch.rand(10) for entry in data: if entry < 0.5: target += 1 # print(target) # print(data) data = data.to(device) out = net(data) # print(out) target = torch.Tensor(target) target = target.to(device) loss = criterion(out, target) print(loss) net.zero_grad() loss.backward() optimizer.step() def test(): acc_error = 0 for i in range(100): test_data = torch.rand(10) test_data.to(device) test_target = 0 for entry in test_data: if entry < 0.5: test_target += 1 out = net(test_data) error = test_target - out if error < 0: error *= -1 acc_error += error overall_error = acc_error / 100 print(overall_error) train() test() This is the error: Traceback (most recent call last): File "test1.py", line 70, in <module> test() File "test1.py", line 59, in test out = net(test_data) File "/vol/fob-vol7/mi18/radtklau/SP/sem_project/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "test1.py", line 15, in forward x = self.lin1(x) File "/vol/fob-vol7/mi18/radtklau/SP/sem_project/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/vol/fob-vol7/mi18/radtklau/SP/sem_project/lib64/python3.6/site-packages/torch/nn/modules/linear.py", line 94, in forward return F.linear(input, self.weight, self.bias) File "/vol/fob-vol7/mi18/radtklau/SP/sem_project/lib64/python3.6/site-packages/torch/nn/functional.py", line 1753, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm) The other posts regarding the topic have not solved my problem. Maybe somebody can help. Thanks!
Notice how your error message traces back to test, while train works fine. You've transferred your data correctly in train: data = data.to(device) But not in test: test_data.to(device) Instead it should be reassigned to test_data, since torch.Tensor.to makes a copy rather than moving the tensor in place: test_data = test_data.to(device)
https://stackoverflow.com/questions/69699586/
Efficient method to compute the row-wise dot product of two square matrices of the same size in PyTorch
Supposing I have two square matrices A, B of the same size A = torch.tensor([[1, 2], [3, 4]]) B = torch.tensor([[1, 1], [1, 1]]) And I want a resulting tensor that consists of the row-wise dot product, say tensor([3, 7]) # i.e. (1*1 + 2*1, 3*1 + 4*1) What is an efficient means of achieving this in PyTorch?
As you said you can use torch.bmm but you first need to broadcast your inputs: >>> torch.bmm(A[..., None, :], B[..., None]) tensor([[[3]], [[7]]]) Alternatively you can use torch.einsum: >>> torch.einsum('ij,ij->i', A, B) tensor([3, 7])
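If broadcasting feels opaque, an equivalent and arguably more readable form is an element-wise product followed by a row sum:

```python
import torch

A = torch.tensor([[1, 2], [3, 4]])
B = torch.tensor([[1, 1], [1, 1]])

(A * B).sum(dim=1)  # tensor([3, 7])
```

All three approaches (bmm with broadcasting, einsum, and multiply-then-sum) compute the same row-wise dot products; for matrices of this size the difference is mostly stylistic.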
https://stackoverflow.com/questions/69702937/
DistributedDataParallel with gpu device ID specified in PyTorch
I want to train my model through DistributedDataParallel on a single machine that has 8 GPUs, but I want to train it on four specific GPUs with device IDs 4, 5, 6, 7. How do I specify these GPU device IDs for DistributedDataParallel? I think the world size will be 4 in this case, but what should the ranks be?
You can set the environment variable CUDA_VISIBLE_DEVICES. Torch will read this variable and only use the GPUs specified in there. You can either do this directly in your python code like this: import os os.environ['CUDA_VISIBLE_DEVICES'] = '4, 5, 6, 7' Take care to execute this command before you initialize torch in any way, else the statement will not take effect. The other option would be to set the environment variable temporarily before starting your script in the shell: CUDA_VISIBLE_DEVICES=4,5,6,7 python your_script.py
https://stackoverflow.com/questions/69703158/
EncoderDecoderModel converts classifier layer of decoder
I am trying to do named entity recognition using a Sequence-to-Sequence-model. My output is simple IOB-tags, and thus I only want to predict probabilities for 3 labels for each token (IOB). I am trying a EncoderDecoderModel using the HuggingFace-implementation with a DistilBert as my encoder, and a BertForTokenClassification as my decoder. First, I import my encoder and decoder: encoder = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased") encoder.save_pretrained("Encoder") decoder = BertForTokenClassification.from_pretrained('bert-base-uncased', num_labels=3, output_hidden_states=False, output_attentions=False) decoder.save_pretrained("Decoder") decoder When I check my decoder model as shown, I can clearly see the linear classification layer that has out_features=3: ## sample of output: ) (dropout): Dropout(p=0.1, inplace=False) (classifier): Linear(in_features=768, out_features=3, bias=True) ) However, when I combine the two models in my EncoderDecoderModel, it seems that the decoder is converted into a different kind of classifier - now with out_features as the size of my vocabulary: bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("./Encoder","./Decoder") bert2bert ## sample of output: (cls): BertOnlyMLMHead( (predictions): BertLMPredictionHead( (transform): BertPredictionHeadTransform( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) (decoder): Linear(in_features=768, out_features=30522, bias=True) ) ) Why is that? And how can I keep out_features = 3 in my model?
Huggingface uses different heads (depending on the network and task) for its models. While part of these models is shared (such as the contextualized encoder modules), they vary in the last layer, which is the head itself. For example, for classification problems they use the XForSequenceClassification heads, where X is the name of the language model, such as Bert, Bart, and so forth. That said, the EncoderDecoderModel expects a language-modeling head, while the decoder you stored uses a classification head. When EncoderDecoderModel sees this discrepancy, it substitutes its own LM head: a linear layer with in_features of 768 mapped to 30522, the vocabulary size. To circumvent this issue, you can use the vanilla BertModel class to output the hidden representations, and then add a linear layer for classification: it takes the 768-dimensional embedding associated with BERT's [CLS] token and maps it through the linear layer to an output vector of size 3, the number of your labels.
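A minimal sketch of that suggestion, assuming the transformers library's BertModel (sequence-level classification over 3 labels; for per-token IOB tagging you would instead apply the linear layer to every position of last_hidden_state):

```python
import torch.nn as nn
from transformers import BertModel

class BertWithThreeLabelHead(nn.Module):
    def __init__(self, num_labels=3):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)  # 768 -> 3

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_embedding = outputs.last_hidden_state[:, 0]  # embedding of the [CLS] token
        return self.classifier(cls_embedding)            # logits of shape (batch, 3)
```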
https://stackoverflow.com/questions/69709015/
Turning variable into a torch Tensor. Afterwards tensor is empty / has no element
The following is my code. The "sequences" are my training data in the form ([139 rows x 4 columns], 0), where the 139x4 are my signals and the 0 is my encoded label. def __getitem__(self, idx): sequence, label = self.sequences[idx] #converting sequence and label to tensors sequence = torch.Tensor(sequence.to_numpy()) print("label before tensor", label) label = torch.Tensor(label).long() print("numel() labels :", label.numel()) print("label shape :", shape(label)) return (sequence, label) The code output is: >>label before tensor 0 (This is my encoded label) >>numel() labels : 0 >>label shape : torch.Size([0]) Why is my label tensor empty?
Because torch.Tensor expects either an array (in which case this array becomes the underlying values) or several ints which will be the size of the tensor, torch.Tensor(0) instantiates an empty tensor of size 0. Use either torch.Tensor([0]) or torch.tensor(0). The two behave differently because torch.Tensor is the legacy tensor class constructor (which interprets bare ints as sizes and always returns float tensors), while torch.tensor is the recommended factory function, which copies the data you pass and infers its dtype. I'd recommend torch.tensor (not capitalized) since it's better documented. Edit: found this useful thread about their difference.
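A quick demonstration of the difference (standard PyTorch behavior):

```python
import torch

torch.Tensor(0)    # tensor([])  -> int interpreted as a size: empty float tensor
torch.Tensor([0])  # tensor([0.]) -> list interpreted as data
torch.tensor(0)    # tensor(0)   -> scalar tensor holding the value 0, dtype inferred
torch.tensor(0).long().numel()  # 1 -> one element, as the Dataset code expects
```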
https://stackoverflow.com/questions/69709345/
Understanding the TimeSeriesDataSet in pytorch forecasting
Here is a code sample taken from one of the pytorch forecasting tutorials: # create dataset and dataloaders max_encoder_length = 60 max_prediction_length = 20 training_cutoff = data["time_idx"].max() - max_prediction_length context_length = max_encoder_length prediction_length = max_prediction_length training = TimeSeriesDataSet( data[lambda x: x.time_idx <= training_cutoff], time_idx="time_idx", target="value", categorical_encoders={"series": NaNLabelEncoder().fit(data.series)}, group_ids=["series"], # only unknown variable is "value" - and N-Beats can also not take any additional variables time_varying_unknown_reals=["value"], max_encoder_length=context_length, max_prediction_length=prediction_length, ) validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training_cutoff + 1) batch_size = 128 train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0) val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=0) I don't really understand how the validation dataset is constructed with respect to the time index. I also don't understand why there is no test dataset in the tutorial. Is there a specific reason for that?
Concerning the validation dataset: The training dataset is all data except the last max_prediction_length data points of each time series (each time series corresponds to the data points sharing the same group_ids). Those last data points are filtered out by the training cutoff (the cutoff is the same for every time series because they are all of the same length). The validation data uses the last max_prediction_length data points of each time series as targets, which means each validation sample spans the last encoder_length + max_prediction_length points of its series. This is achieved with the parameter min_prediction_idx=training_cutoff + 1, which makes the dataset take only data whose time_idx is greater than training_cutoff (the minimal decoder index is always >= min_prediction_idx).
https://stackoverflow.com/questions/69712077/
Create array/tensor of cycle shifted arrays
I want to create a 2d tensor (or numpy array, doesn't really matter), where every row is a cyclically shifted version of the first row. I do it using a for loop: import torch import numpy as np a = np.random.rand(33, 11) miss_size = 64 lp_order = a.shape[1] - 1 inv_a = -np.flip(a, axis=1) mtx_size = miss_size+lp_order # some constant mtx_row = torch.cat((torch.from_numpy(inv_a), torch.zeros((a.shape[0], miss_size - 1 + a.shape[1]))), dim=1) mtx_full = mtx_row.unsqueeze(1) for i in range(mtx_size): mtx_row = torch.roll(mtx_row, 1, 1) mtx_full = torch.cat((mtx_full, mtx_row.unsqueeze(1)), dim=1) The unsqueezing is needed because I stack 2d tensors into a 3d tensor. Is there a more efficient way to do that? Maybe a linear algebra trick or a more pythonic approach.
You can use scipy.linalg.circulant(): scipy.linalg.circulant([1, 2, 3]) # array([[1, 3, 2], # [2, 1, 3], # [3, 2, 1]])
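If you would rather stay entirely in PyTorch (for example to keep the result on the GPU), the same matrix can be built with advanced indexing. This is a sketch, not a built-in torch function:

```python
import torch

def circulant(row: torch.Tensor) -> torch.Tensor:
    """A PyTorch equivalent of scipy.linalg.circulant for a 1D tensor."""
    n = row.shape[-1]
    # entry (i, j) of the index matrix selects row[(i - j) % n]
    idx = (torch.arange(n).unsqueeze(1) - torch.arange(n).unsqueeze(0)) % n
    return row[..., idx]  # broadcasts over any leading batch dimensions

circulant(torch.tensor([1, 2, 3]))
# tensor([[1, 3, 2],
#         [2, 1, 3],
#         [3, 2, 1]])
```

Because the indexing uses one precomputed matrix, applying it to a batch of rows (e.g. the (33, ...) tensor from the question) produces the whole stack of cyclic shifts in a single call, which can replace the explicit roll-and-cat loop when the number of shifts equals the row length.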
https://stackoverflow.com/questions/69723873/
Whence MaskRCNN's segm IoU metrics = 0?
When training a MaskRCNN on my multi-class instance segmentation custom data set, given an input formatted as: image -) shape: torch.Size([3, 850, 600]), dtype: torch.float32, min: tensor(0.0431), max: tensor(0.9137) boxes -) shape: torch.Size([4, 4]), dtype: torch.float32, min: tensor(47.), max: tensor(807.) masks -) shape: torch.Size([850, 600, 600]), dtype: torch.uint8, min: tensor(0, dtype=torch.uint8), max: tensor(1, dtype=torch.uint8) areas -) shape: torch.Size([4]), dtype: torch.float32, min: tensor(1479.), max: tensor(8014.) labels -) shape: torch.Size([4]), dtype: torch.int64, min: tensor(1), max: tensor(1) iscrowd -) shape: torch.Size([4]), dtype: torch.int64, min: tensor(0), max: tensor(0) I consistently obtain all segmentation IoU metrics as shown below: DONE (t=0.03s). IoU metric: bbox Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.004 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.010 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.004 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.001 IoU metric: segm Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000 How can I think, debug and fix this?
Your input image size is (850, 600) (H, W), and for this given image you have 4 objects, not 850 objects with (600, 600) masks. Your masks tensor should therefore have dimension (number of objects, 850, 600), and your input should be: image -) shape: torch.Size([3, 850, 600]), dtype: torch.float32, min: tensor(0.0431), max: tensor(0.9137) boxes -) shape: torch.Size([4, 4]), dtype: torch.float32, min: tensor(47.), max: tensor(807.) masks -) shape: torch.Size([4, 850, 600]), dtype: torch.uint8, min: tensor(0, dtype=torch.uint8), max: tensor(1, dtype=torch.uint8) areas -) shape: torch.Size([4]), dtype: torch.float32, min: tensor(1479.), max: tensor(8014.) labels -) shape: torch.Size([4]), dtype: torch.int64, min: tensor(1), max: tensor(1) iscrowd -) shape: torch.Size([4]), dtype: torch.int64, min: tensor(0), max: tensor(0) How to fix it: because you are trying to solve an instance segmentation problem, make sure each of your (850, 600) masks is stacked so as to yield a tensor of shape (number of masks, 850, 600).
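For instance, starting from individual per-object binary masks (the mask_obj* names below are hypothetical):

```python
import torch

# each per-object mask is a (850, 600) uint8 tensor of zeros and ones
masks = torch.stack([mask_obj1, mask_obj2, mask_obj3, mask_obj4], dim=0)
print(masks.shape)  # torch.Size([4, 850, 600])
```

With the per-object masks stacked along the first dimension, the number of masks matches the number of boxes and labels, which is what the torchvision Mask R-CNN training targets expect.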
https://stackoverflow.com/questions/69727976/
How can I change the NN weights without affecting the gradients?
Say I have a simple NN: import torch import torch.nn as nn import torch.optim as optim from torch.nn.utils import parameters_to_vector class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 2) self.fc2 = nn.Linear(2, 3) self.fc3 = nn.Linear(3, 1) def forward(self, x): x = self.fc1(x) x = torch.relu(x) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x net = Model() opt = optim.Adam(net.parameters()) And also some features features = torch.rand((3,1)) I can train it normally using: for i in range(10): opt.zero_grad() out = net(features) loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out))) loss.backward() opt.step() However, I'm interested in updating the weights of each layer after each example in the batch. That is, updating the actual weight values by some amount that will be different for each layer. I can print the parameters of each layer with: for i in range(1): opt.zero_grad() out = net(features) print(parameters_to_vector(net.fc1.parameters())) print(parameters_to_vector(net.fc2.parameters())) print(parameters_to_vector(net.fc3.parameters())) loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out))) loss.backward() opt.step() How can I change the values of the weights before the backprop without affecting the gradient? Say that I want the layers weights' to be updated according to the following functions: def first_layer_update(weight): return weight + 1e-3*weight def second_layer_update(weight): return 1e-2*weight def third_layer_update(weight): return weight - 1e-1*weight
- Using the torch.no_grad context manager. This allows you to perform (in-place or out-of-place) operations on your tensors without Autograd keeping track of those changes. As @user3474165 explained:

def first_layer_update(weight):
    with torch.no_grad():
        return weight + 1e-3*weight

def second_layer_update(weight):
    with torch.no_grad():
        return 1e-2*weight

def third_layer_update(weight):
    with torch.no_grad():
        return weight - 1e-1*weight

Or differently, without altering your functions, by using the context manager when calling them:

with torch.no_grad():
    first_layer_update(net.fc1.weight)
    second_layer_update(net.fc2.weight)
    third_layer_update(net.fc3.weight)

- Using the @torch.no_grad decorator. A variant is to use the @torch.no_grad() decorator:

@torch.no_grad()
def first_layer_update(weight):
    return weight + 1e-3*weight

@torch.no_grad()
def second_layer_update(weight):
    return 1e-2*weight

@torch.no_grad()
def third_layer_update(weight):
    return weight - 1e-1*weight

And call these with first_layer_update(net.fc1.weight), second_layer_update(net.fc2.weight), etc.

- Mutating torch.Tensor.data. An alternative to wrapping your operations in the torch.no_grad context is to mutate the weights through their data attribute. This means calling your functions with:

>>> first_layer_update(net.fc1.weight.data)
>>> second_layer_update(net.fc2.weight.data)
>>> third_layer_update(net.fc3.weight.data)

Which would mutate the weights (not the biases) of the three layers with their respective update policies.

In a nutshell, if you want to mutate all parameters of an nn.Module you can either do:

>>> with torch.no_grad():
...     update_policy(parameters_to_vector(net.layer.parameters()))

Or

>>> update_policy(parameters_to_vector(net.layer.parameters()).data)
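For completeness, here is one way these updates could be wired into the training loop from the question — a minimal sketch, assuming the net, opt and features defined above, and assuming the custom updates should run after each optimizer step (the in-place copy_ keeps the parameters as the same leaf tensors the optimizer already tracks):

for i in range(10):
    opt.zero_grad()
    out = net(features)
    loss = torch.mean(torch.square(torch.tensor(5.0) - torch.sum(out)))
    loss.backward()
    opt.step()
    with torch.no_grad():  # keep Autograd out of the manual per-layer updates
        net.fc1.weight.copy_(net.fc1.weight + 1e-3 * net.fc1.weight)
        net.fc2.weight.copy_(1e-2 * net.fc2.weight)
        net.fc3.weight.copy_(net.fc3.weight - 1e-1 * net.fc3.weight)

Whether the updates belong before the forward pass or after the step depends on your training scheme.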
https://stackoverflow.com/questions/69728410/
How to np.concatenate list with tensors?
I have a list with tensors: [tensor([[0.4839, 0.3282, 0.1773, ..., 0.2931, 1.2194, 1.3533], [0.4395, 0.3462, 0.1832, ..., 0.7184, 0.4948, 0.3998]], device='cuda:0'), tensor([[1.0586, 0.2390, 0.2315, ..., 0.9662, 0.1495, 0.7092], [0.6403, 0.0527, 0.1832, ..., 0.1467, 0.8238, 0.4422]], device='cuda:0')] I want to stack all [1 x features] matrices into one with np.concatenate(X), but this error appears: TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first. How can I fix it?
Your tensors are still on the GPU, while NumPy operations happen on the CPU. You can either move the tensors back to the CPU first, as the error message indicates — np.concatenate((a.cpu(), b.cpu())) — or avoid moving off the GPU altogether and use torch.cat():

a = torch.ones(6)
b = torch.zeros(6)
torch.cat([a, b], dim=0)
# tensor([1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0.])
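Applied to a list like the one in the question, the GPU-only route is a one-liner — a small self-contained sketch:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
X = [torch.rand(2, 5, device=device), torch.rand(2, 5, device=device)]  # stand-in for the list above
stacked = torch.cat(X, dim=0)     # shape (4, 5), stays on the GPU
as_numpy = stacked.cpu().numpy()  # move to host memory only if you really need NumPy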
https://stackoverflow.com/questions/69729160/
Why does the accuracy of my neural network not increase?
I tried to implement, in Python with PyTorch and from scratch, a convolutional neural network based on the structure of AlexNet, using the CIFAR10 dataset, but my accuracy is very, very low (10%). How can I improve my accuracy? Is there a structural problem, or do I only have to change the hyperparameters? I am sorry if there are trivial errors — I am a beginner with neural networks.

import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from collections import OrderedDict
import torchvision.datasets as datasets
import torchvision.transforms as transforms
from torchvision.datasets import CIFAR10
import os

class Config:
    def __init__(self):
        self.random_seed = 42
        self.n_epochs = 50
        self.batch_size_train = 256
        self.batch_size_test = 1000
        self.learning_rate = 0.0001
        self.momentum = 0.5
        self.log_interval = 500
        self.dropout_probability = 0.5

conf = Config()

torch.manual_seed(conf.random_seed)
torch.cuda.manual_seed(conf.random_seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(conf.random_seed)

train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.CIFAR10('/files/', train=True, download=True,
        transform=torchvision.transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
        ])),
    batch_size=conf.batch_size_train, shuffle=True
)

test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.CIFAR10('/files/', train=False, download=True,
        transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
        ])),
    batch_size=conf.batch_size_test, shuffle=True
)

examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
example_data.shape

class AlexNet(nn.Module):
    def __init__(self):
        super(AlexNet, self).__init__()
        self.feature_extraction = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2)
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5, inplace=True),
            nn.Linear(3072, out_features=4096),
            nn.ReLU(),
            nn.Dropout(p=0.5, inplace=True),
            nn.Linear(in_features=4096, out_features=4096),
            nn.ReLU(),
            nn.Linear(in_features=4096, out_features=1000)
        )

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return self.classifier(x)

network = AlexNet()

model_parameters = filter(lambda p: p.requires_grad, network.parameters())
params = sum([np.prod(p.size()) for p in model_parameters])
print("The model has {} parameters.".format(f'{params:,}'))

optimizer = optim.Adam(network.parameters(), lr=conf.learning_rate)

train_losses = []
train_counter = []
test_losses = []
test_counter = [i*len(train_loader.dataset) for i in range(conf.n_epochs + 1)]

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(train_loader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = network(inputs)
        loss = F.nll_loss(outputs, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if i % 256 == 255:  # print every 256 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 256))
            running_loss = 0.0

print('Finished Training')

correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in test_loader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = network(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
As you may have noticed, every deep learning package ships plenty of loss functions. You must choose the appropriate one based on the problem's criteria: multiclass vs. binary, multilabel vs. single-label, whether the logits are raw, already softmaxed, or log-softmaxed, and so on. nll_loss is meant to be used with log_softmax outputs, but you have used it with raw logits. Given that, adding log_softmax to the forward path does the work. So the model would change to this:

def forward(self, x):
    x = x.view(x.size(0), -1)
    x = self.classifier(x)
    return torch.nn.functional.log_softmax(x, 1)

In this way I got "Accuracy of the network on the 10000 test images: 43 %" after one epoch.
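A common alternative — not what the answer above does, but equivalent in effect — is to keep the model emitting raw logits and use nn.CrossEntropyLoss, which combines log_softmax and nll_loss internally. A minimal sketch:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()    # expects raw logits
logits = torch.randn(8, 10)          # stand-in for network(inputs), no softmax in forward
labels = torch.randint(0, 10, (8,))
loss = criterion(logits, labels)     # log_softmax + NLL in one step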
https://stackoverflow.com/questions/69734256/
Mismatching dims in GRU for classification
I'm trying to complete a task and write simple RNN. Here's the class: class RNNBaseline(nn.Module): def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim, n_layers, bidirectional, dropout, pad_idx): super().__init__() self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx) self.rnn = nn.GRU(input_size=embedding_dim, hidden_size=hidden_dim) #RNN(embedding_dim, hidden_dim) self.fc = nn.Linear(hidden_dim, output_dim) # YOUR CODE GOES HERE self.dropout = nn.Dropout(dropout) def forward(self, text, text_lengths, hidden = None): #text = [sent len, batch size] embedded = self.embedding(text) #embedded = [sent len, batch size, emb dim] #pack sequence packed_embedded = nn.utils.rnn.pack_padded_sequence(embedded, text_lengths) # cell arg for LSTM, remove for GRU # packed_output, (hidden, cell) = self.rnn(packed_embedded) # unpack sequence # output, output_lengths = nn.utils.rnn.pad_packed_sequence(packed_output) #output = [sent len, batch size, hid dim * num directions] #output over padding tokens are zero tensors #hidden = [num layers * num directions, batch size, hid dim] #cell = [num layers * num directions, batch size, hid dim] #concat the final forward (hidden[-2,:,:]) and backward (hidden[-1,:,:]) hidden layers #and apply dropout output, hidden = self.rnn(packed_embedded, hidden) #hidden = None # concatenate #hidden = [batch size, hid dim * num directions] or [batch_size, hid dim * num directions] return self.fc(hidden) For now I'm not using LSTM or trying to do bidirectional RNN, I just want simple GRU to train without errors. This is the training function: import numpy as np min_loss = np.inf cur_patience = 0 for epoch in range(1, max_epochs + 1): train_loss = 0.0 model.train() pbar = tqdm(enumerate(train_iter), total=len(train_iter), leave=False) pbar.set_description(f"Epoch {epoch}") for it, ((text, txt_len), label) in pbar: #YOUR CODE GOES HERE opt.zero_grad() input = text.to(device) labels = label.to(device) output = model(input, txt_len.type(torch.int64).cpu()) train_loss = loss_func(output, labels) train_loss.backward() opt.step() train_loss /= len(train_iter) val_loss = 0.0 model.eval() pbar = tqdm(enumerate(valid_iter), total=len(valid_iter), leave=False) pbar.set_description(f"Epoch {epoch}") for it, ((text, txt_len), label) in pbar: # YOUR CODE GOES HERE input = text.to(device) labels = label.to(device) output = model(input, txt_len.type(torch.int64).cpu()) val_loss = loss_func(output, labels) val_loss /= len(valid_iter) if val_loss < min_loss: min_loss = val_loss best_model = model.state_dict() else: cur_patience += 1 if cur_patience == patience: cur_patience = 0 break print('Epoch: {}, Training Loss: {}, Validation Loss: {}'.format(epoch, train_loss, val_loss)) model.load_state_dict(best_model) And some variables: vocab_size = len(TEXT.vocab) emb_dim = 100 hidden_dim = 256 output_dim = 1 n_layers = 2 bidirectional = False dropout = 0.2 PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token] patience=3 opt = torch.optim.Adam(model.parameters()) loss_func = nn.BCEWithLogitsLoss() max_epochs = 1 But I get this error: ValueError: Target size (torch.Size([64])) must be the same as input size (torch.Size([1, 64, 1])) ... in this line: ---> 18 train_loss = loss_func(output, labels) What am I doing wrong?
nn.BCEWithLogitsLoss expects both outputs and targets (or, in your case, labels) to be of size [b,d], where b is the batch size and d is the number of classes (or the dimension of whatever you are trying to predict). Currently, your outputs are of size [b,d,1] and your targets are of size [d]. Two fixes are necessary, and both are very simple: Add a batch dimension to your targets (labels). This is a common error when iterating over a dataset directly, because a dataset generally does not add the batch dimension; wrapping your dataset class in a PyTorch DataLoader handles this for you, but if you don't want to do that, simply add an unsqueeze() operation. Note that the unsqueeze approach only works with a batch size of 1 — otherwise using a DataLoader is probably a better bet. Your output has an empty 3rd dimension, which can easily be flattened with a squeeze() operation. Both unsqueeze and squeeze are differentiable, so they shouldn't present problems for backpropagation.

... code before here
for it, ((text, txt_len), label) in pbar:
    # YOUR CODE GOES HERE
    input = text.to(device)
    labels = label.to(device).unsqueeze(0)  # added unsqueeze operation
    output = model(input, txt_len.type(torch.int64).cpu())
    output = output.squeeze(-1)  # added squeeze on last dim
    val_loss = loss_func(output, labels)
... code after here
https://stackoverflow.com/questions/69739651/
pipenv is unable to find the right package version using pytorch index
I'm confused by how pipenv resolves the available package versions. I specified the index, and I clearly see version 0.8.1 in the list on their download site. This is my pipenv version; I'm running on macOS and installed pipenv through brew.

~ pipenv --version
pipenv, version 2021.5.29

Pipfile

[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[[source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/torch_stable.html"
verify_ssl = false

[packages]
...
torch = {index="pytorch", version="==1.7.1"}
torchtext = {index="pytorch", version="==0.8.1"}
...

[dev-packages]

[requires]
python_version = "3.7"

The error when I run pipenv install:

Pipfile.lock not found, creating...
Locking [dev-packages] dependencies...
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
✘ Locking Failed!
CRITICAL:pipenv.patched.notpip._internal.index.package_finder:Could not find a version that satisfies the requirement torchtext==0.8.1 (from -r /var/folders/0q/10lqt8nn6_l87q71lx_d233r0000gn/T/pipenv2ix9if8erequirements/pipenv-qm980cdf-constraints.txt (line 7)) (from versions: 0.1.1, 0.2.0, 0.2.1, 0.2.3, 0.3.1, 0.4.0, 0.5.0, 0.6.0)
[ResolutionFailure]: File "/usr/local/Cellar/pipenv/2021.5.29/libexec/lib/python3.9/site-packages/pipenv/resolver.py", line 741, in _main
...

The error tells me it can't find 0.8.1 and that I can only choose from a super limited set. Why is this? What can I do to fix it?
You are using the wrong version of Python — it's really that simple. Read the list of available wheels very carefully: torchtext 0.8.1 exists for: cp36m = CPython 3.6m cp37m = CPython 3.7m cp38 = CPython 3.8 cp39 = CPython 3.9. You are using CPython 3.7, and CPython 3.7 (yours) and CPython 3.7m (theirs) are not the same thing. The "m" is an ABI tag marking CPython builds compiled with pymalloc, a faster memory allocator (the flag was dropped from wheel tags as of CPython 3.8). You can't install an "m"-tagged wheel on a non-"m" Python. Upgrade to CPython 3.8 or 3.9. I recommend using Pipenv + Pyenv together, which makes it trivial to install different versions of CPython.
https://stackoverflow.com/questions/69743918/
Pytorch cifar10 images are not normalized
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainset.data[0]

I am using the above code and expect the data to be normalized, but it is not. I need to access the data through the .data attribute to do some more processing. The output is below.

array([[[ 59, 62, 63], [ 43, 46, 45], [ 50, 48, 43], ..., [158, 132, 108], [152, 125, 102], [148, 124, 103]],
The torchvision.transforms.Normalize is merely a shift-scale operator. Given parameters mean (the "shift") and std (the "scale"), it will map the input to (input - shift) / scale. Since you are using mean=0.5 and std=0.5 on all three channels, the result will be (input - 0.5) / 0.5, which only normalizes your data if its statistics are in fact mean=0.5 and std=0.5 — which is of course not the case. With that in mind, what you should be doing is providing the actual dataset's statistics. For CIFAR10, these can for example be found here: mean = [0.4914, 0.4822, 0.4465] std = [0.2470, 0.2435, 0.2616] With those values, you will be able to normalize your data properly to mean=0 and std=1. I've written a more general, long-form answer here.
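As a sketch with those statistics — note also that the transform is applied lazily when samples are accessed, never to trainset.data, which always keeps the raw uint8 pixels:

import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.4914, 0.4822, 0.4465],
                         std=[0.2470, 0.2435, 0.2616]),
])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
img, label = trainset[0]       # the transform runs here, on access
print(img.mean(), img.std())   # roughly 0 and 1 for this image
# trainset.data is still the raw uint8 array and is never modified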
https://stackoverflow.com/questions/69747119/
copy segment from image tensor
I have three tensors: A - (1, 3, 256, 256) B - (1, 3, 256, 256) - this is a white image tensor C - (256, 256) - this is the segment tensor For instance, C would look like: tensor([[ 337, 337, 337, ..., 340, 340, 340], [ 337, 337, 337, ..., 340, 340, 340], [ 337, 337, 337, ..., 340, 340, 340], ..., [1022, 1022, 1022, ..., 1010, 1010, 1010], [1022, 1022, 1022, ..., 1010, 1010, 1010], [1022, 1022, 1022, ..., 1010, 1010, 1010]], device='cuda:0') where a value such as 337 could indicate a building, etc. Tensor C gives the location of the segment shape. What I want is to copy that segment, based on its location, from tensor A onto tensor B. This would be like photoshopping the segment onto a white image tensor. This is similar to masking, and I looked into masked_select (https://pytorch.org/docs/stable/generated/torch.masked_select.html), but that only returns a 1D tensor back.
You do not need to select the pixels in C, only to mask them:

select = 337  # which segment to select
# create the binary mask, add singleton batch/channel dimensions,
# and cast it to A's dtype so the arithmetic below works
# (1 - mask is not supported on bool tensors)
select_mask = (C == select)[None, None, ...].to(A.dtype)
# this is the part where you select the right part of A
B = B * (1 - select_mask) + A * select_mask
https://stackoverflow.com/questions/69747657/
Differentiable affine transformation on patches of images in pytorch
I have a tensor of object bounding boxes, e.g. with shape [10, 4], which corresponds to a batch of images, e.g. with shape [2, 3, 64, 64]; transformation matrices for each object with shape [10, 6]; and a vector that defines which object index belongs to which image. I would like to apply the affine transformations to patches of the images and replace those patches after applying the transformations. I am doing this with a for loop now, but the way I am doing it is not differentiable (I get the in-place operation error from PyTorch). I wanted to know if there is a differentiable way to do this, e.g. via grid_sample? Here is my current code:

for obj_num in range(obj_vecs.shape[0]):  # batch_size
    im_id = obj_to_img[obj_num]
    x1, y1, x2, y2 = boxes_pred[obj_num]
    im_patch = img[im_id, :, x1:x2, y1:y2]
    im_patch = im_patch[None, :, :, :]
    img[im_id, :, x1:x2, y1:y2] = self.VITAE.stn(im_patch, theta_mean[obj_num], inverse=False)[0]
There are a few ways to perform differentiable crops in PyTorch. Let's take a minimal example in 2D:

>>> x1, y1, x2, y2 = torch.randint(0, 9, (4,))
(tensor(7), tensor(3), tensor(5), tensor(6))

>>> x = torch.randint(0, 100, (9,9), dtype=float, requires_grad=True)
tensor([[18., 34., 28., 41.,  1., 14., 77., 75., 23.],
        [62., 33., 64., 41., 16., 70., 47., 45., 19.],
        [20., 69.,  5., 51.,  1., 16., 20., 63., 52.],
        [51., 25.,  8., 30., 40., 67., 41., 27., 33.],
        [36.,  6., 95., 53., 69., 84., 51., 42., 71.],
        [46., 72., 88., 82., 71., 75., 86., 36., 15.],
        [66., 19., 58., 50., 91., 28.,  7., 83.,  4.],
        [94., 50., 34., 34., 92., 45., 48., 97., 76.],
        [80., 34., 19., 13., 77., 77., 51., 15., 13.]],
       dtype=torch.float64, requires_grad=True)

Given x1, x2 (resp. y1, y2), the patch index boundaries on the height (resp. width) dimension, you can get the grid of coordinates corresponding to your patch using a combination of torch.arange and torch.meshgrid:

>>> sorted_range = lambda a, b: torch.arange(a, b) if b >= a else torch.arange(b, a)
>>> xi, yi = sorted_range(x1, x2), sorted_range(y1, y2)
(tensor([3, 4, 5, 6]), tensor([5]))

>>> i, j = torch.meshgrid(xi, yi)
(tensor([[3], [4], [5], [6]]), tensor([[5], [5], [5], [5]]))

With that setup, you can extract and replace patches of x. You can extract the patch by indexing x directly:

>>> patch = x[i, j].reshape(len(xi), len(yi))
tensor([[67.],
        [84.],
        [75.],
        [28.]], dtype=torch.float64, grad_fn=<ViewBackward>)

Here is the mask for illustration purposes:

tensor([[0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 1., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0., 0., 0., 0., 0.]],
       dtype=torch.float64, grad_fn=<IndexPutBackward>)

You can replace the values in x with the result of some transformation on the patch using torch.Tensor.index_put:

>>> values = 2*patch
tensor([[134.],
        [168.],
        [150.],
        [ 56.]], dtype=torch.float64, grad_fn=<MulBackward0>)

>>> x.index_put(indices=(i, j), values=values)
tensor([[ 18.,  34.,  28.,  41.,   1.,  14.,  77.,  75.,  23.],
        [ 62.,  33.,  64.,  41.,  16.,  70.,  47.,  45.,  19.],
        [ 20.,  69.,   5.,  51.,   1.,  16.,  20.,  63.,  52.],
        [ 51.,  25.,   8.,  30.,  40., 134.,  41.,  27.,  33.],
        [ 36.,   6.,  95.,  53.,  69., 168.,  51.,  42.,  71.],
        [ 46.,  72.,  88.,  82.,  71., 150.,  86.,  36.,  15.],
        [ 66.,  19.,  58.,  50.,  91.,  56.,   7.,  83.,   4.],
        [ 94.,  50.,  34.,  34.,  92.,  45.,  48.,  97.,  76.],
        [ 80.,  34.,  19.,  13.,  77.,  77.,  51.,  15.,  13.]],
       dtype=torch.float64, grad_fn=<IndexPutBackward>)
https://stackoverflow.com/questions/69752807/
Dataset not found or corrupted. You can use download=True to download it
Recently I downloaded the CelebA dataset from this page. I want to apply some transformations to this data set. First, let's define the transformations:

from torchvision import transforms
from torchvision.datasets import CelebA

celeba_transforms = transforms.Compose([
    transforms.CenterCrop(130),
    transforms.Resize([64, 64]),
    transforms.ToTensor()
])

And now execute it:

CelebA(root='img_align_celeba', split='train', download=False, transform=celeba_transforms)

However, the result of this code is an error: Dataset not found or corrupted. You can use download=True to download it. Setting download=True is not working either. Could you please help me with applying those transformations to this data set?
Finally I resolved the issue; I'm posting my solution. Problem number one: downloading the zip file img_align_celeba.zip fails because the daily download quota is reached. The solution is simply to download this file from the internet yourself, e.g. from Kaggle. Problem number two: when calling the CelebA function with download=True, the program thinks for a while and then returns the error mentioned in the question title. The cause of the problem is a set of broken .txt metadata files (those files are also downloaded via the CelebA function). For this function to work correctly, you have to download those .txt files directly from the internet; I found them here. Once you download all of them and replace the old ones, the CelebA function should work without any problems.
https://stackoverflow.com/questions/69755609/
PIL to numpy and PIL to tensor is different
I have an image, and the sums of the torch tensor and the numpy array are different — why? How can I make torch_img.sum() equal numpy_img_float.sum()?

from PIL import Image
from torchvision import transforms as T

# Read image with PIL
img = Image.open(img_path).resize((224,224))
torch_img = T.ToTensor()(img)
numpy_img = np.asarray(img)
numpy_img_float = np.asarray(img).astype(np.float32)
print(torch_img.sum(), numpy_img.sum(), numpy_img_float.sum())
-> 56914.496, 14513196, 14513196.0

Does anyone have any idea why?
Notice how torch_img is in the [0, 1] range while numpy_img and numpy_img_float are both in the [0, 255] range. Looking at the documentation for torchvision.transforms.ToTensor: if the provided input is a PIL image, the values will be mapped to [0, 1]. In contrast, np.asarray keeps the values in the [0, 255] range. Beyond that, the small variations in results are caused by different floating-point precisions.
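A quick sanity check of the 1/255 scaling — a self-contained sketch with a stand-in image instead of the file from the question:

import numpy as np
import torch
from PIL import Image
from torchvision import transforms as T

img = Image.fromarray(np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8))  # stand-in image
torch_img = T.ToTensor()(img)                   # float32 in [0, 1]
numpy_img = np.asarray(img).astype(np.float32)  # float32 in [0, 255]
# the two sums agree once the scaling is accounted for (up to float precision)
print(torch.allclose(torch_img.sum() * 255, torch.tensor(numpy_img.sum())))  # True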
https://stackoverflow.com/questions/69757350/
Why my trained model output is same for each random input?
I trained my model on the Python platform. After training, I got the same output for every random input. I solved this problem by deactivating the BatchNorm layers with the model.eval() method. But when I tried to load my trained model in C++ with the PyTorch C++ API, this problem showed up again, and model.eval() did not help this time: I got the same output for every random input again. This is my C++ model loading function:

std::vector<torch::jit::script::Module> module_loader(std::string file_addr) {
    std::vector<torch::jit::script::Module> modul;
    torch::jit::script::Module model = torch::jit::load(file_addr);
    model.eval();
    modul.push_back(model);
    return modul;
}

And this is my testing function:

void test(std::vector<torch::jit::script::Module> &model) {
    std::vector<torch::jit::IValue> inputs;
    inputs.push_back(torch::rand({1, 2, 64, 172}));
    torch::Tensor output = model[0].forward(inputs).toTensor();
    std::cout << output << std::endl;
}

After all, I put it all together in main() like this:

int main() {
    auto modul = module_loader(MODEL_ADDRESS);
    test(modul);
}

MODEL_ADDRESS is a macro for the path of the trained model on my local disk. The output of the program is this for every run:

0.3231 [ CPUFloatType{1,1} ]
I debugged my code multiple times and retried saving the model, and finally found the answer. I used a server to train my model, and to export it for C++ I loaded the model weights into a model object in a Python shell. That was the problem: I should have put the model into eval() mode in Python before exporting it for C++. Calling eval() in C++ did not work for a model that had been exported from Python in training mode. This solved my problem.
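In other words, the export step on the Python side should look roughly like this — a minimal sketch with a stand-in module and hypothetical file names:

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(2, 4, 3), nn.BatchNorm2d(4))  # stand-in for the trained model
model.eval()  # switch BatchNorm/Dropout to inference mode *before* exporting
scripted = torch.jit.trace(model, torch.rand(1, 2, 64, 172))  # example input shaped as in the question
scripted.save("model_for_cpp.pt")  # this is the file torch::jit::load reads in C++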
https://stackoverflow.com/questions/69759030/
Pyhon: the difference of "from XX import * " when XX is file created by myself or XX is python library file(like pytorch)
When I use PyTorch, I wonder why I can't import some of the functions or variables from torchvision.models.vgg using "from torchvision.models.vgg import *". Like this:

from torchvision.models.vgg import *

if __name__=="__main__":
    print(cfgs["A"])

and

from torchvision.models.vgg import *
from typing import Union, List, Dict, Any, cast

cfgs: Dict[str, List[Union[str, int]]] = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}

if __name__=="__main__":
    print(make_layers(cfgs["A"]))

The variable "cfgs" and the function "make_layers" can't be found. But I can use a function like "vgg16_bn()", which lives in the same file as "make_layers":

from torchvision.models.vgg import *

if __name__=="__main__":
    print(vgg16_bn())

I need to import like this:

from torchvision.models.vgg import cfgs,make_layers

if __name__=="__main__":
    print(make_layers(cfgs["A"]))

But if the file I import is one I created myself, like this:

b.py:

url="http"
def a ():
    return True

then main.py can find the variable "url" and the function "a()":

from b import *

if __name__=="__main__":
    print(url)
    print(a())

I don't know why it happens like this — could someone explain it? Thanks!
torchvision/models/vgg.py sets this particular __all__, indicating its exported symbols: __all__ = [ "VGG", "vgg11", "vgg11_bn", "vgg13", "vgg13_bn", "vgg16", "vgg16_bn", "vgg19_bn", "vgg19", ] * imports (while generally discouraged) will by default expose only the names in __all__ (or, if __all__ is missing, any names that do not start with _).
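You can reproduce the same behavior with your own module — a small sketch with hypothetical file names:

# b.py
__all__ = ["a"]  # only "a" is exported by a star import now

url = "http"

def a():
    return True

# main.py
from b import *

print(a())    # works: "a" is listed in __all__
print(url)    # NameError: "url" is not exported by the star import
print(b.url)  # NameError too: the module name "b" itself is not bound either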
https://stackoverflow.com/questions/69763445/
Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm
So I have received this error when running my simple regression code:

class linear_regression(torch.nn.Module):
    def __init__(self, inputSize, outputSize):
        super(linear_regression, self).__init__()
        self.linear = torch.nn.Linear(inputSize, outputSize)

    def forward(self, X):
        out = self.linear(X)
        return out

#fit_linear_reg(train_ds, train_X_ds, train_y_ds, test_X_ds, which_case, fold_no, p_t)
def fit_linear_reg(train_X_torch, train_y_torch, test_X_torch, case_type, fold_no, p_t):
    size_input = train_X_torch.shape[1]
    size_output = train_y_torch.shape[1]
    model = linear_regression(size_input, size_output)
    model.to(torch.device(device_name))
    learningRate = 0.01
    epochs = 1
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=learningRate)
    training_loss_list = []
    for epoch in range(epochs):
        # Converting inputs and labels to Variable
        if torch.cuda.is_available():
            inputs = Variable(train_X_torch.cuda().float())
            labels = Variable(train_y_torch.cuda().float())
        else:
            inputs = Variable(train_X_torch.float())
            labels = Variable(train_y_torch.float())
        # Clear gradient buffers because we don't want any gradient from previous epoch to carry forward, don't want to accumulate gradients
        optimizer.zero_grad()
        # get output from the model, given the inputs
        outputs = 0
        if torch.cuda.is_available():
            outputs = model(inputs.to(device))
        else:
            outputs = model(inputs)
        # get loss for the predicted output
        loss = criterion(outputs, labels)
        print(loss)
        # get gradients w.r.t to parameters
        loss.backward()
        # update parameters
        optimizer.step()
        training_loss_list.append(loss.item())
        #print('epoch {}, loss {}'.format(epoch, loss.item()))
    torch.save(model.state_dict(), './results/weights_' + case_type + '_' + str(fold_no) + '_' + p_t)
    return (model(test_X_torch.float()), training_loss_list)

I have tried to pass my variables to CUDA; however, I am still receiving this error:

---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ccf21586daaf> in <module>
     40
     41 #print("khodesh shekl", test_X_ds.shape)
---> 42 preds, training_loss_list_pt = fit_general(train_X_ds, train_y_ds, test_X_ds, test_y_ds, which_case, k_no, p_t)
     43 k_no += 1
     44 #print("final shape", preds.shape)

<ipython-input-46-abaeea73fcec> in fit_general(train_X_ds, train_y_ds, test_X_ds, test_y_torch, which_case, fold_no, p_t)
      2
      3     if(which_case == "reg_simple"): #ok
----> 4         a, b = fit_linear_reg(train_X_ds, train_y_ds, test_X_ds, which_case, fold_no, p_t)
      5         return a, b
      6

<ipython-input-45-35542c8a0f30> in fit_linear_reg(train_X_torch, train_y_torch, test_X_torch, case_type, fold_no, p_t)
     54     torch.save(model.state_dict(), './results/weights_' + case_type + '_' + str(fold_no) + '_' + p_t)
     55
---> 56     return (model(test_X_torch.float()), training_loss_list)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),

<ipython-input-45-35542c8a0f30> in forward(self, X)
      5
      6     def forward(self, X):
----> 7         out = self.linear(X)
      8         return out
      9

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    887             result = self._slow_forward(*input, **kwargs)
    888         else:
--> 889             result = self.forward(*input, **kwargs)
    890         for hook in itertools.chain(
    891                 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in forward(self, input) 92 93 def forward(self, input: Tensor) -> Tensor: ---> 94 return F.linear(input, self.weight, self.bias) 95 96 def extra_repr(self) -> str: /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1751 if has_torch_function_variadic(input, weight): 1752 return handle_torch_function(linear, (input, weight), input, weight, bias=bias) -> 1753 return torch._C._nn.linear(input, weight, bias) 1754 1755 RuntimeError: Tensor for argument #2 'mat1' is on CPU, but expected it to be on GPU (while checking arguments for addmm) Probably I have missed a variable to pass it to CUDA, but what can it be here? Here is the code that I pass my data to the fit_linear_reg function: #simple model device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device_name = device print("running device: ", device) #fixme: k-fold cross validation n_crossVal = 2 kf = KFold(n_splits = n_crossVal) #, random_state=1, shuffle=True fixme for p_t in key_set_1: print(p_t) cur_ds = [] for i, roi in enumerate(key_set_2): if(i==0): cur_ds = brain_ds[p_t + '_' + roi] else: cur_ds = np.hstack((cur_ds, brain_ds[p_t + '_' + roi])) print(cur_ds.shape) print(n_train) size_input = cur_ds.shape[1] preds_case = np.zeros(glove_ds.shape) k_no = 0 training_loss_list_pt = [] # Linear regression for k_train_index, k_test_index in kf.split(cur_ds): train_X_ds = torch.from_numpy(cur_ds[k_train_index, :]) train_y_ds = torch.from_numpy(glove_ds[k_train_index, :]) train_ds = TensorDataset(train_X_ds, train_y_ds) test_X_ds = torch.from_numpy(cur_ds[k_test_index, :]) test_y_ds = torch.from_numpy(glove_ds[k_test_index, :]) test_ds = TensorDataset(test_X_ds, test_y_ds) #print("khodesh shekl", test_X_ds.shape) preds, training_loss_list_pt = fit_linear_reg(train_X_ds, train_y_ds, test_X_ds, test_y_ds, which_case, k_no, p_t) k_no += 1 #print("final shape", preds.shape) preds_case[k_test_index, :] = preds.detach().numpy() plot_the_training_loss_in_each_fold(training_loss_list_pt, which_case, k_no, p_t) #todo print(training_loss_list_pt) print("prediction results (correlation) for " + p_t + " ") print(np.corrcoef(preds_case, torch.from_numpy(glove_ds))) #todo #calculate the shuffled data #compare the shuffled with normal ones #statistical test #Saving the results #todo #if(which_case == "MM") ## For MM: # Model ke train shod, create the updated glove using each fold, # do linear prediction using the new updated gloves # Compare # Put ifs for cases """ train_X_ds = torch.from_numpy(cur_ds[:n_train, :]) train_y_ds = torch.from_numpy(glove_ds[:n_train, :]) train_ds = TensorDataset(train_X_ds, train_y_ds) valid_X_ds = torch.from_numpy(cur_ds[n_train:n_train+n_val, :]) valid_y_ds = torch.from_numpy(glove_ds[n_train:n_train+n_val, :]) valid_ds = TensorDataset(valid_X_ds, valid_y_ds) test_X_ds = torch.from_numpy(cur_ds[n_train+n_val:, :]) test_y_ds = torch.from_numpy(glove_ds[n_train+n_val:, :]) test_ds = TensorDataset(test_X_ds, test_y_ds) print("analyzing", p_t, ' ', roi, ' :') fit_reg(train_ds, train_X_ds, train_y_ds) #model = MM_Net(train_X_ds, train_y_ds) print(train_X_ds.shape, valid_X_ds.shape, test_X_ds.shape) """
You haven't transferred your test data to the GPU: model(test_X_torch.float().cuda())
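A device-agnostic variant of the same fix, consistent with the device variable already defined in the question — a small self-contained sketch:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(10, 1).to(device)  # stand-in for linear_regression
test_X = torch.rand(4, 10)                 # lives on the CPU by default
preds = model(test_X.float().to(device))   # move inputs to the model's device first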
https://stackoverflow.com/questions/69764390/
Error in 'from torchtext.data import Field, TabularDataset, BucketIterator, Iterator'
I am trying to implement this article https://towardsdatascience.com/bert-text-classification-using-pytorch-723dfb8b6b5b, but I have the following problem. # Preliminaries from torchtext.data import Field, TabularDataset, BucketIterator, Iterator Error ImportError: cannot import name 'Field' from 'torchtext.data' (/usr/local/lib/python3.7/dist-packages/torchtext/data/__init__.py) OSError: /usr/local/lib/python3.7/dist-packages/torchtext/_torchtext.so: undefined symbol: _ZNK3c104Type14isSubtypeOfExtESt10shared_ptrIS0_EPSo
Try from torchtext.legacy.data import Field, TabularDataset, BucketIterator, Iterator. Field has been a legacy feature of torchtext since the 0.9 release, and the article you linked predates that release. If you've got the newest torchtext but are trying to use the legacy functionality, you need to import from torchtext.legacy.*. Check out this tutorial for more info, and also this for the full torchtext release notes.
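If the code has to run against both old and new torchtext versions, a common guard (a sketch) is:

try:
    # torchtext >= 0.9: the old API lives under the legacy namespace
    from torchtext.legacy.data import Field, TabularDataset, BucketIterator, Iterator
except ImportError:
    # torchtext < 0.9: the old API is still at its original location
    from torchtext.data import Field, TabularDataset, BucketIterator, Iterator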
https://stackoverflow.com/questions/69765669/
Jax/Flax (very) slow RNN-forward-pass compared to pyTorch?
I recently implemented a two-layer GRU network in Jax and was disappointed by its performance (it was unusable). So, i tried a little speed comparison with Pytorch. Minimal working example This is my minimal working example and the output was created on Google Colab with GPU-runtime. notebook in colab import flax.linen as jnn import jax import torch import torch.nn as tnn import numpy as np import jax.numpy as jnp def keyGen(seed): key1 = jax.random.PRNGKey(seed) while True: key1, key2 = jax.random.split(key1) yield key2 key = keyGen(1) hidden_size=200 seq_length = 1000 in_features = 6 out_features = 4 batch_size = 8 class RNN_jax(jnn.Module): @jnn.compact def __call__(self, x, carry_gru1, carry_gru2): carry_gru1, x = jnn.GRUCell()(carry_gru1, x) carry_gru2, x = jnn.GRUCell()(carry_gru2, x) x = jnn.Dense(4)(x) x = x/jnp.linalg.norm(x) return x, carry_gru1, carry_gru2 class RNN_torch(tnn.Module): def __init__(self, batch_size, hidden_size, in_features, out_features): super().__init__() self.gru = tnn.GRU( input_size=in_features, hidden_size=hidden_size, num_layers=2 ) self.dense = tnn.Linear(hidden_size, out_features) self.init_carry = torch.zeros((2, batch_size, hidden_size)) def forward(self, X): X, final_carry = self.gru(X, self.init_carry) X = self.dense(X) return X/X.norm(dim=-1).unsqueeze(-1).repeat((1, 1, 4)) rnn_jax = RNN_jax() rnn_torch = RNN_torch(batch_size, hidden_size, in_features, out_features) Xj = jax.random.normal(next(key), (seq_length, batch_size, in_features)) Yj = jax.random.normal(next(key), (seq_length, batch_size, out_features)) Xt = torch.from_numpy(np.array(Xj)) Yt = torch.from_numpy(np.array(Yj)) initial_carry_gru1 = jnp.zeros((batch_size, hidden_size)) initial_carry_gru2 = jnp.zeros((batch_size, hidden_size)) params = rnn_jax.init(next(key), Xj[0], initial_carry_gru1, initial_carry_gru2) def forward(params, X): carry_gru1, carry_gru2 = initial_carry_gru1, initial_carry_gru2 Yhat = [] for x in X: # x.shape = (batch_size, in_features) yhat, carry_gru1, carry_gru2 = rnn_jax.apply(params, x, carry_gru1, carry_gru2) Yhat.append(yhat) # y.shape = (batch_size, out_features) #return jnp.concatenate(Y, axis=0) jitted_forward = jax.jit(forward) Results # uncompiled jax version %time forward(params, Xj) CPU times: user 7min 17s, sys: 8.18 s, total: 7min 25s Wall time: 7min 17s # time for compiling %time jitted_forward(params, Xj) CPU times: user 8min 9s, sys: 4.46 s, total: 8min 13s Wall time: 8min 12s # compiled jax version %timeit jitted_forward(params, Xj) The slowest run took 204.20 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 5: 115 Β΅s per loop # torch version %timeit lambda: rnn_torch(Xt) 10000000 loops, best of 5: 65.7 ns per loop Questions Why is my Jax-implementation so slow? What am i doing wrong? Also, why is compiling taking so long? The sequence is not that long.. Thank you :)
The reason the JAX code compiles slowly is that during JIT compilation JAX unrolls loops. So in terms of XLA compilation, your function is actually very large: you call rnn_jax.apply() 1000 times, and compilation times tend to be roughly quadratic in the number of statements. By contrast, your pytorch function uses no Python loops, and so under the hood it is relying on vectorized operations that run much faster. Any time you use a for loop over data in Python, a good bet is that your code will be slow: this is true whether you're using JAX, torch, numpy, pandas, etc. I'd suggest finding an approach to the problem in JAX that relies on vectorized operations rather than relying on slow Python looping.
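Concretely, the Python loop in forward can be replaced with jax.lax.scan, which compiles the step function once instead of unrolling it 1000 times. An untested sketch, assuming the rnn_jax module, params, and initial carries defined in the question:

import jax

def forward_scan(params, X):
    def step(carries, x):
        # x: one time step of shape (batch_size, in_features)
        c1, c2 = carries
        y, c1, c2 = rnn_jax.apply(params, x, c1, c2)
        return (c1, c2), y
    # scan over the leading (time) axis of X
    _, Y = jax.lax.scan(step, (initial_carry_gru1, initial_carry_gru2), X)
    return Y  # shape (seq_length, batch_size, out_features)

jitted_forward_scan = jax.jit(forward_scan)

Both compile time and run time should drop substantially, since XLA now sees a single loop primitive rather than 1000 copies of the cell.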
https://stackoverflow.com/questions/69767707/
How to add weight normalisation to PyTorch's pretrained VGG16?
I want to add weight normalization to PyTorch pre-trained VGG-16. One possible solution which I can think of is as follows, from torch.nn.utils import weight_norm as wn import torchvision.models as models class ResnetEncoder(nn.Module): def __init__(self): super(ResnetEncoder, self).__init__() ... self.encoder = models.vgg16(pretrained=True).features ... def forward(self, input_image): self.features = [] x = (input_image - self.mean) / self.std self.features.append(self.encoder(x)) ... return self.features class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.encoder = ResnetEncoder() # this is basically VGG16 self.decoder = DepthDecoder(self.encoder.num_ch_enc) for k,m in self.encoder.encoder._modules.items(): if isinstance(m,nn.Conv2d): m = wn(m) def forward(self,x): return self.decoder(self.encoder(x)) vgg_backbone_model = Net() vgg_backbone_model.train() ... But I do not know if this is the correct way to add weight normalization to pre-trained VGG16.
You should be using nn.Module.modules (or named_modules) instead of accessing the _modules attribute. Also, doing m = wn(m) inside the loop only rebinds the local variable m; explicitly re-assigning the wrapped layer on its parent module makes the replacement unambiguous. One way to do this is to combine nn.Module.named_modules with setattr — note that named_modules yields dotted paths (e.g. "encoder.0"), so the attribute must be set on the direct parent, not on the root model (nn.Module.get_submodule is available in recent PyTorch versions):

from torch.nn.utils import weight_norm

for name, module in list(model.named_modules()):
    if isinstance(module, nn.Conv2d):
        parent_name, _, attr_name = name.rpartition('.')
        parent = model.get_submodule(parent_name) if parent_name else model
        setattr(parent, attr_name, weight_norm(module))
https://stackoverflow.com/questions/69767730/
AttributeError: 'GPT2Model' object has no attribute 'gradient_checkpointing'
I am trying to load a GPT2 fine tuned model in flask initially. The model is being loaded during the init functions using: app.modelgpt2 = torch.load('models/model_gpt2.pt', map_location=torch.device('cpu')) app.modelgpt2tokenizer = GPT2Tokenizer.from_pretrained('gpt2') But while performing the prediction task as followed in the snippet below: from flask import current_app input_ids = current_app.modelgpt2tokenizer.encode("sample sentence here", return_tensors='pt') sample_outputs = current_app.modelgpt2.generate(input_ids, do_sample=True, top_k=50, min_length=30, max_length=300, top_p=0.95, temperature=0.7, num_return_sequences=1) It throws the following error as mentioned in the question: AttributeError: 'GPT2Model' object has no attribute 'gradient_checkpointing' The error trace is listed starting from the model.generate function: File "/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/venv/lib/python3.8/site-packages/transformers/generation_utils.py", line 1017, in generate return self.sample( File "/venv/lib/python3.8/site-packages/transformers/generation_utils.py", line 1531, in sample outputs = self( File "/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/venv/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1044, in forward transformer_outputs = self.transformer( File "/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/venv/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 861, in forward print(self.gradient_checkpointing) File "/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1177, in getattr raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'GPT2Model' object has no attribute 'gradient_checkpointing' Checked with modeling_gpt2.py, by default self.gradient_checkpointing is set False in the constructor of the class.
This issue occurs only when the app is run inside a venv or under deployment frameworks like uWSGI or gunicorn. It was resolved by using transformers version 4.10.0 instead of the latest package.
https://stackoverflow.com/questions/69773687/
Constructing parameter groups in pytorch
In the torch.optim documentation, it is stated that model parameters can be grouped and optimized with different optimization hyperparameters. It says that For example, this is very useful when one wants to specify per-layer learning rates: optim.SGD([ {'params': model.base.parameters()}, {'params': model.classifier.parameters(), 'lr': 1e-3} ], lr=1e-2, momentum=0.9) This means that model.base’s parameters will use the default learning rate of 1e-2, model.classifier’s parameters will use a learning rate of 1e-3, and a momentum of 0.9 will be used for all parameters. I was wondering how to define such groups that have parameters() attribute. What came to my mind was something in the form of class MyModel(nn.Module): def __init__(self): super(MyModel, self).__init__() self.base() self.classifier() self.relu = nn.ReLU() def base(self): self.fc1 = nn.Linear(1, 512) self.fc2 = nn.Linear(512, 264) def classifier(self): self.fc3 = nn.Linear(264, 128) self.fc4 = nn.Linear(128, 964) def forward(self, y0): y1 = self.relu(self.fc1(y0)) y2 = self.relu(self.fc2(y1)) y3 = self.relu(self.fc3(y2)) return self.fc4(y3) How should I modify the snippet above to be able to get model.base.parameters()? Is the only way to define a nn.ParameterList and explicitly add weights and biases of the desired layers to that list? What is the best practice?
I will show three approaches to solving this. In the end, though, it comes down to personal preference.

- Grouping parameters with nn.ModuleDict. I noticed here an answer using nn.Sequential to group the layers, which allows targeting different sections of the model via the parameters attribute of nn.Sequential. Indeed, base and classifier might be more than sequential layers. I believe a more general approach is to leave the module as is, but instead initialize an additional nn.ModuleDict module that contains all parameters ordered by optimization group in separate nn.ModuleLists:

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU()  # needed by forward
        self.fc1 = nn.Linear(1, 512)
        self.fc2 = nn.Linear(512, 264)
        self.fc3 = nn.Linear(264, 128)
        self.fc4 = nn.Linear(128, 964)
        self.params = nn.ModuleDict({
            'base': nn.ModuleList([self.fc1, self.fc2]),
            'classifier': nn.ModuleList([self.fc3, self.fc4])})

    def forward(self, y0):
        y1 = self.relu(self.fc1(y0))
        y2 = self.relu(self.fc2(y1))
        y3 = self.relu(self.fc3(y2))
        return self.fc4(y3)

Then you can define your optimizer with:

optim.SGD([
    {'params': model.params.base.parameters()},
    {'params': model.params.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)

Do note MyModel's parameters generator won't contain duplicate parameters.

- Creating an interface for accessing parameter groups. A different solution is to provide an interface in the nn.Module to separate the parameters into groups:

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU()  # needed by forward
        self.fc1 = nn.Linear(1, 512)
        self.fc2 = nn.Linear(512, 264)
        self.fc3 = nn.Linear(264, 128)
        self.fc4 = nn.Linear(128, 964)

    def forward(self, y0):
        y1 = self.relu(self.fc1(y0))
        y2 = self.relu(self.fc2(y1))
        y3 = self.relu(self.fc3(y2))
        return self.fc4(y3)

    def base_params(self):
        return chain(*(m.parameters() for m in [self.fc1, self.fc2]))

    def classifier_params(self):
        return chain(*(m.parameters() for m in [self.fc3, self.fc4]))

having imported chain with from itertools import chain. Note the unpacking with * — chain(m.parameters() for m in ...) without it would chain the generator objects themselves rather than the parameters. Then define your optimizer with:

optim.SGD([
    {'params': model.base_params()},
    {'params': model.classifier_params(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)

- Using child nn.Modules. Lastly, you can define your module sections as submodules (here the method comes down to the same thing as the nn.Sequential approach, yet you can generalize this to any submodules).

class Base(nn.Sequential):
    def __init__(self):
        super().__init__(nn.Linear(1, 512),
                         nn.ReLU(),
                         nn.Linear(512, 264),
                         nn.ReLU())

class Classifier(nn.Sequential):
    def __init__(self):
        super().__init__(nn.Linear(264, 128),
                         nn.ReLU(),
                         nn.Linear(128, 964))

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.base = Base()
        self.classifier = Classifier()

    def forward(self, y0):
        features = self.base(y0)
        out = self.classifier(features)
        return out

Here again you can use the same interface as in the first method:

optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)

I would argue this is the best practice. However, it forces you to define each of your components as a separate nn.Module, which can be a hassle when experimenting with more complex models.
https://stackoverflow.com/questions/69774137/
What is the formal definition for "view" in this context?
Consider the following paragraph from the sub-section named The essence of tensors from the section named Tensors: Multidimensional arrays of the chapter named It starts with a tensor from the book titled Deep Learning with PyTorch by Eli Stevens et al. Python lists or tuples of numbers are collections of Python objects that are individually allocated in memory, as shown on the left in figure 3.3. PyTorch tensors or NumPy arrays, on the other hand, are views over (typically) contiguous memory blocks containing unboxed C numeric types rather than Python objects. Each element is a 32-bit (4-byte) float in this case, as we can see on the right side of figure 3.3. This means storing a 1D tensor of 1,000,000 float numbers will require exactly 4,000,000 contiguous bytes, plus a small overhead for the metadata (such as dimensions and numeric type). And the figure they are referring to is shown below, taken from the book The above paragraph is saying that tensors are views over contiguous memory blocks. What exactly is meant by a view in this context?
A "view" is how you interpret this data, or more precisely, the shape of the tensor. For example, given a memory block with 40 contiguous bytes (10 contiguous floats), you can either view it as a 2x5 tensor, or a 5x2 tensor. In pytorch, the API to change the view of a tensor is view(). Some examples: Python 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> x = torch.randn(10, dtype=torch.float32) >>> x.shape torch.Size([10]) >>> >>> x = x.view(2, 5) >>> x.shape torch.Size([2, 5]) >>> >>> x = x.view(5, 2) >>> x.shape torch.Size([5, 2]) Of course, some views are forbidden for 10 floats: >>> x = x.view(3, 3) Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: shape '[3, 3]' is invalid for input of size 10 view does not change the data in the underlying memory. It merely changes how you "view" the tensor.
https://stackoverflow.com/questions/69776322/
Iterable pytorch dataset with multiple workers
So I have a text file bigger than my ram memory, I would like to create a dataset in PyTorch that reads line by line, so I don't have to load it all at once in memory. I found pytorch IterableDataset as potential solution for my problem. It only works as expected when using 1 worker, if using more than one worker it will create duplicate recods. Let me show you an example: Having a testfile.txt containing: 0 - Dummy line 1 - Dummy line 2 - Dummy line 3 - Dummy line 4 - Dummy line 5 - Dummy line 6 - Dummy line 7 - Dummy line 8 - Dummy line 9 - Dummy line Defining a IterableDataset: class CustomIterableDatasetv1(IterableDataset): def __init__(self, filename): #Store the filename in object's memory self.filename = filename def preprocess(self, text): ### Do something with text here text_pp = text.lower().strip() ### return text_pp def line_mapper(self, line): #Splits the line into text and label and applies preprocessing to the text text, label = line.split('-') text = self.preprocess(text) return text, label def __iter__(self): #Create an iterator file_itr = open(self.filename) #Map each element using the line_mapper mapped_itr = map(self.line_mapper, file_itr) return mapped_itr We can now test it: base_dataset = CustomIterableDatasetv1("testfile.txt") #Wrap it around a dataloader dataloader = DataLoader(base_dataset, batch_size = 1, num_workers = 1) for X, y in dataloader: print(X,y) It outputs: ('0',) (' Dummy line\n',) ('1',) (' Dummy line\n',) ('2',) (' Dummy line\n',) ('3',) (' Dummy line\n',) ('4',) (' Dummy line\n',) ('5',) (' Dummy line\n',) ('6',) (' Dummy line\n',) ('7',) (' Dummy line\n',) ('8',) (' Dummy line\n',) ('9',) (' Dummy line',) That is correct. But If I change the number of workers to 2 the output becomes ('0',) (' Dummy line\n',) ('0',) (' Dummy line\n',) ('1',) (' Dummy line\n',) ('1',) (' Dummy line\n',) ('2',) (' Dummy line\n',) ('2',) (' Dummy line\n',) ('3',) (' Dummy line\n',) ('3',) (' Dummy line\n',) ('4',) (' Dummy line\n',) ('4',) (' Dummy line\n',) ('5',) (' Dummy line\n',) ('5',) (' Dummy line\n',) ('6',) (' Dummy line\n',) ('6',) (' Dummy line\n',) ('7',) (' Dummy line\n',) ('7',) (' Dummy line\n',) ('8',) (' Dummy line\n',) ('8',) (' Dummy line\n',) ('9',) (' Dummy line',) ('9',) (' Dummy line',) Which is incorrect, as is creating duplicates of each sample per worker in the data loader. Is there a way to solve this issue with pytorch? So a dataloader can be created to not load all file in memory with support for multiple workers.
So I found an answer on the torch discuss forum, https://discuss.pytorch.org/t/iterable-pytorch-dataset-with-multiple-workers/135475/3, where they pointed out I should use the worker info to shard the iterator, so that each worker only yields every num_workers-th element starting from its own id. The new dataset would look like this:

import itertools  # islice is used for the sharding

import torch
from torch.utils.data import IterableDataset

class CustomIterableDatasetv1(IterableDataset):
    def __init__(self, filename):
        #Store the filename in object's memory
        self.filename = filename

    def preprocess(self, text):
        ### Do something with text here
        text_pp = text.lower().strip()
        ###
        return text_pp

    def line_mapper(self, line):
        #Splits the line into text and label and applies preprocessing to the text
        text, label = line.split('-')
        text = self.preprocess(text)
        return text, label

    def __iter__(self):
        worker_total_num = torch.utils.data.get_worker_info().num_workers
        worker_id = torch.utils.data.get_worker_info().id
        #Create an iterator
        file_itr = open(self.filename)
        #Map each element using the line_mapper
        mapped_itr = map(self.line_mapper, file_itr)
        #Add multiworker functionality
        mapped_itr = itertools.islice(mapped_itr, worker_id, None, worker_total_num)
        return mapped_itr

Special thanks to @Ivan, who also pointed out the slicing solution. With two workers it now returns the same data as with a single worker.
https://stackoverflow.com/questions/69778356/
How to normalize pytorch model output to be in range [0,1]
Let's say I have a model called UNet: output = UNet(input). That output is a batch of grayscale images, shape (batch_size, 1, 128, 128). What I want to do is normalize each image to be in the range [0, 1]. I did it like this:

for i in range(batch_size):
    output[i,:,:,:] = output[i,:,:,:]/torch.amax(output,dim=(1,2,3))[i]

Now every image in the output is normalized, but when I train such a model PyTorch claims it cannot calculate the gradients in this procedure, and I understand why. My question is: what is the right way to normalize the images without killing the backpropagation flow? Something like:

output = UNet(input)
output = output.normalize
output2 = some_model(output)
loss = ..
loss.backward()
optimize.step()

My only option right now is adding a sigmoid activation at the end of the UNet, but I don't think it's a good idea.

Update — code (gen2 and disc are the UNet and discriminator models; est_bias is some output):

with torch.no_grad():
    est_bias_for_disc = gen2(input_img)
    est_bias_for_disc /= est_bias_for_disc.amax(dim=(1,2,3), keepdim=True)
disc_fake_hat = disc(est_bias_for_disc.detach())
disc_fake_loss = BCE(disc_fake_hat, torch.zeros_like(disc_fake_hat))
disc_real_hat = disc(bias_ref)
disc_real_loss = BCE(disc_real_hat, torch.ones_like(disc_real_hat))
disc_loss = (disc_fake_loss + disc_real_loss) / 2
if epoch<=epochs_till_gen2_stop:
    disc_loss.backward(retain_graph=True)  # Update gradients
    opt_disc.step()  # Update optimizer

Then there's separate training:

opt_gen2.zero_grad()
est_bias = gen2(input_img)
est_bias /= est_bias.amax(dim=(1,2,3), keepdim=True)
disc_fake = disc(est_bias)
ADV_loss = BCE(disc_fake, torch.ones_like(disc_fake))
gen2_loss = ADV_loss
gen2_loss.backward()
opt_gen2.step()
You are overwriting the tensor's values in place because of the indexing on the batch dimension, which is what breaks backpropagation. Instead, you can perform the operation in vectorized form:

output = output / output.amax(dim=(1,2,3), keepdim=True)

The keepdim=True argument keeps the number of dimensions of torch.Tensor.amax's output equal to that of its input, which lets the division broadcast correctly across the batch.
https://stackoverflow.com/questions/69778474/
Understanding the PyTorch implementation of Conv2DTranspose
I am trying to understand an example snippet that makes use of the PyTorch transposed convolution function, with documentation here, where in the docs the author writes: "The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sizes of the input." Consider the snippet below where a sample image of shape [1, 1, 4, 4] containing all ones is input to a ConvTranspose2D operation with arguments stride=2 and padding=1 with a weight matrix of shape (1, 1, 4, 4) that has entries from a range between 1 and 16 (in this case dilation=1 and added_padding = 1*(4-1)-1 = 2) sample_im = torch.ones(1, 1, 4, 4).cuda() sample_deconv = nn.ConvTranspose2d(1, 1, 4, 2, 1, bias=False).cuda() sample_deconv.weight = torch.nn.Parameter( torch.tensor([[[[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.], [13., 14., 15., 16.]]]]).cuda()) Which yields: >>> sample_deconv(sample_im) tensor([[[[ 6., 12., 14., 12., 14., 12., 14., 7.], [12., 24., 28., 24., 28., 24., 28., 14.], [20., 40., 44., 40., 44., 40., 44., 22.], [12., 24., 28., 24., 28., 24., 28., 14.], [20., 40., 44., 40., 44., 40., 44., 22.], [12., 24., 28., 24., 28., 24., 28., 14.], [20., 40., 44., 40., 44., 40., 44., 22.], [10., 20., 22., 20., 22., 20., 22., 11.]]]], device='cuda:0', grad_fn=<CudnnConvolutionTransposeBackward>) Now I have seen simple examples of transposed convolution without stride and padding. For instance, if the input is a 2x2 image [[2, 4], [0, 1]], and the convolutional filter with one output channel is [[3, 1], [1, 5]], then the resulting tensor of shape (1, 1, 3, 3) can be seen as the sum of the four colored matrices in the image below: The problem is I can't seem to find examples that use strides and/or padding in the same visualization. As per my snippet, I am having a very difficult time understanding how the padding is applied to the sample image, or how the stride works to get this output. Any insights appreciated, even just understanding how the 6 in the (0,0) entry or the 12 in the (0,1) entry of the resulting matrix are computed would be very helpful.
The output spatial dimensions of nn.ConvTranspose2d are given by: out = (x - 1)s - 2p + d(k - 1) + op + 1 where x is the input spatial dimension and out the corresponding output size, s is the stride, d the dilation, p the padding, k the kernel size, and op the output padding. Keeping the operands of your small example (input [[2, 4], [0, 1]], kernel [[3, 1], [1, 5]]): for each value of the input, we compute a buffer (of the corresponding color in the original answer's diagrams) by taking its product with each element of the kernel. Here are the visualizations for s=1, p=0, s=1, p=1, s=2, p=0, and s=2, p=1: s=1, p=0: output is 3x3. For the blue buffer, we have (1) 2*k_top-left = 2*3 = 6; (2) 2*k_top-right = 2*1 = 2; (3) 2*k_bottom-left = 2*1 = 2; (4) 2*k_bottom-right = 2*5 = 10. s=1, p=1: output is 1x1. s=2, p=0: output is 4x4. s=2, p=1: output is 2x2.
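As a quick sanity check of the size formula, here is a sketch that runs the 2x2 example through nn.ConvTranspose2d for each of the four configurations (the kernel values are the ones from the question's small example):

```python
import torch
import torch.nn as nn

x = torch.tensor([[[[2., 4.], [0., 1.]]]])   # input, shape (1, 1, 2, 2)
w = torch.tensor([[[[3., 1.], [1., 5.]]]])   # kernel, shape (1, 1, 2, 2)

for s, p in [(1, 0), (1, 1), (2, 0), (2, 1)]:
    deconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=s, padding=p, bias=False)
    deconv.weight = nn.Parameter(w)
    print(f"s={s}, p={p}: {tuple(deconv(x).shape)}")
# s=1, p=0: (1, 1, 3, 3)
# s=1, p=1: (1, 1, 1, 1)
# s=2, p=0: (1, 1, 4, 4)
# s=2, p=1: (1, 1, 2, 2)
```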
https://stackoverflow.com/questions/69782823/
Convolution and convolution transposed do not cancel each other
I'm trying to implement an autoencoder CNN. However, I have the following problem: The last convolutional layer of my encoder is defined as follows: Conv2d(128, 256, 3, padding=1, stride=2) The input of this layer has shape (1, 128, 24, 24). Thus, the output has shape (1, 256, 12, 12). After this layer, I have ReLU activation and BatchNorm. Neither of these changes the shape of the output. Then I have a first ConvTranspose2d layer defined as: ConvTranspose2d(256, 128, 3, padding=1, stride=2) But the output of this layer has shape (1, 128, 23, 23). As far as I know, if we use the same kernel size, stride, and padding in ConvTranspose2d as in the preceding Conv2d layer, then the output of this two-layer block must have the same shape as its input. So, my question is: what is wrong with my understanding? And how can I fix this issue?
I would first like to note that the nn.ConvTranspose2d layer is not the inverse of nn.Conv2d as explained in its documentation page: it is not an actual deconvolution operation as it does not compute a true inverse of convolution As far as I know, if we use the same kernel size, stride, and padding in ConvTranspose2d as in the preceding Conv2d layer, then the output of this two-layer block must have the same shape as its input. This is not always true! It depends on the input spatial dimensions. In terms of spatial dimensions the 2D convolution will output: out = [(x + 2p - d(k - 1) - 1)/s + 1] where [x] denotes the floor of x, while the 2D transpose convolution will output: out = (x - 1)s - 2p + d(k - 1) + op + 1 where x = input_dimension, out = output_dimension, k = kernel_size, s = stride, d = dilation, p = padding, and op = output_padding. If you look at the convT o conv operator (i.e. convT(conv(x))) then you have: out = (out_conv - 1)s - 2p + d(k - 1) + op + 1 = ([(x + 2p - d(k - 1) - 1)/s + 1] - 1)s - 2p + d(k - 1) + op + 1 This equals x only if we have [(x + 2p - d(k - 1) - 1)/s + 1] = (x + 2p - d(k - 1) - 1)/s + 1, i.e. only if s divides (x + 2p - d(k - 1) - 1) exactly; with your parameters (k=3, s=2, p=1, d=1) that means x must be odd. In this case: out = ((x + 2p - d(k - 1) - 1)/s + 1 - 1)s - 2p + d(k - 1) + op + 1 = x + op And out = x when op = 0. Otherwise if x is even then: out = x - 1 + op And setting op = 1 gives out = x. Here is an example: >>> conv = nn.Conv2d(1, 1, 3, stride=2, padding=1) >>> convT = nn.ConvTranspose2d(1, 1, 3, stride=2, padding=1) >>> convT(conv(torch.rand(1, 1, 25, 25))).shape # x odd (1, 1, 25, 25) #<- out = x >>> convT = nn.ConvTranspose2d(1, 1, 3, stride=2, padding=1, output_padding=1) >>> convT(conv(torch.rand(1, 1, 24, 24))).shape # x even (1, 1, 24, 24) #<- out = x - 1 + op
https://stackoverflow.com/questions/69786125/
module 'torch' has no attribute 'linalg'
I am getting this error AttributeError: module 'torch' has no attribute 'linalg' when updating the parameters using optimizer.step(model.closure). I am using Pytorch version 1.4.0.
The torch.linalg module was only introduced in PyTorch 1.7.0, so it does not exist in your 1.4.0 installation. Update PyTorch and try again.
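A quick way to confirm the mismatch before upgrading — a sketch; the exact upgrade command depends on your environment (pip vs. conda, CUDA build):

```python
import torch

print(torch.__version__)         # 1.4.0 here
print(hasattr(torch, 'linalg'))  # False on 1.4.0, True from 1.7.0 onwards
# then upgrade, e.g.: pip install --upgrade torch
```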
https://stackoverflow.com/questions/69787137/
pytorch save and load model
Is there any difference between the original model and a saved-then-loaded model? Before training, I just saved the model and then loaded it, because I wanted to know if anything changes during saving and loading. Here's my code (just a model for testing): class test_model(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(in_channels = 3, out_channels = 50, kernel_size = 3, stride=1, padding=1, bias = False) self.maxpool1 = nn.MaxPool2d(2, 2) self.bn1 = nn.BatchNorm2d(50) self.conv2_ = nn.Conv2d(in_channels = 50, out_channels = 10, kernel_size = 1, stride=1, padding=0, bias = False) self.conv2 = nn.Conv2d(in_channels = 10, out_channels = 50, kernel_size = 3, stride=1, padding=1, bias = False) self.maxpool2 = nn.MaxPool2d(2, 2) self.bn2 = nn.BatchNorm2d(50) self.conv3_ = nn.Conv2d(in_channels = 50, out_channels = 10, kernel_size = 1, stride=1, padding=0, bias = False) self.conv3 = nn.Conv2d(in_channels = 10, out_channels = 50, kernel_size = 3, stride=1, padding=1, bias = False) self.maxpool3 = nn.MaxPool2d(2, 2) self.bn3 = nn.BatchNorm2d(50) self.conv4_ = nn.Conv2d(in_channels = 50, out_channels = 20, kernel_size = 1, stride=1, padding=0, bias = False) self.conv4 = nn.Conv2d(in_channels =20, out_channels = 100, kernel_size = 3, stride=1, padding=1, bias = False) self.maxpool4 = nn.MaxPool2d(2, 2) self.bn4 = nn.BatchNorm2d(100) self.conv5_ = nn.Conv2d(in_channels = 100, out_channels = 10, kernel_size = 1, stride=1, padding=0, bias = False) self.conv5 = nn.Conv2d(in_channels = 10, out_channels = 100, kernel_size = 3, stride=1, padding=1, bias = False) self.maxpool5 = nn.MaxPool2d(2, 2) self.bn5 = nn.BatchNorm2d(100) self.fc = nn.Sequential(Flatten(), nn.Linear(100*7*7, 100), nn.ReLU(), nn.Linear(100,100)) def forward(self, inputs): feature_map1 = self.conv1(inputs) feature_map1 = self.maxpool1(feature_map1) feature_map1 = self.bn1(feature_map1) feature_map2 = self.conv2_(feature_map1) feature_map2 = self.conv2(feature_map2) feature_map2 = self.maxpool2(feature_map2) feature_map2 = self.bn2(feature_map2) feature_map3 = self.conv3_(feature_map2) feature_map3 = self.conv3(feature_map3) feature_map3 = self.maxpool3(feature_map3) feature_map3 = self.bn3(feature_map3) feature_map4 = self.conv4_(feature_map3) feature_map4 = self.conv4(feature_map4) feature_map4 = self.maxpool4(feature_map4) feature_map4 = self.bn4(feature_map4) feature_map5 = self.conv5_(feature_map4) feature_map5 = self.conv5(feature_map5) feature_map5 = self.maxpool5(feature_map5) feature_map5 = self.bn5(feature_map5) output = self.fc(feature_map5) return output then model_cpu = test_model() save and load torch.save(model_cpu, '/home/mskang/hyeokjong/model_cpu.pt') model_load = torch.load('/home/mskang/hyeokjong/model_cpu.pt') and model_load == model_cpu ------------------------------------ False However, print(model_load) print(model_cpu) seem the same. Furthermore, I also trained both models (model_load, model_cpu) and the results look the same too. So I think those two models are the same and should be the same. But why False?
They have the same underlying model but are different Python objects. nn.Module does not define __eq__, so == falls back to the default identity comparison, which is why model_load == model_cpu returns False. You can see model_load and model_cpu as two copies of the same nn.Module.
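A small sketch to verify this, reusing the question's names: the two objects differ by identity, yet their parameters and buffers are identical, which you can check by comparing state dicts:

```python
import torch

sd1, sd2 = model_cpu.state_dict(), model_load.state_dict()
same_params = all(torch.equal(sd1[k], sd2[k]) for k in sd1)

print(model_load is model_cpu)  # False -- distinct Python objects
print(same_params)              # True  -- identical weights and buffers
```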
https://stackoverflow.com/questions/69787273/
Why does PATH differ when I connect to my Docker container with ssh or with exec/attach?
I build a Docker image based on the following Dockerfile: ARG PYTORCH="1.6.0" ARG CUDA="10.1" ARG CUDNN="7" FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel ENV TORCH_CUDA_ARCH_LIST="6.0 6.1 7.0+PTX" ENV TORCH_NVCC_FLAGS="-Xfatbin -compress-all" ENV CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" # SSH SERVER RUN apt-get update && apt-get install openssh-server sudo -y RUN echo "PermitRootLogin yes" >> /etc/ssh/sshd_config RUN echo 'root:root' | chpasswd WORKDIR / EXPOSE 22 CMD ["service ssh start"] I launch the Docker container with docker run -it -d -p 7220:22 --name ssh-server-test ssh-server-image /bin/bash If I connect to the container with docker exec -it ssh-server-test /bin/bash or docker attach ssh-server-test, I get the PATH I expect: root@9264667daf83:/# echo $PATH /opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin However, if I start an ssh server with root@9264667daf83:/# service ssh start * Starting OpenBSD Secure Shell server sshd [ OK ] root@9264667daf83:/# and I connect to the Docker container through ssh as root, then the PATH is completely different! root@9264667daf83:~# echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games Why? And how can I get the right PATH variable also when I connect to the container through ssh?
The environment variable you are asking about has been set by Docker itself, via the ENV clause — check the source code of the base image's Dockerfile. Variables set by ENV exist at the build stage, at run time, and when you exec into a running container. More here. But when you SSH into the container, the usual Linux login mechanism of sourcing files like ~/.bashrc takes over, and there is no PATH entry for conda, nvidia, etc. in those files. As a workaround, you can patch /root/.bashrc at the build stage with the corresponding export PATH. For example, you can add to the Dockerfile RUN echo 'export PATH=/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:$PATH' >> /root/.bashrc UPD: in case you want to use exactly the same PATH as on the build stage, you can use RUN echo "export PATH=${PATH}" >> /root/.bashrc
https://stackoverflow.com/questions/69788652/
Combining two torchvision.dataset objects into a single DataLoader in PyTorch
I am training a GAN on the Cifar-10 dataset in PyTorch (and hence don't need train/val/test splits), and I want to be able to combine the torchvision.datasets.CIFAR10 objects in the snippet below to form one single torch.utils.data.DataLoader iterator. My current solution is something like: import torchvision import torch batch_size = 128 cifar_trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False) cifar_testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False) cifar_dl1 = torch.utils.data.DataLoader(cifar_trainset, batch_size=batch_size, num_workers=12, persistent_workers=True, shuffle=True, pin_memory=True) cifar_dl2 = torch.utils.data.DataLoader(cifar_testset, batch_size=batch_size, num_workers=12, persistent_workers=True, shuffle=True, pin_memory=True) And then in my training loop I have something like: for dl in [cifar_dl1, cifar_dl2]: for data in dl: # training The problem with this approach in a multi-threaded context, where I have found for my setup and this task that the optimal number of workers is 12, is that now I am declaring 24 workers in total, which is clearly too many, not to mention the start-up time costs associated with re-iterating over each dataloader in spite of the benefits of the persistent workers flag for each. Any solutions to this problem are much appreciated.
You can use ConcatDataset from torch.utils.data module. Code Snippet: import torch import torchvision batch_size = 128 cifar_trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False) cifar_testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=False) cifar_dataset = torch.utils.data.ConcatDataset([cifar_trainset, cifar_testset]) cifar_dataloader = torch.utils.data.DataLoader(cifar_dataset, batch_size=batch_size, num_workers=12, persistent_workers=True, shuffle=True, pin_memory=True) for data in cifar_dataloader: # training
https://stackoverflow.com/questions/69792591/
RuntimeError: shape '[10, 3, 150, 150]' is invalid for input of size 472500
I'm trying to perform a Convolutional operation on the covid CT Dataset and constantly getting this error. My image size in the train loader was (10, 150, 150, 3) and I reshaped it into [10, 3, 150, 150], using torch.reshape(). Can anybody help me with problem My CNN Code class BConv(nn.Module): def __init__(self, out=3): super(BConv, self).__init__() #(10, 150, 150, 3) self.conv1=nn.Conv2d(in_channels=3,out_channels=12,kernel_size=3,stride=1,padding=1) self.bn1=nn.BatchNorm2d(num_features=12) self.relu1=nn.ReLU() self.pool=nn.MaxPool2d(kernel_size=2) self.conv2=nn.Conv2d(in_channels=12,out_channels=20,kernel_size=3,stride=1,padding=1) self.relu2=nn.ReLU() # self.conv3=nn.Conv2d(in_channels=20,out_channels=32,kernel_size=3,stride=1,padding=1) # self.bn3=nn.BatchNorm2d(num_features=32) # self.relu3=nn.ReLU() self.fc=nn.Linear(in_features= 20*75*75, out_features=3) def forward(self,input): output=self.conv1(input) #print("output 1", output.shape) output=self.bn1(output) #print("output 1", output.shape) output=self.relu1(output) #print("output 1", output.shape) output=self.pool(output) #print("output 1", output.shape) output=self.conv2(output) #print("output 1", output.shape) output=self.relu2(output) #print("output 1", output.shape) # output=self.conv3(output) # output=self.bn3(output) # output=self.relu3(output) print(output.shape) #Above output will be in matrix form, with shape (256,32,75,75) output=output.view(output.size(0), -1) output=self.fc(output) return output Data Preprocessing class Ctdataset(Dataset): def __init__(self, path): self.data= pd.read_csv(path, delimiter=" ") data= self.data.values.tolist() self.image= [] self.labels=[] for i in data: self.image.append(i[0]) self.labels.append(i[1]) #print(len(self.image), len(self.labels)) #self.class_map = {"0": 0, "1":1 , "2": 2} def __len__(self): return len(self.image) def __getitem__(self, idx): img_path = os.path.join("2A_images", self.image[idx]) img= Image.open(img_path).convert("RGB") img= img.resize((150, 150)) img= np.array(img) img= img.astype(float) return img, label
Here I'm considering your whole model including the third block consisting of conv3, bn3, and relu3. There are a few things to note: Reshaping is substantially different from permuting the axes. When you say you have an input shape of (batch_size, 150, 150, 3), it means the channel axis is last. Since PyTorch 2D builtin layers work in the NCHW format you need to permute the axes: you can do so with torch.Tensor.permute: >>> x = torch.rand(10, 150, 150, 3) >>> x.permute(0, 3, 1, 2).shape (10, 3, 150, 150) Assuming your input is shaped (batch_size, 3, 150, 150), then the output shape of relu3 will be (batch_size, 32, 75, 75). As such the following fully connected layer must have exactly 32*75*75 input features. However you need to flatten this tensor as you did in your code with a view: output = output.view(output.size(0), -1). Another approach is to define a self.flatten = nn.Flatten() layer and call it with output = self.flatten(output). As of PyTorch v1.8.0, an alternative to setting the in_features in your fully connected layer is to use nn.LazyLinear which will initialize it for you based on the first inference: >>> self.fc = nn.LazyLinear(out_features=3) Side note: you don't need to define separate ReLU layers with relu1, relu2, and relu3 as they're non-parametric functions: >>> self.relu = nn.ReLU() Here is the full code for reference: class BConv(nn.Module): def __init__(self, out=3): super().__init__() # input shape (10, 150, 150, 3) self.conv1 = nn.Conv2d(3, 12,kernel_size=3, stride=1, padding=1) self.bn1 = nn.BatchNorm2d(num_features=12) self.pool = nn.MaxPool2d(kernel_size=2) self.conv2 = nn.Conv2d(12, 20,kernel_size=3, stride=1, padding=1) self.conv3 = nn.Conv2d(20, 32, kernel_size=3, stride=1, padding=1) self.bn3 = nn.BatchNorm2d(num_features=32) self.relu = nn.ReLU() self.flatten = nn.Flatten() self.fc = nn.Linear(in_features=32*75*75, out_features=out) def forward(self,input): output = input.permute(0, 3, 1, 2) output = self.conv1(output) output = self.bn1(output) output = self.relu(output) output = self.pool(output) output = self.conv2(output) output = self.relu(output) output = self.conv3(output) output = self.bn3(output) output = self.relu(output) output = self.flatten(output) output = self.fc(output) return output
https://stackoverflow.com/questions/69794901/
Augmentation in torch vision transform is not working as expected
I'm developing a CNN using PyTorch. My model gives good accuracy on both the training and test sets without augmentation, but I wanted to learn augmentation, so I used torchvision transforms. After applying the augmentation, the model started doing worse and the loss is not decreasing at all. So I tried to debug and observed that the augmented image looks distorted/unexpected. Can somebody please help me solve this? Custom dataset: class traindataset(Dataset): def __init__(self,data,train_end_idx,augmentation = None): ''' data: data is a pandas dataframe generated from csv file where it has columns-> [name,labels,col 1,col2,...,col784]. shape of data->(10000, 786) ''' self.data=data self.augmentation=augmentation self.train_end=train_end_idx self.target=self.data.iloc[:self.train_end,1].values self.image=self.data.iloc[:self.train_end,2:].values#contains full data def __len__(self): return len(self.target); def __getitem__(self,idx): self.target=self.target self.ima=self.image[idx].reshape(1,784) #only takes the selected index if self.augmentation is not None: self.ima = self.augmentation(self.ima) return torch.tensor(self.target[idx]),self.ima Augmentation used: torchvision_transform = transforms.Compose([ np.uint8, transforms.ToPILImage(), transforms.Resize((28,28)), transforms.RandomRotation([45,135]), transforms.ToTensor() ]) Augmented image (PFA the picture): transformed=torchvision_transform(x) plt.imshow(transformed.squeeze().numpy(), interpolation='nearest') plt.show() Normal image: x=data.iloc[:1,2:].values plt.imshow(x.reshape(28,28), interpolation='nearest') plt.show() The first image is with augmentation and the second image is without augmentation. If you want, you can play with the code here without downloading anything.
It seems the tensor was not in the expected shape when it entered the pipeline: transforms.ToPILImage() received the flat (1, 784) array and built a 1x784 image, which transforms.Resize() then squashed into 28x28 — hence the distortion. Reshaping to (28, 28) first fixes the issue and produces correct images (you already did this step in the albumentations section): transformed = torchvision_transform(x.reshape(28,28))
https://stackoverflow.com/questions/69796927/
Indexing a tensor with None in PyTorch
I've seen this syntax to index a tensor in PyTorch, not sure what it means: v = torch.div(t, n[:, None]) where v, t, and n are tensors. What is the role of "None" here? I can't seem to find it in the documentation.
Similar to NumPy, you can insert a singleton dimension ("unsqueeze" a dimension) by indexing this dimension with None. In turn n[:, None] will have the effect of inserting a new dimension on dim=1. This is equivalent to n.unsqueeze(dim=1): >>> n = torch.rand(3, 100, 100) >>> n[:, None].shape (3, 1, 100, 100) >>> n.unsqueeze(1).shape (3, 1, 100, 100) Here are some other types of None indexing. In the example above, : was used as a placeholder to designate the first dimension dim=0. If you want to insert a dimension on dim=2, you can add a second : as in n[:, :, None]. You can also place None with respect to the last dimension instead. To do so you can use the ellipsis syntax ...: n[..., None] will insert a dimension last, i.e. n.unsqueeze(dim=-1), and n[..., None, :] on the before-last dimension, i.e. n.unsqueeze(dim=-2).
https://stackoverflow.com/questions/69797614/
Install PyTorch without installing Python 3.9 on Win7
Got clean installation of Anaconda (Python 3.8.8), tried to install PyTorch by running conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch as suggested by the docs. The thing is, conda tries to install Python 3.9.7 in the process, which does not support Windows 7 (the famous "api-ms-win-core-path-l1-1-0.dll missing error"). Is there a way to make the installation of PyTorch without switching to the latest Python?
If you start with a clean environment that already has the specific Python version you need, you can pass the --freeze-installed flag to prevent the installer from making any changes to already-installed packages (such as upgrading Python); see the documentation.
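For example — a sketch, where the environment name and version pins are assumptions — you could create a fresh environment pinned to your Python with conda create -n torch38 python=3.8.8, activate it, and then install with conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch --freeze-installed. Adding python=3.8.8 directly to the install command is another way to stop conda from pulling in Python 3.9.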
https://stackoverflow.com/questions/69797960/
Prevent Updating for Specific Element of Convolutional Weight Matrix
I’m trying to set one element of weight to 1 and then hold it the same until the end of learning (prevent it from updating in the next epochs). I know I can set requires_grad = False but I just want this process for one element not all of the elements.
You can attach a backward hook on your nn.Module such that during backpropagation you can overwrite the element of interest to 0. This makes sure its value never changes without preventing backpropagation of the gradient to the input. The new API for backward hooks is nn.Module.register_full_backward_hook. First construct a callback function that will be used as the layer hook: def freeze_single(index): def callback(module, grad_input, grad_output): module.weight.grad.data[index] = 0 return callback Then, we can attach this hook to any nn.Module. For instance, here I've decided to freeze component [0, 1, 2, 1] of the convolutional layer: >>> conv = nn.Conv2d(3, 1, 3) >>> conv.weight.data[0, 1, 2, 1] = 1 >>> conv.register_full_backward_hook(freeze_single((0, 1, 2, 1))) Everything is set correctly, let us try: >>> x = torch.rand(1, 3, 10, 10, requires_grad=True) >>> conv(x).mean().backward() Here we can verify the gradient of component [0, 1, 2, 1] is indeed equal to 0: >>> conv.weight.grad tensor([[[[0.4954, 0.4776, 0.4639], [0.5179, 0.4992, 0.4856], [0.5271, 0.5219, 0.5124]], [[0.5367, 0.5035, 0.5009], [0.5703, 0.5390, 0.5207], [0.5422, 0.0000, 0.5109]], # <- [[0.4937, 0.5150, 0.5200], [0.4817, 0.5070, 0.5241], [0.5039, 0.5295, 0.5445]]]]) You can detach/reattach the hook anytime with: >>> hook = conv.register_full_backward_hook(freeze_single((0, 1, 2, 1))) >>> hook.remove() Don't forget if you remove the hook, the value of that component will change when you update your weights. You will have to reset it to 1 if you so desire. Otherwise, you can implement a second hook - a register_forward_pre_hook hook this time - to handle that.
https://stackoverflow.com/questions/69801287/
Is there any efficient way to calculate covariance matrix using PyTorch?
I want to calculate covariance matrix from vectors a and b, like k[i][j] = exp( -(a[i]-b[j])**2 ). In numpy, I can write as follows, import numpy as np r = np.subtract.outer(a, b) k = np.exp(-r*r) In PyTorch, I can write naive code, but it's slower than numpy. import torch for i in range(len(a)): for j in range(len(b)): k[i][j] = torch.exp( -(a[i]-b[j])**2 ) How should I write efficient code using PyTorch?
You can use broadcasting: r = a[:, None] - b[None, :] k = torch.exp(-r**2)
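A self-contained sketch checking the broadcasted version against the NumPy one from the question:

```python
import numpy as np
import torch

a = torch.rand(1000)
b = torch.rand(1000)

r = a[:, None] - b[None, :]   # shape (1000, 1000) via broadcasting
k = torch.exp(-r * r)

# sanity check against the numpy version
k_np = np.exp(-np.subtract.outer(a.numpy(), b.numpy()) ** 2)
print(np.allclose(k.numpy(), k_np))  # True
```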
https://stackoverflow.com/questions/69813844/
How to create a PyTorch hook with conditions?
I'm learning about hooks and working with a binarized neural network. The issue is that sometimes my gradients are 0 in the backward pass. I'm trying to replace those gradients with a certain value. Say I have the following network: import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 2) self.fc2 = nn.Linear(2, 3) self.fc3 = nn.Linear(3, 1) def forward(self, x): x = self.fc1(x) x = torch.relu(x) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x net = Model() opt = optim.Adam(net.parameters()) And also some features: features = torch.rand((3,1)) I can train it normally using: for i in range(10): opt.zero_grad() out = net(features) loss = torch.mean(torch.square(torch.tensor(5) - torch.sum(out))) loss.backward() opt.step() How can I attach a hook function that will apply the following conditions in the backward pass (for each layer): If all the gradients in a single layer are 0, change them to 1.0. If one of the gradients is 0 but there's at least one gradient that is not 0, change it to 0.5.
You can attach a callback function on your nn.Module with nn.Module.register_full_backward_hook: You will have to handle both cases: if all elements are equal to zero using torch.all, else (i.e. at least one is non zero) if at least one is equal to zero using torch.any. def grad_mod(module, grad_inputs, grad_outputs): if module.weight.grad is None: # safety measure for last layer return None # and layers w/ require_grad=False flat = module.weight.grad.view(-1) if torch.all(flat == 0): flat.data.fill_(1.) elif torch.any(flat == 0): flat.data.scatter_(0, (flat == 0).nonzero()[:,0], value=.5) The instruction in the first clause will fill all values to 1. while the instruction in the second will only replace zero values with .5. Attach the hook on an nn.Module: >>> net.fc3.register_full_backward_hook(grad_mod) Here I use print statements before and after mutating flat to showcase the effect of the hook: >>> net(torch.rand((3,1))).backward(torch.tensor([[0],[1],[2]])) >>> tensor([0.0947, 0.0000, 0.0000]) # before >>> tensor([0.0947, 0.5000, 0.5000]) # after >>> net(torch.rand((3,1))).backward(torch.tensor([[0],[1],[2]])) >>> tensor([0., 0., 0.]) # before >>> tensor([1., 1., 1.]) # after In order to apply this hook to multiple layers you can wrap grad_mod and utilize nn.Module.apply recursive behavior: >>> def apply_grad_mod(module): ... if hasattr(module, 'weight'): ... module.register_full_backward_hook(grad_mod) Then the following will apply the hook on all layer weights. >>> net.apply(apply_grad_mod) Note: you will have to extend this behavior if you wish to also affect the biases!
https://stackoverflow.com/questions/69817536/
Is there a pytorch function to find unique tuples in a given tensor (of size N*h*w*2)?
I am trying to extract the unique tuples in a (N * h * w * 2) tensor. For example, a 1 * 2 * 3 * 2 tensor where there are 6 tuples: a = torch.tensor([[[[1,2], [2,3], [3,4]], [[4,5], [1,2], [3,4]]]]) and I am trying to find the indices of the unique tuples (i.e., indices of [1,2], [2,3], [3,4], [4,5], with duplicates removed). I've already checked out torch.unique(), but it doesn't seem to work.
You can compute the difference between all pairs: d = torch.abs(a.view(-1, 1, 2) - a.view(1, -1, 2)).sum(dim=-1) Then you can find the pairs with zero difference (masking out self-comparisons and symmetric duplicates using triu): i, j = torch.where((d + torch.triu(torch.ones_like(d))) == 0) Resulting in: i, j = (tensor([4, 5]), tensor([0, 2])) That is, the 4th tuple in a is identical to the 0th, and the 5th is identical to the 2nd.
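An alternative sketch: torch.unique does work here if you flatten the tensor to rows first, since its dim argument treats each row as one element:

```python
import torch

a = torch.tensor([[[[1, 2], [2, 3], [3, 4]],
                   [[4, 5], [1, 2], [3, 4]]]])

tuples = a.view(-1, 2)   # shape (6, 2): one row per tuple
uniq, inverse = torch.unique(tuples, dim=0, return_inverse=True)
print(uniq)     # tensor([[1, 2], [2, 3], [3, 4], [4, 5]])
print(inverse)  # for each of the 6 tuples, the index of its unique row
```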
https://stackoverflow.com/questions/69819069/
Predicting Sentiment of Raw Text using Trained BERT Model, Hugging Face
I'm predicting sentiment analysis of Tweets with positive, negative, and neutral classes. I've trained a BERT model using Hugging Face. Now I'd like to make predictions on a dataframe of unlabeled Twitter text and I'm having difficulty. I've followed the following tutorial (https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/) and was able to train a BERT model using Hugging Face. Here's an example of predicting on raw text however it's only one sentence and I would like to use a column of Tweets. https://curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/#predicting-on-raw-text review_text = "I love completing my todos! Best app ever!!!" encoded_review = tokenizer.encode_plus( review_text, max_length=MAX_LEN, add_special_tokens=True, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'].to(device) attention_mask = encoded_review['attention_mask'].to(device) output = model(input_ids, attention_mask) _, prediction = torch.max(output, dim=1) print(f'Review text: {review_text}') print(f'Sentiment : {class_names[prediction]}') Review text: I love completing my todos! Best app ever!!! Sentiment : positive Bill's response works. Here's the solution. def predictionPipeline(text): encoded_review = tokenizer.encode_plus( text, max_length=MAX_LEN, add_special_tokens=True, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'].to(device) attention_mask = encoded_review['attention_mask'].to(device) output = model(input_ids, attention_mask) _, prediction = torch.max(output, dim=1) return(class_names[prediction]) df2['prediction']=df2['cleaned_tweet'].apply(predictionPipeline)
You can use the same code to predict texts from the dataframe column. model = ... tokenizer = ... def predict(review_text): encoded_review = tokenizer.encode_plus( review_text, max_length=MAX_LEN, add_special_tokens=True, return_token_type_ids=False, pad_to_max_length=True, return_attention_mask=True, return_tensors='pt', ) input_ids = encoded_review['input_ids'].to(device) attention_mask = encoded_review['attention_mask'].to(device) output = model(input_ids, attention_mask) _, prediction = torch.max(output, dim=1) print(f'Review text: {review_text}') print(f'Sentiment : {class_names[prediction]}') return class_names[prediction] df = pd.DataFrame({ 'texts': ["text1", "text2", "...."] }) df["sentiments"] = df.apply(lambda l: predict(l.texts), axis=1)
https://stackoverflow.com/questions/69820318/
In pytorch, what is the difference between indexing with square brackets and "index_select"?
Say there are two pytorch tensors a, which is float32 with shape [M, N], and b, which is int64 with shape [K]. The values in b are within [0, M-1], so the following line gives a new tensor c indexed by b: c = a[b] # [K, N] tensor whose i-th row is a[b[i]], with `IndexBackward` However, in a project of mine, this line always reports the following error (which is detected with torch.autograd.detect_anomaly(): with torch.autograd.detect_anomaly(): [W python_anomaly_mode.cpp:104] Warning: Error detected in IndexBackward. Traceback of forward call that caused the error: ... File "/home/user/project/model/network.py", line 60, in index_points c = a[b] (function _print_stack) Traceback (most recent call last): File "main.py", line 589, in <module> main() File "main.py", line 439, in main train_stats = train( File "/home/user/project/train_eval.py", line 866, in train total_loss.backward() File "/home/user/.local/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/home/user/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward Variable._execution_engine.run_backward( RuntimeError: merge_sort: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered Note that the line c = a[b] above is not the only occurrence of said error, but just one among many other lines with square-bracket indexing. However, the problem magically goes away when I change the indexing style from c = a[b] to c = a.index_select(0, b) I don't understand why indexing with square brackets leads to illegal memory access, but this gives me enough reason to believe square-bracket indexing and index_select are implemented differently. Understanding that could be the key to explain this. Also, since the project is rather large and not public, I can't share the exact codes here. You can just treat things above as background and focus on how square-bracket indexing and index_select are different. Thanks! Additional information: ubuntu 20.04 + cuda 11.2 + RTX3090 pytorch 1.9.0 + torchvision 1.10.0 + pytorch3d 0.6.0 The project involves training a network, and the error only occurs when I use the Pulsar renderer from pytorch3d to render something (in fact, anything, even if the rendered data are completely irrelevant to the original code).
torch.index_select returns a new tensor which copies the indexed fields into a new memory location (docs). torch.Tensor.select or slicing returns a view of the original tensor (docs). Without seeing more of your code, it's hard to say why this particular difference in functionality might cause the above error.
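A small sketch of the view-versus-copy distinction described above:

```python
import torch

a = torch.arange(6).reshape(2, 3)
v = a[0]                                   # basic slicing: a view sharing a's storage
c = a.index_select(0, torch.tensor([0]))   # a copy in freshly allocated memory

v[0] = 100
print(a[0, 0].item())  # 100 -- mutating the view changed the original
print(c[0, 0].item())  # 0   -- the copy is unaffected
```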
https://stackoverflow.com/questions/69824591/
Lost type in PyTorch optimization
I'm trying to implement a simple minimizer in PyTorch, here is the code: for i in range(10): print('i =', i, ' q =', q) v_trans = transform_dq(v, q) loss = mse(v_trans, v_target) loss.backward() q = q - eta * q.grad print('Final q = ', q) Where v and q are tensors, and eta = 0.01. The problem is that at the line q = q - eta * q.grad I get an error on the second iteration of the loop: TypeError: unsupported operand type(s) for *: 'float' and 'NoneType' It looks like the update for q changes the graph in an unwanted way (q is not a leaf of the graph anymore and hence it doesn't have a grad). If that is the case, then how to implement this simple minimizer?
First, you need to reset q's gradients before each iteration (e.g. with q.grad.zero_()). Second, you should update q outside the "gradient scope", and in place, so that q remains a leaf tensor that requires grad: with torch.no_grad(): q -= eta * q.grad
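Putting both fixes together — a minimal sketch of the corrected loop, reusing the question's own names (v, q, v_target, transform_dq, mse, eta):

```python
q = q.detach().requires_grad_()      # make sure q is a leaf tensor
for i in range(10):
    print('i =', i, ' q =', q)
    loss = mse(transform_dq(v, q), v_target)
    loss.backward()
    with torch.no_grad():
        q -= eta * q.grad            # in-place update keeps q a leaf
    q.grad.zero_()                   # reset gradients for the next iteration
print('Final q = ', q)
```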
https://stackoverflow.com/questions/69827874/
PyTorch RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
I've found a lot of answers on this topic but none of them helped. The error is: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same The training loop: model = BrainModel() model.to(device) loss_function = nn.BCELoss() optimizer = torch.optim.Adam(model.parameters(), lr=0.01) for epoch in range(EPOCHS): for sequences, labels in train_dataloader: optimizer.zero_grad() labels = labels.view(BATCH_SIZE, -1) sequences, labels = sequences.to(device), labels.view(BATCH_SIZE, -1).to(device) print(next(model.parameters()).is_cuda, sequences.get_device(), labels.get_device()) out = model(sequences) # ERROR HERE out, labels = out.type(torch.FloatTensor), labels.type(torch.FloatTensor) loss = loss_function(out, labels) loss.backward() optimizer.step() You can see one print inside the loop, and its output is: True 0 0 which means that all - the model, the x and y - are on cuda. The same code works well when I use CPU but not GPU. I do not understand what else I need to move to device. I always write it the way I did here and it always worked fine :C
I needed to do two things. First, use nn.ModuleList instead of a plain Python list (submodules stored in a plain list are not registered, so model.to(device) never moves them): self.convolutions1 = nn.ModuleList([nn.Conv2d(1, 3, 5, 2, 2) for _ in range(sequence_size)]) emb_dim = calc_embedding_size(self.convolutions1[0], input_size) self.convolutions2 = nn.ModuleList([nn.Conv2d(3, 6, 3, 1, 0) for _ in range(sequence_size)]) emb_dim = calc_embedding_size(self.convolutions2[0], emb_dim) self.convolutions3 = nn.ModuleList([nn.Conv2d(6, 9, 5, 1, 0) for _ in range(sequence_size)]) emb_dim = calc_embedding_size(self.convolutions3[0], emb_dim) Second, use torch.cuda.FloatTensor when training on GPU: out, labels = out.type(torch.cuda.FloatTensor), labels.type(torch.cuda.FloatTensor)
https://stackoverflow.com/questions/69832196/
Google Colab recently started raising the error ModuleNotFoundError: No module named 'google.cloud.storage.retry'
My code previously worked properly both locally and on Colab; however, it recently started failing with the following error on Colab. I use Google Colab to run my code, and the allennlp package is installed. (Screenshot: error when running the code.)
Run pip install --upgrade google-cloud-storage and then restart the runtime. The above solved the issue for me!
https://stackoverflow.com/questions/69835469/
Dropping layers in Transformer models (PyTorch / HuggingFace)
I came across this interesting paper on layer dropping in Transformer models and I am actually trying to implement it. However, I am wondering what would be a good practice to perform "layer dropping". I have a couple of ideas but no idea what would be the cleanest/safest way to go here: masking the unwanted layers (some sort of pruning) copying the wanted layers into a new model If anyone has already done this before or has a suggestion, I'm all ears! Cheers
I think one of the safest ways would be simply to skip the given layers in the forward pass. For example, suppose you are using BERT and that you added the following entry to the config: config.active_layers = [False, True] * 6 # using a 12 layers model Then you could modify the BertEncoder class like the following: class BertEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False def forward( self, hidden_states, attention_mask=None, head_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, past_key_values=None, use_cache=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None next_decoder_cache = () if use_cache else None for i, layer_module in enumerate(self.layer): ########### MAGIC HERE ############# if not self.config.active_layers[i]: continue if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_head_mask = head_mask[i] if head_mask is not None else None past_key_value = past_key_values[i] if past_key_values is not None else None if self.gradient_checkpointing and self.training: if use_cache: logger.warning( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." ) use_cache = False def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, past_key_value, output_attentions) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, ) else: layer_outputs = layer_module( hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions, ) hidden_states = layer_outputs[0] if use_cache: next_decoder_cache += (layer_outputs[-1],) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) if self.config.add_cross_attention: all_cross_attentions = all_cross_attentions + (layer_outputs[2],) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [ hidden_states, next_decoder_cache, all_hidden_states, all_self_attentions, all_cross_attentions, ] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_decoder_cache, hidden_states=all_hidden_states, attentions=all_self_attentions, cross_attentions=all_cross_attentions, ) At the moment you may need to write your special BERT class using the new Encoder layer. However, you should be able to load the weights from the pre-trained models provided by huggingface. BertEncoder code taken from here
https://stackoverflow.com/questions/69835532/
What functions or modules require contiguous input?
As I understand, you need to call tensor.contiguous() explicitly whenever some function or module needs a contiguous tensor. Otherwise you get exceptions like: RuntimeError: invalid argument 1: input is not contiguous at .../src/torch/lib/TH/generic/THTensor.c:231 (E.g. via.) What functions or modules require contiguous input? Is this documented? Or phrased differently, what are situations where you need to call contiguous? E.g. Conv1d, does it require contiguous input? The documentation does not mention this. When the documentation does not mention this, this would always imply that it does not require contiguous input? (I remember in Theano, any op getting some non-contiguous input, which required it to be contiguous, would just convert it automatically.)
After additional digging under the hood through source_code, it seems that view is the only function that explicitly causes an exception when a non-contiguous input is passed. One would expect any operation using Tensor Views to have the potential of failing with non-contiguous input. In reality, it seems to be the case that most or all of these functions are: (a.) implemented with support for non-contiguous blocks (see example below), i.e. the tensor iterators can handle multiple pointers to the various chunks of the data in memory, perhaps at the expense of performance, or else (b.) a call to .contiguous() wraps the operation (One such example shown here for torch.tensor.diagflat()). reshape is essentially the contiguous()-wrapped form of view. By extension, it seems, the main benefit of view over reshape would be the explicit Exception when tensors are unexpectedly non-contiguous versus code silently handling this discrepancy at the cost of performance. This conclusion is based on: Testing of all Tensor View ops with non-contiguous inputs. Source code analysis of other non-Tensor View functions of interest (e.g. Conv1D, which includes calls to contiguous as necessary in all non-trivial input cases). Inference from pytorch's design philosophy as a simple, at times slow, easy-to-use language. Cross-posting on Pytorch Discuss. Extensive review of web reported errors involving non-contiguous errors, all of which revolve around problematic calls to view. I did not comprehensively test all pytorch functions, as there are thousands. EXAMPLE OF (a.): import torch import numpy import time # allocation start = time.time() test = torch.rand([10000,1000,100]) torch.cuda.synchronize() end = time.time() print("Allocation took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # view of a contiguous tensor start = time.time() test.view(-1) torch.cuda.synchronize() end = time.time() print("view() took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # diagonal() on a contiguous tensor start = time.time() test.diagonal() torch.cuda.synchronize() end = time.time() print("diagonal() took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # Diagonal and a few tensor view ops on a non-contiguous tensor test = test[::2,::2,::2] # indexing is a Tensor View op resulting in a non-contiguous output print(test.is_contiguous()) # False start = time.time() test = test.unsqueeze(-1).expand([test.shape[0],test.shape[1],test.shape[2],100]).diagonal() torch.cuda.synchronize() end = time.time() print("non-contiguous tensor ops() took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) # reshape, which requires a tensor copy operation to new memory start = time.time() test = test.reshape(-1) + 1.0 torch.cuda.synchronize() end = time.time() print("reshape() took {} sec. Data is at address {}. Contiguous: {}".format(end - start,test.storage().data_ptr(),test.is_contiguous())) The following is output: Allocation took 4.269254922866821 sec. Data is at address 139863636672576. Contiguous: True view() took 0.0002810955047607422 sec. Data is at address 139863636672576. Contiguous: True diagonal() took 6.532669067382812e-05 sec. Data is at address 139863636672576. Contiguous: True False non-contiguous tensor ops() took 0.00011277198791503906 sec. 
Data is at address 139863636672576. Contiguous: False reshape() took 0.13828253746032715 sec. Data is at address 94781254337664. Contiguous: True A few tensor view operations in block 4 are performed on a non-contiguous input tensor. The operation runs without error, maintains the data in the same memory addresses, and runs relatively faster than an operation requiring a copy to new memory addresses (such as reshape in block 5). Thus, it seems these operations are implemented in a way that handles non-contiguous inputs without requiring a data copy.
https://stackoverflow.com/questions/69840389/
When should one call .eval() and .train() when doing MAML with the PyTorch higher library?
I was going through the omniglot maml example and saw that they have net.train() at the top of their testing code. This seems like a mistake since that means the stats from each task at meta-testing is shared: def test(db, net, device, epoch, log): # Crucially in our testing procedure here, we do *not* fine-tune # the model during testing for simplicity. # Most research papers using MAML for this task do an extra # stage of fine-tuning here that should be added if you are # adapting this code for research. net.train() n_test_iter = db.x_test.shape[0] // db.batchsz qry_losses = [] qry_accs = [] for batch_idx in range(n_test_iter): x_spt, y_spt, x_qry, y_qry = db.next('test') task_num, setsz, c_, h, w = x_spt.size() querysz = x_qry.size(1) # TODO: Maybe pull this out into a separate module so it # doesn't have to be duplicated between `train` and `test`? n_inner_iter = 5 inner_opt = torch.optim.SGD(net.parameters(), lr=1e-1) for i in range(task_num): with higher.innerloop_ctx(net, inner_opt, track_higher_grads=False) as (fnet, diffopt): # Optimize the likelihood of the support set by taking # gradient steps w.r.t. the model's parameters. # This adapts the model's meta-parameters to the task. for _ in range(n_inner_iter): spt_logits = fnet(x_spt[i]) spt_loss = F.cross_entropy(spt_logits, y_spt[i]) diffopt.step(spt_loss) # The query loss and acc induced by these parameters. qry_logits = fnet(x_qry[i]).detach() qry_loss = F.cross_entropy( qry_logits, y_qry[i], reduction='none') qry_losses.append(qry_loss.detach()) qry_accs.append( (qry_logits.argmax(dim=1) == y_qry[i]).detach()) qry_losses = torch.cat(qry_losses).mean().item() qry_accs = 100. * torch.cat(qry_accs).float().mean().item() print( f'[Epoch {epoch+1:.2f}] Test Loss: {qry_losses:.2f} | Acc: {qry_accs:.2f}' ) log.append({ 'epoch': epoch + 1, 'loss': qry_losses, 'acc': qry_accs, 'mode': 'test', 'time': time.time(), }) however whenever I do eval instead I get that my MAML model diverges (though my test is on mini-imagenet): >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5939, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5940, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5940, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5940, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5941, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5940, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5942, grad_fn=<NormBackward1>) >maml_old (before inner adapt): 
fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5940, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5940, grad_fn=<NormBackward1>) >maml_old (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >>maml_old (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5939, grad_fn=<NormBackward1>) eval_loss=0.9859228551387786, eval_acc=0.5907692521810531 args.meta_learner.lr_inner=0.01 ==== in forward2 >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(171440.6875, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(208426.0156, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(17067344., grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(40371.8125, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(1.0911e+11, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(21.3515, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(5.4257e+13, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(128.9109, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(3994.7734, grad_fn=<NormBackward1>) >maml_new (before inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(9.5937, grad_fn=<NormBackward1>) >maml_new (after inner adapt): fmodel.model.features.conv1.weight.norm(2)=tensor(1682896., grad_fn=<NormBackward1>) eval_loss_sanity=nan, eval_acc_santiy=0.20000000298023224 So what is one suppose to do to avoid this divergence? note: retraining is really expensive. Takes 18 days to train a 5cnn with maml for me. 
A distributed solution would really help here: https://github.com/learnables/learn2learn/issues/170. Perhaps just use train() during training (even though evaluating during training might be a good idea so that the batch stats are saved in the checkpoint), or next time train with batch stats from the beginning. Related: https://github.com/facebookresearch/higher/issues/107 https://discuss.pytorch.org/t/when-should-one-call-eval-and-train-when-doing-maml-with-the-pytorch-higher-library/136022 How to have batch norm not forget the batch statistics it just used in PyTorch? https://discuss.pytorch.org/t/how-does-pytorch-s-batch-norm-know-if-the-forward-pass-its-doing-is-for-inference-or-training/16857/10 https://stats.stackexchange.com/questions/544048/what-does-the-batch-norm-layer-for-maml-model-agnostic-meta-learning-do-for-du/551153#551153 https://github.com/tristandeleu/pytorch-maml/issues/19
TL;DR: Use mdl.train(), since that uses batch statistics (but inference will no longer be deterministic). You probably won't want to use mdl.eval() in meta-learning. BN's intended behaviour: Importantly, during inference (eval/testing) the running_mean and running_std computed during training are used, because a deterministic output is wanted and estimates of the population statistics are preferred. During training the batch statistics are used, while a population statistic is estimated with running averages. I assume the reason batch stats are used during training is to introduce noise that regularizes training (noise robustness). In meta-learning I think using batch statistics during testing is best (and not computing the running means), since we are supposed to be seeing new tasks/distributions anyway. The price we pay is loss of determinism. Out of curiosity, it could be interesting to see what the accuracy is when using population stats estimated from meta-train. This is likely why I don't see divergence in my testing with mdl.train(). So just make sure you use mdl.train() (since that uses batch statistics: https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html#torch.nn.BatchNorm2d), but ensure that the new running stats — which would "cheat" — aren't saved or used later.
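One way to get that behaviour — a sketch, assuming a model whose normalization layers are BatchNorm2d — is to put BN in train mode while switching off the running-statistics update, so batch statistics are used at meta-test time without overwriting the checkpointed estimates:

```python
import torch

for m in net.modules():
    if isinstance(m, torch.nn.BatchNorm2d):
        m.train()                      # forward pass uses batch statistics
        m.track_running_stats = False  # running_mean/var are left untouched
```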
https://stackoverflow.com/questions/69845469/
How to operate on angle indexes of an Array or Tensor without loops
Is there an algorithm that can assign to, get indexes of, or operate on an array given a batch of angles, origins, and values, without any loops, non-differentiability, or performance-heavy computation? A function that operates on angles in a 3-dimensional space [W,H,D,C] would be much appreciated.
The following basically does what you want. Notice that it does actually contain a loop but not of the type one should avoid. The loop has only 4 iterations so it is not a performance problem. What one should avoid with numpy/pytorch is looping over all entries in a large array. img = torch.zeros([9,9,3]) points = np.stack(np.indices([9,9])).reshape(2,-1) blue = (0,0,1) red = (1,0,0) orange = (1,1/2,0) green = (0,1,0) angles = [0,45,225,280] origions = np.array([(4,4),(4,4),(4,4),(4,4)]) colors = [blue, red, orange, green] def angle_from(p): return np.rad2deg(np.arctan2(*(points-p.reshape(2,1)))) % 360 def set_color(angle, origin, color): angles = angle_from(np.array(origin)) mask = angles - angle == 0 img.view(-1,3)[np.where(mask),:] = torch.tensor(color, dtype=torch.float) for angle, origin, color in zip(angles,origions,colors): set_color(angle, origin, color) rad = np.deg2rad(angle) x = (origin[0],origin[1]+10*np.cos(rad)) y = (origin[0],origin[1]+10*np.sin(rad)) plt.plot(x, y, c='white') angle = 280 rad = np.deg2rad(angle) plt.scatter(*points) plt.imshow(img, origin='lower') Seems almost perfect except that you cheated with the green area. As you can see in the picture the centers of the green squares are not actually on the ray with the angle you claim it is. I plotted the centers and rays with the origin and angle you chose so one can see that more easily. I suspect that is often going to be the case and you want a way to choose the green spots. My approach there was to pick not one but two rays and show the squares with their centers being between the rays. def set_color_between_rays(angles, origins, color): angles1 = angle_from(np.array(origins[0])) angles2 = angle_from(np.array(origins[1])) mask = ((angles1 - angles[0]) >= 0) & ((angles2 - angles[1]) <= 0) img.view(-1,3)[np.where(mask),:] = torch.tensor(color, dtype=torch.float) angle = 295 rad = np.deg2rad(angle) origins = np.array([[3.6,3.6],[4.3,4.3]]) plt.plot((origins[0,0],origins[0,0]+10*np.cos(rad)),(origins[0,1],origins[0,1]+10*np.sin(rad)),c='green') plt.plot((origins[1,0],origins[1,0]+10*np.cos(rad)),(origins[1,1],origins[1,1]+10*np.sin(rad)),c='green') set_color_between_rays([angle, angle], origins, green) plt.scatter(*points) plt.imshow(img, origin='lower')
https://stackoverflow.com/questions/69858078/
Dimension of tensordot between 2 3D tensors
I have a rather quick question on tensordot operation. I'm trying to figure out if there is a way to perform a tensordot product between two tensors to get the right output of shape that I want. One of the tensors is B X L X D dimensions and the other one is B X 1 X D dimensions and I'm trying to figure out if it's possible to end up with B X D matrix at the end. Currently I'm looping through the B dimension and performing a matrix multiplication between 1 X D and D X L (transposing L X D) matrices and stacking them to end up with B X L matrix at the end. This is obviously not the fastest way possible as a loop can be expensive. Would it be possible to get the desired output of B X D shape by performing a quick tensordot? I cannot seem to figure out a way to get rid of 1 of the B's. Any insight or direction would be very much appreciated.
One option is to use torch.bmm(), which does exactly that (docs). It takes tensors of shape (b, n, m) and (b, m, p) and returns the batch matrix multiplication of shape (b, n, p). (I assume you meant a result of B X L, since the matrix multiplication of 1 X D and D X L has shape 1 X L, not 1 X D.) In your case: import torch B, L, D = 32, 10, 512 a = torch.randn(B, 1, D) #shape (B X 1 X D) b = torch.randn(B, L, D) #shape (B X L X D) b = b.transpose(1,2) #shape (B X D X L) result = torch.bmm(a, b) result = result.squeeze() print(result.shape) >>> torch.Size([32, 10]) The squeeze at the end is there to make your result of shape (32, 10) instead of (32, 1, 10). Alternatively, you can use torch.einsum(), which is more compact but less readable in my opinion: import torch B, L, D = 32, 10, 512 a = torch.randn(B, 1, D) b = torch.randn(B, L, D) result = torch.einsum('abc, adc->ad', a, b) print(result.shape) >>> torch.Size([32, 10])
https://stackoverflow.com/questions/69859276/
Augmentation using Albumentations in Pytorch OD
I followed the pytorch tutorial for object detection on the website here. I decided to add more augmentations using albumentations to see if it would improve my training. However, after calling the __getitem__() method in the dataset class I get this error.

AttributeError                            Traceback (most recent call last)
<ipython-input-54-563a9295c274> in <module>()
----> 1 train_ds.__getitem__(220)

2 frames
<ipython-input-48-0169e540fb13> in __getitem__(self, idx)
     45         }
     46
---> 47         transformed = self.transforms(**image_data)
     48         img = transformer['image']
     49         target['boxes'] = torch.as_tensor(transformed['bboxes'],dtype=torch.float332)

/usr/local/lib/python3.7/dist-packages/albumentations/core/composition.py in __call__(self, force_apply, **data)
    172             if dual_start_end is not None and idx == dual_start_end[0]:
    173                 for p in self.processors.values():
--> 174                     p.preprocess(data)
    175
    176             data = t(force_apply=force_apply, **data)

/usr/local/lib/python3.7/dist-packages/albumentations/core/utils.py in preprocess(self, data)
     58         data = self.add_label_fields_to_data(data)
     59
---> 60         rows, cols = data["image"].shape[:2]
     61         for data_name in self.data_fields:
     62             data[data_name] = self.check_and_convert(data[data_name], rows, cols, direction="to")

AttributeError: 'Image' object has no attribute 'shape'

I have included the augmentation code I used as well.

def transform_ds(train):
    if train:
        return A.Compose([
            A.HorizontalFlip(p=0.2),
            A.VerticalFlip(p=0.2),
            A.RandomSizedBBoxSafeCrop(height=450,width=450,erosion_rate=0.2,p=0.3),
            A.RandomBrightness(limit=(0.2,0.5),p=0.3),
            A.RandomContrast(limit=(0.2,0.5),p=0.3),
            A.Rotate(limit=90,p=0.3),
            A.GaussianBlur(blur_limit=(3,3),p=0.1),
            ToTensorV2()
        ], bbox_params=A.BboxParams(
            format='pascal_voc',
            min_area=0,
            min_visibility=0,
            label_fields=['labels']
        ))
    else:
        return A.Compose([ToTensor()])
Images in the PyTorch detection tutorial are loaded via the pillow library (PIL.Image.open specifically). If you look at the albumentations docs, its transformations require a np.ndarray, not a PIL.Image object — that is exactly why preprocess fails when it tries to read data["image"].shape. Convert the image to a numpy array before applying the transforms, and keep ToTensorV2 as the last transformation so the pipeline still outputs a torch.Tensor.
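A minimal sketch of the fix inside __getitem__, assuming field names (image_data, target, boxes, labels) that match the question's code:

import numpy as np
import torch

# inside __getitem__, before building image_data
img = np.array(img)  # PIL.Image -> np.ndarray of shape (H, W, C)

image_data = {"image": img, "bboxes": boxes, "labels": labels}
transformed = self.transforms(**image_data)
img = transformed['image']  # already a tensor, thanks to ToTensorV2 being last
target['boxes'] = torch.as_tensor(transformed['bboxes'], dtype=torch.float32)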
https://stackoverflow.com/questions/69859954/
Pytorch throws CUDA runtime error on WSL2
I installed the Nvidia Windows Driver and CUDA according to this article. After the installation of the Nvidia Windows Driver, I checked the CUDA version by running "/usr/lib/wsl/lib/nvidia-smi":

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.00       Driver Version: 510.06       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+

Then I installed CUDA Toolkit 11.3 according to this article. After this, I checked the CUDA Toolkit version by running "/usr/local/cuda/bin/nvcc --version" and got:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0

Then I installed Pytorch through pip:

pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html

Then I verified the installation of torch like this:

import torch
x = torch.rand(5, 3)
print(x)

and this:

import torch
torch.cuda.is_available()

Up to here, everything goes well. However, when I train a network and call the backward() method of the loss, torch throws a runtime error like this:

Traceback (most recent call last):
  File "train.py", line 118, in train_loop
    loss.backward()
  File "/myvenv/lib/python3.6/site-packages/torch/_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/myvenv/lib/python3.6/site-packages/torch/autograd/__init__.py", line 156, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: CUDA error: unknown error
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

I've tried to reinstall the CUDA toolkit many times but always got the same error. Any suggestions?
Through some simple experiments, I found the solution. The cause is that my GPU memory is too small (2GB) to run a relatively large text batch (32). When I decreased the batch size to 16, the training script ran well. However, I still don't know why CUDA can't throw an exception with a clearer message for this kind of OOM error.
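A quick way to confirm that memory is the culprit is to print PyTorch's allocator statistics after a forward pass — a minimal sketch:

import torch

# current GPU memory usage in MiB; compare against the card's 2 GB
print(f"allocated: {torch.cuda.memory_allocated() / 1024**2:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 1024**2:.0f} MiB")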
https://stackoverflow.com/questions/69861580/
Finding 2D boolean patterns in larger boolean tensors/arrays
I am looking for a way to find a 2D pattern in a MxNxR tensor/array with pytorch or numpy. For instance, to check whether each boolean pattern in a dictionary (e.g. {6x6 tensor: freq}) exists in a larger boolean tensor (e.g. 3x256x256), and then update the patterns and frequencies of the dictionary. I was hoping there was a pytorch way of doing it, instead of looping over it, or at least an optimized loop. As far as I know, torch.where works when we have a scalar value; I'm not sure what to do when I have a 6x6 tensor instead of a value. I looked into Finding Patterns in a Numpy Array, but I don't think it's feasible to follow that for a 2D pattern.
I'm thinking maybe you can pull this off using convolutions. Let's imagine you have an input made up of 0s and 1s. Here we will take a minimal example with an input of 3x3 and a 2x2 pattern:

>>> x = torch.tensor([[1., 0., 0.],
...                   [0., 1., 0.],
...                   [1., 0., 0.]])

And the pattern would be:

>>> pattern = torch.tensor([[1., 0.],
...                         [0., 1.]])

Here the pattern can be found in the upper left corner of the input. We perform a convolution with nn.functional.conv2d using 2*pattern - 1 as the kernel: each window then scores +1 for every 1 of the pattern it matches and -1 for every 1 it has where the pattern holds a 0.

>>> img, mask = x[None, None], pattern[None, None]
>>> M = F.conv2d(img, 2*mask - 1)
tensor([[[[ 2., -1.],
          [-2.,  1.]]]])

There is a match if and only if the result is equal to the number of 1s in the pattern:

>>> M == mask.sum(dim=(2,3))
tensor([[[[ True, False],
          [False, False]]]])

You can deduce the frequencies from this final boolean mask. You can extend this method to multiple patterns by adding kernels in your convolution.
https://stackoverflow.com/questions/69864024/
How to deal with dropout in between LSTM layers when using PackedSequence in PyTorch?
I'm creating an LSTM Autoencoder for feature extraction for my master's thesis. However, I'm having a lot of trouble with combining dropout with LSTM layers. Since it's an Autoencoder, I'm having a bottleneck which is achieved by having two separate LSTM layers, each with num_layers=1, and a dropout in between. I have time series with very different lengths and have found packed sequences to be a good idea for that reason. But, from my experiments, I must pack the data before the first LSTM, unpack before the dropout, then pack again before the second LSTM. This seems wildly inefficient. Is there a better way? I'm providing some example code and an alternative way to implement it below.

Current, working, but possibly suboptimal solution:

class Encoder(nn.Module):
    def __init__(self, seq_len, n_features, embedding_dim, hidden_dim, dropout):
        super(Encoder, self).__init__()
        self.seq_len = seq_len
        self.n_features = n_features
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.lstm1 = nn.LSTM(
            input_size=n_features,
            hidden_size=self.hidden_dim,
            num_layers=1,
            batch_first=True,
        )
        self.lstm2 = nn.LSTM(
            input_size=self.hidden_dim,
            hidden_size=embedding_dim,
            num_layers=1,
            batch_first=True,
        )
        self.drop1 = nn.Dropout(p=dropout, inplace=False)

    def forward(self, x):
        x, (_, _) = self.lstm1(x)
        x, lens = pad_packed_sequence(x, batch_first=True, total_length=self.seq_len)
        x = self.drop1(x)
        x = pack_padded_sequence(x, lens, batch_first=True, enforce_sorted=False)
        x, (hidden_n, _) = self.lstm2(x)
        return hidden_n.reshape((-1, self.n_features, self.embedding_dim)), lens

Alternative, possibly better, but currently not working solution:

class Encoder2(nn.Module):
    def __init__(self, seq_len, n_features, embedding_dim, hidden_dim, dropout):
        super(Encoder2, self).__init__()
        self.seq_len = seq_len
        self.n_features = n_features
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.lstm1 = nn.LSTM(
            input_size=n_features,
            hidden_size=self.hidden_dim,
            num_layers=2,
            batch_first=True,
            dropout=dropout,
            proj_size=self.embedding_dim,
        )

    def forward(self, x):
        _, (h_n, _) = self.lstm1(x)
        return h_n[-1].unsqueeze(1), lens

Any help and tips about working with time-series, packed sequences, lstm-cells and dropout would be immensely appreciated, as I'm not finding much documentation/guidance elsewhere on the internet. Thank you!

Best, Lars Ankile
For the hereafter, after a lot of trial and error, the following full code for the Autoencoder seems to work very well. Getting the packing and unpacking to work correctly was the main hurdle. The clue is, I think, to utilize the LSTM modules for what they're worth by using the proj_size, num_layers, and dropout parameters.

class EncoderV4(nn.Module):
    def __init__(
        self, seq_len, n_features, embedding_dim, hidden_dim, dropout, num_layers
    ):
        super().__init__()
        self.seq_len = seq_len
        self.n_features = n_features
        self.embedding_dim = embedding_dim
        self.hidden_dim = hidden_dim
        self.num_layers = num_layers
        self.lstm1 = nn.LSTM(
            input_size=n_features,
            hidden_size=self.hidden_dim,
            num_layers=num_layers,
            batch_first=True,
            dropout=dropout,
            proj_size=self.embedding_dim,
        )

    def forward(self, x):
        _, (h_n, _) = self.lstm1(x)
        return h_n[-1].unsqueeze(1)


class DecoderV4(nn.Module):
    def __init__(self, seq_len, input_dim, hidden_dim, n_features, num_layers):
        super().__init__()
        self.seq_len = seq_len
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.n_features = n_features
        self.num_layers = num_layers
        self.lstm1 = nn.LSTM(
            input_size=input_dim,
            hidden_size=hidden_dim,
            num_layers=num_layers,
            proj_size=n_features,
            batch_first=True,
        )

    def forward(self, x, lens):
        x = x.repeat(1, self.seq_len, 1)
        x = pack_padded_sequence(x, lens, batch_first=True, enforce_sorted=False)
        x, _ = self.lstm1(x)
        return x


class RecurrentAutoencoderV4(nn.Module):
    def __init__(
        self, seq_len, n_features, embedding_dim, hidden_dim, dropout, num_layers
    ):
        super().__init__()
        self.encoder = EncoderV4(
            seq_len, n_features, embedding_dim, hidden_dim, dropout, num_layers
        )
        self.decoder = DecoderV4(
            seq_len, embedding_dim, hidden_dim, n_features, num_layers
        )

    def forward(self, x, lens):
        x = self.encoder(x)
        x = self.decoder(x, lens)
        return x

The full code and a paper using this Autoencoder can be found at GitHub and arXiv, respectively.
https://stackoverflow.com/questions/69864893/
nvcc fatal : Unsupported gpu architecture 'compute_86'
I have a Nvidia RTX 3090 Ti 24GB with these drivers:

CUDA Version: 11.4
Driver Version: 470.74
18.04.1-Ubuntu SMP
Cuda compilation tools, release 9.1, V9.1.85

I've looked up this card's architecture: it is Ampere, so the compute capability is compute_86 or sm_86 (if I am not wrong). But while compiling, nvcc gives me back:

nvcc fatal : Unsupported gpu architecture 'compute_86'

I ran nvcc --help and found something strange: it returned that the allowed values for gpu-code and gpu-architecture are

Allowed values for this option: 'compute_30','compute_32','compute_35',
'compute_37','compute_50','compute_52','compute_53','compute_60','compute_61',
'compute_62','compute_70','compute_72','sm_30','sm_32','sm_35','sm_37','sm_50',
'sm_52','sm_53','sm_60','sm_61','sm_62','sm_70','sm_72'.

So am I missing some driver version or library that has to be downloaded, or can I simply not compile with my GPU?
In your posted system information, the last line

Cuda compilation tools, release 9.1, V9.1.85

indicates that your NVCC is currently V9.1 (use nvcc -V to know for sure). NVCC of this version is too old to support compute_86. A possible reason this happens is that you installed the CUDA toolkit (including NVCC) and the GPU drivers separately, with different CUDA versions. You can solve it by updating NVCC to V11.4, following the instructions on this official page: developer.nvidia.com/cuda-11-4-2-download-archive. In my experience, managing NVIDIA drivers and CUDA toolkits with apt often messes up the system, so it is recommended to use the official installer instead. Remember to reset the CUDA-related environment variables to point at the new version if you have set them before. To get another specific version of CUDA, you can just google "cuda toolkit (version number) download" and look for the official nvidia website results.
https://stackoverflow.com/questions/69865825/
Rearrange neural network layers in torch.nn.Sequential
I'm looking for a way to rearrange an nn.Sequential, because I'm trying to build a reversible convolutional neural network: I have many layers and I just want to reverse the order of the layers in that Sequential. For example:

self.features.append(nn.Conv2d(1, 6, 5))
self.features.append(nn.LeakyReLU())
self.features = nn.Sequential(*self.features)

and then I just want to reverse that, so that the activation comes first and then the convolution. I know this sample is easy, but in my case I have many layers and I can't do it by writing out the reversed path by hand.
Try this:

nn.Sequential(*reversed([layer for layer in original_sequential]))

For example:

>>> original_sequential = nn.Sequential(nn.Conv2d(1,6,5), nn.LeakyReLU())
>>> original_sequential
Sequential(
  (0): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (1): LeakyReLU(negative_slope=0.01)
)
>>> nn.Sequential(*reversed([layer for layer in original_sequential]))
Sequential(
  (0): LeakyReLU(negative_slope=0.01)
  (1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
)
https://stackoverflow.com/questions/69866191/
torch.nn.Sequential of designed blocks problem in giving inputs
I have designed a class that is a block of a network; its forward takes three inputs: x, logdet, reverse, and returns two outputs. Everything works when I call the class directly, like:

x = torch.Tensor(np.random.rand(2, 48, 8, 8))
net = Block(inp = 48, oup = 48, mid_channels=48, ksize=3, stride=1, group = 3)
a, _ = net(x, reverse = False)

But when I want to use it through Sequential (because I need multiple blocks after each other), the problem occurs:

x = torch.Tensor(np.random.rand(2, 48, 8, 8))
conv1_network = nn.Sequential(
    Block(inp = 48, oup = 48, mid_channels=48, ksize=3, stride=1, group = 3)
)
conv1_network(x, reverse = False)

My error is:

TypeError: forward() got an unexpected keyword argument 'reverse'

This is surprising, because forward in Block does accept reverse, as we saw in the first part. I'm looking for a way to attach some Blocks to each other. For example, this is a block:

class Block(nn.Module):
    def __init__(self, num_channels):
        super(Block, self).__init__()
        self.num_channels = num_channels
        # Initialize with a random orthogonal matrix
        w_init = np.random.randn(num_channels, num_channels)
        w_init = np.linalg.qr(w_init)[0].astype(np.float32)
        self.weight = nn.Parameter(torch.from_numpy(w_init))

    def forward(self, x, logdet, reverse=False):
        ldj = torch.slogdet(self.weight)[1] * x.size(2) * x.size(3)
        if reverse:
            weight = torch.inverse(self.weight.double()).float()
            logdet = logdet - ldj
        else:
            weight = self.weight
            logdet = logdet + ldj
        weight = weight.view(self.num_channels, self.num_channels, 1, 1)
        z = F.conv2d(x, weight)
        return z, logdet

My purpose is to attach multiple Blocks to each other in a Sequential inside a for loop (I can't reuse the same Block; I need different convolutions to make a deep network):

self.features = []
for i in range(10):
    self.features.append(Block(num_channels = 48))

and then use them like this:

self.features(x, logdet = 0, reverse = False)
You indicated that your Block nn.Module has a reverse option. However, nn.Sequential doesn't, so conv1_network(x, reverse=False) is not valid, because conv1_network is not a Block. By default, you can't pass kwargs to layers inside a nn.Sequential. You can, however, inherit from nn.Sequential and do it yourself. Something like:

class BlockSequence(nn.Sequential):
    def forward(self, input, **kwargs):
        for module in self:
            options = kwargs if isinstance(module, Block) else {}
            input = module(input, **options)
        return input

This way, you can create a sequence containing Blocks (and optionally non-Block modules as well):

>>> blocks = []
>>> for i in range(10):
...     blocks.append(Block(num_channels=48))
>>> blocks = BlockSequence(*blocks)

Then you will be able to call blocks with the reverse keyword argument, which will be relayed to every potential Block child module when called:

>>> blocks(x, logdet=0, reverse=False)
https://stackoverflow.com/questions/69871476/
at::Tensor to UIImage
I have a PyTorch model and try to run it on iOS. I have this code:

at::Tensor tensor2 = torch::from_blob(imageBuffer2, {1, 1, 256, 256}, at::kFloat);
c10::InferenceMode guard;
auto output = _impl.forward({tensor1, tensor2});
torch::Tensor tensor_img = output.toTuple()->elements()[0].toTensor();

My question is: how can I convert tensor_img to UIImage? I found this function in the PyTorch documentation:

- (UIImage*)convertRGBBufferToUIImage:(unsigned char*)buffer
                            withWidth:(int)width
                           withHeight:(int)height {
    char* rgba = (char*)malloc(width * height * 4);
    for (int i = 0; i < width * height; ++i) {
        rgba[4 * i] = buffer[3 * i];
        rgba[4 * i + 1] = buffer[3 * i + 1];
        rgba[4 * i + 2] = buffer[3 * i + 2];
        rgba[4 * i + 3] = 255;
    }

    size_t bufferLength = width * height * 4;
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, bufferLength, NULL);
    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = 4 * width;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (colorSpaceRef == NULL) {
        NSLog(@"Error allocating color space");
        CGDataProviderRelease(provider);
        return nil;
    }

    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef iref = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                    colorSpaceRef, bitmapInfo, provider, NULL, YES, renderingIntent);

    uint32_t* pixels = (uint32_t*)malloc(bufferLength);
    if (pixels == NULL) {
        NSLog(@"Error: Memory not allocated for bitmap");
        CGDataProviderRelease(provider);
        CGColorSpaceRelease(colorSpaceRef);
        CGImageRelease(iref);
        return nil;
    }

    CGContextRef context = CGBitmapContextCreate(pixels, width, height, bitsPerComponent,
                                                 bytesPerRow, colorSpaceRef, bitmapInfo);
    if (context == NULL) {
        NSLog(@"Error context not created");
        free(pixels);
    }

    UIImage* image = nil;
    if (context) {
        CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
        CGImageRef imageRef = CGBitmapContextCreateImage(context);
        if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
            float scale = [[UIScreen mainScreen] scale];
            image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
        } else {
            image = [UIImage imageWithCGImage:imageRef];
        }
        CGImageRelease(imageRef);
        CGContextRelease(context);
    }

    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    CGDataProviderRelease(provider);
    if (pixels) {
        free(pixels);
    }
    return image;
}
@end

If I understand correctly, that function can convert an unsigned char * buffer to a UIImage. I think I need to convert my tensor_img to unsigned char*, but I don't understand how to do it.
The 1st code block is a torch bridge and the 2nd is a UIImage helper which I call from Swift. Anyway, I resolved the issue; we can close it. Code example:

for (int i = 0; i < 3 * width * height; i++) {
    [results addObject:@(floatBuffer[i])];
}

NSMutableData* data = [NSMutableData dataWithLength:sizeof(float) * 3 * width * height];
float* buffer = (float*)[data mutableBytes];
for (int j = 0; j < 3 * width * height; j++) {
    buffer[j] = [results[j] floatValue];
}
return buffer;
https://stackoverflow.com/questions/69875312/
Pytorch: Automatically determin the input shape of Linear layer after Conv1d
I want to build a model with a number of Conv1d layers followed by several Linear layers. Since Conv1d layers do not depend on the sequence length, they work for data of any given length. The problem comes at the Linear layer: every time I change the length of the input data, the output size of the Conv1d stack changes, so I have to manually reset the in_features of the first Linear layer. Note: I have learned CNNs and I know how to calculate the output dimensions by hand; I am looking for a programmatic way to determine it, because I have to experiment many times with different input lengths. Question: in pytorch, how do you automatically figure out the output dimension after many Conv1d layers and set the in_features for the following Linear layer?
You can use the builtin nn.LazyLinear, which will find the in_features on the first inference and initialize the appropriate number of weights accordingly:

linear = nn.LazyLinear(out_features)
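For instance, a minimal sketch (the layer sizes here are made up for illustration, not taken from the question):

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(10),  # in_features is inferred on the first forward pass
)

x = torch.randn(4, 1, 100)  # batch of 4 sequences of length 100
out = model(x)              # the lazy layer materializes as Linear(8*98, 10)
print(out.shape)            # torch.Size([4, 10])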
https://stackoverflow.com/questions/69876305/
Loading a HuggingFace model into AllenNLP gives different predictions
I have a custom classification model trained using the transformers library, based on a BERT model. The model classifies text into 7 different categories. It is persisted in a directory using:

trainer.save_model(model_name)
tokenizer.save_pretrained(model_name)

I'm trying to load the persisted model using the allennlp library for further analysis. I managed to do so after a lot of work. However, when running the model inside the allennlp framework, it tends to predict very differently from the predictions I get when I run it using transformers, which leads me to think some part of the loading was not done correctly. There are no errors during inference; it is just that the predictions don't match. There is little documentation about how to load an existing model, so I'm wondering if someone has faced the same situation before. There is just one example of how to do QA classification with ROBERTA, but I couldn't extrapolate to what I'm looking for. Does anyone know if the steps I'm following are correct? This is how I'm loading the trained model:

transformer_vocab = Vocabulary.from_pretrained_transformer(model_name)
transformer_tokenizer = PretrainedTransformerTokenizer(model_name)
transformer_encoder = BertPooler(model_name)

params = Params(
    {
        "token_embedders": {
            "tokens": {
                "type": "pretrained_transformer",
                "model_name": model_name,
            }
        }
    }
)
token_embedder = BasicTextFieldEmbedder.from_params(vocab=vocab, params=params)
token_indexer = PretrainedTransformerIndexer(model_name)

transformer_model = BasicClassifier(vocab=transformer_vocab,
                                    text_field_embedder=token_embedder,
                                    seq2vec_encoder=transformer_encoder,
                                    dropout=0.1,
                                    num_labels=7)

I also had to implement my own DatasetReader as follows:

class ClassificationTransformerReader(DatasetReader):
    def __init__(
        self,
        tokenizer: Tokenizer,
        token_indexer: TokenIndexer,
        max_tokens: int,
        **kwargs
    ):
        super().__init__(**kwargs)
        self.tokenizer = tokenizer
        self.token_indexers: Dict[str, TokenIndexer] = {"tokens": token_indexer}
        self.max_tokens = max_tokens
        self.vocab = vocab

    def text_to_instance(self, text: str, label: str = None) -> Instance:
        tokens = self.tokenizer.tokenize(text)
        if self.max_tokens:
            tokens = tokens[: self.max_tokens]

        inputs = TextField(tokens, self.token_indexers)
        fields: Dict[str, Field] = {"tokens": inputs}
        if label:
            fields["label"] = LabelField(label)

        return Instance(fields)

It is instantiated as follows:

dataset_reader = ClassificationTransformerReader(tokenizer=transformer_tokenizer,
                                                 token_indexer=token_indexer,
                                                 max_tokens=400)

To run the model and test if it works, I'm doing the following:

instance = dataset_reader.text_to_instance("some sample text here")
dataset = Batch([instance])
dataset.index_instances(transformer_vocab)

model_input = util.move_to_device(dataset.as_tensor_dict(),
                                  transformer_model._get_prediction_device())
outputs = transformer_model.make_output_human_readable(transformer_model(**model_input))

This works and returns the probabilities correctly, but they don't match what I would get running the model using transformers directly. Any idea what's going on?
Answering the original question: the code above loaded most of the components from the original transformer model, but not the classifier layer. As Dirk mentioned, it is randomly initialized. The solution is to load the weights of the classifier from transformers into the AllenNLP one. The following code does the trick.

from transformers import BertForSequenceClassification

transformer_model = BasicClassifier(vocab=transformer_vocab,
                                    text_field_embedder=token_embedder,
                                    seq2vec_encoder=transformer_encoder,
                                    dropout=0.1,
                                    num_labels=7)

# Original model loaded using the transformers library
classifier = BertForSequenceClassification.from_pretrained(model_name)

# Copy the classification head into the AllenNLP model
transformer_model._classification_layer.weight = classifier.classifier.weight
transformer_model._classification_layer.bias = classifier.classifier.bias
https://stackoverflow.com/questions/69876688/
In a pytorch tensor, return an array of indices of the rows of specific value
Given the below tensor that has vectors of all zeros and vectors with ones and zeros:

tensor([[0., 0., 0., 0.],
        [0., 1., 1., 0.],
        [0., 0., 0., 0.],
        [0., 0., 1., 0.],
        [0., 0., 0., 0.],
        [0., 0., 1., 0.],
        [1., 0., 0., 1.],
        [0., 0., 0., 0.], ...])

How can I get an array of the indices of the vectors with ones and zeros, so the output looks like this:

indices = tensor([1, 3, 5, 6, ...])

Update: a way to do it is:

indices = torch.unique(torch.nonzero(y>0, as_tuple=True)[0])

But I'm not sure if there's a better way to do it.
An alternative way is to use torch.Tensor.any coupled with torch.Tensor.nonzero:

>>> x.any(1).nonzero()[:,0]
tensor([1, 3, 5, 6])

Otherwise, since the tensor contains only non-negative values, you can sum along each row and mask:

>>> x.sum(1).nonzero()[:,0]
tensor([1, 3, 5, 6])
https://stackoverflow.com/questions/69880675/
What does output.data mean in pytorch?
The code below appears in this tutorial.

total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
    for data in testloader:
        images, labels = data
        # calculate outputs by running images through the network
        outputs = net(images)
        # the class with the highest energy is what we choose as prediction
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))

Why do you write outputs.data here? I want to know the difference from using outputs only.
TLDR; Tensor and Tensor.data are not the same! Please refer to this answer. While Tensor and Tensor.data do share the same memory, they are not the same interface for accessing it. Also, notice how Tensor.data is itself a Tensor, which means the data attribute is recursive. However, there is a difference between the two: operations performed on the data attribute bypass Autograd's checks. This means any computation performed from Tensor.data won't be tracked for backpropagation. In practice, using data for computing is identical to detaching the tensor from its computational graph, if any.
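A small illustration of how .data shares storage while bypassing Autograd:

import torch

x = torch.tensor([1., 2.], requires_grad=True)

z = x.data  # same storage as x, but detached from the graph
z += 1      # modifies x's values without Autograd noticing
print(x)                # tensor([2., 3.], requires_grad=True)
print(z.requires_grad)  # False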
https://stackoverflow.com/questions/69900889/
How can I load a model in pytorch without having to remember the parameters used?
I am training a model in pytorch for which I have made a class like so:

from torch import nn

class myNN(nn.Module):
    def __init__(self, dense1=128, dense2=64, dense3=32, ...):
        self.MLP = nn.Sequential(
            nn.Linear(dense1, dense2),
            nn.ReLU(),
            nn.Linear(dense2, dense3),
            nn.ReLU(),
            nn.Linear(dense3, 1)
        )
        ...

In order to save it I am using:

torch.save(model.state_dict(), checkpoint_model_path)

and to load it I am using:

model = myNN()  # or with specified parameters
model.load_state_dict(torch.load(model_file))

However, for this method to work I have to use the right values in myNN()'s constructor. That means I would need to somehow remember or store which parameters (layer sizes) I used in each case in order to properly load different models. Is there a flexible way to save/load models in pytorch where I would also read the size of the layers? E.g. by loading a myNN() object directly or somehow reading the layer sizes from the saved pickle file? I am hesitant to try the second method in Best way to save a trained model in PyTorch? due to the warnings mentioned there. Is there a better way to achieve what I want?
Indeed serializing the whole Python object is quite a drastic move. Instead, you can always add user-defined items to the saved file: you can save the model's state along with its class parameters. Something like this would work. First save your arguments on the instance such that we can serialize them when saving the model:

class myNN(nn.Module):
    def __init__(self, dense1=128, dense2=64, dense3=32):
        super().__init__()
        self.kwargs = {'dense1': dense1, 'dense2': dense2, 'dense3': dense3}
        self.MLP = nn.Sequential(
            nn.Linear(dense1, dense2),
            nn.ReLU(),
            nn.Linear(dense2, dense3),
            nn.ReLU(),
            nn.Linear(dense3, 1))

We can save the parameters of the model along with its initializer arguments:

>>> torch.save([model.kwargs, model.state_dict()], path)

Then load it:

>>> kwargs, state = torch.load(path)
>>> model = myNN(**kwargs)
>>> model.load_state_dict(state)
<All keys matched successfully>
https://stackoverflow.com/questions/69903636/
Giving output of one neural network as an input to another in pytorch
I have a pretrained convolutional neural network which produces an output of shape (X, 164), where X is the number of test examples. So the output layer has 164 nodes. I want to take this output and feed it to another network, which is simply a fully connected network whose first layer has 64 nodes and whose output layer has 1 node with a sigmoid function. How can I do that? My first network looks like:

class LambdaBase(nn.Sequential):
    def __init__(self, fn, *args):
        super(LambdaBase, self).__init__(*args)
        self.lambda_func = fn

    def forward_prepare(self, input):
        output = []
        for module in self._modules.values():
            output.append(module(input))
        return output if output else input

class Lambda(LambdaBase):
    def forward(self, input):
        return self.lambda_func(self.forward_prepare(input))

class LambdaMap(LambdaBase):
    def forward(self, input):
        return list(map(self.lambda_func, self.forward_prepare(input)))

class LambdaReduce(LambdaBase):
    def forward(self, input):
        return reduce(self.lambda_func, self.forward_prepare(input))

def get_model(load_weights = True):
    pretrained_model_reloaded_th = nn.Sequential( # Sequential,
        nn.Conv2d(4,300,(19, 1)),
        nn.BatchNorm2d(300),
        nn.ReLU(),
        nn.MaxPool2d((3, 1),(3, 1)),
        nn.Conv2d(300,200,(11, 1)),
        nn.BatchNorm2d(200),
        nn.ReLU(),
        nn.MaxPool2d((4, 1),(4, 1)),
        nn.Conv2d(200,200,(7, 1)),
        nn.BatchNorm2d(200),
        nn.ReLU(),
        nn.MaxPool2d((4, 1),(4, 1)),
        Lambda(lambda x: x.view(x.size(0),-1)), # Reshape,
        nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ), nn.Linear(2000,1000)), # Linear,
        nn.BatchNorm1d(1000,1e-05,0.1,True), # BatchNorm1d,
        nn.ReLU(),
        nn.Dropout(0.3),
        nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ), nn.Linear(1000,1000)), # Linear,
        nn.BatchNorm1d(1000,1e-05,0.1,True), # BatchNorm1d,
        nn.ReLU(),
        nn.Dropout(0.3),
        nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ), nn.Linear(1000,164)), # Linear,
        nn.Sigmoid(),
    )
    if load_weights:
        sd = torch.load('pretrained_model.pth')
        pretrained_model_reloaded_th.load_state_dict(sd)
    return pretrained_model_reloaded_th

model = get_model(load_weights = True)

If I want to get the output of this model on my test set, I can simply do:

output = model(X.float())

This produces a final output of shape (X, 164). Now I want to take this output and give it to the second network mentioned above. How can I combine these two networks, and how can I optimise them together? Insights will be appreciated.

Edit: my second model is:

# define second model architecture
next_model = nn.Sequential(
    nn.Linear(164, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
    nn.Sigmoid()
)

# print model architecture
print(next_model)

And my classifier is trained as:

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
If the two models do not need any adapting at the first model's output, you can simply use a nn.Sequential:

>>> network = nn.Sequential(model, next_model)

And use it the same way as you did with model:

>>> output = network(X.float())

which will correspond to next_model(model(X.float())).
https://stackoverflow.com/questions/69904897/
Horizontal stacking in Pytorch
I am trying to implement transformers and I'm stuck at one point. Say I have an input sequence of shape [2, 20], where 2 is the number of samples and 20 is the number of words in the sequence (the sequence length). So I create an array like [0, 1, 2, ..., 19] of shape [1, 20]. Now I want to stack it so the final shape is [2, 20], in line with the input sequence, like below:

[[0, 1, 2, ..., 19],
 [0, 1, 2, ..., 19]]

Is there a torch function for doing so? I could loop and build the array, but I wanted to avoid that.
If the tensors you want to stack are of shape [1, N] (here [1, 5] for brevity), you can use torch.cat():

t1 = torch.zeros([1,5])  # tensor([[0., 0., 0., 0., 0.]])
t2 = torch.ones([1,5])   # tensor([[1., 1., 1., 1., 1.]])
torch.cat([t1, t2])      # tensor([[0., 0., 0., 0., 0.], [1., 1., 1., 1., 1.]])

If the tensors are 1-D, you can simply use torch.stack():

t1 = torch.zeros([5])  # tensor([0., 0., 0., 0., 0.])
t2 = torch.ones([5])   # tensor([1., 1., 1., 1., 1.])
torch.stack([t1, t2])  # tensor([[0., 0., 0., 0., 0.], [1., 1., 1., 1., 1.]])

Now, a shorter method for your case:

torch.arange(0,20).repeat(2,1)  # tensor([[0, 1, 2, ..., 19], [0, 1, 2, ..., 19]])
https://stackoverflow.com/questions/69907635/
How to understand the results of training a neural network type transformer (BERT)?
I am trying to train a BERT classifier on a classification task by fine-tuning it, but I am having trouble understanding what is displayed during training. Here is a small sample of what I get:

{'loss': 1.1328, 'learning_rate': 4.994266055045872e-05, 'epoch': 0.0}
{'loss': 1.0283, 'learning_rate': 4.942660550458716e-05, 'epoch': 0.02}
{'eval_loss': 0.994676947593689, 'eval_accuracy': 0.507755277897458, 'eval_f1': array([0.00770713, 0.6359277 , 0.44546742]), 'eval_f1_mi': 0.507755277897458, 'eval_f1_ma': 0.36303408438190915, 'eval_runtime': 10.8296, 'eval_samples_per_second': 428.642, 'eval_steps_per_second': 13.482, 'epoch': 0.02}
{'loss': 1.0075, 'learning_rate': 4.8853211009174314e-05, 'epoch': 0.05}
{'eval_loss': 1.0286471843719482, 'eval_accuracy': 0.46122361051271005, 'eval_f1': array([0.25, 0.48133484, 0.51830986]), 'eval_f1_mi': 0.46122361051271005, 'eval_f1_ma': 0.41654823359462956, 'eval_runtime': 10.8256, 'eval_samples_per_second': 428.796, 'eval_steps_per_second': 13.486, 'epoch': 0.05}
{'loss': 0.9855, 'learning_rate': 4.827981651376147e-05, 'epoch': 0.07}
{'eval_loss': 0.9796209335327148, 'eval_accuracy': 0.5320982335200345, 'eval_f1': array([0.14783347, 0.6772202 , 0.2726257 ]), 'eval_f1_mi': 0.5320982335200345, 'eval_f1_ma': 0.36589312424069026, 'eval_runtime': 10.8505, 'eval_samples_per_second': 427.813, 'eval_steps_per_second': 13.456, 'epoch': 0.07}
{'loss': 1.0022, 'learning_rate': 4.7706422018348626e-05, 'epoch': 0.09}
{'eval_loss': 0.968146026134491, 'eval_accuracy': 0.5364067212408444, 'eval_f1': array([0.38389789, 0.60565553, 0.5487042 ]), 'eval_f1_mi': 0.5364067212408444, 'eval_f1_ma': 0.5127525387411823, 'eval_runtime': 10.9701, 'eval_samples_per_second': 423.15, 'eval_steps_per_second': 13.309, 'epoch': 0.09}
{'loss': 0.9891, 'learning_rate': 4.713302752293578e-05, 'epoch': 0.11}
{'eval_loss': 0.9413465261459351, 'eval_accuracy': 0.556872037914692, 'eval_f1': array([0.37663886, 0.68815745, 0.28154206]), 'eval_f1_mi': 0.556872037914692, 'eval_f1_ma': 0.4487794533693059, 'eval_runtime': 10.9316, 'eval_samples_per_second': 424.642, 'eval_steps_per_second': 13.356, 'epoch': 0.11}
{'loss': 0.9346, 'learning_rate': 4.655963302752294e-05, 'epoch': 0.14}
{'eval_loss': 0.9142090082168579, 'eval_accuracy': 0.5769065058164584, 'eval_f1': array([0.19836066, 0.68580399, 0.570319  ]), 'eval_f1_mi': 0.5769065058164584, 'eval_f1_ma': 0.4848278830170361, 'eval_runtime': 10.9471, 'eval_samples_per_second': 424.04, 'eval_steps_per_second': 13.337, 'epoch': 0.14}
{'loss': 0.9394, 'learning_rate': 4.5986238532110096e-05, 'epoch': 0.16}
{'eval_loss': 0.8802705407142639, 'eval_accuracy': 0.5857389056441189, 'eval_f1': array([0.30735931, 0.71269565, 0.4255121 ]), 'eval_f1_mi': 0.5857389056441189, 'eval_f1_ma': 0.4818556879387581, 'eval_runtime': 10.9824, 'eval_samples_per_second': 422.677, 'eval_steps_per_second': 13.294, 'epoch': 0.16}
{'loss': 0.8993, 'learning_rate': 4.541284403669725e-05, 'epoch': 0.18}
{'eval_loss': 0.8535333871841431, 'eval_accuracy': 0.5980180956484275, 'eval_f1': array([0.37174211, 0.7155305 , 0.41662443]), 'eval_f1_mi': 0.5980180956484275, 'eval_f1_ma': 0.5012990131553724, 'eval_runtime': 10.8245, 'eval_samples_per_second': 428.842, 'eval_steps_per_second': 13.488, 'epoch': 0.18}
{'loss': 0.9482, 'learning_rate': 4.483944954128441e-05, 'epoch': 0.21}
{'eval_loss': 0.9535377621650696, 'eval_accuracy': 0.541792330891857, 'eval_f1': array([0.31955151, 0.59248471, 0.57414105]), 'eval_f1_mi': 0.541792330891857, 'eval_f1_ma': 0.4953924209116825, 'eval_runtime': 10.9767, 'eval_samples_per_second': 422.896, 'eval_steps_per_second': 13.301, 'epoch': 0.21}
{'loss': 0.8488, 'learning_rate': 4.426605504587156e-05, 'epoch': 0.23}
{'eval_loss': 0.8357231020927429, 'eval_accuracy': 0.6214993537268418, 'eval_f1': array([0.35536603, 0.73122392, 0.50070588]), 'eval_f1_mi': 0.6214993537268418, 'eval_f1_ma': 0.5290986104916023, 'eval_runtime': 10.9206, 'eval_samples_per_second': 425.069, 'eval_steps_per_second': 13.369, 'epoch': 0.23}
{'loss': 0.8893, 'learning_rate': 4.369266055045872e-05, 'epoch': 0.25}
{'eval_loss': 0.7578970789909363, 'eval_accuracy': 0.6712623869021973, 'eval_f1': array([0.41198502, 0.77171541, 0.65677419]), 'eval_f1_mi': 0.6712623869021973, 'eval_f1_ma': 0.6134915401312347, 'eval_runtime': 10.9765, 'eval_samples_per_second': 422.902, 'eval_steps_per_second': 13.301, 'epoch': 0.25}
{'loss': 0.9003, 'learning_rate': 4.311926605504588e-05, 'epoch': 0.28}
{'eval_loss': 0.791412353515625, 'eval_accuracy': 0.6535975872468763, 'eval_f1': array([0.45641646, 0.76072942, 0.53744893]), 'eval_f1_mi': 0.6535975872468763, 'eval_f1_ma': 0.5848649380875267, 'eval_runtime': 10.9302, 'eval_samples_per_second': 424.696, 'eval_steps_per_second': 13.358, 'epoch': 0.28}
{'loss': 0.8345, 'learning_rate': 4.2545871559633024e-05, 'epoch': 0.3}
{'eval_loss': 0.7060380578041077, 'eval_accuracy': 0.6999138302455838, 'eval_f1': array([0.50152905, 0.79205975, 0.64349863]), 'eval_f1_mi': 0.6999138302455838, 'eval_f1_ma': 0.6456958112539298, 'eval_runtime': 10.9475, 'eval_samples_per_second': 424.023, 'eval_steps_per_second': 13.336, 'epoch': 0.3}
{'loss': 0.8149, 'learning_rate': 4.1972477064220184e-05, 'epoch': 0.32}
{'eval_loss': 0.6717478036880493, 'eval_accuracy': 0.7259801809564843, 'eval_f1': array([0.50805932, 0.81245738, 0.71325735]), 'eval_f1_mi': 0.7259801809564843, 'eval_f1_ma': 0.6779246805922554, 'eval_runtime': 10.7574, 'eval_samples_per_second': 431.519, 'eval_steps_per_second': 13.572, 'epoch': 0.32}
{'loss': 0.8343, 'learning_rate': 4.139908256880734e-05, 'epoch': 0.34}
{'eval_loss': 0.6306226253509521, 'eval_accuracy': 0.7455838000861698, 'eval_f1': array([0.58873995, 0.82795018, 0.70917226]), 'eval_f1_mi': 0.7455838000861698, 'eval_f1_ma': 0.7086207951089967, 'eval_runtime': 10.9006, 'eval_samples_per_second': 425.849, 'eval_steps_per_second': 13.394, 'epoch': 0.34}
{'loss': 0.7711, 'learning_rate': 4.0825688073394495e-05, 'epoch': 0.37}
{'eval_loss': 0.6052485108375549, 'eval_accuracy': 0.7619560534252477, 'eval_f1': array([0.62346588, 0.84259464, 0.73186813]), 'eval_f1_mi': 0.7619560534252476, 'eval_f1_ma': 0.7326428851759276, 'eval_runtime': 10.8422, 'eval_samples_per_second': 428.143, 'eval_steps_per_second': 13.466, 'epoch': 0.37}

Why does the loss start at 1.1328? Why is the learning rate changing at each logging step rather than staying fixed — I set it to 5e-5 at the beginning? How should I interpret these results? To me the model seems to be learning, since the loss decreases at each step, but how does that relate to the change in the learning rate?

training_args = TrainingArguments(
    output_dir='/gpfswork/rech/kpf/umg16uw/results_hf',
    logging_dir='/gpfswork/rech/kpf/umg16uw/logs',
    do_train=True,
    do_eval=True,
    evaluation_strategy="steps",
    logging_first_step=True,
    logging_steps=10,
    num_train_epochs=2.0,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
    weight_decay=0.01
)
The loss starts at 1.1328 because the weights are randomly initialized, so the first batches amount to random guessing; with 3 classes, the cross-entropy of a uniform guess is ln(3) ≈ 1.10, which is why the first value lands close to that. The learning rate you supply to TrainingArguments is just the initial learning rate: by default, the HuggingFace Trainer applies a linear decay schedule, so the rate decreases linearly from 5e-5 towards 0 over the course of training — it is not being adapted based on the loss. The loss and accuracy are good measures to track across training: lower loss is better, higher accuracy is better. If you also logged a training accuracy, you could compare it to eval_accuracy: if the training accuracy keeps improving while eval_accuracy stalls or drops, you are starting to overfit the data.
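For reference, the logged values match a linear decay over 872 total optimization steps (2 epochs in this run). A minimal sketch of that default schedule — the step counts are inferred from the log, not stated in the question:

total_steps = 872  # inferred: 5e-5 * (1 - 1/872) reproduces the first logged rate

def linear_lr(step, initial_lr=5e-5):
    # HF Trainer's default "linear" schedule (no warmup): decay to 0
    return initial_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(1))   # ~4.9943e-05, matching the first log line
print(linear_lr(10))  # ~4.9427e-05, matching the second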
https://stackoverflow.com/questions/69909781/
How to convert torch tensor to float?
I am using flask to do inference and I am getting this result. Is there any way to convert this tensor into a float? I want to use the result for display in a react app.

{
  result: {
    predictions: "tensor([[-3.4333]], grad_fn=<AddmmBackward>)"
  }
}
From the torch.Tensor.item docs:

x = torch.tensor([1.0])
print(x.item())

output:

1.0

type check:

print(type(x.item()))

output:

<class 'float'>
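In the question's case the prediction is a 1x1 tensor carrying a grad_fn, and item() still works, returning a plain, JSON-serializable Python float — a small sketch (the variable name is illustrative):

predictions = model_output   # e.g. tensor([[-3.4333]], grad_fn=<AddmmBackward>)
value = predictions.item()   # -3.4333 as a Python float
# or equivalently: value = float(predictions)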
https://stackoverflow.com/questions/69911653/
Is it true that `inplace=True` activations in PyTorch make sense only for inference mode?
According to the discussions on the PyTorch forum:

What's the difference between nn.ReLU() and nn.ReLU(inplace=True)?
Guidelines for when and why one should set inplace = True?

The purpose of inplace=True is to modify the input in place, without allocating memory for an additional tensor holding the result of the operation. This allows more efficient memory usage, but prohibits the possibility of making a backward pass, at least if the operation decreases the amount of information — and the backpropagation algorithm requires intermediate activations to be saved in order to update the weights. Can one say that this mode should be turned on in layers only if the model is already trained and one doesn't want to modify it anymore?
nn.ReLU(inplace=True) saves memory during both training and testing. However, there are some problems we may face when we use nn.ReLU(inplace=True) while calculating gradients: sometimes the original values are needed. Because inplace destroys some of the original values, some usages may be problematic:

def forward(self, x):
    skip = x
    x = self.relu(x)  # inplace activation
    x += skip         # inplace addition
    # Error!

The above two consecutive inplace operations will produce an error. However, it is fine to use the addition first, then the activation function with inplace=True:

def forward(self, x):
    skip = x
    x += skip         # inplace addition
    x = self.relu(x)  # inplace activation
    # No error!
https://stackoverflow.com/questions/69913781/
I want to do analysis with classification algorithms using BERT's hidden state
I'm using the Huggingface Transformers package and BERT with PyTorch. I try to do text classification with CamembertForSequenceClassification. I can get the result, but I want to take on a more difficult task. I refer to this literature. In section 4.1 of this document, it is stated that:

After training, we drop the softmax activation layer and use BERT's hidden state as the feature vector, which we then use as input for different classification algorithms.

So, I checked modeling_bert.py. There is:

attention_probs = nn.Softmax(dim=-1)(attention_scores)

If I read the paper correctly, does it mean to use the attention_scores before passing them through the Softmax function? If so, how can I use the attention_scores and apply them to a classification algorithm? In short, what I want to do is to use the hidden state of BERT and apply it to Logistic Regression and so on. Thanks for any help.
They did not mean that Softmax layer — that one is inside BertAttention. They meant the pooler layer on top of BERT. I found their repository provided in the paper: https://github.com/axenov/politik-news. It seems that when they train, they use the plain BertForSequenceClassification (which uses hidden_states -> pooler activation -> linear classifier -> loss). When they predict, they only use the hidden_states (in modeling_bert.py it's called sequence_output), then pass it to a different classifier loaded in BiasPredictor.py:L26. So if you want to try a different classifier, use it there.
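A rough sketch of that idea with the transformers API — the names here are illustrative (train_texts / train_labels are assumed to exist) and are not taken from the linked repo:

import torch
from sklearn.linear_model import LogisticRegression
from transformers import CamembertModel, CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
encoder = CamembertModel.from_pretrained("camembert-base")

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    # hidden state of the first ([CLS]-like) token as the feature vector
    return out.last_hidden_state[:, 0].numpy()

clf = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)
preds = clf.predict(embed(["an example sentence"]))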
https://stackoverflow.com/questions/69914131/
AttributeError: module 'torch.optim.lr_scheduler' has no attribute 'LinearLR'
I'm trying to train my own object detection model with Pytorch, but I always get this error. I tried changing the torch version, but that didn't help. My packages: torchvision-0.11.1 and torch-1.10.0.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-10-9e52b782b448> in <module>()
      4 for epoch in range(num_epochs):
      5     # training for one epoch
----> 6     train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
      7     # update the learning rate
      8     lr_scheduler.step()

/content/engine.py in train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)
     21         warmup_iters = min(1000, len(data_loader) - 1)
     22
---> 23         lr_scheduler = torch.optim.lr_scheduler.LinearLR(
     24             optimizer, start_factor=warmup_factor, total_iters=warmup_iters
     25         )

AttributeError: module 'torch.optim.lr_scheduler' has no attribute 'LinearLR'
LinearLR scheduler was only recently introduced (v1.10.0). Please make sure your pytorch version is up to date and try again.
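If upgrading is not an option, a rough equivalent of that warmup schedule can be built with the older LambdaLR API — a sketch, with optimizer, warmup_factor and warmup_iters as defined in the tutorial's engine.py:

import torch

def warmup(it):
    # ramp the LR factor linearly from warmup_factor up to 1 over warmup_iters
    if it >= warmup_iters:
        return 1.0
    alpha = it / warmup_iters
    return warmup_factor * (1 - alpha) + alpha

lr_scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=warmup)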
https://stackoverflow.com/questions/69914189/
Bias grad in linear regression remains small compared to weight grad, and intercept is not properly learnt
I have thrown together a dummy model to showcase linear regression in pytorch, but I find that my model is not properly learning. It does well when it comes to learning the slope, but the intercept is not really budging. Printing out the grads at every epoch tells me that, indeed, the grad is a lot smaller for the bias. Why is that? How can I remedy it, so the intercept is properly learnt? This is what happens (a set to 0 to illustrate):

# Create some dummy data: we establish a linear relationship between x and y
a = np.random.rand()
b = np.random.rand()

a = 0

x = np.linspace(start=0, stop=100, num=100)
y = a * x + b

# Now let's create some noisy measurements
noise = np.random.normal(size=100)
y_noisy = a * x + b + noise

# What's the overall error?
mse_actual = np.sum(np.power(y-y_noisy,2))/len(y)

# Visualize
plt.scatter(x, y_noisy, label='Measurements', alpha=.7)
plt.plot(x, y, 'r', label='Underlying')
plt.legend()
plt.show()

# Let's learn something!
inputs = torch.from_numpy(x).type(torch.FloatTensor).unsqueeze(1)
targets = torch.from_numpy(y_noisy).type(torch.FloatTensor).unsqueeze(1)

# This is our model (one hidden node + bias)
model = torch.nn.Linear(1,1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
loss_function = torch.nn.MSELoss()

# What does it predict right now?
shuffled_inputs, preds = [], []
for input, target in zip(inputs, targets):
    pred = model(input)
    shuffled_inputs.append(input.detach().numpy()[0])
    preds.append(pred.detach().numpy()[0])

# Visualize
plt.scatter(x, y_noisy, color='blue', label='Measurements', alpha=.7)
plt.plot(shuffled_inputs, preds, color='orange', label='Predictions', alpha=.7)
plt.plot(x, y, 'r', label='Underlying')
plt.legend()
plt.show()

# Let's train!
epochs = 100
a_s, b_s = [], []

for epoch in range(epochs):
    # Reset optimizer values
    optimizer.zero_grad()

    # Predict values using current model
    preds = model(inputs)

    # How far off are we?
    loss = loss_function(targets, preds)

    # Calculate the gradient
    loss.backward()

    # Update model
    optimizer.step()

    for p in model.parameters():
        print('Grads:', p.grad)

    # New parameters
    a_s.append(list(model.parameters())[0].item())
    b_s.append(list(model.parameters())[1].item())

    print(f"Epoch {epoch+1} -- loss = {loss}")
It's a bit of a non-answer, but just use more epochs or add more datapoints. When you have 100 datapoints with noise as significant as yours (if you just plot the initial data it becomes obvious), the model will struggle with MSE as a loss. I can't see your image (work blocked imgur...) but I found it looked bad if you didn't adjust the axes on your matplotlib plot, because it was so zoomed in on the x axis (when a=0), so I zoomed out of that too:

# Create some dummy data: we establish a linear relationship between x and y
a = np.random.rand()
b = np.random.rand()

a = 0
N = 10000

x = np.linspace(start=0, stop=100, num=N)
y = a * x + b

# Now let's create some noisy measurements
noise = np.random.normal(size=N) * 0.1
y_noisy = a * x + b + noise

# What's the overall error?
mse_actual = np.sum(np.power(y-y_noisy,2))/len(y)

# Visualize
plt.figure()
plt.scatter(x, y_noisy, label='Measurements', alpha=.7)
plt.plot(x, y, 'r', label='Underlying')
plt.legend()
plt.show()

# Let's learn something!
inputs = torch.from_numpy(x).type(torch.FloatTensor).unsqueeze(1)
targets = torch.from_numpy(y_noisy).type(torch.FloatTensor).unsqueeze(1)

# This is our model (one hidden node + bias)
model = torch.nn.Linear(1,1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
loss_function = torch.nn.MSELoss()

# Let's train!
epochs = 50000
a_s, b_s = [], []

for epoch in range(epochs):
    # Reset optimizer values
    optimizer.zero_grad()

    # Predict values using current model
    preds = model(inputs)

    # How far off are we?
    loss = loss_function(targets, preds)

    # Calculate the gradient
    loss.backward()

    # Update model
    optimizer.step()

    # for p in model.parameters():
    #     print('Grads:', p.grad)

    # New parameters
    a_s.append(list(model.parameters())[0].item())
    b_s.append(list(model.parameters())[1].item())

    print(f"Epoch {epoch+1} -- loss = {loss}")

# What does it predict right now?
shuffled_inputs, preds = [], []
for input, target in zip(inputs, targets):
    pred = model(input)
    shuffled_inputs.append(input.detach().numpy()[0])
    preds.append(pred.detach().numpy()[0])

plt.figure()
plt.scatter(x, y_noisy, color='blue', label='Measurements', alpha=.7)
plt.plot(shuffled_inputs, preds, color='orange', label='Predictions', alpha=.7)
plt.plot(x, y, 'r', label='Underlying')
plt.axis([0, 100, y.min()-1, y.max()+1])
plt.legend()
plt.show()
https://stackoverflow.com/questions/69915768/
using gpu with simple transformer mt5 training
mt5 fine-tuning does not use the GPU (volatile GPU util 0%). Hi, I'm trying to fine-tune mt5-base for ko-en translation. I think the CUDA setup was done correctly (cuda available is True), but during training the GPU is not used, except when loading the dataset at the start (a very short time). I want to use the GPU resource efficiently and would appreciate advice about fine-tuning a translation model. Here is my code and training env.

import logging
import pandas as pd
from simpletransformers.t5 import T5Model, T5Args
import torch

logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)

train_df = pd.read_csv("data/enko_train.tsv", sep="\t").astype(str)
eval_df = pd.read_csv("data/enko_eval.tsv", sep="\t").astype(str)

train_df["prefix"] = ""
eval_df["prefix"] = ""

model_args = T5Args()
model_args.max_seq_length = 96
model_args.train_batch_size = 64
model_args.eval_batch_size = 32
model_args.num_train_epochs = 10
model_args.evaluate_during_training = True
model_args.evaluate_during_training_steps = 1000
model_args.use_multiprocessing = False
model_args.fp16 = True
model_args.save_steps = 1000
model_args.save_eval_checkpoints = True
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.preprocess_inputs = False
model_args.num_return_sequences = 1
model_args.wandb_project = "MT5 Korean-English Translation"

print("Is cuda available?", torch.cuda.is_available())

model = T5Model("mt5", "google/mt5-base", cuda_device=0, args=model_args)

# Train the model
model.train_model(train_df, eval_data=eval_df)

# Optional: Evaluate the model. We'll test it properly anyway.
results = model.eval_model(eval_df, verbose=True)

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Mon_May__3_19:15:13_PDT_2021
Cuda compilation tools, release 11.3, V11.3.109
Build cuda_11.3.r11.3/compiler.29920130_0

gpu 0 = Quadro RTX 6000
It was just an out-of-memory case: the parameters and dataset couldn't fit in my GPU memory. So I changed the model from mt5-base to mt5-small, removed the save checkpoints, and reduced the dataset.
https://stackoverflow.com/questions/69923334/
Issue installing pytorch in python3.7
Any help on how to install torch on python3.7 would be highly appreciated.
I tried installing torch from source code:

https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048

Go to the section "Build from Source" (though it says Python 3.6, you can also try it with Python 3.7 and Python 3.8). Once torch is installed, install torchvision from source:

git clone https://github.com/pytorch/vision
cd vision

Change to the version you want to install and run the setup:

sudo python setup.py install
https://stackoverflow.com/questions/69923724/
Converting 4 dimensional tensors into list of lists of lists (Python)
I have 6 tensors of shape (batch_size, S, S, 1) and I want to combine them into one python list of size (batch_size, S*S, 6), so that every element of the tensors ends up inside the inner list. Can this be achieved without using loops? What's an efficient way to solve it?
Let batch_size=10 and S=4 for the purpose of this example:

>>> x = [torch.rand(10, 4, 4, 1) for _ in range(6)]

Indeed the first step is to concatenate the tensors on the last dimension, axis=3:

>>> y = torch.cat(x, -1)
>>> y.shape
torch.Size([10, 4, 4, 6])

Then reshape to flatten axis=1 and axis=2. You can do so with torch.flatten here, since the two axes are adjacent:

>>> y = torch.cat(x, -1).flatten(1, 2)
>>> y.shape
torch.Size([10, 16, 6])
https://stackoverflow.com/questions/69928114/
How can I get the in and out edges weights for each neuron in a neural network?
Say I have the following network:

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(1, 2)
        self.fc2 = nn.Linear(2, 3)
        self.fc3 = nn.Linear(3, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Model()

I know that I can access the weights (i.e. edges) at each layer:

net.fc1.weight

However, I'm trying to create a function that randomly selects a neuron from the entire network and outputs its in-connections (i.e. the edges/weights attached to it from the previous layer) and its out-connections (i.e. the edges/weights going out of it to the next layer).

Pseudocode:

def get_neuron_in_out_edges(list_of_neurons):
    shuffled_list_of_neurons = shuffle(list_of_neurons)

    in_connections_list = []
    out_connections_list = []

    for neuron in shuffled_list_of_neurons:
        in_connections = get_in_connections(neuron)    # a list of connections
        out_connections = get_out_connections(neuron)  # a list of connections

        in_connections_list.append([neuron, in_connections])
        out_connections_list.append([neuron, out_connections])

    return in_connections_list, out_connections_list

The idea is that I can then access these values and, say, if they're smaller than 10, change them to 10 in the network. This is for a networks class where we're working on plotting different networks, so it doesn't have to make much sense from a machine learning perspective.
Let's ignore biases for this discussion. A linear layer computes the output y given weights w and inputs x as: y_i = sum_j w_ij x_j So, for neuron i all the incoming edges are the weights w_ij - that is the i-th row of the weight matrix W. Similarly, for input neuron j it affects all y_i according to the j-th column of the weight matrix W.
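Building on this row/column observation, here is a minimal sketch of the helper from the question's pseudocode, reusing the Model class above (the layer-indexing scheme is an assumption for illustration):

import torch
import torch.nn as nn

def get_neuron_in_out_edges(model, layer_idx, neuron_idx):
    # collect the Linear layers in forward order
    linears = [m for m in model.children() if isinstance(m, nn.Linear)]
    # in-connections: row `neuron_idx` of this layer's weight matrix
    in_edges = linears[layer_idx].weight[neuron_idx, :]
    # out-connections: column `neuron_idx` of the next layer's weight
    # matrix, unless this is the last layer
    out_edges = None
    if layer_idx + 1 < len(linears):
        out_edges = linears[layer_idx + 1].weight[:, neuron_idx]
    return in_edges, out_edges

net = Model()
in_edges, out_edges = get_neuron_in_out_edges(net, layer_idx=1, neuron_idx=0)

# example in-place edit from the question: raise weights below 10 up to 10
with torch.no_grad():
    net.fc2.weight[0, :].clamp_(min=10)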
https://stackoverflow.com/questions/69929438/
Cannot change device of pytorch model
I am trying to move my model onto a GPU. After running the function to determine if there is an available GPU, I determined there is one (see below): > device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") > device device(type='cuda', index=0) When I call model.to(device) I get no change in the model's device attribute: > model.to(device) S2SModel( (encoder): Encoder( (lstm): LSTM(5, 32, batch_first=True) ) (decoder): Decoder( (lstm): LSTM(4, 32, batch_first=True) ) (output_layer): Linear(in_features=32, out_features=1, bias=True) ) > model.device 'cpu' Though I have read that you do not need to assign the model.to() call back to the object, I have tried that too. > model = model.to(device) > model.device 'cpu'
device is likely a user-defined attribute here that is different from the actual device the model sits on. This seems to be the reason why model.device returns 'cpu'. To check whether your model is on CPU or GPU, you can look at its first parameter: >>> next(model.parameters()).device
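For example, assuming the S2SModel from the question:

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)  # nn.Module.to moves parameters and buffers in place
print(next(model.parameters()).device)  # device(type='cuda', index=0) when a GPU is available

If you want a model.device attribute to stay accurate, you would have to update that custom attribute yourself whenever you move the model.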
https://stackoverflow.com/questions/69931349/
Using plt to display pytorch image
x is the image, y is the label, and metadata are dates, times, etc. for x, y_true, metadata in train_loader: print(x.shape) The shape returns: torch.Size([16, 3, 448, 448]) How do I go about displaying x as an image? Do I use plt?
Your x is not a single image, but rather a batch of 16 different images, all of size 448x448 pixels. You can use torchvision.utils.make_grid to convert x into a grid of 4x4 images, and then plot it: import torchvision with torch.no_grad(): # no need for gradients here grid = torchvision.utils.make_grid(x, nrow=4) # you might consider normalize=True # convert the grid into a numpy array suitable for plt grid_np = grid.cpu().numpy().transpose(1, 2, 0) # channel dim should be last plt.matshow(grid_np)
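If you only want to look at a single image from the batch, you can index it and move the channel dimension last yourself (this assumes pixel values are already in a range plt can display, e.g. [0, 1]):

import matplotlib.pyplot as plt

img = x[0].cpu().numpy().transpose(1, 2, 0)  # one image, channels last: (448, 448, 3)
plt.imshow(img)
plt.show()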
https://stackoverflow.com/questions/69932913/
Why are tensor object attributes removed by cloning?
I'm trying to clone a tensor in pytorch and would like to also clone the tensor's attributes. Here is an example: import torch from torch import nn a = nn.Parameter(torch.rand(1)) a.adapt = True # define tensor attribute b = a.clone() # clone In the example above, I would like print(b.adapt) to output True; however, I get the following error: Traceback (most recent call last): File "scratch.py", line 13, in <module> print(b.adapt) AttributeError: 'Tensor' object has no attribute 'adapt' I'm wondering why tensor object attributes are removed by cloning and how to fix that.
The function torch.Tensor.clone performs a copy of the tensor's data, not a copy of the Python object. This is the reason why the adapt attribute of a is not available on b. Additionally, clone will keep the same grad_fn on the newly created tensor, as shown below.
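For example (the exact repr varies across PyTorch versions):

>>> b.grad_fn
<CloneBackward0 object at 0x7f...>

As for fixing it: custom attributes like adapt live on the Python object, not in the tensor's data, so a simple workaround, assuming you only need the flag carried over, is to re-attach it after cloning:

b = a.clone()
b.adapt = a.adapt  # copy the Python-level attribute manually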
https://stackoverflow.com/questions/69935285/
Extracting tensor data with index in pytorch
I would like to have the tensor indexed a certain way. Suppose my data, tensor X shaped (1, 3, 16, 9) is tensor([[[[ 0., 0., 0., 0., 1., 2., 0., 5., 6.], [ 0., 0., 0., 1., 2., 3., 5., 6., 7.], [ 0., 0., 0., 2., 3., 4., 6., 7., 8.], [ 0., 0., 0., 3., 4., 0., 7., 8., 0.], [ 0., 1., 2., 0., 5., 6., 0., 9., 10.], [ 1., 2., 3., 5., 6., 7., 9., 10., 11.], [ 2., 3., 4., 6., 7., 8., 10., 11., 12.], [ 3., 4., 0., 7., 8., 0., 11., 12., 0.], [ 0., 5., 6., 0., 9., 10., 0., 13., 14.], [ 5., 6., 7., 9., 10., 11., 13., 14., 15.], [ 6., 7., 8., 10., 11., 12., 14., 15., 16.], [ 7., 8., 0., 11., 12., 0., 15., 16., 0.], [ 0., 9., 10., 0., 13., 14., 0., 0., 0.], [ 9., 10., 11., 13., 14., 15., 0., 0., 0.], [10., 11., 12., 14., 15., 16., 0., 0., 0.], [11., 12., 0., 15., 16., 0., 0., 0., 0.]], [[ 0., 0., 0., 0., 17., 18., 0., 21., 22.], [ 0., 0., 0., 17., 18., 19., 21., 22., 23.], [ 0., 0., 0., 18., 19., 20., 22., 23., 24.], [ 0., 0., 0., 19., 20., 0., 23., 24., 0.], [ 0., 17., 18., 0., 21., 22., 0., 25., 26.], [17., 18., 19., 21., 22., 23., 25., 26., 27.], [18., 19., 20., 22., 23., 24., 26., 27., 28.], [19., 20., 0., 23., 24., 0., 27., 28., 0.], [ 0., 21., 22., 0., 25., 26., 0., 29., 30.], [21., 22., 23., 25., 26., 27., 29., 30., 31.], [22., 23., 24., 26., 27., 28., 30., 31., 32.], [23., 24., 0., 27., 28., 0., 31., 32., 0.], [ 0., 25., 26., 0., 29., 30., 0., 0., 0.], [25., 26., 27., 29., 30., 31., 0., 0., 0.], [26., 27., 28., 30., 31., 32., 0., 0., 0.], [27., 28., 0., 31., 32., 0., 0., 0., 0.]], [[ 0., 0., 0., 0., 33., 34., 0., 37., 38.], [ 0., 0., 0., 33., 34., 35., 37., 38., 39.], [ 0., 0., 0., 34., 35., 36., 38., 39., 40.], [ 0., 0., 0., 35., 36., 0., 39., 40., 0.], [ 0., 33., 34., 0., 37., 38., 0., 41., 42.], [33., 34., 35., 37., 38., 39., 41., 42., 43.], [34., 35., 36., 38., 39., 40., 42., 43., 44.], [35., 36., 0., 39., 40., 0., 43., 44., 0.], [ 0., 37., 38., 0., 41., 42., 0., 45., 46.], [37., 38., 39., 41., 42., 43., 45., 46., 47.], [38., 39., 40., 42., 43., 44., 46., 47., 48.], [39., 40., 0., 43., 44., 0., 47., 48., 0.], [ 0., 41., 42., 0., 45., 46., 0., 0., 0.], [41., 42., 43., 45., 46., 47., 0., 0., 0.], [42., 43., 44., 46., 47., 48., 0., 0., 0.], [43., 44., 0., 47., 48., 0., 0., 0., 0.]]]] I would like to have those rows where (row_index % n) == i (say n = 4 and i = 0 to 3) is saved in another tensor Y. 
For example, for the data X[0][0]: [[ 0., 0., 0., 0., 1., 2., 0., 5., 6.], [ 0., 0., 0., 1., 2., 3., 5., 6., 7.], [ 0., 0., 0., 2., 3., 4., 6., 7., 8.], [ 0., 0., 0., 3., 4., 0., 7., 8., 0.], [ 0., 1., 2., 0., 5., 6., 0., 9., 10.], [ 1., 2., 3., 5., 6., 7., 9., 10., 11.], [ 2., 3., 4., 6., 7., 8., 10., 11., 12.], [ 3., 4., 0., 7., 8., 0., 11., 12., 0.], [ 0., 5., 6., 0., 9., 10., 0., 13., 14.], [ 5., 6., 7., 9., 10., 11., 13., 14., 15.], [ 6., 7., 8., 10., 11., 12., 14., 15., 16.], [ 7., 8., 0., 11., 12., 0., 15., 16., 0.], [ 0., 9., 10., 0., 13., 14., 0., 0., 0.], [ 9., 10., 11., 13., 14., 15., 0., 0., 0.], [10., 11., 12., 14., 15., 16., 0., 0., 0.], [11., 12., 0., 15., 16., 0., 0., 0., 0.]] I would like to have a tensor containing the following data, which is basically collection of the rows where row_index % 4 == 0 (here i = 0): [[ 0., 0., 0., 0., 1., 2., 0., 5., 6.], [ 0., 1., 2., 0., 5., 6., 0., 9., 10.], [ 0., 5., 6., 0., 9., 10., 0., 13., 14.], [ 0., 9., 10., 0., 13., 14., 0., 0., 0.]] Similarly, where i = 1, row_index % 4 == i will look like: [[ 0., 0., 0., 1., 2., 3., 5., 6., 7.], [ 1., 2., 3., 5., 6., 7., 9., 10., 11.], [ 5., 6., 7., 9., 10., 11., 13., 14., 15.], [ 9., 10., 11., 13., 14., 15., 0., 0., 0.]] when i = 2, row_index % 4 == i: [[ 0., 0., 0., 2., 3., 4., 6., 7., 8.], [ 2., 3., 4., 6., 7., 8., 10., 11., 12.], [ 6., 7., 8., 10., 11., 12., 14., 15., 16.], [10., 11., 12., 14., 15., 16., 0., 0., 0.]] when i = 3, row_index % 4 == i: [[ 0., 0., 0., 3., 4., 0., 7., 8., 0.], [ 3., 4., 0., 7., 8., 0., 11., 12., 0.], [ 7., 8., 0., 11., 12., 0., 15., 16., 0.], [11., 12., 0., 15., 16., 0., 0., 0., 0.]] I have tried hard coding it and it doesn't seem practical when the data becomes larger and the size becomes dynamic and I assume that there would be a better way to come about it. temp0 = data[0][0][0][:] temp1 = data[0][0][4][:] temp2 = data[0][0][8][:] temp3 = data[0][0][12][:] temp = torch.stack([temp0,temp1,temp2,temp3],dim = 0) Also, it would be great if the result can come back in one tensor like : tensor Y = ([[[ 0., 0., 0., 0., 1., 2., 0., 5., 6.], [ 0., 1., 2., 0., 5., 6., 0., 9., 10.], [ 0., 5., 6., 0., 9., 10., 0., 13., 14.], [ 0., 9., 10., 0., 13., 14., 0., 0., 0.]], [[ 0., 0., 0., 1., 2., 3., 5., 6., 7.], [ 1., 2., 3., 5., 6., 7., 9., 10., 11.], [ 5., 6., 7., 9., 10., 11., 13., 14., 15.], [ 9., 10., 11., 13., 14., 15., 0., 0., 0.]], [[ 0., 0., 0., 2., 3., 4., 6., 7., 8.], [ 2., 3., 4., 6., 7., 8., 10., 11., 12.], [ 6., 7., 8., 10., 11., 12., 14., 15., 16.], [10., 11., 12., 14., 15., 16., 0., 0., 0.]], [[ 0., 0., 0., 3., 4., 0., 7., 8., 0.], [ 3., 4., 0., 7., 8., 0., 11., 12., 0.], [ 7., 8., 0., 11., 12., 0., 15., 16., 0.], [11., 12., 0., 15., 16., 0., 0., 0., 0.]]])
You can achieve this by first constructing a tensor containing the selected rows, then using torch.gather to assemble the final tensor. Assuming we have two lists I and N containing the values of i and n respectively: I = [0, 1, 2, 3] N = [4, 4, 4, 4] First we construct the index tensor: >>> index = torch.stack([(torch.arange(16) % n == i).nonzero() for i, n in zip(I, N)]) tensor([[[ 0], [ 4], [ 8], [12]], [[ 1], [ 5], [ 9], [13]], [[ 2], [ 6], [10], [14]], [[ 3], [ 7], [11], [15]]]) Then some expanding and reshaping is required: >>> index_ = index[None].flatten(1,2).expand(X.size(0), -1, X.size(-1)) tensor([[[ 0, 0, 0, 0, 0, 0, 0, 0, 0], [ 4, 4, 4, 4, 4, 4, 4, 4, 4], [ 8, 8, 8, 8, 8, 8, 8, 8, 8], [12, 12, 12, 12, 12, 12, 12, 12, 12], [ 1, 1, 1, 1, 1, 1, 1, 1, 1], [ 5, 5, 5, 5, 5, 5, 5, 5, 5], [ 9, 9, 9, 9, 9, 9, 9, 9, 9], [13, 13, 13, 13, 13, 13, 13, 13, 13], [ 2, 2, 2, 2, 2, 2, 2, 2, 2], [ 6, 6, 6, 6, 6, 6, 6, 6, 6], [10, 10, 10, 10, 10, 10, 10, 10, 10], [14, 14, 14, 14, 14, 14, 14, 14, 14], [ 3, 3, 3, 3, 3, 3, 3, 3, 3], [ 7, 7, 7, 7, 7, 7, 7, 7, 7], [11, 11, 11, 11, 11, 11, 11, 11, 11], [15, 15, 15, 15, 15, 15, 15, 15, 15]]]) As a rule of thumb, we want index_ to have the same number of dimensions as X. Now we can apply torch.gather and reshape to the final form: >>> X.gather(1, index_).reshape(len(X), *index.shape[:2], -1) tensor([[[[ 0., 0., 0., 0., 1., 2., 0., 5., 6.], [ 0., 1., 2., 0., 5., 6., 0., 9., 10.], [ 0., 5., 6., 0., 9., 10., 0., 13., 14.], [ 0., 9., 10., 0., 13., 14., 0., 0., 0.]], [[ 0., 0., 0., 1., 2., 3., 5., 6., 7.], [ 1., 2., 3., 5., 6., 7., 9., 10., 11.], [ 5., 6., 7., 9., 10., 11., 13., 14., 15.], [ 9., 10., 11., 13., 14., 15., 0., 0., 0.]], [[ 0., 0., 0., 2., 3., 4., 6., 7., 8.], [ 2., 3., 4., 6., 7., 8., 10., 11., 12.], [ 6., 7., 8., 10., 11., 12., 14., 15., 16.], [10., 11., 12., 14., 15., 16., 0., 0., 0.]], [[ 0., 0., 0., 3., 4., 0., 7., 8., 0.], [ 3., 4., 0., 7., 8., 0., 11., 12., 0.], [ 7., 8., 0., 11., 12., 0., 15., 16., 0.], [11., 12., 0., 15., 16., 0., 0., 0., 0.]]]]) This method can be extended to batch tensors: >>> index = torch.stack([(torch.arange(16) % n == i).nonzero() for i, n in zip(I, N)]) >>> index_ = index[None,None].flatten(2,3).expand(X.size(0), X.size(1), -1, X.size(-1)) >>> X.gather(2, index_).reshape(*X.shape[:2], *index.shape[:2], -1)
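When n divides the number of rows evenly and you want all residues i = 0..n-1 at once (as in this example, with n = 4 and 16 rows), a simpler alternative sketch is a reshape followed by a transpose, since row 4*k + i of each channel lands at position [i][k]:

>>> Y = X.reshape(1, 3, 4, 4, 9).transpose(2, 3)
>>> Y.shape
torch.Size([1, 3, 4, 4, 9])
>>> torch.equal(Y[0, 0, 0], X[0, 0, 0::4])  # the rows where row_index % 4 == 0
True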
https://stackoverflow.com/questions/69938529/
Pytorch / Numpy dimensions math: Why [n] + [n, 1] = [n, n]
I'm trying to understand the logic for inferring the semantics of addition in numpy/torch. Here is an example that caused a bug in my program: import numpy as np x = np.arange(2) # shape (2,) y = x.reshape(-1, 1) # shape (2, 1) z = y + x # expected (2,) or (2, 1) print(z.shape) # (2, 2) So basically the reshape happened in an unrelated operation; however, x and y still had the same number of elements, and I expected the resulting shape to be either [2,] or [2, 1], as the addition happens on the axis where all elements live. My questions: why do I get the [2, 2] shape? What's the bigger picture behind it that can help me expect this outcome in similar, but different, scenarios?
This is caused by broadcasting, for which the NumPy documentation gives the following example: x = np.arange(4) # shape (4,) xx = x.reshape(4,1) # shape (4,1) y = np.ones(5) # shape (5,) x + y # ValueError: operands could not be broadcast together with shapes (4,) (5,) xx + y # shape (4, 5) "When either of the dimensions compared is 1, the other is used. In other words, dimensions with size 1 are stretched or "copied" to match the other." In your case the shapes are aligned from the trailing dimension: (2, 1) + (2,) is treated as (2, 1) + (1, 2), both size-1 dimensions are stretched, and the result has shape (2, 2).
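If an elementwise sum was the intent, a short sketch of two ways to make the shapes agree before adding:

import numpy as np

x = np.arange(2)      # shape (2,)
y = x.reshape(-1, 1)  # shape (2, 1)

z1 = y.ravel() + x    # shape (2,): flatten y back to 1-D
z2 = y + x[:, None]   # shape (2, 1): give x a matching column shape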
https://stackoverflow.com/questions/69942318/
How do I use autograd for a separate function independent of backpropagate in PyTorch?
I have two variables, x and theta. I am trying to minimise my loss with respect to theta only, but as part of my loss function I need the derivative of a different function (f) with respect to x. This derivative itself is not relevant to the minimisation, only its output. However, when implementing this in PyTorch I am getting a Runtime error. A minimal example is as follows: # minimal example of two different autograds import torch from torch.autograd.functional import jacobian def f(theta, x): return torch.sum(theta * x ** 2) def df(theta, x): J = jacobian(lambda x: f(theta, x), x) return J # example evaluations of the autograd gradient x = torch.tensor([1., 2.]) theta = torch.tensor([1., 1.], requires_grad = True) # derivative should be 2*theta*x (same as an analytical) with torch.no_grad(): print(df(theta, x)) print(2*theta*x) tensor([2., 4.]) tensor([2., 4.]) # define some arbitrary loss as a fn of theta loss = torch.sum(df(theta, x)**2) loss.backward() gives the following error RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn If I provide an analytic derivative (2*theta*x), it works fine: loss = torch.sum((2*theta*x)**2) loss.backward() Is there a way to do this in PyTorch? Or am I limited in some way? Let me know if anyone needs any more details. PS I am imagining the solution is something similar to the way that JAX does autograd, as that is what I am more familiar with. What I mean here is that in JAX I believe you would just do: from jax import grad df = grad(lambda x: f(theta, x)) and then df would just be a function that can be called at any point. But is PyTorch the same? Or is there some conflict within .backward() that causes this error?
PyTorch's jacobian does not create a computation graph unless you explicitly ask for it: J = jacobian(lambda x: f(theta, x), x, create_graph=True) i.e. with the create_graph argument. The documentation is quite clear about it: create_graph (bool, optional) – If True, the Jacobian will be computed in a differentiable manner
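Applied to the question's minimal example, only df needs to change:

def df(theta, x):
    # create_graph=True builds a graph through the Jacobian computation,
    # so the result stays differentiable with respect to theta
    return jacobian(lambda x_: f(theta, x_), x, create_graph=True)

loss = torch.sum(df(theta, x) ** 2)
loss.backward()
print(theta.grad)  # now populated instead of raising a RuntimeError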
https://stackoverflow.com/questions/69942564/
How to change the directory of mlflow logs?
I am using MLflow to log the metrics but I want to change the default saving logs directory. So, instead of writing log files besides my main file, I want to store them to /path/outputs/lg . I don't know how to change it. I use it without in the Model. import os from time import time import mlflow import numpy as np import torch import tqdm # from segmentation_models_pytorch.utils import metrics from AICore.emergency_landing.metrics import IoU, F1 from AICore.emergency_landing.utils import AverageMeter from AICore.emergency_landing.utils import TBLogger class Model: def __init__(self, model, num_classes=5, ignore_index=0, optimizer=None, scheduler=None, criterion=None, device=None, epochs=30, train_loader=None, val_loader=None, tb_logger: TBLogger = None, logger=None, best_model_path=None, model_check_point_path=None, load_from_best_model=None, load_from_model_checkpoint=None, early_stopping=None, debug=False): self.debug = debug self.early_stopping = { 'init': early_stopping, 'changed': 0 } self.optimizer = optimizer self.scheduler = scheduler self.criterion = criterion self.device = device self.epochs = epochs self.train_loader = train_loader self.val_loader = val_loader self.model = model.to(device) self.tb_logger = tb_logger self.logger = logger self.best_loss = np.Inf if not os.path.exists(best_model_path): os.makedirs(best_model_path) self.best_model_path = best_model_path if not os.path.exists(model_check_point_path): os.makedirs(model_check_point_path) self.model_check_point_path = model_check_point_path self.load_from_best_model = load_from_best_model self.load_from_model_checkpoint = load_from_model_checkpoint if self.load_from_best_model is not None: self.load_model(path=self.load_from_best_model) if self.load_from_model_checkpoint is not None: self.load_model_checkpoint(path=self.load_from_model_checkpoint) self.train_iou = IoU(num_classes=num_classes, ignore_index=ignore_index) self.val_iou = IoU(num_classes=num_classes, ignore_index=ignore_index) self.test_iou = IoU(num_classes=num_classes, ignore_index=ignore_index) self.train_f1 = F1(num_classes=num_classes, ignore_index=ignore_index, mdmc_average='samplewise') self.val_f1 = F1(num_classes=num_classes, ignore_index=ignore_index, mdmc_average='samplewise') self.test_f1 = F1(num_classes=num_classes, ignore_index=ignore_index, mdmc_average='samplewise') def metrics(self, is_train=True): if is_train: train_losses = AverageMeter('Training Loss', ':.4e') train_iou = AverageMeter('Training iou', ':6.2f') train_f_score = AverageMeter('Training F_score', ':6.2f') return train_losses, train_iou, train_f_score else: val_losses = AverageMeter('Validation Loss', ':.4e') val_iou = AverageMeter('Validation mean iou', ':6.2f') val_f_score = AverageMeter('Validation F_score', ':6.2f') return val_losses, val_iou, val_f_score def fit(self): self.logger.info("\nStart training\n\n") start_training_time = time() with mlflow.start_run(): for e in range(self.epochs): start_training_epoch_time = time() self.model.train() train_losses_avg, train_iou_avg, train_f_score_avg = self.metrics(is_train=True) with tqdm.tqdm(self.train_loader, unit="batch") as tepoch: tepoch.set_description(f"Epoch {e}") for image, target in tepoch: # Transfer Data to GPU if available image = image.to(self.device) target = target.to(self.device) # Clear the gradients self.optimizer.zero_grad() # Forward Pass # out = self.model(image)['out'] # if unet == true => remove ['out'] out = self.model(image) # Find the Loss loss = self.criterion(out, target) # Calculate Loss 
train_losses_avg.update(loss.item(), image.size(0)) # Calculate gradients loss.backward() # Update Weights self.optimizer.step() iou = self.train_iou(out.cpu(), target.cpu()).item() train_iou_avg.update(iou) f1_score = self.train_f1(out.cpu(), target.cpu()).item() train_f_score_avg.update(f1_score) tepoch.set_postfix(loss=train_losses_avg.avg, iou=train_iou_avg.avg, f_score=train_f_score_avg.avg) if self.debug: break self.tb_logger.log(log_type='criterion/training', value=train_losses_avg.avg, epoch=e) self.tb_logger.log(log_type='iou/training', value=train_iou_avg.avg, epoch=e) self.tb_logger.log(log_type='f_score/training', value=train_f_score_avg.avg, epoch=e) mlflow.log_metric('criterion/training', train_losses_avg.avg, step=e) mlflow.log_metric('iou/training', train_iou_avg.avg, step=e) mlflow.log_metric('f_score/training', train_f_score_avg.avg, step=e) end_training_epoch_time = time() - start_training_epoch_time print('\n') self.logger.info( f'Training Results - [{end_training_epoch_time:.3f}s] Epoch: {e}:' f' f_score: {train_f_score_avg.avg:.3f},' f' IoU: {train_iou_avg.avg:.3f},' f' Loss: {train_losses_avg.avg:.3f}') # validation step val_loss = self.evaluation(e) # apply scheduler if self.scheduler: self.scheduler.step() # early stopping if self.early_stopping['init'] >= self.early_stopping['changed']: self._early_stopping_model(val_loss=val_loss) else: print(f'The model can not learn more, Early Stopping at epoch[{e}]') break # save best model if self.best_model_path is not None: self._best_model(val_loss=val_loss, path=self.best_model_path) # model check points if self.model_check_point_path is not None: self.save_model_check_points(path=self.model_check_point_path, epoch=e, net=self.model, optimizer=self.optimizer, loss=self.criterion, avg_loss=train_losses_avg.avg) # log mlflow if self.scheduler: mlflow.log_param("get_last_lr", self.scheduler.get_last_lr()) mlflow.log_param("scheduler", self.scheduler.state_dict()) self.tb_logger.flush() if self.debug: break end_training_time = time() - start_training_time print(f'Finished Training after {end_training_time:.3f}s') self.tb_logger.close() def evaluation(self, epoch): print('Validating...') start_validation_epoch_time = time() self.model.eval() # Optional when not using Model Specific layer with torch.no_grad(): val_losses_avg, val_iou_avg, val_f_score_avg = self.metrics(is_train=False) with tqdm.tqdm(self.val_loader, unit="batch") as tepoch: for image, target in tepoch: # Transfer Data to GPU if available image = image.to(self.device) target = target.to(self.device) # out = self.model(image)['out'] # if unet == true => remove ['out'] out = self.model(image) # Find the Loss loss = self.criterion(out, target) # Calculate Loss val_losses_avg.update(loss.item(), image.size(0)) iou = self.val_iou(out.cpu(), target.cpu()).item() val_iou_avg.update(iou) f1_score = self.val_f1(out.cpu(), target.cpu()).item() val_f_score_avg.update(f1_score) tepoch.set_postfix(loss=val_losses_avg.avg, iou=val_iou_avg.avg, f_score=val_f_score_avg.avg) if self.debug: break print('\n') self.tb_logger.log(log_type='criterion/validation', value=val_losses_avg.avg, epoch=epoch) self.tb_logger.log(log_type='iou/validation', value=val_iou_avg.avg, epoch=epoch) self.tb_logger.log(log_type='f_score/validation', value=val_f_score_avg.avg, epoch=epoch) mlflow.log_metric('criterion/validation', val_losses_avg.avg, step=epoch) mlflow.log_metric('iou/validation', val_iou_avg.avg, step=epoch) mlflow.log_metric('f_score/validation', val_f_score_avg.avg, step=epoch) 
end_validation_epoch_time = time() - start_validation_epoch_time self.logger.info( f'validation Results - [{end_validation_epoch_time:.3f}s] Epoch: {epoch}:' f' f_score: {val_f_score_avg.avg:.3f},' f' IoU: {val_iou_avg.avg:.3f},' f' Loss: {val_losses_avg.avg:.3f}') print('\n') return val_losses_avg.avg def _save_model(self, name, path, params): torch.save(params, path) def _early_stopping_model(self, val_loss): if self.best_loss < val_loss: self.early_stopping['changed'] += 1 else: self.early_stopping['changed'] = 0 def _best_model(self, val_loss, path): if self.best_loss > val_loss: self.best_loss = val_loss name = f'/best_model_loss_{self.best_loss:.2f}'.replace('.', '_') self._save_model(name, path=f'{path}/{name}.pt', params={ 'model_state_dict': self.model.state_dict(), }) print(f'The best model is saved with criterion: {self.best_loss:.2f}') def save_model_check_points(self, path, epoch, net, optimizer, loss, avg_loss): name = f'/model_epoch_{epoch}_loss_{avg_loss:.2f}'.replace('.', '_') self._save_model(name, path=f'{path}/{name}.pt', params={ 'epoch': epoch, 'model_state_dict': net.state_dict(), 'optimizer_state_dict': optimizer.state_dict(), 'criterion': loss, }) print(f'model checkpoint is saved at model_epoch_{epoch}_loss_{avg_loss:.2f}') def load_model_checkpoint(self, path): checkpoint = torch.load(path) self.model.load_state_dict(checkpoint['model_state_dict']) self.optimizer.load_state_dict(checkpoint['optimizer_state_dict']) epoch = checkpoint['epoch'] self.criterion = checkpoint['criterion'] return epoch def load_model(self, path): best_model = torch.load(path) self.model.load_state_dict(best_model['model_state_dict'])
The solution is: mlflow.set_tracking_uri(uri=f'file://{hydra.utils.to_absolute_path("../output/mlruns")}') exp = mlflow.get_experiment_by_name(name='Emegency_landing') if not exp: experiment_id = mlflow.create_experiment(name='Emegency_landing', artifact_location=f'file://{hydra.utils.to_absolute_path("../output/mlruns")}') else: experiment_id = exp.experiment_id And then you should pass the experiment id to: with mlflow.start_run(experiment_id=experiment_id): pass If you don't set the mlruns path explicitly, running the mlflow ui command will automatically create another folder named mlruns, so make sure both point to the same location.
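If you are not using Hydra, a more minimal sketch that points MLflow at the directory mentioned in the question (the logged value below is a dummy for illustration):

import mlflow

mlflow.set_tracking_uri("file:///path/outputs/lg")
with mlflow.start_run():
    mlflow.log_metric("criterion/training", 0.5, step=0)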
https://stackoverflow.com/questions/69944447/
Efficient way to get "neuron-edge-neuron" values in a neural network
I'm working on a visual networks project where I'm trying to plot several node-edge-node values in an interactive graph. I have several neural networks (this is one example): import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 2) self.fc2 = nn.Linear(2, 3) self.fc3 = nn.Linear(3, 1) def forward(self, x): x1 = self.fc1(x) x = torch.relu(x1) x2 = self.fc2(x) x = torch.relu(x2) x3 = self.fc3(x) return x3, x2, x1 net = Model() How can I get the node-edge-node (neuron-edge-neuron) values in the network in an efficient way? Some of these networks have a large number of parameters. Note that for the first layer it will be input-edge-neuron rather than neuron-edge-neuron. I tried saving each node values after the fc layers (ie x1,x2,x3) so I won't have to recompute them, but I'm not sure how to do the edges and match them to their corresponding neurons in an efficient way. The output I'm looking for is a list of lists of node-edge-node values. Though it can also be a tensor of tensors if it's easier. For example, in the above network from the first layer I will have 2 triples (1x2), from the 2nd layer I will have 6 of them (2x3), and in the last layer I will have 3 triples (3x1). The issue is matching nodes (ie neurons) values (one from layer n-1 and one from layer n) with the corresponding edges in an efficient way.
Confession: let's start by saying that I modified your code a bit to make it convenient. You can do everything in the form it originally was. I also changed the specific numbers of neurons, just for playing around (I am sure you can change them back). I created a summary object (returned by the .forward() function) that contains the entire execution trace of the network, i.e. (input, weight, output) tuples for every layer. class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(3, 5) self.fc2 = nn.Linear(5, 7) self.fc3 = nn.Linear(7, 2) def forward(self, x): summary = [] running_x = x for layer in self.children(): out = layer(running_x) # triplet of (input, weight, output) for each layer summary.append((running_x, layer.weight, out)) running_x = out return summary model = Model() batch_size = 32 X = torch.rand(batch_size, 3) summary = model(X) The core logic is only this much for L in summary: # iterate over the (ip, weight, out) tuple for each layer ip, weight, out = L # unpack them ip = ip[:, :, None, None].repeat(1, 1, out.shape[-1], 1) weight = weight.T[None, :, :, None].repeat(batch_size, 1, 1, 1) out = out[:, None, :, None].repeat(1, ip.shape[1], 1, 1) triplets = torch.cat([ip, weight, out], -1) So the triplets variable (one for each layer) is all you are looking for. It has size (batch_size, layer_in_dim, layer_out_dim, 3). Let's look specifically at the triplets for the first layer: >> triplets.shape (32, 3, 5, 3) E.g., given a sample index b = 12, input neuron index i = 1 and output neuron index j = 3, you get exactly the node-edge-node tuple >> triplets[b][i][j] tensor([0.7080, 0.3442, 0.7344], ...) Verify: let's manually verify the correctness. The 12th sample's 1st input dimension is # It's the first layer we are looking at, so the input comes from the user >> X[12][1] tensor(0.7080) CHECK. The connecting weight between the 1st input neuron and the 3rd output neuron of the first layer: >> model.fc1.weight.T[1][3] # weight matrix is transposed, so had to do .T tensor(0.3442, ...) CHECK. The output of the 3rd neuron for the 12th sample can be retrieved from its activation tensor: >> _, _, out = summary[0] # first layer's output tensor >> out[12][3] tensor(0.7344, ...) ALSO CHECK. I hope that's what you wanted. If any more info or changes are needed, feel free to comment. I don't think it can get any more efficient than that.
https://stackoverflow.com/questions/69946161/
OSError: [WinError 127] The specified procedure could not be found
While importing torch (import torch) I'm facing the following error message: OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\myUserName\anaconda3\lib\site-packages\torch\lib\jitbackend_test.dll" or one of its dependencies. I tried the suggestion from this article but without success. Any ideas how to fix it? My environment: NVIDIA GeForce GTX 1650 Windows 11 Cuda 11.5 Conda 4.10.3 Python 3.8.5 Torch 1.10 Microsoft Visual C++ Redistributable installed (https://aka.ms/vs/17/release/vc_redist.x64.exe)
Fortunately, after extensive research, I found a solution: someone suggested that I create a new conda environment, and that worked for me! Solution: create a new conda env: conda create --name new-env install python: conda install python=3.8.5 install pytorch: conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch test cuda: import torch; print(torch.version.cuda); print(torch.cuda.is_available())
https://stackoverflow.com/questions/69958526/