I can predict one image but not a set of images with a PyTorch ResNet18 model; how can I predict a set of images in a list using PyTorch models?
x is a list of (36, 60, 3) images. I am trying to predict the output on my images with a pretrained PyTorch ResNet18. I took x as a list of 2 images. When I take only 1 image, I get the prediction with no errors, as follows: im = x[0] preprocess = transforms.Compose([ transforms.ToTensor(), transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] )]) # Pass the image for preprocessing and the image preprocessed img_preprocessed = preprocess(im) # Reshape, crop, and normalize the input tensor for feeding into network for evaluation batch_img_tensor = torch.unsqueeze(img_preprocessed, 0) resnet18.eval() out = resnet18(batch_img_tensor).flatten() but it does not work when I set im=x. Something goes wrong in the preprocessing line and I get this error: TypeError: pic should be PIL Image or ndarray. Got <class 'list'> I tried Variable(torch.tensor(x)) as follows: x=dataset(source_p) y=Variable(torch.tensor(x)) print(y.shape) resnet18(y) I get the following error: RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[2, 36, 60, 3] to have 3 channels, but got 36 channels instead My question is: how can I predict all the images in the list x at once? Thanks!
Eventually I created a class that takes x and transforms all elements: class formDataset(Dataset): def __init__(self, imgs, transform=None): self.imgs = imgs self.transform = transform def __len__(self): return len(self.imgs) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() image = self.imgs[idx] sample = image if self.transform: sample = self.transform(sample) return sample Afterwards I call l_set = formDataset(imgs=x, transform=preprocess) l_loader = DataLoader(l_set, batch_size=2) for data in l_loader: features = resnet(data)
https://stackoverflow.com/questions/69964502/
Check equality of any top k entries in rows of tensor A against argmax in rows of tensor B
New to tensors/pytorch. I have two 2d tensors, A and B. A contains floats that represent the probability assigned to a certain index. B contains a one-hot binary vector in the correct index. A tensor([[0.1, 0.4, 0.5], [0.5, 0.4, 0.1], [0.4, 0.5, 0.1]]) B tensor([[0, 0, 1], [0, 1, 0], [0, 0, 1]]) I would like to find the number of rows where the index of any top-k values of A match the one-hot index in B. In this case, k=2. My attempt: tops = torch.topk(A, 2, dim=1) top_idx = tops.indices top_2_matches = torch.where((torch.any(top_idx, 1) == B.argmax(dim=1))) If done properly, the example should return a tensor([0, 1]), since the first 2 rows have top-2 matches, but I get (tensor([1]),) as a return. Not sure where I'm going wrong here. Thanks for any help!
Try this: top_idx = torch.topk(A, 2, dim=1).indices row_indicator = (top_idx == B.argmax(dim=1).unsqueeze(dim=1)).any(dim=1) top_2_matches = torch.arange(len(row_indicator))[row_indicator] For example: >>> import torch >>> A = torch.tensor([[0.1, 0.4, 0.5], ... [0.5, 0.4, 0.1], ... [0.4, 0.5, 0.1]]) >>> B = torch.tensor([[0, 0, 1], ... [0, 1, 0], ... [0, 0, 1]]) >>> tops = torch.topk(A, 2, dim=1) >>>tops torch.return_types.topk( values=tensor([[0.5000, 0.4000], [0.5000, 0.4000], [0.5000, 0.4000]]), indices=tensor([[2, 1], [0, 1], [1, 0]])) >>> top_idx = tops.indices >>> top_idx tensor([[2, 1], [0, 1], [1, 0]]) >>> index_indicator = top_idx == B.argmax(dim=1).unsqueeze(dim=1) >>> index_indicator tensor([[ True, False], [False, True], [False, False]]) >>> row_indicator = index_indicator.any(dim=1) >>> row_indicator tensor([ True, True, False]) >>> top_2_matches = torch.arange(len(row_indicator))[row_indicator] >>> top_2_matches tensor([0, 1])
https://stackoverflow.com/questions/69965271/
cross_entropy_loss(): argument 'target' (position 2) must be Tensor, not numpy.ndarray
Data preparation, and building the model: dataset = datasets.load_iris() data = dataset.data target = dataset.target data_tensor=torch.from_numpy(data).float() target_tensor=torch.from_numpy(target).long() model = nn.Sequential( bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=4, out_features=100), nn.ReLU(), bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=100, out_features=3), ) cross_entropy_loss = nn.CrossEntropyLoss() klloss = bnn.BKLLoss(reduction='mean', last_layer_only=False) klweight = 0.01 optimizer = optim.Adam(model.parameters(), lr=0.01) Run the model: Error occurred for step in range(3000): models = model(data_tensor) cross_entropy = cross_entropy_loss(models, target) kl = klloss(model) total_cost = cross_entropy + klweight*kl optimizer.zero_grad() total_cost.backward() optimizer.step() _, predicted = torch.max(models.data, 1) final = target_tensor.size(0) correct = (predicted == target_tensor).sum() print('- Accuracy: %f %%' % (100 * float(correct) / final)) print('- CE : %2.2f, KL : %2.2f' % (cross_entropy.item(), kl.item())) TypeError ----> 3 cross_entropy = cross_entropy_loss(models, target) TypeError: cross_entropy_loss(): argument 'target' (position 2) must be Tensor, not numpy.ndarray I copied code from a web-learning page, but it shows the attached error, any suggestions? many thanks !!
You used the wrong variable for the target: cross_entropy_loss(models, target) should be cross_entropy_loss(models, target_tensor), since target is still a NumPy array while target_tensor is the tensor you created from it.
https://stackoverflow.com/questions/69965519/
Scikit learn train_test_split into Pytorch Dataloader
I have a dataset for binary classification with PNGs titled as in the attachment below, where the first 0 or 1 in the title determines its class. They're in a folder called "annotation_class", and I have a small script to separate these: import cv2,glob import numpy as np from sklearn.model_selection import train_test_split filelist = glob.glob('annotation_class'+'/*.png') size_row, size_col = 256, 256 X,y = [],[] for name in filelist: img = cv2.imread(name) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) img = cv2.resize(img,(size_row, size_col)) X.append(img) y.append(int(name.split('\\')[-1].split('_')[1])) x_train, x_test, y_train, y_test= train_test_split(X, y, test_size=0.2, train_size=0.8, random_state=4) The returned splits are all lists. I'm using PyTorch for this project and would like to make a custom Dataset to use with a DataLoader, but I'm not sure how best to include these after I've used train_test_split. Should I scrap that altogether and use something else? I'd like to end up with two DataLoaders, one for training and one for testing.
You don't have to rewrite. You can reuse your core data loading logic inside PyTorch Dataset import cv2,glob import numpy as np from sklearn.model_selection import train_test_split from torch.utils.data import Dataset class MyCoolDataset(Dataset): def __init__(self, dir, train=True): filelist = glob.glob(dir + '/*.png') ... # all your data loading logic using cv2, glob .. x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, train_size=0.8, random_state=4) # two modes - train and test if train: self.x_data, self.y_data = x_train, y_train else: self.x_data, self.y_data = x_test, y_test def __getitem__(self, i): return self.x_data[i], self.y_data[i] Then use a DataLoader as usual dl = DataLoader(MyCoolDataset(...), batch_size=...) for X, Y in dl: pass
https://stackoverflow.com/questions/69967363/
IndexError: Target 11 is out of bounds. cross-entropy
How can I change the attached model to fit my dataset for the Bayesian model? my data include 5 variables and 32 results model = nn.Sequential( bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=5, out_features=100), nn.ReLU(), bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=100, out_features=3), ) cross_entropy_loss = nn.CrossEntropyLoss() klloss = bnn.BKLLoss(reduction='mean', last_layer_only=False) klweight = 0.01 optimizer = optim.Adam(model.parameters(), lr=0.01) Next is to run the model for step in range(3000): models = model(data_tensor) cross_entropy = cross_entropy_loss(models, target_tensor) kl = klloss(model) total_cost = cross_entropy + klweight*kl optimizer.zero_grad() total_cost.backward() optimizer.step() _,predicted = torch.max(models.data, 1) final = target_tensor.size(0) correct = (predicted == target_tensor).sum() print('- Accuracy: %f %%' % (100 * float(correct) / final)) print('- CE : %2.2f, KL : %2.2f' % (cross_entropy.item(), kl.item())) --------------------------------------------------------------------------- IndexError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_19752/2114183600.py in <module> 1 for step in range(4000): 2 models = model(data_tensor) ----> 3 cross_entropy = cross_entropy_loss(models, target_tensor) 4 kl = klloss(model) 5 total_cost = cross_entropy + klweight*kl ~\anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs) 1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used 1104 full_backward_hooks, non_full_backward_hooks = [], [] ~\anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target) 1148 1149 def forward(self, input: Tensor, target: Tensor) -> Tensor: -> 1150 return F.cross_entropy(input, target, weight=self.weight, 1151 ignore_index=self.ignore_index, reduction=self.reduction, 1152 label_smoothing=self.label_smoothing) ~\anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing) 2844 if size_average is not None or reduce is not None: 2845 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2846 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing) 2847 2848 IndexError: Target 11 is out of bounds. 
data_tensor.shape:torch.Size([640, 5]) target_tensor.shape: torch.Size([640]) data properties: data_tensor=torch.from_numpy(data).float() target_tensor=torch.from_numpy(target).float() target_tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]) I am not sure what's wrong with my target_sensor and cross_entropy_Loss. Target sensors are repeat numbers between 0-31, for 20 times
I ran into the same issue: check whether the number of outputs in the final Linear layer of your model equals the number of classes. Here the last BayesLinear has out_features=3, but your targets go up to 31, so you need 32 outputs.
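A minimal sketch of that fix for this question's setup, reusing the bnn import and model from the question and assuming the 32 "results" are the 32 classes:
model = nn.Sequential(
    bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=5, out_features=100),
    nn.ReLU(),
    # one output per class: 32 classes -> out_features=32, not 3
    bnn.BayesLinear(prior_mu=0, prior_sigma=0.1, in_features=100, out_features=32),
)
# note: nn.CrossEntropyLoss also expects integer class targets,
# so the target tensor should be created with .long() rather than .float()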
https://stackoverflow.com/questions/69969094/
Pytorch CNN: Expected input to have 1 channel but got 60000 channels instead
While implementing a NN for the Fashion MNIST dataset, I'm getting the following error: RuntimeError: Given groups=1, weight of size [6, 1, 5, 5], expected input[1, 60000, 28, 28] to have 1 channels, but got 60000 channels instead I'm inferring that 60000 is the length of my entire dataset, but I'm not sure why the algorithm is giving this error. Can someone help me fix this please? My dataset: (X_train, y_train), (X_test, y_test) = fashion_mnist.load_data() train_data = [] test_data = [] train_data.append([X_train, y_train]) test_data.append([X_test, y_test]) trainloader = torch.utils.data.DataLoader(train_data, shuffle=True, batch_size=100) testloader = torch.utils.data.DataLoader(test_data, shuffle=True, batch_size=100) I'm getting the error in the following order (as per the stack trace): 8 #making predictions ----> 9 y_pred = model(images) 32 #first hidden layer ---> 33 x = self.conv1(x) Update 1 Added the line images = images.transpose(0, 1) to transpose the image as pointed out by Ivan, but now I'm getting the error: RuntimeError: expected scalar type Byte but found Float
Your input is shaped (1, 60000, 28, 28), while it should be shaped (60000, 1, 28, 28). You can fix this by transposing the first two axes: >>> x.transpose(0, 1)
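A minimal sketch of how that might look in the question's prediction step, assuming images comes out of the DataLoader shaped (1, 60000, 28, 28) with uint8 pixels; the follow-up "expected scalar type Byte but found Float" error suggests the raw uint8 Keras arrays are being fed to float conv weights, so a float cast is also needed:
images = images.transpose(0, 1)   # (1, 60000, 28, 28) -> (60000, 1, 28, 28)
images = images.float()           # uint8 pixels -> float32, matching the conv weights
y_pred = model(images)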
https://stackoverflow.com/questions/69975400/
What is the correct way to update an input variable during training?
I have an input inp = torch.tensor([1.0]) and a neural network class Model_updater(nn.Module): def __init__(self): super(Model_updater, self).__init__() self.fc1 = nn.Linear(1, 2) self.fc2 = nn.Linear(2, 3) self.fc3 = nn.Linear(3, 2) def forward(self, x): x = torch.relu(self.fc1(x)) x = torch.relu(self.fc2(x)) x = self.fc3(x) return x net_updater = Model_updater() opt_updater = optim.Adam(net_updater.parameters()) I'm trying to update my input using the neural network's output: inp = torch.tensor([1.0]) epochs = 3 for i in range(epochs): opt_updater.zero_grad() inp_copy = inp.detach().clone() mu, sigma = net_updater(inp_copy) dist1 = Normal(mu, torch.abs(sigma)) a = dist1.rsample() inp += a loss = torch.tensor(5.0) - inp loss.backward(retain_graph=True) opt_updater.step() But getting the error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 2]], which is output 0 of TBackward, is at version 2; expected version 1 I also tried changing the loss calculations with loss = torch.tensor(5.0) - inp_copy But got the error RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn I also tried without the retain_graph=True but I get RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time. Which doesn't really makes sense to me because I don't see where I'm calling backward() twice
Most likely, this is what you want inp1 = inp + a # create a separate variable for updated value inp.data = inp1.data # update the value without touching the graph loss = torch.tensor(5.0) - inp1 # use updated value which has gradient
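A sketch of how those three lines might slot into the training loop from the question (everything else kept as in the original; this is an assumption about the intent, not the only possible fix):
inp = torch.tensor([1.0])
for i in range(epochs):
    opt_updater.zero_grad()
    mu, sigma = net_updater(inp.detach().clone())
    dist1 = Normal(mu, torch.abs(sigma))
    a = dist1.rsample()
    inp1 = inp + a            # separate variable that stays in the graph
    inp.data = inp1.data      # update the stored value without touching the graph
    loss = torch.tensor(5.0) - inp1
    loss.backward()           # no retain_graph needed: a fresh graph is built each epoch
    opt_updater.step()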
https://stackoverflow.com/questions/69979036/
How to convert horizontal bounding box to oriented bounding box in object detection task
I have been trying to detect oriented bounding boxes (OBB) with Faster R-CNN for a long time but couldn't make it work. I aim to detect objects in the DOTA dataset. I was using the built-in Faster R-CNN model in PyTorch but realized that it does not support OBB. Then I found another library named detectron2 that is built on the PyTorch framework. The built-in Faster R-CNN network in detectron2 is actually compatible with OBB, but I could not make that model work with DOTA because I could not convert the DOTA box annotations to (cx, cy, w, h, a). In DOTA, objects are annotated by the coordinates of 4 corners, (x1,y1,x2,y2,x3,y3,x4,y4). I can't come up with a solution that converts these 4 coordinates to (cx, cy, w, h, a), where cx and cy are the center point of the OBB and w, h and a are the width, height and angle respectively. Any suggestions?
If you have your boxes in an Nx8 tensor/array, you can conver them to (cx, cy, w, h, a) by doing (assuming first point is top left, second point is bottom left, then bottom right, then top right...): def DOTA_2_OBB(boxes): #calculate the angle of the box using arctan (degrees here) angle = (torch.atan((boxes[:,7] - boxes[:,5])/(boxes[:,6] - boxes[:,4]))*180/np.pi).float() #centrepoint is the mean of adjacent points cx = boxes[:,[4,0]].mean(1) cy = boxes[:,[7,3]].mean(1) #calculate w and h based on the distance between adjacent points w = ((boxes[:,7] - boxes[:,1])**2+(boxes[:,6] - boxes[:,0])**2)**0.5 h = ((boxes[:,1] - boxes[:,3])**2+(boxes[:,0] - boxes[:,2])**2)**0.5 return torch.stack([cx,cy,w,h,angle]).T Then giving this a test... In [40]: boxes = torch.tensor([[0,2,1,0,2,2,1,3],[4,12,8,2,12,12,8,22]]).float() In [43]: DOTA_2_OBB(boxes) Out[43]: tensor([[ 1.0000, 1.5000, 1.4142, 2.2361, -45.0000], [ 8.0000, 12.0000, 10.7703, 10.7703, -68.1986]])
https://stackoverflow.com/questions/69990066/
Reshape tensor in custom order (PyTorch)
I have the following tensor t = torch.tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ,15, 16, 17]) I want to reshape it in the following way: t_reshape = torch.tensor([[0, 1, 2, 6, 7, 8, 12, 13, 14], [3, 4, 5, 9, 10, 11, 15, 16, 17]]) Are there ways to efficiently reshape tensors in that fashion?
You can achieve this by reshaping, transposing and reshaping back: >>> t.reshape(3,2,-1).transpose(0,1).reshape(2,-1) tensor([[ 0, 1, 2, 6, 7, 8, 12, 13, 14], [ 3, 4, 5, 9, 10, 11, 15, 16, 17]])
https://stackoverflow.com/questions/69990831/
PyTorch - RuntimeError: Sizes of tensors must match except in dimension 2. Got 55 and 54 (The offending index is 0)
I used a 3DUnet with resblock to segment a CT image with input torch size of [1, 1, 96, 176, 176], but it throws the following error: RuntimeError: Sizes of tensors must match except in dimension 2. Got 55 and 54 (The offending index is 0) Hence I traced back, I found the error comes from outputs = self.decoder_stage2(torch.cat([short_range6, long_range3], dim=1)) + short_range6 The short_range6 has torch.Size([1, 64, 24, 55, 40]) while the long_range3 has torch.Size([1, 128, 24, 54, 40]). I think this is because something not being a power of 2, but cannot find where to modify. Below is the complete structure of the network, really thanks for any help! class ResUNet(nn.Module): def __init__(self, in_channel=1, out_channel=2 ,training=True): super().__init__() self.training = training self.dorp_rate = 0.2 self.encoder_stage1 = nn.Sequential( nn.Conv3d(in_channel, 16, 3, 1, padding=1), nn.PReLU(16), nn.Conv3d(16, 16, 3, 1, padding=1), nn.PReLU(16), ) self.encoder_stage2 = nn.Sequential( nn.Conv3d(32, 32, 3, 1, padding=1), nn.PReLU(32), nn.Conv3d(32, 32, 3, 1, padding=1), nn.PReLU(32), nn.Conv3d(32, 32, 3, 1, padding=1), nn.PReLU(32), ) self.encoder_stage3 = nn.Sequential( nn.Conv3d(64, 64, 3, 1, padding=1), nn.PReLU(64), nn.Conv3d(64, 64, 3, 1, padding=2, dilation=2), nn.PReLU(64), nn.Conv3d(64, 64, 3, 1, padding=4, dilation=4), nn.PReLU(64), ) self.encoder_stage4 = nn.Sequential( nn.Conv3d(128, 128, 3, 1, padding=3, dilation=3), nn.PReLU(128), nn.Conv3d(128, 128, 3, 1, padding=4, dilation=4), nn.PReLU(128), nn.Conv3d(128, 128, 3, 1, padding=5, dilation=5), nn.PReLU(128), ) self.decoder_stage1 = nn.Sequential( nn.Conv3d(128, 256, 3, 1, padding=1), nn.PReLU(256), nn.Conv3d(256, 256, 3, 1, padding=1), nn.PReLU(256), nn.Conv3d(256, 256, 3, 1, padding=1), nn.PReLU(256), ) self.decoder_stage2 = nn.Sequential( nn.Conv3d(128 + 64, 128, 3, 1, padding=1), nn.PReLU(128), nn.Conv3d(128, 128, 3, 1, padding=1), nn.PReLU(128), nn.Conv3d(128, 128, 3, 1, padding=1), nn.PReLU(128), ) self.decoder_stage3 = nn.Sequential( nn.Conv3d(64 + 32, 64, 3, 1, padding=1), nn.PReLU(64), nn.Conv3d(64, 64, 3, 1, padding=1), nn.PReLU(64), nn.Conv3d(64, 64, 3, 1, padding=1), nn.PReLU(64), ) self.decoder_stage4 = nn.Sequential( nn.Conv3d(32 + 16, 32, 3, 1, padding=1), nn.PReLU(32), nn.Conv3d(32, 32, 3, 1, padding=1), nn.PReLU(32), ) self.down_conv1 = nn.Sequential( nn.Conv3d(16, 32, 2, 2), nn.PReLU(32) ) self.down_conv2 = nn.Sequential( nn.Conv3d(32, 64, 2, 2), nn.PReLU(64) ) self.down_conv3 = nn.Sequential( nn.Conv3d(64, 128, 2, 2), nn.PReLU(128) ) self.down_conv4 = nn.Sequential( nn.Conv3d(128, 256, 3, 1, padding=1), nn.PReLU(256) ) self.up_conv2 = nn.Sequential( nn.ConvTranspose3d(256, 128, 2, 2), nn.PReLU(128) ) self.up_conv3 = nn.Sequential( nn.ConvTranspose3d(128, 64, 2, 2), nn.PReLU(64) ) self.up_conv4 = nn.Sequential( nn.ConvTranspose3d(64, 32, 2, 2), nn.PReLU(32) ) # 256*256 self.map4 = nn.Sequential( nn.Conv3d(32, out_channel, 1, 1), nn.Upsample(scale_factor=(1, 1, 1), mode='trilinear', align_corners=False), nn.Softmax(dim=1) ) # 128*128 self.map3 = nn.Sequential( nn.Conv3d(64, out_channel, 1, 1), nn.Upsample(scale_factor=(2, 2, 2), mode='trilinear', align_corners=False), nn.Softmax(dim=1) ) # 64*64 self.map2 = nn.Sequential( nn.Conv3d(128, out_channel, 1, 1), nn.Upsample(scale_factor=(4, 4, 4), mode='trilinear', align_corners=False), nn.Softmax(dim=1) ) # 32*32 self.map1 = nn.Sequential( nn.Conv3d(256, out_channel, 1, 1), nn.Upsample(scale_factor=(8, 8, 8), mode='trilinear', align_corners=False), nn.Softmax(dim=1) 
) def forward(self, inputs): long_range1 = self.encoder_stage1(inputs) + inputs short_range1 = self.down_conv1(long_range1) long_range2 = self.encoder_stage2(short_range1) + short_range1 long_range2 = F.dropout(long_range2, self.dorp_rate, self.training) short_range2 = self.down_conv2(long_range2) long_range3 = self.encoder_stage3(short_range2) + short_range2 long_range3 = F.dropout(long_range3, self.dorp_rate, self.training) short_range3 = self.down_conv3(long_range3) long_range4 = self.encoder_stage4(short_range3) + short_range3 long_range4 = F.dropout(long_range4, self.dorp_rate, self.training) short_range4 = self.down_conv4(long_range4) outputs = self.decoder_stage1(long_range4) + short_range4 outputs = F.dropout(outputs, self.dorp_rate, self.training) output1 = self.map1(outputs) short_range6 = self.up_conv2(outputs) outputs = self.decoder_stage2(torch.cat([short_range6, long_range3], dim=1)) + short_range6 outputs = F.dropout(outputs, self.dorp_rate, self.training) output2 = self.map2(outputs) short_range7 = self.up_conv3(outputs) outputs = self.decoder_stage3(torch.cat([short_range7, long_range2], dim=1)) + short_range7 outputs = F.dropout(outputs, self.dorp_rate, self.training) output3 = self.map3(outputs) short_range8 = self.up_conv4(outputs) outputs = self.decoder_stage4(torch.cat([short_range8, long_range1], dim=1)) + short_range8 output4 = self.map4(outputs) if self.training is True: return output1, output2, output3, output4 else: return output4```
You can pad your image's dimensions to be multiple of 32's. By doing this, you won't have to change the 3DUnet's parameters. I will provide you a simple code to show you the way. # I assume that you named your input image as img padding1_mult = math.floor(img.shape[3] / 32) + 1 padding2_mult = math.floor(img.shape[4] / 32) + 1 pad1 = (32 * padding1_mult) - img.shape[3] pad2 = (32 * padding2_mult) - img.shape[4] padding = nn.ReplicationPad2d((0, pad2, pad1, 0, 0 ,0)) img = padding(img) After this operation, your image shape must be torch.Size([1, 1, 96, 192, 192])
https://stackoverflow.com/questions/69998250/
Reshaping a PyTorch tensor to 3 dimensions when it is originally 2 dimensions?
I would like to take a PyTorch tensor that I have, originally of shape torch.Size([15000, 23]), and reshape it such that it is compatible to run in a spiking neural network (snnTorch is the framework I am using in PyTorch). The shape of the tensor to input into the SNN should be [time x batch_size x feature_dimensions] (more information on this can be found here). Right now, I am using the following code: # Create data of dimensions [time x batch_size x feature_dimensions] time_steps = 200 batch_size = 1 feature_dimensions = torch_input_tensor.size(dim = 1) torch_input_tensor_reshaped = torch.reshape(torch_input_tensor, (time_steps, batch_size, feature_dimensions)) print(torch_input_tensor_reshaped.size()) print(torch_input_tensor_reshaped) When I run this code, I get the following error: RuntimeError: shape '[200, 1, 23]' is invalid for input of size 345000 I may be using the wrong function to do this, but the idea is that I currently have 15000 data points with 23 input features each. I want to essentially feed in the same data point (23 features, 1 data point) 200 times (200 time steps). In the example provided in the link, they use the following code: spk_in = spikegen.rate_conv(torch.rand((200, 784))).unsqueeze(1) The unsqueeze along dim=1 is there to indicate 'one batch' of data. How can I make my data shape compatible to run in an SNN?
The thing with SNNs is that they are time-varying, so if your data is time-static, then your options are either to: pass the same sample at every time step to the network, or convert it into a spike-train before passing it in. You appear to be going for (2), although (1) might be easier. During training, you would pass the same sample to the network over and over again: for step in range(num_steps): cur1 = self.fc1(x) If your input was time varying, you would have to change x to x[step] to iterate through each time step. An example of this with MNIST is given here. If the above code doesn't help, then it'd be useful to see how you define your network. Try something like: # Define Network class Net(nn.Module): def __init__(self): super().__init__() # Initialize layers self.fc1 = nn.Linear(23, 100) # 23 inputs, 100 hidden neurons self.lif1 = snn.Leaky(beta=0.9) # randomly chose 0.9 self.fc2 = nn.Linear(100, num_outputs) # change num_outputs to your number of classes self.lif2 = snn.Leaky(beta=0.9) def forward(self, x): # Initialize hidden states at t=0 mem1 = self.lif1.init_leaky() mem2 = self.lif2.init_leaky() # Record the final layer spk2_rec = [] mem2_rec = [] for step in range(num_steps): cur1 = self.fc1(x) spk1, mem1 = self.lif1(cur1, mem1) cur2 = self.fc2(spk1) spk2, mem2 = self.lif2(cur2, mem2) spk2_rec.append(spk2) mem2_rec.append(mem2) return torch.stack(spk2_rec, dim=0), torch.stack(mem2_rec, dim=0)
https://stackoverflow.com/questions/70008413/
Using multiprocessing with AllenNLP decoding is sluggish compared to non-multiprocessing case
I'm using the AllenNLP (version 2.6) semantic role labeling model to process a large pile of sentences. My Python version is 3.7.9. I'm on MacOS 11.6.1. My goal is to use multiprocessing.Pool to parallelize the work, but the calls via the pool are taking longer than they do in the parent process, sometimes substantially so. In the parent process, I have explicitly placed the model in shared memory as follows: from allennlp.predictors import Predictor from allennlp.models.archival import load_archive import allennlp_models.structured_prediction.predictors.srl PREDICTOR_PATH = "...<srl model path>..." archive = load_archive(PREDICTOR_PATH) archive.model.share_memory() PREDICTOR = Predictor.from_archive(archive) I know the model is only being loaded once, in the parent process. And I place the model in shared memory whether or not I'm going to make use of the pool. I'm using torch.multiprocessing, as many recommend, and I'm using the spawn start method. I'm calling the predictor in the pool using Pool.apply_async, and I'm timing the calls within the child processes. I know that the pool is using the available CPUs (I have six cores), and I'm nowhere near running out of physical memory, so there's no reason for the child processes to be swapped to disk. Here's what happens, for a batch of 395 sentences: Without multiprocessing: 638 total processing seconds (and elapsed time). With a 4-process pool: 293 seconds elapsed time, 915 total processing seconds. With a 12-process pool: 263 seconds elapsed time, 2024 total processing seconds. The more processes, the worse the total AllenNLP processing time - even though the model is explicitly in shared memory, and the only thing that crosses the process boundary during the invocation is the input text and the output JSON. I've done some profiling, and the first thing that leaps out at me is that the function torch._C._nn.linear is taking significantly longer in the multiprocessing cases. This function takes two tensors as arguments - but there are no tensors being passed across the process boundary, and I'm decoding, not training, so the model should be entirely read-only. It seems like it has to be a problem with locking or competition for the shared model resource, but I don't understand at all why that would be the case. And I'm not a torch programmer, so my understanding of what's happening is limited. Any pointers or suggestions would be appreciated.
Turns out that I wasn't comparing exactly the right things. This thread: https://github.com/allenai/allennlp/discussions/5471 goes into all the detail. Briefly, because pytorch can use additional resources under the hood, my baseline test without multiprocessing wasn't taxing my computer enough when running two instances in parallel; I had to run 4 instances to see the penalty, and in that case, the total processing time was essentially the same for 4 parallel nonmultiprocessing invocations, or one multiprocessing case with 4 subprocesses.
https://stackoverflow.com/questions/70008621/
Map each input element to array in pytorch
I have a (A,B) tensor, and I'm looking for a performant way to map each value from that tensor to an array to create a new tensor of size (A,B,N). Here's a functioning piece of code showing what I'm trying to do. A, B, N = 3, 4, 5 my_old_tensor = torch.ones((A,B), dtype=torch.float32) my_new_tensor = torch.zeros((A, B, N), dtype=torch.float32) for val in range(N): my_new_tensor[:,:,val] = (val - my_old_tensor)/2 My code is currently quite slow, and I think the for-loop is the problem. Is there a more pytorch-performant way of doing this that eliminates the for-loop? I've tried something like this x = torch.arange(0, N, 1, dtype=torch.float32) my_new_tensor = (x - my_old_tensor)/2 but that gives "RuntimeError: The size of tensor a (5) must match the size of tensor b (4) at non-singleton dimension 1" Any help would be appreciated!
Use unsqueeze to broadcast my_old_tensor: my_new_tensor = (torch.arange(N, dtype=torch.float32) - my_old_tensor.unsqueeze(-1))/2
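A quick check of that broadcasting trick against the original loop, using the same names as in the question:
A, B, N = 3, 4, 5
my_old_tensor = torch.ones((A, B), dtype=torch.float32)

# (N,) broadcast against (A, B, 1) gives (A, B, N)
my_new_tensor = (torch.arange(N, dtype=torch.float32) - my_old_tensor.unsqueeze(-1)) / 2

# same result as the original for-loop
reference = torch.zeros((A, B, N), dtype=torch.float32)
for val in range(N):
    reference[:, :, val] = (val - my_old_tensor) / 2
assert torch.equal(my_new_tensor, reference)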
https://stackoverflow.com/questions/70009995/
Using PyTorch tensors with scikit-learn
Can I use PyTorch tensors instead of NumPy arrays while working with scikit-learn? I tried some methods from scikit-learn like train_test_split and StandardScaler, and it seems to work just fine, but is there anything I should know when I'm using PyTorch tensors instead of NumPy arrays? According to this entry in the scikit-learn FAQ, https://scikit-learn.org/stable/faq.html#how-can-i-load-my-own-datasets-into-a-format-usable-by-scikit-learn : "numpy arrays or scipy sparse matrices. Other types that are convertible to numeric arrays such as pandas DataFrame are also acceptable." Does that mean using PyTorch tensors is completely safe?
I don't think PyTorch tensors are directly supported by scikit-learn. But you can always get the underlying NumPy array from a PyTorch tensor with my_nparray = my_tensor.numpy() and then use it with scikit-learn functions.
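A minimal round-trip sketch with hypothetical tensors X and y (note that .numpy() only works on CPU tensors that don't require grad, so detach()/cpu() may be needed first):
import torch
from sklearn.model_selection import train_test_split

X = torch.randn(100, 4)
y = torch.randint(0, 2, (100,))

# tensor -> numpy for scikit-learn
X_np = X.detach().cpu().numpy()
y_np = y.detach().cpu().numpy()
X_train, X_test, y_train, y_test = train_test_split(X_np, y_np, test_size=0.2, random_state=0)

# numpy -> tensor to go back to PyTorch
X_train_t = torch.from_numpy(X_train)
y_train_t = torch.from_numpy(y_train)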
https://stackoverflow.com/questions/70021547/
Installed PyTorch but VS code wont import torch
I have installed PyTorch by just using pip install torch. I also have the correct version of Python installed (I don't have two different versions). When I ran the following in VS Code it returned the correct version, and when I check whether PyTorch is installed with pip it works: import torch print(torch.__version__) But for some reason VS Code doesn't recognise torch when I try to import it, or try to inherit from nn.Module in a class. I just get the errors "Import torch could not be resolved" and "nn is not defined". I'm really confused as to what to do, as I can't find any other people having this issue, and all the PyTorch VS Code examples I look at just install the Python extension and have no issue.
Check whether VS Code is using the same Python interpreter and environment in which PyTorch was installed. Hit Cmd+Shift+P (Ctrl+Shift+P on Windows/Linux), search for Interpreter, click on "Python: Select Interpreter", and choose the correct one.
https://stackoverflow.com/questions/70025033/
Error while using Vgg16 (transfter learning) - RuntimeError: Failed to run torchinfo
I'm trying to use VGG16 with transfer learning, but getting errors: model = torchvision.models.vgg16(pretrained=True) print(model) for param in model.parameters(): param.requires_grad = False input_size = model.classifier[0].in_features model.classifier[0] = nn.Sequential( nn.Linear(input_size, 128), nn.ReLU(), nn.Linear(128, 2)) torchinfo.summary(model, (64, 3, 224, 224)) VGG16: VGG( (features): Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (3): ReLU(inplace=True) (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (6): ReLU(inplace=True) (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (8): ReLU(inplace=True) (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (11): ReLU(inplace=True) (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (13): ReLU(inplace=True) (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (15): ReLU(inplace=True) (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (18): ReLU(inplace=True) (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (20): ReLU(inplace=True) (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (22): ReLU(inplace=True) (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (25): ReLU(inplace=True) (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (27): ReLU(inplace=True) (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (29): ReLU(inplace=True) (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) (avgpool): AdaptiveAvgPool2d(output_size=(7, 7)) (classifier): Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): Linear(in_features=4096, out_features=4096, bias=True) (4): ReLU(inplace=True) (5): Dropout(p=0.5, inplace=False) (6): Linear(in_features=4096, out_features=1000, bias=True) ) ) Error: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 260 if isinstance(x, (list, tuple)): --> 261 _ = model.to(device)(*x, **kwargs) 262 elif isinstance(x, dict): ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1101 or _global_forward_hooks or _global_forward_pre_hooks): -> 1102 return forward_call(*input, **kwargs) 1103 # Do not call functions when jit is used ~/.local/lib/python3.8/site-packages/torchvision/models/vgg.py in forward(self, x) 51 x = torch.flatten(x, 1) ---> 52 x = self.classifier(x) 53 return x ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1119 -> 1120 result = forward_call(*input, **kwargs) 1121 if _global_forward_hooks or self._forward_hooks: ~/.local/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input) 140 
for module in self: --> 141 input = module(input) 142 return input ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1119 -> 1120 result = forward_call(*input, **kwargs) 1121 if _global_forward_hooks or self._forward_hooks: ~/.local/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input) 102 def forward(self, input: Tensor) -> Tensor: --> 103 return F.linear(input, self.weight, self.bias) 104 ~/.local/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias) 1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias) -> 1848 return torch._C._nn.linear(input, weight, bias) 1849 RuntimeError: mat1 and mat2 shapes cannot be multiplied (64x2 and 4096x4096) The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) /tmp/ipykernel_8204/406510959.py in <module> 11 nn.Linear(128, 2)) 12 ---> 13 torchinfo.summary(model, (64, 3, 224, 224)) ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs) 192 input_data, input_size, batch_dim, device, dtypes 193 ) --> 194 summary_list = forward_pass( 195 model, x, batch_dim, cache_forward_pass, device, **kwargs 196 ) ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs) 268 except Exception as e: 269 executed_layers = [layer for layer in summary_list if layer.executed] --> 270 raise RuntimeError( 271 "Failed to run torchinfo. See above stack traces for more details. " 272 f"Executed layers up to: {executed_layers}" RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: [Sequential: 1, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, MaxPool2d: 2, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, MaxPool2d: 2, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, MaxPool2d: 2, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, MaxPool2d: 2, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, Conv2d: 2, ReLU: 2, MaxPool2d: 2, AdaptiveAvgPool2d: 1, Sequential: 2, Linear: 3, ReLU: 3, Linear: 3, ReLU: 2, Dropout: 2] I'm using the following pytorch packages versions: torch==1.10.0 torchinfo==1.5.3 torchvision==0.11.1 What is wrong ? WHat do I need to change in order to use VGG16 (with transfer learning) ?
If you're trying to change the final classifier, you should replace the whole classifier, not only its first layer: with your code, the new classifier[0] outputs 2 features, but the original classifier[3] (Linear(4096, 4096)) still expects 4096 inputs, which is exactly the shape mismatch in the traceback. Replace the whole block instead: model.classifier = nn.Sequential( nn.Linear(input_size, 128), nn.ReLU(), nn.Linear(128, 2))
https://stackoverflow.com/questions/70032269/
Understanding key_dim and num_heads in tf.keras.layers.MultiHeadAttention
For example, I have input with shape (1, 1000, 10) (so, src.shape will be (1, 1000, 10)). Then this works: class Model(tf.keras.Model): def __init__(self): super(Model, self).__init__() self.attention1 = tf.keras.layers.MultiHeadAttention(num_heads=20, key_dim=9) self.dense = tf.keras.layers.Dense(10, activation="softmax") def call(self, src): output = self.attention1(src, src) output = tf.reshape(output, [1, 10000]) output = self.dense(output) return output And so does this: class Model(tf.keras.Model): def __init__(self): super(Model, self).__init__() self.attention1 = tf.keras.layers.MultiHeadAttention(num_heads=123, key_dim=17) self.dense = tf.keras.layers.Dense(10, activation="softmax") def call(self, src): output = self.attention1(src, src) output = tf.reshape(output, [1, 10000]) output = self.dense(output) return output So, this layer works with whatever num_heads and key_dim, but the sequence length (i.e. 1000) should be divisible by num_heads. WHY? Is it a bug? For example, the same code in PyTorch doesn't work. Also, what is key_dim then? Thanks in advance.
There are two dimensions d_k and d_v in the original paper. key_dim corresponds to d_k, which is the size of the key and query dimensions for each head. d_k can be more or less than d_v. d_v = embed_dim/num_head. d_v is the size of the value for each head. In their paper, Vaswani et al. set d_k = d_v. This, however, is not required. Conceptually, you can have d_k << d_v or even d_k >> d_v. In the former, you will have dimensionality reduction for each key/query in each head and in the latter, you will have dimensionality expansion for each key/query in each attention head. The change in dimension is transparently handled in the dimensionality of the weight matrix that is multiplied into each query/key/value.
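A small sketch illustrating that last point (my own example, not from the question): the layer's output keeps the query's embedding size regardless of num_heads or key_dim, because the per-head results are projected back to the query dimension.
import tensorflow as tf

x = tf.random.normal((1, 1000, 10))   # (batch, seq_len, embed_dim)
mha = tf.keras.layers.MultiHeadAttention(num_heads=4, key_dim=17)
out = mha(x, x)                       # self-attention: query = value = x
print(out.shape)                      # (1, 1000, 10), independent of num_heads/key_dim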
https://stackoverflow.com/questions/70034327/
freezing layers in a neural network in pytorch
I have a cascaded neural network whereby the the output of first network become the input of second network. The first neural network is pretrained so I just initialise it with those pretrained weights. However, I want to freeze the first neural network so that when training its only updating weights of the second neural network. How can I do that? My network looks like: ###First network class LambdaBase(nn.Sequential): def __init__(self, fn, *args): super(LambdaBase, self).__init__(*args) self.lambda_func = fn def forward_prepare(self, input): output = [] for module in self._modules.values(): output.append(module(input)) return output if output else input class Lambda(LambdaBase): def forward(self, input): return self.lambda_func(self.forward_prepare(input)) class LambdaMap(LambdaBase): def forward(self, input): return list(map(self.lambda_func,self.forward_prepare(input))) class LambdaReduce(LambdaBase): def forward(self, input): return reduce(self.lambda_func,self.forward_prepare(input)) def get_first_model(load_weights = True): pretrained_model_reloaded_th = nn.Sequential( # Sequential, nn.Conv2d(4,300,(19, 1)), nn.BatchNorm2d(300), nn.ReLU(), nn.MaxPool2d((3, 1),(3, 1)), nn.Conv2d(300,200,(11, 1)), nn.BatchNorm2d(200), nn.ReLU(), nn.MaxPool2d((4, 1),(4, 1)), nn.Conv2d(200,200,(7, 1)), nn.BatchNorm2d(200), nn.ReLU(), nn.MaxPool2d((4, 1),(4, 1)), Lambda(lambda x: x.view(x.size(0),-1)), # Reshape, nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(2000,1000)), # Linear, nn.BatchNorm1d(1000,1e-05,0.1,True),#BatchNorm1d, nn.ReLU(), nn.Dropout(0.3), nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(1000,1000)), # Linear, nn.BatchNorm1d(1000,1e-05,0.1,True),#BatchNorm1d, nn.ReLU(), nn.Dropout(0.3), nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(1000,164)), # Linear, nn.Sigmoid(), ) if load_weights: sd = torch.load('pretrained_model.pth') pretrained_model_reloaded_th.load_state_dict(sd) return pretrained_model_reloaded_th ### second network def next_model_architecture(): next_model = nn.Sequential( nn.Linear(164, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid()) return next_model ### joining two networks def cascading_model(first_model,next_model): network = nn.Sequential(first_model, next_model) return network first_model = get_first_model(load_weights = True) next_model = next_model_architecture() network = cascading_model(first_model,next_model) If I do: first_model = first_model.eval() Will this freeze my first neural network and only update weights of second network during training?
Freezing any parameter is done by setting its .requires_grad to False. Do so by iterating over all parameters of the module you want to freeze: for p in first_model.parameters(): p.requires_grad = False
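As a further sketch (assuming the first_model, next_model and network objects from the question), you can then hand the optimizer only the parameters that still require gradients, so the frozen first network is never updated:
for p in first_model.parameters():
    p.requires_grad = False

# only the second network's parameters remain trainable
optimizer = torch.optim.Adam(
    (p for p in network.parameters() if p.requires_grad), lr=1e-3
)
Note that first_model.eval() alone does not freeze anything: it only switches layers like BatchNorm and Dropout to inference behaviour, while gradients would still be computed and applied without the requires_grad change above.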
https://stackoverflow.com/questions/70035391/
Deploy pytorch .pth model in a python script
After successfully training my yolact model using a custom dataset I'm happy with the inference results outputted by eval.py using this command from anaconda terminal: python eval.py --trained_model=./weights/yolact_plus_resnet50_abrasion_39_10000.pth --config=yolact_resnet_abrasion_config --score_threshold=0.8 --top_k=15 --images=./images:output_images Now I want to run this inference from my own python script instead of using the anaconda terminal. I wanna be able to get the bounding boxes of detections made on webcam frames obtained by this code below. Any idea ? import cv2 src = cv2.VideoCapture(0) while True: ret, frame = src.read() cv2.imshow('frame', frame) key = cv2.waitKey(5) if key == (27): break The eval.py code is here at Yolact repository https://github.com/dbolya/yolact/blob/master/eval.py
I will just write the pseudocode here for you. Step 1: Try loading the model using the lines starting from here and ending here Step 2: Use this function for evaluation. Instead of cv2.imread, you just need to send your frame Step 3: Follow this function to get the bounding boxes. Especially this line. Just trackback the 't' variable and you will get your bounding boxes. Hope it helps. Let me know if you need more clarification.
https://stackoverflow.com/questions/70038604/
How to transform output of neural network and still train?
I have a neural network which outputs output. I want to transform output before the loss and backpropogation happen. Here is my general code: with torch.set_grad_enabled(training): outputs = net(x_batch[:, 0], x_batch[:, 1]) # the prediction of the NN # My issue is here: outputs = transform_torch(outputs) loss = my_loss(outputs, y_batch) if training: scheduler.step() loss.backward() optimizer.step() I have a transformation function which I put my output through: def transform_torch(predictions): torch_dimensions = predictions.size() torch_grad = predictions.grad_fn cuda0 = torch.device('cuda:0') new_tensor = torch.ones(torch_dimensions, dtype=torch.float64, device=cuda0, requires_grad=True) for i in range(int(len(predictions))): a = predictions[i] # with torch.no_grad(): # Note: no training happens if this line is kept in new_tensor[i] = torch.flip(torch.cumsum(torch.flip(a, dims = [0]), dim = 0), dims = [0]) return new_tensor My problem is that I get an error on the next to last line: RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation. Any suggestions? I have already tried using "with torch.no_grad():" (commented), but this results in very poor training and I believe that the gradients don't backpropogate properly after the transformation function. Thanks!
The error is quite correct about what the issue is: when you create a new tensor with requires_grad = True, you create a leaf node in the graph (just like the parameters of a model), and you are not allowed to do in-place operations on it. The solution is simple: you do not need to create new_tensor in advance. It is not supposed to be a leaf node; just create it on the fly new_tensor = [] for i in range(int(len(predictions))): a = predictions[i] new_tensor.append(torch.flip(torch.cumsum(torch.flip(a, dims=[0]), dim=0), dims=[0])) new_tensor = torch.stack(new_tensor, 0) This new_tensor will inherit properties like dtype and device from predictions and will have requires_grad = True already.
https://stackoverflow.com/questions/70040998/
How to change my model's parameters manually?
I have a model: import torch import torch.nn as nn import torch.optim as optim class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.fc1 = nn.Linear(1, 3) self.fc2 = nn.Linear(3, 2) self.fc3 = nn.Linear(2, 1) def forward(self, x): x1 = self.fc1(x) x = torch.relu(x1) x2 = self.fc2(x) x = torch.relu(x2) x3 = self.fc3(x) return x3, x2, x1 net = Model() I'm trying to manually update the parameters with i, j = torch.meshgrid(torch.arange(3), torch.arange(2)) i = i.reshape(-1) j = j.reshape(-1) update = torch.ones(6,1) print(i) print(j) print(update.squeeze()) print(net.fc2.weight[j,i].data) net.fc2.weight[j,i].data += update.squeeze() print(net.fc2.weight[j,i].data) >>> tensor([0, 0, 1, 1, 2, 2]) tensor([0, 1, 0, 1, 0, 1]) tensor([1., 1., 1., 1., 1., 1.]) tensor([-0.0209, -0.3770, 0.4982, -0.2123, -0.2630, -0.5580]) tensor([-0.0209, -0.3770, 0.4982, -0.2123, -0.2630, -0.5580]) But nothing seems to change. However, if I do print(net.fc2.weight[1].data) net.fc2.weight[1].data += 1 print(net.fc2.weight[1].data) >>> tensor([-0.3770, -0.2123, -0.5580]) tensor([0.6230, 0.7877, 0.4420]) They do change. What am I doing wrong in the first approach and how can I make it work?
The point you are missing is simple: when you do a "constant indexing", you get a "view" of the tensor, otherwise (i.e. indexing with another tensor) you get a new tensor or a new node in the computation graph. PyTorch provides a .data_ptr() method to peek into the underlying memory pointer. >> net.fc2.weight.data.data_ptr() 2911054070464 >> net.fc2.weight[1].data.data_ptr() 2911054070464 Constant indexing did not change the underlying raw data. However, indexing with a tensor creates a new node and hence a new underlying raw memory location >> net.fc2.weight[j, i].data.data_ptr() 2911054068672 So, in your case, you are creating a new tensor/node with net.fc2.weight[j,i] and assigning new value to it. That's why your original tensor remains unchanged. In the constant indexing case, you are changing the same memory location, hence the change is reflected. To fix your problem, instead of doing this net.fc2.weight[j,i].data += update.squeeze() do this net.fc2.weight.data[j,i] += update.squeeze() .. essentially grabbing the underlying .data first and then indexing it, which means the indexing operation is entirely out of autograd's tracking machinery.
https://stackoverflow.com/questions/70042425/
How to index tensor with high dimensional tensor
I have two tensors, x for values and y for indexing. x.shape and y.shape are the same except for the last dimension. For example: x=torch.tensor([[1, 6, 7, 5, 6], [8, 6, 7, 8, 4], [2, 8, 3, 5, 6]]) # x.shape:(3,5) y=torch.tensor([[1, 2], [2, 3], [2, 2]]) # y.shape:(3,2) Is there a simple way to slice it, x[y], so that the result is: torch.tensor([[6,7],[7,8],[3,3]]) What if x and y are higher-dimensional tensors: x.shape=(a,b,c,d) y.shape=(a,b,c,e) # a, b, c, d, e are positive integers
That's exactly what torch.gather does. >> torch.gather(x, -1, y) # index along the last dimension tensor([[6, 7], [7, 8], [3, 3]]) The exact same command works with your high-dimensional case as well.
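For the higher-dimensional case, a quick sketch with made-up sizes (torch.gather only requires the index tensor to match x in every dimension except the one being gathered over):
import torch

a, b, c, d, e = 2, 3, 4, 5, 2
x = torch.randn(a, b, c, d)
y = torch.randint(0, d, (a, b, c, e))   # integer indices into the last dimension

out = torch.gather(x, -1, y)
print(out.shape)                        # torch.Size([2, 3, 4, 2])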
https://stackoverflow.com/questions/70046675/
Calling a custom module in PyTorch with two parameters
I've tried to create three custom modules as follows: import torch class VerySimple(torch.nn.Module): def __init__(self): super(VerySimple, self).__init__() def forward(self, x): return x * 3.0 class VerySimple2(torch.nn.Module): def __init__(self): super(VerySimple, self).__init__() def forward(self, x, y): return x * y * 3.0 After that I created two very simple networks as such: vs = VerySimple() vs2 = VerySimple2() print(vs(2.0)) print(vs2(2.0, 3.0)) The examples work as I expect when I call this which outputs 6.0 and 18.0 Now I try to create something a little more interesting like so: class Simple2(torch.nn.Module): def __init__(self): super(Simple2, self).__init__() self.model1 = torch.nn.Sequential( torch.nn.Linear(1, 3), torch.nn.ReLU(), torch.nn.Linear(3, 1) ) self.model2 = torch.nn.Sequential( torch.nn.Linear(1, 3), torch.nn.ReLU(), torch.nn.Linear(3, 1) ) def forward(self, x, y): x1 = self.model1(x) y2 = self.model2(y) return torch.cat((x1,y2),1) But now when I get an "AttributeError" with the code below: s2 = Simple2() s2(2,3) What am I doing wrong with the s2(2,3)? Alternatively: What is the minimal working example with s2(2,3)? As requested I add the full log here: 6.0 18.0 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-3-f3b0dc51220a> in <module> 43 44 s2 = Simple2() ---> 45 s2(2,3) /opt/app-root/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), <ipython-input-3-f3b0dc51220a> in forward(self, x, y) 38 39 def forward(self, x, y): ---> 40 x1 = self.model1(x) 41 y2 = self.model2(y) 42 return torch.cat((x1,y1),1) /opt/app-root/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /opt/app-root/lib/python3.6/site-packages/torch/nn/modules/container.py in forward(self, input) 115 def forward(self, input): 116 for module in self: --> 117 input = module(input) 118 return input 119 /opt/app-root/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /opt/app-root/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input) 91 92 def forward(self, input: Tensor) -> Tensor: ---> 93 return F.linear(input, self.weight, self.bias) 94 95 def extra_repr(self) -> str: /opt/app-root/lib/python3.6/site-packages/torch/nn/functional.py in linear(input, weight, bias) 1686 if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops): 1687 return handle_torch_function(linear, tens_ops, input, weight, bias=bias) -> 1688 if input.dim() == 2 and bias is not None: 1689 # fused op is marginally faster 1690 ret = torch.addmm(bias, input, weight.t()) AttributeError: 'int' object has no attribute 'dim' I tried the example with tensors below from Tamir as such: x = torch.tensor([2.0]) y = torch.tensor([3.0]) s2(x,y) But I end up with this error instead: --------------------------------------------------------------------------- IndexError 
Traceback (most recent call last) <ipython-input-8-c61a0803c9b9> in <module> 43 44 s2 = Simple2() ---> 45 s2(torch.tensor([2.0]), torch.tensor([3.0])) 46 /opt/app-root/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), <ipython-input-8-c61a0803c9b9> in forward(self, x, y) 40 x1 = self.model1(x) 41 y2 = self.model2(y) ---> 42 return torch.cat((x1,y2),1) 43 44 s2 = Simple2() IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) Last NOTE: I had to modify the Simple2 example to this instead to get it to work with Tamir's solution: class Simple2(torch.nn.Module): def __init__(self): super(Simple2, self).__init__() self.model1 = torch.nn.Sequential( torch.nn.Linear(1, 3), torch.nn.ReLU(), torch.nn.Linear(3, 1) ) self.model2 = torch.nn.Sequential( torch.nn.Linear(1, 3), torch.nn.ReLU(), torch.nn.Linear(3, 1) ) def forward(self, x, y): x1 = self.model1(x) y2 = self.model2(y) # replaced this with row below: return torch.cat((x1,y2),1) return x1 + y2
This is probably a type issue: PyTorch Linear and ReLU layers expect float tensors as inputs and you are passing integers. Do something like x = torch.tensor([2.0]) y = torch.tensor([3.0]) s2(x, y)
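Given the follow-up IndexError in the question, a minimal sketch that also gives the inputs a batch dimension, so each nn.Linear(1, 3) sees shape (batch, 1) and torch.cat((x1, y2), 1) has a dimension 1 to concatenate along (this is my assumption about the intended usage, not the only possible fix):
x = torch.tensor([[2.0]])   # shape (1, 1): one sample, one feature
y = torch.tensor([[3.0]])
out = s2(x, y)
print(out.shape)            # torch.Size([1, 2])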
https://stackoverflow.com/questions/70046799/
Error: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
I am trying to train a DNN model using PyTorch, and I want to use a GPU to train my model. I am able to successfully copy my model to the GPU using model.to(device), where device = cuda:0. However, the standard methods for copying the input to the GPU, that is, X.to(device) and X.cuda(), do not give me the desired output, and I still get RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same. Following is the method I am currently implementing: def train_loop(self, dataloader, device): size = len(dataloader.dataset) for batch, (X, y) in enumerate(dataloader): # Compute prediction and loss print(device) X.to(device) print(X.is_cuda) y.to(device) pred = self.model(X) loss = self.loss_fn(pred, y) On printing the device value with print(device) it shows cuda:0, but when I run print(X.is_cuda) it returns False. (Screenshot attached below.) Please let me know where I am going wrong. Thank you!
X.to(device) on its own does nothing: .to() returns a new tensor rather than moving X in place, so you have to reassign it. Change it to: X = X.to(device) Of course, this should be done for any parameter/variable you want on the GPU.
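Applied to the train_loop from the question, the fix would look roughly like this (same names as above):
def train_loop(self, dataloader, device):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        X = X.to(device)   # reassign: .to() returns a new tensor on the GPU
        y = y.to(device)
        pred = self.model(X)
        loss = self.loss_fn(pred, y)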
https://stackoverflow.com/questions/70047616/
RuntimeError due to inplace operation in GAN generator architecture with skip connections
I get the following error for a GAN model I am using to perform image colorization. It uses the LAB color space as is common in image colorization. The generator generates the a ad b channels for a given L channel. The discriminator is fed all three channels after concatenation. RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [64, 64, 128, 128]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). I believe the error is due to the skip connections but I cannot quite put my finger on it. Any help would be appreciated! Here is the model: class NetGen(nn.Module): '''Generator''' def __init__(self): super(NetGen, self).__init__() self.conv1 = nn.Conv2d(1, 64, 3, stride=2, padding=1, bias=False) self.bnorm1 = nn.BatchNorm2d(64) self.relu1 = nn.LeakyReLU(0.1) self.conv2 = nn.Conv2d(64, 128, 3, stride=2, padding=1, bias=False) self.bnorm2 = nn.BatchNorm2d(128) self.relu2 = nn.LeakyReLU(0.1) self.conv3 = nn.Conv2d(128, 256, 3, stride=2, padding=1, bias=False) self.bnorm3 = nn.BatchNorm2d(256) self.relu3 = nn.LeakyReLU(0.1) self.conv4 = nn.Conv2d(256, 512, 3, stride=2, padding=1, bias=False) self.bnorm4 = nn.BatchNorm2d(512) self.relu4 = nn.LeakyReLU(0.1) self.conv5 = nn.Conv2d(512, 512, 3, stride=2, padding=1, bias=False) self.bnorm5 = nn.BatchNorm2d(512) self.relu5 = nn.LeakyReLU(0.1) self.deconv6 = nn.ConvTranspose2d(512, 512, 3, stride=2, padding=1, output_padding=1, bias=False) self.bnorm6 = nn.BatchNorm2d(512) self.relu6 = nn.ReLU() self.deconv7 = nn.ConvTranspose2d(512, 256, 3, stride=2, padding=1, output_padding=1, bias=False) self.bnorm7 = nn.BatchNorm2d(256) self.relu7 = nn.ReLU() self.deconv8 = nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1, bias=False) self.bnorm8 = nn.BatchNorm2d(128) self.relu8 = nn.ReLU() self.deconv9 = nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1, bias=False) self.bnorm9 = nn.BatchNorm2d(64) self.relu9 = nn.ReLU() self.deconv10 = nn.ConvTranspose2d(64, 2, 3, stride=2, padding=1, output_padding=1, bias=False) self.tanh = nn.Tanh() def forward(self, x): h = x h = self.conv1(h) h = self.bnorm1(h) h = self.relu1(h) pool1 = h h = self.conv2(h) h = self.bnorm2(h) h = self.relu2(h) pool2 = h h = self.conv3(h) h = self.bnorm3(h) h = self.relu3(h) pool3 = h h = self.conv4(h) h = self.bnorm4(h) h = self.relu4(h) pool4 = h h = self.conv5(h) h = self.bnorm5(h) h = self.relu5(h) h = self.deconv6(h) h = self.bnorm6(h) h = self.relu6(h) h += pool4 h = self.deconv7(h) h = self.bnorm7(h) h = self.relu7(h) h += pool3 h = self.deconv8(h) h = self.bnorm8(h) h = self.relu8(h) h += pool2 h = self.deconv9(h) h = self.bnorm9(h) h = self.relu9(h) h += pool1 h = self.deconv10(h) h = self.tanh(h) return h class NetDis(nn.Module): '''Discriminator''' def __init__(self): super(NetDis, self).__init__() self.main = nn.Sequential( nn.Conv2d(3, 64, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(64), nn.LeakyReLU(0.1), nn.Conv2d(64, 128, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(128), nn.LeakyReLU(0.1), nn.Conv2d(128, 256, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(256), nn.LeakyReLU(0.1), nn.Conv2d(256, 512, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(512), nn.LeakyReLU(0.1), nn.Conv2d(512, 512, 3, stride=2, padding=1, bias=False), nn.BatchNorm2d(512), nn.LeakyReLU(0.1), 
nn.Conv2d(512, 512, 8, stride=1, padding=0, bias=False), nn.BatchNorm2d(512), nn.LeakyReLU(0.1), nn.Conv2d(512, 1, 1, stride=1, padding=0, bias=False), nn.Sigmoid() ) def forward(self, x): return self.main(x) Here is the weight init function: def weights_init(m): classname = m.__class__.__name__ if classname.find('Conv') != -1: nn.init.normal_(m.weight.data, 0.0, 0.02) elif classname.find('BatchNorm') != -1: nn.init.normal_(m.weight.data, 1.0, 0.02) nn.init.constant_(m.bias.data, 0) Here is the training and validation code: class Trainer: def __init__(self, epochs, batch_size, learning_rate, num_workers): self.epochs = epochs self.batch_size = batch_size self.learning_rate = learning_rate self.num_workers = num_workers self.train_paths = train_paths self.val_paths = val_paths self.real_label = 1 self.fake_label = 0 def train(self): train_dataset = ColorizeData(paths=self.train_paths) train_dataloader = DataLoader(train_dataset, batch_size=self.batch_size, num_workers=self.num_workers,pin_memory=True, drop_last = True) # Model model_G = NetGen().to(device) model_D = NetDis().to(device) model_G.apply(weights_init) model_D.apply(weights_init) optimizer_G = torch.optim.Adam(model_G.parameters(), lr=self.learning_rate, betas=(0.5, 0.999), eps=1e-8, weight_decay=0) optimizer_D = torch.optim.Adam(model_D.parameters(), lr=self.learning_rate, betas=(0.5, 0.999), eps=1e-8, weight_decay=0) criterion = nn.BCELoss() L1 = nn.L1Loss() model_G.train() model_D.train() # train loop for epoch in range(self.epochs): print("Starting Training Epoch " + str(epoch + 1)) for i, data in enumerate(tqdm(train_dataloader)): inputs, input_ab, input_l = data inputs = inputs.to(device) input_ab = input_ab.to(device) input_l = input_l.to(device) model_D.zero_grad() label = torch.full((self.batch_size,), self.real_label, dtype=torch.float, device=device) output = model_D(torch.cat([input_l, input_ab], dim=1)) errD_real = criterion(torch.squeeze(output), label) errD_real.backward() fake = model_G(input_l) label.fill_(self.fake_label) output = model_D(torch.cat([input_l, fake.detach()], dim=1)) errD_fake = criterion(torch.squeeze(output), label) errD_fake.backward() errD = errD_real + errD_fake optimizer_D.step() model_G.zero_grad() label.fill_(self.real_label) output = model_D(torch.cat([input_l, fake], dim=1)) errG = criterion(torch.squeeze(output), label) errG_L1 = L1(fake.view(fake.size(0),-1), input_ab.view(input_ab.size(0),-1)) errG = errG + 100 * errG_L1 errG.backward() optimizer_G.step() print(f'Training: Epoch {epoch + 1} \t\t Discriminator Loss: {\ errD / len(train_dataloader)} \t\t Generator Loss: {\ errG / len(train_dataloader)}') if (epoch + 1) % 1 == 0: errD_val, errG_val, val_len = self.validate(model_D, model_G, criterion, L1) print(f'Validation: Epoch {epoch + 1} \t\t Discriminator Loss: {\ errD_val / val_len} \t\t Generator Loss: {\ errG_val / val_len}') torch.save(model_G.state_dict(), '../Results/Model_GAN/Generator/saved_model_' + str(epoch + 1) + '.pth') torch.save(model_D.state_dict(), '../Results/Model_GAN/Discriminator/saved_model_' + str(epoch + 1) + '.pth') def validate(self, model_D, model_G, criterion, L1): model_G.eval() model_D.eval() with torch.no_grad(): valid_loss = 0.0 val_dataset = ColorizeData(paths=self.val_paths) val_dataloader = DataLoader(val_dataset, batch_size=self.batch_size, num_workers=self.num_workers, pin_memory=True, drop_last = True) for i, data in enumerate(val_dataloader): inputs, input_ab, input_l = data inputs = inputs.to(device) input_ab = input_ab.to(device) input_l = 
input_l.to(device) label = torch.full((self.batch_size,), self.real_label, dtype=torch.float, device=device) output = model_D(torch.cat([input_l, input_ab], dim=1)) errD_real = criterion(torch.squeeze(output), label) fake = model_G(input_l) label.fill_(self.fake_label) output = model_D(torch.cat([input_l, fake.detach()], dim=1)) errD_fake = criterion(torch.squeeze(output), label) errD = errD_real + errD_fake label.fill_(self.real_label) output = model_D(torch.cat([input_l, fake], dim=1)) errG = criterion(torch.squeeze(output), label) errG_L1 = L1(fake.view(fake.size(0),-1), input_ab.view(input_ab.size(0),-1)) errG = errG + 100 * errG_L1 return errD, errG, len(val_dataloader) EDIT As suggested by @manaclan here is the code I use to run the pipeline: trainer = Trainer(epochs = 100, batch_size = 64, learning_rate = 0.0002, num_workers = 2) trainer.train() Here is the data loader: class ColorizeData(Dataset): def __init__(self, paths): self.input_transform = T.Compose([T.ToTensor(), T.Resize(size=(256,256)), T.Grayscale(), T.Normalize((0.5), (0.5)) ]) self.lab_transform = T.Compose([T.ToTensor(), T.Resize(size=(256,256)), T.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) ]) self.paths = paths def __len__(self) -> int: return len(self.paths) def __getitem__(self, index: int) -> Tuple[torch.Tensor, torch.Tensor]: image = Image.open(self.paths[index]).convert("RGB") input_image = self.input_transform(image) image_lab = rgb2lab(image) image_lab = self.lab_transform(image_lab) image_l = image_lab[0, :, :] image_ab = image_lab[1:3, :, :] return (input_image.float(), image_ab.float(), image_l.float().reshape(1, 256, 256)) Here are the imports: from typing import Tuple from torch.utils.data import Dataset, DataLoader import torchvision.transforms as T import torch import numpy as np import os import torch.nn as nn import torchvision.models as models import torchvision import torch.nn.functional as functional device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") from PIL import Image import glob import matplotlib.pyplot as plt from tqdm import tqdm from skimage.color import lab2rgb, rgb2lab, rgb2gray from skimage import io from torchvision.transforms.functional import resize To reproduce the error, just use any dataset of color images. I have the following code to get my train, test, and validation images from the folder "Dataset": path = "../Dataset/" paths = np.array(glob.glob(path + "/*.jpg")) rand_indices = np.random.permutation(len(paths)) # Number of images in dataset train_indices, val_indices, test_indices = rand_indices[:3600], rand_indices[3600:4000], rand_indices[4000:] train_paths = paths[train_indices] val_paths = paths[val_indices] test_paths = paths[test_indices] NOTE: I am using Google Colab, maybe this might be a potential problem? Also, I am using torch version 1.10.0+cu111. I did use a sequential model without skip connections for the generator before this, and I did not have this error then.
So apparently, the problem is the in-place skip connection written as h += poolX. Writing this update out of place as h = h + poolX fixed it. The intermediate tensor h is saved by autograd for the backward pass of some layers, so modifying it in place invalidates the saved activations and triggers the error.
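For completeness, here is a small self-contained sketch (toy layers, not the model from the question) that reproduces the same class of error and shows why an in-place add on a saved ReLU output breaks the backward pass:

    import torch
    import torch.nn as nn

    lin = nn.Linear(4, 4)
    relu = nn.ReLU()

    x = torch.randn(2, 4, requires_grad=True)
    h = relu(lin(x))          # autograd saves this ReLU output for the backward pass
    skip = torch.ones_like(h)

    h += skip                 # in-place add bumps the tensor's version counter
    h.sum().backward()        # RuntimeError: ... output 0 of ReluBackward0 ... modified by an inplace operation

Replacing h += skip with h = h + skip creates a new tensor instead of overwriting the saved one, and the backward pass goes through, which is exactly the h = h + poolX change in the generator.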
https://stackoverflow.com/questions/70051776/
How to clear GPU memory after using model?
I'm trying to free up GPU memory after I'm finished using the model. I checked nvidia-smi before creating and training the model: 402MiB / 7973MiB After creating and training the model, I checked the GPU memory status again with nvidia-smi: 7801MiB / 7973MiB Now I tried to free up GPU memory with: del model torch.cuda.empty_cache() gc.collect() and checked the GPU memory again: 2361MiB / 7973MiB As you can see, not all the GPU memory was released (I expected to get ~400MiB / 7973MiB). I can only release the GPU memory via the terminal (sudo fuser -v /dev/nvidia* and kill pid). Is there a way to free up the GPU memory after I'm done using the model?
This happens because pytorch reserves the gpu memory for fast memory allocation. To learn more about it, see pytorch memory management. To solve this issue, you can use the following code: from numba import cuda cuda.select_device(your_gpu_id) cuda.close() However, this comes with a catch. It closes the GPU completely. So, you can't start training again without restarting everything.
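As a side note, to see how much of the usage nvidia-smi reports is just PyTorch's cache versus memory held by live tensors, you can query the caching allocator directly (small sketch, not part of the original answer):

    import torch

    print(torch.cuda.memory_allocated() / 1024**2, "MiB held by live tensors")
    print(torch.cuda.memory_reserved() / 1024**2, "MiB reserved by the caching allocator")

torch.cuda.empty_cache() can only hand back the reserved-but-unallocated part; memory held by live tensors or by the CUDA context itself stays until the owning objects are gone or the process exits.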
https://stackoverflow.com/questions/70051984/
How unfold operation works in pytorch with dilation and stride?
In my case I am applying this unfold operation on a tensor A as given below: A.shape=torch.Size([16, 309, 128]) A = A.unsqueeze(1) # I guess that's for making it 4-dim for the unfold operation A_out = F.unfold(A, (7, 128), stride=(1,128), dilation=(3,1)) A_out.shape=torch.Size([16, 896, 291]) I am not getting where this 291 comes from. If the dilation factor is not there, it would be [16, 896, 303], right? But if dilation=3, how does it become 291? Also, stride is not mentioned here so the default is 1, but what if it is also specified, like 4? Please guide.
Also here stride is not mentioned so default is 1 but what if it is also mentioned like 4. Your code already has stride=(1,128). If stride is only set to 4 it will be used like (4,4) in this case. This can be easily verified with formula below. If the dilation factor is not there, it would be [16,896,303] right? Yes. Example below. But if dialtion=3 then it's 291 how? Following the formula given in pytorch docs it comes to 291. After doing A.unsqueeze(1) the shape becomes, [16, 1, 309, 128]. Here, N=16, C=1, H=309, W=128. The output dimension is, (N, C * product(kernel_size), L). With kernel_size=(7,128) So this becomes, (16, 1 * 7 * 128, L) = (16, 896, L). L can be calculated using the formula below with multiplication over each dimension. L = d3 * d4 Over height dimension spatial_size[3] = 309, padding[3] = 0 default, dilation[3] = 3, kernel_size[3] = 7, stride[3] = 1. d3 = (309 + 2 * 0 - 3 * (7 - 1) - 1) / 1 + 1 = 291 Over width dimension spatial_size[4] = 128, padding[4] = 0 default, dilation[4] = 1, kernel_size[4] = 128, stride[4] = 128. d4 = (128 + 2 * 0 - 1 * (128 - 1) - 1) / 128 + 1 = 1 So, using above formula L becomes 291. Code import torch from torch.nn import functional as F A = torch.randn([16, 309,128]) print(A.shape) A = A.unsqueeze(1) print(A.shape) A_out= F.unfold(A, kernel_size=(7, 128), stride=(1,128),dilation=(3,1)) print(A_out.shape) Output torch.Size([16, 309, 128]) torch.Size([16, 1, 309, 128]) torch.Size([16, 896, 291]) Links https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html https://pytorch.org/docs/stable/generated/torch.nn.functional.unfold.html
https://stackoverflow.com/questions/70053242/
How can I make a filter in pytorch conv2d
I am really new to PyTorch, and I've been writing convolution code myself. To apply convolution to input data, I use conv2d. In the documentation: torch.nn.Conv2d(in_channels, out_channels, kernel_size ...) But where is the filter? To convolve, we apply a kernel to the input data. But there is only a kernel size, not the elements of the kernel. For example, given 5x5 input data and a 2x2 kernel whose 4 elements are all 1, I can produce a 4x4 output. So where can I put the elements of the kernel?
The filter weights can be accessed using the weight parameter of the Conv2d object. For example, >>> c = torch.nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3) >>> c.weight Parameter containing: tensor([[[[ 0.2156, 0.0930, -0.2319], [ 0.1333, -0.0846, 0.1848], [ 0.0765, -0.1799, -0.1273]], [[ 0.1173, 0.1650, -0.0876], [-0.1353, 0.0616, -0.1136], [-0.2326, -0.1509, 0.0651]]], [[[-0.2026, 0.2210, 0.0409], [-0.0818, 0.0793, 0.1074], [-0.1430, -0.0118, -0.2100]], [[-0.2025, -0.0508, -0.1731], [ 0.0217, -0.1616, 0.0702], [ 0.1903, -0.1864, 0.1523]]]], requires_grad=True) The weights are initialized by default by sampling from a uniform distribution. You can also initialize weights using various weight initialization schemes. If you want to manually change the weights, you can do it by modifying the weight parameter directly. For example, to set all the weights to 1, use, >>> c.weight.data = torch.ones_like(c.weight) >>> c.weight Parameter containing: tensor([[[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]], [[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]], [[[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]], [[1., 1., 1.], [1., 1., 1.], [1., 1., 1.]]]], requires_grad=True) Note that during training, the convolutional layers are typically a part of the computational graph, and their weights get automatically updated when making a backward() call.
https://stackoverflow.com/questions/70054739/
PyTorch Tensor broadcasting
I'm trying to figure out how to do the following broadcast: I have two tensors, of sizes (n1,N) and (n2,N). What I want to do is multiply each row of the first tensor with each row of the second tensor, and then sum each multiplied row result, so that my final tensor has shape (n1,n2). I tried this: x1*torch.reshape(x2,(x2.size(dim=0),x2.size(dim=1),1)) But obviously this doesn't work. I can't figure out how to do this.
What you describe seems to be effectively the same as performing a matrix multiplication between the first tensor and the transpose of the second tensor. This can be done as: torch.matmul(x1, x2.T)
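For example, a quick self-contained check (arbitrary sizes) that the matrix product matches the multiply-each-row-and-sum operation described in the question:

    import torch

    n1, n2, N = 4, 3, 5
    x1 = torch.randn(n1, N)
    x2 = torch.randn(n2, N)

    out = torch.matmul(x1, x2.T)                        # shape (n1, n2)
    ref = (x1.unsqueeze(1) * x2.unsqueeze(0)).sum(-1)   # explicit broadcast-and-sum
    print(out.shape, torch.allclose(out, ref, atol=1e-6))   # torch.Size([4, 3]) True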
https://stackoverflow.com/questions/70055111/
A question about validation process in PyTorch: val_loss lower than train_loss
Is it possible that running my deep-learning, at some point during training it happens that my validation loss becomes lower than my training loss? I'm attaching my code for the training process: def train_model(model, train_loader,val_loader,lr): "Model training" epochs=100 model.train() train_losses = [] val_losses = [] criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=1e-5) #Reduce learning rate if no improvement is observed after 10 Epochs. scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, 'min', patience=2, verbose=True) for epoch in range(epochs): for data in train_loader: y_pred = model.forward(data) loss1 = criterion(y_pred[:, 0], data.y[0]) loss2 = criterion(y_pred[:,1], data.y[1]) train_loss = 0.8*loss1+0.2*loss2 optimizer.zero_grad() train_loss.backward() optimizer.step() train_losses.append(train_loss.detach().numpy()) with torch.no_grad(): for data in val_loader: y_val = model.forward(data) loss1 = criterion(y_val[:,0], data.y[0]) loss2 = criterion(y_val[:,1], data.y[1]) val_loss = 0.8*loss1+0.2*loss2 #scheduler.step(loss) val_losses.append(val_loss.detach().numpy()) print(f'Epoch: {epoch}, train_loss: {train_losses[epoch]:.3f} , val_loss: {val_losses[epoch]:.3f}') return train_losses, val_losses It is a multi-task model where I compute separately the two losses and then consider a weighted sum. What I'm not sure about is the indentation of val_loss that may cause some issues when printing out. In general I would say that I have some perplexities about validation: 1)First I pass all the batches that I have in my train_loader and adjust the training loss. 2)Then, I start iterating over my val_loader to make predictions on single batches of unseen data, but what I append in the val_losses list is the validation loss computed by the model on the last batch inside val_loader. I'm not sure if this is correct. 
I'm attaching the printed train and val losses during training: Epoch: 0, train_loss: 7.315 , val_loss: 7.027 Epoch: 1, train_loss: 7.227 , val_loss: 6.943 Epoch: 2, train_loss: 7.129 , val_loss: 6.847 Epoch: 3, train_loss: 7.021 , val_loss: 6.741 Epoch: 4, train_loss: 6.901 , val_loss: 6.624 Epoch: 5, train_loss: 6.769 , val_loss: 6.493 Epoch: 6, train_loss: 6.620 , val_loss: 6.347 Epoch: 7, train_loss: 6.452 , val_loss: 6.182 Epoch: 8, train_loss: 6.263 , val_loss: 5.996 Epoch: 9, train_loss: 6.051 , val_loss: 5.788 Epoch: 10, train_loss: 5.814 , val_loss: 5.555 Epoch: 11, train_loss: 5.552 , val_loss: 5.298 Epoch: 12, train_loss: 5.270 , val_loss: 5.022 Epoch: 13, train_loss: 4.972 , val_loss: 4.731 Epoch: 14, train_loss: 4.666 , val_loss: 4.431 Epoch: 15, train_loss: 4.357 , val_loss: 4.129 Epoch: 16, train_loss: 4.049 , val_loss: 3.828 Epoch: 17, train_loss: 3.752 , val_loss: 3.539 Epoch: 18, train_loss: 3.474 , val_loss: 3.269 Epoch: 19, train_loss: 3.220 , val_loss: 3.023 Epoch: 20, train_loss: 2.992 , val_loss: 2.803 Epoch: 21, train_loss: 2.793 , val_loss: 2.613 Epoch: 22, train_loss: 2.626 , val_loss: 2.453 Epoch: 23, train_loss: 2.488 , val_loss: 2.323 Epoch: 24, train_loss: 2.378 , val_loss: 2.220 Epoch: 25, train_loss: 2.290 , val_loss: 2.140 Epoch: 26, train_loss: 2.221 , val_loss: 2.078 Epoch: 27, train_loss: 2.166 , val_loss: 2.029 Epoch: 28, train_loss: 2.121 , val_loss: 1.991 Epoch: 29, train_loss: 2.084 , val_loss: 1.959 Epoch: 30, train_loss: 2.051 , val_loss: 1.932 Epoch: 31, train_loss: 2.022 , val_loss: 1.909 Epoch: 32, train_loss: 1.995 , val_loss: 1.887 Epoch: 33, train_loss: 1.970 , val_loss: 1.867 Epoch: 34, train_loss: 1.947 , val_loss: 1.849 Epoch: 35, train_loss: 1.924 , val_loss: 1.831 Epoch: 36, train_loss: 1.902 , val_loss: 1.815 Epoch: 37, train_loss: 1.880 , val_loss: 1.799 Epoch: 38, train_loss: 1.859 , val_loss: 1.783 Epoch: 39, train_loss: 1.839 , val_loss: 1.769 Epoch: 40, train_loss: 1.820 , val_loss: 1.755 Epoch: 41, train_loss: 1.800 , val_loss: 1.742 Epoch: 42, train_loss: 1.781 , val_loss: 1.730 Epoch: 43, train_loss: 1.763 , val_loss: 1.717 Epoch: 44, train_loss: 1.744 , val_loss: 1.705 Epoch: 45, train_loss: 1.726 , val_loss: 1.694 Epoch: 46, train_loss: 1.708 , val_loss: 1.683 ... So I have the suspect that I'm messing up with indentation..
Validation loss can be lower than the training loss. As you mentioned in point 2, you are only storing/appending the train and validation loss on the last batch. This may not be what you want, and you may want to store the training loss at each iteration and look at its average value at the end. That would give you a better idea of training progress, as it would be the loss over the whole data rather than over a single batch.
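A sketch of that change applied to the loop from the question (it reuses the question's model, criterion, optimizer and data loaders; only the loss bookkeeping differs):

    for epoch in range(epochs):
        epoch_train_losses = []
        for data in train_loader:
            y_pred = model(data)
            loss1 = criterion(y_pred[:, 0], data.y[0])
            loss2 = criterion(y_pred[:, 1], data.y[1])
            train_loss = 0.8 * loss1 + 0.2 * loss2
            optimizer.zero_grad()
            train_loss.backward()
            optimizer.step()
            epoch_train_losses.append(train_loss.item())

        epoch_val_losses = []
        with torch.no_grad():
            for data in val_loader:
                y_val = model(data)
                loss1 = criterion(y_val[:, 0], data.y[0])
                loss2 = criterion(y_val[:, 1], data.y[1])
                val_loss = 0.8 * loss1 + 0.2 * loss2
                epoch_val_losses.append(val_loss.item())

        train_losses.append(sum(epoch_train_losses) / len(epoch_train_losses))
        val_losses.append(sum(epoch_val_losses) / len(epoch_val_losses))
        print(f'Epoch: {epoch}, train_loss: {train_losses[-1]:.3f}, val_loss: {val_losses[-1]:.3f}')

Each printed value is then the mean over all batches in the epoch rather than the loss of whichever batch happened to come last.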
https://stackoverflow.com/questions/70057479/
How to find argmax/argmin in only selected indices of a Pytorch tensor
I have a distance tensor tensor([ 5, 10, 2, 3, 4], device='cuda:0') and an indices tensor tensor([ 0, 2, 3], device='cuda:0') I want to find the argmax of the distance tensor, but only over the subset of indices specified by the indices tensor. In this example, I would be looking at the 0th, 2nd and 3rd elements of the distance tensor (values 5, 2, 3) and returning the index 0 (the biggest value, 5, is at position 0 of the distance tensor): tensor([ 0], device='cuda:0') Is something like this feasible without the use of for loops? Thanks
Here an example. You can check that the maximum dist value for the selected subset of items is at index zero, and the final output tensor contains value zero too. Note that as we are using 1D tensors, dim argument in torch.index_select is zero. import torch dist = torch.randn(5, 1) #tensor([[ 0.3392], # [ 0.4472], # [ 0.1398], # [-1.0379], # [ 0.2950]]) idx = torch.tensor([0,2,3]) #tensor([0, 2, 3]) Just using max function and tensor filtering: max_val = torch.max(torch.index_select(dist, 0, idx)).item() #0.33918169140815735 (dist == max_val).nonzero(as_tuple=True)[0] #tensor([0])
https://stackoverflow.com/questions/70065848/
How to convert a Python list of lists to a tensor using PyTorch
I have a list that contains lists of different lengths. How can I transform it into a tensor in PyTorch without using padding? Is it possible? [[3, 5, 10, 11], [1, 5, 10]]
It depends on what you want to achieve with the data structure. You can use torch.sparse, for example: ll = [[3, 5, 10, 11], [1, 5, 10]] n = len(ll) m = max(len(l) for l in ll) ids = [[], []] values = [] for i, l in enumerate(ll): length = len(l) ids[0] += [i] * length # rows ids[1] += list(range(length)) # cols values += l t = torch.sparse_coo_tensor(ids, values, (n, m)) Otherwise, you can try embedding techniques for a corpus of documents, such as bag-of-words (though it will still generate some "padding"), tf-idf, etc. Bag-of-words with possible duplicates in the inner lists: corpus = [[3, 5, 10, 11], [1, 5, 10]] n = len(corpus) m = max(max(inner) for inner in corpus) + 1 t = torch.zeros(n, m) for i, doc in enumerate(corpus): t[i] = torch.bincount(torch.tensor(doc), minlength=m) Bag-of-words with distinct values in the inner lists: corpus = [[3, 5, 10, 11], [1, 5, 10]] n = len(corpus) m = max(max(inner) for inner in corpus) + 1 t = torch.zeros(n, m) for i, doc in enumerate(corpus): t[i, doc] = 1
https://stackoverflow.com/questions/70067595/
Is the rotation matrix in Pybullet converting world coordinates into camera or camera to world?
I am working on a project, where I need to replace the renderings by pybullet with renders generated with pytorch3d. I figured out that pybullet and pytorch3d have different definitions for the coordinate systems (see these links: pybullet, pytorch3d; x and z axes are flipped), and I accounted for that in my code. But I still have inconsistency in the rendered objects. I thought the problem could be that while pytorch3d expects a c2w rotation matrix (i.e. camera to world), pybullet could probably expect a w2c rotation matrix. However, I cannot find any documentation related to this. Has anyone ever encountered this problem, or maybe can give some useful hint on how to find out what exactly pybullet expects its rotation matrix to be? Thanks!
I assume you are talking about the viewMatrix expected by pybullet.getCameraImage(). This should indeed be a world-to-camera rotation matrix. However, in pyBullet the camera is looking in negative z-direction while I usually expect it to be in positive one. I am compensating for this by adding a 180°-rotation around the x-axis: rot_x_180 = np.array( [ [1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1], ] ) tf_mat = rot_x_180 @ tf_world_to_camera view_matrix = tf_mat.flatten(order="F") where tf_world_to_camera is a homogeneous rotation matrix.
https://stackoverflow.com/questions/70073425/
CUDA OOM - But the numbers don't add up?
I am trying to train a model using PyTorch. When beginning model training I get the following error message: RuntimeError: CUDA out of memory. Tried to allocate 5.37 GiB (GPU 0; 7.79 GiB total capacity; 742.54 MiB already allocated; 5.13 GiB free; 792.00 MiB reserved in total by PyTorch) I am wondering why this error is occurring. From the way I see it, I have 7.79 GiB total capacity. The numbers it is stating (742 MiB + 5.13 GiB + 792 MiB) do not add up to be greater than 7.79 GiB. When I check nvidia-smi I see these processes running | 0 N/A N/A 1047 G /usr/lib/xorg/Xorg 168MiB | | 0 N/A N/A 5521 G /usr/lib/xorg/Xorg 363MiB | | 0 N/A N/A 5637 G /usr/bin/gnome-shell 161MiB | I realize that summing all of these numbers might cut it close (168 + 363 + 161 + 742 + 792 + 5130 = 7356 MiB) but this is still less than the stated capacity of my GPU.
This is more of a comment, but worth pointing out. The reason in general is indeed what talonmies commented, but you are summing up the numbers incorrectly. Let's see what happens when tensors are moved to GPU (I tried this on my PC with RTX2060 with 5.8G usable GPU memory in total): Let's run the following python commands interactively: Python 3.8.10 (default, Sep 28 2021, 16:10:42) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> a = torch.zeros(1).cuda() >>> b = torch.zeros(500000000).cuda() >>> c = torch.zeros(500000000).cuda() >>> d = torch.zeros(500000000).cuda() The following are the outputs of watch -n.1 nvidia-smi: Right after torch import: | 0 N/A N/A 1121 G /usr/lib/xorg/Xorg 4MiB | Right after the creation of a: | 0 N/A N/A 1121 G /usr/lib/xorg/Xorg 4MiB | | 0 N/A N/A 14701 C python 1251MiB | As you can see, you need 1251MB to get pytorch to start using CUDA, even if you only need a single float. Right after the creation of b: | 0 N/A N/A 1121 G /usr/lib/xorg/Xorg 4MiB | | 0 N/A N/A 14701 C python 3159MiB | b needs 500000000*4 bytes = 1907MB, this is the same as the increment in memory used by the python process. Right after the creation of c: | 0 N/A N/A 1121 G /usr/lib/xorg/Xorg 4MiB | | 0 N/A N/A 14701 C python 5067MiB | No surprise here. Right after the creation of d: | 0 N/A N/A 1121 G /usr/lib/xorg/Xorg 4MiB | | 0 N/A N/A 14701 C python 5067MiB | No further memory allocation, and the OOM error is thrown: Traceback (most recent call last): File "<stdin>", line 1, in <module> RuntimeError: CUDA out of memory. Tried to allocate 1.86 GiB (GPU 0; 5.80 GiB total capacity; 3.73 GiB already allocated; 858.81 MiB free; 3.73 GiB reserved in total by PyTorch) Obviously: The "already allocated" part is included in the "reserved in total by PyTorch" part. You can't sum them up, otherwise the sum exceeds the total available memory. The minimum memory required to get pytorch running on GPU (1251M) is not included in the "reserved in total" part. So in your case, the sum should consist of: 792MB (reserved in total) 1251MB (minimum to get pytorch running on GPU, assuming this is the same for both of us) 5.13GB (free) 168+363+161=692MB (other processes) They sum up to approximately 7988MB=7.80GB, which is exactly you total GPU memory.
https://stackoverflow.com/questions/70074789/
Piecewise Activation Function
I'm trying to write a piecewise activation function whose slope between -6 and 0 is 0.1 and is one everywhere else. The input (X) size is (B, C, H, W). So I concluded that the best way is this simple one-liner: x[-6<x and x<0] = x[-6<x and x<0] * 0.1 But I face this error: RuntimeError: bool value of Tensor with more than one value is ambiguous Is there any solution for this error?
The most simple version of what you need is: import torch def custom_activ(input): return torch.where((input>-6) & (input<0.) , 0.1*input, input)
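As a quick sanity check (my own example values, not from the original answer), autograd propagates the intended piecewise slope through torch.where:

    import torch

    def custom_activ(input):
        return torch.where((input > -6) & (input < 0.), 0.1 * input, input)

    x = torch.tensor([-10.0, -3.0, -1.0, 2.0], requires_grad=True)
    y = custom_activ(x)
    y.sum().backward()
    print(x.grad)   # tensor([1.0000, 0.1000, 0.1000, 1.0000])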
https://stackoverflow.com/questions/70079367/
Python: Vectorize Calculation Implemented using Iterative Approach
I'm trying to implement some calculation, but I can't figure out how to vectorize my code without using loops. Let me explain: I have a matrix M[N,C] of either 0 or 1. Another matrix Y[N,1] containing values in [0,C-1] (my classes). Another matrix ds[N,M] which is my dataset. My output matrix grad is of size [M,C] and should be calculated as follows: I'll explain for grad[:,0]; the same logic applies to any other column. For each row (sample) in ds, if Y[that sample] != 0 (the current column of the output matrix) and M[that sample, 0] > 0, then grad[:,0] += ds[that sample]. If Y[that sample] == 0, then grad[:,0] -= (ds[that sample] * <number of non-zeros in M[that sample,:]>). Here is my iterative approach: for i in range(M.size(dim=1)): for j in range(ds.size(dim=0)): if y[j] == i: grad[:,i] = grad[:,i] - (ds[j,:].T * sum(M[j,:])) else: if M[j,i] > 0: grad[:,i] = grad[:,i] + ds[j,:].T
Since you are dealing with three dimensions n, m, and c (in lowercase to avoid ambiguity), it can be useful to change the shape of all your tensors to (n, m, c), by replicating their values over the missing dimension (e.g. M(m, c) becomes M(n, m, c)). However, you can skip the explicit replication and use broadcasting, so it is sufficient to unsqueeze the missing dimension (e.g. M(m, c) becomes M(1, m, c). Given these considerations, the vectorization of your code becomes as follows cond = y.unsqueeze(2) == torch.arange(M.size(dim=1)).unsqueeze(0) pos = ds.unsqueeze(2) * M.unsqueeze(1) * cond neg = ds.unsqueeze(2) * M.unsqueeze(1).sum(dim=0, keepdim=True) * ~cond grad += (pos - neg).sum(dim=0) Here is a small test to check the validity of the solution import torch n, m, c = 11, 5, 7 y = torch.randint(c, size=(n, 1)) ds = torch.rand(n, m) M = torch.randint(2, size=(n, c)) grad = torch.rand(m, c) def slow_grad(y, ds, M, grad): for i in range(M.size(dim=1)): for j in range(ds.size(dim=0)): if y[j] == i: grad[:,i] = grad[:,i] - (ds[j,:].T * sum(M[j,:])) else: if M[j,i] > 0: grad[:,i] = grad[:,i] + ds[j,:].T return grad def fast_grad(y, ds, M, grad): cond = y.unsqueeze(2) == torch.arange(M.size(dim=1)).unsqueeze(0) pos = ds.unsqueeze(2) * M.unsqueeze(1) * cond neg = ds.unsqueeze(2) * M.unsqueeze(1).sum(dim=0, keepdim=True) * ~cond grad += (pos - neg).sum(dim=0) return grad # Assert equality of all elements function outputs, throws an exception if false assert torch.all(slow_grad(y, ds, M, grad) == fast_grad(y, ds, M, grad)) Feel free to test on other cases as well!
https://stackoverflow.com/questions/70083969/
Is there a way to optimize the calculation of Bernoulli Log-Likelihoods for many multivariate samples?
I currently have two Torch Tensors, p and x, which both have the shape of (batch_size, input_size). I would like to calculate the Bernoulli log likelihoods for the given data, and return a tensor of size (batch_size) Here's an example of what I'd like to do: I have the formula for log likelihoods of Bernoulli Random variables: \sum_i^d x_{i} ln(p_i) + (1-x_i) ln (1-p_i) Say I have p Tensor: [[0.6 0.4 0], [0.33 0.34 0.33]] And say I have the x tensor for the binary inputs based on those probabilities: [[1 1 0], [0 1 1]] And I want to calculate the log likelihood for every sample, which would result in: [[ln(0.6)+ln(0.4)], [ln(0.67)+ln(0.34)+ln(0.33)]] Would it be possible to do this computation without the use of for loops? I know I could use torch.sum(axis=1) to do the final summation between the logs, but is it possible to do the Bernoulli log-likelihood computation without the use of for loops? or use at most 1 for loop? I am trying to vectorize this operation as much as possible. I could've sworn we could use LaTeX for equations before, did something change or is it another website?
Though not a good practice, you can directly use the formula on the tensors as follows (works because these are element wise operations): import torch p = torch.tensor([ [0.6, 0.4, 0], [0.33, 0.34, 0.33] ]) x = torch.tensor([ [1., 1, 0], [0, 1, 1] ]) eps = 1e-8 bll1 = (x * torch.log(p+eps) + (1-x) * torch.log(1-p+eps)).sum(axis=1) print(bll1) #tensor([-1.4271162748, -2.5879497528]) Note that to avoid log(0) error, I have introduced a very small constant eps inside it. A better way to do this is to use BCELoss inside nn module in pytorch. import torch.nn as nn bce = nn.BCELoss(reduction='none') bll2 = -bce(p, x).sum(axis=1) print(bll2) #tensor([-1.4271162748, -2.5879497528]) Since pytorch computes the BCE as a loss, it prepends your formula with a negative sign. The attribute reduction='none' says that I do not want the computed losses to be reduced (averaged/summed) across the batch in any way. This is advisable to use since we do not need to manually take care of numerical stability and error handling (such as adding eps above.) You can indeed verify that the two solutions actually return the same tensor (upto a tolerance): torch.allclose(bll1, bll2) # True or the tensors (without summing each row): torch.allclose((x * torch.log(p+eps) + (1-x) * torch.log(1-p+eps)), -bce(p, x)) # True Feel free to ask for further clarifications.
https://stackoverflow.com/questions/70089367/
I/O Issues in Loading Several Large H5PY Files (Pytorch)
I met a problem! Recently I meet a problem of I/O issue. The target and input data are stored with h5py files. Each target file is 2.6GB while each input file is 10.2GB. I have 5 input datasets and 5 target datasets in total. I created a custom dataset function for each h5py file and then use data.ConcatDataset class to link all the datasets. The custom dataset function is: class MydataSet(Dataset): def __init__(self, indx=1, root_path='./xxx', tar_size=128, data_aug=True, train=True): self.train = train if self.train: self.in_file = pth.join(root_path, 'train', 'train_noisy_%d.h5' % indx) self.tar_file = pth.join(root_path, 'train', 'train_clean_%d.h5' % indx) else: self.in_file = pth.join(root_path, 'test', 'test_noisy.h5') self.tar_file = pth.join(root_path, 'test', 'test_clean.h5') self.h5f_n = h5py.File(self.in_file, 'r', driver='core') self.h5f_c = h5py.File(self.tar_file, 'r') self.keys_n = list(self.h5f_n.keys()) self.keys_c = list(self.h5f_c.keys()) # h5f_n.close() # h5f_c.close() self.tar_size = tar_size self.data_aug = data_aug def __len__(self): return len(self.keys_n) def __del__(self): self.h5f_n.close() self.h5f_c.close() def __getitem__(self, index): keyn = self.keys_n[index] keyc = self.keys_c[index] datan = np.array(self.h5f_n[keyn]) datac = np.array(self.h5f_c[keyc]) datan_tensor = torch.from_numpy(datan).unsqueeze(0) datac_tensor = torch.from_numpy(datac) if self.data_aug and np.random.randint(2, size=1)[0] == 1: # horizontal flip datan_tensor = torch.flip(datan_tensor,dims=[2]) # c h w datac_tensor = torch.flip(datac_tensor,dims=[2]) Then I use dataset_train = data.ConcatDataset([MydataSet(indx=index, train=True) for index in range(1, 6)]) for training. When only 2-3 h5py files are used, the I/O speed is normal and everything goes right. However, when 5 files are used, the training speed is gradually decreasing (5 iterations/s to 1 iterations/s). I change the num_worker and the problem still exists. Could anyone give me a solution? Should I merge several h5py files into a bigger one? Or other methods? Thanks in advance!
Improving performance requires timing benchmarks. To do that you need to identify potential bottlenecks and associated scenarios. You said "with 2-3 files the I/O speed is normal" and "when 5 files are used, the training speed gradually decreases". So, is your performance issue I/O speed, or training speed? Or do you know? If you don't know, you need to isolate and compare I/O performance and training performance separately for the 2 scenarios. In other words, to measure I/O performance (only) you need to run the following tests: Time to read and concatenate 2-3 files, Time to read and concatenate 5 files, Copy the 5 files into 1, and time the read from the merged file, Or, link the 5 files to 1 file, and time. And to measure training speed (only) you need to compare performance for the following tests: Merge 2-3 files, then read and train from the merged file. Merge all 5 files, then read and train from merged file. Or, link the 5 files to 1 file, then read and train from linked file. As noted in my comment, merging (or linking) multiple HDF5 files into one is easy if all datasets are at the root level and all dataset names are unique. I added the external link method because it might provide the same performance, without duplicating large data files. Below is the code that shows both methods. Substitute your file names in the fnames list, and it should be ready to run. If your dataset names aren't unique, you will need to create unique names, and assign in h5fr.copy() -- like this: h5fr.copy(h5fr[ds],h5fw,'unique_dataset_name') Code to merge -or- link files : (comment/uncomment lines as appropriate) import h5py fnames = ['file_1.h5','file_2.h5','file_3.h5'] # consider changing filename to 'linked_' when using links: with h5py.File(f'merge_{len(fnames)}.h5','w') as h5fw: for fname in fnames: with h5py.File(fname,'r') as h5fr: for ds in h5fr.keys(): # To copy datasets into 1 file use: h5fr.copy(h5fr[ds],h5fw) # to link datasets to 1 file use: # h5fw[ds] = h5py.ExternalLink(fname,ds)
https://stackoverflow.com/questions/70089964/
To convert CNN model code from Keras to Pytorch
I’m trying to convert CNN model code from Keras to Pytorch. Here is the Keras Sequential layer model=Sequential() model.add(Conv2D(filters=64, kernel_size = (3,3), activation="relu", input_shape=(28,28,1))) model.add(Conv2D(filters=64, kernel_size = (3,3), activation="relu")) model.add(MaxPooling2D(pool_size=(2,2))) model.add(BatchNormalization()) model.add(Conv2D(filters=128, kernel_size = (3,3), activation="relu")) model.add(Conv2D(filters=128, kernel_size = (3,3), activation="relu")) model.add(MaxPooling2D(pool_size=(2,2))) model.add(BatchNormalization()) model.add(Conv2D(filters=256, kernel_size = (3,3), activation="relu")) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Flatten()) model.add(BatchNormalization()) model.add(Dense(512,activation="relu")) model.add(Dense(10,activation="softmax")) model.compile(loss="categorical_crossentropy",optimizer=optimizer,metrics=["accuracy"]) How can I initialize and write forward code on the pytorch model? Especially Flatten and Dense layer. Any comment would appreciate.
I tried to implement it in PyTorch, but check the number of params to make sure that this is the same with your Keras implementation. I tried to write it to be more understandable and simple that's why I wrote down all activation functions. I hope this might be helpful. import torch import torch.nn as nn class Net(nn.Module): def __init__(self, num_classes=10): super(Net, self).__init__() self.conv1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(3, 3), padding=(1, 1)) self.relu1 = nn.ReLU(inplace=True) self.conv2 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=(3, 3), padding=(1, 1)) self.relu2 = nn.ReLU(inplace=True) self.pool1 = nn.MaxPool2d(kernel_size=(2, 2)) self.norm1 = nn.BatchNorm2d(num_features=64) self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=(3, 3), padding=(1, 1)) self.relu3 = nn.ReLU(inplace=True) self.conv4 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=(3, 3), padding=(1, 1)) self.relu4 = nn.ReLU(inplace=True) self.pool2 = nn.MaxPool2d(kernel_size=(2, 2)) self.norm2 = nn.BatchNorm2d(num_features=128) self.conv5 = nn.Conv2d(in_channels=128, out_channels=256, kernel_size=(3, 3), padding=(1, 1)) self.relu5 = nn.ReLU(inplace=True) self.pool3 = nn.MaxPool2d(kernel_size=(2, 2)) self.norm3 = nn.BatchNorm2d(num_features=256) self.fc1 = nn.Linear(in_features=256, out_features=512) self.relu6 = nn.ReLU(inplace=True) self.fc2 = nn.Linear(in_features=512, out_features=10) self.act = nn.Softmax(dim=1) def forward(self, x): x = self.relu1(self.conv1(x)) x = self.relu2(self.conv2(x)) x = self.norm1(self.pool1(x)) x = self.relu3(self.conv3(x)) x = self.relu4(self.conv4(x)) x = self.norm2(self.pool2(x)) x = self.relu5(self.conv5(x)) x = self.norm3(self.pool3(x)) x = x.mean((2, 3), keepdim=True) x = torch.flatten(x, 1) x = self.relu6(self.fc1(x)) x = self.act(self.fc2(x),) return x if __name__ == '__main__': model = Net(num_classes=10) a = torch.randn(1, 3, 224, 224) print("Output: ", model(a).shape) print("Num. params: ", sum(p.numel() for p in model.parameters() if p.requires_grad)) Output Output: torch.Size([1, 10]) Num. params: 692938
https://stackoverflow.com/questions/70094946/
Number of parameters and FLOPS in ONNX and TensorRT model
Does the number of parameters and FLOPs (floating point operations) change when converting a model from PyTorch to ONNX or TensorRT format?
I don't think Anvar's post answered OP's question thoroughly so I did a little bit of research. Some general info before the answers to the questions as I believe OP hasn't understood fully what TensorRT and ONNX optimizations happen during the conversion from PyTorch format. Both conversions, Pytorch to ONNX and ONNX to TensorRT increase the performance of the model by using several different optimizations. The tools actually print you information about what they do if you choose the verbose flag for them. The preferred way to convert a Pytorch model to TensorRT is to use Torch-TensorRT as explained here. TensorRT fuses layers and tensors in the model graph, it then uses a large kernel library to select implementations that perform best on the target GPU. ONNX runtime offers mostly graph optimizations such as graph simplifications and node fusions to improve performance. 1. Does the number of parameters change when converting a PyTorch model to ONNX or TensorRT? No: even though the layers are fused the number of parameters does not decrease unless there are some redundant branches in the model. I tested this by downloading the yolov5s.onnx model here. The original model has 7.2M parameters according to the repository authors. Then I used this tool to count the number of parameters in the yolov5.onnx model and got 7225917 as a result. Thus, onnx conversion did not reduce the amount of parameters. I was not able to get as elaborate information for TensorRT model but you can get layer information using trtexec. There is a recent question about this but there are no answers yet. 2. Does the number of FLOPS change when converting a PyTorch model to ONNX or TensorRT? According to this post, no.
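For reference, a minimal sketch of how such a parameter count can be obtained from an ONNX file (the file name is just a placeholder; this assumes the weights are stored as graph initializers, which is the usual case for exported models):

    import onnx
    import numpy as np

    model = onnx.load("yolov5s.onnx")   # placeholder path
    n_params = sum(int(np.prod(init.dims)) for init in model.graph.initializer)
    print("parameter count:", n_params)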
https://stackoverflow.com/questions/70097798/
ImportError after installing torchtext 0.11.0 with conda
I have installed pytorch version 1.10.0 alongside torchtext, torchvision and torchaudio using conda. My PyTorch is cpu-only, and I have experimented with both conda install pytorch-mutex -c pytorch and conda install pytorch cpuonly -c pytorch to install the cpuonly version, both yielding the same eror that I will describe in the following lines. I have also installed pytorch-lightning in conda, alongside jsonargparse[summaries via pip in the environment. I have written this code to see whether LightningCLI works or not. # script.py import torch import pytorch_lightning as pl class BoringModel(LightningModule): def __init__(self): super().__init__() self.layer = torch.nn.Linear(32, 2) def forward(self, x): return self.layer(x) def training_step(self, batch, batch_idx): loss = self(batch).sum() self.log("train_loss", loss) return {"loss": loss} def validation_step(self, batch, batch_idx): loss = self(batch).sum() self.log("valid_loss", loss) def test_step(self, batch, batch_idx): loss = self(batch).sum() self.log("test_loss", loss) def configure_optimizers(self): return torch.optim.SGD(self.layer.parameters(), lr=0.1) cli = LightningCLI(BoringModel) But when I run it using python -m script fit --print_config, I get the following error: ImportError: /home/farhood/miniconda3/envs/pytorch_dummy_environment/lib/python3.9/site-packages/torchtext/_torchtext.so: undefined symbol: _ZNK5torch3jit6MethodclESt6vectorIN3c106IValueESaIS4_EERKSt13unordered_mapISsS4_St4hashISsESt8equal_toISsESaISt4pairIKSsS4_EEE Which indicates that there is something broken with my Conda installation, and it's probably related to torchtext somehow. This is the versions of the installed torch related packages: pytorch 1.10.0 cpu_py39hc5866cc_0 conda-forge pytorch-lightning 1.5.2 pyhd8ed1ab_0 conda-forge pytorch-mutex 1.0 cuda pytorch torchaudio 0.10.0 py39_cpu pytorch torchmetrics 0.6.0 pyhd8ed1ab_0 conda-forge torchtext 0.11.0 py39 pytorch torchvision 0.11.1 py39_cpu pytorch
So in order to fix the problem, I had to change my environment.yaml in order to force pytorch to install from the pytorch channel. So this is my environment.yaml now: channels: - defaults - pytorch - conda-forge dependencies: # ML section - pytorch::pytorch - pytorch::torchtext - pytorch::torchvision - pytorch::torchaudio - pytorch::cpuonly - mlflow=1.21.0 - pytorch-lightning>=1.5.2 - pip: - jsonargparse[signatures] Using this I don't get the error anymore. The pytorch related stuff installed now is: cpuonly 2.0 0 pytorch pytorch 1.10.0 py3.9_cpu_0 pytorch pytorch-lightning 1.5.2 pyhd8ed1ab_0 conda-forge pytorch-mutex 1.0 cpu pytorch torchaudio 0.10.0 py39_cpu [cpuonly] pytorch torchtext 0.11.0 py39 pytorch torchvision 0.11.1 py39_cpu [cpuonly] pytorch
https://stackoverflow.com/questions/70098916/
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! When predicting with my model
I trained a model for sequence classification using transformers (BertForSequenceClassification) and I get the error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select) I don't really get where is the problem, if it's on my model, on how I tokenize the data, or what. Here is my code: LOADING THE PRETRAINED MODEL model_state_dict = torch.load("../MODELOS/TRANSFORMERS/TransformersNormal", map_location='cpu') #Doesnt work with map_location='cuda:0' neither model = BertForSequenceClassification.from_pretrained(pretrained_model_name_or_path="bert-base-uncased", state_dict=model_state_dict, cache_dir='./data') CREATING DATALOAD def crearDataLoad(dfv,tokenizer): dft=dfv # usamos el del validacion para que nos salga los resultados y no tener que cambiar mucho codigo #validation=dfv['text'] validation=dfv['text'].str.lower() # para modelos uncased # el fichero que hemos llamado test es usado en la red neuronal validation_labels=dfv['label'] validation_inputs = crearinputs (validation,tokenizer) validation_masks= crearmask (validation_inputs) validation_inputs = torch.tensor(validation_inputs) validation_labels = torch.tensor(validation_labels.values) validation_masks = torch.tensor(validation_masks) from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler# The DataLoader needs to know our batch size for training, so we specify it #Colab batch_size = 32 #local #batch_size = 15 validation_data = TensorDataset(validation_inputs, validation_masks, validation_labels) validation_sampler = SequentialSampler(validation_data) validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size) return validation_dataloader SHOWING RESULTS def resultados(validation_dataloader, model, tokenizer): model.eval() # Tracking variables predictions , true_labels = [], [] pred = [] t_label =[] # Predict for batch in validation_dataloader: # Add batch to GPU , como no tengo lo dejo aquí batch = tuple(t.to(device) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_input_mask, b_labels = batch # Telling the model not to compute or store gradients, saving memory and # speeding up prediction with torch.no_grad(): # Forward pass, calculate logit predictions outputs = model(b_input_ids, #toktype_ids=None, # attention_mask=b_input_mask) #I GET THE ERROR HERE logits = outputs[0] # Move logits and labels to CPU logits = logits.detach().cpu().numpy() label_ids = b_labels.to('cpu').numpy() # Store predictions and true labels # Store predictions and true labels predictions.append(logits) true_labels.append(label_ids) for l in logits: # para cada tupla del logits, se selecciona 0 o 1 dependiendo del valor # que sea el mayor (argmax) pred_labels_i = np.argmax(l).item() pred.append(pred_labels_i) #Si no me equivoco, en pred guardamos las predicciones hechas por el modelo pred=np.asarray(pred).tolist() t_label = [val for sublist in true_labels for val in sublist] # para aplanar la lista de etiquetas #print('predicciones',pred) #print('t_labels',t_label) #print('validation_labels',validation_labels ) print("RESULTADOS KFOLD validacion cruzada") from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report print(classification_report(t_label, pred)) print ("Distribution test {}".format(Counter(t_label))) from sklearn.metrics import confusion_matrix print(confusion_matrix(t_label, pred)) from 
sklearn.metrics import roc_auc_score print('AUC ROC:') print(roc_auc_score(t_label, pred)) from sklearn.metrics import f1_score result=f1_score(t_label, pred, average='binary',labels=[0,1],pos_label=1,zero_division=0) print('f1-score macro:') print(result) print("****************************************************************") return result I get the error at this line in function resultados: with torch.no_grad(): # Forward pass, calculate logit predictions outputs = model(b_input_ids, #toktype_ids=None, # attention_mask=b_input_mask) #Esto falla MAIN PROGRAM trial_data = pd.DataFrame(trial_dataset) device_name = tf.test.gpu_device_name() if device_name != '/device:GPU:0': print('no hay gpu') print('Found GPU at: {}'.format(device_name)) #import torch# If there's a GPU available... if torch.cuda.is_available(): # Tell PyTorch to use the GPU. device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) # If not... else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') validation_dataloader = crearDataLoad(trial_data,tokenizer) # obteniendo metricas del modelo generado en el paso anterior model.eval() result= resultados(validation_dataloader, model,tokenizer)
You did not move your model to device, only the data. You need to call model.to(device) before using it with data located on device.
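For example, in the main program from the question, it is enough to move the model once before evaluating (sketch using the question's own variable names):

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)    # put the BERT weights on the same device as the input batches
    model.eval()
    result = resultados(validation_dataloader, model, tokenizer)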
https://stackoverflow.com/questions/70102323/
How to combine 2 different shaped pytorch tensors in training?
At the moment my model gives 3 output tensors. I want two of them to be more cooperative. I want to use the combination of self.dropout1(hs) and self.dropout2(cls_hs) to pass through the self.entity_out Linear Layer. The issue is mentioned 2 tensors are in different shapes. Current Code class NLUModel(nn.Module): def __init__(self, num_entity, num_intent, num_scenarios): super(NLUModel, self).__init__() self.num_entity = num_entity self.num_intent = num_intent self.num_scenario = num_scenarios self.bert = transformers.BertModel.from_pretrained(config.BASE_MODEL) self.dropout1 = nn.Dropout(0.3) self.dropout2 = nn.Dropout(0.3) self.dropout3 = nn.Dropout(0.3) self.entity_out = nn.Linear(768, self.num_entity) self.intent_out = nn.Linear(768, self.num_intent) self.scenario_out = nn.Linear(768, self.num_scenario) def forward(self, ids, mask, token_type_ids): out = self.bert(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids) hs, cls_hs = out['last_hidden_state'], out['pooler_output'] entity_hs = self.dropout1(hs) intent_hs = self.dropout2(cls_hs) scenario_hs = self.dropout3(cls_hs) entity_hs = self.entity_out(entity_hs) intent_hs = self.intent_out(intent_hs) scenario_hs = self.scenario_out(scenario_hs) return entity_hs, intent_hs, scenario_hs Required def forward(self, ids, mask, token_type_ids): out = self.bert(input_ids=ids, attention_mask=mask, token_type_ids=token_type_ids) hs, cls_hs = out['last_hidden_state'], out['pooler_output'] entity_hs = self.dropout1(hs) intent_hs = self.dropout2(cls_hs) scenario_hs = self.dropout3(cls_hs) entity_hs = self.entity_out(concat(entity_hs, intent_hs)) # Concatination intent_hs = self.intent_out(intent_hs) scenario_hs = self.scenario_out(scenario_hs) return entity_hs, intent_hs, scenario_hs Let's say I was successful in concatenating... will the backward propagation work?
The shape of entity_hs (last_hidden_state) is [batch_size, sequence_length, hidden_size], while the shape of intent_hs (pooler_output) is just [batch_size, hidden_size], so putting them together may not make sense. It depends on what you want to do. If, for some reason, you want to get output [batch_size, sequence_length, channels], you could tile the intent_hs tensor: intent_hs = torch.tile(intent_hs[:, None, :], (1, sequence_length, 1)) ... = torch.cat([entity_hs, intent_hs], dim=2) If you want to get [batch_size, channels], you can reduce the entity_hs tensor, for example by averaging: entity_hs = torch.mean(entity_hs, dim=1) ... = torch.cat([entity_hs, intent_hs], dim=1) Yes, the backward pass will propagate gradients through the concatenation (and the rest).
https://stackoverflow.com/questions/70102790/
Pytorch index with Tensor
I have a 2-dimensional tensor arr with 0 as all the entries. I have a second tensor idx. I want to set all entries in arr at the indices given in idx to 1. arr = torch.zeros(size = (2,10)) idx = torch.Tensor([ [0,2], [4,5] ]) arr[idx] = 1 #This doesn't work print(arr) The output should look like this: tensor([[1., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 1., 0., 0., 0., 0.]]) I was confident I would find someone else asking this on SO, but I couldn't find anything. I hope it isn't a duplicate.
Use scatter() along dim=1 or the innermost dimension in this case i.e. dim=-1. Note that in place of src tensor, I just passed the constant value 1. In [31]: arr = torch.zeros(size=(2, 10)) In [32]: idx = torch.tensor([ ...: [0, 2], ...: [4, 5] ...: ]) In [33]: torch.scatter(arr, 1, idx, 1) Out[33]: tensor([[1., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 1., 0., 0., 0., 0.]]) In [34]: torch.scatter(arr, -1, idx, 1) Out[34]: tensor([[1., 0., 1., 0., 0., 0., 0., 0., 0., 0.], [0., 0., 0., 0., 1., 1., 0., 0., 0., 0.]])
https://stackoverflow.com/questions/70103909/
Can I define a method as an attribute?
The topic above is a bit ambiguous, the explaination is below: class Trainer: """Object used to facilitate training.""" def __init__( self, # params: Namespace, params, model, device=torch.device("cpu"), optimizer=None, scheduler=None, wandb_run=None, early_stopping: callbacks.EarlyStopping = None, ): # Set params self.params = params self.model = model self.device = device # self.optimizer = optimizer self.optimizer = self.get_optimizer() self.scheduler = scheduler self.wandb_run = wandb_run self.early_stopping = early_stopping # list to contain various train metrics # TODO: how to add more metrics? wandb log too. Maybe save to model artifacts? self.history = DefaultDict(list) @staticmethod def get_optimizer( model: models.CustomNeuralNet, optimizer_params: global_params.OptimizerParams(), ): """Get the optimizer for the model. Args: model (models.CustomNeuralNet): [description] optimizer_params (global_params.OptimizerParams): [description] Returns: [type]: [description] """ return getattr(torch.optim, optimizer_params.optimizer_name)( model.parameters(), **optimizer_params.optimizer_params ) Notice that initially I passed in optimizer in the constructor, where I will be calling it outside this class. However, I now put get_optimizer inside the class itself (for consistency purpose, but unsure if it is ok). So, should I still define self.optimizer = self.get_optimizer() or just use self.get_optimizer() at the designated places in the class? The former encourages some readability for me. Addendum: I now put the instance inside the .fit() method where I will call say 5 times to train the model 5 times. In this scenario, even though there won't be any obvious issue as we are using optimizer once per call, will it still be better to not define self.optimizer here? def fit( self, train_loader: torch.utils.data.DataLoader, valid_loader: torch.utils.data.DataLoader, fold: int = None, ): """[summary] Args: train_loader (torch.utils.data.DataLoader): [description] val_loader (torch.utils.data.DataLoader): [description] fold (int, optional): [description]. Defaults to None. Returns: [type]: [description] """ self.optimizer = self.get_optimizer( model=self.model, optimizer_params=OPTIMIZER_PARAMS ) self.scheduler = self.get_scheduler( optimizer=self.optimizer, scheduler_params=SCHEDULER_PARAMS )
There is a difference between the two: calling your get_optimizer will instantiate a new torch.optim.<optimizer> every time. In contrast, setting self.optimizer and accessing it numerous times later will only create a single optimizer instance.
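A tiny self-contained illustration of the difference (plain Adam on a toy model, not the CustomNeuralNet from the question):

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)

    def get_optimizer(model):
        # builds a brand-new optimizer (fresh Adam moment buffers) on every call
        return torch.optim.Adam(model.parameters(), lr=1e-3)

    print(get_optimizer(model) is get_optimizer(model))   # False: two distinct instances

    optimizer = get_optimizer(model)    # stored once, like self.optimizer = self.get_optimizer()
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()
    optimizer.step()
    print(len(optimizer.state) > 0)     # True: the stored instance has accumulated per-parameter state

For stateful optimizers such as Adam, re-instantiating at every use site would silently reset those buffers, which is the practical reason to assign the result to self.optimizer once (e.g. at the start of each fit() call) and reuse it.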
https://stackoverflow.com/questions/70107044/
Pytorch: RuntimeError: result type Float can't be cast to the desired output type Long
I have a model which looks as follows: IMG_WIDTH = IMG_HEIGHT = 224 class AlexNet(nn.Module): def __init__(self, output_dim): super(AlexNet, self).__init__() self._to_linear = None self.x = torch.randn(3, IMG_WIDTH, IMG_HEIGHT).view(-1, 3, IMG_WIDTH, IMG_HEIGHT) self.features = nn.Sequential( nn.Conv2d(3, 64, 3, 2, 1), # in_channels, out_channels, kernel_size, stride, padding nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(64, 192, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(192, 384, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(384, 256, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True), nn.Conv2d(256, 512, 3, padding=1), nn.ReLU(inplace=True), nn.Conv2d(512, 256, 3, padding=1), nn.MaxPool2d(2), nn.ReLU(inplace=True) ) self.conv(self.x) self.classifier = nn.Sequential( nn.Dropout(.5), nn.Linear(self._to_linear, 4096), nn.ReLU(inplace=True), nn.Dropout(.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True), nn.Linear(4096, output_dim), ) def conv(self, x): x = self.features(x) if self._to_linear is None: self._to_linear = x.shape[1] * x.shape[2] * x.shape[3] return x def forward(self, x): x = self.conv(x) h = x.view(x.shape[0], -1) x = self.classifier(h) return x, h Here is my optimizer and loss functions: optimizer = torch.optim.Adam(model.parameters()) criterion = nn.BCEWithLogitsLoss().to(device) Here is my train and evaluate functions: def train(model, iterator, optimizer, criterion, device): epoch_loss, epoch_acc = 0, 0 model.train() for (x, y) in iterator: # features and labels to the device x = x.to(device) y = y.to(device).long() # Zero the gradients optimizer.zero_grad() y_pred, _ = model(x) # Calculate the loss and accuracy loss = criterion(y_pred.squeeze(), y) acc = binary_accuracy(y_pred, y) # Backward propagate loss.backward() # Update the weights optimizer.step() epoch_loss +=loss.item() epoch_acc += acc.item() return epoch_loss/len(iterator), epoch_acc/len(iterator) def evaluate(model, iterator, criterion, device): epoch_loss, epoch_acc = 0, 0 model.eval() with torch.no_grad(): for (x, y) in iterator: x = x.to(device) y = y.to(device).long() y_pred, _ = model(x) loss = criterion(y_pred, y) acc = binary_accuracy(y_pred, y) epoch_loss += loss.item() epoch_acc += acc.item() return epoch_loss/len(iterator), epoch_acc/len(iterator) This is the error that I'm getting: RuntimeError: result type Float can't be cast to the desired output type Long What may be possibly my problem because I have tried to convert my labels to long tensors as follows: y = y.to(device).long() But it seems not to work.
I was getting the same error doing this: loss_fn(output, target) where the output was Tensor torch.float32 and target was Tensor torch.int64. What solved this problem was calling the loss function like this: loss_fn(output, target.float())
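A minimal sketch of the fix in context, assuming nn.BCEWithLogitsLoss as in the question (which expects floating-point targets of the same shape as the logits):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits = torch.randn(8)             # raw model outputs for 8 samples
labels = torch.randint(0, 2, (8,))  # int64 labels
loss = criterion(logits, labels.float())  # cast targets to float instead of long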
https://stackoverflow.com/questions/70110429/
Expected 4-dimensional input for 4-dimensional weight [192, 768, 1, 1], but got 2-dimensional input of size [50, 1000] instead
I'm try to modify Inception v3 pre trained in pytorch to have a multi-input. ( 4 output precisely). I get this error: Expected 4-dimensional input for 4-dimensional weight [192, 768, 1, 1], but got 2-dimensional input of size [50, 1000] instead My input shape is : torch.Size([50, 3, 299, 299]) This is the code for my model, class CNN1(nn.Module): def __init__(self, pretrained): super(CNN1, self).__init__() if pretrained is True: self.model = models.inception_v3(pretrained=True) modules = list(self.model.children())[:-1] # delete the last fc layer. self.features = nn.Sequential(*modules) self.fc0 = nn.Linear(2048, 10) #digit 0 self.fc1 = nn.Linear(2048, 10) #digit 1 self.fc2 = nn.Linear(2048, 10) #digit 2 self.fc3 = nn.Linear(2048, 10) #digit 3 def forward(self, x): bs, _, _, _ = x.shape x = self.features(x) x = F.adaptive_avg_pool2d(x, 1).reshape(bs, -1) label0 = self.fc0(x) label1 = self.fc1(x) label2= self.fc2(x) label3= self.fc3(x) return {'label0': label0, 'label1': label1,'label2':label2, 'label3': label3} and this is a piece of iteration: for batch_idx, sample_batched in enumerate(train_dataloader): # importing data and moving to GPU image,label0, label1, label2, label3 = sample_batched['image'].to(device),\ sample_batched['label0'].to(device),\ sample_batched['label1'].to(device),\ sample_batched['label2'].to(device) ,\ sample_batched['label3'].to(device) # zero the parameter gradients optimizer.zero_grad() output=model(image.float()) anyone have a suggestion?
One way to remove layers of a PyTorch model is to use nn.Identity() layer. I think you want to remove the last fully connected layer. If so check this: import torch import torch.nn as nn import torch.nn.functional as F from torchvision import models class CNN1(nn.Module): def __init__(self, pretrained): super(CNN1, self).__init__() if pretrained is True: self.model = models.inception_v3(pretrained=True) else: self.model = models.inception_v3(pretrained=False) # modules = list(self.model.children())[:-1] # delete the last fc layer. self.model.fc = nn.Identity() # # to freeze training of inception weights # for param in self.model.parameters(): # param.requires_grad = False self.fc0 = nn.Linear(2048, 10) self.fc1 = nn.Linear(2048, 10) self.fc2 = nn.Linear(2048, 10) self.fc3 = nn.Linear(2048, 10) def forward(self, x): bs, _, _, _ = x.shape x, aux_x = self.model(x) # x = F.adaptive_avg_pool2d(x, 1).reshape(bs, -1) label0 = self.fc0(x) label1 = self.fc1(x) label2= self.fc2(x) label3= self.fc3(x) return {'label0': label0, 'label1': label1,'label2':label2, 'label3': label3} if __name__ == '__main__': net = CNN1(True) print(net) inp = torch.randn(50, 3, 299, 299) out = net(inp) print('label0 shape:', out['label0'].shape) Note: if you want to freeze training of inception layers set requires_grad = False for each parameter In you code you are assuming all the layer connections are sequential by using nn.Sequential(*modules) line, may be that is causing the error.
https://stackoverflow.com/questions/70111562/
Python: Generate a unique batch from given dataset
I'm applying a CNN to classify a given dataset. My function: def batch_generator(dataset, input_shape = (256, 256), batch_size = 32): dataset_images = [] dataset_labels = [] for i in range(0, len(dataset)): dataset_images.append(cv2.resize(cv2.imread(dataset[i], cv2.IMREAD_COLOR), input_shape, interpolation = cv2.INTER_AREA)) dataset_labels.append(labels[dataset[i].split('/')[-2]]) return dataset_images, dataset_labels This function is supposed to be called for every epoch and it should return a unique batch of size 'batch_size' containing dataset_images (each image is 256x256) and the corresponding dataset_label from the labels dictionary. The input 'dataset' contains the paths to all the images, so I'm opening them and resizing them to 256x256. Can someone help me extend this code so that it returns the desired batches?
PyTorch has two similar sounding, but very different abstractions for loading data. I strongly recommend reading the documentation on dataloaders here. To summarize A Dataset is an object you generally implement that returns an individual sample (data + label) A DataLoader is a built-in class in pytorch that samples batches of samples from a dataset (potentially in parallel). A (map-style) Dataset is a simple object that just implements two mandatory methods: __getitem__ and __len__. Getitem is the method that is invoked on an object when you use the square-bracket operator i.e. dataset[i] and __len__ is the method that is invoked when you use the python built-in len function on your object, i.e. len(dataset) For pytorch you usually want __getitem__ to return a tuple containing both the data and the label for a single item in your dataset object. For example based on what you provided, something like this should suit your needs from torch.utils.data import Dataset, DataLoader import torchvision.transforms.functional as F class CustomDataset(Dataset): def __init__(self, image_paths, labels, input_shape=(256, 256)): # `image_paths` is what you called `dataset` in your example. # I'm assume this is a list of image paths. # `labels` isn't defined in your script but I assume its a # dict that maps image names to an integer label # between 0 and num classes minus 1 self.image_paths = image_paths self.labels = labels self.input_shape = input_shape def __getitem__(self, index): # return the data and label for the specified index image_path = self.image_paths[index] data = cv2.resize(cv2.imread(image_path, cv2.IMREAD_COLOR), self.input_shape, interpolation = cv2.INTER_AREA) label = self.labels[image_path.split('/')[-2]] # convert data to PyTorch tensor # This converts data from a uint8 np.array of shape HxWxC # between 0 and 255 to a pytorch float32 tensor of shape CxHxW # between 0.0 and 1.0. data = F.to_tensor(data) return data, label def __len__(self): return len(self.image_paths) ... # using what you call "dataset" and "labels" # num_workers > 0 allows you to load data in parallel while network is running dataloader = DataLoader( CustomDataset(dataset, labels, (256, 256)), batch_size=32, shuffle=True, # shuffle tells us to randomly sample the # dataset without replacement num_workers=4 # num workers is the number of worker processes # that load from dataset in parallel while your # model is processing stuff ) # training loop for epoch in range(num_epochs): # iterates over all data in your dataset in a random order # in batches of size 32 each time this loop is run for data_batch, label_batch in dataloader: # data_batch is a pytorch FloatTensor of shape 32x3x256x256 # label_batch is a pytorch LongTensor of shape 32 # if using GPU acceleration now is the time to move data_batch and label_batch to GPU # data_batch = data_batch.cuda() # label_batch = label_batch.cuda() # zero the gradients, pass data through your model, backprop, and step the optimizer ...
https://stackoverflow.com/questions/70114892/
Weird behaviour of loss function in pytorch
I'm computing a custom cost function that is simply taking the exponential of cross-entropy divided by a parameter \eta. During the first iterations (around 20), the training loss is decreasing, but after that, I get suddenly a nan, which I don't understand why is happening. The code I'm using is the following: e_loss = [] eta = 2 #just an example of value of eta I'm using criterion = nn.CrossEntropyLoss() for e in range(epoch): train_loss = 0 for batch_idx, (data, target) in enumerate(train_loader): client_model.train() optimizer.zero_grad() output = client_model(data) loss = torch.exp(criterion(output, target)/eta) # this is the line where I input my custom loss function loss.backward() optimizer.step() train_loss += loss.item()*data.size(0) train_loss = train_loss/len(train_loader) # average losses e_loss.append(train_loss)
Directly using exp is quite unstable when the input is unbounded. Cross-entropy loss can return very large values if the network very confidently predicts the wrong class (b/c -log(x) goes to inf as x goes to 0). A single inaccurate prediction like this can cause numerical overflow, which makes the gradients go to nan, which will in turn immediately make the weights and outputs of your model nan. For example >>> import torch >>> import torch.nn.functional as F >>> torch.exp(F.cross_entropy(torch.tensor([[-50.0, 50.0]]), torch.tensor([0]))) tensor(inf)
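A hedged workaround (not from the original answer) is to bound the cross-entropy before exponentiating, so a single confident mistake cannot overflow the loss:
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
eta = 2.0
logits = torch.randn(4, 10, requires_grad=True)
target = torch.randint(0, 10, (4,))
ce = criterion(logits, target)
loss = torch.exp(torch.clamp(ce, max=20.0) / eta)  # clamp keeps exp() finite even for very bad predictions
loss.backward()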
https://stackoverflow.com/questions/70115610/
Accumulating gradients for a larger batch size with PyTorch
In order to mimick a larger batch size, I want to be able to accumulate gradients every N batches for a model in PyTorch, like: def train(model, optimizer, dataloader, num_epochs): model.train() model.cuda() for epoch_num in range(1, num_epochs+1): for batch_num, data in enumerate(dataloader): ims = data.to('cuda:0') loss = model(ims) loss.backward() if batch_num % N == 0 and batch_num != 0: optimizer.step() optimizer.zero_grad(set_to_none=True) For this approach do I need to add the flag retain_graph=True, i.e. loss.backward(retain_graph=True) In this manner, are the gradients per each backward call simply summed per each parameter?
You need to set retain_graph=True if you want to make multiple backward passes over the same computational graph, making use of the intermediate results from a single forward pass. This would have been the case, for instance, if you called loss.backward() multiple times after computing loss once, or if you had multiple losses from different parts of the graph to backpropagate from (a good explanation can be found here). In your case, for each forward pass, you backpropagate exactly once. So you don't need to store the intermediate results from the computational graph once the gradients are computed. In short: Intermediate outputs in the graph are cleared after a backward pass, unless explicitly preserved using retain_graph=True. Gradients accumulate by default, unless explicitly cleared using zero_grad.
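A minimal sketch of the accumulation loop, reusing the names from the question and dividing the loss by N so the accumulated gradient matches the scale of one large batch (this scaling is an assumption about the desired behaviour, not part of the original answer):
for batch_num, data in enumerate(dataloader):
    ims = data.to('cuda:0')
    loss = model(ims) / N            # scale so N accumulated steps approximate one batch N times larger
    loss.backward()                  # gradients are summed into .grad across iterations
    if (batch_num + 1) % N == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)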
https://stackoverflow.com/questions/70119232/
How can I check parameters of Pytorch networks' layers?
import torch import torch.nn as nn from torch.optim import Adam class NN_Network(nn.Module): def __init__(self,in_dim,hid,out_dim): super(NN_Network, self).__init__() self.linear1 = nn.Linear(in_dim,hid) self.linear2 = nn.Linear(hid,out_dim) def forward(self, input_array): h = self.linear1(input_array) y_pred = self.linear2(h) return y_pred in_d = 5 hidn = 2 out_d = 3 net = NN_Network(in_d, hidn, out_d) list(net.parameters()) The result was : [Parameter containing: tensor([[-0.2948, -0.1261, 0.2525, -0.4162, 0.3067], [-0.2483, -0.3600, -0.4090, 0.0844, -0.2772]], requires_grad=True), Parameter containing: tensor([-0.2570, -0.3754], requires_grad=True), Parameter containing: tensor([[ 0.4550, -0.4577], [ 0.1782, 0.2454], [ 0.6931, -0.6003]], requires_grad=True), Parameter containing: tensor([ 0.4181, -0.2229, -0.5921], requires_grad=True)] Without using nn.Parameter, list(net.parmeters()) results as a parameters. What I am curious is that : I didn't used nn.Parameter command, why does it results? And to check any network's layers' parameters, then is .parameters() only way to check it? Maybe the result was self.linear1(in_dim,hid)'s weight, bias and so on, respectively. But is there any way to check what it is?
Instead of .parameters(), you can use .named_parameters() to get more information about the model: for name, param in net.named_parameters(): if param.requires_grad: print(name, param.data) Result: linear1.weight tensor([[ 0.3727, 0.2522, 0.2381, 0.3115, 0.0656], [-0.3322, 0.2024, 0.1089, -0.3370, 0.3917]]) linear1.bias tensor([-0.2089, 0.1105]) linear2.weight tensor([[-0.1090, 0.2564], [-0.3957, 0.6632], [-0.4036, 0.7066]]) linear2.bias tensor([ 0.1398, -0.0585, 0.4297])
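If you only want to inspect one layer, a couple of other hedged options (reusing the net from the question) are direct attribute access and the state dict, whose keys match named_parameters():
print(net.linear1.weight.shape, net.linear1.bias.shape)
print(net.state_dict()['linear2.weight'])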
https://stackoverflow.com/questions/70120715/
module 'torch.optim' has no attribute 'NAdam'
nadam = torch.optim.NAdam(model.parameters()) This gives the error AttributeError: module 'torch.optim' has no attribute 'NAdam'. My PyTorch version is '1.9.1+cu102', the python version is 3.7.11. VS code does not even suggest the optimizer but the documentation clearly mentions the optimizer. I can import other optimizers like Adam
https://pytorch.org/docs/1.9.1/optim.html From the official website, NAdam is not among the optimizers in pytorch v 1.9.1. Try upgrading to v 1.10.0, and your code should work just fine.
https://stackoverflow.com/questions/70124398/
PyTorch - Computing and printing the training accuracy of the QuickStart Tutorial
to learn PyTorch, I started with the Quickstart Tutorial. In the train() method, I noticed that they don't print the training accuracy during the training session. Only the training loss is printed. Coming from Keras, this was very unusual for me, since the training accuracy automatically printed when you call fit(). So, I decided to modify the tutorial code like the following to print the training accuracy: def train(dataloader, model, optimizer, loss_fn): model.train() size = len(dataloader.dataset) num_batches = len(dataloader) training_loss = 0.0 correct = 0.0 for batch, (imgs, labels) in enumerate(dataloader): imgs = imgs.to(device=device) labels = labels.to(device=device) predictions = model(imgs) loss = loss_fn(predictions, labels) optimizer.zero_grad() loss.backward() optimizer.step() # accumulate the training loss - each batch's loss will be added to trainin_loss training_loss += loss.item() # determines the number of correct predictions correct += (predictions.argmax(1) == labels).type(torch.float).sum().item() # end of for loop - all batches are processed # after all batches are processed, determine the average training loss training_loss = training_loss / num_batches # this would be the training accuracy: number of correct predictions / number of samples in dataset correct = correct / size print(f"{datetime.datetime.now()} Training Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {training_loss:>8f} \n") Is this ok ? As a beginner to PyTorch, I wanted to make sure that is correct before I start training my neural networks.
It all looks correct. Doing things like this should not influence training. loss.backward() computes your gradients, and anything not connected to that cannot change them. By the way, just run the training, you can't break anything :) (Yet. Just wait until you start building self-driving cars.). As far as I know, in Keras/TensorFlow fit() does not compute accuracy automatically either; you still have to specify this metric, for example when compiling the model or as a parameter to fit(), e.g.: model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Accuracy()])
https://stackoverflow.com/questions/70126791/
How can I implement a Mean Squared Error (MSE) metric for NNI (Neural Network Intelligence) in PyTorch?
I am somewhat new to pytorch since I have been using Keras for some years. Now I want to run a network architecture search (NAS) based on DARTS: Differentiable Architecture Search (see https://nni.readthedocs.io/en/stable/NAS/DARTS.html) and it is based on pytorch. All examples available use accuracy as a metric, but I would need to calculate MSE. This is one of the examples available: DartsTrainer(model, loss=criterion, metrics=lambda output, target: accuracy(output, target, topk=(1,)), optimizer=optim, num_epochs=args.epochs, dataset_train=dataset_train, dataset_valid=dataset_valid, batch_size=args.batch_size, log_frequency=args.log_frequency, unrolled=args.unrolled, callbacks=[LRSchedulerCallback(lr_scheduler), ArchitectureCheckpoint("./checkpoints")]) # where the accuracy is defined in a separate function: def accuracy(output, target, topk=(1,)): # Computes the precision@k for the specified values of k maxk = max(topk) batch_size = target.size(0) _, pred = output.topk(maxk, 1, True, True) pred = pred.t() # one-hot case if target.ndimension() > 1: target = target.max(1)[1] correct = pred.eq(target.view(1, -1).expand_as(pred)) res = dict() for k in topk: correct_k = correct[:k].reshape(-1).float().sum(0) res["acc{}".format(k)] = correct_k.mul_(1.0 / batch_size).item() return res As I see in pytorch it is more complicated to calculate metrics then in Keras. Can someone help please? As a trial, I wrote this code: def accuracy_mse(output, target): batch_size = target.size(0) diff = torch.square(output.t()-target)/batch_size diff = diff.sum() res = dict() res["acc_mse"] = diff return res It seems to be working, but I am not 100% sure about it ...
Finally I figured out that the transpose (.t()) was causing the problem, so the final code is: def accuracy_mse(output, target): """ Computes the mse """ batch_size = target.size(0) diff = torch.square(output-target)/batch_size diff = diff.sum() res = dict() res["mse"] = diff return res
https://stackoverflow.com/questions/70127615/
PyTorch equivalent to Tensorflow's tf.keras.dot() in terms of behaviour?
In tensorflow, if you have 2 tensors of shape NxTxD and NxDxT respectively (N=batch_size, T=SequenceLength, D=NumberOfFeatures), you can dot them and get an output of NxTxT, as demonstrated below: import tensorflow as tf import numpy as np x1 = np.arange(2 * 4 * 3).reshape(2, 4, 3) x2 = np.flip(np.arange(2 * 4 * 3).reshape(2, 3, 4), 1).copy() print(x1.shape, x2.shape) dotted = tf.keras.layers.Dot(axes=(2, 1))([x1, x2]) print(dotted.shape) dotted (2, 4, 3) (2, 3, 4) (2, 4, 4) <tf.Tensor: shape=(2, 4, 4), dtype=int32, numpy= array([[[ 4, 7, 10, 13], [ 40, 52, 64, 76], [ 76, 97, 118, 139], [ 112, 142, 172, 202]], [[ 616, 655, 694, 733], [ 760, 808, 856, 904], [ 904, 961, 1018, 1075], [1048, 1114, 1180, 1246]]])> If you try to do the same in PyTorch, the result is different: import torch import numpy as np x1 = torch.from_numpy(np.arange(2 * 4 * 3).reshape(2, 4, 3)) x2 = torch.from_numpy(np.flip(np.arange(2 * 4 * 3).reshape(2, 3, 4), 1).copy()) dotted = torch.tensordot(x1, x2, dims=([2], [1])) print(x1.shape, x2.shape) print(dotted.shape) dotted torch.Size([2, 4, 3]) torch.Size([2, 3, 4]) torch.Size([2, 4, 2, 4]) tensor([[[[ 4, 7, 10, 13], [ 40, 43, 46, 49]], [[ 40, 52, 64, 76], [ 184, 196, 208, 220]], [[ 76, 97, 118, 139], [ 328, 349, 370, 391]], [[ 112, 142, 172, 202], [ 472, 502, 532, 562]]], [[[ 148, 187, 226, 265], [ 616, 655, 694, 733]], [[ 184, 232, 280, 328], [ 760, 808, 856, 904]], [[ 220, 277, 334, 391], [ 904, 961, 1018, 1075]], [[ 256, 322, 388, 454], [1048, 1114, 1180, 1246]]]], dtype=torch.int32) Now, Tensorflow's results exist inside the results that pytorch produces (it's a subset of it). In fact, tensorflow's results is basically some kind of "diagonal" in higher dimensions. PyTorch's output is NxTxNxT, so to get exactly the same results as Tensorflow you can do: torch.stack([dotted[i, :, i, :] for i in range(len(dotted))]) tensor([[[ 4, 7, 10, 13], [ 40, 52, 64, 76], [ 76, 97, 118, 139], [ 112, 142, 172, 202]], [[ 616, 655, 694, 733], [ 760, 808, 856, 904], [ 904, 961, 1018, 1075], [1048, 1114, 1180, 1246]]], dtype=torch.int32) but this doesn't negate the fact that you're both: Allocating memory for a tensor of NxTxNxT instead of NxTxT The computational complexity/time increases dramatically Is there a way to get the same 3 dimensional results that tensorflow gives from pytorch, without it computing the 4 dimensional tensor?
I believe what you are looking for is batch matrix multiplication (bmm), which multiplies two batches of matrices - both tensors have to be 3D. https://pytorch.org/docs/stable/generated/torch.bmm.html
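A short sketch showing that torch.bmm reproduces the Keras Dot result for the shapes in the question (the cast to double is only an assumption made to avoid integer-matmul dtype issues):
import torch
import numpy as np

x1 = torch.from_numpy(np.arange(2 * 4 * 3).reshape(2, 4, 3)).double()
x2 = torch.from_numpy(np.flip(np.arange(2 * 4 * 3).reshape(2, 3, 4), 1).copy()).double()
dotted = torch.bmm(x1, x2)   # (2, 4, 3) @ (2, 3, 4) -> (2, 4, 4), one matmul per batch element
# torch.einsum('btd,bdk->btk', x1, x2) is an equivalent spelling
print(dotted.shape)          # torch.Size([2, 4, 4])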
https://stackoverflow.com/questions/70129174/
Create custom convolutional Loss function that only takes parts of the tensor
I have a convolutional network that gets images, but also a colored border on each image for additional information input to the network. Now I want to calculate the loss, but the usual loss function will also take the predicted border into account. The border is completely random, and is just an input to the system. I don’t want the model to think it has performed badly, when it predicted the wrong color. This happens in the DataLoader.getitem: def __getitem__(self, index): path = self.input_data[index] imgs_path = sorted(glob.glob(path + '/*.png')) #read light conditions lightConditions = [] with open(path +"/lightConditions.json", 'r') as file: lightConditions = json.load(file) #shift light conditions lightConditions.pop(0) lightConditions.append(False) frameNumber = 0 imgs = [] for img_path in imgs_path: img = cv2.imread(img_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) im_pil = Image.fromarray(img) #img = cv2.resize(img, (256,448)) if lightConditions[frameNumber] ==False: imgBorder = ImageOps.expand(im_pil,border = 6, fill='black') else: imgBorder = ImageOps.expand(im_pil, border = 6, fill='orange') img = np.asarray(imgBorder) img = cv2.resize(img, (256,448)) #img = cv2.resize(img, (0, 0), fx=0.5, fy=0.5, interpolation=cv2.INTER_CUBIC) #has been 0.5 for official data, new is fx = 2.63 and fy = 2.84 img_tensor = ToTensor()(img).float() imgs.append(img_tensor) frameNumber +=1 imgs = torch.stack(imgs, dim=0) return imgs And then this is done in training: for idx_epoch in range(startEpoch, nEpochs): #set epoch in dataloader for right shuffle ->set seed really random val_loader.sampler.set_epoch(idx_epoch) #Remember time for displaying time for epoch startTimeEpoch = datetime.now() i = 0 if processGPU==0: running_loss = 0 beenValuated = False for index, data_sr in enumerate(train_loader): #Transfer Data to GPU but don't block other processes because this only effects this single process data_sr = data_sr.cuda(processGPU, non_blocking=True) startTimeIteration = time.time() #Remove all dimensions of size 1 data_sr = data_sr.squeeze() # calculate the index of the input images and GT images num_f = len(data_sr) #If model_type is 0 -> only calculate one frame that is marked with gt if cfg.model_type == 0: idx_start = random.randint(-2, 2) idx_all = list(np.arange(idx_start, idx_start + num_f).clip(0, num_f - 1)) idx_gt = [idx_all.pop(int(num_f / 2))] idx_input = idx_all #Else when model_type is 1 then input frames 1,2,3 and predict frame 4 to number of cfg.dec_frames. Set all images that will be predicted to 'gt' images else: idx_all = np.arange(0, num_f) idx_input = list(idx_all[0:4]) idx_gt = list(idx_all[4:4+cfg.dec_frames]) imgs_input = data_sr[idx_input] imgs_gt = data_sr[idx_gt] # get predicted result imgs_pred = model(imgs_input) I use cfg.model_type = 1. This model will give me new images with also a colored border. And usually here follows a loss calculation: loss = criterion_mse(imgs_pred, imgs_gt) But I can no longer use this. Does anyone know how to write a custom loss function that only takes certain parts of the tensor into account or which parts in the tensor represent which images?
You can slice tensors the same way as in numpy. The dimensions of image batches are NCHW. If b is your border size and it is symmetric from all sides then just crop the tensors: loss = criterion_mse(imgs_pred[:, :, b:-b, b:-b] , imgs_gt[:, :, b:-b, b:-b])
https://stackoverflow.com/questions/70130479/
Policy Network returning different outputs for batched states and individual states
I am implementing REINFORCE applied to the CartPole-V0 openAI gym environment. I am trying 2 different implementations of the same, and the issue I am not able to resolve is the following: Upon passing a single state to the Policy Network, I get an output Tensor of size 2, containing the action probabilities of the 2 actions. However, when I pass a `batch of states' to the Policy Network to compute the output action probabilities of all of them, the values that I obtain are very different from when each state is individually passed to the network. Can someone help me understand the issue? My code for the same is below: (Note: this is NOT the complete REINFORCE algorithm -- I am aware that I need to compute the loss from the probabilities. But I am trying to understand the difference in the computation of the two probabilities, which I think should be the same, before proceeding.) # architecture of the Policy Network class PolicyNetwork(nn.Module): def __init__(self, state_dim, n_actions): super().__init__() self.n_actions = n_actions self.model = nn.Sequential( nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions), nn.Softmax(dim=0) ).float() def forward(self, X): return self.model(X) def train_reinforce_agent(env, episode_length, max_episodes, gamma, visualize_step, learning_rate=0.003): # define the parametric model for the Policy: this is an instantiation of the PolicyNetwork class model = PolicyNetwork(env.observation_space.shape[0], env.action_space.n) # define the optimizer for updating the weights of the Policy Network optimizer = optim.Adam(model.parameters(), lr=learning_rate) # hyperparameters of the reinforce agent EPISODE_LENGTH = episode_length MAX_EPISODES = max_episodes GAMMA = gamma VISUALIZE_STEP = max(1, visualize_step) score = [] for episode in range(MAX_EPISODES): # reset the environment curr_state = env.reset() done = False episode_t = [] # rollout an entire episode from the Policy Network pred_vals = [] for t in range(EPISODE_LENGTH): act_prob = model(torch.from_numpy(curr_state).float()) pred_vals.append(act_prob) action = np.random.choice(np.array(list(range(env.action_space.n))), p=act_prob.data.numpy()) prev_state = curr_state curr_state, _, done, info = env.step(action) episode_t.append((prev_state, action, t+1)) if done: break score.append(len(episode_t)) # reward_batch = torch.Tensor([r for (s,a,r) in episode_t]).flip(dims=(0,)) reward_batch = torch.Tensor([r for (s, a, r) in episode_t]) # compute the return for every state-action pair from the rewards at every time-step batch_Gvals = [] for i in range(len(episode_t)): new_Gval = 0 power = 0 for j in range(i, len(episode_t)): new_Gval = new_Gval + ((GAMMA ** power) * reward_batch[j]).numpy() power += 1 batch_Gvals.append(new_Gval) # normalize the returns for the batch expected_returns_batch = torch.FloatTensor(batch_Gvals) if torch.is_nonzero(expected_returns_batch.max()): expected_returns_batch /= expected_returns_batch.max() # batch the states, actions, prob after the episode state_batch = torch.Tensor([s for (s,a,r) in episode_t]) print("State batch:", state_batch) all_states = [s for (s,a,r) in episode_t] print("All states:", all_states) action_batch = torch.Tensor([a for (s,a,r) in episode_t]) pred_batch_v1 = model(state_batch) pred_batch_v2 = torch.stack(pred_vals) print("Batched state pred_vals:", pred_batch_v1) print("Individual state pred_vals:", pred_batch_v2) ### Why is this different from the above predicted values?? 
My main function where I pass the environment is: def main(): env = gym.make('CartPole-v0') # train a REINFORCE-agent to learn the optimal policy episode_length = 500 n_episodes = 500 gamma = 0.99 vis_steps = 50 train_reinforce_agent(env, episode_length, n_episodes, gamma, vis_steps)
In your policy, you apply Softmax over dim 0. This normalizes the probability of each action across your batch. You want to normalize across the actions instead, with dim=1.
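A small sketch of the fix (assuming you want the policy to work for both single states and batched states, dim=-1 is the safer choice):
import torch
import torch.nn as nn

probs = nn.Softmax(dim=-1)            # normalize over the last (action) dimension
single = probs(torch.randn(2))        # shape (2,), sums to 1
batch = probs(torch.randn(5, 2))      # shape (5, 2), each row sums to 1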
https://stackoverflow.com/questions/70131381/
How to minimize wrt one set of parameters and maximize wrt other set of parameters simultaneously in a training loop in pytorch?
I have a loss function that includes two sets of parameters to learn. One is a matrix, wrt which I want to maximize the loss, other is the set of parameters for logistic regression, wrt which I want to minimize the loss. In pytorch whenever I use loss.backward(), the loss is minimized wrt both sets of parameters and (-loss).backward() maximizes wrt both. How do I do minimax optimization wrt the sets of parameters in pytorch? Tensorflow probably has this concept of gradient_tape and tape.watch() concept. What's the alternative in pytorch?
You can refer to the gradient reversal idea from https://arxiv.org/abs/1409.7495. But the crux of the idea is this: you have some loss function l(X,Y) where X and Y are parameters. Now you want to update X to minimize loss and update Y to maximize loss, which can be seen as minimizing -l(X,Y). Essentially you want to update parameters X with dl/dX and Y with d(-l)/dY = -dl/dy. You can do this by doing a backpropagation step, modifying the gradients of Y, and applying the update. In pytorch terms, that would be: loss = compute_loss() loss.backward() # modify gradients of Y Y.grad.data = -Y.grad.data optimizer.step() optimizer.zero_grad()
https://stackoverflow.com/questions/70135992/
Speed up training deep learning model in pytorch
I am working on training a deep learning model with the PyTorch framework, and I added torch.no_grad to speed up the training phase: model.train() for epoch in range(epochs): for data, label in loader: data, label = data.to(device), label.to(device) with torch.no_grad(): out = model(data) out.requires_grad = True #model.zero_grad(), loss(), loss.backward, optim.step The speed is improved, but something goes wrong with the gradient update and the model doesn't converge correctly. Can someone explain to me why it doesn't work?
Simply, when using the torch.no_grad context manager, the gradients are not computed, so the model cannot receive any update. torch.no_grad is meant to be used in other cases, for example when evaluating the model. From the docs: Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward()
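A minimal sketch of where torch.no_grad belongs, reusing the names from the question (criterion, optimizer and val_loader are assumed names):
model.train()
for data, label in loader:
    data, label = data.to(device), label.to(device)
    out = model(data)                  # gradients are tracked here, as they must be
    loss = criterion(out, label)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():                  # disable autograd only for evaluation
    for data, label in val_loader:
        out = model(data.to(device))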
https://stackoverflow.com/questions/70140986/
Why am I getting the message "UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors."?
I'm building my own dataset: class MyDataset(Dataset): def __init__(self, folders): self.folders = folders def __len__(self): return len(self.folders) def __getitem__(self, item): pos_file_list = glob(self.folders[item] + "/*") positive_img = pos_file_list[1] positive_img = mpimg.imread(positive_img) positive_img = np.transpose(positive_img, (2,0,1)) # positive_img has the type: <class 'numpy.ndarray'>, shape: (3, 128, 128) return positive_img And I'm using it with: batch_size = 128 train_ds = MyDataset(train_folder_list) oTrainDL = DataLoader(train_ds, batch_size=batch_size, shuffle=True, num_workers=2) for i, imgs in enumerate(oTrainDL): break I'm getting the following warning: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:189.) return default_collate([torch.as_tensor(b) for b in batch]) Why am I getting this warning message? How can I fix it?
change from return positive_img to: return torch.tensor(positive_img)
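An alternative hedged fix is to copy the array on the NumPy side before it reaches the default collate function, so the resulting tensor owns writable memory:
positive_img = np.transpose(positive_img, (2, 0, 1)).copy()  # .copy() makes a writable, contiguous array
return torch.from_numpy(positive_img)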
https://stackoverflow.com/questions/70141850/
How can I solve this pytorch two devices error
I ran into a problem with PyTorch: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking arugment for argument mat1 in method wrapper_addmm) model = nn.Sequential( nn.Linear(622, 512), nn.ReLU(), nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 5), ).to(device) loss_fn = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(model.parameters(), lr=1e-4) train_loader = Data.DataLoader( dataset=train_dataset, batch_size=32, shuffle=True, num_workers=0, ) test_loader = Data.DataLoader( dataset=test_dataset, batch_size=100, shuffle=True, num_workers=0, ) best_acc = 0 best_model = model.cpu().state_dict().copy() # train_acc = 0 # test_acc = 0 for epoch in range(20): for step, (batch_x, batch_y) in enumerate(train_loader): batch_x = batch_x.to(device) batch_y = batch_y.to(device) print(batch_x) print(batch_x.device, 0) out = model(batch_x.to(device)).cuda() print(out.device, 1) loss = loss_fn(out, batch_y.long()) optimizer.zero_grad() loss.backward() optimizer.step() train_acc = np.mean((torch.argmax(out, 1) == batch_y).cpu().numpy()) with torch.no_grad(): for batch_x, batch_y in test_loader: batch_x = batch_x.to(device) batch_y = batch_y.to(device) print(batch_x.device, 2) out = model(batch_x) print(batch_x.device, 3) test_acc = np.mean((torch.argmax(out, 1) == batch_y).cpu().numpy()) if test_acc > best_acc: best_acc = test_acc best_model = model.cpu().state_dict().copy() Can someone help explain that, I've been working on this all day....
Note that .to() has different behavior when applied to nn.Modules and to torch.tensors: while for torch.tensor .to(device) creates a copy of the tensor on the device, with nn.Module .to(device) operates in place. In your code, you move your model to CPU: best_model = model.cpu().state_dict().copy() make sure you move the model back to device after moving it to cpu.
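A minimal sketch of a safer way to snapshot the best weights without moving the model off the GPU (using copy.deepcopy of the state dict is an assumption about what you want to keep):
import copy
if test_acc > best_acc:
    best_acc = test_acc
    best_model = copy.deepcopy(model.state_dict())  # snapshot without calling model.cpu()
    # or keep an explicit CPU copy of the weights:
    # best_model = {k: v.detach().cpu().clone() for k, v in model.state_dict().items()}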
https://stackoverflow.com/questions/70142189/
How to call "backward" in a loop with 2 optimizers?
I have 2 networks that I'm trying to update: import torch import torch.nn as nn import torch.optim as optim from torch.distributions import Normal import matplotlib.pyplot as plt from tqdm import tqdm softplus = torch.nn.Softplus() class Model_RL(nn.Module): def __init__(self): super(Model_RL, self).__init__() self.fc1 = nn.Linear(3, 20) self.fc2 = nn.Linear(20, 30) self.fc3 = nn.Linear(30, 2) def forward(self, x): x = torch.relu(self.fc1(x)) x = torch.relu(self.fc2(x)) x = softplus(self.fc3(x)) return x class Model_FA(nn.Module): def __init__(self): super(Model_FA, self).__init__() self.fc1 = nn.Linear(1, 20) self.fc2 = nn.Linear(20, 30) self.fc3 = nn.Linear(30, 1) def forward(self, x): x = torch.relu(self.fc1(x)) x = torch.relu(self.fc2(x)) x = softplus(self.fc3(x)) return x net_RL = Model_RL() net_FA = Model_FA() The training loop is inps = torch.tensor([[1.0]]) y = torch.tensor(10.0) opt_RL = optim.Adam(net_RL.parameters()) opt_FA = optim.Adam(net_FA.parameters()) baseline = 0 baseline_lr = 0.1 epochs = 100 for _ in tqdm(range(epochs)): for inp in inps: with torch.no_grad(): net_FA(inp) for layer in range(3): out_RL = net_RL(torch.tensor([1.0,2.0,3.0])) mu, std = out_RL dist = Normal(mu, std) update_values = dist.sample() log_p = dist.log_prob(update_values).mean() out = net_FA(inp) reward = -torch.square((y - out)) baseline = (1 - baseline_lr) * baseline + baseline_lr * reward loss_RL = - (reward - baseline) * log_p opt_RL.zero_grad() opt_FA.zero_grad() loss_RL.backward() opt_RL.step() out = net_FA(inp) loss_FA = torch.mean(torch.square(y - out)) opt_RL.zero_grad() opt_FA.zero_grad() loss_FA.backward() opt_FA.step() print("Mean: " + str(mu.detach().numpy()) + ", Goal: " + str(y)) print("Standard deviation: " + str(softplus(std).detach().numpy()) + ", Goal: 0ish") I'm getting 2 main errors: RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward()... And when I add retain_graph=True to both backward calls I get the following RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [30, 1]], which is output 0 of TBackward, is at version 5; expected version 4 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True) My main question is how can I make this training work? But intermediate questions are: why does retain_graph=True is needed here if I'm using a loop? From here: "there is no need to use retain_graph=True. In each loop, a new graph is created" Why does it seem as if the retain_graph=True makes training significantly slower (if I remove the other backward call)? This doesn't really makes sense to me as in each epoch a new computational graph should be created (and not just one that is being extended).
I think the line baseline = (1 - baseline_lr) * baseline + baseline_lr * reward causing the error. Because: previous state of baseline is used to get new state of baseline. PyTorch will track all these states inside a graph. backward will flush the graph. variable baseline of time - t + 1 will try to backpropagate through baseline of time - t. But at time - t + 1 graph behind baseline of time - t doesn't exist. This leads to error Solution: As you are not optimizing variable baseline or anything behind baseline Initializebaseline as torch tensor. detach it from graph before updating state. Try this: # intialize baseline as torch tensor baseline = torch.tensor(0.) baseline_lr = 0.1 epochs = 100 for _ in tqdm(range(epochs)): for inp in inps: with torch.no_grad(): net_FA(inp) for layer in range(3): out_RL = net_RL(torch.tensor([1.0,2.0,3.0])) mu, std = out_RL dist = Normal(mu, std) update_values = dist.sample() log_p = dist.log_prob(update_values).mean() out = net_FA(inp) reward = -torch.square((y - out)) # detach baseline from graph baseline = (1 - baseline_lr) * baseline.detach() + baseline_lr * reward loss_RL = - (reward - baseline) * log_p opt_RL.zero_grad() opt_FA.zero_grad() loss_RL.backward() opt_RL.step() out = net_FA(inp) loss_FA = torch.mean(torch.square(y - out)) opt_RL.zero_grad() opt_FA.zero_grad() loss_FA.backward() opt_FA.step() But actually I don't know why you are updating the networks, 3 times for the same input?
https://stackoverflow.com/questions/70147878/
I made code with the append command to compare two tensors with PyTorch. What's wrong with it?
xxx=torch.tensor([True,True,False,True]) xxxz=torch.tensor([True,False,False,True]) def zzz(): asdf=[] for i in range(4): if xxx[i] == xxxz[i] and xxx[i] == True: asdf.append(i) return asdf zzz() What I expected was: [0,3]. But the result was : [0]. I don't get what's wrong with it. Did I use append wrongly?
Your return statement is inside for and if. Pull it outside at the level of function definition def zzz(): asdf=[] for i in range(4): if xxx[i] == xxxz[i] and xxx[i] == True: asdf.append(i) # return asdf # NOPE return asdf # THIS Earlier, your function was returning just after encountering 0. That's why 3 was missing.
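As a hedged aside, the whole loop can also be replaced with a vectorized one-liner on the tensors from the question:
torch.where(xxx & xxxz)[0]   # tensor([0, 3]) -- elementwise AND of the two bool tensors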
https://stackoverflow.com/questions/70151335/
PyTorch installation asks for python=3.1 . Python Version installed: 3.10.0
(pgqa) raphy@pc:~/pythonMatters/PathGenerator$ conda install pytorch torchvision torchaudio cpuonly -c pytorch Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Collecting package metadata (repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. PackagesNotFoundError: The following packages are not available from current channels: - python=3.1 Current channels: - https://conda.anaconda.org/pytorch/linux-64 - https://conda.anaconda.org/pytorch/noarch - https://repo.anaconda.com/pkgs/main/linux-64 - https://repo.anaconda.com/pkgs/main/noarch - https://repo.anaconda.com/pkgs/r/linux-64 - https://repo.anaconda.com/pkgs/r/noarch To search for alternate channels that may provide the conda package you're looking for, navigate to https://anaconda.org and use the search bar at the top of the page. (pgqa) raphy@pc:~/pythonMatters/PathGenerator$ python3 --version Python 3.10.0 Tried installing PyTorch with pip but got these errors: (pgqa) raphy@pc:~/pythonMatters/PathGenerator$ pip3 install torch==1.10.0+cpu torchvision==0.11.1+cpu torchaudio==0.10.0+cpu -f https://download.pytorch.org /whl/cpu/torch_stable.html Looking in links: https://download.pytorch.org/whl/cpu/torch_stable.html ERROR: Could not find a version that satisfies the requirement torch==1.10.0+cpu (from versions: none) ERROR: No matching distribution found for torch==1.10.0+cpu O.S.: Ubuntu 20.04 pip version: (pgqa) raphy@pc:~$ pip --version pip 21.2.4 from /home/raphy/anaconda3/envs/pgqa/lib/python3.10/site-packages/pip (python 3.10) (pgqa) raphy@pc:~$ pip3 --version pip 21.2.4 from /home/raphy/anaconda3/envs/pgqa/lib/python3.10/site-packages/pip (python 3.10) Python Version: (pgqa) raphy@pc:~$ python --version Python 3.10.0 Why does it asks for python3.1 if the latest stable python version is 3.10 ? How to solve the problem?
Conda 4.10 is incompatible with python 3.10. The reason you cannot install many packages with conda is that conda 4.10 has a known bug handling python 3.10. Update your conda to 4.11 or revert your python to 3.9 or older. Read more here in this SO answer. As a workaround, you can still try to use pip.
https://stackoverflow.com/questions/70154524/
RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead
I wrote the code below using PyTorch and ran into a runtime error: tns = torch.tensor([1,0,1]) tns.mean() --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-666-194e5ab56931> in <module> ----> 1 tns.mean() RuntimeError: mean(): input dtype should be either floating point or complex dtypes. Got Long instead. However, if I change the tensor to float, the error goes away: tns = torch.tensor([1.,0,1]) tns.mean() --------------------------------------------------------------------------- tensor(0.6667) My question is why the error happens. The data type of the first tensor is int64 rather than Long, so why does PyTorch treat it as Long?
This is because torch.int64 and torch.long both refer to the same data type, of 64-bit signed integers. See here for an overview of all data types.
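A short sketch of the usual workarounds (the dtype keyword on mean() is available in recent PyTorch versions):
tns = torch.tensor([1, 0, 1])
print(tns.float().mean())              # tensor(0.6667), cast first
print(tns.mean(dtype=torch.float32))   # keep tns as int64, cast only for the reduction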
https://stackoverflow.com/questions/70159221/
Why should I use a 2**N value and how do I choose the right one?
I'm working through the lessons on building a neural network and I'm confused as to why 512 is used for the linear_relu_stack in the example code: class NeuralNetwork(nn.Module): def __init__(self): super(NeuralNetwork, self).__init__() self.flatten = nn.Flatten() self.linear_relu_stack = nn.Sequential( nn.Linear(28*28, 512), nn.ReLU(), nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10), nn.ReLU() ) def forward(self, x): x = self.flatten(x) logits = self.linear_relu_stack(x) return logits I started googling around and saw many examples of the torch.nn.Linear function using various values of 2**N but it isn't clear to me why they are using powers of 2 nor how they are choosing which value to use.
The reason is how hardware makes the process. In deep learning matrix operations are the main computations and source of floating point operations (FLOPs). Single Instruction Multiple Data (SIMD) operations in CPUs happen in batch sizes, which are powers of 2. Consider take a look if you are interested: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/37631.pdf And for GPUs: https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html Memory allocated through the CUDA Runtime API, such as via cudaMalloc(), is guaranteed to be aligned to at least 256 bytes. Therefore, choosing sensible thread block sizes, such as multiples of the warp size (i.e., 32 on current GPUs), facilitates memory accesses by warps that are properly aligned. (Consider what would happen to the memory addresses accessed by the second, third, and subsequent thread blocks if the thread block size was not a multiple of warp size, for example.) This means that any multiple of 32 will optimize the memory access, and thus, the processing speed, while you are using a gpu. About the right value, pyramidal shape usually works better, because as you go deeper, the neural network tends to create internal representations of the transformed data, in an expected hierarchical, thus, pyramidal shape. So a good guess is to use decreasing amounts of neurons at each layer as you come close to the output, e.g: self.flatten = nn.Flatten() self.linear_relu_stack = nn.Sequential( nn.Linear(28*28, 512), nn.ReLU(), nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 10), nn.ReLU() ) But there is no general rule and you can find whole fields of study (like Neural Architecture Search) about how to find optimal hyper-parameters for neural networks. you can take a look here for some deeper information: https://arxiv.org/pdf/1608.04064.pdf
https://stackoverflow.com/questions/70159370/
How does a gradient backpropagates through random samples?
I'm learning about policy gradients and I'm having hard time understanding how does the gradient passes through a random operation. From here: It is not possible to directly backpropagate through random samples. However, there are two main methods for creating surrogate functions that can be backpropagated through. They have an example of the score function: probs = policy_network(state) # Note that this is equivalent to what used to be called multinomial m = Categorical(probs) action = m.sample() next_state, reward = env.step(action) loss = -m.log_prob(action) * reward loss.backward() Which I tried to create an example of: import torch import torch.nn as nn import torch.optim as optim from torch.distributions import Normal import matplotlib.pyplot as plt from tqdm import tqdm softplus = torch.nn.Softplus() class Model_RL(nn.Module): def __init__(self): super(Model_RL, self).__init__() self.fc1 = nn.Linear(1, 20) self.fc2 = nn.Linear(20, 30) self.fc3 = nn.Linear(30, 2) def forward(self, x): x1 = self.fc1(x) x = torch.relu(x1) x2 = self.fc2(x) x = torch.relu(x2) x3 = softplus(self.fc3(x)) return x3, x2, x1 # basic net_RL = Model_RL() features = torch.tensor([1.0]) x = torch.tensor([1.0]) y = torch.tensor(3.0) baseline = 0 baseline_lr = 0.1 epochs = 3 opt_RL = optim.Adam(net_RL.parameters(), lr=1e-3) losses = [] xs = [] for _ in tqdm(range(epochs)): out_RL = net_RL(x) mu, std = out_RL[0] dist = Normal(mu, std) print(dist) a = dist.sample() log_p = dist.log_prob(a) out = features * a reward = -torch.square((y - out)) baseline = (1-baseline_lr)*baseline + baseline_lr*reward loss = -(reward-baseline)*log_p opt_RL.zero_grad() loss.backward() opt_RL.step() losses.append(loss.item()) This seems to work magically fine which again, I don't understand how the gradient passes through as they mentioned that it can't pass through the random operation (but then somehow it does). Now since the gradient can't flow through the random operation I tried to replace mu, std = out_RL[0] with mu, std = out_RL[0].detach() and that caused the error: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn. If the gradient doesn't pass through the random operation, I don't understand why would detaching a tensor before the operation matter.
It is indeed true that sampling is not a differentiable operation per se. However, there exist two (broad) ways to mitigate this - [1] The REINFORCE way and [2] The reparameterization way. Since your example is related to [1], I will stick my answer to REINFORCE. What REINFORCE does is it entirely gets rid of sampling operation in the computation graph. However, the sampling operation remains outside the graph. So, your statement .. how does the gradient passes through a random operation .. isn't correct. It does not pass through any random operation. Let's see your example mu, std = out_RL[0] dist = Normal(mu, std) a = dist.sample() log_p = dist.log_prob(a) Computation of a does not involve creating a computation graph. It is technically equivalent to plugging in some offline data from a dataset (as in supervised learning) mu, std = out_RL[0] dist = Normal(mu, std) # a = dist.sample() a = torch.tensor([1.23, 4.01, -1.2, ...], device='cuda') log_p = dist.log_prob(a) Since we don't have offline data beforehand, we create them on the fly and the .sample() method does merely that. So, there is no random operation on the graph. The log_p depends on mu and std deterministically, just like any standard computation graph. If you cut the connection like this mu, std = out_RL[0].detach() .. of course it is going to complaint. Also, do not get confused by this operation dist = Normal(mu, std) log_p = dist.log_prob(a) as it does not contain any randomness by itself. This is merely a shortcut for writing the tedious log-likelihood formula for Normal distribution.
https://stackoverflow.com/questions/70163823/
How to correctly replace the real part of the FFT?
I try to reproduce a data augmentation method, which comes from the paper: Qinwei Xu, Ruipeng Zhang, Ya Zhang, Yanfeng Wang and Qi Tian "A Fourier-based Framework for Domain Generalization" (CVPR 2021). It is mentioned in the paper that they set the real part to a constant (the constant in the paper is 20000) to eliminate the amplitude and realize the reconstruction of the image relying only on the phase. Below is my code: img = process_img("./data/house.jpg", 128) img_fft = torch.fft.fft2(img, dim=(-2, -1)) amp = torch.full(img_fft.shape, 200000) img_fft.real = amp img_ifft = torch.fft.ifft2(img_fft, dim=(-2, -1)) img_ifft = img_ifft.squeeze(0) img_ifft = img_ifft.transpose(2, 0) img_ifft = np.array(img_ifft) cv2.imshow("", img_ifft.real) Among them, the process_img function is only used to convert ndarray to tensor, as shown below: loader = transforms.Compose([transforms.ToTensor()]) def process_img(img_path, img_size): img = cv2.imread(img_path) img = cv2.resize(img, (img_size, img_size)) img = img.astype(np.float32) / 255.0 img = loader(img) img = img.unsqueeze(0) return img The first is the original image, the second is the image provided by the paper, and the third is the image generated by my code: It can be seen that the images generated by my method are very different from those provided in the paper, and there are some artifacts. Why is there such a result?
You are confusing the "real"/"imaginary" parts of complex numbers with the "amplitude"/"phase" representation. Here's the quick guide: A complex number z can be expressed by either a sum of its real part x and its imaginary part y: z = x + j y Alternatively, one can express the same complex number z as a rotated vector with amplitude r and an angle phi: z = r exp(j phi) where r = sqrt(x^2 + y^2) and phi = atan2(y, x). This image (from Wikipedia) explains this visually: In your code, you replace the "real" part, but in the paper, they suggest replacing the "amplitude". If you want to replace the amplitude: const_amp = ... # whatever the constant amplitude you want new_fft = const_amp * torch.exp(1j * img_fft.angle()) # reconstruct the new image from the modulated Fourier: img_ifft = torch.fft.ifft2(new_fft, dim=(-2, -1)) This results in the following image:
https://stackoverflow.com/questions/70165427/
PyTorch model tracing not working: We don't have an op for aten::fill_
I am stuck on tracing a PyTorch model on this specific module with an error: RuntimeError: 0INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":611, please report a bug to PyTorch. We don't have an op for aten::fill_ but it isn't a special case. Argument types: Tensor, bool, Candidates: aten::fill_.Scalar(Tensor(a!) self, Scalar value) -> (Tensor(a!)) aten::fill_.Tensor(Tensor(a!) self, Tensor value) -> (Tensor(a!)) Here is the reduced code example to reproduce the bug: import torch import torch.nn.functional as F import torch.nn as nn class SurroundPattern(nn.Module): def __init__(self, crop_size=1./2): super(SurroundPattern, self).__init__() self.crop_size = crop_size def forward(self, x, s): H,W = x.shape[2:] crop_h = (int(H / 2 - self.crop_size / 2 * H), int(H / 2 + self.crop_size / 2 * H)) crop_w = (int(W / 2 - self.crop_size / 2 * W), int(W / 2 + self.crop_size / 2 * W)) x_mask = torch.zeros(H,W,device=x.device, dtype=torch.bool) x_mask[crop_h[0] : crop_h[1], crop_w[0] : crop_w[1]] = True inside_indices = torch.where(x_mask) inside_part = x[:, :, inside_indices[0], inside_indices[1]] inside_feat = inside_part.mean(2) outside_indices = torch.where(~x_mask) outside_part = x[:, :, outside_indices[0], outside_indices[1]] outside_feat = outside_part.mean(2) fused = torch.stack([inside_feat, outside_feat], dim=2).unsqueeze(3) if s is None: return fused SH,SW = s.shape[2:] crop_sh = (int(SH / 2 - self.crop_size / 2 * SH), int(SH / 2 + self.crop_size / 2 * SH)) crop_sw = (int(SW / 2 - self.crop_size / 2 * SW), int(SW / 2 + self.crop_size / 2 * SW)) s_mask = torch.zeros(SH, SW, device=s.device, dtype=torch.bool) s_mask[crop_sh[0] : crop_sh[1], crop_sw[0] : crop_sw[1]] = True s_inside_indices = torch.where(s_mask) inside_sal = s[:, :, s_inside_indices[0], s_inside_indices[1]].flatten(1) s_outside_indices = torch.where(~s_mask) outside_sal = s[:, :, s_outside_indices[0], s_outside_indices[1]].flatten(1) if outside_sal.shape != inside_sal.shape: outside_sal = F.adaptive_max_pool1d(outside_sal.unsqueeze(1), output_size=784) outside_sal = outside_sal.squeeze(1) fused_sal = torch.stack([inside_sal, outside_sal], dim=2).unsqueeze(3) return fused, fused_sal x = torch.randn(2, 512, 7, 7) s = torch.randn(2, 1, 56, 56) patt = SurroundPattern() traced_cell = torch.jit.trace(patt, (x, s)) print(traced_cell) How to figure out where exactly the problem is? Is there a way to fix it with another functions? Thank you!
The problem is that you try to fill in a bool Tensor, which is apparently not yet supported in jit (or is a bug). Replacing this: x_mask= torch.zeros(H,W,device=x.device, dtype=torch.bool) x_mask[crop_h[0] : crop_h[1], crop_w[0] : crop_w[1]] = True with: x_mask= torch.zeros(H,W,device=x.device) x_mask[crop_h[0] : crop_h[1], crop_w[0] : crop_w[1]] = 1 should resolve the error. This is of course suboptimal compared to the intended bool Tensor type, but you should still be able to perform any other operation you would be doing with a torch.BoolTensor
https://stackoverflow.com/questions/70166946/
How to load custom model in pytorch
I'm trying to load my pretrained model (yolov5n) and test it with the following code in PyTorch: import os import torch model = torch.load(os.getcwd()+'/weights/last.pt') # Images imgs = ['https://example.com/img.jpg'] # Inference results = model(imgs) # Results results.print() results.save() # or .show() results.xyxy[0] # img1 predictions (tensor) results.pandas().xyxy[0] # img1 predictions (pandas) and I'm getting the following error: ModuleNotFoundError Traceback (most recent call last) in 3 import torch 4 ----> 5 model = torch.load(os.getcwd()+'/weights/last.pt') My model is located at /weights/last.pt, and I'm not sure what I'm doing wrong. Could you please tell me what is missing in my code.
You should be able to find the weights in this directory: yolov5/runs/train/exp/weights/last.pt Then you load the weights with a line like this: model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp/weights/last.pt', force_reload=True) I have an example of a notebook that loads custom models and from that directory after training the model here https://github.com/pylabel-project/samples/blob/main/pylabeler.ipynb
https://stackoverflow.com/questions/70167811/
pytorch assigning to results of gather
Suppose I have the following matrix, and its top 3 results across each row: p = torch.randn(5, 7) val, idx = p.topk(3, dim=-1) I wish to assign x to the top 3 results of each row, where x is: x = torch.randn(5, 3) Now I know that doing torch.gather(p, -1, idx) will get me the correct elements that I want to replace, but I cannot assign to the result of gather. What is the best way of getting the effect of: torch.gather(p, -1, idx) = x
One solution is to use list-style indexing to p: # create dummy indices to index the correct row (we need one value per value in idx) row_idx = torch.arange(len(p)).unsqueeze(1).repeat(1,3) # use flattened views p[row_idx.view(-1),idx.view(-1)] = x.view(-1) List-based indexing does require contiguous memory tensors, so you may pay a small computational penalty if p is non-contiguous, but I suspect any non-looping solution to this indexing task would have the same requirement.
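A hedged alternative that avoids building the row indices by hand is an in-place scatter_ along the last dimension:
p = torch.randn(5, 7)
val, idx = p.topk(3, dim=-1)
x = torch.randn(5, 3)
p.scatter_(-1, idx, x)   # writes x into p at each row's top-3 positions, in place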
https://stackoverflow.com/questions/70168932/
What are total_loss, loss_cls, etc.?
I want to train a custom dataset using faster_rcnn or mask_rcnn with PyTorch and Detectron2. Everything works well, but I want to know what the results mean. [11/29 20:16:31 d2.utils.events]: eta: 0:24:04 iter: 19 total_loss: 9.6 loss_cls: 1.5 loss_box_reg: 0.001034 loss_mask: 0.6936 loss_rpn_cls: 6.773 loss_rpn_loc: 0.5983 time: 1.4664 data_time: 0.0702 lr: 4.9953e-06 max_mem: 2447M I have this as a result and I want to know what all of this means.
Those are metrics printed out at every iteration of the training loop. The most important ones are the loss values, but below are basic descriptions of them all (eta and iter are self-explanatory I think). total_loss: This is a weighted sum of the following individual losses calculated during the iteration. By default, the weights are all one. loss_cls: Classification loss in the ROI head. Measures the loss for box classification, i.e., how good the model is at labelling a predicted box with the correct class. loss_box_reg: Localisation loss in the ROI head. Measures the loss for box localisation (predicted location vs true location). loss_rpn_cls: Classification loss in the Region Proposal Network. Measures the "objectness" loss, i.e., how good the RPN is at labelling the anchor boxes as foreground or background. loss_rpn_loc: Localisation loss in the Region Proposal Network. Measures the loss for localisation of the predicted regions in the RPN. loss_mask: Mask loss in the Mask head. Measures how "correct" the predicted binary masks are. For more details on the losses (1) and (2), take a look at the Fast R-CNN paper and the code. For more details on the losses (3) and (4), take a look at the Faster R-CNN paper and the code. For more details on the loss (5), take a look at the Mask R-CNN paper and the code. time: Time taken by the iteration. data_time: Time taken by the dataloader in that iteration. lr: The learning rate in that iteration. max_mem: Maximum GPU memory occupied by tensors in bytes.
https://stackoverflow.com/questions/70169219/
Parameters for Optimizer in ShuffleNet
In ShuffleNet the parameters that are sent to the optimizer first pass through the following function, which builds a list of parameter-group dictionaries: def get_parameters(model): group_no_weight_decay = [] group_weight_decay = [] for pname, p in model.named_parameters(): if pname.find('weight') >= 0 and len(p.size()) > 1: # print('include ', pname, p.size()) group_weight_decay.append(p) else: # print('not include ', pname, p.size()) group_no_weight_decay.append(p) assert len(list(model.parameters())) == len(group_weight_decay) + len(group_no_weight_decay) groups = [dict(params=group_weight_decay), dict(params=group_no_weight_decay, weight_decay=0.)] return groups and then: optimizer = optim.Adam(get_parameters(model), lr=0.01) But what is the difference between this function and just using model.parameters() instead of get_parameters(model)?
With model.parameters() you get all the parameters of the model in a single "group", and thus all hyperparameters of the optimizer are the same for all of model.parameters(). In contrast, get_parameters() splits model.parameters() into two groups: group_weight_decay and group_no_weight_decay. As the names suggest, for the parameters of the second group the optimizer sets the weight_decay hyperparameter to zero.
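To make the difference concrete, here is a minimal sketch (the model below is an arbitrary stand-in, not ShuffleNet) showing how a single optimizer applies different weight_decay values to the two groups, mirroring what get_parameters() produces:
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(8, 4), nn.BatchNorm1d(4), nn.Linear(4, 2))  # any model works here

# single group: every parameter shares the same hyperparameters
opt_single = optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)

# two groups: multi-dimensional weights get weight decay, everything else (biases, norm params) does not
decay, no_decay = [], []
for name, p in model.named_parameters():
    (decay if name.endswith('weight') and p.dim() > 1 else no_decay).append(p)

opt_grouped = optim.Adam(
    [dict(params=decay),                        # uses the optimizer-level defaults below
     dict(params=no_decay, weight_decay=0.0)],  # overrides weight_decay for this group only
    lr=0.01, weight_decay=1e-4)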
https://stackoverflow.com/questions/70170572/
how to send an numpy array or a pytorch Tensor through http post request using requests module and Flask
I have an image and I want to send it to the server. I'm using the requests module to perform a simple post request as follows (info is a dictionary): import requests print(type(info["array_image"])) print(type(info["visual_features"])) response = requests.post("url", data=info) output : <class 'numpy.ndarray'> <class 'torch.Tensor'> On the server side I'm trying to receive them as arrays at least: from flask import Flask, request @app.route('/path', methods=['POST']) def function_name(): visual_features = request.form['visual_features'] array_image = request.form['array_image'] print(type(array_image)) print(type(visual_features)) output: <class 'str'> <class 'str'> I want to get a bytes array to build the image, but what I'm getting is a string... If I don't find a way I'll encode the arrays in base64 and then decode them on the server...
Thanks to @praba230890 for giving me an easy to follow example. I would still write the solution here down, since the provided link doesn't fit my case exactly. import pickle import io bytes_image = pickle.dumps(info["array_image"]) stream = io.BytesIO(bytes_image) files = {"bytes_image": stream} info["array_image"] = None response = http.post("url", data=info, files=files) and in the server side: from flask import Flask, request @app.route('/path', methods=['POST']) def function_name(): image = request.files.get('bytes_image') bytes_image = image.read() if you want to get the image from a file, then: requests.post("http://localhost:5000/predict", files={"file": open('<PATH/TO/.jpg/FILE>/cat.jpg','rb')}) The Solution I'm currently using: remember info["array_image"] is a numpy array, and info is a dictionary import io info["image_shape_width"] = info["array_image"].shape[0] info["image_shape_height"] = info["array_image"].shape[1] bytes_image = info["array_image"].tobytes() stream = io.BytesIO(bytes_image) files = {"bytes_image": stream} info["array_image"] = None response = http.post(self.ip + "path", data=info, files=files) then receive it from flask import Flask, request import numpy as np @app.route('/path', methods=['POST']) def function_name(): bytes_image = request.files.get('bytes_image') bytes_image = bytes_image.read() array_image = np.frombuffer(bytes_image, dtype=dtype) shape = (int(request.form['image_shape_width']), int(request.form['image_shape_height']), 3) array_image = np.reshape(array_image, shape) image = Image.fromarray(array_image)
https://stackoverflow.com/questions/70174676/
Projected gradient descent on probability simplex in pytorch
I have a matrix A of dimension 1000x70000. My loss function includes A and I want to find the optimal value of A using gradient descent, where the constraint is that the rows of A remain in the probability simplex (i.e. every row sums up to 1). I have initialised A as given below A=np.random.dirichlet(np.ones(70000),1000) A=torch.tensor(A,requires_grad=True) and my training loop looks as given below for epoch in range(500): y_pred=forward(X) y=model(torch.mm(A.float(),X)) l=loss(y,y_pred) l.backward() A.grad.data=-A.grad.data optimizer.step() optimizer.zero_grad() if epoch%2==0: print("Loss",l,"\n")
An easy way to accomplish that is not to use A directly for computation but use a row normalized version of A. # you can keep 'A' unconstrained A = torch.rand(1000, 70000, requires_grad=True) then divide each row by its summation (keeping row sum always 1) for epoch in range(500): y_pred = forward(X) B = A / A.sum(-1, keepdim=True) # normalize rows manually y = model(torch.mm(B, X)) l = loss(y,y_pred) ... So now, at each step, B is the constrained matrix - i.e. the quantity of your interest. However, the optimization would still be on (unconstrained) A. Edit: @Umang Gupta reminded me in the comment section that OP wanted to have "probability simplex" which means there would be another constraint, i.e. A >= 0. To accomplish that, you may simply apply some appropriate activation function (e.g. torch.exp, torch.sigmoid) on A in each iteration A_ = torch.exp(A) B = A_ / A_.sum(-1, keepdim=True) # normalize rows the exact choice of function depends on the behaviour of training dynamics which needs to be experimented with.
https://stackoverflow.com/questions/70175196/
Creating and Use a PyTorch DataLoader
I am trying to create a PyTorch Dataset and DataLoader object using a sample data. This is the tab seperated dataset: 1 0 0.171429 1 0 0 0.966805 0 0 1 0.085714 0 1 0 0.188797 1 1 0 0.000000 0 0 1 0.690871 2 1 0 0.057143 0 1 0 1.000000 1 0 1 1.000000 0 0 1 0.016598 2 1 0 0.171429 1 0 0 0.802905 0 0 1 0.171429 1 0 0 0.966805 1 1 0 0.257143 0 1 0 0.329876 0 This is the code to create the Dataset above and DataLoader object: import numpy as np import torch as T device = T.device("cpu") # to Tensor or Module # --------------------------------------------------- # predictors and label in same file # data has been normalized and encoded like: # sex age region income politic # [0] [2] [3] [6] [7] # 1 0 0.057143 0 1 0 0.690871 2 class PeopleDataset(T.utils.data.Dataset): def __init__(self, src_file, num_rows=None): x_tmp = np.loadtxt(src_file, max_rows=num_rows, usecols=range(0,7), delimiter="\t", skiprows=0, dtype=np.float32) y_tmp = np.loadtxt(src_file, max_rows=num_rows, usecols=7, delimiter="\t", skiprows=0, dtype=np.long) self.x_data = T.tensor(x_tmp, dtype=T.float32).to(device) self.y_data = T.tensor(y_tmp, dtype=T.long).to(device) def __len__(self): return len(self.x_data) # required def __getitem__(self, idx): if T.is_tensor(idx): idx = idx.tolist() preds = self.x_data[idx, 0:7] pol = self.y_data[idx] sample = \ { 'predictors' : preds, 'political' : pol } return sample # --------------------------------------------------- def main(): print("\nBegin PyTorch DataLoader demo ") # 0. miscellaneous prep T.manual_seed(0) np.random.seed(0) print("\nSource data looks like: ") print("1 0 0.171429 1 0 0 0.966805 0") print("0 1 0.085714 0 1 0 0.188797 1") print(" . . . ") # 1. create Dataset and DataLoader object print("\nCreating Dataset and DataLoader ") train_file = "people_train.txt" train_ds = PeopleDataset(train_file, num_rows=8) bat_size = 3 train_ldr = T.utils.data.DataLoader(train_ds, batch_size=bat_size, shuffle=True) # 2. iterate thru training data twice for epoch in range(2): print("\n==============================\n") print("Epoch = " + str(epoch)) for (batch_idx, batch) in enumerate(train_ldr): print("\nBatch = " + str(batch_idx)) X = batch['predictors'] # [3,7] # Y = T.flatten(batch['political']) # Y = batch['political'] # [3] print(X) print(Y) print("\n==============================") print("\nEnd demo ") if __name__ == "__main__": main() The code is simply saved with the filename "demo.py". The code should succesfully execute once the command 'python demo.py' is executed on a command prompt screen. I use Anaconda Prompt which has Torch (v 1.10) installed. I have tried numerous methods to get the above working, but I only get an error which says: Source data looks like: 1 0 0.171429 1 0 0 0.966805 0 0 1 0.085714 0 1 0 0.188797 1 . . . 
Creating Dataset and DataLoader --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-8-cfb1177991f2> in <module>() 81 82 if __name__ == "__main__": ---> 83 main() 4 frames <ipython-input-8-cfb1177991f2> in main() 59 60 train_file = "people_train.txt" ---> 61 train_ds = PeopleDataset(train_file, num_rows=8) 62 63 bat_size = 3 <ipython-input-8-cfb1177991f2> in __init__(self, src_file, num_rows) 20 x_tmp = np.loadtxt(src_file, max_rows=num_rows, 21 usecols=range(0,7), delimiter="\t", ---> 22 skiprows=0, dtype=np.float32) 23 y_tmp = np.loadtxt(src_file, max_rows=num_rows, 24 usecols=7, delimiter="\t", skiprows=0, /usr/local/lib/python3.7/dist-packages/numpy/lib/npyio.py in loadtxt(fname, dtype, comments, delimiter, converters, skiprows, usecols, unpack, ndmin, encoding, max_rows) 1137 # converting the data 1138 X = None -> 1139 for x in read_data(_loadtxt_chunksize): 1140 if X is None: 1141 X = np.array(x, dtype) /usr/local/lib/python3.7/dist-packages/numpy/lib/npyio.py in read_data(chunk_size) 1058 continue 1059 if usecols: -> 1060 vals = [vals[j] for j in usecols] 1061 if len(vals) != N: 1062 line_num = i + skiprows + 1 /usr/local/lib/python3.7/dist-packages/numpy/lib/npyio.py in <listcomp>(.0) 1058 continue 1059 if usecols: -> 1060 vals = [vals[j] for j in usecols] 1061 if len(vals) != N: 1062 line_num = i + skiprows + 1 IndexError: list index out of range I am not able to see which part of the index is wrong, as I don't feel there seem to be anything wrong with the indexing. Can someone please help me ?
Your data seems to be space-separated, not tab-separated. So, when you specify delimiter="\t", the entire row is read as a single column. But because of usecols=range(0,7), NumPy expects there to be seven columns, and throws an error when trying to iterate over them. To fix this, either change the whitespaces to tabs in your data, or change the delimiter argument to delimiter=" ".
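For reference, a minimal sketch of the corrected calls, assuming the file is space-separated exactly as shown in the question:
import numpy as np

# same calls as in PeopleDataset.__init__, but with a space delimiter instead of a tab
x_tmp = np.loadtxt("people_train.txt", max_rows=8, usecols=range(0, 7),
                   delimiter=" ", skiprows=0, dtype=np.float32)
y_tmp = np.loadtxt("people_train.txt", max_rows=8, usecols=7,
                   delimiter=" ", skiprows=0, dtype=np.int64)  # np.int64 avoids the deprecated np.long

# alternatively, leaving delimiter unset splits on any run of whitespace,
# which handles tabs, single spaces, and multiple spaces alike:
# x_tmp = np.loadtxt("people_train.txt", max_rows=8, usecols=range(0, 7), dtype=np.float32)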
https://stackoverflow.com/questions/70177661/
Why loss function always return zero after first epoch?
Why the loss function is always printing zero after the first epoch? I suspect it's because of loss = loss_fn(outputs, torch.max(labels, 1)[1]). But if I use loss = loss_fn(outputs, labels), I will get the error RuntimeError: 0D or 1D target tensor expected, multi-target not supported . nepochs = 5 losses = np.zeros(nepochs) loss_fn = nn.CrossEntropyLoss() optimizer = optim.Adam(modell.parameters(), lr = 0.001) for epoch in range(nepochs): running_loss = 0.0 n = 0 for data in train_loader: #single batch if(n == 1): break; inputs, labels = data optimizer.zero_grad() outputs = modell(inputs) #loss = loss_fn(outputs, labels) loss = loss_fn(outputs, torch.max(labels, 1)[1]) loss.backward() optimizer.step() running_loss += loss.item() n += 1 losses[epoch] = running_loss / n print(f"epoch: {epoch+1} loss: {losses[epoch] : .3f}") The model is: def __init__(self, labels=10): super(Classifier, self).__init__() self.fc = nn.Linear(3 * 64 * 64, labels) def forward(self, x): out = x.reshape(x.size(0), -1) out = self.fc (out) return out Any idea? The labels are a 64 elements tensor like this: tensor([[7],[1],[ 2],[3],[ 2],[9],[9],[8],[9],[8],[ 1],[7],[9],[2],[ 5],[1],[3],[3],[8],[3],[7],[1],[7],[9],[8],[ 8],[3],[7],[ 5],[ 1],[7],[3],[2],[1],[ 3],[3],[2],[0],[3],[4],[0],[7],[1],[ 8],[4],[1],[ 5],[ 3],[4],[3],[ 4],[8],[4],[1],[ 9],[7],[3],[ 2],[ 6],[4],[ 8],[3],[ 7],[3]])
Usually the loss calculation is loss = loss_fn(outputs, labels), and here outputs is obtained as follows: _ , outputs = torch.max(model(input), 1) or outputs = torch.max(predictions, 1)[0] Common practice is modifying outputs instead of labels: torch.max() returns a namedtuple (values, indices) where values is the maximum value of each row of the input tensor in the given dimension dim, and indices is the index location of each maximum value found (argmax). In your code snippet the labels are not indices of the labels, so when you calculate the loss, the function should look like this: loss = loss_fn(torch.max(outputs, 1)[0], labels)
https://stackoverflow.com/questions/70178582/
Detectron2: No instances in prediction
I'm trying to train Detectron2 on a custom dataset that I annotated with coco-annotator. After training I wanted to predict Instances of my Image, but I dont get any shown. Training: from detectron2.engine import DefaultTrainer cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")) cfg.DATASETS.TRAIN = ("TrashTron_train",) cfg.DATASETS.TEST = ("TrashTron_val",) # cfg.DATASETS.TEST = () cfg.DATALOADER.NUM_WORKERS = 2 cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo cfg.SOLVER.IMS_PER_BATCH = 2 cfg.SOLVER.BASE_LR = 0.00025 # pick a good LR cfg.SOLVER.MAX_ITER = 300 # 300 iterations seems good enough for this toy dataset; you will need to train longer for a practical dataset cfg.SOLVER.STEPS = [] # do not decay learning rate cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 # faster, and good enough for this toy dataset (default: 512) cfg.MODEL.ROI_HEADS.NUM_CLASSES = 24 # only has one class (ballon). (see https://detectron2.readthedocs.io/tutorials/datasets.html#update-the-config-for-new-datasets) # NOTE: this config means the number of classes, but a few popular unofficial tutorials incorrect uses num_classes+1 here. os.makedirs(cfg.OUTPUT_DIR, exist_ok=True) trainer = DefaultTrainer(cfg) trainer.resume_or_load(resume=False) trainer.train() Prediction: test_data = [{'1191.jpg': '/content/datasets/val/1191.jpg', 'image_id': 1308}] cfg = get_cfg() cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")) cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set a custom testing threshold cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth") # path to the model we just trained cfg.MODEL.ROI_HEADS.NUM_CLASSES = 24 predictor = DefaultPredictor(cfg) outputs = predictor(im) # print(outputs["instances"].pred_densepose) im = cv2.imread(test_data[0]["1191.jpg"]) v = Visualizer(im[:, :, ::-1], metadata=MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), scale=0.5, instance_mode=ColorMode.IMAGE_BW) out = v.draw_instance_predictions(outputs["instances"].to("cpu")) img = cv2.cvtColor(out.get_image()[:, :, ::-1], cv2.COLOR_RGBA2RGB) plt.imshow(img) The corresponding image is shown, but no instances. Any suggestions? The overall evaluation scores aren't that great, but I picked the best class and there I also dont get any predictions...
I would try to lower the threshold, since you have said that the overall training scores were not great. In this answer in the official repo, the following code is suggested to change the threshold: cfg.MODEL.TENSOR_MASK.SCORE_THRESH_TEST = 0.5 In another answer in the same thread, other thresholds are modified as well: cfg.MODEL.RETINANET.SCORE_THRESH_TEST = args.confidence_threshold cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = args.confidence_threshold cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = args.confidence_threshold
https://stackoverflow.com/questions/70181775/
FiftyOne Save Image with Overlaid Detections
I am using Pytorch and Fiftyone to process image detections and then visualize these image detections around people like so: However, I am having difficulty saving this in an easily viewable manner. I want to be able to save the processed image with the bounding boxes overlaid onto the image through the script, which I can only do now by right-clicking and downloading the image from the application above. FiftyOne provides multiple options for exporting data: https://voxel51.com/docs/fiftyone/user_guide/export_datasets.html#supported-formats, but all of these export the detection for use in another script (by saving the images and detections separately in a .txt/.json/etc file) rather than a 'final visualization' image. How can I save the image you see above (including the detection boxes) using FiftyOne? If there is no built-in method, can I export it to another type of dataset and save the detections there?
FiftyOne has this capability built-in allowing you to draw labels on samples and save them to disk for any dataset, view, or even just individual samples: https://voxel51.com/docs/fiftyone/user_guide/draw_labels.html In general, it can be as simple as: import fiftyone as fo # The Dataset or DatasetView containing the samples you wish to draw dataset_or_view = fo.Dataset(...) # The directory to which to write the annotated media output_dir = "/path/for/output" # The list of `Label` fields containing the labels that you wish to render on # the source media (e.g., classifications or detections) label_fields = ["ground_truth", "predictions"] # for example # Render the labels! dataset_or_view.draw_labels(output_dir, label_fields=label_fields) The draw_labels() method also accepts a DrawConfig that provides a lot of options for how to render the labels when drawing them.
https://stackoverflow.com/questions/70190686/
Colab Notebook: Cannot import name 'container_abcs' from 'torch._six'
I'm trying to run the deit colab notebook found here: https://colab.research.google.com/github/facebookresearch/deit/blob/colab/notebooks/deit_inference.ipynb but I'm running into an issue in the second cell, specifically the import timm line, which returns this: ImportError: cannot import name 'container_abcs' from 'torch._six'
This error is a known issue with the timm library; try pinning a specific version: !pip install timm==0.3.2
https://stackoverflow.com/questions/70193443/
Pretrained lightning-bolts VAE not doing proper inference on training dataset
I'm using the CIFAR-10 pre-trained VAE from lightning-bolts. It should be able to regenerate images with the quality shown on this picture taken from the docs (LHS are the real images, RHS are the generated) However, when I write a simple script that loads the model, the weights, and tests it over the training set, I get a much worse reconstruction (top row are real images, bottom row are the generated ones): Here is a link to a self-contained colab notebook that reproduces the steps I've followed to produce the pictures. Am I doing something wrong on my inference process? Could it be that the weights are not as "good" as the docs claim? Thanks!
First, the image from the docs you show is for the AE, not the VAE. The results for the VAE look much worse: https://pl-bolts-weights.s3.us-east-2.amazonaws.com/vae/vae-cifar10/vae_output.png Second, the docs state "Both input and generated images are normalized versions as the training was done with such images." So when you load the data you should specify normalize=True. When you plot your data, you will need to 'unnormalize' the data as well: from pl_bolts.datamodules import CIFAR10DataModule from pl_bolts.models.autoencoders import VAE from pytorch_lightning import Trainer import matplotlib.pyplot as plt import numpy as np import torch from torchvision import transforms torch.manual_seed(17) np.random.seed(17) vae = VAE(32, lr=0.00001) vae = vae.from_pretrained("cifar10-resnet18") dm = CIFAR10DataModule(".", normalize=True) dm.prepare_data() dm.setup("fit") dataloader = dm.train_dataloader() print(dm.default_transforms()) mean = torch.tensor(dm.default_transforms().transforms[1].mean) std = torch.tensor(dm.default_transforms().transforms[1].std) unnormalize = transforms.Normalize((-mean / std).tolist(), (1.0 / std).tolist()) X, _ = next(iter(dataloader)) vae.eval() X_hat = vae(X) fig, axes = plt.subplots(2, 10, figsize=(10, 2)) for i in range(10): ax_real = axes[0][i] ax_real.imshow(np.transpose(unnormalize(X[i]), (1, 2, 0))) ax_real.get_xaxis().set_visible(False) ax_real.get_yaxis().set_visible(False) ax_gen = axes[1][i] ax_gen.imshow(np.transpose(unnormalize(X_hat[i]).detach().numpy(), (1, 2, 0))) ax_gen.get_xaxis().set_visible(False) ax_gen.get_yaxis().set_visible(False) Which gives something like this: Without normalization it looks like:
https://stackoverflow.com/questions/70197274/
BERT Domain Adaptation
I am using transformers.BertForMaskedLM to further pre-train the BERT model on my custom dataset. I first serialize all the text to a .txt file by separating the words by a whitespace. Then, I am using transformers.TextDataset to load the serialized data with a BERT tokenizer given as tokenizer argument. Then, I am using BertForMaskedLM.from_pretrained() to load the pre-trained model (which is what transformers library presents). Then, I am using transformers.Trainer to further pre-train the model on my custom dataset, i.e., domain adaptation, for 3 epochs. I save the model with trainer.save_model(). Then, I want to load the further pre-trained model to get the embeddings of the words in my custom dataset. To load the model, I am using AutoModel.from_pretrained() but this pops up a warning. Some weights of the model checkpoint at {path to my further pre-trained model} were not used when initializing BertModel So, I know why this pops up. Because I further pre-trained using transformers.BertForMaskedLM but when I load with transformers.AutoModel, it loads it as transformers.BertModel. What I do not understand is if this is a problem or not. I just want to get the embeddings, e.g., embedding vector with a size of 768.
You saved a BERT model with the LM head attached. Now you are loading the serialized file into a standalone BERT structure without any extra elements, so the warning is issued. This is perfectly normal and not a fatal error! You can check the list of unloaded params like below: from transformers import BertTokenizer, BertModel from transformers import BertTokenizer, BertLMHeadModel, BertConfig import torch config = BertConfig.from_pretrained('bert-base-cased') lmbert = BertLMHeadModel.from_pretrained('bert-base-cased', config=config) lmbert.save_pretrained('you_desired_path/BertLMHeadModel') lmbert_params = [] for name, param in lmbert.named_parameters(): lmbert_params.append(name) bert = BertModel.from_pretrained('you_desired_path/BertLMHeadModel') bert_params = [] for name, param in bert.named_parameters(): bert_params.append(name) params_related_to_lm_head = [param_name for param_name in lmbert_params if param_name.replace('bert.', '') not in bert_params] params_related_to_lm_head output: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias']
https://stackoverflow.com/questions/70201921/
manually computing cross entropy loss in pytorch
I am trying to compute cross_entropy loss manually in Pytorch for an encoder-decoder model. I used the code posted here to compute it: Cross Entropy in PyTorch I updated the code to discard padded tokens (-100). The final code is this: class compute_crossentropyloss_manual: """ y0 is the vector with shape (batch_size,C) x shape is the same (batch_size), whose entries are integers from 0 to C-1 """ def __init__(self, ignore_index=-100) -> None: self.ignore_index=ignore_index def __call__(self, y0, x): loss = 0. n_batch, n_class = y0.shape # print(n_class) for y1, x1 in zip(y0, x): class_index = int(x1.item()) if class_index == self.ignore_index: # <------ I added this if-statement continue loss = loss + torch.log(torch.exp(y1[class_index])/(torch.exp(y1).sum())) loss = - loss/n_batch return loss To verify that it works fine, I tested it on a text generation task, and I computed the loss using pytorch.nn implementation and using this code. The loss values are not identical: using nn.CrossEntropyLoss: Using the code from the link above: Am I missing something? I tried to get the source code of nn.CrossEntropyLoss but I wasn't able. In this link nn/functional.py at line 2955, you will see that the function points to another cross_entropy loss called torch._C._nn.cross_entropy_loss; I can't find this function in the repo. Edit: I noticed that the differences appear only when I have -100 tokens in the gold. Demo example: y = torch.randint(1, 50, (100, 50), dtype=float) x = torch.randint(1, 50, (100,)) x[40:] = -100 print(criterion(y, x).item()) print(criterion2(y, x).item()) > 25.55788695847976 > 10.223154783391905 and when we don't have -100: x[40:] = 30 # any positive number print(criterion(y, x).item()) print(criterion2(y, x).item()) > 24.684453267596453 > 24.684453267596453
I solved the problem by updating the code. I was already discarding the -100 tokens (the if-statement above), but I forgot to also reduce the effective batch size accordingly (which is called n_batch in the code above). After doing that, the loss numbers are identical to the nn.CrossEntropyLoss values. The final code: class CrossEntropyLossManual: """ y0 is the vector with shape (batch_size,C) x shape is the same (batch_size), whose entries are integers from 0 to C-1 """ def __init__(self, ignore_index=-100) -> None: self.ignore_index=ignore_index def __call__(self, y0, x): loss = 0. n_batch, n_class = y0.shape # print(n_class) for y1, x1 in zip(y0, x): class_index = int(x1.item()) if class_index == self.ignore_index: n_batch -= 1 continue loss = loss + torch.log(torch.exp(y1[class_index])/(torch.exp(y1).sum())) loss = - loss/n_batch return loss
https://stackoverflow.com/questions/70202761/
pytorch derivative returns none on .grad
i1 = tr.tensor(0.0, requires_grad=True) i2 = tr.tensor(0.0, requires_grad=True) x = tr.tensor(2*(math.cos(i1)*math.cos(i2) - math.sin(i1)*math.sin(i2)) + 3*math.cos(i1),requires_grad=True) y = tr.tensor(2*(math.sin(i1)*math.cos(i2) + math.cos(i1)*math.sin(i2)) + 3*math.sin(i1),requires_grad=True) z = (x - (-2))**2 + (y - 3)**2 z.backward() dz_t1 = i1.grad dz_t2 = i2.grad print(dz_t1) print(dz_t2) I'm trying to run the above code, but I'm facing an issue after z.backward(): i1.grad and i2.grad return None. From what I understand, the cause of this issue is the way backward() is evaluated in torch, so something along the lines of i1.retain_grad() has to be used to avoid it. I tried doing that but I still get None. i1.retain_grad() and i2.retain_grad() were placed before z.backward() and after z.backward() and I still get None as an answer. What is happening exactly and how do I fix it? y.grad and x.grad work fine.
Use: i1 = tr.tensor(0.0, requires_grad=True) i2 = tr.tensor(0.0, requires_grad=True) x = 2*(torch.cos(i1)*torch.cos(i2) - torch.sin(i1)*torch.sin(i2)) + 3*torch.cos(i1) y = 2*(torch.sin(i1)*torch.cos(i2) + torch.cos(i1)*torch.sin(i2)) + 3*torch.sin(i1) z = (x - (-2))**2 + (y - 3)**2 z.backward() dz_t1 = i1.grad dz_t2 = i2.grad print(dz_t1) print(dz_t2) Here, using torch.sin and torch.cos ensures that the output is a torch tensor that is connected to i1 and i2 in the computational graph. Also, creating x and y using torch.tensor like you did detaches them from the existing graph, which again prevents gradients from flowing back through to i1 and i2.
https://stackoverflow.com/questions/70205007/
Pytorch Geometric Expected all tensors to be on the same device [demo code not working]
I wanted to try out the link prediction functionality demonstrated here. Here are my versions: PyTorch Geometric v2.0.2 PyTorch v1.9.0+cu111 I'm very baffled why cuda:0 is printed for every tensor yet I see the error when I pass the data through RandomLinkSplit. import torch import torch_geometric.transforms as T from torch_geometric.nn import GCNConv from torch_geometric.datasets import Planetoid from torch_geometric.utils import negative_sampling device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') transform = T.Compose([ T.NormalizeFeatures(), T.ToDevice(device), ]) dataset = Planetoid(root='/tmp/Planetoid', name='Cora', transform=transform) data = dataset[0] print(data.to_dict()) print(data.keys) transform = T.RandomLinkSplit(num_val=0.05, num_test=0.1, is_undirected=True,) train_data, val_data, test_data = transform(data) output: {'x': tensor([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], device='cuda:0'), 'edge_index': tensor([[ 0, 0, 0, ..., 2707, 2707, 2707], [ 633, 1862, 2582, ..., 598, 1473, 2706]], device='cuda:0'), 'y': tensor([3, 4, 4, ..., 3, 3, 3], device='cuda:0'), 'train_mask': tensor([ True, True, True, ..., False, False, False], device='cuda:0'), 'val_mask': tensor([False, False, False, ..., False, False, False], device='cuda:0'), 'test_mask': tensor([False, False, False, ..., True, True, True], device='cuda:0')} ['val_mask', 'test_mask', 'edge_index', 'train_mask', 'x', 'y'] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /tmp/ipykernel_72/414574324.py in <module> 20 21 transform = T.RandomLinkSplit(num_val=0.05, num_test=0.1, is_undirected=True,) ---> 22 train_data, val_data, test_data = transform(data) /usr/local/lib/python3.7/dist-packages/torch_geometric/transforms/random_link_split.py in __call__(self, data) 204 train_edges, 205 neg_edge_index[:, num_neg_val + num_neg_test:], --> 206 out=train_store, 207 ) 208 self._create_label( /usr/local/lib/python3.7/dist-packages/torch_geometric/transforms/random_link_split.py in _create_label(self, store, index, neg_edge_index, out) 284 if neg_edge_index.numel() > 0: 285 edge_label = torch.cat([edge_label, neg_edge_label], dim=0) --> 286 edge_index = torch.cat([edge_index, neg_edge_index], dim=-1) 287 out[self.key] = edge_label 288 out[f'{self.key}_index'] = edge_index RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking arugment for argument tensors in method wrapper__cat)
The issue was indeed a bug. Thank you for reporting it.
https://stackoverflow.com/questions/70205917/
Give two inputs to torch::jit::script::Module forward method
I am trying to build and train a network in python using pytorch . My forward method takes two inputs as follows : def forward(self, x1, x2): I trained this model in python and saved using torch.jit.script . Then I load this model in c++ using the torch::jit::load. How do I now pass the inputs to the model in c++ ? If I try passing two separate tensors to the forward method like the following std::vector<torch::jit::IValue> inputs1{tensor1}; std::vector<torch::jit::IValue> inputs2{tensor2}; at::Tensor output = module.forward(inputs1,inputs2).toTensor(); then I receive an error saying that the method forward expects 1 argument, 2 provided. I can't also concatenate the two tensors since the shapes are different in all axis.
The problem can be solved by concatenating the two tensors and giving the concatenated tensor as the single input to the model. Then, in the forward method, we can recreate the two separate tensors from the concatenated tensor and use them separately for the output computation. For the concatenation to work, I padded the tensors with 0's so that they are of the same size in all axes except the one along which the concatenation is done. A small sketch of this idea is shown below.
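A minimal Python-side sketch of that idea (the shapes and the split point are made up for illustration; the same concatenated tensor would be passed as the single IValue input on the C++ side):
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoInputModel(nn.Module):
    def forward(self, combined):
        # split the concatenated tensor back into the two original parts
        x1, x2 = combined[:, :3], combined[:, 3:]
        return x1.sum() + x2.sum()  # placeholder computation

t1 = torch.randn(5, 3)
t2 = torch.randn(2, 4)

# pad t2 with zeros so both tensors match in every axis except the one used for concatenation
t2_padded = F.pad(t2, (0, 0, 0, t1.size(0) - t2.size(0)))  # pad order: (left, right, top, bottom)
combined = torch.cat([t1, t2_padded], dim=1)               # shape (5, 7)

scripted = torch.jit.script(TwoInputModel())
out = scripted(combined)  # in C++ this would be module.forward({combined_tensor})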
https://stackoverflow.com/questions/70210827/
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1280x5 and 6400x4096)?
Defining Alexnet using the following code,I can train successfully.But when I want to see the output of each layer,it will be an error ‘RuntimeError: mat1 and mat2 shapes cannot be multiplied (1280x5 and 6400x4096)?’ class AlexNet(nn.Module): def __init__(self): super(AlexNet, self).__init__() self.conv = nn.Sequential( nn.Conv2d(1, 96, 11, 4), nn.ReLU(), nn.MaxPool2d(3, 2), nn.Conv2d(96, 256, 5, 1, 2), nn.ReLU(), nn.MaxPool2d(3, 2), nn.Conv2d(256, 384, 3, 1, 1), nn.ReLU(), nn.Conv2d(384, 384, 3, 1, 1), nn.ReLU(), nn.Conv2d(384, 256, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(3, 2) ) self.fc = nn.Sequential( nn.Linear(256*5*5, 4096), nn.ReLU(), nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5), nn.Linear(4096, 10) ) def forward(self, img): feature = self.conv(img) output = self.fc(feature.view(img.shape[0], -1)) return output X=torch.randn(1,1,224,224) for name,layer in net.named_children(): X=layer(X) print(name,X.shape) Could u help me?
You forgot to flatten the output of self.conv in the for loop. You can split it into two loops, one for the convolutional layers and one for the fully connected ones. X = torch.randn(1, 1, 224, 224) for name, layer in net.conv.named_children(): X = layer(X) print(name, X.shape) X = X.flatten() # or X = X.view(X.shape[0], -1) for name, layer in net.fc.named_children(): X = layer(X) print(name, X.shape)
https://stackoverflow.com/questions/70216022/
Pytorch is throwing an error RuntimeError: result type Float can't be cast to the desired output type Long
How should I get rid of the following error? >>> t = torch.tensor([[1, 0, 1, 1]]).T >>> p = torch.rand(4,1) >>> torch.nn.BCEWithLogitsLoss()(p, t) The above code is throwing the following error: RuntimeError: result type Float can't be cast to the desired output type Long
BCEWithLogitsLoss requires its target to be a float tensor, not long. So you should specify the type of t tensor by dtype=torch.float32: import torch t = torch.tensor([[1, 0, 1, 1]], dtype=torch.float32).T p = torch.rand(4,1) loss_fn = torch.nn.BCEWithLogitsLoss() print(loss_fn(p, t)) Output: tensor(0.5207)
https://stackoverflow.com/questions/70216222/
OSError: [Errno 22] Invalid argument | _pickle.UnpicklingError: pickle data was truncated
When trying to train a ResNet I get this error. Any help as to why this happens would be appreciated. This happens when I try to iterate through the Dataloader: File "C:\Users\JCout\AppData\Local\Temp/ipykernel_2540/2174299330.py", line 1, in <module> runfile('C:/Users/JCout/Documents/GitHub/Hybrid_resnet/transfer_learning.py', wdir='C:/Users/JCout/Documents/GitHub/Hybrid_resnet') File "C:\Users\JCout\anaconda3\lib\site-packages\debugpy\_vendored\pydevd\_pydev_bundle\pydev_umd.py", line 167, in runfile execfile(filename, namespace) File "C:\Users\JCout\anaconda3\lib\site-packages\debugpy\_vendored\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile exec(compile(contents + "\n", file, 'exec'), glob, loc) File "C:/Users/JCout/Documents/GitHub/Hybrid_resnet/transfer_learning.py", line 24, in <module> model, train_loss, test_loss = train.train_model(training, testing, File "C:\Users\JCout\Documents\GitHub\Hybrid_resnet\train.py", line 70, in train_model train_stats = train.train_step(model, criterion, optimizer, train_loader) File "C:\Users\JCout\Documents\GitHub\Hybrid_resnet\train.py", line 121, in train_step for i, (x_imgs, labels) in enumerate(train_loader): File "C:\Users\JCout\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 359, in __iter__ return self._get_iterator() File "C:\Users\JCout\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "C:\Users\JCout\anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 918, in __init__ w.start() File "C:\Users\JCout\anaconda3\lib\multiprocessing\process.py", line 121, in start self._popen = self._Popen(self) File "C:\Users\JCout\anaconda3\lib\multiprocessing\context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\JCout\anaconda3\lib\multiprocessing\context.py", line 327, in _Popen return Popen(process_obj) File "C:\Users\JCout\anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__ reduction.dump(process_obj, to_child) File "C:\Users\JCout\anaconda3\lib\multiprocessing\reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) OSError: [Errno 22] Invalid argument I'm also getting this traceback after the error which I find a bit weird since I'm not using pickle at all. The data consists of 2 .tif files and 2 .mat files for the data/targets. Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\JCout\anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "C:\Users\JCout\anaconda3\lib\multiprocessing\spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\JCout\anaconda3\lib\multiprocessing\spawn.py", line 116, in spawn_main exitcode = _main(fd, parent_sentinel) File "C:\Users\JCout\anaconda3\lib\multiprocessing\spawn.py", line 126, in _main self = reduction.pickle.load(from_parent) _pickle.UnpicklingError: pickle data was truncated
Not entirely sure what was happening, but I had pin_memory=True and num_workers=4 set on the DataLoader. Removing these arguments removed the error.
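For reference, a minimal sketch of the loader without those arguments; with num_workers left at its default of 0, no worker processes are spawned, so the dataset never has to be pickled for a child process (train_set stands for whatever Dataset you pass in):
from torch.utils.data import DataLoader

# single-process loading: nothing is pickled or sent to worker processes
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# note: on Windows, multi-worker loading also requires the training code to be
# guarded by `if __name__ == "__main__":` because workers are started with spawn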
https://stackoverflow.com/questions/70218051/
I want to see data from torch.utils.data.DataLoader. How Can I?
import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision import torchvision.transforms as transforms train_set = torchvision.datasets.MNIST(root = './data/MNIST',train = True,download = True,\transform = transfroms.Compose([transfroms.ToTensor()]) print(len(train_set)) # 60000 train_loader = torch.utils.data.DataLoader(train_set, batch_size=100) print(len(train_loader)) # 600 It seems like because of the batch_size, length of train_loader decreased. I think there are 100 tensors and one classification in a batch. I just want to see the elements or shape of it. How can I do? Also, ### Model Omitted ### model = ConvNet().to(device) criterion = nn.CrossEntropyLoss().to(device) optimizer = torch.optim.Adam(model.parameters(), lr = learning_rate) for epoch in range(5): avg_cost = 0 for data, target in train_loader: data = data.to(device) target = target.to(device) optimizer.zero_grad() hypothesis = model(data) cost = criterion(hypothesis, target) cost.backward() optimizer.step() avg_cost += cost / len(train_loader) print('[Epoch: {:>4}] cost = {:>.9}'.format(epoch + 1, avg_cost)) I think the training per epoch trains with 60,000 tensors right? Then I think the avg_cost should be divided by 60,000, not 600(which is len(train_loader))... Am I wrong with it?
You can get one batch of training data from train_loader using the code below and easily check its shape. I hope this helps you get what you want. batch = iter(train_loader) images, labels = next(batch) print(images.shape) # torch.Size([batch_size, in_channels, H, W]) print(labels.shape)
https://stackoverflow.com/questions/70224152/
convert a python list to tensor pytorch
I want to convert a list of pixel values to tensor, but I got an error. My code calculate the pixel values (RGB) for each detected object in the image. How we can convert the list to tensor?? my code: cropped_images =[] imgs = PIL.Image.open(img_path).convert('RGB') #print(img_path) image_width, image_height = imgs.size imgArrays = np.array(imgs) X = (xCenter*image_width) Y = (yCenter*image_height) W = (Width*image_width) H = (Height*image_height) cropped_image = np.zeros((image_height, image_width)) for i in range(len(X)): x1, y1, w, h = X[i], Y[i], W[i], H[i] x_start = int(x1 - (w/2)) y_start = int(y1 - (h/2)) x_end = int(x_start + w) y_end = int(y_start + h) temp = imgArrays[y_start: y_end, x_start: x_end] cropped_image_pixels = torch.as_tensor(temp) cropped_images.append(cropped_image_pixels) stacked_tensor = torch.stack(cropped_images) print(stacked_tensor) the error: RuntimeError Traceback (most recent call last) <ipython-input-82-653a155c3b71> in <module>() 130 131 if __name__=="__main__": --> 132 main() 2 frames <ipython-input-80-670335a0656c> in __getitem__(self, idx) 76 cropped_image_pixels = torch.as_tensor(temp) 77 cropped_images.append(cropped_image_pixels) ---> 78 stacked_tensor = torch.stack(cropped_images) 79 80 print(stacked_tensor) RuntimeError: stack expects each tensor to be equal size, but got [506, 343, 3] at entry 0 and [520, 334, 3] at entry 1
Your list of tensors has two tensors, and it's clear that they don't have the same size. From the docs: torch.stack(tensors, dim=0, *, out=None) → Tensor concatenates a sequence of tensors along a new dimension, and all tensors need to be of the same size. You can use pseudo code like this, resizing every crop to a common size before stacking: import torchvision.transforms as transforms . . . . temp=[] for img_name in LIST: img = cv2.imread(img_name) img = cv2.resize(img, (H, W)) temp.append(img) train_x = np.asarray(temp) transform = transforms.Compose([transforms.ToTensor()]) Check the docs for details.
https://stackoverflow.com/questions/70224968/
What are the main reasons why some network parameters might become nan after calling optimizer.step in Pytorch?
I am trying to understand why one or two parameters in my Pytorch neural network occasionally become nan after calling optimizer.step(). I have already checked the gradients after calling .backward() and just before calling the optimizer, and they neither contain nans nor are very large. I am doing gradient clipping, but I don't think that this can be responsible since the gradients still look fine after clipping. I am using single-precision floats everywhere. This behavior happens randomly every hundred thousand epochs or so, and is proving very difficult to debug. Unfortunately the code is too long to reproduce here and I haven't been able to replicate the problem in a smaller example. If anyone can suggest possible issues I haven't mentioned above, that would be super helpful. Thanks!
This ended up being ignorance on my part: there were Infs in the gradients that were evading my diagnostic code, as I didn't realize Pytorch's .isnan() method doesn't detect them.
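A small sketch of a check that catches both cases, since torch.isfinite flags NaNs as well as Infs (the model name is a placeholder):
import torch

def check_gradients(model):
    # returns the names of parameters whose gradients contain NaN or +/-Inf
    bad = []
    for name, p in model.named_parameters():
        if p.grad is not None and not torch.isfinite(p.grad).all():
            bad.append(name)
    return bad

# usage, somewhere after loss.backward() and before optimizer.step():
# problems = check_gradients(model)
# if problems:
#     print("non-finite gradients in:", problems)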
https://stackoverflow.com/questions/70226432/
pytorch: log_softmax base 2?
I want to get surprisal values from logit outputs from PyTorch, using log base 2. One way to do this, given a logits tensor, is: probs = nn.functional.softmax(logits, dim = 2) surprisals = -torch.log2(probs) However, PyTorch provides a function that combines log and softmax, which is faster than the above: surprisals = -nn.functional.log_softmax(logits, dim = 2) But this seems to return values in base e, which I don't want. Is there a function like log_softmax, but which uses base 2? I have tried log2_softmax and log_softmax2, neither of which seems to work, and haven't had any luck finding documentation online.
How about just using the fact that logarithm bases can be easily changed with the identity log2(x) = ln(x) / ln(2)? Here ln(x) is what F.log_softmax() is giving you. All you need to do is surprisals = - (1 / torch.log(torch.tensor(2.))) * nn.functional.log_softmax(logits, dim = 2) It's just a scalar multiplication, so it hardly has any performance penalty.
https://stackoverflow.com/questions/70229674/
Output Dimensions of convolution in PyTorch
The size of my input images are 68 x 224 x 3 (HxWxC), and the first Conv2d layer is defined as conv1 = torch.nn.Conv2d(3, 16, stride=4, kernel_size=(9,9)). Why is the size of the output feature volume 16 x 15 x 54? I get that there are 16 filters, so there is a 16 in the front, but if I use [(W−K+2P)/S]+1 to calculate dimensions, the dimensions are not divisible. Can someone please explain?
The calculation of feature maps is [(W−K+2P)/S]+1 and here the [] brackets mean floor division. In your example padding is zero, so the calculation is [(68-9+2*0)/4]+1 -> [14.75]+1 = 14+1 = 15 and [(224-9+2*0)/4]+1 -> [53.75]+1 = 53+1 = 54. import torch conv1 = torch.nn.Conv2d(3, 16, stride=4, kernel_size=(9,9)) input = torch.rand(1, 3, 68, 224) print(conv1(input).shape) # torch.Size([1, 16, 15, 54]) You may see different formulas to calculate feature maps. In PyTorch (from the Conv2d docs): H_out = floor((H_in + 2*padding - dilation*(kernel_size - 1) - 1)/stride + 1). In general, you may see this: floor((W - K + 2P)/S) + 1. However, the result in both cases is the same.
https://stackoverflow.com/questions/70231487/
Sorting a tensor list in ascending order
I am working on a facial comparison app that will give me the closest n number of faces to my target face. I have done this with dlib/face_recognition as it uses numpy arrays, however i am now trying to do the same thing with facenet/pytorch and running into an issue because it uses tensors. I have created a database of embeddings and I am giving the function one picture to compare to them. What i would like is for it to sort the list from lowest distances to highest, and give me the lowest 5 results or so. here is the code I am working on that is doing the comparison. at this point i am feeding it a photo and asking it to compare against the embedding database. def face_match(img_path, data_path): # img_path= location of photo, data_path= location of data.pt # getting embedding matrix of the given img img_path = (os.getcwd()+'/1.jpg') img = Image.open(img_path) face = mtcnn(img) # returns cropped face and probability emb = resnet(face.unsqueeze(0)).detach() # detech is to make required gradient false saved_data = torch.load('data.pt') # loading data.pt file embedding_list = saved_data[0] # getting embedding data name_list = saved_data[1] # getting list of names dist_list = [] # list of matched distances, minimum distance is used to identify the person for idx, emb_db in enumerate(embedding_list): dist = torch.dist(emb, emb_db) dist_list.append(dist) namestodistance = list(zip(name_list,dist_list)) print(namestodistance) face_match('1.jpg', 'data.pt') This results in giving me all the names and their distance from the target photo in alphabetical order of the names, in the form of (Adam Smith, tensor(1.2123432)), Brian Smith, tensor(0.6545464) etc. If the 'tensor' wasn't part of every entry I think it would be no problem to sort it. I don't quite understand why its being appended to the entries. I can cut this down to the best 5 by adding [0:5] at the end of dist_list, but I can't figure out how to sort the list, I think the problem is the word tensor being in every entry. I have tried for idx, emb_db in enumerate(embedding_list): dist = torch.dist(emb, emb_db) sorteddist = torch.sort(dist) but for whatever reason this only returns one distance value, and it isn't the smallest one. idx_min = dist_list.index(min(dist_list)), this works fine in giving me the lowest value and then matching it to a name using namelist[idx_min], therefore giving the best match, but I would like the best 5 matches in order as opposed to just the best match. Anyone able to solve this ?
Unfortunately I cannot test your code, but to me it seems like you are operating on a python list of tuples. You can sort that by using a key: namestodistance = [('Alice', .1), ('Bob', .3), ('Carrie', .2)] names_top = sorted(namestodistance, key=lambda x: x[1]) print(names_top[:2]) Of course you have to modify the anonymous function in key to return a sortable value instead of e.g. a torch.tensor. This can be done by using key = lambda x: x[1].item(). Edit: To answer the question that crept up in the comments, we can refactor our code a little. Namely namestodistance = list(map(lambda x: (x[0], x[1].item()), namestodistance)) names_top = sorted(namestodistance, key=lambda x: x[1]) print(names_top[:2])
https://stackoverflow.com/questions/70232894/
Intersection over Union used as Metric or Loss
I'm currently struggling to understand the use of the IoU. Is the IoU just a metric to monitor the quality of a network, or is it used as a loss function where the value has some impact on the backprop?
For a measure to be used as a loss function, it must be differentiable, with non-trivial gradients. For instance, in image classification, accuracy is the most common measure of success. However, if you try to differentiate accuracy, you'll see that the gradients are zero almost everywhere and therefore one cannot train a model with accuracy as a loss function. Similarly, IoU, in its native form, also has meaningless gradients and cannot be used as a loss function. However, extensions to IoU that preserve gradients exist and can be effectively used as a loss function for training.
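For illustration, one common differentiable surrogate is a "soft" IoU computed on predicted probabilities rather than hard masks; this is a minimal sketch of one such variant for binary segmentation, not the only formulation in use:
import torch

def soft_iou_loss(pred_logits, target, eps=1e-6):
    # pred_logits: raw network outputs, target: binary mask, both of shape (N, 1, H, W)
    probs = torch.sigmoid(pred_logits)                 # keep values in (0, 1) so gradients stay useful
    intersection = (probs * target).sum(dim=(1, 2, 3))
    union = (probs + target - probs * target).sum(dim=(1, 2, 3))
    iou = (intersection + eps) / (union + eps)
    return 1.0 - iou.mean()                            # minimize 1 - IoU

# usage sketch:
# loss = soft_iou_loss(model(images), masks)
# loss.backward()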
https://stackoverflow.com/questions/70234279/
Extracting Autoencoder features from the hidden layer
I have developed some code to apply Autoencoder on my dataset, in order to extract hidden features from it. I have a dataset that consists of 84 variables, and they have been normalised. epochs = 10 batch_size = 128 lr = 0.008 # Convert Input and Output data to Tensors and create a TensorDataset input = torch.Tensor(input.to_numpy()) output = torch.tensor(output.to_numpy()) data = torch.utils.data.TensorDataset(input, output) # Split to Train, Validate and Test sets using random_split number_rows = len(input) # The size of our dataset or the number of rows in excel table. test_split = int(number_rows*0.3) train_split = number_rows - test_split train_set, test_set = random_split(data, [train_split, test_split]) # Create Dataloader to read the data within batch sizes and put into memory. train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle = True) test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size) The model structure: # Model structure class AutoEncoder(nn.Module): def __init__(self): super(AutoEncoder, self).__init__() # Encoder self.encoder = nn.Sequential( nn.Linear(84, 128), nn.Tanh(), nn.Linear(128, 64), nn.Tanh(), nn.Linear(64, 16), nn.Tanh(), nn.Linear(16, 2), ) # Decoder self.decoder = nn.Sequential( nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 128), nn.Tanh(), nn.Linear(128, 84), nn.Sigmoid() ) def forward(self, inputs): codes = self.encoder(inputs) decoded = self.decoder(codes) return codes, decoded Optimiser and Loss function # Optimizer and loss function model = AutoEncoder() optimizer = torch.optim.Adam(model.parameters(), lr=lr) loss_function = nn.MSELoss() The training steps: # Train for epoch in range(epochs): for data, labels in train_loader: inputs = data.view(-1, 84) # Forward codes, decoded = model(inputs) # Backward optimizer.zero_grad() loss = loss_function(decoded, inputs) loss.backward() optimizer.step() # Show progress print('[{}/{}] Loss:'.format(epoch+1, epochs), loss.item()) The Autoencoder model is saved as: # Save torch.save(model,'autoencoder.pth') At this point, I would like to ask some help to understand how I could extract the features from the hidden layer. These features extracted from the hidden layer will be used in another classification algorithm.
You need to place an hook to your model. And you can use this hook to extract features from any layer. However it is a lot easier if you don't use nn.Sequential because it combines the layer together and they act as one. I run your code using this function: There is a function for Feature Extraction which basically takes model as an input and place a hook using index of layer. class FE(nn.Module): def __init__(self,model_instance, output_layers, *args): super().__init__(*args) self.output_layers = output_layers self.selected_out = OrderedDict() self.pretrained = model_instance self.fhooks = [] print("model_instance._modules.keys():",model_instance._modules.keys()) for i,l in enumerate(list(self.pretrained._modules.keys())): print("index:",i, ", keys:",l ) if i in self.output_layers: print("------------------------ > Hook is placed output of :" , l ) self.fhooks.append(getattr(self.pretrained,l).register_forward_hook(self.forward_hook(l))) def forward_hook(self,layer_name): def hook(module, input, output): self.selected_out[layer_name] = output return hook def forward(self, x): out = self.pretrained(x,None) return out, self.selected_out And to use: model_hooked=FE(model ,output_layers = [0]) model_instance._modules.keys(): odict_keys(['encoder', 'decoder']) index: 0 , keys: encoder ------------------------ > Hook is placed output of : encoder index: 1 , keys: decoder After placing the hook you can simply put data to new hooked model and it will output 2 values.First one is original output from last layer and second output will be the output from hooked layer out, layerout = model_hooked(data_sample) If you want to extract features from a loaders you can use this function: def extract_features(FE ,layer_name, train_loader, test_loader): extracted_features=[] lbls=[] extracted_features_test=[] lbls_test=[] for data , target in train_loader: out, layerout = FE(data) a=layerout[layer_name] extracted_features.extend(a) lbls.extend(target) for data , target in test_loader: out, layerout = FE(data) a=layerout[layer_name] extracted_features_test.extend(a) lbls_test.extend(target) extracted_features = torch.stack(extracted_features) extracted_features_test = torch.stack(extracted_features_test) lbls = torch.stack(lbls) lbls_test = torch.stack(lbls_test) return extracted_features, lbls ,extracted_features_test, lbls_test And usage is like this : Features_TRAINLOADER , lbls , Features_TESTLOADER, lbls_test =extract_features(model_hooked, "encoder", train_loader, test_loader)
https://stackoverflow.com/questions/70236276/
Pytorch - skip calculating features of pretrained models for every epoch
I am used to work with tenserflow - keras but now I am forced to start working with Pytorch for flexibility issues. However, I don't seem to find a pytorch code that is focused on training only the classifciation layer of a model. Is that not a common practice ? Now I have to wait out the calculation of the feature extraction of the same data for every epoch. Is there a way to avoid that ? # in tensorflow - keras : from tensorflow.keras.applications import vgg16, MobileNetV2, mobilenet_v2 # Load a pre-trained pretrained_nn = MobileNetV2(weights='imagenet', include_top=False, input_shape=(Image_size, Image_size, 3)) # Extract features of the training data only once X = mobilenet_v2.preprocess_input(X) features_x = pretrained_nn.predict(X) # Save features for later use joblib.dump(features_x, "features_x.dat") # Create a model and add layers model = Sequential() model.add(Flatten(input_shape=features_x.shape[1:])) model.add(Dense(100, activation='relu', use_bias=True)) model.add(Dense(Y.shape[1], activation='softmax', use_bias=False)) # Compile & train only the fully connected model model.compile( loss="categorical_crossentropy", optimizer=keras.optimizers.Adam(learning_rate=0.001)) history = model.fit( features_x, Y_train, batch_size=16, epochs=Epochs)
Assuming you already have the features in features_x, you can do something like this to create and train the model: # create a loader for the data dataset = torch.utils.data.TensorDataset(features_x, Y_train) loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True) # define the classification model in_features = features_x.flatten(1).size(1) model = torch.nn.Sequential( torch.nn.Flatten(), torch.nn.Linear(in_features=in_features, out_features=100, bias=True), torch.nn.ReLU(), torch.nn.Linear(in_features=100, out_features=Y.shape[1], bias=False) # Softmax is handled by CrossEntropyLoss below ) model.train() # define the optimizer and loss function optimizer = torch.optim.Adam(model.parameters(), lr=0.001) loss_function = torch.nn.CrossEntropyLoss() # training loop for e in range(Epochs): for batch_x, batch_y in loader: optimizer.zero_grad() # clear gradients from previous batch out = model(batch_x) # forward pass loss = loss_function(out, batch_y) # compute loss loss.backward() # backpropagate, get gradients optimizer.step() # update model weights
https://stackoverflow.com/questions/70236736/
How to make one-hot data compatible with non one-hot?
I'm making a machine learning model to calculate game win rate on different character combination. I got error at last line using loss function. I think it's because the input is one-hot vector. The output of the model doesn't compatile with target data. Because target data is just boolean value, win or lose. Please give me advice to get through this problem. How to make one-hot input compatible with non one-hot? '''for example, when the number of character is 4 and eahc team member is 2. x_data is [ [[0,0,1,0], [0,1,0,0], [1,0,0,0,],[0,1,0,0]], [game2]...] team A1, temaA2, temaB1 teamB2 ''' y_data = [[0], [0], [0], [1], [1], [1]] # team blue win: 1, lose : 0 x_train = torch.FloatTensor(x_data) y_train = torch.FloatTensor(y_data) class BinaryClassifier(nn.Module): def __init__(self): super(BinaryClassifier, self).__init__() self.layer1 = nn.Sequential( nn.Linear(in_features=num_characters, out_features=10, bias=True), nn.ReLU(), ) self.layer2 = nn.Sequential( nn.Linear(in_features=10, out_features=1, bias=True), nn.Sigmoid(), ) def forward(self, x): x = self.layer1(x) x = self.layer2(x) return torch.sigmoid(x) model = BinaryClassifier() optimizer = optim.SGD(model.parameters(), lr=1) nb_epochs = 1000 for epoch in range(nb_epochs + 1): hypothesis = model(x_train) cost = nn.BCELoss(hypothesis, y_train) # RuntimeError: bool value of Tensor with more than one value is ambiguous
First, your issue is not about One-hot encoding, because the output of your model is a number and Y_data is 0-1, so they're compatible. Your problem is about instantiating the loss. Therefore, you have to instantiate the loss and then pass arguments: ... model = BinaryClassifier() optimizer = torch.optim.SGD(model.parameters(), lr=1) loss = nn.BCELoss() nb_epochs = 1000 for epoch in range(nb_epochs + 1): hypothesis = model(x_train) cost = loss(hypothesis, y_train) About your x_data, if your data is like: [[0,0,1,0], [0,1,0,0], [1,0,0,0,], [0,1,0,0],...] in self.layer1 you should specify in_features with 4. If x_data is like: [ [[0,0,1,0], [0,1,0,0], [1,0,0,0,], [0,1,0,0]], [[0,0,1,0], [0,1,0,0], [1,0,0,0,], [0,1,0,0]], ...] and you want to use a Linear layer, you have to flatten each sample because a linear layer accepts 1-dim input. For example, the above would be: [[0,0,1,0,0,1,0,0,1,0,0,0,0,1,0,0], [0,0,1,0,0,1,0,0,1,0,0,0,0,1,0,0], ...] and in_features=16. For your information, you can use CNN (Convolutional Neural Net) for 2 and more dimensions inputs, and for series inputs, you can use RNN (Recurrent neural network). Hope it can be helpful.
https://stackoverflow.com/questions/70242612/
How does backpropagation work when some layers are frozen in the network?
Assume there are two models to be used: X and Y. Data is sequentially passed through X and Y. Only the parameters of model Y need to be optimized with respect to a loss computed over output of model Y. Is the following snippet a correct implementation of this requirement. Few specific queries I need answers to: What does the with torch.no_grad() exactly do? As only the parameters of model Y are registered with the optimizer do we still need to freeze model X to be correct or is it only required to reduce the computational load? More generally I want an explanation on how the computation graph and backpropagation behaves in the presence of with torch.no_grad() or when some layers are freezed by setting the corresponding requires_grad parameter to False. Also comment on whether we can have non-consecutive layers in the network frozen at once. optimizer = AdamW(model_Y.parameters(), lr= , eps= , ...) optimizer.zero_grad() with torch.no_grad(): A = model_X(data) B = model_Y(A) loss = some_function(B) loss.backward() optimizer.step()
torch.no_grad serves as a context manager that disables gradient computation. This is very useful for, e.g., inference where no use of .backward() call will be done. It saves both memory and computations. In your example, you can treat A, the output of model_X as your "inputs": you will not modify anything related to model_X. In this case, you do not care about gradients w.r.t model_X: not w.r.t its parameters nor w.r.t its inputs. It is safe to wrap this call A = model_X(data) with the context of torch.no_grad(). However, in other cases, you might not want to modify the weights of model_X ("freeze" them), but you might still need to propagate gradients through them - if you want to modify elements feeding model_X. It's all in the chain rule.
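As a small illustrative sketch of the second case described above (model_pre is a hypothetical trainable module feeding model_X, added only to show gradients flowing through frozen layers; the other names follow the snippet in the question):
# freeze model_X: its weights will not accumulate gradients or be updated
for p in model_X.parameters():
    p.requires_grad_(False)

optimizer = AdamW(list(model_pre.parameters()) + list(model_Y.parameters()), lr=1e-4)

optimizer.zero_grad()
A = model_X(model_pre(data))   # no torch.no_grad(): the graph through model_X is kept
B = model_Y(A)
loss = some_function(B)
loss.backward()                # gradients reach model_pre and model_Y; model_X's weights get none
optimizer.step()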
https://stackoverflow.com/questions/70244183/