PyTorch Model can only recognize birds when birds are close to camera
update 1: The easiest way to stop cropping images is to pass a tuple to the size parameter, like this: size=(299,299). Unfortunately, it does not solve my problem, so the question is still open. I just trained my birds model. It works fine when I test it with close-up pictures. But when I move the camera farther away from the birds, the model is no longer able to detect them. My guess is that in this line of code data = ImageDataBunch.from_folder(path=path_img, bs=48, valid_pct= 0.2, ds_tfms=get_transforms(), size=299, classes=['birds', 'others']) the get_transforms() function and the size=299 parameter crop my training images around the center, so that birds in the images appear to be closer to the camera. How do I fix it?
Your dataset is biased toward birds at a certain scale, i.e., their size in pixels spans a very small range (you can verify this). Center-cropping the images will not change that: the size of the birds (in pixels) stays the same. Therefore, your model cannot handle scale changes. To overcome this limitation of the model you need to make it more scale-robust. The simplest way to achieve this is to add scale augmentations to your data loader (before the crop). By applying a random scale to each training image you effectively change the size of the bird (in pixels), so your model "sees" birds over a wider range of sizes.
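A minimal sketch of what that could look like with fastai v1's transform pipeline; the max_zoom value here is an assumption to tune for your data, and note that zooming mostly enlarges the subject, so you may also want training images where birds appear small:

```python
from fastai.vision import *

# widen the random zoom range so birds appear at more scales during training
tfms = get_transforms(max_zoom=1.5)  # default is ~1.1; larger = stronger scale jitter
data = ImageDataBunch.from_folder(path=path_img, bs=48, valid_pct=0.2,
                                  ds_tfms=tfms, size=299,
                                  classes=['birds', 'others'])
```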
https://stackoverflow.com/questions/59509006/
Convert pytorch tensor to opencv mat and vice versa in C++
I want to convert pytorch tensors to opencv mat and vice versa in C++. I have these two functions: cv::Mat TensorToCVMat(torch::Tensor tensor) { std::cout << "converting tensor to cvmat\n"; tensor = tensor.squeeze().detach().permute({1, 2, 0}); tensor = tensor.mul(255).clamp(0, 255).to(torch::kU8); tensor = tensor.to(torch::kCPU); int64_t height = tensor.size(0); int64_t width = tensor.size(1); cv::Mat mat(width, height, CV_8UC3); std::memcpy((void *)mat.data, tensor.data_ptr(), sizeof(torch::kU8) * tensor.numel()); return mat.clone(); } torch::Tensor CVMatToTensor(cv::Mat mat) { std::cout << "converting cvmat to tensor\n"; cv::cvtColor(mat, mat, cv::COLOR_BGR2RGB); cv::Mat matFloat; mat.convertTo(matFloat, CV_32F, 1.0 / 255); auto size = matFloat.size(); auto nChannels = matFloat.channels(); auto tensor = torch::from_blob(matFloat.data, {1, size.height, size.width, nChannels}); return tensor.permute({0, 3, 1, 2}); } In my code I load two images (image1 and image2) and I want to convert them to pytorch tensors and then back to opencv mat to check if it works. The problem is that I get a memory access error on the first call of TensorToCVMat, and I can't figure out what's wrong, as I do not have much experience with C++ programming. cv::Mat image1; image1 = cv::imread(argv[1]); if (!image1.data) { std::cout << "no image data\n"; return -1; } cv::Mat image2; image2 = cv::imread(argv[2]); if (!image2.data) { std::cout << "no image data\n"; return -1; } torch::Tensor tensor1 = CVMatToTensor(image1); cv::Mat new_image1 = TensorToCVMat(tensor1); // <<< this is where the memory access error is thrown torch::Tensor tensor2 = CVMatToTensor(image2); cv::Mat new_image2 = TensorToCVMat(tensor2); It would be great if you could give me hints or an explanation to solve this problem.
Not sure if the error is happening at the memcpy step, but note that cv::Mat's constructor takes (rows, cols), so cv::Mat mat(width, height, CV_8UC3) has the arguments swapped. You can also use the void* data variant of the Mat constructor, Mat(int rows, int cols, int type, void *data, size_t step=AUTO_STEP), and skip the memcpy step entirely: tensor = uint8_tensor // shape: (h, w, 3) cv::Mat mat = cv::Mat(height, width, CV_8UC3, tensor.data_ptr()); return mat;
https://stackoverflow.com/questions/59512310/
'Sequential' object has no attribute 'features' while extracting vgg19 pytorch features
I'm trying to extract the features of images using the VGG19 network (the output should be of dim [1, 7, 7, 512] per frame). Here is the code I have used: deep_net = models.vgg19(pretrained=True).cuda() deep_net = nn.Sequential(*list(deep_net.children())[:-2]) deep_net.eval() save_file_sample_path = '/media/data1/out.npy' input_image = torch.zeros(1, 3, 224, 224) output_feat = np.zeros(shape=[1, 49, 512]) with torch.no_grad(): im = default_loader('/media/data1/images/frame612.jpg') im = transform(im) input_image[0, :, :] = im input_image = input_image.cuda() output_feat = deep_net(input_image) output_feat = output_feat.features[:-2].view(1, 512, 49).transpose(1, 2) But I get the following error: AttributeError: 'Sequential' object has no attribute 'features' at the line: output_feat = output_feat.features[:-2].view(1, 512, 49).transpose(1, 2) Any idea why this does not work anymore, and how to fix it? Thanks!
It's because you are rebuilding deep_net with nn.Sequential, so it loses the attribute features. deep_net = models.vgg19(pretrained=True) deep_net.features Sequential( (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU(inplace=True) ... (36): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) ) deep_net = nn.Sequential(*list(deep_net.children())[:-2]) deep_net.features AttributeError: 'Sequential' object has no attribute 'features' The equivalent you want now is: list(deep_net.children())[0][:-2]
https://stackoverflow.com/questions/59512941/
Fast.Ai EarlyStoppingCallback does not work
callbacks = [EarlyStoppingCallback(learn, monitor='error_rate', min_delta=1e-5, patience=5)] learn.fit_one_cycle(30, callbacks=callbacks, max_lr=slice(1e-5,1e-3)) As you can see, I use patience=5, min_delta=1e-5, and monitor='error_rate'. My understanding is that patience tells how many epochs it waits when the improvement on the monitored value, in this case error_rate, is less than min_delta. So if my understanding is correct, it should not stop at epoch 6. Is my understanding wrong, or is this a bug in the fastai lib?
It keeps track of the best error rate and compares the min_delta to the difference between this epoch's value and that best value: class EarlyStoppingCallback(TrackerCallback): ... if self.operator(current - self.min_delta, self.best): self.best,self.wait = current,0 else: self.wait += 1 if self.wait > self.patience: print(f'Epoch {epoch}: early stopping') return {"stop_training":True} ... So self.wait only resets when that comparison against self.best succeeds, and increments otherwise; once it exceeds patience, training stops. In your run the comparison is np.greater(0.000638 - 1e-5, 0.000729) → False, so the counter keeps growing even though the error rate improved. There does seem to be an issue though, because clearly if the error rate jumped very high we would not want to assign that to self.best, and the point of this callback is to stop training if the error rate starts to increase - right now it is doing the opposite. So in TrackerCallback there might need to be a change from: mode_dict['auto'] = np.less if 'loss' in self.monitor else np.greater to mode_dict['auto'] = np.less if 'loss' in self.monitor or 'error' in self.monitor else np.greater
https://stackoverflow.com/questions/59517321/
torch.max slower with GPU than with CPU when specifying dimension
t1_h = torch.tensor(np.arange(100000), dtype=torch.float32) cuda0 = torch.device('cuda:0') t1_d = torch.tensor(np.arange(100000), dtype=torch.float32, device = cuda0) %timeit -n 10000 max_h = torch.max(t1_h, 0) %timeit -n 10000 max_d = torch.max(t1_d, 0) 10000 loops, best of 3: 144 µs per loop 10000 loops, best of 3: 985 µs per loop As you can see above, GPU takes much more time than CPU. But if I don't specify dimension for calculating max, then GPU is faster. %timeit -n 10000 max_h = torch.max(t1_h) %timeit -n 10000 max_d = torch.max(t1_d) 10000 loops, best of 3: 111 µs per loop 10000 loops, best of 3: 41.8 µs per loop I also tried with argmax instead of max but it is working correctly (GPU faster than CPU). %timeit -n 10000 cs_h = torch.argmax(t1_h, 0) %timeit -n 10000 cs_d = torch.argmax(t1_d, 0) 10000 loops, best of 3: 108 µs per loop 10000 loops, best of 3: 18.1 µs per loop Is there any reason why torch.max is slow on GPU after specifying dimension?
I discovered this myself and opened an issue in PyTorch. It looks like it'll be fixed soon - maybe in version 1.5 or 1.6 - but in the meantime the suggested workaround is: ii = a.argmax(0) maxval = a.gather(0, ii.unsqueeze(0)).squeeze(0)
https://stackoverflow.com/questions/59517626/
PyTorch CNN: Loss is unchanging
I have tried researching a situation for my unchanging loss, and all the answers I found were specific to the code. I just started learning about CNNs and majority of the CNN is from an example and modified to fit the needs of my dataset. I am trying to classify types of ECGs (normal, atrial fibrillation, other, noisy). When I try to train the CNN the loss remains the same, I think this is because my CNN does not learn and only outputs zeros. So far I have tried changing the learning rate/loss function and have made no difference. I am doing this on Google Colab so feel free to edit the code, and don't forget to change hardware acceleration under the runtime tab to GPU. Code: import os import cv2 import numpy as np from tqdm import tqdm from scipy.io import loadmat import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import matplotlib.pyplot as plt if torch.cuda.is_available():   device = torch.device("cuda:0")   print("Running on GPU -", device ) else:   device = torch.device("cpu")   print("Running on CPU -", device ) REBUILD_DATA = True # processing data class ECG_DATA():   ECG_LENGTH = 3000   LABEL_SIZE = 485   DATA = "physionet.org/files/challenge-2017/1.0.0/training/"   NORMAL = "physionet.org/files/challenge-2017/1.0.0/training/RECORDS-normal"    AF = "physionet.org/files/challenge-2017/1.0.0/training/RECORDS-af"   OTHER = "physionet.org/files/challenge-2017/1.0.0/training/RECORDS-other"   NOISY = "physionet.org/files/challenge-2017/1.0.0/training/RECORDS-noisy"   LABELS = {NORMAL: 0, AF: 1, OTHER:2, NOISY: 3}   trainingData = []   dataCount = {NORMAL: 0, AF: 0, OTHER: 0, NOISY: 0}   def make_training_data(self):     for records in self.LABELS:       with open(records) as label:         for ecgFile in tqdm(label):           ecg = loadmat(self.DATA+ecgFile[:-1]+".mat")["val"][0].tolist()           if records == self.NOISY:             #self.zero_padding(ecg)             for x in range(self.ECG_LENGTH, len(ecg), self.ECG_LENGTH):               if self.dataCount[records] <= self.LABEL_SIZE and x <= len(ecg):                 self.trainingData.append([np.array(ecg[x-self.ECG_LENGTH:x]), np.eye(len(self.LABELS))[self.LABELS[records]]])                 self.dataCount[records] += 1           elif self.dataCount[records] <= self.LABEL_SIZE and self.ECG_LENGTH <= len(ecg):             self.trainingData.append([np.array(ecg[:self.ECG_LENGTH]), np.eye(len(self.LABELS))[self.LABELS[records]]])             self.dataCount[records] += 1              print(self.dataCount)     np.random.shuffle(self.trainingData)     np.save("training_Data.npy", self.trainingData)      def zero_padding(self, ecg):     ecg += [0] * (self.ECG_LENGTH-(len(ecg)%self.ECG_LENGTH)) class Net(nn.Module):     def __init__(self):         super().__init__() # just run the init of parent class (nn.Module)         self.conv1 = nn.Conv1d(1, 32, 5) # input is 1 image, 32 output channels, 5x5 kernel / window         self.conv2 = nn.Conv1d(32, 64, 5) # input is 32, bc the first layer output 32. Then we say the output will be 64 channels, 5x5 kernel / window         self.conv3 = nn.Conv1d(64, 128, 5)         x = torch.randn(1,3000).view(-1,1,3000)         self._to_linear = None         self.convs(x)         self.fc1 = nn.Linear(self._to_linear, 512) #flattening.         self.fc2 = nn.Linear(512, 4) # 512 in, 2 out bc we're doing 2 classes (dog vs cat).     def convs(self, x):         x = F.max_pool1d(F.relu(self.conv1(x)), 1) # adjust shape of pooling?         
x = F.max_pool1d(F.relu(self.conv2(x)), 1) # x = F.max_pool1d(F.relu(self.conv1(x)), (2, 2))         x = F.max_pool1d(F.relu(self.conv3(x)), 1)         if self._to_linear is None:             self._to_linear = x[0].shape[0]*x[0].shape[1]         return x     def forward(self, x):         x = self.convs(x)         x = x.view(-1, self._to_linear)  # .view is reshape ... this flattens X before          x = F.relu(self.fc1(x))         x = self.fc2(x) # bc this is our output layer. No activation here.         return F.softmax(x, dim=1) net = Net().to(device) print(net) if REBUILD_DATA:   ECG = ECG_DATA()   ECG.make_training_data() training_data = np.load("training_Data.npy", allow_pickle=True) print(len(training_data)) optimizer = optim.Adam(net.parameters(), lr = 0.01) loss_function = nn.MSELoss().to(device) X = torch.Tensor([i[0] for i in training_data]) y = torch.Tensor([i[1] for i in training_data])   VAL_PCT = 0.1 val_size = int(len(X)*VAL_PCT) print(val_size) train_X = X[:-val_size] train_y = y[:-val_size] test_X = X[-val_size:] test_y = y[-val_size:] print(len(train_X), len(test_X)) BATCH_SIZE = 100 EPOCHS = 1 plot = [] for epoch in range(EPOCHS):     for i in tqdm(range(0, len(train_X), BATCH_SIZE)): # from 0, to the len of x, stepping BATCH_SIZE at a time. [:50] ..for now just to dev         #print(f"{i}:{i+BATCH_SIZE}")         batch_X = train_X[i:i+BATCH_SIZE].view(-1,1,3000).to(device)         batch_y = train_y[i:i+BATCH_SIZE].to(device)         net.zero_grad()         outputs = net(batch_X)         loss = loss_function(outputs, batch_y)         loss.backward()         optimizer.step()    # Does the update          plot.append([epoch, float(loss)])     print(f"\nEpoch: {epoch}. Loss: {loss}") plot = list(map(list, zip(*plot))) plt.plot(plot[0], plot[1])
At the end of your network there is a softmax layer, but in your training you use MSELoss. This tells me that your model is outputting probabilities, but you are then computing the loss as if the outputs were continuous regression targets. I suspect this is a reason for the faulty loss. As mentioned in the comments, since your task is classification you should use CrossEntropyLoss; note that nn.CrossEntropyLoss applies log-softmax internally, so the final softmax layer should be dropped and the network should return raw logits.
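A sketch of the change, reusing the variable names from the question (CrossEntropyLoss in PyTorch expects raw logits and integer class indices, so the final softmax and the one-hot targets both go away):

```python
import torch
import torch.nn as nn

loss_function = nn.CrossEntropyLoss()

# in Net.forward(), return self.fc2(x) directly instead of F.softmax(x, dim=1)
outputs = net(batch_X)                  # raw logits, shape (N, 4)
targets = torch.argmax(batch_y, dim=1)  # convert the one-hot labels to class indices
loss = loss_function(outputs, targets)
```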
https://stackoverflow.com/questions/59525926/
FastAI PyTorch train_loss and valid_loss look very good, but the model recognizes nothing
Update 1 I’m thinking that it might be the mistake in my detector code. So, here is my code for using the trained learner/model to predict images. import requests import cv2 bytes = b'' stream = requests.get(url, stream=True) bytes = bytes + stream.raw.read(1024) # I have my mobile video streaming to this url. the resolution for the video streaming is: 2048 x 1080 a = bytes.find(b'\xff\xd8') b = bytes.find(b'\xff\xd9') if a != -1 and b != -1: jpg = bytes[a:b+2] bytes = bytes[b+2:] img = cv2.imdecode(np.fromstring(jpg, dtype=np.uint8), cv2.IMREAD_COLOR) processedImg = Image(pil2tensor(img, np.float32).div_(255)) predict = learn.predict(processedImg) self.objectClass = predict[0].obj and I read the document of imdecode() method, it returns image in B G R order. Could it because of different channel data used when in training and detecting? Or Could it because that I trained with image size 299 x 450, but when detecting the input image size from the video streaming is 2048 x 1080 without resizing it? new to FastAi, ML and Python. I trained my “Birds Or Not-Birds” model. The train_loss, valid_loss and error_rate were improving. If I only trained 3 epochs, then the model worked(meaning it can recognize whether there are birds or no birds in images), then I increased to 30 epochs, all metrics look very good, but the model does not recognize things anymore, whatever images I input, the model always return Not-Birds. here is the training output: Here are the plots of learn.recorder Here is my code: from fastai.vision import * from fastai.metrics import error_rate from fastai.callbacks import EarlyStoppingCallback,SaveModelCallback from datetime import datetime as dt from functools import partial path_img = '/minidata' train_folder = 'train' valid_folder = 'validation' tunedTransform = partial(get_transforms, max_zoom=1.5) data = ImageDataBunch.from_folder(path=path_img, train=train_folder, valid=valid_folder, ds_tfms=tunedTransform(), size=(299, 450), bs=40, classes=['birds', 'others'], resize_method=ResizeMethod.SQUISH) data = data.normalize(imagenet_stats) learn = cnn_learner(data, models.resnet50, metrics=error_rate) learn.fit_one_cycle(30, max_lr=slice(5e-5,5e-4)) learn.recorder.plot_lr() learn.recorder.plot() learn.recorder.plot_losses() Here is my dataset folder structure: minidata train birds (7500 images) others (around 7300 images) validation birds (1008 images) others (around 872 images)
Your learning rate schedule is sub-optimal for this dataset. Try to first figure out the best learning rate for this network and dataset with the LR finder. This can be done by exploring the loss behavior for different learning rates with learn.lr_find() learn.recorder.plot() Edit: It looks like you are only re-training the head of your network. Instead, try unfreezing and training more of the network, e.g. learn.unfreeze() (or learn.freeze_to(-2) to unfreeze just the last two layer groups).
https://stackoverflow.com/questions/59548794/
Language translation using TorchText (PyTorch)
I have recently started with ML/DL using PyTorch. The following pytorch example explains how we can train a simple model for translating from German to English. https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html However I am confused on how to use the model for running inference on custom input. From my understanding so far : 1) We will need to save the "vocab" for both German (input) and English(output) [using torch.save()] so that they can be used later for running predictions. 2) At the time of running inference on a German paragraph, we will first need to convert the German text to tensor using the german vocab file. 3) The above tensor will be passed to the model's forward method for translation 4) The model will again return a tensor for the destination language i.e., English in current example. 5) We will use the English vocab saved in first step to convert this tensor back to English text. Questions: 1) If the above understanding is correct, can the above steps be treated as a generic approach for running inference on any language translation model if we know the source and destination language and have the vocab files for the same? Or can we use the vocab provided by third party libraries like spacy? 2) How do we convert the output tensor returned from model back to target language? I couldn't find any example on how to do that. The above blog explains how to convert the input text to tensor using source-language vocab. I could easily find various examples and detailed explanation for image/vision models but not much for text.
Yes, globally what you are saying is correct, and of course you can use any vocab, e.g. one provided by spacy. To convert a tensor back into natural text, one of the most used techniques is to keep both a dict that maps indexes to words and another dict that maps words to indexes; the code below builds both (note index starts at 1, since the defaultdict reserves 0 for unknown tokens): from collections import defaultdict tok2idx = defaultdict(lambda: 0) idx2tok = {} index = 1 for seq in sequences: for tok in seq: if not tok in tok2idx: tok2idx[tok] = index idx2tok[index] = tok index += 1 Here sequences is a list of all the sequences (i.e. sentences in your dataset). If you only have a list of words or tokens, you can adapt the code easily by keeping only the inner loop.
https://stackoverflow.com/questions/59549980/
Numpy / PyTorch - how to assign value with indices in different dimensions?
Suppose I have a matrix and some indices a = np.array([[1, 2, 3], [4, 5, 6]]) a_indices = np.array([[0,2], [1,2]]) Is there any efficient way to achieve following operation? for i in range(2): a[i, a_indices[i]] = 100 # a: np.array([[100, 2, 100], [4, 100, 100]])
Use np.put_along_axis - In [111]: np.put_along_axis(a, a_indices, 100, axis=1) In [112]: a Out[112]: array([[100, 2, 100], [ 4, 100, 100]]) Alternatively, if you want to do it the explicit way, i.e. with integer-based indexing - In [115]: a[np.arange(len(a_indices))[:,None], a_indices] = 100
https://stackoverflow.com/questions/59551458/
Running a PyTorch dataloader/Dataset on multiple distributed CPUs
I wonder if there is a way to distribute the dataloader/Dataset across many CPUs, even when using a single GPU. Specifically, I would like to have a Dataset class whose __getitem__ function is distributed across many different CPUs (using MPI maybe, but any other way is also good). Thanks EDIT My title was erroneously edited; I am not trying to distribute the model itself, I only want to distribute the data loading/parsing for the model. EDIT 2 Some interesting discussion in this direction is available here
You can do this, of course, but mind you - it is not always very effective for general machine learning needs, due to the hefty communication costs. Use DistributedDataParallel: Implements distributed data parallelism that is based on the torch.distributed package at the module level. This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged. In practice, I'd recommend you utilize the pytorch_lightning package to reduce some of the boilerplate code you need to write for this to work. References: DistributedDataParallel, pytorch_lightning
https://stackoverflow.com/questions/59552122/
Padding a tensor until reaching required size
I'm working with tensors with a shape of (X, 42), where X can range between 50 and 70. I want to pad each tensor I get until it reaches a size of (70, 42). Is there any way to do this when the initial size is a variable X? Thanks for the help!
Use torch.nn.functional.pad - Pads tensor. import torch import torch.nn.functional as F source = torch.rand((3,42)) source.shape >>> torch.Size([3, 42]) # here, pad = (padding_left, padding_right, padding_top, padding_bottom) source_pad = F.pad(source, pad=(0, 0, 0, 70 - source.shape[0])) source_pad.shape >>> torch.Size([70, 42])
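Wrapped as a small helper for the variable first dimension (a sketch; pads the bottom with zeros):

```python
import torch
import torch.nn.functional as F

def pad_rows(t: torch.Tensor, target_rows: int = 70) -> torch.Tensor:
    # pad = (left, right, top, bottom); only the bottom of dim 0 is padded
    return F.pad(t, pad=(0, 0, 0, target_rows - t.shape[0]))

x = torch.rand(57, 42)
print(pad_rows(x).shape)  # torch.Size([70, 42])
```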
https://stackoverflow.com/questions/59553580/
Why does loss decrease but accuracy decreases too (Pytorch, LSTM)?
I have built a model with LSTM - Linear modules in Pytorch for a classification problem (10 classes). I am training the model, and for each epoch I output the loss and accuracy on the training set. The output is as follows: epoch: 0 start! Loss: 2.301875352859497 Acc: 0.11388888888888889 epoch: 1 start! Loss: 2.2759320735931396 Acc: 0.29 epoch: 2 start! Loss: 2.2510263919830322 Acc: 0.4872222222222222 epoch: 3 start! Loss: 2.225804567337036 Acc: 0.6066666666666667 epoch: 4 start! Loss: 2.199286699295044 Acc: 0.6511111111111111 epoch: 5 start! Loss: 2.1704766750335693 Acc: 0.6855555555555556 epoch: 6 start! Loss: 2.1381614208221436 Acc: 0.7038888888888889 epoch: 7 start! Loss: 2.1007182598114014 Acc: 0.7194444444444444 epoch: 8 start! Loss: 2.0557992458343506 Acc: 0.7283333333333334 epoch: 9 start! Loss: 1.9998993873596191 Acc: 0.7427777777777778 epoch: 10 start! Loss: 1.9277743101119995 Acc: 0.7527777777777778 epoch: 11 start! Loss: 1.8325848579406738 Acc: 0.7483333333333333 epoch: 12 start! Loss: 1.712520718574524 Acc: 0.7077777777777777 epoch: 13 start! Loss: 1.6056485176086426 Acc: 0.6305555555555555 epoch: 14 start! Loss: 1.5910680294036865 Acc: 0.4938888888888889 epoch: 15 start! Loss: 1.6259561777114868 Acc: 0.41555555555555557 epoch: 16 start! Loss: 1.892195224761963 Acc: 0.3655555555555556 epoch: 17 start! Loss: 1.4949012994766235 Acc: 0.47944444444444445 epoch: 18 start! Loss: 1.4332982301712036 Acc: 0.48833333333333334 For the loss function I have used nn.CrossEntropyLoss with the Adam optimizer. Although the loss is constantly decreasing, the accuracy increases until epoch 10 and then for some reason begins to decrease. Why is this happening? Even if my model is overfitting, doesn't that mean the accuracy should be high? (I am always speaking of the accuracy and loss measured on the training set, not the validation set.)
Decreasing loss does not always mean improving accuracy. I will try to address this for the cross-entropy loss: CE-loss = sum(-log p(y=i)) Note that the loss decreases when the probability of the correct class increases, and increases when that probability decreases. Now, when you compute the average loss, you are averaging over all the samples; some of the probabilities may increase and some of them may decrease, making the overall loss smaller while the accuracy also drops.
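A toy illustration of this effect (hypothetical predicted probabilities for the true class of two samples):

```python
import torch

# Epoch A: both samples barely correct -> accuracy 100%
p_a = torch.tensor([0.51, 0.51])
# Epoch B: one sample now wrong, the other very confident -> accuracy 50%
p_b = torch.tensor([0.49, 0.95])

print((-p_a.log()).mean())  # ~0.673
print((-p_b.log()).mean())  # ~0.382: loss went down while accuracy dropped
```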
https://stackoverflow.com/questions/59554880/
Pytorch on Google VM (Linux) does not recognize GPU
I created a Google VM instance using this available image: c1-deeplearning-common-cu100-20191226 Description Google, Deep Learning Image: Base, m39 (with CUDA 10.0), A Debian based image with CUDA 10.0 I then installed Anaconda onto this VM, then installed Pytorch using the following command line as recommended by the Pytorch website: conda install pytorch torchvision cudatoolkit=10.1 -c pytorch (this corresponds to Linux, Python 3.7, CUDA 10.1) From Python, I ran this code to check the GPU detection: import torch torch.cuda.is_available() False From the nvidia-smi tool, this is the result even after the main body of code is running the training: (base) redexces.bf@tensorflow-1x-2x:~$ nvidia-smi Thu Jan 2 01:33:10 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 410.104 Driver Version: 410.104 CUDA Version: 10.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla P4 Off | 00000000:00:04.0 Off | 0 | | N/A 37C P0 22W / 75W | 0MiB / 7611MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ Clearly, there are no running processes nor any memory allocated. This problem appears to be related to Pytorch only; the same VM also has Tensorflow-gpu installed in a separate conda environment which recognizes the GPU and utilizes it as I would expect. Am I missing any pieces? Again the same CUDA driver and image are working fine for tensorflow.
I was able to resolve the issue. Not being a computer science guy, I figured that it could be an NVIDIA driver compatibility issue. Since Pytorch was built against the CUDA 10.1 driver, and the deep learning image had CUDA 10.0 installed, I created another VM instance, but this time, instead of using the public image noted earlier, I used the gcloud command line to specify a deep learning image with the CUDA 10.1 driver. This made it all work as expected.
https://stackoverflow.com/questions/59557542/
Merge two tensors in pytorch
Tensor a: tensor([[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]) Tensor b: tensor([4,4,4,4]) Question 1: How to merge two tensors and get result c: tensor([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) Question 2: How to divide tensor c and get original a and b.
Question 1: Merge two tensors - torch.cat((a, b.unsqueeze(1)), 1) >>> tensor([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]) First, we use torch.unsqueeze to add a singleton dim to b so its number of dims matches a's for concatenation, then torch.cat concatenates tensors a and b. Question 2: a = c[:, :-1] a >>> tensor([[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]]) b = c[:, -1] b >>> tensor([4, 4, 4, 4])
https://stackoverflow.com/questions/59558460/
What is the difference between model.to(device) and model=model.to(device)?
Suppose the model is originally stored on CPU, and then I want to move it to GPU0, then I can do: device = torch.device('cuda:0') model = model.to(device) # or model.to(device) What is the difference between those two lines?
No semantic difference. nn.Module.to function moves the model to the device. But be cautious. For tensors (documentation): # tensor a is in CPU device = torch.device('cuda:0') b = a.to(device) # a is still in CPU! # b is in GPU! # a and b are different For models (documentation): # model a is in CPU device = torch.device('cuda:0') b = a.to(device) # a and b are in GPU # a and b point to the same model
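You can verify this yourself; a small sketch using a dtype conversion instead of a device move, so it runs without a GPU (the in-place-versus-copy semantics of .to() are the same):

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 3)
ret = model.to(torch.float64)
print(ret is model)  # True: Module.to() converts the module in place and returns self

t = torch.zeros(3)
ret = t.to(torch.float64)
print(ret is t)      # False: Tensor.to() returns a new tensor
```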
https://stackoverflow.com/questions/59560043/
How to apply the bagging method to an LSTM neural network using PyTorch?
As the title says, my question is how to apply the bagging method to an LSTM using the PyTorch library. I have built one using TensorFlow in Python, but now, to deploy it in a system using C and C++, the requirement is that I need to use PyTorch. Alternatively, is there any recommendation for applying the model built in TensorFlow directly to real prediction in the system, without needing PyTorch? Please help!
If you want to create an ensemble in PyTorch, you can train multiple models separately and then define a class to use them together: class MyEnsemble(nn.Module): def __init__(self, firstModel, secondModel): super(MyEnsemble, self).__init__() self.firstModel = firstModel self.secondModel = secondModel self.classifier = nn.Linear(in_features, n_classes) # define accordingly self.relu = nn.ReLU() def forward(self, x1, x2): x1 = self.firstModel(x1) x2 = self.secondModel(x2) x = torch.cat((x1, x2), dim=1) x = self.classifier(self.relu(x)) return x If you want to use your TensorFlow model instead, there are multiple ways of doing so. One can export it to C++: Tensorflow -> C++
https://stackoverflow.com/questions/59560372/
How can I optimize the 5-layer loop using functions provided by torch?
x is a tensor with the shape (16, 10, 4, 25, 53), and y has the same size as x. mean's shape is (25, 53), and jc and ac both have size (16, 10, 4). How can I optimize the following expression with torch functions? for k in range(x.size()[0]): for s in range(x.size()[1]): for u in range(x.size()[2]): for i in range(x.size()[3]): for j in range(x.size()[4]): num1 += (x[k][s][u][i][j] - mean[i][j] - jc[k][s][u]) * (y[k][s][u][i][j] - mean[i][j] - ac[k][s][u]) num2 += (y[k][s][u][i][j] - mean[i][j] - jc[k][s][u]) ** 2 num3 += (y[k][s][u][i][j] - mean[i][j] - ac[k][s][u]) ** 2
I think you are looking to broadcast your tensors along singleton dimensions. First, you need the numbers of dimensions to match, so if mean is of shape (25, 53) then mean[None, None, None, ...] is of shape (1, 1, 1, 25, 53) - you did not change anything in the underlying data, but the number of dimensions is now 5 instead of 2, and these singleton dimensions can be broadcast to the corresponding dimensions of x and y. Optimized code using broadcasting will look something like: num1 = ((x - mean[None, None, None, ...] - jc[..., None, None]) * (y - mean[None, None, None, ...] - ac[..., None, None])).sum() num2 = ((y - mean[None, None, None, ...] - jc[..., None, None]) ** 2).sum() # shouldn't it be x here? num3 = ((y - mean[None, None, None, ...] - ac[..., None, None]) ** 2).sum()
https://stackoverflow.com/questions/59561002/
What does the interleave_keys() function in torchtext library do exactly?
You can find this function in the torchtext/data/utils.py file. I have given the official code with its docstring below: def interleave_keys(a, b): """Interleave bits from two sort keys to form a joint sort key. Examples that are similar in both of the provided keys will have similar values for the key defined by this function. Useful for tasks with two text fields like machine translation or natural language inference. """ def interleave(args): return ''.join([x for t in zip(*args) for x in t]) return int(''.join(interleave(format(x, '016b') for x in (a, b))), base=2) A more detailed explanation would be helpful to understand how it returns an integer based on how similar the two given keys are. (The format function used inside it is the commonly used Python builtin.)
Upon breaking down the function I was able to figure out what it does. format(x, '016b') converts the integer (a and b, which in my case are the numbers of words in two sentences) to a 16-digit binary representation. The interleave function then takes the bits at the same position from each representation and joins them pairwise. For easy understanding, let's assume 4-digit binaries for 2 and 11: 2's binary representation is: 0 0 1 0 11's binary representation is: 1 0 1 1 So the output here would be 01001101 (01, 00, 11, 01 combined), which when converted to an integer gives 77.
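You can check the walk-through against the real function; the leading zeros of the 16-bit representations vanish when the interleaved string is converted back to an integer, so the 4-bit toy result carries over:

```python
def interleave_keys(a, b):
    def interleave(args):
        return ''.join([x for t in zip(*args) for x in t])
    return int(''.join(interleave(format(x, '016b') for x in (a, b))), base=2)

print(interleave_keys(2, 11))  # 77, matching the 4-bit walk-through above
```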
https://stackoverflow.com/questions/59564451/
Is there any method to generate a piecewise function for tensors in pytorch?
I want to get a piecewise function like this for tensors in pytorch, but I don't know how to define it. I currently use a very clumsy method that does not seem to work in my code: def trapezoid(self, X): Y = torch.zeros(X.shape) Y[X % (2 * pi) < (0.5 * pi)] = (X[X % (2 * pi) < (0.5 * pi)] % (2 * pi)) * 2 / pi Y[(X % (2 * pi) >= (0.5 * pi)) & (X % (2 * pi) < 1.5 * pi)] = 1.0 Y[X % (2 * pi) >= (1.5 * pi)] = (X[X % (2 * pi) >= (1.5 * pi)] % (2 * pi)) * (-2 / pi) + 4 return Y Could you help me find out how to design the function trapezoid, so that for a tensor X I can get the result directly by calling trapezoid(X)?
Since your function has period 2π we can focus on [0,2π]. Since it's piecewise linear, it's possible to express it as a mini ReLU network on [0,2π] given by: trapezoid(x) = 1 - relu(x-1.5π)/0.5π - relu(0.5π-x)/0.5π Thus, we can code the whole function in Pytorch like so: import torch import torch.nn.functional as F from torch import tensor from math import pi def trapezoid(X): # Left corner position, right corner position, height a, b, h = tensor(0.5*pi), tensor(1.5*pi), tensor(1.0) # Take remainder mod 2*pi for periodicity X = torch.remainder(X,2*pi) return h - F.relu(X-b)/a - F.relu(a-X)/a Plotting to double check produces the correct picture: import matplotlib.pyplot as plt X = torch.linspace(-10,10,1000) Y = trapezoid(X) plt.plot(X,Y) plt.title('Pytorch Trapezoid Function')
https://stackoverflow.com/questions/59578581/
PyTorch: is there a definitive training loop similar to Keras' fit()?
I'm coming over from Keras to PyTorch, and one of the surprising things I've found is that I'm supposed to implement my own training loop. In Keras, there is a de facto fit() function that: (1) runs gradient descent and (2) collects a history of metrics for loss and accuracy over both the training set and validation set. In PyTorch, it appears that the programmer needs to implement the training loop. Since I'm new to PyTorch, I don't know if my training loop implementation is correct. I just want to compare apples-to-apples loss and accuracy metrics with what I'm seeing in Keras. I've already read through: the official PyTorch 60-minute blitz, where they provide a sample training loop. official PyTorch example code, where I've found the training loop placed in-line with other code. the O'Reilly book Programming PyTorch for Deep Learning with its own training loop. Stanford CS230 sample code. various blog posts (e.g. here and here). So I'm wondering: is there a definitive, universal training loop implementation that does the same thing and reports the same numbers as the Keras fit() function? My points of frustration: Pulling data out of the dataloader is not consistent between image data and NLP data. Correctly computing loss and accuracy is not consistent in any sample code I've seen. Some code examples use Variable, while others do not. Unnecessarily detailed: moving data to/from the GPU; knowing when to call zero_grad(). For what it's worth, here is my current implementation. Are there any obvious bugs? import time def train(model, optimizer, loss_fn, train_dl, val_dl, epochs=20, device='cuda'): ''' Runs training loop for classification problems. Returns Keras-style per-epoch history of loss and accuracy over training and validation data. Parameters ---------- model : nn.Module Neural network model optimizer : torch.optim.Optimizer Search space optimizer (e.g. Adam) loss_fn : Loss function (e.g. nn.CrossEntropyLoss()) train_dl : Iterable dataloader for training data. val_dl : Iterable dataloader for validation data. epochs : int Number of epochs to run device : string Specifies 'cuda' or 'cpu' Returns ------- Dictionary Similar to Keras' fit(), the output dictionary contains per-epoch history of training loss, training accuracy, validation loss, and validation accuracy. ''' print('train() called: model=%s, opt=%s(lr=%f), epochs=%d, device=%s\n' % \ (type(model).__name__, type(optimizer).__name__, optimizer.param_groups[0]['lr'], epochs, device)) history = {} # Collects per-epoch loss and acc like Keras' fit(). 
history['loss'] = [] history['val_loss'] = [] history['acc'] = [] history['val_acc'] = [] start_time_sec = time.time() for epoch in range(epochs): # --- TRAIN AND EVALUATE ON TRAINING SET ----------------------------- model.train() train_loss = 0.0 num_train_correct = 0 num_train_examples = 0 for batch in train_dl: optimizer.zero_grad() x = batch[0].to(device) y = batch[1].to(device) yhat = model(x) loss = loss_fn(yhat, y) loss.backward() optimizer.step() train_loss += loss.data.item() * x.size(0) num_train_correct += (torch.max(yhat, 1)[1] == y).sum().item() num_train_examples += x.shape[0] train_acc = num_train_correct / num_train_examples train_loss = train_loss / len(train_dl.dataset) # --- EVALUATE ON VALIDATION SET ------------------------------------- model.eval() val_loss = 0.0 num_val_correct = 0 num_val_examples = 0 for batch in val_dl: x = batch[0].to(device) y = batch[1].to(device) yhat = model(x) loss = loss_fn(yhat, y) val_loss += loss.data.item() * x.size(0) num_val_correct += (torch.max(yhat, 1)[1] == y).sum().item() num_val_examples += y.shape[0] val_acc = num_val_correct / num_val_examples val_loss = val_loss / len(val_dl.dataset) print('Epoch %3d/%3d, train loss: %5.2f, train acc: %5.2f, val loss: %5.2f, val acc: %5.2f' % \ (epoch+1, epochs, train_loss, train_acc, val_loss, val_acc)) history['loss'].append(train_loss) history['val_loss'].append(val_loss) history['acc'].append(train_acc) history['val_acc'].append(val_acc) # END OF TRAINING LOOP end_time_sec = time.time() total_time_sec = end_time_sec - start_time_sec time_per_epoch_sec = total_time_sec / epochs print() print('Time total: %5.2f sec' % (total_time_sec)) print('Time per epoch: %5.2f sec' % (time_per_epoch_sec)) return history
Short answer: there is no equivalent training loop for PyTorch and TF.keras, and there likely never will be one. First of all, the training loop is syntactic sugar that is supposed to make one's life easier. From my point of view, "making life easier" is the motto of the TF.keras framework, and this is the main reason it has one. A training loop cannot be formalized as a well-defined practice; it can vary a lot depending on the task/dataset/procedure/metric/you_name_it, and it would require a lot of effort to match all the options for the two frameworks. Furthermore, defining an interface for the training loop in Pytorch might be too restrictive for many actual users of the framework. Matching the outputs of the networks would require matching the behavior of every operation within the two frameworks, which is impossible. First of all, the frameworks don't necessarily provide the same sets of operations, and operations can be grouped into higher-level abstractions differently. Also, some common functions like sigmoid or BatchNorm might look mathematically well defined on paper, but in reality have dozens of implementation-specific details. Moreover, when improvements are introduced to the operations, it is up to the community to integrate these updates into the main framework distributions or plainly ignore them. Needless to say, the developers of the two frameworks make these decisions independently and likely have different motivations behind them. To sum it all up, matching the high-level details of the two frameworks would require enormous effort and would probably be very disruptive for the existing users.
https://stackoverflow.com/questions/59584457/
Pass user specified parameters to DataLoader
I am using U - Net and implementing the weighting technique described in the papers from 2015 (U-Net: Convolutional Networks for Biomedical Image Segmentation) and 2019 (U-Net – Deep Learning for Cell Counting, Detection, and Morphometry). In that technique there is a variance σ and a weight w_0. I would like, especially the σ, to be a learnable parameter instead of guessing which value is best from dataset to dataset. From what I found, I can do this using nn.Parameter. To use the learned σ from epoch to epoch, I need somehow to pass this new value to the get_item function of the DataSet through the DataLoader. My current take on this, is to extend torch.utils.data.DataLoader where the new init has an extra parameter accepting the user specified/learnable parameters. Given the source code of torch.utils.data.DataLoader, I do not understand where and how the DataLoader calls the DataSet instance and hence to pass these parameters. Code wise, in the DataSet definition there is the function def __getitem__(self, index): that I can change as def __getitem__(self, index, sigma): and to make use of the updated, newly learned σ. My problem is that during training, I iterate through training dataset as for epoch in range( checkpoint[ 'epoch'], num_epochs): .... for ii, ( X, y, y_weight, fname) in enumerate( dataLoader[ phase]): In that enumeration of DataLoader, how can I pass the new σ to the DataLoader such that the DataLoader will pass it to the DataSet getitem function mentioned above? EDIT Currently, I define inside the DataSet class a parameter sigma class MedicalImageDataset( Dataset): def __init__(self, fname, img_transform = None, mask_transform = None, weight_transform = None, sigma = 8): ... self.sigma = sigma def __getitem__(self, index): sigma = self.sigma ... which I update through the DataLoader as dataLoader[ 'train'].dataset.sigma = model.sigma where, model.sigma is a custom parameter defined as model.register_parameter( name = 'sigma', param = torch.nn.Parameter( torch.tensor( 16, dtype = torch.float16), requires_grad = True)) after creating the model. My problem is, that model.sigma doesn't look being updated from epoch to epoch. Specifically, is the same as the initial value. Why is this? Having a look at optimizer.state_dict() I couldn't find any parameter named 'sigma', whereas I can find one in model.named_parameters(). Finally, this parameter sigma is not attached to any layer, it's kinda "free".
What you need to do is to set sigma as an attribute of the Dataset and change it between epochs. For the dataset definition: class UNetDataset(object): def __init__(self, ..., sigma=5): self.sigma = sigma Now, within __getitem__, you can use the sigma value via self.sigma. Then, within your training cycle, after every epoch, you can change the sigma value by setting the sigma attribute of the Dataset: for epoch in range(num_epochs): dataset.sigma = # whatever value you want for i, (x, y) in enumerate(DataLoader):
https://stackoverflow.com/questions/59586493/
How could I know whether a function in Pytorch allocates new memory or not?
Recently, I got stuck in a situation where, in my model, the input data consumes a lot of memory, and this leads to high memory usage when I operate on the data in my network layers. I really want to know whether an operation will allocate a new memory block or not. I checked the pytorch docs but only found how to use the functions; I wonder if there is a doc, website, or anything else official to help me out. For example, do functions like view(), permute() or contiguous() allocate a new memory block or not, and how do you know? It has really caught me out; thanks for helping.
If you are running on a GPU, the best way to check memory consumption is to use the Linux command nvidia-smi. You can call this in a jupyter notebook using !nvidia-smi. This way, after any Pytorch command, you can check whether new memory has been allocated or not.
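A programmatic alternative, not mentioned in the original answer, is torch.cuda.memory_allocated(), which reports the bytes PyTorch has currently allocated on the GPU; a sketch (assumes a CUDA device is available):

```python
import torch

x = torch.randn(1024, 1024, device='cuda')

before = torch.cuda.memory_allocated()
y = x.view(-1)                         # view() shares storage: no new allocation
print(torch.cuda.memory_allocated() - before)  # 0

before = torch.cuda.memory_allocated()
z = x.permute(1, 0).contiguous()       # contiguous() on a permuted tensor copies
print(torch.cuda.memory_allocated() - before)  # > 0 (about 4 MB here)
```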
https://stackoverflow.com/questions/59591601/
How can I get hidden_states from BertForSequenceClassification?
I read the official tutorial(https://huggingface.co/transformers/model_doc/bert.html) and tried to set config, but it doesn't work. from transformers import PretrainedConfig model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2) model.config.output_hidden_states = True model.load_state_dict(torch.load('../parameter.pkl')) model.cuda() output = model(input)
The output should be a list that holds the hidden states. I suspect that because you are loading parameter.pkl, which may not have been saved with output hidden states enabled, the load is overwriting your config.output_hidden_states to False. See what happens if you set it to True after loading the state_dict.
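A sketch of that ordering; the exact output format depends on your transformers version, so treat the index below as an assumption and check len(output) on your setup:

```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2, output_hidden_states=True)
model.load_state_dict(torch.load('../parameter.pkl'))
model.eval()

output = model(input)       # 'input' as prepared in the question
hidden_states = output[-1]  # tuple of per-layer hidden states (embeddings + each layer)
```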
https://stackoverflow.com/questions/59592736/
What is the correct way to measure the total execution time for a pytorch function running on GPU?
Following is an example code showing what I am trying to measure. Here I am using time.perf_counter() to measure time. Is this the correct way to measure execution time in this scenario? If not, what is the correct way? My concern is, GPU evaluations are asynchronous and GPU execution might not be completed when ExecTime is measured below. import torch import torch.nn.functional as F import time Device = torch.device("cuda:0") ProblemSize = 100 NumChannels = 5 NumFilters = 96 ClassType = torch.float32 X = torch.rand(1, NumChannels, ProblemSize, ProblemSize, dtype=ClassType).to(Device) weights = torch.rand(NumFilters, NumChannels, 10, 10, dtype=ClassType).to(Device) #warm up Y = F.conv2d(X, weights) Y = F.conv2d(X, weights) #time t = time.perf_counter() Y = F.conv2d(X, weights) ExecTime = time.perf_counter() - t
I think you are looking for PyTorch's bottleneck profiler.
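Your concern about asynchronous execution is valid; a common sketch for wall-clock timing is to synchronize before and after the timed region (torch.cuda.synchronize() blocks until all queued GPU work has finished; X and weights are the tensors from the question):

```python
import time
import torch
import torch.nn.functional as F

torch.cuda.synchronize()       # drain any pending GPU work first
t = time.perf_counter()
Y = F.conv2d(X, weights)
torch.cuda.synchronize()       # block until the conv kernel finishes
ExecTime = time.perf_counter() - t
```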
https://stackoverflow.com/questions/59596483/
About pytorch learning rate scheduler
here is my code optimizer = optim.SGD(net.parameters(), lr=0.1) scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5) for i in range(15): lr = scheduler.get_lr()[0] lr1 = optimizer.param_groups[0]["lr"] print(i, lr, lr1) scheduler.step() And here is the result 0 0.1 0.1 1 0.1 0.1 2 0.1 0.1 3 0.1 0.1 4 0.1 0.1 5 0.025 0.05 6 0.05 0.05 7 0.05 0.05 8 0.05 0.05 9 0.05 0.05 10 0.0125 0.025 11 0.025 0.025 12 0.025 0.025 13 0.025 0.025 14 0.025 0.025 We can see that when scheduler.step() is applied, the learning rate reported by get_lr() first decreases by a factor of 0.25, then bounces back to a factor of 0.5. Is this a problem with scheduler.get_lr() or a problem with scheduler.step()? About the environment: python=3.6.9, pytorch=1.1.0. In addition, I can't reproduce this problem when pytorch=0.4.1 is used.
Yes, the "problem" is in the use of get_lr(). To get the current LR, what you need is actually the get_last_lr(). If you take a look at the implementation: def get_lr(self): if not self._get_lr_called_within_step: warnings.warn("To get the last learning rate computed by the scheduler, " "please use `get_last_lr()`.", UserWarning) if (self.last_epoch == 0) or (self.last_epoch % self.step_size != 0): return [group['lr'] for group in self.optimizer.param_groups] return [group['lr'] * self.gamma for group in self.optimizer.param_groups] When it is in the step=5, it does not satisfy the conditions (because step_size=5), and it will return the lr * gamma. The awkward thing is that you should be getting a warning when you call get_lr() out the the step() function (as you can see in the implementation above) and apparently you didn't. The warning was added only 3 months ago, so you won't have it on v1.1.0. For the sake of completeness, what the step() method does is that it adds 1 to the last_epoch and updates the LR by calling the get_lr() function (see here): self.last_epoch += 1 values = self.get_lr()
https://stackoverflow.com/questions/59599603/
Method for feeding a multi-class image dataset into Pytorch, where folder names can be used as the labels?
I want to feed a multiclass image dataset into Pytorch. In the main folder of the dataset, I have 15 folders with different names, and I want to use the folder names as the labels. For example, one folder is named Aeroplanes and contains 1245 images; another folder is named Cars and contains 997 images of cars; likewise, each folder has a different number of images. Now I want to load them to train and test my model, but I don't have separate folders for training and testing. I want to use the folder names as labels and also want to split the dataset into training and testing with an equal ratio. Your guidance, in this case, will be appreciated. Thanks
To split your dataset into train and test datasets you could use random_split function: import torch from torchvision import datasets, transforms from torch.utils import data import numpy as np dataset = datasets.ImageFolder('path_to_dataset', transform=transforms.ToTensor()) lengths = [int(np.ceil(0.5*len(dataset))), int(np.floor(0.5*len(dataset)))] train_set, test_set = data.random_split(dataset, lengths) train_dataloader = data.DataLoader(train_set, batch_size=...) test_dataloader = data.DataLoader(test_set, batch_size=...) In case you want to perform separate transformations on your train and test datasets look here: How to use different data augmentation for Subsets in PyTorch
https://stackoverflow.com/questions/59603064/
Pytorch mask tensor with boolean numpy array
I have an 84x84 pytorch tensor named target. I need to mask it with an 84x84 boolean numpy array consisting of True and False. When I do target = target[mask], I get the error TypeError: can't convert np.ndarray of type numpy.bool_. The only supported types are: double, float, float16, int64, int32, and uint8. Surprisingly, I get this error only when running on a GPU; when running on a CPU, everything works fine. How can I fix this?
I think there is some confusion with the types, but this works: import torch tensor = torch.randn(84,84) c = torch.randn(tensor.size()).bool() c[1, 2:5] = False x = tensor[c].size() For testing I created a tensor with random values, then set 3 elements to False. In the last step the result has size 7053, i.e. 84^2 - 3. Hope that helps somehow.
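Since the asker's mask is a boolean numpy array, an alternative sketch is to convert it to a torch bool tensor on the same device as the target first (on recent PyTorch versions torch.from_numpy handles bool arrays directly):

```python
import numpy as np
import torch

mask_np = np.random.rand(84, 84) > 0.5   # stand-in for the boolean numpy mask
target = torch.randn(84, 84)             # add .to('cuda') in the GPU case

mask = torch.from_numpy(mask_np).to(target.device)  # bool tensor, same device
selected = target[mask]
```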
https://stackoverflow.com/questions/59604918/
AttributeError: 'MpoImageFile' object has no attribute 'shape'
images, labels = next(iter(self.loader)) grid = torchvision.utils.make_grid(images) images, labels = next(iter(self.loader)) triggers the error. I have a custom dataset class where I load each image (RGB) from an url : image = Image.open(urllib.request.urlopen(URL)) and I apply some albumentations transforms. The code works when I read an image for which I have a path using cv2. However, it doesn't work when I read an image from the url. Note that I verified that the urls aren't broken. Here's the traceback: /usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __next__(self) 344 def __next__(self): 345 index = self._next_index() # may raise StopIteration --> 346 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 347 if self._pin_memory: 348 data = _utils.pin_memory.pin_memory(data) /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] /content/transform_dataset.py in __getitem__(self, idx) 49 labels = torch.from_numpy(item[2:].values.astype("float32")) 50 #print("self.root,item,self.image_transform,self.transform,self.size", self.root,item,self.image_transform,self.transform,self.size) ---> 51 image = load_image(self.root,item.ID,item.URL,self.image_transform) 52 return image, labels 53 /content/transform_dataset.py in load_image(root, ID, URL, image_transform) 81 print(image.shape) 82 image = cv2.cvtColor(image,cv2.COLOR_BGR2RGB) ---> 83 image = image_transform(image=image)["image"] 84 return image /usr/local/lib/python3.6/dist-packages/albumentations/core/composition.py in __call__(self, **data) 169 convert_keypoints_to_albumentations, data) 170 --> 171 data = t(**data) 172 173 if dual_start_end is not None and idx == dual_start_end[1]: /usr/local/lib/python3.6/dist-packages/albumentations/core/transforms_interface.py in __call__(self, **kwargs) 26 if (random.random() < self.p) or self.always_apply: 27 params = self.get_params() ---> 28 params = self.update_params(params, **kwargs) 29 if self.targets_as_params: 30 targets_as_params = {k: kwargs[k] for k in self.targets_as_params} /usr/local/lib/python3.6/dist-packages/albumentations/core/transforms_interface.py in update_params(self, params, **kwargs) 66 if hasattr(self, 'interpolation'): 67 params['interpolation'] = self.interpolation ---> 68 params.update({'cols': kwargs['image'].shape[1], 'rows': kwargs['image'].shape[0]}) 69 return params 70 AttributeError: 'MpoImageFile' object has no attribute 'shape'
In order to work with albumentations, you must pass a numpy array to the transforms, not a PIL image. So: image = Image.open(urllib.request.urlopen(URL)) image = np.array(image)
https://stackoverflow.com/questions/59613693/
Understanding Pytorch Grid Sample
I have an input tensor of size [1, 32, 296, 400] and a pixel set of [1, 56000, 400, 2]. After applying grid_sample with mode='bilinear' I get [1, 32, 56000, 400]. Can I know what exactly happened here? I know that grid_sample is supposed to effectively transform pixels to new locations in a differentiable manner, but these dimensions don't make it clear what is happening.
Please look at the documentation of grid_sample. Your input tensor has a shape of 1x32x296x400, that is, you have a single example in the batch with 32 channels and spatial dimensions of 296x400 pixels. Additionally, you have a "grid" of size 1x56000x400x2, which PyTorch interprets as new locations for a grid of spatial dimensions 56000x400, where each new location has the x,y coordinates from which to sample the new grid value; hence the "grid" information is of shape 1x56000x400x2. The output is, as expected, a 4D tensor of shape 1x32x56000x400: batch and channel dimensions are unchanged, but the spatial dimensions now follow the "grid" information provided to grid_sample.
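A quick shape check mirroring the numbers from the question (the grid values here are random, just to exercise the shapes; real grids hold x,y sampling coordinates in [-1, 1]):

```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 32, 296, 400)           # (N, C, H_in, W_in)
grid = torch.rand(1, 56000, 400, 2) * 2 - 1  # (N, H_out, W_out, 2), values in [-1, 1]
out = F.grid_sample(inp, grid, mode='bilinear', align_corners=True)
print(out.shape)  # torch.Size([1, 32, 56000, 400])
```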
https://stackoverflow.com/questions/59620104/
Trying to mask tensor with another tensor of same dimension getting "index 1 is out of bounds for dimension 0 with size 1"
attn_weights = F.softmax(self.attn(torch.cat((input, hidden_cat), 2)), dim=2) attn_weights[mask] = float('-inf') attn_applied = torch.bmm(attn_weights.transpose(0,1),encoder_outputs.transpose(0,1)).transpose(0,1) attn_output = torch.cat((input, attn_applied), 2) So I'm trying to set all the indexes in mask that equal 1 to negative infinity, but the line attn_weights[mask] = float('-inf') keeps throwing the exception "index 1 is out of bounds for dimension 0 with size 1". I'm not really sure what's going on; attn_weights and mask both have the same dimension, which is 1 x 2048 x 40.
Turns out the dtype of the mask tensor has to be torch.uint8 or torch.bool; I had it as torch.long.
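A one-line sketch of the fix: a torch.long mask is interpreted as integer indices (hence "index 1 is out of bounds"), whereas a uint8/bool mask triggers boolean masking:

```python
attn_weights[mask.bool()] = float('-inf')  # cast the long mask to bool for masking
```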
https://stackoverflow.com/questions/59620154/
Pytorch: can we use nn.Module layers directly in forward() function?
Generally, in the constructor we declare all the layers we want to use, and in the forward function we define how the model is going to be run, from input to output. My question is: what happens if we call those predefined/built-in nn.Modules directly in the forward() function? Is this Keras functional-API style legal for Pytorch? If not, why? Update: A TestModel constructed in this way did run successfully, without an alarm, but the training loss descends slowly compared with the conventional way. import torch.nn as nn from cnn import CNN class TestModel(nn.Module): def __init__(self): super().__init__() self.num_embeddings = 2020 self.embedding_dim = 51 def forward(self, input): x = nn.Embedding(self.num_embeddings, self.embedding_dim)(input) # CNN is a customized class and nn.Module subclassed # we will ignore the arguments for its instantiation x = CNN(...)(x) x = nn.ReLU()(x) x = nn.Dropout(p=0.2)(x) return x
You need to think of the scope of the trainable parameters. If you define, say, a conv layer in the forward function of your model, then the scope of this "layer" and its trainable parameters is local to the function and will be discarded after every call to the forward method. You cannot update and train weights that are constantly being discarded after every forward pass. However, when the conv layer is a member of your model its scope extends beyond the forward method and the trainable parameters persists as long as the model object exists. This way you can update and train the model and its weights.
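For illustration, a sketch of the same model with the layers registered in the constructor so their parameters persist across calls (the custom CNN from the question is omitted):
import torch.nn.functional as F
import torch.nn as nn

class TestModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(2020, 51)  # created once; parameters persist and get trained
        self.dropout = nn.Dropout(p=0.2)

    def forward(self, input):
        x = self.embedding(input)  # reuses the same trainable weights on every call
        x = F.relu(x)              # stateless ops are fine to call inside forward
        x = self.dropout(x)
        return x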
https://stackoverflow.com/questions/59642925/
Basic Pytorch tensor multiplication and addition
I just realized I lack some very basic PyTorch tensor math. How do I do the following with a PyTorch tensor? lab_rs = (lab_rs * [100, 255, 255] - [0, 128, 128]) This works well in numpy. It's an image with shape (3, 512, 1024) and I want to multiply and subtract values from each color channel individually. The error I get trying the same with a tensor is: TypeError: only integer tensors of a single element can be converted to an index
You need to make sure all your operands can be broadcast to the same dimensions: lab_rs = lab_rs * torch.tensor([[[100]], [[255]], [[255.]]]) - torch.tensor([[[0]], [[128]], [[128.]]])
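Equivalently, you can keep flat tensors and reshape them to broadcast over the channel dimension, e.g.:
import torch

lab_rs = torch.rand(3, 512, 1024)
scale = torch.tensor([100., 255., 255.]).view(3, 1, 1)
shift = torch.tensor([0., 128., 128.]).view(3, 1, 1)
lab_rs = lab_rs * scale - shift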
https://stackoverflow.com/questions/59657761/
PyTorch - How to use k-fold cross validation when the data is loaded through ImageFolder?
My data, which is images, is stored on the filesystem, and it is fed into my convolutional neural network through the ImageFolder data loader of PyTorch. Therefore, the training, validation, and test data are manually split into different folders on the filesystem. So, how can I apply k-fold cross validation when using ImageFolder?
You can merge the fixed train/val/test folds you currently have using data.ConcatDataset into a single Dataset. Then you can use data.Subset to randomly split the single dataset into different folds over and over.
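A rough sketch of that idea (the folder names, fold count, and batch size are assumptions):
import numpy as np
from torch.utils import data
from torchvision import datasets, transforms
from sklearn.model_selection import KFold

tfm = transforms.ToTensor()
full = data.ConcatDataset([
    datasets.ImageFolder('data/train', transform=tfm),
    datasets.ImageFolder('data/val', transform=tfm),
    datasets.ImageFolder('data/test', transform=tfm),
])

kf = KFold(n_splits=5, shuffle=True)
for train_idx, val_idx in kf.split(np.arange(len(full))):
    train_loader = data.DataLoader(data.Subset(full, train_idx), batch_size=32, shuffle=True)
    val_loader = data.DataLoader(data.Subset(full, val_idx), batch_size=32)
    # train and evaluate on this fold ...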
https://stackoverflow.com/questions/59663573/
How to get quick documentation working with PyCharm and Pytorch
I'm running PyCharm on Windows 10, and installed PyTorch following the getting started guide. Where I used Chocolatey and Anaconda to set up everything. I can run the PyTorch tutorials from inside the PyCharm IDE without any problems. So I feel like I have a proper set up, but there aren't any intellisense documentations for any of the PyTorch APIs. For example; import torch x = torch.randn(128, 20) If I mouse over randn and press CTRL+Q then PyCharm shows me a popup of the function definition without any documentation. I'm expecting to see Python comments from the API documentation for that function: https://pytorch.org/docs/stable/torch.html?highlight=randn#torch.randn I'm a new beginner with Pytorch and Python, but this is something that I often have access to from inside the IDE with many other languages and libraries. So I feel like this should be possible to get working, but I can't seem to find any instructions on how to fix this.
I was able to get it working by doing the following: PyCharm 2019.3 Open the settings for external documentation: File / Settings / Tools / External Documentation Add the following URL patterns: Module Name: torch.nn.functional URL: https://pytorch.org/docs/stable/nn.functional.html#{element.qname} Module Name: torch URL: https://pytorch.org/docs/stable/{module.basename}.html#{element.qname} Seems to work for most APIs, but you have to trigger the quick documentation tool window. This won't show docs if you CTRL+CLICK something.
https://stackoverflow.com/questions/59664464/
What does torchvision.transforms.Resize(size, interpolation=2) actually do?
Does it add to the image if too small or crop if too big or just stretch the image to the desired size?
When you set interpolation=2, you are using bilinear interpolation, which can be used for either upsampling or downsampling. Bilinear interpolation computes each new pixel from a weighted combination of the neighbouring pixels. Look at these links for more information: link1; link2
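To answer the crop/pad part of the question directly, a quick check suggests Resize simply stretches the image to the target size when given an (h, w) tuple, with no cropping or padding (a single int instead resizes the smaller edge, preserving aspect ratio):
from PIL import Image
from torchvision import transforms

img = Image.new('RGB', (200, 100))  # wide dummy image
out = transforms.Resize((64, 64), interpolation=2)(img)
print(out.size)  # (64, 64): aspect ratio is not preserved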
https://stackoverflow.com/questions/59666923/
RoBERTa classification RuntimeError: shape '[-1, 9]' is invalid for input of size 8
m = MultiLabelBinarizer() X = pd.read_csv('data/data.csv', sep=None, engine='python') X = X.dropna() Y_train = m.fit_transform(X['labels']) Y_train2 = [list(i) for i in Y_train] data = pd.DataFrame({'text': pd.Series(X[text_col]), 'labels': Y_train2}) data = data.dropna() train_df, eval_df = train_test_split(data, test_size=0.2) numLabels = len(pd.unique(X['labels'])) # count of the labels model = MultiLabelClassificationModel('roberta', 'roberta-base', num_labels=numLabels, use_cuda=False) model.train_model(pd.DataFrame(train_df)) My data structure for the label column is: [[0,1,0,0,0,1,0,0], [0,1,1,0,0,1,0,0], [0,0,0,0,0,0,0,1]....] For every row there is one label list like [0,1,0,0,0,1,0,0] in the label column, and for the texts there is one text (newspaper article) per row. (I got it from this source: https://github.com/ThilinaRajapakse/simpletransformers#minimal-start-for-multilabel-classification) The model can be trained if I train it with only 4 entries, but when I want to train it with the whole dataset it gives me this: RuntimeError: shape '[-1, 9]' is invalid for input of size 8: File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simpletransformers/classification/multi_label_classification_model.py", line 121, in train_model return super().train_model(train_df, multi_label=multi_label, eval_df=eval_df, output_dir=output_dir, show_running_loss=show_running_loss, args=args) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py", line 208, in train_model global_step, tr_loss = self.train(train_dataset, output_dir, multi_label=multi_label, show_running_loss=show_running_loss, eval_df=eval_df, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py", line 306, in train outputs = model(**inputs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/simpletransformers/custom_models/models.py", line 117, in forward loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1, self.num_labels)) RuntimeError: shape '[-1, 9]' is invalid for input of size 8 I have no idea where that size 8 comes from and what to do now, since it works with very few entries. Can anyone help?
[0,1,0,0,0,1,0,0] is of size 8, but your model expects size 9; that means your numLabels is 9. If you have 9 classes, then the label lists in the label column should look like [0,1,0,0,0,1,0,0,0]. But I think you just need to pass num_labels as 8.
https://stackoverflow.com/questions/59684472/
'int' object has no attribute 'size'
F.nll_loss: I am getting AttributeError: 'int' object has no attribute 'size' when I try to run this code. I also get a snippet of the module code. raise ValueError('Expected 2 or more dimensions (got {})'.format(dim)) if input.size(0) != target.size(0): raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' format(input.size(0), target.size(0))) import torch from torchvision import transforms, datasets import torch.nn as nn import torch.nn.functional as F import matplotlib.pylab as plt train_dataset = datasets.MNIST(root = '', train =True, download = True, transform =transforms.Compose([transforms.ToTensor()])) test_dataset = datasets.MNIST(root ='', download =True, train =False, transform =transforms.Compose([transforms.ToTensor()])) batch_size = 10 train_loader = torch.utils.data.DataLoader(train_dataset, batch_size, shuffle =True) test_dataset = torch.utils.data.DataLoader(test_dataset, batch_size, shuffle =True) class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(28*28, 64) self.fc2 = nn.Linear(64,64) self.fc3 = nn.Linear(64,64) self.fc4 = nn.Linear(64,10) def forward(self, x): x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.relu(self.fc2(x)) x = self.fc4(x) return F.log_softmax(x, dim=1) x=torch.rand((28,28)) x=x.view(-1,28*28) net =Net() out=net(x) out import torch.optim as optim optimizer =optim.Adam(net.parameters(), lr=0.001) EPOCHS = 3 for epoch in range(EPOCHS): for data in train_dataset: x, y = data net.zero_grad() x=x.view(-1, 28*28) output = net(x) loss = F.nll_loss(output, y) loss.backward() optimizer.step() print(loss)
Just change the for loop from: for data in train_dataset: to for data in train_loader:
https://stackoverflow.com/questions/59691234/
torch find indices of matching rows in 2 2D tensors
I have two 2D tensors of different lengths, both different subsets of the same original 2D tensor, and I would like to find all the matching "rows", e.g. A = [[1,2,3],[4,5,6],[7,8,9],[3,3,3]] B = [[1,2,3],[7,8,9],[4,4,4]] torch.2dintersect(A,B) -> [0,2] (the indices of A that B also has) I've only seen numpy solutions that use dtype as dicts, and they do not work for pytorch. Here is how I do it in numpy arr1 = edge_index_dense.numpy().view(np.int32) arr2 = edge_index2_dense.numpy().view(np.int32) arr1_view = arr1.view([('', arr1.dtype)] * arr1.shape[1]) arr2_view = arr2.view([('', arr2.dtype)] * arr2.shape[1]) intersected = np.intersect1d(arr1_view, arr2_view, return_indices=True)
This answer was posted before the OP updated the question with other restrictions that changed the problem quite a bit. TL;DR You can do something like this: torch.where((A == B).all(dim=1))[0] First, assuming you have: import torch A = torch.Tensor([[1,2,3],[4,5,6],[7,8,9]]) B = torch.Tensor([[1,2,3],[4,4,4],[7,8,9]]) We can check that A == B returns: >>> A == B tensor([[ True, True, True], [ True, False, False], [ True, True, True]]) So, what we want is: the rows in which they are all True. For that, we can use the .all() operation and specify the dimension of interest, in our case 1: >>> (A == B).all(dim=1) tensor([ True, False, True]) What you actually want to know is where the Trues are. For that, we can get the first output of the torch.where() function: >>> torch.where((A == B).all(dim=1))[0] tensor([0, 2])
https://stackoverflow.com/questions/59705001/
How to calculate geometric mean in a differentiable way?
How do I calculate the geometric mean along a dimension using Pytorch? Some numbers can be negative. The function must be differentiable.
A known (reasonably) numerically-stable version of the geometric mean is: import torch def gmean(input_x, dim): log_x = torch.log(input_x) return torch.exp(torch.mean(log_x, dim=dim)) x = torch.Tensor([2.0] * 1000).requires_grad_(True) print(gmean(x, dim=0)) # tensor(2.0000, grad_fn=<ExpBackward>) This kind of implementation can be found, for example, in SciPy (see here), which is a quite stable lib. The implementation above does not handle zeros and negative numbers. Some will argue that the geometric mean with negative numbers is not well-defined, at least when not all of them are negative.
https://stackoverflow.com/questions/59722983/
PyTorch - Custom ReLU squared Implementation
I work on a project and I want to implement the ReLU squared activation function (max{0,x^2}). Is it ok to call it like: # example code def forward(self, x): s = torch.relu(x**2) return s Or should I implement the activation function on my own? In the second case could you please provide me an example on how to do so? Thanks a lot!
It doesn't make much sense to compute max(0, x**2) because x**2 >= 0 no matter what. You probably want to compute max(0, x) ** 2 instead: s = torch.pow(torch.relu(x), 2)
https://stackoverflow.com/questions/59749991/
How to use pytorch to construct multi-task DNN, e.g., for more than 100 tasks?
Below is example code that uses pytorch to construct a DNN for two regression tasks. The forward function returns two outputs (x1, x2). What about a network for lots of regression/classification tasks, e.g., 100 or 1000 outputs? It is definitely not a good idea to hardcode all the outputs (e.g., x1, x2, ..., x100). Is there a simple method to do that? Thank you. import torch from torch import nn import torch.nn.functional as F class mynet(nn.Module): def __init__(self): super(mynet, self).__init__() self.lin1 = nn.Linear(5, 10) self.lin2 = nn.Linear(10, 3) self.lin3 = nn.Linear(10, 4) def forward(self, x): x = self.lin1(x) x1 = self.lin2(x) x2 = self.lin3(x) return x1, x2 if __name__ == '__main__': x = torch.randn(1000, 5) y1 = torch.randn(1000, 3) y2 = torch.randn(1000, 4) model = mynet() optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=1e-4) for epoch in range(100): model.train() optimizer.zero_grad() out1, out2 = model(x) loss = 0.2 * F.mse_loss(out1, y1) + 0.8 * F.mse_loss(out2, y2) loss.backward() optimizer.step()
You can (and should) use nn containers such as nn.ModuleList or nn.ModuleDict to manage arbitrary number of sub-modules. For example (using nn.ModuleList): class MultiHeadNetwork(nn.Module): def __init__(self, list_with_number_of_outputs_of_each_head): super(MultiHeadNetwork, self).__init__() self.backbone = ... # build the basic "backbone" on top of which all other heads come # all other "heads" self.heads = nn.ModuleList([]) for nout in list_with_number_of_outputs_of_each_head: self.heads.append(nn.Sequential( nn.Linear(10, nout * 2), nn.ReLU(inplace=True), nn.Linear(nout * 2, nout))) def forward(self, x): common_features = self.backbone(x) # compute the shared features outputs = [] for head in self.heads: outputs.append(head(common_features)) return outputs Note that in this example each head is more complex than a single nn.Linear layer. The number of different "heads" (and number of outputs) is determined by the length of the argument list_with_number_of_outputs_of_each_head. Important notice: it is crucial to use nn containers, rather than simple pythonic lists/dictionary to store all sub modules. Otherwise pytorch will have difficulty managing all sub modules. See, e.g., this answer, this question and this one.
https://stackoverflow.com/questions/59763775/
TensorFlow / PyTorch: Gradient for loss which is measured externally
I am relatively new to Machine Learning and Python. I have a system which consists of a NN whose output is fed into an unknown nonlinear function F, e.g. some hardware. The idea is to train the NN to be an inverse F^(-1) of that unknown nonlinear function F. This means that a loss L is calculated at the output of F. However, backpropagation cannot be used in a straightforward manner for calculating the gradients and updating the NN weights, because the gradient of F is not known either. Is there any way to use a loss function L, which is not directly connected to the NN, for the calculation of the gradients in TensorFlow or PyTorch? Or to take a loss that was obtained with any other software (Matlab, C, etc.) and use it for backpropagation? As far as I know, Keras' keras.backend.gradients only allows calculating gradients with respect to connected weights; otherwise the gradient is either zero or NoneType. I read about the stop_gradient() function in TensorFlow, but I am not sure whether this is what I am looking for. It allows not computing the gradient with respect to some variables during backpropagation, but I think the operation F is not interpreted as a variable anyway. Can I define any arbitrary loss function (including a hardware measurement) and use it for backpropagation in TensorFlow, or is it required to be connected to the graph as well? Please let me know if my question is not specific enough.
AFAIK, all modern deep learning packages (pytorch, tensorflow, keras etc.) are relying on gradient descent (and its many variants) to train networks. As the name suggests, you cannot do gradient descent without gradients. However, you might circumvent the "non-differentiability" of your "given" function F by looking at the problem from a slightly different perspective: you are trying to learn a model M that "counters" the effect of F. So you have access to F (but not its gradients) and a set of representative inputs X={x_0, x_1, ... x_n}. For each example x_i you can compute y_i = F(x_i), and your end goal is to have a model M that, given y_i, will output x_i. Therefore, you can treat y_i as your model's input and compute a loss between M(y_i) and the x_i that produced it. This way you do not need to compute gradients through the "black box" F. Pseudo code would look something like: for x in examples: y = F(x) # applying F on x - getting only output WITHOUT any gradients pred = M(y) # apply the trainable model M to the output of F loss = ||x - pred|| # loss will propagate gradients through M and stop at F loss.backward()
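A slightly more concrete version of that pseudo code in PyTorch (the model, the data, and the stand-in for F are all placeholders):
import torch
from torch import nn

M = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # trainable inverse model
opt = torch.optim.Adam(M.parameters(), lr=1e-3)

def F(x):  # stand-in for the non-differentiable black box / hardware
    with torch.no_grad():
        return x ** 3

for x in torch.randn(100, 1).split(1):
    y = F(x)                                # no gradients flow through F
    loss = nn.functional.mse_loss(M(y), x)  # gradients flow through M only
    opt.zero_grad()
    loss.backward()
    opt.step()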
https://stackoverflow.com/questions/59766210/
How to circumvent AWS package and ephemeral limits for large packages + large models
We have a production scenario with users invoking expensive NLP functions running for short periods of time (say 30s). Because of the high load and intermittent usage, we're looking into Lambda function deployment. However, our packages are big. I'm trying to fit AllenNLP in a lambda function, which in turn depends on pytorch, scipy, spacy, numpy and a few other libs. What I've tried Following recommendations made here and the example here, tests and additional files are removed. I also use a non-cuda version of Pytorch, which gets its size down. I can package an AllenNLP deployment down to about 512mb. Currently, this is still too big for AWS Lambda. Possible fixes? I'm wondering if anyone has experience with one of the following potential pathways: Cutting PyTorch out of AllenNLP. Without Pytorch, we're in reach of getting it to 250mb. We only need to load archived models in production, but that does seem to use some of the PyTorch infrastructure. Maybe there are alternatives? Invoking PyTorch in (a fork of) AllenNLP as a second lambda function. Using S3 to deliver some of the dependencies: symlinking some of the larger .so files and serving them from an S3 bucket might help. This does create an additional problem: the Semantic Role Labelling we're using from AllenNLP also requires some language models of around 500mb, for which the ephemeral storage could be used - but maybe these can be streamed directly into RAM from S3? Maybe I'm missing an easy solution. Any direction or experiences would be much appreciated!
You could deploy your models to SageMaker inside of AWS, and run Lambda -> Sagemaker to avoid having to load up very large functions inside of a Lambda. Architecture explained here - https://aws.amazon.com/blogs/machine-learning/call-an-amazon-sagemaker-model-endpoint-using-amazon-api-gateway-and-aws-lambda/
https://stackoverflow.com/questions/59771715/
Pytorch, backprop and composite models
Just a quick check for a question I have. I want to build a model that generates its output based on two models F and G like so: y = G(F(x)) where x is of course the input and y the output. However, first I want to update the weights of F(x), and then later update the weights of both F and G based on the value of y. I understand that pytorch offers a way to specify your own backprop method, but since my "method" seems to be built out of basic components, could it be that I can do this with a standard solution? My thought is that I need a separate optimizer/loss for the F and G objects, but in addition to that, also some update functionality for the composite model G(F()). Can anyone confirm this as well?
If as you suggest, the optimizers and losses for F and G can be separated, then I don't think that it will be necessary to implement any different update functionalities since you can specify the set of parameters for each optimizer, e.g. optimizer_F = optim.SGD(F.parameters(),...) optimizer_G = optim.SGD(G.parameters(),...) then when you call optimizer_F.step() it will only update the parameters of F and similarly optimizer_G.step() will only update the parameters of G.
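A sketch of one possible update loop under that setup (F, G, and both objectives are placeholders):
import torch
from torch import nn, optim

F = nn.Linear(10, 10)
G = nn.Linear(10, 1)
optimizer_F = optim.SGD(F.parameters(), lr=0.01)
optimizer_G = optim.SGD(G.parameters(), lr=0.01)

x = torch.randn(4, 10)
target = torch.randn(4, 1)

# step 1: update F alone on some intermediate loss (placeholder objective)
loss_F = F(x).pow(2).mean()
optimizer_F.zero_grad()
loss_F.backward()
optimizer_F.step()

# step 2: update both F and G based on the final output y = G(F(x))
y = G(F(x))
loss = nn.functional.mse_loss(y, target)
optimizer_F.zero_grad()
optimizer_G.zero_grad()
loss.backward()
optimizer_F.step()
optimizer_G.step()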
https://stackoverflow.com/questions/59772000/
How can I load a model in PyTorch without redefining the model?
I am looking for a way to save a pytorch model, and load it without the model definition. By this I mean that I want to save my model including model definition. For example, I would like to have two scripts. The first would define, train, and save the model. The second would load and predict the model without including the model definition. The method using torch.save(), torch.load() requires me to include the model definition in the prediction script, but I want to find a way to load a model without redefining it in the script.
You can attempt to export your model to TorchScript using tracing. This has limitations. Due to the way PyTorch constructs the model's computation graph on the fly, if you have any control-flow in your model then the exported model may not completely represent your python module. TorchScript is only supported in PyTorch >= 1.0.0, though I would recommend using the latest version possible. For example, a model without any conditional behavior is fine import torch.nn.functional as F from torch import nn class Model(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 10, 3, padding=1) self.bn1 = nn.BatchNorm2d(10) self.conv2 = nn.Conv2d(10, 20, 3, padding=1) self.bn2 = nn.BatchNorm2d(20) self.fc = nn.Linear(20 * 4 * 4, 2) def forward(self, x): x = self.conv1(x) x = F.relu(x) x = F.max_pool2d(x, 2, 2) x = self.bn1(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2, 2) x = self.bn2(x) x = self.fc(x.flatten(1)) return x We can export this as follows import torch from torch import jit net = Model() # ... train your model # put model in the mode you want to export (see the comment below) net.eval() # print example output x = torch.ones(1, 3, 16, 16) print(net(x)) # create TorchScript by tracing the computation graph with an example input x = torch.ones(1, 3, 16, 16) net_trace = jit.trace(net, x) jit.save(net_trace, 'model.zip') If successful then we can load our model into a new python script without using Model. import torch from torch import jit net = jit.load('model.zip') # print example output (should be same as during save) x = torch.ones(1, 3, 16, 16) print(net(x)) The loaded model is also trainable; however, the loaded model will only behave in the mode it was exported in. For example, in this case we exported our model in eval() mode, so using net.train() on the loaded module will have no effect. Control-flow A model like this, which has behavior that changes between passes, won't be properly exported. Only the code evaluated during jit.trace will be exported. import torch.nn.functional as F from torch import nn class Model(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(3, 10, 3, padding=1) self.bn1 = nn.BatchNorm2d(10) self.conv2 = nn.Conv2d(10, 20, 3, padding=1) self.bn2 = nn.BatchNorm2d(20) self.fca = nn.Linear(20 * 4 * 4, 2) self.fcb = nn.Linear(20 * 4 * 4, 2) self.use_a = True def forward(self, x): x = self.conv1(x) x = F.relu(x) x = F.max_pool2d(x, 2, 2) x = self.bn1(x) x = self.conv2(x) x = F.relu(x) x = F.max_pool2d(x, 2, 2) x = self.bn2(x) if self.use_a: x = self.fca(x.flatten(1)) else: x = self.fcb(x.flatten(1)) return x We can still export the model as follows import torch from torch import jit net = Model() # ... train your model net.eval() # print example input x = torch.ones(1, 3, 16, 16) net.use_a = True print('a:', net(x)) net.use_a = False print('b:', net(x)) # save model x = torch.ones(1, 3, 16, 16) net_trace = jit.trace(net, x) jit.save(net_trace, "model.ts") In this case the example outputs are a: tensor([[-0.0959, 0.0657]], grad_fn=<AddmmBackward>) b: tensor([[ 0.1437, -0.0033]], grad_fn=<AddmmBackward>) However, loading import torch from torch import jit net = jit.load("model.ts") # will not match the output from before x = torch.ones(1, 3, 16, 16) net.use_a = True print('a:', net(x)) net.use_a = False print('b:', net(x)) results in a: tensor([[ 0.1437, -0.0033]], grad_fn=<DifferentiableGraphBackward>) b: tensor([[ 0.1437, -0.0033]], grad_fn=<DifferentiableGraphBackward>) Notice that the logic of the branch "a" is not present since net.use_a was False when jit.trace was called.
Scripting These limitations can be overcome but require some effort on your end. You can use the scripting functionality to ensure that all the logic is exported.
https://stackoverflow.com/questions/59774328/
Inference pytorch C++ with alexnet and cv::imread image
I am trying to infer with a C++ application an image classification task using an alexnet pre-trained net. I have successfully inferred a dog image loading the net with python: alexnet = torchvision.models.alexnet(pretrained=True) img = Image.open("dog.jpg") transform = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] )]) img_t = transform(img) batch_t = torch.unsqueeze(img_t, 0) out = alexnet.forward(batch_t) _, index = torch.max(out, 1) The result index is 208, Labrador_retriever, which looks good. Then I save the net to be loaded from a C++ application example = torch.rand(1, 3, 224, 224) traced_script_module_alex = torch.jit.trace(alexnet, example) traced_script_module_alex.save("alexnet.pt") When I load it in C++, I get the wrong result: cv::Mat img = cv::imread("dog.jpg"); cv::resize(img, img, cv::Size(224, 224), cv::INTER_CUBIC); // Convert the image and label to a tensor. torch::Tensor img_tensor = torch::from_blob(img.data, { 1, img.rows, img.cols, 3 }, torch::kByte); img_tensor = img_tensor.permute({ 0, 3, 1, 2 }); // convert to CxHxW img_tensor = img_tensor.to(torch::kFloat); std::vector<torch::jit::IValue> input; input.push_back(img_tensor); torch::jit::script::Module module = torch::jit::load("alexnet.pt"); at::Tensor output = module.forward(input).toTensor(); std::cout << output.argmax(1) << '\n'; The argmax is 463, bucket. I think I am not looking at the same image; what am I missing...?
Your C++ code is missing this part of your Python code: transform = transforms.Compose([ transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize( mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225] )]) img_t = transform(img)
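For illustration, a rough libtorch/OpenCV equivalent of those transforms, meant to replace the resize and tensor-conversion lines in your C++ code (a sketch only; the resize/crop sizes and the mean/std values are taken from the Python code above):
// resize the shorter side to 256, then center-crop 224x224
int s = std::min(img.rows, img.cols);
cv::resize(img, img, cv::Size(256 * img.cols / s, 256 * img.rows / s));
img = img(cv::Rect((img.cols - 224) / 2, (img.rows - 224) / 2, 224, 224)).clone();
cv::cvtColor(img, img, cv::COLOR_BGR2RGB);  // match PIL's RGB channel order

torch::Tensor t = torch::from_blob(img.data, {1, img.rows, img.cols, 3}, torch::kByte);
t = t.permute({0, 3, 1, 2}).to(torch::kFloat).div_(255);  // ToTensor(): scale to [0, 1]
// Normalize(mean, std), channel by channel
t[0][0].sub_(0.485).div_(0.229);
t[0][1].sub_(0.456).div_(0.224);
t[0][2].sub_(0.406).div_(0.225);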
https://stackoverflow.com/questions/59783791/
Pytorch equivalent of tf.Variable
I am trying to implement this code in pytorch: self.scale_var = tf.Variable( 0.1, name='scale_var', trainable=True, dtype=tf.float32, constraint=lambda x: tf.clip_by_value(x, 0, np.infty)) I want to have a scalar value that is trainable and would like to scale a constant with this value in the loss function. Is the below mentioned code appropriate? class pytorch_variable(nn.Module): def __init__(self): super(pytorch_variable,self).__init__() self.var = nn.Parameter(torch.tensor(0.1)) def forward(self): return self.var What is happening right now is that the gradients flow through this, but the trainable scalar value slowly reduces to zero, decreasing by 0.001 from initial value of 0.1 (till zero because I clip the data after loss.backward() call).
In PyTorch, Variable and Tensor were merged, so you are correct that a scalar variable should just be a scalar tensor. In isolation: >>> x=torch.tensor(5.5, requires_grad=True) >>> x.grad >>> x.backward(torch.tensor(12.4)) >>> x.grad tensor(12.4000) 0.001 is a common learning rate, so I'd suspect that's related to the rate at which your trainable variable is being updated.
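The constraint= argument has no direct PyTorch equivalent; one common sketch is to clamp the parameter in place after each optimizer step:
import torch

scale_var = torch.nn.Parameter(torch.tensor(0.1))
optimizer = torch.optim.SGD([scale_var], lr=0.001)

# ... in the training loop, after loss.backward() and optimizer.step():
with torch.no_grad():
    scale_var.clamp_(min=0)  # mimics tf.clip_by_value(x, 0, np.infty)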
https://stackoverflow.com/questions/59800247/
How to install torch in python
I tried pip3 install torch --no-cache-dir and, after few seconds, I got this: Collecting torch Downloading https://files.pythonhosted.org/packages/24/19/4804aea17cd136f1705a5e98a00618cb8f6ccc375ad8bfa437408e09d058/torch-1.4.0-cp36-cp36m-manylinux1_x86_64.whl (753.4MB) 100% |████████████████████████████████| 753.4MB 5.7MB/s Exception: Traceback (most recent call last): File "/usr/lib/python3/dist-packages/pip/basecommand.py", line 215, in main status = self.run(options, args) File "/usr/lib/python3/dist-packages/pip/commands/install.py", line 342, in run requirement_set.prepare_files(finder) File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 380, in prepare_files ignore_dependencies=self.ignore_dependencies)) File "/usr/lib/python3/dist-packages/pip/req/req_set.py", line 620, in _prepare_file session=self.session, hashes=hashes) File "/usr/lib/python3/dist-packages/pip/download.py", line 821, in unpack_url hashes=hashes File "/usr/lib/python3/dist-packages/pip/download.py", line 663, in unpack_http_url unpack_file(from_path, location, content_type, link) File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 617, in unpack_file flatten=not filename.endswith('.whl') File "/usr/lib/python3/dist-packages/pip/utils/__init__.py", line 506, in unzip_file data = zip.read(name) File "/usr/lib/python3.6/zipfile.py", line 1338, in read return fp.read() File "/usr/lib/python3.6/zipfile.py", line 858, in read buf += self._read1(self.MAX_N) File "/usr/lib/python3.6/zipfile.py", line 948, in _read1 data = self._decompressor.decompress(data, n) MemoryError What should I do now to install PyTorch? I tried almost every method mentioned on google. I am working on Ubuntu, I tried using conda too, but I am unable to use that package outside conda.
For a pip environment use this: pip3 install torchvision For a conda environment use this (run this command in the Anaconda prompt): conda install pytorch -c pytorch Update Use this to turn off your cache: pip3 --no-cache-dir install torchvision or pip3 install torchvision --no-cache-dir or pip install --no-cache-dir torchvision Try them one by one
https://stackoverflow.com/questions/59800318/
Resize RGB Tensor pytorch
I want to resize a 3-D RGB tensor in pytorch. I know how to resize a 4-D tensor, but unfortunately this method does not work for 3-D. The input is: #input shape: [3, 100, 200] ---> desired output shape: [3, 80, 120] If I have a 4-D tensor it works fine. #input shape: [2, 3, 100, 200] out = torch.nn.functional.interpolate(T,size=(100,80), mode='bilinear') Any suggestions? Thanks in advance!
Thanks to jodag I found the answer: # input shape [3, 200, 120] T = T.unsqueeze(0) T = torch.nn.functional.interpolate(T,size=(100,80), mode='bilinear') T = T.squeeze(0) # output shape [3, 100, 80]
https://stackoverflow.com/questions/59803041/
Unable to allocate GPU memory, when there is enough of cached memory
I am training a vgg16 model from scratch on an AWS EC2 Deep Learning AMI machine (Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1054-aws x86_64v)) with Python3 (CUDA 10.1 and Intel MKL) (Pytorch 1.3.1) and am facing the below error while updating the model parameters. RuntimeError: CUDA out of memory. Tried to allocate 24.00 MiB (GPU 0; 11.17 GiB total capacity; 10.76 GiB already allocated; 4.81 MiB free; 119.92 MiB cached) Code for updating parameters: def _update_fisher_params(self, current_ds, batch_size, num_batch): dl = DataLoader(current_ds, batch_size, shuffle=True) log_liklihoods = [] for i, (input, target) in enumerate(dl): if i > num_batch: break output = F.log_softmax(self.model(input.cuda().float()), dim=1) log_liklihoods.append(output[:, target]) log_likelihood = torch.cat(log_liklihoods).mean() grad_log_liklihood = autograd.grad(log_likelihood, self.model.parameters()) _buff_param_names = [param[0].replace('.', '__') for param in self.model.named_parameters()] for _buff_param_name, param in zip(_buff_param_names, grad_log_liklihood): self.model.register_buffer(_buff_param_name+'_estimated_fisher', param.data.clone() ** 2) After debugging: the log_liklihoods.append(output[:, target]) line throws the error after 157 iterations. I have the required memory but it does not allocate; I do not get why updating the gradients causes the memory problem, as gradients should be de-referenced and released automatically on each iteration. Any idea? I have tried the following solutions but with no luck: Lowering batch size Freeing cache with torch.cuda.empty_cache() Reducing the number of filters to reduce the memory footprint Machine Specs:
Finally I solved the memory problem! I realized that in each iteration I put the input data into a new tensor, and pytorch generates a new computation graph. That causes the used RAM to grow forever. Then I used the .detach() function, and the RAM always stays at a low level: self.model(input.cuda().float()).detach().requires_grad_(True)
https://stackoverflow.com/questions/59805901/
pytorch 1D Dropout leads to unstable learning
I'm implementing an Inception-like CNN in pytorch. After the blocks of convolution layers, I have three fully-connected linear layers followed by a sigmoid activation to give me my final regression output. I'm testing the effects of dropout layers in this network, but it's giving me some unexpected results. Here is the code: class MyInception(nn.Module): def __init__(self, in_channels, verbose=False): super(MyInception, self).__init__() self.v = verbose ic=in_channels; oc=16 self.inceptionBlock1 = InceptionBlock(in_channels=ic, out_channels=oc, maxpool=False, verbose=verbose) self.inceptionBlock2 = InceptionBlock(in_channels=oc * 6, out_channels=oc, maxpool=False, verbose=verbose) self.pool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) self.regressor = nn.Sequential( nn.Linear(oc * 6 * 35 * 35, 1024, bias=True), nn.ReLU(inplace=True), nn.Dropout(p=0.2, inplace=False), # <--- Dropout 1 nn.Linear(1024, 128, bias=True), nn.ReLU(inplace=True), nn.Dropout(p=0.2, inplace=False), # <--- Dropout 2 nn.Linear(128, 1, bias=True), nn.Sigmoid() ) def forward(self, x): x = self.inceptionBlock1(x) x = self.inceptionBlock2(x) x = self.pool(x) x = torch.flatten(x, 1) x = self.regressor(x) return x def train(epochs=10, dot_every=25): running = pd.DataFrame(columns=['Epoch','Round','TrainLoss','TestLoss','LearningRate']) for epoch in range(epochs): train_losses = [] model.train() counter = 0 for images, targets in train_loader: images = images.to(device) targets = targets.to(device) optimizer.zero_grad() outputs = model(images) loss = loss_fn(torch.flatten(outputs), targets) train_losses.append( loss.item() ) loss.backward() optimizer.step() counter += 1 if counter % dot_every == 0: print(".", end='.', flush=True) test_loss = test() else: test_loss = -1. lr = np.squeeze(scheduler.get_lr()) running = running.append(pd.Series([epoch, counter, loss.item(), test_loss, lr], index=running.columns), ignore_index=True) test_loss = test() train_loss = np.mean(np.asarray(train_losses)) running = running.append(pd.Series([epoch, counter, train_loss, test_loss, lr], index=running.columns), ignore_index=True) print("") print(f"Epoch {epoch+1}, Train Loss: {np.round(train_loss,4)}, Test Loss: {np.round(test_loss, 4)}, Learning Rate: {np.format_float_scientific(lr, precision=4)}") return running def test(): model.eval() test_losses = [] for i, (images,targets) in enumerate(test_loader): images = images.to(device) targets = targets.to(device) outputs = model(images) loss = loss_fn(torch.flatten(outputs), targets) test_losses.append( loss.item() ) mean_loss = np.mean(np.asarray(test_losses)) return mean_loss # instantiate the model model = MyInception(in_channels=4, verbose=False).to(device) # define the optimizer and loss function optimizer = Adam(model.parameters(), lr=0.001, weight_decay=0.0001) loss_fn = nn.MSELoss() # run it results = train(epochs=10, dot_every=20) Here is a plot of the MSE losses for the training data. (red = no dropout, green = second dropout only, blue = first dropout only, purple = both dropouts) Runs with dropout have big increases in losses at the epoch boundaries (dashed vertical lines), with the double dropout even having a big jump in loss at the start of epoch 10. The important thing is the test loss. That is much more stable and not too different between either condition after the 5th epoch, so maybe I shouldn't care. But I would like to understand what is going on.
I cracked the case. I realized that I flip the model from model.train() to model.eval() in the test call without setting it back to train() afterwards. Since Dropout behaves differently in train and eval modes, adding in Dropout revealed the bug.
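A minimal sketch of the fix inside the training loop:
# inside train(), after every call to test():
test_loss = test()  # test() switches the model to eval mode
model.train()       # switch back so Dropout behaves correctly again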
https://stackoverflow.com/questions/59815381/
How to remove certain layers from Faster-RCNN in Pytorch?
Target: I want to use the pretrained Faster-RCNN model to extract features from image. What I have tried: I use below code to build the model: import torchvision.models as models from PIL import Image import torchvision.transforms as T import torch # download the pretrained fasterrcnn model model = models.detection.fasterrcnn_resnet50_fpn(pretrained=True) model.eval() model.cuda() # remove [2:] layers modules = list(model.children())[:2] model_t=torch.nn.Sequential(*modules) # load image and extract features img = Image.open('data/person.jpg') transform = T.Compose([T.ToTensor()]) img_t = transform(img) batch_t = torch.unsqueeze(img_t, 0).cuda() ft = model_t(batch_t) Error: But I got the following error:TypeError: conv2d(): argument 'input' (position 1) must be Tensor, not tuple Please help! Thank you!
Use print(model.modules) to get the layer names. Then delete a layer with: del model.my_layer_name
https://stackoverflow.com/questions/59816287/
Make PyTorch variables to float64
How to make all the variables created in a PyTorch file to float64? Is there a single line of code which can do that?
You can set the default tensor type using this one-liner: torch.set_default_tensor_type(torch.DoubleTensor)
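For example:
import torch

torch.set_default_tensor_type(torch.DoubleTensor)
x = torch.zeros(3)  # float-valued factories now produce float64
print(x.dtype)      # torch.float64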
https://stackoverflow.com/questions/59826670/
Multiplying and powering python float and pytorch integer
Why does a python float multiplied by a torch.long give a torch.float, but a float raised to the power of a torch.long give a torch.long? >>> a = 0.9 >>> b = torch.tensor(2, dtype=torch.long) >>> foo = a * b >>> print(foo, foo.dtype) tensor(1.8000) torch.float32 >>> bar = a ** b >>> print(bar, bar.dtype) tensor(0) torch.int64
This looks like a bug, probably in the way pytorch binds ** to __rpow__ or __pow__. E.g. if you tried 0.9 - torch.tensor(2), since 0.9 isn't a tensor, this gets interpreted as torch.tensor(2).__rsub__(0.9), which works correctly. ** behaves the same way, but torch.tensor(2).__rpow__(0.9) incorrectly returns tensor(0) with dtype int64. In the meantime you can use torch.tensor(0.9) ** torch.tensor(2). Filed a bug: https://github.com/pytorch/pytorch/issues/32436
https://stackoverflow.com/questions/59827509/
layer Normalization in pytorch?
Shouldn't the layer normalization of x = torch.tensor([[1.5,0,0,0]]) be [[1.5,-0.5,-0.5,-0.5]], according to this paper and the equation from the pytorch doc? But torch.nn.LayerNorm gives [[ 1.7320, -0.5773, -0.5773, -0.5773]]. Here is the example code: x = torch.tensor([[1.5,.0,.0,.0]]) layerNorm = torch.nn.LayerNorm(4, elementwise_affine = False) y1 = layerNorm(x) mean = x.mean(-1, keepdim = True) var = x.var(-1, keepdim = True) y2 = (x-mean)/torch.sqrt(var+layerNorm.eps) where: y1 == tensor([[ 1.7320, -0.5773, -0.5773, -0.5773]]) y2 == tensor([[ 1.5000, -0.5000, -0.5000, -0.5000]])
Yet another simplified implementation of a Layer Norm layer with bare PyTorch. from typing import Tuple import torch def layer_norm( x: torch.Tensor, dim: Tuple[int], eps: float = 0.00001 ) -> torch.Tensor: mean = torch.mean(x, dim=dim, keepdim=True) var = torch.square(x - mean).mean(dim=dim, keepdim=True) return (x - mean) / torch.sqrt(var + eps) def test_that_results_match() -> None: dims = (1, 2) X = torch.normal(0, 1, size=(3, 3, 3)) indices = torch.tensor(dims) normalized_shape = torch.tensor(X.size()).index_select(0, indices) orig_layer_norm = torch.nn.LayerNorm(normalized_shape) y = orig_layer_norm(X) y_hat = layer_norm(X, dim=dims) assert torch.allclose(y, y_hat) Note that the original implementation also has trainable parameters γ and β (see the docs).
https://stackoverflow.com/questions/59830168/
Bitwise operations in Pytorch
Could someone tell me how to perform bitwise AND operations on two tensors in Pytorch 1.4? Apparently I could only find NOT and XOR operations in the official documentation.
I don't see them in the docs, but it looks like &, |, __and__, __or__, __xor__, etc are bit-wise: >>> torch.tensor([1, 2, 3, 4]).__xor__(torch.tensor([1, 1, 1, 1])) tensor([0, 3, 2, 5]) >>> torch.tensor([1, 2, 3, 4]) | torch.tensor([1, 1, 1, 1]) tensor([1, 3, 3, 5]) >>> torch.tensor([1, 2, 3, 4]) & torch.tensor([1, 1, 1, 1]) tensor([1, 0, 1, 0]) >>> torch.tensor([1, 2, 3, 4]).__and__(torch.tensor([1, 1, 1, 1])) tensor([1, 0, 1, 0]) See https://github.com/pytorch/pytorch/pull/1556
https://stackoverflow.com/questions/59843006/
How to specify pytorch as a package requirement on windows?
I have a python package which depends on pytorch and which I'd like windows users to be able to install via pip (the specific package is: https://github.com/mindsdb/lightwood, but I don't think this is very relevant to my question). What are the best practices for going about this? Are there some projects I could use as examples? It seems like the pypi-hosted versions of torch & torchvision aren't windows compatible, and the "getting started" section suggests installing from the custom pytorch repository, but beyond that I'm not sure what the ideal solution would be to incorporate this as part of a setup script.
What are the best practices for going about this? If your project depends on other projects that are not distributed through PyPI then you have to inform the users of your project one way or another. I recommend the following combination: clearly specify (in your project's documentation pages, or in the project's long description, or in the README, or anything like this) which dependencies are not available through PyPI (and possibly the reason why, with the appropriate links) as well as the possible locations to get them from; to facilitate the user experience, publish alongside your project a pre-prepared requirements.txt file with the appropriate --find-links options. The reason why (or the main reason, there are others) is that anyone using pip assumes that (by default) everything will be downloaded from PyPI and nowhere else. In other words anyone using pip puts some trust into pypi.org as a source for Python project distributions. If pip were suddenly to download artifacts from other sources, it would breach this trust. It should be the user's decision to download from other sources. So you could provide in your project's documentation an example requirements.txt file like the following: # ... torch===1.4.0 --find-links https://download.pytorch.org/whl/torch_stable.html torchvision===0.5.0 --find-links https://download.pytorch.org/whl/torch_stable.html # ... Update The best solution would be to help the maintainers of the projects in question to publish Windows wheels on PyPI directly: https://github.com/pytorch/pytorch/issues/24310 https://github.com/pytorch/vision/issues/1774 https://pypi.org/help/#file-size-limit
https://stackoverflow.com/questions/59856930/
PyTorch: How to define a new neural network that utilizes transfer learning
I am migrating from Keras/TF frameworks and I have a little trouble understanding the transfer learning process in PyTorch. I want to use the pytorch-lightning framework and I want to switch between different neural networks in one script. Per this example we can switch between different neural networks in their implementation: class BERT(pl.LightningModule): def __init__(self, model_name, task): self.task = task if model_name == 'transformer': self.net = Transformer() elif model_name == 'my_cool_version': self.net = MyCoolVersion() The question is: how do I create a new neural network that extends nn.Module and utilizes the transfer learning process? My own implementation looks like this: I am using the vgg16 network and replaced the classifier layer with only one fc with two output neurons. class VGGNetwork(nn.Module): def __init__(self): super(VGGNetwork, self).__init__() # vgg16 is the default model here, we can use bn etc... self.model = vgg16(pretrained=True) # removing the last three layers of classifier only 2 ... self.model.classifier = nn.Sequential(nn.Linear(512 * 7 * 7, 2)) def forward(self, x): return self.model.forward(x) Is this the correct way to do that?
You can freeze the weights and biases of the neural network layers except for the last one. You can do this with requires_grad = False: for param in model_conv.parameters(): param.requires_grad = False You can find more about this at the following link https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
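Putting the pieces together, a typical sketch looks like this (the 2-class head matches the VGG example from the question):
import torch
import torch.nn as nn
from torchvision.models import vgg16

model_conv = vgg16(pretrained=True)
for param in model_conv.parameters():
    param.requires_grad = False  # freeze the pretrained backbone

# replace the head; newly created layers default to requires_grad=True
model_conv.classifier = nn.Sequential(nn.Linear(512 * 7 * 7, 2))

# optimize only the new head
optimizer = torch.optim.SGD(model_conv.classifier.parameters(), lr=0.001)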
https://stackoverflow.com/questions/59858824/
PyTorch allocates more memory on the first available GPU (cuda:0)
As a part of the reinforcement learning training system, I am training four policies in parallel using four GPUs. For each model, there are two processes - the actor and the learner, which only use their specific GPU (e.g. actor and learner corresponding to model #2 only use GPU #2 for all their tensors). Actor and learner share the model layers via torch's share_memory_(). Since the four training "subsystems" are completely symmetric, I would expect them to use the exact same amount of GPU memory on each of the four GPUs. In practice, however, I see a lot more GPU memory allocated on the first GPU (cuda:0). It feels like all the memory sharing is somehow done via GPU #0. Is there a way to fix this? So far, I tried setting CUDA_VISIBLE_DEVICES in child processes by explicitly altering os.environ in the process start function. This does not seem to have any effect, probably because child processes are forked from the main process, where PyTorch CUDA is already initialized, and envvars just seem to be ignored at this point.
Ok, so far I came up with a workaround. My hypothesis was right: if the PyTorch CUDA subsystem is already initialized before the child process is forked, setting CUDA_VISIBLE_DEVICES to a different value for a subprocess does not do anything. Even worse, calling torch.cuda.device_count() is enough to initialize CUDA, so we can't even query the number of GPUs from PyTorch. The solution is either to hardcode it, pass it as a parameter, or query the PyTorch API in a separate process. My implementation for the latter: import sys def get_available_gpus_without_triggering_pytorch_cuda_initialization(envvars): import subprocess out = subprocess.run([sys.executable, '-m', 'utils.get_available_gpus'], capture_output=True, env=envvars) text_output = out.stdout.decode() from utils.utils import log log.debug('Queried available GPUs: %s', text_output) return text_output def main(): import torch device_count = torch.cuda.device_count() available_gpus = ','.join(str(g) for g in range(device_count)) print(available_gpus) return 0 if __name__ == '__main__': sys.exit(main()) Basically this function calls its own script as a separate python process and reads stdout. I won't mark this answer as accepted because I would like to learn a proper solution if it exists.
https://stackoverflow.com/questions/59873577/
Python process never finishing when called from Java
I've tried to set up an AI with PyTorch. Everything is fine when I call my script from the console, but when I call the script in a Java ProcessBuilder, it will finish but never terminate... Here is the ProcessBuilder code String[] cmd = {"python3", "-i" , "AI/Home-System.py", data.getName().replace(".csv", ""), "true", "false"}; ProcessBuilder pb = new ProcessBuilder(cmd); Process p = pb.start(); Hope that you can help me Edit: I found another solution. I call this script in a linux screen with String[] cmd = {"screen", "-dmS", "AI-" + device, "python3", "AI/Home-System.py", data.getName().replace(".csv", ""), "true", "false"}; Runtime.getRuntime().exec(cmd);
Read the process's output stream, as reaching the end of this stream allows your process to exit. Or else call the ProcessBuilder's inheritIO(). Then waitFor() the process. Here is some sample code showing these steps.
https://stackoverflow.com/questions/59879006/
Trying to learn how to implement a single Neuron
I have this code in Pytorch but can't get it to work. I have it working with NumPy as return (X.T * W).sum(axis=1) + B But with Pytorch I keep getting this error... def neural_network_neurons(W, B, X): W = W.view(W.size(0), -1) z1 = torch.matmul(X, W) + B return ReLU(z1) # -------------------------------------------------- W = torch.tensor([[1.2, 0.3, 0.1], [.01, 2.1, 0.7]]) B = torch.tensor([2.1, 0.89]) X = torch.tensor([0.3, 6.8, 0.59]) neural_network_neurons(W, B, X) --------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-21-8a5d3a425c16> in <module> 4 X = torch.tensor([0.3, 6.8, 0.59]) 5 ----> 6 neural_network_neurons(W, B, X) <ipython-input-20-7450924eb4a5> in neural_network_neurons(W, B, X) 5 ### 6 W = W.view(W.size(0), -1) ----> 7 z1 = torch.matmul(X, W) + B 8 return ReLU(z1) 9 #return (X.T * W).sum(axis=1) + B RuntimeError: size mismatch, m1: [1 x 3], m2: [2 x 3] at /pytorch/aten/src/TH/generic/THTensorMath.cpp:197
You have the wrong orientation for W: you defined a 2x3 matrix, but your algorithm requires a 3x2. Try W.T instead?
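For example (using torch.relu in place of the undefined ReLU helper):
import torch

def neural_network_neurons(W, B, X):
    z1 = torch.matmul(X, W.T) + B  # (3,) @ (3, 2) -> (2,)
    return torch.relu(z1)

W = torch.tensor([[1.2, 0.3, 0.1], [.01, 2.1, 0.7]])
B = torch.tensor([2.1, 0.89])
X = torch.tensor([0.3, 6.8, 0.59])
print(neural_network_neurons(W, B, X))  # tensor([ 4.5590, 15.5860])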
https://stackoverflow.com/questions/59905306/
In PyTorch's "MaxPool2D", is padding added depending on "ceil_mode"?
In MaxPool2D the padding is by default set to 0 and ceil_mode is also set to False. Now, if I have an input of size 7x7 with kernel=2, stride=2, the output shape becomes 3x3, but when I use ceil_mode=True it becomes 4x4, which makes sense because (if the standard output-size formula is correct) for a 7x7 input the output shape would be 3.5x3.5, and depending on ceil_mode it would be either 3x3 or 4x4. Now, my question is: if ceil_mode=True, does it change the default padding? If it does, then how is it adding the padding, i.e. is it adding the padding on the left first or the right, the top first or the bottom?
Ceil_mode=True changes the padding. In the case of ceil mode, additional columns and rows are added at the right as well as at the down. (Not top and not left). It does not need to be one extra column. It depends on the stride value as well. I just wrote small code snippet where you can check how the populated values are pooled in either modes. Before I found the post referenced above, I experimented the same way with your problem, it also seems as though the zero-padding is not used during the pooling operation, as in my following example the zeros would have been the maximum elements to be taken, but this does not seem to be the case. test_tensor = torch.FloatTensor(2,7,7).random_(-10,-5) print(test_tensor) max_pool = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True) print(max_pool(test_tensor)) max_pool = nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=False) print(max_pool(test_tensor)) Random sample tensor: tensor([[[ -6., -9., -7., -10., -6., -8., -6.], [-10., -10., -10., -6., -10., -9., -6.], [-10., -7., -7., -8., -10., -10., -9.], [ -8., -10., -10., -9., -9., -10., -9.], [ -8., -6., -8., -6., -7., -7., -9.], [-10., -8., -7., -10., -9., -6., -8.], [-10., -6., -9., -10., -9., -9., -10.]], [[-10., -8., -6., -10., -9., -6., -7.], [ -7., -7., -10., -10., -6., -9., -7.], [ -6., -10., -7., -8., -8., -10., -9.], [ -8., -8., -6., -7., -6., -8., -6.], [ -9., -8., -7., -10., -8., -8., -7.], [-10., -10., -6., -9., -8., -8., -8.], [-10., -6., -9., -9., -7., -9., -10.]]]) ceil_mode=True tensor([[[ -6., -6., -6., -6.], [ -7., -7., -9., -9.], [ -6., -6., -6., -8.], [ -6., -9., -9., -10.]], [[ -7., -6., -6., -7.], [ -6., -6., -6., -6.], [ -8., -6., -8., -7.], [ -6., -9., -7., -10.]]]) ceil_mode=False tensor([[[-6., -6., -6.], [-7., -7., -9.], [-6., -6., -6.]], [[-7., -6., -6.], [-6., -6., -6.], [-8., -6., -8.]]])
https://stackoverflow.com/questions/59906456/
Image deconvolution with a CNN
I have an input tensor of shape (C,H,W), where H=W and C=W^2. This tensor contains non-linearly transformed information for an image of shape (1,H,W) squeezed to (H,W). The exact form of the transformation is not important (plus, there is no closed-form expression for it anyway). I would like to design a CNN to estimate images from such tensors. I realize that I will have to experiment with CNN architectures (since I don't have the exact form of the transformation), but I'm not exactly sure how to proceed. The input tensor has both positive and negative values which are important for the image reconstruction, so a ReLU layer probably should not be implemented near the beginning of CNN. I don't think that pooling layers would be useful, either, at least in the H and W dimensions. Clearly, I have to collapse the C dimension to get the image, but I don't think that it should be done all at once, e.g., torch.nn.Conv2d( C, 1, kernel_size ) is probably not a good idea. It seems to me that I should first use a Conv2D layer which produces the same size tensor as the input tensor (to partially unscramble the non-linear transformation), but if the kernel size is greater than one the H and W dimensions will be reduced in size, which I don't want (unless this can be addressed later in the CNN). On the other hand, if the kernel size is one the shape will stay the same but I don't think that anything happens to the tensor in this case. Also, I will probably have to include linear layers, but I'm not sure how to use them with 3D tensors. Any suggestions would be welcome.
There's no problem with applying a ReLU layer near the beginning, as long as you apply a weighted linear layer first. If the net learns that it needs the values there, it can apply a negative weight to preserve the information (roughly speaking). In fact, a useful thing to do in some networks is to normalize the input to fit a N(0, 1) normal distribution. See https://www.researchgate.net/post/Which_data_normalization_method_should_be_used_in_this_artificial_neural_network As to the problem of "reducing" the H/W dimensions because of kernel sizes - you can probably use 0-padding on the borders to avoid this problem. In my experience the networks usually handle this relatively well. However, if performance is an issue, usually you might want to reduce resolution significantly and then do upscaling of some sort at the end. You can find an example of such network here: Create image of Neural Network structure As for pooling/feature layers: Because the depth of the tensor is very big (W^2) I would suggest that you in fact do reduce a lot of it right away. The complexity of your network is quadratic in the depth of your tensors and in your pixels count, because of weights from/into each layer in the tensor. So, my basic strategy would be to reduce the information space fast in the beginning, do some layers of calculations, and then upscaling. What I've learned over the years is that CNNs are pretty resilient, and that architectural ideas that might seem good on paper do very little in reality - the best factors are pretty much always more layers (done in a good way, but since ResNet it's gotten way easier) and more/better data. So I would start experimenting and try to assess given a working PoC what blocks the network or try variations. I hope this makes enough sense :) Good luck!
https://stackoverflow.com/questions/59913069/
where could I find training.pt / test.pt
In the Pytorch docs for MNIST I read: root (string): Root directory of dataset where MNIST/processed/training.pt and MNIST/processed/test.pt exist. Where could I find these two files, training.pt and test.pt? And what is their format?
Assuming pytorch 1.x+, The constructor of torchvision.datasets.MNIST follows this signature: torchvision.datasets.MNIST(root, train=True, transform=None, target_transform=None, download=False) The easiest way to get the dataset is to set download=True, that way it will automatically download and store training.pt and test.pt. Assuming a local install, it will by default store them somewhere like .local/lib/python3.6/site-packages/torchvision/, although you don't have to worry about that.
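For example, with a torchvision version matching those docs:
from torchvision import datasets, transforms

train = datasets.MNIST(root='data', train=True, download=True,
                       transform=transforms.ToTensor())
# after this call, data/MNIST/processed/training.pt and test.pt exist on disk
print(len(train))  # 60000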
https://stackoverflow.com/questions/59915334/
Famous neural networks for regression
I have come across many neural network architectures for classification problems: AlexNet, ResNet, VGGNet, GoogLeNet, etc... Are there similar networks for regression problems which can be used for transfer learning?
Alright, all those architectures are not only for classification; the only shift you have to make to turn a DL model from classification to regression is to change the top layer. For example, in VGGNet the last layer could be: Dense(25, activation='softmax') That means that we want to predict 25 outputs with a probability distribution (classification). But it could be Dense(1, activation='linear') With the exact same architecture, it will output a number (regression). So in the case of transfer learning you can just take an existing architecture that is pretrained on classification tasks, remove the top layer and do whatever you want.
https://stackoverflow.com/questions/59928750/
Autograd function in Pytorch documentation
In the Pytorch documentation https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html#sphx-glr-beginner-blitz-autograd-tutorial-py In the image, I am unable to understand what y.backward(v) means, why we need to define another tensor v to do the backward operation, and how we got the results of x.grad. Thanks in advance
y.backward() computes dy/dz where z are all the leaf nodes in the computation graph, and it stores dy/dz in z.grad. For example: In the above case, the leaf node is x. y.backward() works when y is a scalar, which is the case for most deep-learning losses. When y is a vector you have to pass another vector (v in the above case). You can see this as computing d(v^Ty)/dx. To answer how we got x.grad, note that the tutorial keeps multiplying x by 2 until the norm exceeds 1000, so y = (2**i) * x and x.grad will be v * 2**i, where i is the number of times the loop was executed. To have a less complicated example, consider this: x = torch.randn(3,requires_grad=True) print(x) Out: tensor([-0.0952, -0.4544, -0.7430], requires_grad=True) y = x**2 v = torch.tensor([1.0,0.1,0.01]) y.backward(v) print(x.grad) Out[15]: tensor([-0.1903, -0.0909, -0.0149]) print(2*v*x) Out: tensor([-0.1903, -0.0909, -0.0149], grad_fn=<MulBackward0>)
https://stackoverflow.com/questions/59935596/
while installing the apex extension for pytorch (python environment) the following error shows, and I am unable to solve this problem
I want to install apex extension for my pytorch environment, my system is windows 10 and am using python version 3.8.1 and pip version is 20.0.2 I read the instructions from this https://github.com/NVIDIA/apex and I executed the command pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext This error is showing. c:\users\dell\appdata\local\programs\python\python38\lib\site-packages\pip_internal\commands\install.py:244: UserWarning: Disabling all use of wheels due to the use of --build-option / --global-option / --install-option. cmdoptions.check_install_build_global(options) Non-user install because site-packages writeable Created temporary directory: C:\Users\Dell\AppData\Local\Temp\pip-ephem-wheel-cache-ehoqwpvf Created temporary directory: C:\Users\Dell\AppData\Local\Temp\pip-req-tracker-uowlsjqi Initialized build tracking at C:\Users\Dell\AppData\Local\Temp\pip-req-tracker-uowlsjqi Created build tracker: C:\Users\Dell\AppData\Local\Temp\pip-req-tracker-uowlsjqi Entered build tracker: C:\Users\Dell\AppData\Local\Temp\pip-req-tracker-uowlsjqi Created temporary directory: C:\Users\Dell\AppData\Local\Temp\pip-install-rivnsaa9 Cleaning up... Removed build tracker: 'C:\Users\Dell\AppData\Local\Temp\pip-req-tracker-uowlsjqi' ERROR: You must give at least one requirement to install (see "pip help install") Exception information: Traceback (most recent call last): File "c:\users\dell\appdata\local\programs\python\python38\lib\site-packages\pip_internal\cli\base_command.py", line 186, in _main status = self.run(options, args) Please solve this problem
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext The line specified in your link is $ pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ Note that you're missing the final ./, which is why pip tells you that You must give at least one requirement to install (see "pip help install") In other words, you're telling it to install, but you're not telling it what to install.
https://stackoverflow.com/questions/59943865/
How can I convert Tensor into Bitmap on PyTorch Mobile?
I found that solution (https://itnext.io/converting-pytorch-float-tensor-to-android-rgba-bitmap-with-kotlin-ffd4602a16b6) but when I tried to convert that way I found that the size of inputTensor.dataAsFloatArray is more than bitmap.width*bitmap.height. How does converting the tensor to a float array work, or is there any other possible method to convert a pytorch tensor to a bitmap? val inputTensor = TensorImageUtils.bitmapToFloat32Tensor( bitmap, TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB ) // Float array size is 196608 when width and height are 256x256 = 65536 val res = floatArrayToGrayscaleBitmap(inputTensor.dataAsFloatArray, bitmap.width, bitmap.height) fun floatArrayToGrayscaleBitmap ( floatArray: FloatArray, width: Int, height: Int, alpha :Byte = (255).toByte(), reverseScale :Boolean = false ) : Bitmap { // Create empty bitmap in RGBA format (even though it says ARGB but channels are RGBA) val bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888) val byteBuffer = ByteBuffer.allocate(width*height*4) Log.d("App", floatArray.size.toString() + " " + (width * height * 4).toString()) // mapping smallest value to 0 and largest value to 255 val maxValue = floatArray.max() ?: 1.0f val minValue = floatArray.min() ?: 0.0f val delta = maxValue-minValue var tempValue :Byte // Define if float min..max will be mapped to 0..255 or 255..0 val conversion = when(reverseScale) { false -> { v: Float -> ((v-minValue)/delta*255).toByte() } true -> { v: Float -> (255-(v-minValue)/delta*255).toByte() } } // copy each value from float array to RGB channels and set alpha channel floatArray.forEachIndexed { i, value -> tempValue = conversion(value) byteBuffer.put(4*i, tempValue) byteBuffer.put(4*i+1, tempValue) byteBuffer.put(4*i+2, tempValue) byteBuffer.put(4*i+3, alpha) } bmp.copyPixelsFromBuffer(byteBuffer) return bmp }
None of the answers were able to produce the output I wanted, so this is what I came up with - it is basically just a reverse-engineered version of what happens in TensorImageUtils.bitmapToFloat32Tensor(). Please note that this function only works if you are using MemoryFormat.CONTIGUOUS (which is the default) in TensorImageUtils.bitmapToFloat32Tensor(). fun tensor2Bitmap(input: FloatArray, width: Int, height: Int, normMeanRGB: FloatArray, normStdRGB: FloatArray): Bitmap? { val pixelsCount = height * width val pixels = IntArray(pixelsCount) val output = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888) val conversion = { v: Float -> ((v.coerceIn(0.0f, 1.0f))*255.0f).roundToInt()} val offset_g = pixelsCount val offset_b = 2 * pixelsCount for (i in 0 until pixelsCount) { val r = conversion(input[i] * normStdRGB[0] + normMeanRGB[0]) val g = conversion(input[i + offset_g] * normStdRGB[1] + normMeanRGB[1]) val b = conversion(input[i + offset_b] * normStdRGB[2] + normMeanRGB[2]) pixels[i] = 255 shl 24 or (r.toInt() and 0xff shl 16) or (g.toInt() and 0xff shl 8) or (b.toInt() and 0xff) } output.setPixels(pixels, 0, width, 0, 0, width, height) return output } Example usage then could be as follows: tensor2Bitmap(outputTensor.dataAsFloatArray, bitmap.width, bitmap.height, TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB)
https://stackoverflow.com/questions/59950520/
PyTorch-YOLOv3 Generating Training and Validation Curves
Hello again stackoverflow! I greatly appreciate this community and the helpful feedback. I have some other questions that I hope someone can help me with. I am working with an implementation of PyTorch-YOLOv3 from https://github.com/eriklindernoren/PyTorch-YOLOv3 I have been able to train the model, but now I would like to generate training/validation curves. During training, I get back metrics on each epoch that look like this: Epoch 1/3001 batch 0/8 Epoch 1/3001 batch 7/8 I'm trying to generate a graph of a metric (loss, recall, precision, accuracy, mAP) versus epoch by logging these metrics to an external .csv file and plotting those values. Question 1: Does anyone with experience with this YOLOv3 know where the relevant information is? I know the AP and mAP for each epoch are at the bottom of the second image (epoch 1/3001 batch 7/8). I'm not sure where to look for the relevant loss, recall, and precision metrics. Question 2: Does anyone know the difference between YOLO layer 0 and layer 1? Would plotting the metrics from each of these layers yield the training and validation curves, respectively? Question 3: As of two to three months ago, I started receiving the following warning instead of getting the epoch outputs. /pytorch/aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. It doesn't affect the training, but I would like to update the code so that this warning disappears. Any suggestions? Thank you all in advance.
The solution is mentioned in the issues. The exact link is: https://github.com/eriklindernoren/PyTorch-YOLOv3/issues/283 Replace in utils:269 ByteTensor = torch.cuda.ByteTensor if pred_boxes.is_cuda else torch.ByteTensor With: BoolTensor = torch.cuda.BoolTensor if pred_boxes.is_cuda else torch.BoolTensor And its usage in lines 278 and 279: obj_mask = BoolTensor(nB, nA, nG, nG).fill_(0) noobj_mask = BoolTensor(nB, nA, nG, nG).fill_(1)
https://stackoverflow.com/questions/59961103/
Make GPU available again after numba.cuda.close()?
So when I run cuda.select_device(0) and then cuda.close(), Pytorch cannot access the GPU again. I know that there is a way for PyTorch to utilize the GPU again without having to restart the kernel, but I forgot how. Does anyone else know? from numba import cuda as cu import torch # random tensor a=torch.rand(100,100) #tensor can be loaded onto the gpu() a.cuda() device = cu.get_current_device() device.reset() # throws error "RuntimeError: CUDA error: invalid argument" a.cuda() cu.close() # throws error "RuntimeError: CUDA error: invalid argument" a.cuda() torch.cuda.is_available() #True And then trying to run cuda-based pytorch code yields: RuntimeError: CUDA error: invalid argument
I had the same issue but with TensorFlow and Keras when iterating through a for loop to tune hyperparameters. It did not free up the GPU memory used by older models. The cuda solution did not work for me. The following did: import gc gc.collect()
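For PyTorch specifically, a hedged companion step is torch.cuda.empty_cache(), which releases PyTorch's cached (but unreferenced) GPU memory back to the driver; note this may not recover from numba's device.reset(), which tears down the CUDA context itself.
import gc
import torch

del a                     # drop Python references to GPU tensors first
gc.collect()              # let Python reclaim the objects
torch.cuda.empty_cache()  # release PyTorch's cached GPU blocks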
https://stackoverflow.com/questions/59982296/
How to use Pytorch OneCycleLR in a training loop (and optimizer/scheduler interactions)?
I'm training an NN and using RMSprop as an optimizer and OneCycleLR as a scheduler. I've been running it like this (in slightly simplified code): optimizer = torch.optim.RMSprop(model.parameters(), lr=0.00001, alpha=0.99, eps=1e-08, weight_decay=0.0001, momentum=0.0001, centered=False) scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.0005, epochs=epochs) for epoch in range(epochs): model.train() for counter, (images, targets) in enumerate(train_loader): # clear gradients from last run optimizer.zero_grad() # Run forward pass through the mini-batch outputs = model(images) # Calculate the losses loss = loss_fn(outputs, targets) # Calculate the gradients loss.backward() # Update parameters optimizer.step() # Optimizer before scheduler???? scheduler.step() # Check loss on training set test() Note the optimizer and scheduler calls in each mini-batch. This is working, though when I plot the learning rates through the training, the curve is very bumpy. I checked the docs again, and this is the example shown for torch.optim.lr_scheduler.OneCycleLR >>> data_loader = torch.utils.data.DataLoader(...) >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) >>> scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10) >>> for epoch in range(10): >>> for batch in data_loader: >>> train_batch(...) >>> scheduler.step() Here, they omit the optimizer.step() in the training loop. And I thought, that makes sense since the optimizer is provided to OneCycleLR in its initialization, so it must be taking care of that on the back end. But doing so gets me the warning: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Do I ignore that and trust the pseudocode in the docs? Well, I did, and the model didn't do any learning, so the warning is correct and I put optimizer.step() back in. This gets to the point that I don't really understand how the optimizer and scheduler interact (edit: how the Learning Rate in the optimizer interacts with the Learning Rate in the scheduler). I see that generally the optimizer is run every mini-batch and the scheduler every epoch, though for OneCycleLR, they want you to run it every mini-batch too. Any guidance (or a good tutorial article) would be appreciated!
Use optimizer.step() before scheduler.step(). Also, for OneCycleLR, you need to run scheduler.step() after every step - source (PyTorch docs). So, your training code is correct (as far as calling step() on optimizer and schedulers is concerned). Also, in the example you mentioned, they have passed the steps_per_epoch parameter, but you haven't done so in your training code. This is also mentioned in the docs, and might be causing the issue in your code; see the corrected sketch below.
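Putting both points together, a corrected sketch of the setup might look like this; the hyperparameter values are copied from the question.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.0005,
    steps_per_epoch=len(train_loader),  # this was missing in the original code
    epochs=epochs)

for epoch in range(epochs):
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()   # optimizer first...
        scheduler.step()   # ...then the scheduler, once per mini-batch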
https://stackoverflow.com/questions/59996859/
How exactly should the input file be formatted for the language model finetuning (BERT through Huggingface Transformers)?
I wanted to employ the examples/run_lm_finetuning.py from the Huggingface Transformers repository on a pretrained Bert model. However, from following the documentation it is not evident how a corpus file should be structured (apart from referencing the Wiki-2 dataset). I've tried One document per line (multiple sentences) One sentence per line. Documents are separated by a blank line (this I found in some older pytorch-transformers documentation) By looking at the code of examples/run_lm_finetuning.py it is not directly evident how sequence pairs for the Next Sentence Prediction objective are formed. Would the --line-by-line option help here? I'd be grateful if someone could give me some hints on how a text corpus file should look. Many thanks and cheers, nminds
First of all, I strongly suggest to also open this as an issue in the huggingface library, as they have probably the strongest interest to answer this, and may take it as a sign that they should update/clarify their documentation. But to answer your question, it seems that this specific sample script basically returns either a LineByLineTextDataset (if you pass --line_by_line to the training) or otherwise a TextDataset; see ll. 144-149 in the script (formatted slightly for better visibility): def load_and_cache_examples(args, tokenizer, evaluate=False): file_path = args.eval_data_file if evaluate else args.train_data_file if args.line_by_line: return LineByLineTextDataset(tokenizer, args, file_path=file_path, block_size=args.block_size) else: return TextDataset(tokenizer, args, file_path=file_path, block_size=args.block_size) A TextDataset simply splits the text into consecutive "blocks" of a certain (token) length, e.g., it will cut your text every 512 tokens (default value). The Next Sentence Prediction task is only implemented for the default BERT model, if I recall that correctly (this seems to be consistent with what I found in the documentation), and is unfortunately not part of this specific finetuning script. None of the utilized BERT models in the lm_finetuning script make use of that particular task, as far as I can see.
https://stackoverflow.com/questions/60001698/
Error in Tensorboard's(PyTorch) add_graph
I'm following PyTorch's Tensorboard documentation. I have the following code: model = torchvision.models.resnet50(False) writer.add_graph(model) It throws the following error: _ = model(*args) # don't catch, just print the error message TypeError: ResNet object argument after * must be an iterable, not NoneType I don't know what I'm doing wrong here!
I had this problem too. Passing an input_to_model parameter different from None solved the problem, although I thought it should be optional: dataiter = iter(trainloader) images, labels = dataiter.next() writer.add_graph(model, images)
https://stackoverflow.com/questions/60021266/
Computing Linear Layer in Tensor/Outer-Product space in PyTorch is Very Slow
I would like to make a PyTorch model that takes the outer product of the input with itself and then does a linear regression on that. As an example, consider the input vector [1,2,3], then I would like to compute w and b to optimize [1*1, 1*2, 1*3, 2*1, 2*2, 2*3, 3*1, 3*2, 3*3] @ w + b. For a batch input with r rows and c columns, I can do this in PyTorch with (input.reshape(r,c,1) @ input.reshape(r,1,c)).reshape(r,c**2) @ weights + b My problem is that it is extraordinarily slow - something like a factor of 1000 slower and more memory-consuming than adding a fully connected c*c ReLU layer, even though it has the same number of weights. My question is why this happens. Is reshape a very expensive operation for PyTorch? Could I reformulate it in a different way that would make things more efficient? Another equivalent formulation I know is torch.diag(input @ weights @ input.T) + b, but now we are computing way more values than we need (r*r) just to throw them away again.
When you have to reshape a tensor during the training loop of a model, it's always best to use view instead of reshape. There doesn't appear to be any performance overhead with a view, but it does require that the tensor data is contiguous. If your tensors aren't contiguous at the beginning, you can copy the tensor once up front with .contiguous() and keep working with the contiguous version; a sketch is shown below.
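A small sketch of the idea with the shapes from the question; this only swaps reshape for view, so whether it recovers the full slowdown in your setting is not guaranteed.
import torch

r, c = 64, 16
x = torch.randn(r, c).contiguous()  # ensure view() is legal
weights = torch.randn(c * c, 1)
b = torch.randn(1)

# batched outer product, then flatten with view instead of reshape
outer = x.view(r, c, 1) @ x.view(r, 1, c)
out = outer.view(r, c * c) @ weights + b
print(out.shape)  # torch.Size([64, 1])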
https://stackoverflow.com/questions/60025695/
pytorch train function, Variables and Tensors (read my introduction, I don't know my problem well, it just doesn't work)
I started learning PyTorch and started with videos about MNIST handwriting recognition. I learned from a video, but the video is 2 years old and some things have changed since then, I guess, because it doesn't work as in the video, and I seriously don't know much yet, so I don't know what my error is or what I am doing wrong; I just typed everything the author says in the video and want to understand and learn it this way (maybe you know better ways to learn machine learning/deep learning; I would appreciate it). My code looks like this: import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import torch.optim as optim import os from torchvision import datasets, transforms kwargs = {'num_workers': 1, 'pin_memory': True} train_data = torch.utils.data.DataLoader(datasets.MNIST('data', train=True, download=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,),(0.3081,))])), batch_size=64, shuffle=True, **kwargs) test_data = torch.utils.data.DataLoader(datasets.MNIST('data', train=False, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,),(0.3081,))])), batch_size=64, shuffle=True, **kwargs) Everything above works like in the video and I find the data in a folder. Now comes the class; it doesn't look like there's an error, but I don't know. class Netz(nn.Module): def __init__(self): super(Netz, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size= 4) self.conv2 = nn.Conv2d(10, 20, kernel_size= 4) self.conv_dropout = nn.Dropout2d() self.fc1 = nn.Linear(320, 60) self.fc2 = nn.Linear(60, 10) def forward(self, x): x = self.conv1(x) x = F.max_pool2d(x, 4) x = F.relu(x) x = self.conv2(x) x = self.conv_dropout(x) x = F.max_pool2d(x, 4) x = F.relu(x) print(x.size()) exit() model = Netz() model.cuda() Something with this Variable function is wrong; it just doesn't work, and PyCharm also shows me there has to be something wrong, but I don't know what, so I'm asking here; maybe you can help. I also googled a bit about it and it looks like this Variable thing got removed, but I don't know what to write instead. optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.8) def train(epoch): model.train() for batch_id, (data, target) in enumerate(train_data): data = data.cuda() target = target.cuda() data = Variable(data) target = Variable(target) optimizer.zero_grad() out = model(data) criterion = F.nll_loss loss = criterion(out, target) loss.backward() optimizer.step() for epoch in range(1, 30): train(epoch) The error looks like this: Traceback (most recent call last): File "<string>", line 1, in <module> File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main exitcode = _main(fd) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 114, in _main prepare(preparation_data) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 225, in prepare _fixup_main_from_path(data['init_main_from_path']) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path run_name="__mp_main__") File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File
"C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code exec(code, run_globals) File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\mnist handwriting.py", line 60, in <module> train(epoch) File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\mnist handwriting.py", line 46, in train for batch_id, (data, target) in enumerate(train_data): File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\venv\lib\site-packages\torch\utils\data\dataloader.py", line 279, in __iter__ return _MultiProcessingDataLoaderIter(self) File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\venv\lib\site-packages\torch\utils\data\dataloader.py", line 719, in __init__ w.start() File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start self._popen = self._Popen(self) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 223, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen return Popen(process_obj) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 46, in __init__ prep_data = spawn.get_preparation_data(process_obj._name) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 143, in get_preparation_data _check_not_importing_main() File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 136, in _check_not_importing_main is not going to be frozen to produce an executable.''') RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. 
Traceback (most recent call last): File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\venv\lib\site-packages\torch\utils\data\dataloader.py", line 761, in _try_get_data data = self._data_queue.get(timeout=timeout) File "C:\Users\Finnw\AppData\Local\Programs\Python\Python37\lib\queue.py", line 178, in get raise Empty _queue.Empty During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/Users/Finnw/PycharmProjects/pytorch 3.7/mnist handwriting.py", line 60, in <module> train(epoch) File "C:/Users/Finnw/PycharmProjects/pytorch 3.7/mnist handwriting.py", line 46, in train for batch_id, (data, target) in enumerate(train_data): File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\venv\lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__ data = self._next_data() File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\venv\lib\site-packages\torch\utils\data\dataloader.py", line 841, in _next_data idx, data = self._get_data() File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\venv\lib\site-packages\torch\utils\data\dataloader.py", line 798, in _get_data success, data = self._try_get_data() File "C:\Users\Finnw\PycharmProjects\pytorch 3.7\venv\lib\site-packages\torch\utils\data\dataloader.py", line 774, in _try_get_data raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) RuntimeError: DataLoader worker (pid(s) 10444) exited unexpectedly Process finished with exit code 1
I believe just setting num_workers to zero would solve your problem. One other thing that would solve your problem is to place your code in a main function (a sketch of this idiom is shown below). The reasons for this can be found here: https://docs.python.org/2/library/multiprocessing.html#multiprocessing-programming . The reason is that num_workers tells PyTorch to generate data samples in separate worker processes, launching num_workers of them, such that batches can be served as fast as possible to your training loop. The error message you got actually tells you pretty much the same thing: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ...
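A minimal sketch of the main-guard idiom, using a stripped-down version of the question's loader; the training step itself is elided.
import torch
from torchvision import datasets, transforms

def main():
    kwargs = {'num_workers': 1, 'pin_memory': True}
    train_data = torch.utils.data.DataLoader(
        datasets.MNIST('data', train=True, download=True,
                       transform=transforms.ToTensor()),
        batch_size=64, shuffle=True, **kwargs)
    for batch_id, (data, target) in enumerate(train_data):
        pass  # the training step goes here

if __name__ == '__main__':
    # on Windows, worker processes re-import this module;
    # the guard prevents them from re-executing the training code
    main()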
https://stackoverflow.com/questions/60029369/
Why is PyTorch 2x slower than Keras for an identical model and hyperparameters?
I've experienced this with custom made modules as well, but for this example I'm specifically using one of the official PyTorch examples and the MNIST dataset. I've ported the exact architecture in Keras and TF2 with eager mode like so: model = keras.models.Sequential([ keras.layers.Conv2D(32, (3, 3) , input_shape=(28,28,1), activation='relu'), keras.layers.Conv2D(64, (3, 3)), keras.layers.MaxPool2D((2, 2)), keras.layers.Dropout(0.25), keras.layers.Flatten(), keras.layers.Dense(128, activation='relu'), keras.layers.Dropout(0.5), keras.layers.Dense(10, activation='softmax')] ) model.summary() model.compile(optimizer=keras.optimizers.Adadelta(), loss=keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) model.fit(train_data,train_labels,batch_size=64,epochs=30,shuffle=True, max_queue_size=1) The training loop in PyTorch is: def train(args, model, device, train_loader, optimizer, epoch): model.train() for batch_idx, (data, target) in enumerate(train_loader): data, target = data.to(device), target.to(device) optimizer.zero_grad() output = model(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() With me timing every epoch like so: for epoch in range(1, args.epochs + 1): since = time.time() train(args, model, device, train_loader, optimizer, epoch) # test(args, model, device, test_loader) # scheduler.step() time_elapsed = time.time() - since print('Training complete in {:.0f}m {:.0f}s'.format( time_elapsed // 60, time_elapsed % 60)) I have verified that: Both versions use the same Optimizer (AdaDelta) Both versions have around the same number of trainable parameters (1.2 million) I removed the normalization in dataLoader, leaving it to just a toTensor() call. pin_memory is set to True, and num_workers is set to 1 for the PyTorch code. Per the suggestion of Timbus Calin I set max_queue_size to 1 and the results are identical. The Keras version runs at around 4-5 seconds per epoch while the PyTorch version runs at around 9-10 seconds per epoch. Why is this and how can I improve this time?
I think there is a subtle difference that must be taken into consideration; my best bet/hunch is the following: it is not the processing time per se on the GPU, but the max_queue_size=10 parameter, 10 by default in Keras. Since by default in the normal for-loop in PyTorch the data is not queued, the queue which Keras benefits from allows data to be transferred from CPU to GPU faster; in essence, there is much less time spent feeding the GPU, since it consumes from that internal queue, and the overhead of transferring data from CPU to GPU is reduced. A PyTorch-side sketch of adding similar prefetching is shown below. Apart from my former observation, I cannot see any other visible difference; maybe other people can point out new findings.
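If the bottleneck is indeed data feeding, a hedged PyTorch-side experiment is to let the DataLoader prefetch with several workers; train_dataset and device below stand in for the question's data and GPU, and whether this closes the whole gap depends on the machine.
train_loader = torch.utils.data.DataLoader(
    train_dataset,
    batch_size=64,
    shuffle=True,
    num_workers=4,    # workers prepare batches ahead of the training loop
    pin_memory=True)  # page-locked memory speeds up CPU-to-GPU copies

for data, target in train_loader:
    data = data.to(device, non_blocking=True)    # overlap transfer with compute
    target = target.to(device, non_blocking=True)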
https://stackoverflow.com/questions/60029607/
Is there any way that can convert a data format of .pb file from NCHW into NHWC?
I have a CNN model which was trained in Pytorch based on the data format N(batch) x C(channel) x H(height) x W(width). I saved the pre-trained model as model.pth. Afterward, I converted the pre-trained model from model.pth -> model.onnx by using existing function: torch.onnx.export(model, dummy_input, "model.onnx") And then, I converted this model.onnx -> model.pb by the module below: import onnx from onnx_tf.backend import prepare model_onnx = onnx.load('model.onnx') tf_rep = prepare(model_onnx) tf_rep.export_graph('model.pb') The problem is: I want to utilize this model.pb on a CPU device, which needs a NHWC data format. However, my model is based on NCHW data format. Is there any method that can convert the data format of this model.pb from NCHW into NHWC?
Short answer: you are in a tough spot. Long answer: it's difficult yet possible. What makes your problem difficult is that your graph is already trained. It is inefficient, yet easier, to convert NCHW->NHWC while you create the training graph. See similar answers here and here. Now, to your answer: you'll have to overload the conv2D operator with a custom convolution operator. Here is pseudocode to get started. tensor Conv2D(X, W, B) { int perm[] = {0, 3, 1, 2}; X = transposeTensor(X, perm); W = transposeTensor(W, perm); Y = Conv2D_orig(X, W, B, ...) ; perm = {0, 2, 3, 1}; return transposeTensor(Y, perm); }
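As a concrete, hedged Python rendering of that pseudocode: a thin wrapper around the original NCHW convolution, where conv2d_nchw stands for whatever NCHW op the converted graph actually uses (the weight layout is left untouched here because it was trained for the NCHW op).
import tensorflow as tf

def conv2d_nhwc_wrapper(x_nhwc, w, b, conv2d_nchw):
    # NHWC -> NCHW so the trained kernels still see the layout they expect
    x_nchw = tf.transpose(x_nhwc, perm=[0, 3, 1, 2])
    y_nchw = conv2d_nchw(x_nchw, w, b)
    # NCHW -> NHWC for the CPU-friendly layout expected downstream
    return tf.transpose(y_nchw, perm=[0, 2, 3, 1])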
https://stackoverflow.com/questions/60048660/
Sequence labeling for sentences and not tokens
I have sentences that belong to a paragraph. Each sentence has a label. [s1,s2,s3,…], [l1,l2,l3,…] I understand that I have to encode each sentence using an encoder, and then use sequence labeling. Could you guide me on how I could do that, combining them?
If I understand your question correctly, you are looking to encode your sentences into a numeric representation. Let's say you have data like: data = ["Sarah, is that you? Hahahahahaha Todd give you another black eye??" "Well, being slick comes with the job of being a propagandist, Andi..." "Sad to lose a young person who was earnestly working for the common good and public safety when so many are in the basement smoking pot and playing computer games."] labels = [0,1,0] Now you want to build a classifier. For training, the classifier's data should be in numeric format, so here we will transform the text data into a numeric structure; for that we will use a TF-IDF vectorizer, which creates a matrix for the text data, and then we apply an algorithm to it. from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.svm import LinearSVC from sklearn.pipeline import Pipeline vectorizerPipe = Pipeline([ ('tfidf', TfidfVectorizer(lowercase=True,stop_words='english')), ('classification', LinearSVC(penalty='l2',loss='hinge'))]) trained_model = vectorizerPipe.fit(data,labels) Here a pipeline is constructed where the first step is feature vector extraction (converting text data into numeric format) and the next step applies the algorithm to it. There are a lot of parameters in both steps you can try. Finally we fit the pipeline with the .fit method, passing data and labels.
https://stackoverflow.com/questions/60048900/
how to duplicate the input channel in a tensor?
I have a tensor with the shape torch.Size([39, 1, 20, 256, 256]). How do I duplicate the channel to make the shape torch.Size([39, 3, 20, 256, 256])?
I am fairly certain that this is already a duplicate question, but I could not find a fitting answer myself, which is why I am going ahead and answering this by referring to both the PyTorch documentation and the PyTorch forum. Essentially, torch.Tensor.expand() is the function that you are looking for, and it can be used as follows: x = torch.rand([39, 1, 20, 256, 256]) y = x.expand(39, 3, 20, 256, 256) Note that this works only on singleton dimensions, which is the case in your example, but may not work for arbitrary dimensions prior to expansion. Also, this is basically just providing a different memory view, which means that, according to the documentation, you have to keep the following in mind: More than one element of an expanded tensor may refer to a single memory location. As a result, in-place operations (especially ones that are vectorized) may result in incorrect behavior. If you need to write to the tensors, please clone them first. For a newly allocated memory version, see torch.Tensor.repeat, which is outlined in this (slightly related) answer. The syntax otherwise works exactly the same as expand().
https://stackoverflow.com/questions/60058698/
I can't install torch-sparse in Google Colab
I am trying to install torch-sparse in Google Colab using ! pip install torch-sparse, but I am getting the following error: Collecting torch-sparse Using cached https://files.pythonhosted.org/packages/0e/bf/6242893c898621e7e4756e1ad298e903df6dfae208aec1c32adf8cfd1f7f/torch_sparse-0.4.4.tar.gz Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from torch-sparse) (1.4.1) Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from scipy->torch-sparse) (1.17.5) Building wheels for collected packages: torch-sparse Building wheel for torch-sparse (setup.py) ... error ERROR: Failed building wheel for torch-sparse Running setup.py clean for torch-sparse Failed to build torch-sparse Installing collected packages: torch-sparse Running setup.py install for torch-sparse ... error ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-h3_oysnr/torch-sparse/setup.py'"'"'; __file__='"'"'/tmp/pip-install-h3_oysnr/torch-sparse/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-0xvimdk5/install-record.txt --single-version-externally-managed --compile Check the logs for full command output. How can I fix it?
You need to go into Runtime -> Change runtime type and choose a GPU as the Hardware accelerator. After this it should install fine. Collecting torch-sparse Downloading https://files.pythonhosted.org/packages/0e/bf/6242893c898621e7e4756e1ad298e903df6dfae208aec1c32adf8cfd1f7f/torch_sparse-0.4.4.tar.gz Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from torch-sparse) (1.4.1) Requirement already satisfied: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from scipy->torch-sparse) (1.17.5) Building wheels for collected packages: torch-sparse Building wheel for torch-sparse (setup.py) ... done Created wheel for torch-sparse: filename=torch_sparse-0.4.4-cp36-cp36m-linux_x86_64.whl size=4956229 sha256=0463ad1735eb37f9f555b7c83b32bd43cfee20e312061e8efca43f2c29158fbb Stored in directory: /root/.cache/pip/wheels/8a/1a/6f/88952b83ebba6b2742909fcd6e320e3a99fc7d2a2428391f8c Successfully built torch-sparse Installing collected packages: torch-sparse Successfully installed torch-sparse-0.4.4
https://stackoverflow.com/questions/60059680/
Get feature vectors from BertForSequenceClassification
I have successfully built a sentiment analysis tool with BertForSequenceClassification from huggingface/transformers to classify $tsla tweets as positive or negative. However, I can't find out how I can obtain the feature vectors per tweet (more specifically the embedding of [CLS]) from my finetuned model. More info on the model used: model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, num_labels=num_labels) model.config.output_hidden_states = True tokenizer = BertTokenizer(OUTPUT_DIR+'vocab.txt') However, when I run the code below, the output variable only consists of the logits. model.eval() eval_loss = 0 nb_eval_steps = 0 preds = [] for input_ids, input_mask, segment_ids, label_ids in tqdm_notebook(eval_dataloader, desc="Evaluating"): input_ids = input_ids.to(device) input_mask = input_mask.to(device) segment_ids = segment_ids.to(device) label_ids = label_ids.to(device) with torch.no_grad(): output = model(input_ids,token_type_ids= segment_ids,attention_mask= input_mask)
I also had this problem after fine-tuning BertForSequenceClassification. I take it your purpose is to get the hidden state of [CLS] as the representation of each tweet. Right? According to the API documentation, I think the code is: model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, output_hidden_states=True) logits, hidden_states = model(input_ids, attn_masks) cls_hidden_state = hidden_states[-1][:, 0, :] # the first hidden state in last layer or model = BertForSequenceClassification.from_pretrained(OUTPUT_DIR, output_hidden_states=True) last_hidden_states = model.bert(input_ids, attn_masks)[0] cls_hidden_state = last_hidden_states[:, 0, :]
https://stackoverflow.com/questions/60064988/
What is the default batch size of pytorch SGD?
What does pytorch SGD do if I feed the whole data and do not specify the batch size? I don't see any "stochastic" or "randomness" in that case. For example, in the following simple code, I feed the whole data (x,y) into a model. optimizer = torch.optim.SGD(model.parameters(), lr=0.1) for epoch in range(5): y_pred = model(x_data) loss = criterion(y_pred, y_data) optimizer.zero_grad() loss.backward() optimizer.step() Suppose there are 100 data pairs (x,y), i.e. x_data and y_data each has 100 elements. Question: It seems to me that all 100 gradients are calculated before one update of parameters. The size of a "mini_batch" is 100, not 1. So there is no randomness, am I right? At first, I thought SGD meant randomly choosing 1 data point and calculating its gradient, which would be used as an approximation of the true gradient from all data.
The SGD optimizer in PyTorch is just gradient descent. The stochastic part comes from how you usually pass a random subset of your data through the network at a time (i.e. a mini-batch or batch). The code you posted passes the entire dataset through on each epoch before doing backprop and stepping the optimizer, so you're really just doing regular gradient descent. A sketch of a mini-batch version is shown below.
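To actually get stochastic mini-batch behavior, the usual approach is to wrap the data in a DataLoader; here is a hedged sketch of the question's loop rewritten that way (batch_size=10 is an arbitrary choice).
dataset = torch.utils.data.TensorDataset(x_data, y_data)
loader = torch.utils.data.DataLoader(dataset, batch_size=10, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for epoch in range(5):
    for x_batch, y_batch in loader:  # 10 random mini-batches of 10 per epoch
        loss = criterion(model(x_batch), y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()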
https://stackoverflow.com/questions/60068114/
Transformers PreTrainedTokenizer add_tokens Functionality
Referring to the documentation of the awesome Transformers library from Huggingface, I came across the add_tokens functions. tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased') num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2']) model.resize_token_embeddings(len(tokenizer)) I tried the above by adding previously absent words in the default vocabulary. However, keeping all else constant, I noticed a decrease in accuracy of the fine tuned classifier making use of this updated tokenizer. I was able to replicate similar behavior even when just 10% of the previously absent words were added. My questions Am I missing something? Instead of whole words, is the add_tokens function expecting masked tokens, for example : '##ah', '##red', '##ik', '##si', etc.? If yes, is there a procedure to generate such masked tokens? Any help would be appreciated. Thanks in advance.
If you add tokens to the tokenizer, you indeed make the tokenizer tokenize the text differently, but this is not the tokenization BERT was trained with, so you are basically adding noise to the input. The word embeddings are not trained and the rest of the network never saw them in context. You would need a lot of data to teach BERT to deal with the newly added words. There are also some ways to compute a single word embedding such that it would not hurt BERT, like in this paper, but it seems pretty complicated and should not make any difference. BERT uses a word-piece-based vocabulary, so it should not really matter whether the words are present in the vocabulary as a single token or get split into multiple wordpieces. The model probably saw the split word during pre-training and will know what to do with it. Regarding the ##-prefixed tokens, those are tokens that can only appear as a suffix of another wordpiece. E.g., walrus gets split into ['wal', '##rus'] and you need both of the wordpieces to be in the vocabulary, but not ##wal or rus.
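You can inspect this behavior directly; a short snippet, with the caveat that the exact split depends on the vocabulary, so treat the printed pieces as illustrative.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
print(tokenizer.tokenize('walrus'))  # e.g. ['wal', '##rus']
tokenizer.add_tokens(['walrus'])     # now a single token, with an untrained embedding
print(tokenizer.tokenize('walrus'))  # ['walrus']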
https://stackoverflow.com/questions/60068129/
free up the memory allocation cuda pytorch?
RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 1; 11.91 GiB total capacity; 10.12 GiB already allocated; 21.75 MiB free; 56.79 MiB cached) I encountered the preceding error during pytorch training. I'm using pytorch on jupyter notebook. Is there a way to free up the gpu memory in jupyter notebook?
I had the same issue some time back. There are generally two ways I go about it. Decrease the batch size. Sometimes, even when I had decreased the batch size to 1, this issue persisted. Then I changed my approach as follows. Decrease the image size (or patch size, depending upon your implementation). Decreasing the image size also gives you room to increase your batch size. But the second approach is not recommended, because we want the network to learn different features of the image in relation to each other; decreasing the image size reduces the network's scope for learning finer details. (Depending upon your needs, you may need to adjust this trade-off.)
https://stackoverflow.com/questions/60068277/
How to run inference of a pytorch model on pyspark dataframe (create new column with prediction) using pandas_udf?
Is there a way to run the inference of pytorch model over a pyspark dataframe in vectorized way (using pandas_udf?). One row udf is pretty slow since the model state_dict() needs to be loaded for each row. I'm trying to use pandas_udf to speed this up, since all the operations can be vectorized efficiently in pandas/pytorch. I've looked at this databricks post for inspiration, but it's doesn't correspond exactly to my use case since I want to run prediction on an existing pyspark dataframe. I can get it to work using one row udf in this simple example: import torch import torch.nn as nn from pyspark.sql.functions import col, pandas_udf, PandasUDFType, udf import pyspark.sql.functions as F from pyspark.sql import SparkSession from pyspark.sql.types import ArrayType, FloatType, DoubleType import pandas as pd import numpy as np spark = SparkSession.builder.master('local[*]') \ .appName("model_training") \ .getOrCreate() class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.w = nn.Linear(5, 1) def forward(self, x): return self.w(x) net = Net() bc_model_state = spark.sparkContext.broadcast(net.state_dict()) df = spark.sparkContext.parallelize([[np.random.rand() for i in range(5)] for j in range(10)]).toDF() df = df.withColumn('features', F.array([F.col(f"_{i}") for i in range(1, 6)])) def get_model_for_eval(): # Broadcast the model state_dict net.load_state_dict(bc_model_state.value) net.eval() return net def one_row_predict(x): model = get_model_for_eval() t = torch.tensor(x, dtype=torch.float32) prediction = model(t).cpu().detach().item() return prediction one_row_udf = udf(one_row_predict, FloatType()) df = df.withColumn('pred_one_row', one_row_udf(col('features'))) df.show() Output: +--------------------+-------------------+-------------------+-------------------+-------------------+--------------------+------------+ | _1| _2| _3| _4| _5| features|pred_one_row| +--------------------+-------------------+-------------------+-------------------+-------------------+--------------------+------------+ | 0.8447505355266759| 0.3938414671838497|0.46347383092447003| 0.7694022276208854| 0.6152606009215115|[0.84475053552667...| 0.025048971| |0.023782157504950607| 0.6434186254505012| 0.4090423037706754| 0.5466917794921007| 0.7855157903802007|[0.02378215750495...| 0.19694215| | 0.5057589877333257| 0.7186078182786649| 0.9123361330966105| 0.601837718628886| 0.0773272396167538|[0.50575898773332...| 0.278222| | 0.2815336141913932| 0.5196112020157087| 0.9646444599173869|0.04844988843812004|0.35445251642633047|[0.28153361419139...| 0.10699606| | 0.3896101050146765|0.38732747821339863| 0.8516864705178889| 0.2500977280156421| 0.7781221754566505|[0.38961010501467...| -0.08206403| | 0.8223344715797269| 0.9089425281658239|0.10088026161623431| 0.9920995834835098|0.40665125930441104|[0.82233447157972...| 0.3565607| | 0.31167413110257425| 0.9778009876605741| 0.4717549025588036|0.24563879994222826| 0.7594244867194454|[0.31167413110257...| 0.18897778| | 0.5667657426129576| 0.5383639427018171| 0.2983527299596511|0.18914810241640534|0.47854422807435326|[0.56676574261295...| 0.17796803| | 0.6419824467244137|0.03992370080139418|0.38462617679839173| 0.709487894249459|0.23020927682221126|[0.64198244672441...| 0.15635887| | 0.7972928622000178| 0.7700992684264264| 0.4387404431803098| 0.1340696629092989| 0.7072213018683782|[0.79729286220001...| 0.0500246| +--------------------+-------------------+-------------------+-------------------+-------------------+--------------------+------------+ Trying to do the 
same thing in a vectorized way, this works: def batch_predict(x): model = get_model_for_eval() xp = np.vstack(x) t = torch.tensor(xp, dtype=torch.float32) prediction = model(t).cpu().detach().numpy().flatten() return pd.Series(prediction) df_pd = df.toPandas() x = df_pd['features'] print(batch_predict(x)) But running it inside a pandas_udf fails: batch_udf = pandas_udf(batch_predict, FloatType()) df = df.withColumn('pred_batch', batch_udf(col('features'))) df.show() with: 20/02/11 10:13:01 ERROR Executor: Exception in task 2.0 in stage 1.0 (TID 3) java.lang.IllegalArgumentException at java.nio.ByteBuffer.allocate(ByteBuffer.java:334) at org.apache.arrow.vector.ipc.message.MessageSerializer.readMessage(MessageSerializer.java:543) at org.apache.arrow.vector.ipc.message.MessageChannelReader.readNext(MessageChannelReader.java:58) at org.apache.arrow.vector.ipc.ArrowStreamReader.readSchema(ArrowStreamReader.java:132) at org.apache.arrow.vector.ipc.ArrowReader.initialize(ArrowReader.java:181) at org.apache.arrow.vector.ipc.ArrowReader.ensureInitialized(ArrowReader.java:172) at org.apache.arrow.vector.ipc.ArrowReader.getVectorSchemaRoot(ArrowReader.java:65) at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:162) at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122) at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410) at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37) at org.apache.spark.sql.execution.python.ArrowEvalPythonExec$$anon$2.<init>(ArrowEvalPythonExec.scala:98) at org.apache.spark.sql.execution.python.ArrowEvalPythonExec.evaluate(ArrowEvalPythonExec.scala:96) at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:127) at org.apache.spark.sql.execution.python.EvalPythonExec$$anonfun$doExecute$1.apply(EvalPythonExec.scala:89) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801) at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:801) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324) at org.apache.spark.rdd.RDD.iterator(RDD.scala:288) at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) at org.apache.spark.scheduler.Task.run(Task.scala:123) at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408) at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Thanks for the help
So apparently this issue is due to an incompatibility between spark 2.4.x and pyarrow >= 0.15. See here: https://issues.apache.org/jira/browse/SPARK-29367 https://arrow.apache.org/blog/2019/10/06/0.15.0-release/ https://spark.apache.org/docs/3.0.0-preview/sql-pyspark-pandas-with-arrow.html#usage-notes How I fixed it: Call this code before creating the spark session: import os os.environ['ARROW_PRE_0_15_IPC_FORMAT'] = '1'
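A commonly cited alternative workaround, assuming you are free to change dependencies, is to pin pyarrow below 0.15 so it matches what Spark 2.4.x expects:
pip install 'pyarrow<0.15.0'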
https://stackoverflow.com/questions/60074543/
addition of 2 pytorch tensors with different sizes
I have 2 tensors with dimensions A = [64,155,300] and B = [64,155,100]. When I add these 2 tensors, i.e. C = A + B, I get this error: "RuntimeError: The size of tensor a (300) must match the size of tensor b (100) at non-singleton dimension 2" Could anyone please explain how I should add the above tensors? Any help will be appreciated!
As the error says, you cannot add two tensors with mismatched shapes, but if you want you can repeat the third dim of tensor B so it matches A, using torch.Tensor.repeat. Try A + B.repeat(1,1,3): >>> A.shape torch.Size([64, 155, 300]) >>> B.shape torch.Size([64, 155, 100]) >>> B = B.repeat(1,1,3) >>> B.shape torch.Size([64, 155, 300]) >>> C = A + B >>> C.shape torch.Size([64, 155, 300])
https://stackoverflow.com/questions/60088784/
Converting python list to pytorch tensor
I have a problem converting a python list of numbers to a pytorch Tensor. This is my code: caption_feat = [int(x) if x < 11660 else 3 for x in caption_feat] printing caption_feat gives: [1, 9903, 7876, 9971, 2770, 2435, 10441, 9370, 2] I do the converting like this: tmp2 = torch.Tensor(caption_feat) now printing tmp2 gives: tensor([1.0000e+00, 9.9030e+03, 7.8760e+03, 9.9710e+03, 2.7700e+03, 2.4350e+03, 1.0441e+04, 9.3700e+03, 2.0000e+00]) However, I expected to get: tensor([1., 9903., 7876., 9971., ...]) Any idea?
You can directly convert a python list to a pytorch Tensor and then cast it to the dtype you want. For example, import torch a_list = [3,23,53,32,53] a_tensor = torch.Tensor(a_list) print(a_tensor.int()) >>> tensor([3,23,53,32,53])
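A hedged alternative that avoids the float round-trip entirely is the torch.tensor factory function, which infers an integer dtype from Python ints:
import torch

caption_feat = [1, 9903, 7876, 9971, 2770, 2435, 10441, 9370, 2]
tmp2 = torch.tensor(caption_feat)  # dtype is inferred as torch.int64
print(tmp2)  # tensor([    1,  9903,  7876,  9971,  2770,  2435, 10441,  9370,     2])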
https://stackoverflow.com/questions/60090093/
What is projection layer in the context of neural machine translation using RNN?
I read a paper about machine translation, and it uses a projection layer. The projection layer is explained as follows: "Additional projection aims to reduce the dimensionality of the encoder output representations to match the decoder stack dimension." Does anyone know this architecture or how to implement this layer in Pytorch? The paper's link: https://www.aclweb.org/anthology/P18-1008.pdf The model architecture:
It is a standard linear projection. You can just add nn.Linear(2 * model_dim, model_dim), where model_dim is the RNN dimension. The encoder is bidirectional, with one RNN in each direction, each having an output of dimension model_dim, so the concatenated encoder output has dimension 2 * model_dim. The decoder only works in the forward direction, so it has states of only model_dim dimensions. It actually saves a lot of parameters in the multi-head attention because it makes the projections for keys and values only half the size: they project from model_dim instead of 2 * model_dim. A sketch of where the projection sits is shown below.
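A minimal sketch of where that projection sits; the dimension and the choice of GRU are illustrative, not taken from the paper.
import torch
import torch.nn as nn

model_dim = 512
encoder = nn.GRU(input_size=model_dim, hidden_size=model_dim,
                 bidirectional=True, batch_first=True)
projection = nn.Linear(2 * model_dim, model_dim)  # 2x because of bidirectionality

x = torch.randn(8, 20, model_dim)   # (batch, seq_len, features)
enc_out, _ = encoder(x)             # (8, 20, 2 * model_dim)
dec_in = projection(enc_out)        # (8, 20, model_dim), matches the decoder stack
print(dec_in.shape)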
https://stackoverflow.com/questions/60110462/
Pytorch: load dataset of grayscale images
I want to load a dataset of grayscale images. I used ImageFolder but this doesn't load gray images by default, as it converts images to RGB. I found solutions that load images with ImageFolder and then convert the images to grayscale, using: transforms.Grayscale(num_output_channels=1) or ImageOps.grayscale(image) Is it correct? How can I load grayscale images without conversion? I tried ImageDataBunch, but I have problems importing fastai.vision
Assuming the dataset is stored in the "Dataset" folder as given below, set the root directory as "Dataset": Dataset class_1 img1.png img2.png class_2 img1.png img2.png from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader, random_split from torchvision import transforms root = 'Dataset/' data_transform = transforms.Compose([transforms.Grayscale(num_output_channels=1), transforms.ToTensor()]) dataset = ImageFolder(root, transform=data_transform) For reference, the train and test datasets are split into 70% and 30% respectively. # Split test and train dataset train_size = int(0.7 * len(dataset)) test_size = len(dataset) - train_size train_data, test_data = random_split(dataset, [train_size, test_size]) This dataset can be further divided into train and test data loaders as given below to perform operations in batches. Usually you will see batch_size assigned once and used for both train and test loaders, but I define it separately here. Choosing batch_size as a factor of the train/test split size is convenient because every batch is then full; otherwise the last batch is simply smaller (or dropped if you pass drop_last=True). # Set batch size of train data loader batch_size_train = 20 # Set batch size of test data loader batch_size_test = 22 # load the split train and test data into batches via DataLoader() train_loader = DataLoader(train_data, batch_size=batch_size_train, shuffle=True) test_loader = DataLoader(test_data, batch_size=batch_size_test, shuffle=True)
https://stackoverflow.com/questions/60116208/
Concatenate with respect to 1st dimension
In the following code, what does torch.cat really do? I know it concatenates the batch which is contained in the sample, but why do we have to do that, and what does concatenate really mean? # memory is just a list of events def sample(self, batch_size): samples = zip(*random.sample(self.memory, batch_size)) return map(lambda x: Variable(torch.cat(x,0)))
torch.cat concatenates, as the name suggests, along a specified dimension. An example from the documentation will tell you everything you need to know: x = torch.randn(2, 3) # shape (2, 3) catted = torch.cat((x, x, x), dim=0) # shape (6, 3), e.g. 3 x stacked on each other Remember concatenated tensors need to have the same dimensions except along the one you are concatenating. In the above example it doesn't do anything though and isn't even viable, as it lacks a second argument (the inputs to apply map to), see here. Assuming you would do this mapping instead: map(lambda x: Variable(torch.cat(x,0)), samples) It would create a new tensor of shape [len(samples), x_dim_1, x_dim_2, ...] provided all samples have the same dimensionality except 0. Still, it is a pretty convoluted example and definitely shouldn't be done like that (torch.autograd.Variable is deprecated, see here); this should be enough: # assuming random.sample returns either `list` or `tuple` def sample(self, batch_size): return torch.cat(random.sample(self.memory, batch_size), dim=0)
https://stackoverflow.com/questions/60117911/
How to speed up slicing in python, not using the for loop
I'm trying to speed up the following python code: import torch import numpy as np A = torch.zeros(11, 16, 64) B = torch.randn(11, 9, 64) indices = np.random.randint(0,9,(11,16)) for i in range(len(A)): A[i,:,:] = B[i,indices[i],:] Is there a nice way to avoid the for loop? As written, it is really slow, especially when dealing with big data. indices is a pre-defined 2-dim matrix of size (11,16). What I need is to assign the elements of B to A according to the order given by indices. After the speed-up, the resulting A should be exactly the same as the A produced above. Thanks!
You can use multiple mult-dimensional indices but they need to be the same size or broadcastable. So for example # create a (11, 1) range array that broadcasts with indices which is (11, 16) indices0 = np.expand_dims(np.arange(indices.shape[0]), 1) A = B[indices0, indices, :] Since broadcasting can be confusing I'll try to explain this a little. Basically you want indices0 and indices to be the same size and represent pairs of indices of B. The first index will be stored in indices0 and the second will be stored in indices in corresponding locations. Broadcasting implicitly repeats the columns of indices0 to make it the same shape as indices and can often be faster than constructing the full sized indices0. In case it helps here are some more verbose examples demonstrating why this works: import torch import numpy as np B = torch.randn(11, 9, 64) indices = np.random.randint(0,9,(11,16)) # constructing indices0 more verbosely (and slower) for demonstration purposes a0, a1 = indices.shape a2 = B.shape[2] # construct a complete indices0 the slow way, the same size as indices indices0 = np.empty((a0, a1), dtype=np.int32) for i in range(a0): for j in range(a1): indices0[i,j] = i # version 1 (nothing complicated happening here but very slow) A1 = torch.empty(a0, a1, a2, dtype=B.dtype) for i in range(a0): for j in range(a1): A1[i,j,:] = B[indices0[i,j], indices[i,j], :] # version 2 (using advanced indexing without broadcasting) A2 = B[indices0, indices, :] # version 3 (with broadcasting) # remove repeated columns leaving indices0 as (11, 1) the same state as above indices0 = indices0[:, :1] # broadcasting implicitly repeats columns of indices0 to match indices A3 = B[indices0, indices, :] # version 4 (your method) A4 = torch.empty(a0, a1, a2, dtype=B.dtype) for i in range(a0): A4[i,:,:] = B[i,indices[i],:] # compare everything error = torch.sum(torch.abs(A1 - A2)).item() + \ torch.sum(torch.abs(A2 - A3)).item() + \ torch.sum(torch.abs(A3 - A4)).item() print('Error:', error) which prints Error: 0.0 demonstrating that all these methods are equivalent. Also, if you wanted to stay in the PyTorch framework and indices were a torch.LongTensor instead of a numpy.ndarray then you could use indices0 = torch.arange(indices.shape[0]).unsqueeze(1) A = B[indices0, indices, :]
https://stackoverflow.com/questions/60124854/
Load multiple .npy files (size > 10GB) in pytorch
I'm looking for an optimized solution to load multiple huge .npy files using a PyTorch data loader. I'm currently using the following method, which creates a new dataloader for each file in each epoch. My data loader is something like:

class GetData(torch.utils.data.Dataset):
    def __init__(self, data_path, target_path, transform=None):
        with open(data_path, 'rb') as train_pkl_file:
            data = pickle.load(train_pkl_file)
            self.data = torch.from_numpy(data).float()
        with open(target_path, 'rb') as target_pkl_file:
            targets = pickle.load(target_pkl_file)
            self.targets = torch.from_numpy(targets).float()

    def __getitem__(self, index):
        x = self.data[index]
        y = self.targets[index]
        return index, x, y

    def __len__(self):
        num_images = self.data.shape[0]
        return num_images

I have lists of npy files:

list1 = ['d1.npy', 'd2.npy', 'd3.npy']
list2 = ['s1.npy', 's2.npy', 's3.npy']

I have created a dataset which yields the filenames:

class MyDataset(torch.utils.data.Dataset):
    def __init__(self, flist1, flist2):
        self.npy_list1 = flist1
        self.npy_list2 = flist2

    def __getitem__(self, idx):
        filename1 = self.npy_list1[idx]
        filename2 = self.npy_list2[idx]
        return filename1, filename2

    def __len__(self):
        return len(self.npy_list1)

And I iterate through them as follows:

for epoch in range(500):
    print('Epoch #%s' % epoch)
    model.train()
    loss_, elbo_, recon_ = [[] for _ in range(3)]
    running_loss = 0

    # FOR EVERY SMALL FILE
    print("Training: ")

    # TRAIN HERE
    my_dataset = MyDataset(list1, list2)
    for idx, (dynamic_file, static_file) in tqdm(enumerate(my_dataset)):
        ...Do stuff ....

The above method works, but I'm looking for a more memory-efficient solution. Note: I have a huge amount of data (> 200 GB), so concatenating the numpy arrays into one file may not be the solution (due to RAM limitations). Thanks in advance
According to numpy.load, you can set the argument mmap_mode='r' to receive a memory-mapped array numpy.memmap. A memory-mapped array is kept on disk. However, it can be accessed and sliced like any ndarray. Memory mapping is especially useful for accessing small fragments of large files without reading the entire file into memory. I tried implementing a dataset that uses memory maps. First, I generated some data as follows:

import numpy as np

feature_size = 16
total_count = 0
for index in range(10):
    count = 1000 * (index + 1)
    D = np.random.rand(count, feature_size).astype(np.float32)
    S = np.random.rand(count, 1).astype(np.float32)
    np.save(f'data/d{index}.npy', D)
    np.save(f'data/s{index}.npy', S)
    total_count += count

print("Dataset size:", total_count)
print("Total bytes:", total_count * (feature_size + 1) * 4, "bytes")

The output was:

Dataset size: 55000
Total bytes: 3740000 bytes

Then, my implementation of the dataset is as follows:

import numpy as np
import torch
from bisect import bisect
import os, psutil  # used to monitor memory usage

class BigDataset(torch.utils.data.Dataset):
    def __init__(self, data_paths, target_paths):
        self.data_memmaps = [np.load(path, mmap_mode='r') for path in data_paths]
        self.target_memmaps = [np.load(path, mmap_mode='r') for path in target_paths]
        self.start_indices = [0] * len(data_paths)
        self.data_count = 0
        for index, memmap in enumerate(self.data_memmaps):
            self.start_indices[index] = self.data_count
            self.data_count += memmap.shape[0]

    def __len__(self):
        return self.data_count

    def __getitem__(self, index):
        memmap_index = bisect(self.start_indices, index) - 1
        index_in_memmap = index - self.start_indices[memmap_index]
        data = self.data_memmaps[memmap_index][index_in_memmap]
        target = self.target_memmaps[memmap_index][index_in_memmap]
        return index, torch.from_numpy(data), torch.from_numpy(target)

# Test Code
if __name__ == "__main__":
    data_paths = [f'data/d{index}.npy' for index in range(10)]
    target_paths = [f'data/s{index}.npy' for index in range(10)]

    process = psutil.Process(os.getpid())
    memory_before = process.memory_info().rss
    dataset = BigDataset(data_paths, target_paths)
    used_memory = process.memory_info().rss - memory_before
    print("Used memory:", used_memory, "bytes")

    dataset_size = len(dataset)
    print("Dataset size:", dataset_size)
    print("Samples:")
    for sample_index in [0, dataset_size//2, dataset_size-1]:
        print(dataset[sample_index])

The output was as follows:

Used memory: 299008 bytes
Dataset size: 55000
Samples:
(0, tensor([0.5240, 0.2931, 0.9039, 0.9467, 0.8710, 0.2147, 0.4928, 0.8309, 0.7344, 0.2861, 0.1557, 0.7009, 0.1624, 0.8608, 0.5378, 0.4304]), tensor([0.7725]))
(27500, tensor([0.8109, 0.3794, 0.6377, 0.4825, 0.2959, 0.6325, 0.7278, 0.6856, 0.1037, 0.3443, 0.2469, 0.4317, 0.6690, 0.4543, 0.7007, 0.5733]), tensor([0.7856]))
(54999, tensor([0.4013, 0.9990, 0.9107, 0.9897, 0.0204, 0.2776, 0.5529, 0.5752, 0.2266, 0.9352, 0.2130, 0.9542, 0.4116, 0.4959, 0.1436, 0.9840]), tensor([0.6342]))

According to the results, the memory usage is well under 10% of the total data size. I didn't try my code with very large file sizes, so I don't know how efficient it will be with > 200 GB of files. If you can try it and tell me the memory usage with and without memmaps, I would be grateful.
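To actually train from this, you would wrap the dataset in a standard DataLoader; a minimal sketch (the batch size and worker count below are just placeholders, not values from the question):

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=2)
for indices, data, targets in loader:
    pass  # forward/backward pass goes here

One caveat worth knowing: with num_workers > 0 each worker process gets its own view of the memory maps, which is fine for reading, but it is worth benchmarking whether extra workers actually help given your disk speed.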
https://stackoverflow.com/questions/60127632/
How to concurrently run multiple branches in pytorch?
I was trying to build a network with multiple branches in PyTorch. But how can I run multiple branches in parallel instead of running them one by one? Unlike TensorFlow or Keras, PyTorch uses a dynamic graph, so I can't define concurrent processing beforehand. I looked at some official PyTorch implementations of similar networks, like Inception, only to find that the branches are written to run consecutively. From inception.py:

def _forward(self, x):
    branch1x1 = self.branch1x1(x)

    branch5x5 = self.branch5x5_1(x)
    branch5x5 = self.branch5x5_2(branch5x5)

    branch3x3dbl = self.branch3x3dbl_1(x)
    branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl)
    branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl)

    branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)
    branch_pool = self.branch_pool(branch_pool)

    outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
    return outputs

The four branches run one by one: first branch1x1, then branch5x5, then branch3x3dbl and branch_pool. outputs stores their results, and they are concatenated later. Wouldn't that be a waste of performance? And how can we deal with it?
In general you don't have to care about the performance of network execution as long as you use the functions provided by PyTorch. As pointed out in the comments, all calls to the GPU are asynchronous, and a call is executed as soon as the data it depends on is available. So in your case you have multiple branches: PyTorch will schedule all operations and execute them according to their data dependencies. Since the branches don't share data with each other, they can be executed in parallel. So in your case

branch1x1 = self.branch1x1(x)
branch5x5 = self.branch5x5_1(x)
branch3x3dbl = self.branch3x3dbl_1(x)

are probably executed more or less at the same time. The same goes for all the following layers.
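If you want to make the overlap explicit rather than rely on the scheduler, CUDA streams are the mechanism PyTorch exposes for this; a hedged sketch (branch1 and branch2 are hypothetical modules, and whether this actually helps depends on whether a single branch already saturates the GPU):

import torch

s1 = torch.cuda.Stream()
s2 = torch.cuda.Stream()
torch.cuda.synchronize()        # make sure x is ready before forking streams
with torch.cuda.stream(s1):
    out1 = branch1(x)           # hypothetical branch module
with torch.cuda.stream(s2):
    out2 = branch2(x)           # hypothetical branch module
torch.cuda.synchronize()        # join before using out1/out2 together

In practice this rarely pays off for convolution branches, because one branch's kernels usually keep the GPU busy on their own.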
https://stackoverflow.com/questions/60133474/
TypeError: forward() missing 1 required positional argument: 'negative'
I want to use a deep neural network to classify hyperspectral images. But every time I run this code, it gives me the error "TypeError: forward() missing 1 required positional argument: 'negative'". The code is shown below (not complete):

import numpy as np
import scipy.io as sio
from tqdm import tqdm
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import math

REBUILD_DATA = True

To read the data:

class DATA():
    # read the samples and labels, and convert them to numpy arrays
    Pavia = sio.loadmat('G:\研究生\Matlab_code\dataset\Classification\paviaU.mat')
    PaviaGT = sio.loadmat('G:\研究生\Matlab_code\dataset\Classification\paviaU_GT.mat')
    # print(sorted(Pavia.keys()))  # returns the keys of the dict
    # print(sorted(PaviaGT.keys()))
    Sample = Pavia['data']
    Sample = np.array(Sample, dtype=np.int32)
    Label = PaviaGT['groundT']
    Label = np.array(Label, dtype=np.int32)
    # store the size of each dimension of the samples in a, b, c for later use
    [a, b, c] = Sample.shape
    # reshape the data into the same layout as in MATLAB
    SampleT = Sample.transpose(1, 0, 2)
    SampleX = SampleT.reshape(-1, 103)
    """ sio.savemat('G:\研究生\Sample.mat',{'dataX':SampleX}) """
    LabelT = Label.transpose(1, 0)
    Label = LabelT.reshape(-1, 1)
    # how to merge the samples and labels? the network input has shape [-1, band]
    """ sio.savemat('G:\研究生\Label.mat',{'LabelX':Label}) """
    totalcount = np.zeros((10, 1), dtype=np.int32)
    trainset = []
    testset = []

    # merge the samples and labels
    def integrated_data(self):
        rebuilddata = []
        for i in range(0, self.a * self.b):
            rebuilddata.append([np.array(self.SampleX[i]), np.array(self.Label[i])])
            for j in range(0, 10):
                if self.Label[i] == j:
                    self.totalcount[j] += 1
        rebuilddata = np.array(rebuilddata)
        return rebuilddata

    # and build the training and test sets
    def make_trainset_and_testset(self, rebuilddata, ratio):
        TrainIndex = []
        TestIndex = []
        # take the train/test indices of each class
        for i in range(1, np.max(self.Label) + 1):
            class_coor = np.argwhere(self.Label == i)
            index = class_coor[:, 0].tolist()
            np.random.shuffle(index)
            VAL_SIZE = int(np.floor(len(index) * ratio))
            ClassTrainIndex = index[:VAL_SIZE]
            ClassTestIndex = index[-VAL_SIZE:]
            TrainIndex += ClassTrainIndex
            TestIndex += ClassTestIndex
        # return the training and test samples
        TrainSample = rebuilddata[TrainIndex]
        TestSample = rebuilddata[TestIndex]
        return TrainIndex, TestIndex, TrainSample, TestSample

This is my DNN module:

class DNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(103, 500)
        self.fc2 = nn.Linear(500, 256)
        self.fc3 = nn.Linear(256, 9)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return F.softmax(x, dim=1)

The training and testing functions:

def train(dnn):
    BATCH_SIZE = 100
    EPOCHS = 3
    for epoch in range(EPOCHS):
        for i in tqdm(range(0, len(train_X), BATCH_SIZE)):
            batch_X = train_X[i:i+BATCH_SIZE]
            batch_y = train_y[i:i+BATCH_SIZE]
            dnn.zero_grad()
            outputs = dnn(batch_X)
            loss = loss_function(outputs, batch_y)
            loss.backward()
            optimizer.step()
        print(loss)

def test(net):
    correct = 0
    total = 0
    with torch.no_grad():
        for i in tqdm(range(len(test_X))):
            real_class = torch.argmax(test_y[i]).to(device)
            net_out = dnn(test_X[i].view(-1, 1, 50, 50).to(device))[0]
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
                correct += 1
            total += 1
    print("Accuracy:", round(correct/total, 3))

if REBUILD_DATA:
    Data = DATA()
    datay = Data.integrated_data()
    Trainindex, Testindex, TrainSet, TestSet = Data.make_trainset_and_testset(rebuilddata=datay, ratio=0.1)

train_X = torch.Tensor([i[0] for i in TrainSet])
train_y = torch.Tensor([i[1] for i in TrainSet])
train_X = train_X / 3000
test_X = torch.Tensor([i[0] for i in TestSet])
test_y = torch.Tensor([i[1] for i in TestSet])

print(train_X[0])

dnn = DNN()
optimizer = optim.SGD(dnn.parameters(), lr=0.001)
loss_function = nn.TripletMarginLoss()
train(dnn)
You are using nn.TripletMarginLoss() as your loss function. This specific loss function expects three inputs for computing the loss: anchor, positive and negative. Your code passes only two arguments.
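For a standard multi-class classification setup like this one, the usual fix is nn.CrossEntropyLoss rather than a triplet loss; a sketch of the change (assuming train_y holds class indices in [0, num_classes), which is not shown in the question):

loss_function = nn.CrossEntropyLoss()   # expects raw logits and LongTensor class indices
# in DNN.forward, return the raw logits instead of softmax probabilities:
#     return x   # CrossEntropyLoss applies log-softmax internally
batch_y = batch_y.long().squeeze()      # class indices, shape (batch,)
loss = loss_function(outputs, batch_y)

Note that nn.CrossEntropyLoss combines log-softmax and negative log-likelihood, so the F.softmax in forward() should be removed when training with it.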
https://stackoverflow.com/questions/60134907/
Visualize the output of Vgg16 model by TSNE plot?
I need to visualize the output of a VGG16 model which classifies 14 different classes. I load the trained model and replace the classifier layer with an Identity() layer, but the plot doesn't separate the classes. Here is the snippet (the number of samples here is 1000 images):

epoch = 800
PATH = 'vgg16_epoch{}.pth'.format(epoch)
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
epoch = checkpoint['epoch']

class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x

model.classifier._modules['6'] = Identity()
model.eval()

logits_list = numpy.empty((0, 4096))
targets = []
with torch.no_grad():
    for step, (t_image, target, classess, image_path) in enumerate(test_loader):
        t_image = t_image.cuda()
        target = target.cuda()
        target = target.data.cpu().numpy()
        targets.append(target)
        logits = model(t_image)
        print(logits.shape)
        logits = logits.data.cpu().numpy()
        print(logits.shape)
        logits_list = numpy.append(logits_list, logits, axis=0)
        print(logits_list.shape)

tsne = TSNE(n_components=2, verbose=1, perplexity=10, n_iter=1000)
tsne_results = tsne.fit_transform(logits_list)

target_ids = range(len(targets))
plt.scatter(tsne_results[:, 0], tsne_results[:, 1], c=target_ids, cmap=plt.cm.get_cmap("jet", 14))
plt.colorbar(ticks=range(14))
plt.legend()
plt.show()

Here is what this script produced (plot not shown): I am not sure why I see all colors in every cluster!
The VGG16 outputs over 25k features to the classifier. I believe that's too many for t-SNE. It's a good idea to include a new nn.Linear layer to reduce this number first, so that t-SNE can work better. In addition, I'd recommend two different ways to get the features from the model:

The best way to get them, regardless of the model, is by using the register_forward_hook method. You may find a notebook here with an example.

If you don't want to use hooks, I'd suggest the following. After loading your model, you may use this class to extract the features:

class FeatNet(nn.Module):
    def __init__(self, vgg):
        super(FeatNet, self).__init__()
        self.features = nn.Sequential(*list(vgg.children())[:-1])

    def forward(self, img):
        x = self.features(img)
        return torch.flatten(x, 1)  # flatten the conv maps to (batch, 25088)

Now, you just need to construct FeatNet(vgg) once and pass images to it to get the features. To include the feature reducer, as I suggested before, you need to retrain your model doing something like:

class FeatNet(nn.Module):
    def __init__(self, vgg):
        super(FeatNet, self).__init__()
        self.features = nn.Sequential(*list(vgg.children())[:-1])
        self.feat_reducer = nn.Sequential(
            nn.Linear(25088, 1024),
            nn.BatchNorm1d(1024),
            nn.ReLU()
        )
        self.classifier = nn.Linear(1024, 14)

    def forward(self, img):
        x = torch.flatten(self.features(img), 1)  # (batch, 25088)
        x_r = self.feat_reducer(x)
        return self.classifier(x_r)

Then, you can run your model returning x_r, that is, the reduced features. As I told you, 25k features are too many for t-SNE. Another way to reduce this number is by using PCA instead of nn.Linear. In this case, you send the 25k features to PCA and then train t-SNE on the PCA's output. I prefer using nn.Linear, but you need to test to see which gives you the better result.
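If you go the PCA route, a minimal sketch with scikit-learn (the 50 components below is a common default, not a value from the answer):

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

reduced = PCA(n_components=50).fit_transform(logits_list)                # (n_samples, 50)
tsne_results = TSNE(n_components=2, perplexity=30).fit_transform(reduced)

Also double-check the coloring in your plot: c should be the per-sample class labels (e.g. numpy.concatenate(targets)), not range(len(targets)), which is likely part of why every cluster shows all colors.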
https://stackoverflow.com/questions/60138486/
how to check whether a certain number is in the Pytorch tensor?
For a PyTorch tensor A:

A = torch.tensor([[1, 0, 0],
                  [0, 0, 0]])

is there a way I can check whether the number 1 is an element of the tensor A? That is, is there a PyTorch function that returns True if 1 is an element of A, and False if it is not? Thank you,
torch.Tensor implements __contains__. So, you can just use: 1 in A This returns True if the element 1 is in A, and False otherwise.
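An equivalent elementwise formulation, handy if you want the result as a tensor or need a per-element mask anyway:

(A == 1).any()          # tensor(True) if 1 occurs anywhere in A
(A == 1).any().item()   # plain Python bool

Both `1 in A` and this version scan the whole tensor, so they should behave similarly in terms of cost.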
https://stackoverflow.com/questions/60153144/
How much is the dimension of some bidirectional LSTM layers?
I read a paper about machine translation, and it uses a projection layer. Its encoder has 6 bidirectional LSTM layers. If the input embedding dimension is 512, what will the dimension of the encoder output be? 512 * 2**5? The paper's link: https://www.aclweb.org/anthology/P18-1008.pdf
Not quite. Unfortunately, Figure 1 in the mentioned paper is a bit misleading. It is not that the six encoding layers are in parallel, as might be understood from the figure, but rather that these layers are successive, meaning that the hidden state/output from the previous layer is used in the subsequent layer as an input. This, and the fact that the input (embedding) dimension is NOT the output dimension of an LSTM layer (in fact, it is 2 * hidden_size), changes your output dimension to exactly that: 2 * hidden_size, before it is put into the final projection layer, which again changes the dimension depending on your specifications. It is not quite clear to me what the "add" annotation in the layer description does, but if you look at a reference implementation it seems to be irrelevant to the answer. Specifically, observe how the encoding function is basically

def encode(...):
    encode_inputs = self.embed(...)
    for l in range(num_layers):
        prev_input = encode_inputs
        encode_inputs = self.nth_layer(...)
        # ...

Obviously, there is a bit more happening here, but this illustrates the basic functional block of the network.
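You can verify the 2 * hidden_size claim directly in PyTorch; a small sketch (the layer sizes here are illustrative, not taken from the paper):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=512, hidden_size=512, num_layers=6, bidirectional=True)
x = torch.randn(10, 4, 512)  # (seq_len, batch, embedding_dim)
out, _ = lstm(x)
print(out.shape)  # torch.Size([10, 4, 1024]) -> 2 * hidden_size

A projection layer is then just an nn.Linear(1024, d) mapping this back down to whatever dimension d the decoder expects.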
https://stackoverflow.com/questions/60164056/
How to debug if weights keep increasing (PyTorch program)
I'm having some trouble while practicing with a PyTorch program. I have a function like y = m1*x1 + m2*x2 + c (just 2 weights to learn here). The expected weights are 16 and -14, and the bias should be 36. But in every epoch the learned weights grow very large. Can anyone help me debug these 20 lines of code and understand what is going wrong here?

import torch

x = torch.randint(size=(1, 2), high=10)
w = torch.Tensor([16, -14])
b = 36

# compute ground truth
y = w * x + b

# find the weights by training
epoch = 20
learning_rate = 30

# initialize randomly
w1 = torch.rand(size=(1, 2), requires_grad=True)
b1 = torch.ones(size=[1], requires_grad=True)

for i in range(epoch):
    y1 = w1 * x + b1

    # loss function: sum of squared errors
    loss = torch.sum((y1 - y)**2)

    # compute gradients
    loss.backward()

    with torch.no_grad():
        # update parameters
        w1 -= (learning_rate * w1.grad)
        b1 -= (learning_rate * b1.grad)
        w1.grad.zero_()
        b1.grad.zero_()

print("B ", b1)
print("W ", w1)

Thanks, Ganesh
You have a very large learning rate. There is an illustration in Jeremy Jordan's blog post on learning rates that shows exactly what is going on in your case: when the step size is too large, each update overshoots the minimum, so the loss (and with it the weights) diverges instead of converging.
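A quick way to see this on the code from the question: keep everything the same and only shrink the step size (the exact value is a guess, anything small enough to undershoot works, and you may need more epochs to converge fully):

learning_rate = 0.001  # instead of 30
# with enough epochs, w1 then drifts toward (16, -14) and b1 toward 36

For a 1-D quadratic loss f(w) = a * (w - w_star)**2, the gradient step w -= lr * 2 * a * (w - w_star) diverges whenever lr > 1/a, which is exactly the regime that lr = 30 puts you in here.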
https://stackoverflow.com/questions/60164779/
AttributeError: 'NoneType' object has no attribute 'zero_'
The grad attribute becomes None if I expand the expression, and I am not sure why. Can somebody give me a clue? If I expand it, w1.grad.zero_() throws the error "AttributeError: 'NoneType' object has no attribute 'zero_'". Thanks, Ganesh

import torch

x = torch.randint(size=(1, 2), high=10)
w = torch.Tensor([16, -14])
b = 36
y = w * x + b

epoch = 20
learning_rate = 0.01
w1 = torch.rand(size=(1, 2), requires_grad=True)
b1 = torch.ones(size=[1], requires_grad=True)

for i in range(epoch):
    y1 = w1 * x + b1
    loss = torch.sum((y1 - y)**2)
    loss.backward()

    with torch.no_grad():
        # w1 = w1 - learning_rate * w1.grad  # not working: w1.grad becomes None, not sure how ;(
        # b1 = b1 - learning_rate * b1.grad
        w1 -= (learning_rate * w1.grad)  # working code
        b1 -= (learning_rate * b1.grad)
        w1.grad.zero_()
        b1.grad.zero_()

print("B ", b1)
print("W ", w1)
The thing is that in your working code you are modifying an existing variable which has a grad attribute, while in the non-working case you are creating a new variable. As the new w1/b1 is a freshly created tensor, it has no gradient attribute, because you didn't call backward() on it but on the "original" variable. First, let's check whether that's really the case:

print(id(w1))  # some id returned here
w1 = w1 - learning_rate * w1.grad
# in the case below, w1's address doesn't change
# w1 -= learning_rate * w1.grad
print(id(w1))  # another id here

Now, you could copy it in-place and not break it, but there is no point in doing so and your working case is much clearer; still, for posterity's sake:

w1.copy_(w1 - learning_rate * w1.grad)
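If you really prefer the reassignment style, one hedged sketch that keeps it working is to detach the new tensor and re-mark it as a leaf that requires grad (this also resets .grad to None, so drop the zero_() calls):

with torch.no_grad():
    w1 = (w1 - learning_rate * w1.grad).detach().requires_grad_()
    b1 = (b1 - learning_rate * b1.grad).detach().requires_grad_()

The in-place version from the question is still the more idiomatic choice, though.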
https://stackoverflow.com/questions/60166866/
Albumentations RandomCrop with mask different size as image
Is it possible to RandomCrop an image of size 256x256 together with a mask of size 100x100? Or to do the same with RandomGridShuffle and RandomSizedCrop? https://albumentations.readthedocs.io/en/latest/api/augmentations.html Thank you
This functionality is not supported. The application of RandomCrop or RandomGridShuffle can lead to very strange corner cases. It is just easier to resize the mask and image to the same size and resize it back when needed. Two extra lines of code, but you will not get unexpected bugs.
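A sketch of what those two extra lines look like in practice, assuming image and mask are numpy arrays loaded elsewhere (nearest-neighbor interpolation keeps mask labels from being blended; the 224 crop size is a placeholder, as the question doesn't specify one):

import cv2
import albumentations as A

mask_up = cv2.resize(mask, (256, 256), interpolation=cv2.INTER_NEAREST)  # match image size
aug = A.Compose([A.RandomCrop(height=224, width=224)])
out = aug(image=image, mask=mask_up)
image_c, mask_c = out['image'], out['mask']
mask_c = cv2.resize(mask_c, (100, 100), interpolation=cv2.INTER_NEAREST)  # back to original size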
https://stackoverflow.com/questions/60187803/