Pytorch RuntimeError: size mismatch
I get the following error when I try to run this code for training a GAN for prediction: RuntimeError: size mismatch, m1: [128 x 1], m2: [1392 x 2784] at C:\w\1\s\tmp_conda_3.7_021303\conda\conda-bld\pytorch_1565316900252\work\aten\src\TH/generic/THTensorMath.cpp:752 Please comment if you see any other mistakes in the code or if you have some tips in general. # Sample indices def sample_idx(m, n): A = np.random.permutation(m) idx = A[:n] return idx class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() self.fc1 = nn.Linear(in_features = dim, out_features = dim*2) self.fc2 = nn.Linear(in_features = dim*2, out_features = dim) self.fc3 = nn.Linear(in_features = dim, out_features = int(dim/2)) self.fc4 = nn.Linear(in_features = int(dim/2), out_features = 1) self.relu = nn.LeakyReLU(0.2) self.sigmoid = nn.Sigmoid() self.init_weight() def init_weight(self): layers = [self.fc1, self.fc2, self.fc3] [nn.init.xavier_normal_(layer.weight) for layer in layers] def forward(self, x): x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.relu(self.fc3(x)) x = self.sigmoid(self.fc4(x)) return x class Generator(nn.Module): def __init__(self): super(Generator, self).__init__() self.fc1 = nn.Linear(in_features = dim, out_features = dim*2) self.fc2 = nn.Linear(in_features = dim*2, out_features = dim) self.fc3 = nn.Linear(in_features = dim, out_features = int(dim/2)) self.fc4 = nn.Linear(in_features = int(dim/2), out_features = 1) self.relu = nn.LeakyReLU(0.2) self.init_weight() def init_weight(self): layers = [self.fc1, self.fc2, self.fc3] [nn.init.xavier_normal_(layer.weight) for layer in layers] def forward(self, x): x = self.relu(self.fc1(x)) x = self.relu(self.fc2(x)) x = self.relu(self.fc3(x)) x = self.fc4(x) return x # 1. Mini batch size mb_size = 128 # 4. Loss Hyperparameters alpha = 10 # 5. Input Dim (Fixed) dim = data.shape[1] - 1 # 6. Train Test Split train_n = int(199476 * 0.8) test_n = 199476 - int(199476 * 0.8) X_train = data[:train_n,:-1] y_train = data[:train_n, -1] X_test = data[train_n:,:-1] y_test = data[train_n:, -1] # Nets D = Discriminator() G = Generator() # Optimizers optimD = torch.optim.Adam(D.parameters(), lr = 0.001) optimG = torch.optim.Adam(G.parameters(), lr = 0.001) # Loss mse_loss = nn.MSELoss(reduction = 'elementwise_mean') i = 1 for it in tqdm(range(10000)): mb_idx = sample_idx(train_n, mb_size) X_mb = X_train[mb_idx,:] X_mb = torch.tensor(X_mb).float() y_mb = y_train[mb_idx] y_mb = torch.tensor(y_mb).float() # Train D G_sample = G(X_mb) D_loss1 = ((D(X_mb) + 1e-8).log()).mean() + ((1 - D(G_sample) + 1e-8).log()).mean() D_loss = - D_loss1 D_loss.backward() optimD.step() optimD.zero_grad() # Train G G_sample = G(X_mb) D_prob.detach() G_loss1 = ((1 - D(G_sample) + 1e-8).log()).mean() G_mse_loss = mse_loss(G_sample, y_mb) G_loss = G_loss1 + alpha * G_mse_loss G_loss.backward() optimG.step() optimG.zero_grad() if it % 100 == 0: print("Iter: {}".format(it)) print("D_loss: {:.4}".format(D_loss)) print("Train loss: {:.4}".format(G_mse_loss)) print()
As you mentioned, you are getting the error in the following line: D_loss1 = ((D(X_mb) + 1e-8).log()).mean() + ((1 - D(G_sample) + 1e-8).log()).mean() I suspect the problematic part is D(G_sample). Why? Because G_sample = G(X_mb) is of shape [batch_size, 1], which cannot be given as input to the Discriminator D, because D takes a tensor of shape [batch_size, dim] as input. That's why you are getting the error: RuntimeError: size mismatch, m1: [128 x 1], m2: [1392 x 2784] As you can see, you have an input of shape [128 x 1], where batch_size = 128, but the Discriminator D is expecting an input of shape [batch_size x 1392]. Here, m2 is the shape of the weight matrix of the fc1 layer in the Discriminator.
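A minimal sketch reproducing the mismatch, assuming dim = 1392 (read off the error message, where fc1's weight is [1392 x 2784]):

import torch
import torch.nn as nn

dim = 1392                     # assumed from the error: m2 is [1392 x 2784]
fc1 = nn.Linear(dim, dim * 2)  # the Discriminator's first layer

good = torch.randn(128, dim)   # [batch_size x dim] works
print(fc1(good).shape)         # torch.Size([128, 2784])

bad = torch.randn(128, 1)      # the shape of G(X_mb) in the question
try:
    fc1(bad)
except RuntimeError as e:
    print(e)                   # size mismatch (exact wording varies by PyTorch version)

So whatever is fed to D must have dim features, for example by concatenating the generated value back onto the conditioning features, depending on what the Discriminator is meant to judge.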
https://stackoverflow.com/questions/58311978/
Any numpy/torch style to set value given an index ndarray and a flag ndarray?
I'm working on PyTorch and currently I met a problem for which I've no idea how to solve it in a torch/numpy style. For example, suppose I have three PyTorch tensors import torch import numpy as np indices = torch.from_numpy(np.array([[2, 1, 3, 0], [1, 0, 3, 2]])) flags = torch.from_numpy(np.array([[False, False, False, True], [False, False, True, True]])) tensor = torch.from_numpy(np.array([[2.8, 0.5, 1.2, 0.9], [3.1, 2.8, 1.3, 2.5]])) Here flags is a boolean flag tensor to show which elements in indices should be extracted. Given the extracted indices, I want to set the corresponding elements in tensor to an indicated const (say 1e-30). Based on the example shown above, I want >>> sub_indices = indices.op1(flags) >>> sub_indices tensor([[0], [3, 2]]) >>> tensor.op2(sub_indices, 1e-30) >>> tensor tensor([[1e-30, 0.5, 1.2, 0.9], [3.1, 2.8, 1e-30, 1e-30]]) Could anyone help to give a solution? I'm using list comprehension but I think this way is a little bit ugly. I tried indices[flags] but it only returns a 1d-array [0, 3, 2] so applying this would change all rows on the same columns 0, 2, 3 Some additional remarks: The number of "True" values for each row in flags cannot be determined Each row of indices is assured to be a permutation of sequence 0 ... N - 1 Below is a numpy version of the example code, for the convenience of copy-pasting. I doubt whether this could be done in a pure numpy way import numpy as np indices = np.array([[2, 1, 3, 0], [1, 0, 3, 2]]) flags = np.array([[False, False, False, True], [False, False, True, True]]) tensor = np.array([[2.8, 0.5, 1.2, 0.9], [3.1, 2.8, 1.3, 2.5]])
You may sort flags according to the indices to create a mask, then use the mask as a mux. Here is an example code (note that subtracting a boolean mask, as in 1 - mask, is not allowed in recent NumPy, so np.where is used instead):

indices = np.array([[2, 1, 3, 0], [1, 0, 3, 2]])
flags = np.array([[False, False, False, True], [False, False, True, True]])
tensor = np.array([[2.8, 0.5, 1.2, 0.9], [3.1, 2.8, 1.3, 2.5]])

indices_sorted = indices.argsort(axis=1)
mask = np.take_along_axis(flags, indices_sorted, axis=1)
result = np.where(mask, 1e-30, tensor)

I'm not quite familiar with PyTorch, but I guess it is not a good idea to gather a ragged tensor. Though, even in the worst case, you can convert to/from numpy arrays.
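The same trick carries over to PyTorch almost verbatim, with gather in place of take_along_axis; a sketch:

import torch

indices = torch.tensor([[2, 1, 3, 0], [1, 0, 3, 2]])
flags = torch.tensor([[False, False, False, True], [False, False, True, True]])
tensor = torch.tensor([[2.8, 0.5, 1.2, 0.9], [3.1, 2.8, 1.3, 2.5]])

# Sort the flags into value order: mask[i, j] is True iff value j appears
# in row i of `indices` at a flagged position
order = indices.argsort(dim=1)
mask = torch.gather(flags, 1, order)

result = tensor.masked_fill(mask, 1e-30)
print(result)
# tensor([[1.0000e-30, 5.0000e-01, 1.2000e+00, 9.0000e-01],
#         [3.1000e+00, 2.8000e+00, 1.0000e-30, 1.0000e-30]])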
https://stackoverflow.com/questions/58319866/
NN regression loss value not decreasing
I'm training a NN with Pytorch to predict the expected price for the Boston dataset. The network looks like this: from sklearn.datasets import load_boston from torch.utils.data.dataset import Dataset from torch.utils.data import DataLoader import torch.nn.functional as F import torch import torch.nn as nn import torch.optim as optim class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(13, 128) self.fc2 = nn.Linear(128, 64) self.fc3 = nn.Linear(64, 32) self.fc4 = nn.Linear(32, 16) self.fc5 = nn.Linear(16,1) def forward(self, x): x = self.fc1(x) x = self.fc2(x) x = F.relu(x) x = self.fc3(x) x = F.relu(x) x = self.fc4(x) x = F.relu(x) return self.fc5(x) And the data loader: class BostonData(Dataset): __xs = [] __ys = [] def __init__(self, train = True): df = load_boston() index = int(len(df["data"]) * 0.7) if train: self.__xs = df["data"][0:index] self.__ys = df["target"][0:index] else: self.__xs = df["data"][index:] self.__ys = df["target"][index:] def __getitem__(self, index): return self.__xs[index], self.__ys[index] def __len__(self): return len(self.__xs) In my first attempt I didn't add the ReLU units, but after a little bit of research I saw that adding them is a common practice, but It didn't work out for me. Here is the training code: dset_train = BostonData(train = True) dset_test = BostonData(train = False) train_loader = DataLoader(dset_train, batch_size=30, shuffle=True) test_loader = DataLoader(dset_train, batch_size=30, shuffle=True) optimizer = optim.Adam(net.parameters(), lr = 0.001) criterion = torch.nn.MSELoss() EPOCHS = 10000 lloss = [] for epoch in range(EPOCHS): for trainbatch in train_loader: X,y = trainbatch net.zero_grad() output = net(X.float()) loss = criterion(output, y) loss.backward() optimizer.step() lloss.append(loss) print(loss) After 10k epochs, the loss graph looks like the following where I don't see any clear decrease. I don't know if I'm messing up with the torch.nn.MSELoss(), the optimizer or maybe with the net topology, so any help will be appreciated. Edit: Changing the learning rate and normalizing the data didn't work for me. I added the line self.__xs = (self.__xs - self.__xs.mean()) / self.__xs.std() and a change to lr = 0.01. The loss plot is very similar to the first one. Same plot for lr = 0.01 and normalizing after 1000 epochs:
You are appending to lloss once per epoch, and that is correct, but you are appending loss (which reflects only the last batch), where you should append avg_train_loss. Try:

for epoch in range(EPOCHS):
    avg_train_loss = 0
    for trainbatch in train_loader:
        X, y = trainbatch
        net.zero_grad()
        output = net(X.float())
        loss = criterion(output, y)
        loss.backward()
        optimizer.step()
        avg_train_loss += loss.item() / len(train_loader)
    lloss.append(avg_train_loss)
https://stackoverflow.com/questions/58324880/
Convert string to byte for pytorch loader
The method of downloading a pytorch model path is not in my control and I am trying to figure out a way to convert downloaded string data to byte data. The code below downloads my saved model from Dropbox and uses bytes with utf-8 encoding to encode the string. The problem is when I use torch.load with BytesIO I get a UnpicklingError with invalid load key, '<'. data = bytes(self.Download("https://www.dropbox.com/s/exampleurl/checkpoint.pth?dl=1"), 'utf-8') self.agent.local.load_state_dict(torch.load(BytesIO(data ), map_location=lambda storage, loc: storage)) The code below worked perfectly until requests was disabled and I am now trying to use the method above. dropbox_url = "https://www.dropbox.com/s/exampleurl/checkpoint.pth?dl=1" data = requests.get(dropbox_url ) self.agent.local.load_state_dict(torch.load(BytesIO(data.content), map_location=lambda storage, loc: storage)) I just need to figure out a way to convert the string to bytes data the correct way.
I had to convert the byte data to base64 and save the file in that format. Once I uploaded it to Dropbox and downloaded it using the built-in method, I converted the base64 file back to bytes and it worked!

import base64
from io import BytesIO

with open("checkpoint.pth", "rb") as f:
    byte = f.read()  # read the whole checkpoint file

# Base64-encode the bytes
data_e = base64.b64encode(byte)
filename = 'base64_checkpoint.pth'
with open(filename, "wb") as output:
    output.write(data_e)

# Save file to Dropbox

# Download file on server
b64_str = self.Download('url')  # base64 text

# Decode the base64 string back to the original checkpoint bytes
decoded = base64.b64decode(b64_str)
torch.load(BytesIO(decoded))
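If an HTTP client that returns raw bytes is available (the root problem is that the custom Download helper returns text), the base64 round-trip can be skipped entirely. A minimal sketch, assuming urllib is usable in the environment and reusing the placeholder URL from the question:

import torch
from io import BytesIO
from urllib.request import urlopen

url = "https://www.dropbox.com/s/exampleurl/checkpoint.pth?dl=1"  # placeholder URL
raw = urlopen(url).read()  # raw bytes, no string decoding involved
state_dict = torch.load(BytesIO(raw), map_location="cpu")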
https://stackoverflow.com/questions/58333344/
Unable to download saved model from online resource, pickle error
I am unable to download and use a model I saved earlier from a online-repository. Here's the code: model = Model().double() # Model is defined in another class state_dict = torch.hub.load_state_dict_from_url(r'https://filebin.net/j2977ux7kts41aft/checkpoint_best.pt?t=wjbujfoo') model.load_state_dict(state_dict) model.eval() Which gives me the following error: Traceback (most recent call last): File "/path/file.py", line 47, in <module> state_dict = torch.hub.load_state_dict_from_url(r'https://filebin.net/j2977ux7kts41aft/checkpoint_best.pt?t=wjbujfoo') File "anaconda3/envs/torch_env/lib/python3.6/site-packages/torch/hub.py", line 466, in load_state_dict_from_url return torch.load(cached_file, map_location=map_location) File "/anaconda3/envs/torch_env/lib/python3.6/site-packages/torch/serialization.py", line 386, in load return _load(f, map_location, pickle_module, **pickle_load_args) File "anaconda3/envs/torch_env/lib/python3.6/site-packages/torch/serialization.py", line 563, in _load magic_number = pickle_module.load(f, **pickle_load_args) _pickle.UnpicklingError: invalid load key, '\x0a'. The model resides in: https://filebin.net/j2977ux7kts41aft/checkpoint_best.pt?t=wjbujfoo Note that I can perfectly download it manually, and then use torch.load(path) to load it without errors, but I need to do it from code! Could it be that the serializing when downloading from url somehow messes up the pickle encoding? Edit: I don't have to use filebin, any online-storage that supports what I try to do will suffice.
The problem was indeed within the environment configuration. I created the model with PyTorch 1.0.2 and then updated to 1.2.0 in order to use torch.hub. This gave me the pickle error. After training a new model in 1.2.0, the error is now gone. Hope this helps someone in the future :)
https://stackoverflow.com/questions/58339835/
How to transform network's output images to colored pictures with target is PIL's P mode?
The original values in the target images are 20 and 16 (PIL P mode) at training time, so I mapped 20 to 1 and 16 to 2 in order to train the segmentation task. But when I generate the output images, the pictures aren't colored, although I used this code:

pred = pred.reshape([512, 512]).astype('uint8')
(x, y) = pred.shape
for xx in range(x):
    for yy in range(y):
        if pred[xx, yy] == 2:
            pred[xx, yy] = 16
        elif pred[xx, yy] == 1:
            pred[xx, yy] = 20
pp = Image.fromarray(pred).convert('P')
pp.save(r'E:\python_workspace\0711\run\pascal\{}.png'.format(i))

But the output image (shown in the original post) is not colored. I have checked the values with PIL.open and converted the image to numpy: the relevant pixels are indeed converted to 16 and 20, and the mode is P too. How can I deal with this problem?
You appear to have managed to change all pixels with index 20 into index 1 and all pixels with index 16 into 2. However, you then need to copy the palette entry 20 to palette entry 1 and palette entry 16 to palette entry 2 in order to make the colours remain the same. So, you want: import numpy as np from PIL import Image # Load image im = Image.open('R0T9R.png') # Get palette and make into Numpy array of 256 entries of 3 RGB colours palette = np.array(im.getpalette(),dtype=np.uint8).reshape((256,3)) # Set palette entry 1 the same as entry 20, and 2 the same as 16 palette[1] = palette[20] palette[2] = palette[16] # Change pixels too - this replaces your slow "for" loops npim = np.array(im) npim[npim==16] = 2 npim[npim==20] = 1 # Make Numpy array back into image res = Image.fromarray(npim) # Apply our modified palette and save res.putpalette(palette.ravel().tolist()) res.save('result.png')
https://stackoverflow.com/questions/58342439/
Pytorch .to('cuda') or .cuda() doesn't work and just get stuck
I am trying to do the PyTorch tutorial. When I try to set the device to cuda, it does not work and my code gets stuck. For specific information, I am using a conda environment with python 3.7.3, pytorch 1.3.0, cuda 10.2 (NVIDIA RTX2080TI).

>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.device_count()
1
>>> torch.cuda.current_device()
0
>>> device = torch.device('cuda:0')
>>> device
device(type='cuda', index=0)
>>> aa = torch.randn(5)
>>> aa
tensor([-2.2084, -0.2700, 0.0921, -1.7678, 0.7642])
>>> aa.to(device)

nothing happens... Can anybody please help me overcome this problem?
This has happened with the Pytorch 1.3.0 release (the release was this week). I too face this bug. Basically, when I call .to(device), it just hangs and does nothing. If you would like to fix this temporarily, you can downgrade to PyTorch 1.2.0. To do this, I ran: conda install pytorch=1.2.0 torchvision cudatoolkit=10.2 -c pytorch I would have just commented but I do not have enough reputation to do that.
https://stackoverflow.com/questions/58344480/
Why is stft(istft(x)) ≠ x?
Why is stft(istft(x)) ≠ x? Using PyTorch, I have computed the short-time Fourier transform of the inverse short-time Fourier transform of a tensor. I have done so as shown below given a tensor x. For x, the real and imaginary part are equal, or the imaginary part is set to zero -- both produces the same problem. torch.stft(torchaudio.functional.istft(x, n_fft), n_fft) As shown in the image, only a single one of the stripes in the tensor remains after applying stft(istft(x)) -- all other stripes vanish. If stft(istft(x)) (bottom) was equal to x (top), both images would look similar. Why are they so different? It seems like stft(istft(x)) can pick up only certain frequencies of x. x (top) and stft of istft of x (bottom) I have also tried the same with scipy.signal.istft and scipy.signal.stft which causes the same issue. Moreover, I have tried it with a wide range of tensors x, e.g., different randomized distributions, images, and other stripes. Also, I have tried a variety of hyper-parameters for stft/istft. Only for x generated by a short-time Fourier transform from a sound wave, it works.
A short-time Fourier transform produces more data than there is in the original signal. Where a signal has N real samples, its STFT might have 4N complex samples, 8 times as much data. It follows that the ISTFT operation must discard 7/8 of the data you provide it. Most of the data in an STFT is redundant, and if you just make up values for all of the data, it is unlikely to correspond to a real signal. In that case, an implementation of ISTFT will probably use a least-squares fit or another method of producing a signal with an STFT that matches your data as closely as possible, but it won't always be close.
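A small scipy demonstration of the asymmetry (the same behaviour the question observed with scipy.signal.stft/istft):

import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)

# istft(stft(x)) recovers a real signal (up to numerical error)...
x = rng.standard_normal(4096)
_, _, Z = stft(x, nperseg=256)
_, x_rec = istft(Z, nperseg=256)
print(np.allclose(x, x_rec[:len(x)]))   # True

# ...but stft(istft(Z')) does not recover an arbitrary "spectrogram",
# because most complex arrays are not the STFT of any real signal
Z_fake = rng.standard_normal(Z.shape) + 1j * rng.standard_normal(Z.shape)
_, y = istft(Z_fake, nperseg=256)
_, _, Z_round = stft(y, nperseg=256)
print(np.allclose(Z_fake, Z_round))     # False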
https://stackoverflow.com/questions/58348493/
How to read pictures from a big folder and split it into train, validation and test sets?
I am working on sign language gesture classifier using pytorch, I have pictures resembling each letter residing in a folder titled with that specific letter. E.g. folder "A" has "1_A_1.jpg", "1_A_2.jpg", "21_A_3.jpg".. etc. I am trying to build a function that: Iterates through the different folders Splits the data into training, validation, and test sets Labels those pictures with their respective folder name (i.e. letter label) Returns 3 created folders that are train, test and validation All the online code shows examples of splitting data coming from torchvision data sets (built in data sets), nothing from scratch. I found the following on stackoverflow: import os import numpy as np import argparse def get_files_from_folder(path): files = os.listdir(path) return np.asarray(files) def main(path_to_data, path_to_test_data, train_ratio): # get dirs _, dirs, _ = next(os.walk(path_to_data)) # calculates how many train data per class data_counter_per_class = np.zeros((len(dirs))) for i in range(len(dirs)): path = os.path.join(path_to_data, dirs[i]) files = get_files_from_folder(path) data_counter_per_class[i] = len(files) test_counter = np.round(data_counter_per_class * (1 - train_ratio)) # transfers files for i in range(len(dirs)): path_to_original = os.path.join(path_to_data, dirs[i]) path_to_save = os.path.join(path_to_test_data, dirs[i]) #creates dir if not os.path.exists(path_to_save): os.makedirs(path_to_save) files = get_files_from_folder(path_to_original) # moves data for j in range(int(test_counter[i])): dst = os.path.join(path_to_save, files[j]) src = os.path.join(path_to_original, files[j]) shutil.move(src, dst) and when I tried doing the following: path_to_data= r'path\A' path_to_test_data=r"path\test" train_ratio=0.8 main(path_to_data,path_to_test_data,train_ratio) Nothing really happened.. If I can get this working for train and test, I can easily extend it for validation.
Give this a go:

from pathlib import Path

def main(data_path, out_path, train_ratio):
    #1
    dir_paths = [child for child in Path(data_path).iterdir() if child.is_dir()]
    for i, dir_path in enumerate(dir_paths):
        #2
        files = list(dir_path.iterdir())
        test_len = int(len(files) * (1 - train_ratio))  # number of files to move out (the test share)
        #3
        out_dir = Path(out_path).joinpath(dir_path.name)
        if not out_dir.exists():
            out_dir.mkdir(parents=True)
        #4
        for file_ in files[:test_len]:
            file_.replace(out_dir.joinpath(file_.name))

if __name__ == '__main__':
    main('data', 'test', 0.8)
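Extending this to a validation split can be as simple as running the same move logic twice with adjusted fractions; a sketch, assuming the same folder-per-class layout:

from pathlib import Path

def split(data_path, out_path, fraction):
    """Move `fraction` of the files of each class folder under data_path
    into a mirrored folder structure under out_path."""
    for dir_path in (p for p in Path(data_path).iterdir() if p.is_dir()):
        files = sorted(dir_path.iterdir())
        n = int(len(files) * fraction)
        out_dir = Path(out_path) / dir_path.name
        out_dir.mkdir(parents=True, exist_ok=True)
        for f in files[:n]:
            f.replace(out_dir / f.name)

# 10% test, then 10/90 of the remainder as validation -> 80/10/10 overall
split('data', 'test', 0.1)
split('data', 'val', 0.1 / 0.9)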
https://stackoverflow.com/questions/58349732/
Deploy PyTorch model to GCP resulting in memory limit exceeded
I have trained a language model using transformer-lm which uses PyTorch. I would like to deploy the resulting model to the Google Cloud Platform as a Cloud Function. Cloud Functions are limited to 2 GB of memory. The problem is that loading the model leads to an error as too much memory is used (memory limit exceeded). The model.pt file is 1.32 GB, and I use torch.load(model_path / 'model.pt', map_location='cpu') to load the model. Is there a way to i) compress the model? ii) not load the full model at once? or any other possibility to make it run on GCP?
Cloud Functions run their own server instances when triggered. If a called function is not used after some time, that instance is terminated. However, if you call it again while the instance is still running, the same instance along with the elements in your execution environment will be used again. This can cause your function crash. In order to avoid that, you might want to implement a Garbage Collector. In Python, you can use the gc module for that. In particular, you can try the function gc.collect() to clear the memory.
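A minimal sketch of that pattern for a Python Cloud Function; predict is a hypothetical inference helper, and caching the model in a module-level variable keeps it loaded across calls on a warm instance:

import gc
import torch

model = None  # module-level: survives between invocations on a warm instance

def handler(request):
    global model
    if model is None:  # load once per instance, not once per request
        model = torch.load('model.pt', map_location='cpu')
        model.eval()
    result = predict(model, request)  # hypothetical inference helper
    gc.collect()  # release per-request garbage before the next call
    return result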
https://stackoverflow.com/questions/58351544/
Convert pytorch tensor to numpy, and reshape
I have a pytorch tensor [100, 1, 32, 32] corresponding to batch size of 100 images, 1 channel, height 32 and width 32. I want to reshape this tensor to have dimension [32*10, 32*10], such that the images are represented as a 10x10 grid, with the first 10 images on row 1, and so on. How to achieve this?
Update More efficient and shorter version. To avoid using for-loop, we can permute a first. import torch a = torch.arange(9*2*2).view(9,1,2,2) b = a.permute([0,1,3,2]) torch.cat(torch.split(b, 3),-1).view(6,6).t() # tensor([[ 0, 1, 4, 5, 8, 9], # [ 2, 3, 6, 7, 10, 11], # [12, 13, 16, 17, 20, 21], # [14, 15, 18, 19, 22, 23], # [24, 25, 28, 29, 32, 33], # [26, 27, 30, 31, 34, 35]]) Original Answer You can use torch.split and torch.cat to implement it. import torch a = torch.arange(9*2*2).view(9,1,2,2) Assuming we have a tensor, which is a mini version of your original tensor. And it looks like, tensor([[[[ 0, 1], [ 2, 3]]], [[[ 4, 5], [ 6, 7]]], [[[ 8, 9], [10, 11]]], [[[12, 13], [14, 15]]], [[[16, 17], [18, 19]]], [[[20, 21], [22, 23]]], [[[24, 25], [26, 27]]], [[[28, 29], [30, 31]]], [[[32, 33], [34, 35]]]]) Each 2x2 sub-matrix can be seen as one image. What you want to do is stacking the first three images to one row, next three images to the second row, and last three images to the third row. The "row" has actually two dim due to the 2x2 sub-matrix. three_parts = torch.split(a,3) torch.cat(torch.split(three_parts[0],1), dim=-1) #tensor([[[[ 0, 1, 4, 5, 8, 9], # [ 2, 3, 6, 7, 10, 11]]]]) Here we only take the first part. torch.cat([torch.cat(torch.split(three_parts[i],1),-1) for i in range(3)],0).view(6,6) # tensor([[ 0, 1, 4, 5, 8, 9], # [ 2, 3, 6, 7, 10, 11], # [12, 13, 16, 17, 20, 21], # [14, 15, 18, 19, 22, 23], # [24, 25, 28, 29, 32, 33], # [26, 27, 30, 31, 34, 35]])
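For the exact [100, 1, 32, 32] -> [320, 320] case in the question, a shorter view/permute sketch also works (torchvision.utils.make_grid does something similar but keeps a channel dimension):

import torch

x = torch.arange(100 * 32 * 32).view(100, 1, 32, 32).float()

grid = (x.squeeze(1)            # [100, 32, 32]
         .view(10, 10, 32, 32)  # [grid rows, grid cols, H, W]
         .permute(0, 2, 1, 3)   # [grid rows, H, grid cols, W]
         .reshape(320, 320))    # first 10 images land on the first tile row
print(grid.shape)               # torch.Size([320, 320])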
https://stackoverflow.com/questions/58356756/
How do I merge 2D Convolutions in PyTorch?
From linear algebra we know that linear operators are associative. In the deep learning world, this concept is used to justify the introduction of non-linearities between NN layers, to prevent a phenomenon colloquially known as linear lasagna, (reference). In signal processing this also leads to a well known trick to optimize memory and/or runtime requirements (reference). So merging convolutions is a very useful tool from different perspectives. How to implement it with PyTorch?
If we have y = x * a * b (where * means convolution and a, b are your kernels), we can define c = a * b such that y = x * c = x * a * b as follows: import torch def merge_conv_kernels(k1, k2): """ :input k1: A tensor of shape ``(out1, in1, s1, s1)`` :input k1: A tensor of shape ``(out2, in2, s2, s2)`` :returns: A tensor of shape ``(out2, in1, s1+s2-1, s1+s2-1)`` so that convolving with it equals convolving with k1 and then with k2. """ padding = k2.shape[-1] - 1 # Flip because this is actually correlation, and permute to adapt to BHCW k3 = torch.conv2d(k1.permute(1, 0, 2, 3), k2.flip(-1, -2), padding=padding).permute(1, 0, 2, 3) return k3 To illustrate the equivalence, this example combines two kernels with 900 and 5000 parameters respectively into an equivalent kernel of 28 parameters: # Create 2 conv. kernels out1, in1, s1 = (100, 1, 3) out2, in2, s2 = (2, 100, 5) kernel1 = torch.rand(out1, in1, s1, s1, dtype=torch.float64) kernel2 = torch.rand(out2, in2, s2, s2, dtype=torch.float64) # propagate a random tensor through them. Note that padding # corresponds to the "full" mathematical operation (s-1) b, c, h, w = 1, 1, 6, 6 x = torch.rand(b, c, h, w, dtype=torch.float64) * 10 c1 = torch.conv2d(x, kernel1, padding=s1 - 1) c2 = torch.conv2d(c1, kernel2, padding=s2 - 1) # check that the collapsed conv2d is same as c2: kernel3 = merge_conv_kernels(kernel1, kernel2) c3 = torch.conv2d(x, kernel3, padding=kernel3.shape[-1] - 1) print(kernel3.shape) print((c2 - c3).abs().sum() < 1e-5) Note: The equivalence is assuming that we have unlimited numerical resolution. I think there was research on stacking many low-resolution-float linear operations and showing that the networks profited from numerical error, but I am unable to find it. Any reference is appreciated!
https://stackoverflow.com/questions/58357815/
How to write custom CrossEntropyLoss
I am learning Logistic Regression within Pytorch and to better understand I am defining a custom CrossEntropyLoss as below: def softmax(x): exp_x = torch.exp(x) sum_x = torch.sum(exp_x, dim=1, keepdim=True) return exp_x/sum_x def log_softmax(x): return torch.exp(x) - torch.sum(torch.exp(x), dim=1, keepdim=True) def CrossEntropyLoss(outputs, targets): num_examples = targets.shape[0] batch_size = outputs.shape[0] outputs = log_softmax(outputs) outputs = outputs[range(batch_size), targets] return - torch.sum(outputs)/num_examples I also make my own logistic regression (to predict FashionMNIST) as below: input_dim = 784 # 28x28 FashionMNIST data output_dim = 10 w_init = np.random.normal(scale=0.05, size=(input_dim,output_dim)) w_init = torch.tensor(w_init, requires_grad=True).float() b = torch.zeros(output_dim) def my_model(x): bs = x.shape[0] return x.reshape(bs, input_dim) @ w_init + b To validate my custom crossentropyloss, I compared it with nn.CrossEntropyLoss from Pytorch by applying it on FashionMNIST data as below: criterion = nn.CrossEntropyLoss() for X, y in trn_fashion_dl: outputs = my_model(X) my_outputs = softmax(outputs) my_ce = CrossEntropyLoss(my_outputs, y) pytorch_ce = criterion(outputs, y) print (f'my custom cross entropy: {my_ce.item()}\npytorch cross entroopy: {pytorch_ce.item()}') break My question is toward the results my_ce (my cross entropy) vs pytorch_ce (pytorch cross entropy) where they are different: my custom cross entropy: 9.956839561462402 pytorch cross entroopy: 2.378990888595581 I appreciate your help in advance!
There are two bugs in your code. The log_softmax(x) should be like, def log_softmax(x): return torch.log(softmax(x)) When you calculate your own CE loss, you should input outputs instead of my_outputs. Because you will calculate softmax inside your own CE loss function. It should be like, outputs = my_model(X) my_ce = CrossEntropyLoss(outputs, y) pytorch_ce = criterion(outputs, y) Then you will have identical results. my custom cross entropy: 3.584486961364746 pytorch cross entroopy: 3.584486961364746
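As a side note, computing torch.log(softmax(x)) directly can underflow for large-magnitude logits; a numerically stable sketch of log_softmax (the usual log-sum-exp shift, similar in spirit to what F.log_softmax does internally):

import torch

def stable_log_softmax(x):
    # Shift by the row max so exp() cannot overflow, then apply
    # log-softmax as x - logsumexp(x)
    x = x - x.max(dim=1, keepdim=True).values
    return x - torch.log(torch.sum(torch.exp(x), dim=1, keepdim=True))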
https://stackoverflow.com/questions/58360150/
Resnet-18 as backbone in Faster R-CNN
I code with pytorch and I want to use resnet-18 as backbone of Faster R-RCNN. When I print structure of resnet18, this is the output: >>import torch >>import torchvision >>import numpy as np >>import torchvision.models as models >>resnet18 = models.resnet18(pretrained=False) >>print(resnet18) ResNet( (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (layer1): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer2): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer3): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (layer4): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): 
BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (avgpool): AdaptiveAvgPool2d(output_size=(1, 1)) (fc): Linear(in_features=512, out_features=1000, bias=True) ) My question is: up to which layer is it a feature extractor? Should AdaptiveAvgPool2d be part of the backbone of Faster R-CNN? In this tutorial, it is shown how to train a Mask R-CNN with an arbitrary backbone; I want to do the same thing with Faster R-CNN and train a Faster R-CNN with resnet-18, but which layers should be part of the feature extractor is confusing to me. I know how to use ResNet + Feature Pyramid Network as a backbone; my question is about plain ResNet.
If we want to use output of Adaptive Average Pooling we use this code for different Resnets: # backbone if backbone_name == 'resnet_18': resnet_net = torchvision.models.resnet18(pretrained=True) modules = list(resnet_net.children())[:-1] backbone = nn.Sequential(*modules) backbone.out_channels = 512 elif backbone_name == 'resnet_34': resnet_net = torchvision.models.resnet34(pretrained=True) modules = list(resnet_net.children())[:-1] backbone = nn.Sequential(*modules) backbone.out_channels = 512 elif backbone_name == 'resnet_50': resnet_net = torchvision.models.resnet50(pretrained=True) modules = list(resnet_net.children())[:-1] backbone = nn.Sequential(*modules) backbone.out_channels = 2048 elif backbone_name == 'resnet_101': resnet_net = torchvision.models.resnet101(pretrained=True) modules = list(resnet_net.children())[:-1] backbone = nn.Sequential(*modules) backbone.out_channels = 2048 elif backbone_name == 'resnet_152': resnet_net = torchvision.models.resnet152(pretrained=True) modules = list(resnet_net.children())[:-1] backbone = nn.Sequential(*modules) backbone.out_channels = 2048 elif backbone_name == 'resnet_50_modified_stride_1': resnet_net = resnet50(pretrained=True) modules = list(resnet_net.children())[:-1] backbone = nn.Sequential(*modules) backbone.out_channels = 2048 elif backbone_name == 'resnext101_32x8d': resnet_net = torchvision.models.resnext101_32x8d(pretrained=True) modules = list(resnet_net.children())[:-1] backbone = nn.Sequential(*modules) backbone.out_channels = 2048 If we want to use convolution feature map we use this code: # backbone if backbone_name == 'resnet_18': resnet_net = torchvision.models.resnet18(pretrained=True) modules = list(resnet_net.children())[:-2] backbone = nn.Sequential(*modules) elif backbone_name == 'resnet_34': resnet_net = torchvision.models.resnet34(pretrained=True) modules = list(resnet_net.children())[:-2] backbone = nn.Sequential(*modules) elif backbone_name == 'resnet_50': resnet_net = torchvision.models.resnet50(pretrained=True) modules = list(resnet_net.children())[:-2] backbone = nn.Sequential(*modules) elif backbone_name == 'resnet_101': resnet_net = torchvision.models.resnet101(pretrained=True) modules = list(resnet_net.children())[:-2] backbone = nn.Sequential(*modules) elif backbone_name == 'resnet_152': resnet_net = torchvision.models.resnet152(pretrained=True) modules = list(resnet_net.children())[:-2] backbone = nn.Sequential(*modules) elif backbone_name == 'resnet_50_modified_stride_1': resnet_net = resnet50(pretrained=True) modules = list(resnet_net.children())[:-2] backbone = nn.Sequential(*modules) elif backbone_name == 'resnext101_32x8d': resnet_net = torchvision.models.resnext101_32x8d(pretrained=True) modules = list(resnet_net.children())[:-2] backbone = nn.Sequential(*modules)
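To actually plug one of these backbones into a Faster R-CNN head, the torchvision detection API also expects an anchor generator and an RoI pooler. A sketch following the torchvision finetuning tutorial pattern (note: older torchvision versions want featmap_names=[0] instead of ['0']):

import torch
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

resnet_net = torchvision.models.resnet18(pretrained=True)
modules = list(resnet_net.children())[:-2]  # keep the conv feature map
backbone = torch.nn.Sequential(*modules)
backbone.out_channels = 512  # resnet-18's last conv block outputs 512 channels

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=['0'],
                                                output_size=7,
                                                sampling_ratio=2)
model = FasterRCNN(backbone, num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
model.eval()
predictions = model([torch.rand(3, 300, 400)])  # list with one dict of boxes/labels/scores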
https://stackoverflow.com/questions/58362892/
Does a `DataLoader` created from `ConcatDataset` create a batch from a different files, or a single file?
I am working with multiple files, and multiple training samples in each file. I will use ConcatDataset as described here: https://discuss.pytorch.org/t/dataloaders-multiple-files-and-multiple-rows-per-column-with-lazy-evaluation/11769/7 I need to have negative samples in addition to my true samples, and I need my negative samples to be randomly selected from all the training data files. So, I am wondering, would the returned batch samples just be a random consecutive chunk from a single file, or would the batch span multiple random indexes across all the data files? If more details are needed about what I am trying to do exactly, it's because I am trying to train over a TPU with Pytorch XLA. Normally for negative samples, I would just use a 2nd DataSet and DataLoader; however, I am trying to train over TPUs with Pytorch XLA (the alpha was just released a few days ago: https://github.com/pytorch/xla ), and to do that I need to send my DataLoader to a torch_xla.distributed.data_parallel.DataParallel object, like model_parallel(train_loop_fn, train_loader), which can be seen in these example notebooks https://github.com/pytorch/xla/blob/master/contrib/colab/resnet18-training-xrt-1-15.ipynb https://github.com/pytorch/xla/blob/master/contrib/colab/mnist-training-xrt-1-15.ipynb So, I am now limited to a single DataLoader, which will need to handle both the true samples and the negative samples that need to be randomly selected from all my files.
ConcatDataset is a custom class that is subclassed from torch.utils.data.Dataset. Let's take a look at one example. class ConcatDataset(torch.utils.data.Dataset): def __init__(self, *datasets): self.datasets = datasets def __getitem__(self, i): return tuple(d[i] for d in self.datasets) def __len__(self): return min(len(d) for d in self.datasets) train_loader = torch.utils.data.DataLoader( ConcatDataset( dataset1, dataset2 ), batch_size=args.batch_size, shuffle=True, num_workers=args.workers, pin_memory=True) for i, (input, target) in enumerate(train_loader): ... Here, two datasets namely dataset1 (a list of examples) and dataset2 are combined to form a single training dataset. The __getitem__ function returns one example from the dataset and will be used by the BatchSampler to form the training mini-batches. Would the returned batch samples just be a random consecutive chuck from a single file, or would be batch span across multiple random indexes across all the datafiles? Since you have combined all your data files to form one dataset, now it depends on what BatchSampler do you use to sample mini-batches. There are several samplers implemented in PyTorch, for example, RandomSampler, SequentialSampler, SubsetRandomSampler, WeightedRandomSampler. See their usage in the documentation. You can have your custom BatchSampler too as follows. class MyBatchSampler(Sampler): def __init__(self, *params): # write your code here def __iter__(self): # write your code here # return an iterable def __len__(self): # return the size of the dataset The __iter__ function should return an iterable of mini-batches. You can implement your logic of forming mini-batches in this function. To randomly sample negative examples for training, one alternative could be to pick negative examples for each positive example in the __init__ function of the ConcatDataset class.
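One caveat: the custom ConcatDataset above zips its datasets (one tuple per shared index), whereas the built-in torch.utils.data.ConcatDataset concatenates them end-to-end. With the built-in version and shuffle=True, the DataLoader's RandomSampler draws indices across the whole concatenated range, so a single batch does span the underlying files. A toy sketch:

import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

ds1 = TensorDataset(torch.zeros(100, 5))  # stands in for file 1
ds2 = TensorDataset(torch.ones(100, 5))   # stands in for file 2

combined = ConcatDataset([ds1, ds2])      # indices 0-99 -> ds1, 100-199 -> ds2
loader = DataLoader(combined, batch_size=8, shuffle=True)

(batch,) = next(iter(loader))
print(batch[:, 0])  # typically a mix of 0s and 1s: the batch spans both datasets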
https://stackoverflow.com/questions/58367385/
RuntimeError: size mismatch, m1: [32 x 1], m2: [32 x 9]
I'm building a CNN and training it on hand sign gesture classification for letters A through I (9 classes), each image is RGB with 224x224 size. Not sure which matrix I need to transpose and how. I have managed to match the inputs and outputs of layers, but that matrix multiplication thing, not really sure how to fix it. class LargeNet(nn.Module): def __init__(self): super(LargeNet, self).__init__() self.name = "large" self.conv1 = nn.Conv2d(3, 5, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(5, 10, 5) self.fc1 = nn.Linear(10 * 53 * 53, 32) self.fc2 = nn.Linear(32, 9) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) print('x1') x = self.pool(F.relu(self.conv2(x))) print('x2') x = x.view(-1, 10*53*53) print('x3') x = F.relu(self.fc1(x)) print('x4') x = x.view(-1, 1) x = self.fc2(x) print('x5') x = x.squeeze(1) # Flatten to [batch_size] return x and training code #Loss and optimizer criterion = nn.BCEWithLogitsLoss() optimizer = optim.SGD(model2.parameters(), lr=learning_rate, momentum=0.9) # Train the model total_step = len(train_loader) loss_list = [] acc_list = [] for epoch in range(num_epochs): for i, (images, labels) in enumerate(train_loader): print(i,images.size(),labels.size()) # Run the forward pass outputs = model2(images) labels=labels.unsqueeze(1) labels=labels.float() loss = criterion(outputs, labels) The code prints up to x4 and then I get this error RuntimeError: size mismatch, m1: [32 x 1], m2: [32 x 9] at C:\w\1\s\tmp_conda_3.7_055457\conda\conda-bld\pytorch_1565416617654\work\aten\src\TH/generic/THTensorMath.cpp:752 Complete traceback error: https://ibb.co/ykqy5wM
You don't need x=x.view(-1,1) and x = x.squeeze(1) in your forward function. Remove these two lines. Your output shape would be (batch_size, 9). Also, you need to convert labels to one-hot encoding, which is in shape of (batch_size, 9). class LargeNet(nn.Module): def __init__(self): super(LargeNet, self).__init__() self.name = "large" self.conv1 = nn.Conv2d(3, 5, 5) self.pool = nn.MaxPool2d(2, 2) self.conv2 = nn.Conv2d(5, 10, 5) self.fc1 = nn.Linear(10 * 53 * 53, 32) self.fc2 = nn.Linear(32, 9) def forward(self, x): x = self.pool(F.relu(self.conv1(x))) x = self.pool(F.relu(self.conv2(x))) x = x.view(-1, 10*53*53) x = F.relu(self.fc1(x)) x = self.fc2(x) return x model2 = LargeNet() #Loss and optimizer criterion = nn.BCEWithLogitsLoss() # nn.BCELoss() optimizer = optim.SGD(model2.parameters(), lr=0.1, momentum=0.9) images = torch.from_numpy(np.random.randn(2,3,224,224)).float() # fake images, batch_size is 2 labels = torch.tensor([1,2]).long() # fake labels outputs = model2(images) one_hot_labels = torch.eye(9)[labels] loss = criterion(outputs, one_hot_labels)
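As an aside, since this is a single-label, 9-class problem, nn.CrossEntropyLoss with integer class indices is the more conventional choice and avoids the one-hot step entirely; a sketch with made-up logits and labels:

import torch
import torch.nn as nn

outputs = torch.randn(2, 9)        # hypothetical logits: batch of 2, 9 classes
labels = torch.tensor([1, 2])      # plain class indices, no one-hot needed

criterion = nn.CrossEntropyLoss()  # applies log-softmax internally
loss = criterion(outputs, labels)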
https://stackoverflow.com/questions/58368601/
Listing first and second values in a tensor
I know this might be a simple problem for some of you. I am not able to get the output as expected. The input tensor is given below. X = tensor([[50.7500, 44.0000], [47.0000, 47.0000], [42.5000, 52.2500], [59.6163, 50.7097], [54.6682, 54.6682], [48.7304, 61.5956], [71.3156, 59.5631], [64.7864, 64.7864], [56.9515, 73.9272]]) I want two tensors A and B such that A = tensor([[50.7500], [47.0000], [42.5000], [59.6163], [54.6682], [48.7304], [71.3156], [64.7864], [56.9515]]) and B = tensor([[44.0000], [47.0000], [52.2500], [50.7097], [54.6682], [61.5956], [59.5631], [64.7864], [73.9272]]) This is what I tried till now. idx = torch.LongTensor([0]) idx1 = torch.LongTensor([1]) A = X.index_select(1,idx) B = X.index_select(1,idx1) which is not giving me output as expected. Thanks in advance.
You want to pick elements along the second dimension of the tensor; you can do this as follows:

A = X[:, 0]
B = X[:, 1]

The output of this will be:

print(B)
>>> tensor([44.0000, 47.0000, 52.2500, 50.7097, 54.6682, 61.5956, 59.5631, 64.7864, 73.9272])
print(A)
>>> tensor([50.7500, 47.0000, 42.5000, 59.6163, 54.6682, 48.7304, 71.3156, 64.7864, 56.9515])

But since you want two-dimensional tensors, you can then do

A.unsqueeze(dim=1)
B.unsqueeze(dim=1)

In the case above you had to say explicitly which dimension to add/unsqueeze, but you can avoid that by using slice indexing as below:

A = X[:, 0:1]
B = X[:, 1:2]

Using slices of the form i:i+1 preserves the number of dimensions of the original tensor.
https://stackoverflow.com/questions/58375834/
Correct way to use custom weight maps in unet architecture
There is a famous trick in the U-Net architecture: using custom weight maps to increase accuracy. The details are the per-pixel weight-map formula from the U-Net paper (shown as an image in the original post). Now, after asking here and in multiple other places, I have learned about two approaches. I want to know which one is correct, or whether there is another, more correct approach. The first is to use torch.nn.functional in the training loop:

loss = torch.nn.functional.cross_entropy(output, target, w)

where w is the calculated custom weight. The second is to use reduction='none' when instantiating the loss function outside the training loop:

criterion = torch.nn.CrossEntropyLoss(reduction='none')

and then, in the training loop, multiplying by the custom weight:

gt  # Ground truth, format torch.long
pd  # Network output
W   # per-element weighting based on the distance map from UNet
loss = criterion(pd, gt)
loss = W * loss  # Ensure that weights are scaled appropriately
loss = torch.sum(loss.flatten(start_dim=1), axis=0)  # Sums the loss per image
loss = torch.mean(loss)  # Average across a batch

Now, I am kind of confused which one is right, or is there another way, or are both right?
The weighting portion looks like just simply weighted cross entropy which is performed like this for the number of classes (2 in the example below). weights = torch.FloatTensor([.3, .7]) loss_func = nn.CrossEntropyLoss(weight=weights) EDIT: Have you seen this implementation from Patrick Black? # Set properties batch_size = 10 out_channels = 2 W = 10 H = 10 # Initialize logits etc. with random logits = torch.FloatTensor(batch_size, out_channels, H, W).normal_() target = torch.LongTensor(batch_size, H, W).random_(0, out_channels) weights = torch.FloatTensor(batch_size, 1, H, W).random_(1, 3) # Calculate log probabilities logp = F.log_softmax(logits) # Gather log probabilities with respect to target logp = logp.gather(1, target.view(batch_size, 1, H, W)) # Multiply with weights weighted_logp = (logp * weights).view(batch_size, -1) # Rescale so that loss is in approx. same interval weighted_loss = weighted_logp.sum(1) / weights.view(batch_size, -1).sum(1) # Average over mini-batch weighted_loss = -1. * weighted_loss.mean()
https://stackoverflow.com/questions/58377887/
Convert list of two dimensional DataFrame to Torch Tensor
Goal: I am working with RNNs in PyTorch, and my data is given by a list of DataFrames, where each DataFrame is one observation, like:

import numpy as np
import pandas as pd
data = [pd.DataFrame(np.zeros((5, 50))) for x in range(100)]

which means 100 observations, with 50 parameters and 5 timesteps each. For my model I need a tensor of shape (100, 5, 50). Issue: I tried a lot of things but nothing seems to work; does anyone know how this is done? This approach doesn't work:

import torch
torch.tensor(np.array(data))

I think the problem is converting the DataFrames into arrays and the list into a tensor at the same time.
I don't think you can convert the list of dataframes in a single command, but you can convert the list of dataframes into a list of tensors and then concatenate the list. E.g. import pandas as pd import numpy as np import torch data = [pd.DataFrame(np.zeros((5,50))) for x in range(100)] list_of_arrays = [np.array(df) for df in data] torch.tensor(np.stack(list_of_arrays)) #or list_of_tensors = [torch.tensor(np.array(df)) for df in data] torch.stack(list_of_tensors)
https://stackoverflow.com/questions/58382401/
Use SHAP Values for PyTorch RNN / LSTM
Is there a way to do the above? The SHAP package is very helpful and works pretty well for PyTorch neural nets. For PyTorch RNNs I get the error message below (for LSTMs it's the same); the error screenshot is omitted in the original post. It seems like it doesn't work, but is there a workaround or something? Does anyone have experience with PyTorch and SHAP?
"RNNs aren't yet supported for the PyTorch DeepExplainer (a warning pops up to let you know which modules aren't supported yet: Warning: unrecognized nn.Module: RNN). In this case, the explainer assumes the module is linear, and makes no change to the gradient. Since RNNs contain nonlinearities, this is probably contributing to the problem." That was an answer I found in the SHAP repository. Try checking captum.ai, which is built on PyTorch.
https://stackoverflow.com/questions/58401581/
How to vectorize custom algorithms in numpy or pytorch?
Suppose I have two matrices: A of size k x m, and B of size m x n. Using a custom operation, my output will be k x n. This custom operation is not a dot product between the rows of A and the columns of B. Suppose the custom operation is defined as: for the I-th row of A and the J-th column of B, the (I, J) element of the output is sum_t (A[I, t] + B[t, J])^20, with t running over the shared dimension m. The only way I can see to implement this is to expand the equation, calculate each term, then sum them. Is there a way in numpy or pytorch to do this without expanding the equation?
Apart from the method @hpaulj outlines in the comments, you can also use the fact that what you are calculating is essentially a pair-wise Minkowski distance: import numpy as np from scipy.spatial.distance import cdist k,m,n = 10,20,30 A = np.random.random((k,m)) B = np.random.random((m,n)) method1 = ((A[...,None]+B)**20).sum(axis=1) method2 = cdist(A,-B.T,'m',p=20)**20 np.allclose(method1,method2) # True
https://stackoverflow.com/questions/58402887/
Indices out of range for MaxUnpool2d
I am trying to understand unpooling in Pytorch because I want to build a convolutional auto-encoder. I have the following code from torch.autograd import Variable data = Variable(torch.rand(1, 73, 480)) pool_t = nn.MaxPool2d(2, 2, return_indices=True) unpool_t = nn.MaxUnpool2d(2, 2) out, indices1 = pool_t(data) out = unpool_t(out, indices1) But I am constantly getting this error on the last line (unpooling). IndexError: tuple index out of range Although the data is simulated in this example, the input has to be of that shape because of the preprocessing that has to be done. I am fairly new to convolutional networks, but I have even tried using a ReLU and convolutional 2D layer before the pooling however, the indices always seem to be incorrect when unpooling for this shape.
Your data is 1D and you are using 2D pooling and unpooling operations. PyTorch interprets the first two dimensions of tensors as the "batch dimension" and the "channel"/"feature space" dimension. The rest of the dimensions are treated as spatial dimensions. So, in your example, data is a 3D tensor of size (1, 73, 480) and is interpreted by PyTorch as a single batch ("batch dimension" = 1) with 73 channels per sample and 480 samples. For some reason MaxPool2d works for you and treats the channel dimension as a spatial dimension and samples it as well; I'm not sure whether this is a bug or a feature. If you do want to sample along the second dimension you can add an additional dimension, making data a 4D tensor:

out, indices1 = pool_t(data[None, ...])
out = unpool_t(out, indices1, data[None, ...].size())
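Alternatively, if only the last (length) dimension should be pooled, the 1D variants accept the (batch, channels, length) tensor from the question as-is; a sketch:

import torch
import torch.nn as nn

data = torch.rand(1, 73, 480)  # (batch, channels, length)

pool_t = nn.MaxPool1d(2, 2, return_indices=True)
unpool_t = nn.MaxUnpool1d(2, 2)

out, indices1 = pool_t(data)         # (1, 73, 240)
recovered = unpool_t(out, indices1)
print(recovered.shape)               # torch.Size([1, 73, 480])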
https://stackoverflow.com/questions/58408616/
How to load the saved tokenizer from pretrained model
I fine-tuned a pretrained BERT model in Pytorch using huggingface transformer. All the training/validation is done on a GPU in cloud. At the end of the training, I save the model and tokenizer like below: best_model.save_pretrained('./saved_model/') tokenizer.save_pretrained('./saved_model/') This creates below files in the saved_model directory: config.json added_token.json special_tokens_map.json tokenizer_config.json vocab.txt pytorch_model.bin Now, I download the saved_model directory in my computer and want to load the model and tokenizer. I can load the model like below model = torch.load('./saved_model/pytorch_model.bin',map_location=torch.device('cpu')) But how do I load the tokenizer? I am new to pytorch and not sure because there are multiple files. Probably I am not saving the model in the right way?
If you look at the syntax, it is the directory of the pre-trained model that you are supposed to pass. Hence, the correct way to load tokenizer must be: tokenizer = BertTokenizer.from_pretrained(<Path to the directory containing pretrained model/tokenizer>) In your case: tokenizer = BertTokenizer.from_pretrained('./saved_model/') ./saved_model here is the directory where you'll be saving your pretrained model and tokenizer.
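For completeness, the model itself is usually reloaded the same way, since the pytorch_model.bin written by save_pretrained() holds only a state dict, not a full model object. A sketch, assuming the fine-tuned class was BertForSequenceClassification (swap in whichever class was actually used):

from transformers import BertForSequenceClassification, BertTokenizer

# Both loaders take the directory created by save_pretrained()
model = BertForSequenceClassification.from_pretrained('./saved_model/')
tokenizer = BertTokenizer.from_pretrained('./saved_model/')
model.eval()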
https://stackoverflow.com/questions/58417374/
pytorch - gradients not calculated for parameters
a = torch.nn.Parameter(torch.randn(1, requires_grad=True, dtype=torch.float, device=device)) b = torch.nn.Parameter(torch.randn(1, requires_grad=True, dtype=torch.float, device=device)) c = a + 1 d = torch.nn.Parameter(c, requires_grad=True,) for epoch in range(n_epochs): yhat = d + b * x_train_tensor error = y_train_tensor - yhat loss = (error ** 2).mean() loss.backward() print(a.grad) print(b.grad) print(c.grad) print(d.grad) Prints out None tensor([-0.8707]) None tensor([-1.1125]) How do I learn the gradient for a and c? variable d needs to stay a parameter
Basically, when you create a new tensor, e.g. with torch.nn.Parameter() or torch.tensor(), you are creating a leaf node tensor. And when you do something like c = a + 1, c will be an intermediate node. You can print(c.is_leaf) to check whether the tensor is a leaf node or not. PyTorch does not calculate the gradient of intermediate nodes by default. In your code snippet, a, b, and d are all leaf node tensors, and c is an intermediate node. c.grad will be None, as PyTorch doesn't calculate the gradient for intermediate nodes. a is isolated from the graph when you call loss.backward(), because d = torch.nn.Parameter(c) creates a fresh leaf that cuts the connection to a; that's why a.grad is also None. If you change the code to this

a = torch.nn.Parameter(torch.randn(1, requires_grad=True, dtype=torch.float, device=device))
b = torch.nn.Parameter(torch.randn(1, requires_grad=True, dtype=torch.float, device=device))
c = a + 1
d = c

for epoch in range(n_epochs):
    yhat = d + b * x_train_tensor
    error = y_train_tensor - yhat
    loss = (error ** 2).mean()
    loss.backward()
    print(a.grad)  # Not None
    print(b.grad)  # Not None
    print(c.grad)  # None
    print(d.grad)  # None

You will find a and b have gradients, but c.grad and d.grad are None, because they're intermediate nodes.
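If the gradient of an intermediate node really is needed, retain_grad() asks autograd to keep it; a sketch:

import torch

a = torch.nn.Parameter(torch.randn(1))
c = a + 1          # intermediate (non-leaf) node
c.retain_grad()    # keep the gradient on this non-leaf tensor

loss = (c ** 2).mean()
loss.backward()
print(a.grad)      # populated: a is a leaf
print(c.grad)      # populated too, thanks to retain_grad()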
https://stackoverflow.com/questions/58421637/
How to fetch image dataset from Google Drive to Colab?
I have this very weird problem. I have searched across internet, read documentation but am not able to figure out how to do it. So what I want to do is train a classifier using Colab. And for that I have a image dataset of dogs on my local machine. So what I did was I packed that dataset folder of images into a zip file and uploaded it onto Drive. Then from Colab I mounted the drive and from there I tried to unzip the files. Everything good. But I've realised that after sometime some of the extracted files get deleted. And thing is that those files aren't on Colab storage, but instead on Drive and I dunno why they are getting deleted after sometime. Like about an hour. So far I've used the following commands to do the extraction - from google.colab import drive drive.mount('/content/drive') from zipfile import ZipFile filename = 'Stanford Dogs Dataset.zip' with ZipFile(filename, 'r') as zip: zip.extractall() print('Done') and also tried this - !unzip filename -d destination Not sure where I am going wrong. And also, dunno why the extracted files though being extracted to a subfolder within drive, also starts showing up on the main root directory. And no I am not talking about the recent section, because when I want to check their location then they points to the root of the drive. It's all so confusing.
First you mount Google Drive:

from google.colab import drive
drive.mount('/gdrive')

Then you can copy from your drive using !cp:

!cp '/gdrive/My Drive/my_file' 'my_file'

Then you can work as on your own PC: unzip, and so on.
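Applied to the question's setup, a sketch (the archive name is taken from the question): extracting onto the Colab VM's local disk instead of into Drive sidesteps Drive's syncing, which is the likely reason the extracted files seemed to disappear later:

from google.colab import drive
drive.mount('/content/drive')

# Copy the archive to the VM's local disk, then unzip there (not into Drive)
!cp '/content/drive/My Drive/Stanford Dogs Dataset.zip' /content/dataset.zip
!unzip -q /content/dataset.zip -d /content/dataset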
https://stackoverflow.com/questions/58434046/
Pytorch - going back and forth between eval() and train() modes
I'm studying "Deep Reinforcement Learning" and build my own example after pytorch's REINFORCEMENT LEARNING (DQN) TUTORIAL. I'm implement actor's strategy as follows: 1. model.eval() 2. get best action from a model 3. self.net.train() The question is: Does going back and forth between eval() and train() modes cause any damage to optimization process? The model includes only Linear and BatchNorm1d layers. As far as I know when using BatchNorm1d one must perform model.eval() to use a model, because there is different results in eval() and train() modes. When training Classification Neural Network the model.eval() performed only after training is finished, but in case of "Deep Reinforcement Learning" it is usual to use strategy and then continue the optimization process. I'm wondering if going back and forth between modes is "harmless" to optimization process? def strategy(self, state): # Explore or Exploit if self.epsilon > random(): action = choice(self.actions) else: self.net.eval() action = self.net(state.unsqueeze(0)).max(1)[1].detach() self.net.train()
eval() puts the model in evaluation mode. In evaluation mode, the Dropout layer just acts as a "passthrough" layer. During training, a BatchNorm layer keeps a running estimate of its computed mean and variance. The running sum is kept with a default momentum of 0.1. During evaluation, this running mean/variance is used for normalization. So, going back and forth between eval() and train() modes does not cause any damage to the optimization process.
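A variant of the question's strategy method that additionally wraps the forward pass in torch.no_grad(), which skips autograd bookkeeping during action selection (a sketch assuming the surrounding class from the question):

def strategy(self, state):
    # Explore or Exploit
    if self.epsilon > random():
        return choice(self.actions)
    self.net.eval()
    with torch.no_grad():  # inference only: no graph is built, so detach() is unnecessary
        action = self.net(state.unsqueeze(0)).max(1)[1]
    self.net.train()
    return action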
https://stackoverflow.com/questions/58447885/
Getting Torch to recognize GPU
How do you get Torch to recognize CUDA on your video card? I have a Nvidia GeForce GT 1030 running under Ubuntu 18.04, and it claims to support CUDA, yet when I first tested Torch with it by running: virtualenv -p python3.7 .env . .env/bin/activate pip install torch python -c "import torch; print(torch.cuda.is_available())" it returned False, along with the warning: The NVIDIA driver on your system is too old (found version 9010). Please update your GPU driver by downloading and installing a new version from the URL: http://www.nvidia.com/Download/index.aspx Alternatively, go to: https://pytorch.org to install a PyTorch version that has been compiled with your version of the CUDA driver. So I ran all system updates and used Ubuntu's proprietary driver installer to install the most recent Nvidia-435 driver for my card. However, torch.cuda.is_available() still returns false, but now it doesn't give me any warning. Have I mis-configured Torch or does my GPU just not support CUDA?
Never mind, I spoke too soon. I didn't reboot after switching over the driver, and apparently that broke nvidia-smi and some other things that loaded the CUDA driver. After the reboot, Torch now recognizes CUDA 10.1 support.
https://stackoverflow.com/questions/58452620/
PyTorch BERT TypeError: forward() got an unexpected keyword argument 'labels'
Training a BERT model using PyTorch transformers (following the tutorial here). The following statement in the tutorial
loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
leads to
TypeError: forward() got an unexpected keyword argument 'labels'
Here is the full error:
TypeError Traceback (most recent call last) <ipython-input-53-56aa2f57dcaf> in <module> 26 optimizer.zero_grad() 27 # Forward pass ---> 28 loss = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels) 29 train_loss_set.append(loss.item()) 30 # Backward pass ~/anaconda3/envs/systreviewclassifi/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs) 539 result = self._slow_forward(*input, **kwargs) 540 else: --> 541 result = self.forward(*input, **kwargs) 542 for hook in self._forward_hooks.values(): 543 hook_result = hook(self, input, result) TypeError: forward() got an unexpected keyword argument 'labels'
I can't seem to figure out what kind of argument the forward() function expects. There is a similar problem here, but I still do not get what the solution is. System information: OS: Ubuntu 16.04 LTS Python version: 3.6.x Torch version: 1.3.0 Torch Vision version: 0.4.1 PyTorch transformers version: 1.2.0
As far as I know, the BertModel does not take labels in the forward() function. Check out the forward function parameters. I suspect you are trying to fine-tune the BertModel for a sequence classification task, and the API provides a class for that: BertForSequenceClassification. As you can see from its forward() function definition:
def forward(self, input_ids, attention_mask=None, token_type_ids=None,
            position_ids=None, head_mask=None, labels=None):
Please note, the forward() method returns the following.
Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
    **loss**: (`optional`, returned when ``labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
        Classification (or regression if config.num_labels==1) loss.
    **logits**: ``torch.FloatTensor`` of shape ``(batch_size, config.num_labels)``
        Classification (or regression if config.num_labels==1) scores (before SoftMax).
    **hidden_states**: (`optional`, returned when ``config.output_hidden_states=True``)
        list of ``torch.FloatTensor`` (one for the output of each layer + the output of the embeddings)
        of shape ``(batch_size, sequence_length, hidden_size)``:
        Hidden-states of the model at the output of each layer plus the initial embedding outputs.
    **attentions**: (`optional`, returned when ``config.output_attentions=True``)
        list of ``torch.FloatTensor`` (one for each layer) of shape ``(batch_size, num_heads, sequence_length, sequence_length)``:
        Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Hope this helps!
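As a sketch of the intended usage for the pytorch-transformers 1.x line the question mentions (the model name and label count here are assumptions):
from pytorch_transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=2)

# With labels supplied, the first element of the returned tuple is the loss
outputs = model(b_input_ids, token_type_ids=None,
                attention_mask=b_input_mask, labels=b_labels)
loss = outputs[0]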
https://stackoverflow.com/questions/58454157/
why the `test process` is inside the epoch loop in deep learning?
I am new to deep learning; here is the code I saw. The code is fine, but I don't understand the following:
for epoch in range(1, args.epochs + 1):
    train(epoch)
    test(epoch)
    with torch.no_grad():
        sample = torch.randn(64, 20).to(device)
        sample = model.decode(sample).cpu()
In machine learning, when we finish training, we fix the model parameters for the test dataset. Here are my two questions: (1) In deep learning, we have training, validation, and test datasets. Is the code test(epoch) actually for the validation set? Should we instead fix the model parameters and predict for the test dataset outside the epoch loop? (2) What does with torch.no_grad(): mean? And why is it also inside the epoch loop? Thanks a lot.
Yes, the test(epoch) is actually for validation here (Update: not exactly validation, check below answer). with torch.no_grad() means that you're switching off the gradients (required for backpropagation during training). In validation/testing you don't need them, and it'll save memory and computations. Read more here. Also, check the tutorial here.
https://stackoverflow.com/questions/58454269/
Is there an equivalent function of pytorch named "index_select" in tensorflow
I am trying to translate PyTorch code to TensorFlow, so I want to know whether there is an equivalent of PyTorch's index_select function in TensorFlow.
I haven't found a similar API that can directly achieve it, but we can use tf.slice to implement it.
def tf_index_select(input_, dim, indices):
    """
    input_(tensor): input tensor
    dim(int): dimension
    indices(list): selected indices list
    """
    shape = input_.get_shape().as_list()
    if dim == -1:
        dim = len(shape)-1
    shape[dim] = 1

    tmp = []
    for idx in indices:
        begin = [0]*len(shape)
        begin[dim] = idx
        tmp.append(tf.slice(input_, begin, shape))
    res = tf.concat(tmp, axis=dim)

    return res
Here is an example to show the equivalence.
import tensorflow as tf
import torch
import numpy as np

a = np.arange(2*3*4).reshape(2,3,4)
dim = 1
indices = [0,2]
# array([[[ 0, 1, 2, 3],
#         [ 4, 5, 6, 7],
#         [ 8, 9, 10, 11]],
#        [[12, 13, 14, 15],
#         [16, 17, 18, 19],
#         [20, 21, 22, 23]]])

# pytorch
res = torch.tensor(a).index_select(dim, torch.tensor(indices))
# tensor([[[ 0, 1, 2, 3],
#          [ 8, 9, 10, 11]],
#         [[12, 13, 14, 15],
#          [20, 21, 22, 23]]])

# tensorflow
res = tf_index_select(tf.constant(a), dim, indices)
# tensor([[[ 0, 1, 2, 3],
#          [ 8, 9, 10, 11]],
#         [[12, 13, 14, 15],
#          [20, 21, 22, 23]]])
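For what it's worth, a shorter sketch using tf.gather with its axis argument should give the same result as the example above (worth double-checking against your exact use case):
import tensorflow as tf

# same a, dim, indices as in the example above
res = tf.gather(tf.constant(a), indices, axis=dim)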
https://stackoverflow.com/questions/58464790/
bool value of Tensor with more than one value is ambiguous Pytorch error
I have a Neural network that i want to train but i keep getting an "bool value of Tensor with more than one value is ambiguous" Error. Here is my network: from torch.autograd import Variable from torch.utils.data import Dataset, DataLoader import torch.nn as nn from torch.nn import functional as F from DataSetLoader import ReplaysDataSet class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.l1 = torch.nn.Linear(12,24) self.l2 = torch.nn.Linear(24, 20) self.l3 = torch.nn.Linear(20, 16) self.l4 = torch.nn.Linear(16, 10) self.l5 = torch.nn.Linear(10, 6) self.l6 = torch.nn.Linear(6, 1) def forward(self, t): t = F.relu(self.l1(t)) t = F.relu(self.l2(t)) t = F.relu(self.l3(t)) t = F.relu(self.l4(t)) t = F.relu(self.l5(t)) y_pred = F.relu(self.l6(t)) return y_pred def get_num_correct(self,preds,labels): return preds.argmax(dim=1).eq(labels).sum() class Test(): model = Model() optimiser = torch.optim.SGD(model.parameters(), lr=0.01) dataset = ReplaysDataSet() trainLoader = DataLoader(dataset=dataset, batch_size=250, shuffle=True) batch = next(iter(trainLoader)) criterion = nn.MSELoss for epoch in range(10): totalLoss = 0 totalCorrect = 0 for batch in trainLoader: data, label = batch prediction = model(data) prediction = prediction.reshape(250) print(prediction) print(label) loss = criterion(prediction, label) optimiser.zero_grad() loss.backward() optimiser.step() totalLoss += loss.item() print(totalLoss) totalCorrect += model.get_num_correct(prediction, label) print(totalCorrect) Here is my Data loader import torch from torch.autograd import Variable from torch.utils.data import Dataset, DataLoader from torch.nn import functional as F class ReplaysDataSet(Dataset): def __init__(self): self.xy = np.genfromtxt("dataset.csv", delimiter=',', dtype=np.float32) self.length = len(self.xy) self.x_data = torch.from_numpy(self.xy[0:,1:]) self.y_data = torch.from_numpy(self.xy[0:,0]) self.length = len(self.xy) def __getitem__(self, index): return self.x_data[index], self.y_data[index] def __len__(self): return self.length And here is some data from the csv im training on 1,303,497,784,748,743,169,462,479,785,310,26,701 1,658,598,645,786,248,381,80,428,248,530,591,145 0,38,490,796,637,130,380,226,359,720,392,464,497 0,94,752,645,801,381,479,475,381,227,645,445,248 0,59,806,254,521,91,538,212,645,609,227,545,531 1,65,254,685,565,91,796,445,658,465,485,472,184 1,385,248,211,612,82,38,485,652,212,373,563,26 1,796,596,785,310,145,479,142,685,748,635,798,474 1,380,658,485,645,36,598,806,428,786,798,645,113 0,743,214,625,642,530,784,645,641,65,598,786,637 The error i get is Traceback (most recent call last): File "C:/Users/tayya/PycharmProjects/untitled/NetworkFile.py", line 32, in <module> class Test(): File "C:/Users/tayya/PycharmProjects/untitled/NetworkFile.py", line 50, in Test loss = criterion(prediction, label) File "C:\Users\tayya\PycharmProjects\untitled\venv\lib\site-packages\torch\nn\modules\loss.py", line 428, in __init__ super(MSELoss, self).__init__(size_average, reduce, reduction) File "C:\Users\tayya\PycharmProjects\untitled\venv\lib\site-packages\torch\nn\modules\loss.py", line 12, in __init__ self.reduction = _Reduction.legacy_get_string(size_average, reduce) File "C:\Users\tayya\PycharmProjects\untitled\venv\lib\site-packages\torch\nn\_reduction.py", line 36, in legacy_get_string if size_average and reduce: RuntimeError: bool value of Tensor with more than one value is ambiguous Any help would be greatly appreciated. 
I'm new to NNs, so if I made a glaring mistake, my apologies.
A couple of things: 1. From your data I concluded this is a classification problem, not a regression problem, so the use of MSELoss isn't optimal; I changed it to BCELoss, which should be more appropriate. 2. The last activation of your network is ReLU, and because this is a binary classification problem, Sigmoid is a better choice. 3. I made a small fix in the get_num_correct function. Also note that the direct cause of your error is that criterion = nn.MSELoss is missing the parentheses: you assigned the class itself instead of an instance, so calling criterion(prediction, label) invokes the MSELoss constructor with your tensors as its size_average and reduce arguments, which is exactly the boolean check that fails in your traceback. Hope this works for you:
from torch.utils.data import Dataset, DataLoader
import torch.nn as nn
from torch.nn import functional as F
from DataSetLoader import ReplaysDataSet

import torch


class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.l1 = torch.nn.Linear(12,24)
        self.l2 = torch.nn.Linear(24, 20)
        self.l3 = torch.nn.Linear(20, 16)
        self.l4 = torch.nn.Linear(16, 10)
        self.l5 = torch.nn.Linear(10, 6)
        self.l6 = torch.nn.Linear(6, 1)

    def forward(self, t):
        t = F.relu(self.l1(t))
        t = F.relu(self.l2(t))
        t = F.relu(self.l3(t))
        t = F.relu(self.l4(t))
        t = F.relu(self.l5(t))
        y_pred = F.sigmoid(self.l6(t))
        return y_pred

    def get_num_correct(self,preds,labels):
        return preds.round().squeeze().eq(labels).numpy().sum()


class Test():
    model = Model()
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    dataset = ReplaysDataSet()
    trainLoader = DataLoader(dataset=dataset, batch_size=250, shuffle=True)
    batch = next(iter(trainLoader))
    criterion = nn.BCELoss()

    for epoch in range(10):
        totalLoss = 0
        totalCorrect = 0
        for batch in trainLoader:
            data, label = batch
            prediction = model(data)
            print(prediction)
            print(label)
            loss = criterion(prediction.squeeze(), label)
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
            totalLoss += loss.item()
        print(totalLoss)
        totalCorrect += model.get_num_correct(prediction, label)
        print(totalCorrect)
https://stackoverflow.com/questions/58468504/
Slicing a 4D tensor with a 3D tensor-index in PyTorch
I have a 4D tensor (which happens to be a stack of three batches of 56x56 images where each batch has 16 images) with the size of [16, 3, 56, 56]. My goal is to select the correct one of those three batches (with my index map that has the size of [16, 56, 56]) for each pixel and get the images that I want. Now, I want to select the particular batches of images inside those three batches, with an index map which has values such as
[[[ 0, 0, 2, ..., 0, 0, 0],
  [ 0, 0, 2, ..., 0, 0, 0],
  [ 0, 0, 0, ..., 0, 0, 0],
  ...,
  [ 0, 0, 0, ..., 0, 0, 0],
  [ 0, 2, 0, ..., 0, 0, 0],
  [ 0, 2, 2, ..., 0, 0, 0]],
 [[ 0, 2, 0, ..., 1, 1, 0],
  [ 0, 2, 0, ..., 0, 0, 0],
  [ 0, 0, 0, ..., 0, 2, 0],
  ...,
  [ 0, 0, 0, ..., 0, 2, 0],
  [ 0, 0, 2, ..., 0, 2, 0],
  [ 0, 0, 2, ..., 0, 0, 0]]]
So for the 0s, the value will be selected from the first batch, where 1 and 2 mean I want to select the values from the second and the third batch. Here are some of the visualizations of the indices, each color denoting another batch. I have tried to transpose the 4D tensor to match the dimensions of my indices, but it did not work. All it does is give me a copy of the dimensions I have tried to select. That is,
tposed = torch.transpose(fourD, 0,1)
print(indices.size(), outs.size(), tposed[:, indices].size())
outputs
torch.Size([16, 56, 56]) torch.Size([16, 3, 56, 56]) torch.Size([3, 16, 56, 56, 56, 56])
while the shape I need is torch.Size([16, 56, 56]) or torch.Size([16, 1, 56, 56]), and as an example, if I try to select the right values for only the first image in the batch with
fourD[0,indices].size()
I get a shape like
torch.Size([16, 56, 56, 56, 56])
Not to mention that I get an out of memory error when I try this on the whole tensor. I appreciate any help for using these indices to select either one of these three batches for each pixel in my images. Note: I have tried the option
outs[indices[:,None,:,:]].size()
and that returns
torch.Size([16, 1, 56, 56, 3, 56, 56])
Edit: torch.take does not help much since it treats the input tensor as a single-dimensional array.
Turns out there is a function in PyTorch that has the functionality I was searching for. torch.gather(fourD, 1, indices.unsqueeze(1)) did the job. Here is a beautiful explanation of what gather does.
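To spell out the shapes, a small sketch with random data standing in for the question's tensors:
import torch

fourD = torch.randn(16, 3, 56, 56)           # [16, 3, 56, 56]
indices = torch.randint(0, 3, (16, 56, 56))  # per-pixel choice among the 3 batches

out = torch.gather(fourD, 1, indices.unsqueeze(1))  # -> [16, 1, 56, 56]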
https://stackoverflow.com/questions/58471587/
Sampling point pairs from a grid in Pytorch
I need to sample point pairs from a grid in PyTorch. I have a tensor of size (1 x 500 x 1000). I also have a mask tensor of size (1 x 500 x 1000), denoting whether a point is valid or not. I want to sample 200k point pairs from this grid. In other words, I want to get the coordinates of the sampled point pairs as a tensor of size (200k x 4), denoting (x1, y1, x2, y2) for all 200k point pairs. All points in the pairs should be valid points. This will be repeated many times, so I need an efficient way of performing this procedure. What is an elegant way to implement this in PyTorch?
Not an expert here, but I did spend some time trying things out. Turns out operating on 1D array is a lot faster (method two). import time import torch class Timer(): def __init__(self): pass def __enter__(self): self.time = time.time() def __exit__(self, *exc): print(f'time used: {time.time() - self.time:.2f}s') # a = torch.rand([1,500,1000]) m = torch.randint(2, [1, 500, 1000]) # mask tensor valid_len = (m==1).nonzero().size()[0] # number of valid points rand_one = torch.randint(valid_len, [200000]) # sample 200k of random int rand_two = torch.randint(valid_len, [200000]) # sample 200k of random int # method one m0 = m == 1 # mask of shape torch.Size([1, 500, 1000]) m0 = m0.nonzero() # valid points of shape torch.Size([valid_len, 3]) m0 = m0[:, 1:] # reshape to shape torch.Size([valid_len, 2]) with Timer(): one0 = torch.index_select(m0, 0, rand_one) # take 200k valid points two0 = torch.index_select(m0, 0, rand_two) # take 200k valid points again coor0 = torch.cat([one0, two0], dim=1) # stack them up # >>> time used: 1.05s # method two m1 = m.reshape(-1) # reshape mask to torch.Size([500000]) m1 = m1==1 # mask of shape torch.Size([500000]) m1 = m1.nonzero() # valid points of shape torch.Size([valid_len, 1]) m1 = m1.reshape(-1) # valid points of shape torch.Size([valid_len]) with Timer(): one1 = m1.take(rand_one) # take 200k valid points two1 = m1.take(rand_two) # again # transform them to coordinates and stack them up coor1 = torch.stack([one1 // 1000, one1 % 1000, two1 // 1000, two1 % 1000], dim=1) # >>> time used: 0.07s assert torch.sum(coor0 == coor1) == 800000 # make sure consistent result cheers
https://stackoverflow.com/questions/58481270/
How to implement some trainable parameters in the model of Keras like nn.Parameters() in Pytorch?
I just want to implement some trainable parameters in my model with Keras. In PyTorch, we can do it by using torch.nn.Parameter() like below:
self.a = nn.Parameter(torch.ones(8))
self.b = nn.Parameter(torch.zeros(16,8))
I think that by doing this in PyTorch we can add some trainable parameters to the model. Now I want to know: how do I achieve similar operations in Keras? Any suggestions or advice are welcome! Thanks! :) p.s. I just wrote a custom layer in Keras as below:
class Mylayer(Layer):

    def __init__(self,input_dim,output_dim,**kwargs):
        self.input_dim = input_dim
        self.output_dim = output_dim
        super(Mylayer,self).__init__(**kwargs)

    def build(self):

        self.kernel = self.add_weight(name='pi',
                                      shape=(self.input_dim,self.output_dim),
                                      initializer='zeros',
                                      trainable=True)
        self.kernel_2 = self.add_weight(name='mean',
                                        shape=(self.input_dim,self.output_dim),
                                        initializer='ones',
                                        trainable=True)

        super(Mylayer,self).build()

    def call(self,x):
        return x,self.kernel,self.kernel_2
and I want to know: if I haven't changed the tensor that passes through the layer, do I still need to write the compute_output_shape() function?
You need to create the trainable weights in a custom layer:
class MyLayer(Layer):
    def __init__(self, my_args, **kwargs):
        #do whatever you need with my_args
        super(MyLayer, self).__init__(**kwargs)

    #you create the weights in build:
    def build(self, input_shape):
        #use the input_shape to infer the necessary shapes for weights
        #use self.whatever_you_registered_in_init to help you, like units, etc.
        self.kernel = self.add_weight(name='kernel',
                                      shape=the_shape_you_calculated,
                                      initializer='uniform',
                                      trainable=True)
        #create as many weights as necessary for this layer

        #build the layer - equivalent to self.built=True
        super(MyLayer, self).build(input_shape)

    #create the layer operation here
    def call(self, inputs):
        #do whatever operations are needed
        #example: return inputs * self.kernel #make sure the shapes are compatible

    #tell keras about the output shape of your layer
    def compute_output_shape(self, input_shape):
        #calculate the output shape based on the input shape and your layer's rules
        return calculated_output_shape
Now use your layer in the model. If you are using eager execution with tensorflow and creating a custom training loop, you can work pretty much the same way you do with PyTorch, and you can create weights outside layers with tf.Variable, passing them as parameters to the gradient calculation methods.
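A minimal sketch of that last point, free-standing trainable weights with tf.Variable and a GradientTape (the shapes mirror the question's; the loss computation is made up for illustration):
import tensorflow as tf

a = tf.Variable(tf.ones([8]), trainable=True)
b = tf.Variable(tf.random.normal([16, 8]), trainable=True)
opt = tf.keras.optimizers.SGD(0.01)

with tf.GradientTape() as tape:
    # any computation involving the variables
    loss = tf.reduce_sum(tf.matmul(b, tf.expand_dims(a, 1)) ** 2)

grads = tape.gradient(loss, [a, b])
opt.apply_gradients(zip(grads, [a, b]))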
https://stackoverflow.com/questions/58488106/
PIL Image.open displays image upside down
I'm working on predicting the number picture below with the MNIST dataset and a LeNet model. First, I show the test images with Image.open, and it displays the test images upside down.
from PIL import Image
import matplotlib.cm as cm
import pylab as pl
import numpy as np

img = Image.open('./test/2.png').convert('L')
img = np.invert(img) # convert to white on black
pl.imshow(np.asarray(img), origin='lower', cmap=cm.Greys_r)
pl.show()
Another issue is that the prediction accuracy seems very low. For example, the 2 here has been predicted as 4. Could someone help with that or explain? In my view, this number is much clearer than handwritten MNIST digits. Thanks a lot.
imshow just sees an array of data. So specifying origin='lower' means you're telling imshow that the origin of your data is in the lower corner. However, image data has its origin in the upper corner so you can either remove origin= completely (the default is 'upper') or specify 'upper'. pl.imshow(np.asarray(img), cmap=cm.Greys_r) or pl.imshow(np.asarray(img), origin='upper', cmap=cm.Greys_r)
https://stackoverflow.com/questions/58488983/
copy construct from a tensor: USER WARNING
I am creating a random tensor from a normal distribution and, since this tensor serves as a weight in the NN, to add the requires_grad attribute I use torch.tensor() as below:
import torch
input_dim, hidden_dim = 3, 5
norm = torch.distributions.normal.Normal(loc=0, scale=0.01)
W = norm.sample((input_dim, hidden_dim))
W = torch.tensor(W, requires_grad=True)
I am getting the user warning below:
UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
Is there an alternative way to achieve the above? Thanks
You can just set W.requires_grad to True import torch input_dim, hidden_dim = 3, 5 norm = torch.distributions.normal.Normal(loc=0, scale=0.01) W = norm.sample((input_dim, hidden_dim)) W.requires_grad = True
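Alternatively, following the warning's own suggestion, the in-place requires_grad_ method works too; a one-line sketch:
W = norm.sample((input_dim, hidden_dim)).requires_grad_(True)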
https://stackoverflow.com/questions/58491070/
Transform images to white on black and predict
I want to predict new customized images with the LeNet model trained from here. The customized images are black on white, so I need to convert them to white on black.
# Load & transform image
ori_img = Image.open('./test/2.png').convert('L')
img = np.invert(ori_img) #Transform images to white on black
t = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
img = torch.autograd.Variable(t(img).unsqueeze(0))
ori_img.close()
# Predict
model.eval()
output = model(img)
pred = output.data.max(1, keepdim=True)[1][0][0]
print('Prediction: {}'.format(pred))
The result I got:
TypeError Traceback (most recent call last) <ipython-input-182-abbffa2ce0d8> in <module> 7 transforms.Normalize((0.1307,), (0.3081,)) 8 ]) ----> 9 img = torch.autograd.Variable(t(img).unsqueeze(0)) 10 ori_img.close() ~/.local/lib/python3.6/site-packages/torchvision/transforms/transforms.py in __call__(self, img) 59 def __call__(self, img): 60 for t in self.transforms: ---> 61 img = t(img) 62 return img 63 ~/.local/lib/python3.6/site-packages/torchvision/transforms/transforms.py in __call__(self, img) 196 PIL Image: Rescaled image. 197 """ --> 198 return F.resize(img, self.size, self.interpolation) 199 200 def __repr__(self): ~/.local/lib/python3.6/site-packages/torchvision/transforms/functional.py in resize(img, size, interpolation) 236 """ 237 if not _is_pil_image(img): --> 238 raise TypeError('img should be PIL Image. Got {}'.format(type(img))) 239 if not (isinstance(size, int) or (isinstance(size, Iterable) and len(size) == 2)): 240 raise TypeError('Got inappropriate size arg: {}'.format(size))
TypeError: img should be PIL Image. Got <class 'numpy.ndarray'>
When I comment out img = np.invert(ori_img) I get no errors, but all prediction results are 2s. Could someone help? Thanks a lot.
You can use the function PIL.Image.fromarray to turn your numpy array back into a PIL Image (so the torchvision transforms accept it). Alternatively, skip numpy entirely and use the PIL.ImageOps.invert function to invert the colors directly on the PIL Image. Either way your img variable will be the right type and inverted.
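A minimal sketch of both options (paths copied from the question):
import numpy as np
from PIL import Image, ImageOps

ori_img = Image.open('./test/2.png').convert('L')

# Option 1: invert in numpy, then convert back to a PIL Image
img = Image.fromarray(np.invert(np.asarray(ori_img)))

# Option 2: invert directly on the PIL Image
img = ImageOps.invert(ori_img)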
https://stackoverflow.com/questions/58496139/
Pytorch RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0
I use code from here to train a model to predict printed style number from 0 to 9: idx_to_class = {0: "0", 1: "1", 2: "2", 3: "3", 4: "4", 5: "5", 6: "6", 7:"7", 8: "8", 9:"9"} def predict(model, test_image_name): transform = image_transforms['test'] test_image = Image.open(test_image_name) plt.imshow(test_image) test_image_tensor = transform(test_image) if torch.cuda.is_available(): test_image_tensor = test_image_tensor.view(1, 3, 224, 224).cuda() else: test_image_tensor = test_image_tensor.view(1, 3, 224, 224) with torch.no_grad(): model.eval() # Model outputs log probabilities out = model(test_image_tensor) ps = torch.exp(out) topk, topclass = ps.topk(1, dim=1) # print(topclass.cpu().numpy()[0][0]) print("Image class: ", idx_to_class[topclass.cpu().numpy()[0][0]]) predict(model, "path_of_test_image") But I get an error when try to use predict: Traceback (most recent call last): File "<ipython-input-12-f8636d3ba083>", line 26, in <module> predict(model, "/home/x/文档/Deep_Learning/pytorch/MNIST/test/2/QQ截图20191022093955.png") File "<ipython-input-12-f8636d3ba083>", line 9, in predict test_image_tensor = transform(test_image) File "/home/x/.local/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 61, in __call__ img = t(img) File "/home/x/.local/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 166, in __call__ return F.normalize(tensor, self.mean, self.std, self.inplace) File "/home/x/.local/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 217, in normalize tensor.sub_(mean[:, None, None]).div_(std[:, None, None]) RuntimeError: The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0 How could I fix it? Thanks.
I suspect your test_image has an additional alpha channel per pixel, thus it has 4 channels instead of only three. Try: test_image = Image.open(test_image_name).convert('RGB')
https://stackoverflow.com/questions/58496858/
Tensorflow 2.0 dataset and dataloader
I am a PyTorch user, and I am used to the data.dataset and data.dataloader APIs in PyTorch. I am trying to build the same model with TensorFlow 2.0, and I wonder whether there is an API that works similarly to these APIs in PyTorch. If there is no such API, how do people usually implement the data loading part in TensorFlow? I've used TensorFlow 1, but never had any experience with the dataset API; I hard-coded the data loading before. I hope there is something like overriding __getitem__ with only an index as input. Thanks much in advance.
When using the tf.data API, you will usually also make use of the map function. In PyTorch, your __getitem__ call basically fetches an element from your data structure given in __init__ and transforms it if necessary. In TF2.0, you do the same by initializing a Dataset using one of the Dataset.from_... functions (see from_generator, from_tensor_slices, from_tensors); this is essentially the __init__ part of a PyTorch Dataset. Then, you can call map to do the element-wise manipulations you would have in __getitem__. Tensorflow datasets are pretty much fancy iterators, so by design you don't access their elements using indices, but rather by traversing them. The guide on tf.data is very useful and provides a wide variety of examples.
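A minimal sketch of the correspondence (toy tensors, not your data):
import tensorflow as tf

features = tf.random.uniform((100, 3))
labels = tf.random.uniform((100,), maxval=2, dtype=tf.int32)

# __init__ equivalent: wrap the raw data in a Dataset
ds = tf.data.Dataset.from_tensor_slices((features, labels))

# __getitem__ equivalent: element-wise transformation via map
ds = ds.map(lambda x, y: (x * 2.0, y)).shuffle(100).batch(16)

# DataLoader equivalent: just iterate
for batch_x, batch_y in ds:
    pass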
https://stackoverflow.com/questions/58505880/
Pytorch Autograd: what does runtime error "grad can be implicitly created only for scalar outputs" mean
I am trying to understand PyTorch autograd in depth; I would like to observe the gradient of a simple tensor after going through a sigmoid function as below:
import torch
from torch import autograd

D = torch.arange(-8, 8, 0.1, requires_grad=True)

with autograd.set_grad_enabled(True):
    S = D.sigmoid()
S.backward()
My goal is to get D.grad but even before calling it I get the runtime error:
RuntimeError: grad can be implicitly created only for scalar outputs
I saw another post with a similar question, but the answer there does not apply to my question. Thanks
The error means you can only run .backward (with no arguments) on a unitary/scalar tensor. I.e. a tensor with a single element. For example, you could do T = torch.sum(S) T.backward() since T would be a scalar output. I posted some more information on using pytorch to compute derivatives of tensors in this answer.
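Alternatively, a sketch that keeps S non-scalar and supplies the upstream gradient explicitly:
S.backward(torch.ones_like(S))  # same effect as summing S first
print(D.grad)                   # sigmoid'(D), i.e. S * (1 - S)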
https://stackoverflow.com/questions/58510249/
Freezing layers in pre-trained bert model
How to freeze the last two layers in the above pre-trained model (dropout and classifier layers)? So that when the model is run, I will get a dense layer as output.
I would like to point you to the definition of BertForSequenceClassification, and you can easily avoid the dropout and classifier by using:
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.bert(input_ids) # this will give you the dense layer output, given your tokenized input ids
Why can you do the above? If you take a look at the constructor of BertForSequenceClassification:
def __init__(self, config):
    super(BertForSequenceClassification, self).__init__(config)
    self.num_labels = config.num_labels

    self.bert = BertModel(config)
    self.dropout = nn.Dropout(config.hidden_dropout_prob)
    self.classifier = nn.Linear(config.hidden_size, self.config.num_labels)

    self.init_weights()
As you can see, you just want to ignore the dropout and classifier layers. One more thing: freezing a layer and removing a layer are two different things. In your question, you mentioned that you want to freeze the classifier layer, but freezing a layer will not help you avoid it. Freezing means you do not want to train the layer.
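If what you actually want is freezing (keeping the layers but not training them), a minimal sketch:
# Freeze the classifier head; its weights will no longer be updated
for param in model.classifier.parameters():
    param.requires_grad = False

# Dropout has no parameters, so there is nothing to freeze there;
# it is disabled at inference time via model.eval()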
https://stackoverflow.com/questions/58510737/
Is .data still useful in pytorch 1.3 stable and what is the meaning of it?
Is .data still used in PyTorch 1.3 stable? If so, could you please point me to a reference? Thanks.
t = torch.randperm(8)
t.data
From PyTorch v0.4.0, calling y = x.data still has similar semantics. So y will be a Tensor that shares the same data with x, is unrelated to the computation history of x, and has requires_grad=False. However, .data can be unsafe in some cases. Any changes on x.data wouldn't be tracked by autograd, and the computed gradients would be incorrect if x is needed in a backward pass. A safer alternative is to use x.detach(), which also returns a Tensor that shares data with requires_grad=False, but will have its in-place changes reported by autograd if x is needed in backward. Reference: https://github.com/pytorch/pytorch/releases/tag/v0.4.0
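A minimal sketch of why .data is unsafe (toy values, assumed for illustration):
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x * x               # autograd saves x for the backward pass

x.data.fill_(10.0)      # invisible to autograd
y.backward()
print(x.grad)           # tensor([20.]) -- silently wrong, should be 4.0

# x.detach().fill_(10.0) instead would make backward() raise an error,
# because the in-place change bumps the tensor's version counter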
https://stackoverflow.com/questions/58514651/
Dimension mismatch while using Pytorch LSTM module
I have a pytorch pretrained model from which I am generating features/embeddings for some input sentences. The features are essentially torch objects. For example, a sample input_embedding (a list of torch objects) for one sentence looks like below
[tensor([-0.8264, 0.2524], device='cuda:0', grad_fn=<SelectBackward>)]
Now, I want to pass this embedding through a custom model which is fundamentally a bi-directional LSTM:
def custom_model(input_embedding):
    #initialize BiLSTM
    bilstm = torch.nn.LSTM(input_size=1, hidden_size=1, num_layers=1, batch_first=False, bidirectional=True)

    #feed input to bilstm object
    bi_output, bi_hidden = bilstm(input_embedding)

    # more code ....
    return F.softmax(x)
I wanted to pass my input_embedding to this custom model to get the prediction output like below:
for item in input_embedding:
    y_pred = biLSTM_single_sentence_student_model(item)
But it throws an error on the bi_output, bi_hidden = bilstm(input_embedding) line, saying:
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
Most likely I am not defining the bilstm object properly due to my lack of understanding of PyTorch's nn.LSTM input. Please suggest.
The input to an LSTM must be a 3d tensor of shape (seq_len, batch, input_size). In your example, you are basically providing a 2d tensor of shape (seq_len, input_size); as you mentioned, [-0.8264, 0.2524] is one example sentence. So, you can modify your example as follows.
# a list of 2 sentences
input_embedding = [
    torch.FloatTensor([[-0.8264], [0.2524]]),
    torch.FloatTensor([[-0.3259], [0.3564]])
]

for item in input_embedding:
    # item is a 2d tensor of shape `seq_len, input_size`
    # so, we unsqueeze it to make it 3d `seq_len, 1, input_size` where batch_size = 1
    y_pred = custom_model(item.unsqueeze(1))
    print(y_pred.size()) # 2, 1, 2
Hope this helps!
https://stackoverflow.com/questions/58515759/
Extracting fixed vectors from BioBERT without using terminal command?
If we want to use weights from pretrained BioBERT model, we can execute following terminal command after downloading all the required BioBERT files. os.system('python3 extract_features.py \ --input_file=trial.txt \ --vocab_file=vocab.txt \ --bert_config_file=bert_config.json \ --init_checkpoint=biobert_model.ckpt \ --output_file=output.json') The above command actually reads individual file containing the text, reads the textual content from it, and then writes the extracted vectors to another file. So, the problem with this is that it could not be scaled easily for very large data-sets containing thousands of sentences/paragraphs. Is there is a way to extract these features on the go (using an embedding layer) like it could be done for the word2vec vectors in PyTorch or TF1.3? Note: BioBERT checkpoints do not exist for TF2.0, so I guess there is no way it could be done with TF2.0 unless someone generates TF2.0 compatible checkpoint files. I will be grateful for any hint or help.
You can get the contextual embeddings on the fly, but the total time spent on getting the embeddings will always be the same. There are two options for how to do it: 1. import BioBERT into the Transformers package and use it in PyTorch (which I would do) or 2. use the original codebase. 1. Import BioBERT into the Transformers package The most convenient way of using pre-trained BERT models is the Transformers package. It was primarily written for PyTorch, but works also with TensorFlow. It does not have BioBERT out of the box, so you need to convert it from TensorFlow format yourself. There is a convert_tf_checkpoint_to_pytorch.py script that does that. People had some issues with this script and BioBERT (it seems to be resolved). After you convert the model, you can load it like this.
import torch
from transformers import *

# Load dataset, tokenizer, model from pretrained model/vocabulary
tokenizer = BertTokenizer.from_pretrained('directory_with_converted_model')
model = BertModel.from_pretrained('directory_with_converted_model')

# Call the model in a standard PyTorch way
embeddings = model(torch.tensor([tokenizer.encode("Cool biomedical tetra-hydro-sentence.", add_special_tokens=True)]))
2. Use the BioBERT codebase directly You can get the embeddings on the go basically using the code that is in extract_features.py. On lines 346-382, they initialize the model. You get the embeddings by calling estimator.predict(...). For that, you need to format the input. First, format the string (using the code on lines 326-337) and then call convert_examples_to_features on it.
https://stackoverflow.com/questions/58518980/
Masking tensor of same shape in PyTorch
Given an array and mask of same shapes, I want the masked output of the same shape and containing 0 where mask is False. For example, # input array img = torch.randn(2, 2) print(img) # tensor([[0.4684, 0.8316], # [0.8635, 0.4228]]) print(img.shape) # torch.Size([2, 2]) # mask mask = torch.BoolTensor(2, 2) print(mask) # tensor([[False, True], # [ True, True]]) print(mask.shape) # torch.Size([2, 2]) # expected masked output of shape 2x2 # tensor([[0, 0.8316], # [0.8635, 0.4228]]) Issue: The masking changes the shape of the output as follows: #1: shape changed img[mask] # tensor([0.8316, 0.8635, 0.4228])
Simply type-cast your boolean mask to an integer mask, followed by float to bring the mask to the same type as img. Perform element-wise multiplication afterwards.
masked_output = img * mask.int().float()
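Alternatively, a sketch with torch.where, which uses the boolean mask directly:
masked_output = torch.where(mask, img, torch.zeros_like(img))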
https://stackoverflow.com/questions/58521595/
Select/Mask different column index in every row
In pytorch I have a multi-dimensional tensor, call it X X = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], ...] Now I would like to select a different column index for each row like so indices = [[0], [1], [0], [2], ...] # now I expect following values to be returned: [[1], [5], [7], [12], ...] also I would like to achieve the opposite so that for the given indices I get [[2, 3], [4, 6], [8, 9], [10, 11]] Is there a "simple" way to achieve this without a for loop? I would be grateful for any ideas.
In fact the torch.gather function performs exactly this. For example a = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]) indices = torch.tensor([[0], [1], [0], [2]]) a.gather(1, indices) will exactly return tensor([[ 1], [ 5], [ 7], [12]]) I do not require the opposite anymore but for this I would propose just creating a mask with all ones and then setting the respective indices of the "gathering" tensor to 0 or just create a new "gathering" tensor which contains the respective opposite keys. For example: indices_opposite = [np.setdiff1d(np.arange(a.size(1)), i) for i in indices.numpy()]
https://stackoverflow.com/questions/58523290/
How to train features in different scales in deep learning model
I'm new to deep learning and I built a very simple model to try to train my data. I have two input features: sex and age. sex is 0 or 1, and age is between 25 and 60. The output is just 0, meaning this person does not have the disease, or 1, meaning they have it. However, when I train my model, the training loss does not decrease at all. This seems to be because my two features are on very different scales. So how can I fix this? Any suggestions would be greatly appreciated. My code is here:
class Net(nn.Module):
    def __init__(self):
        super(Net,self).__init__()
        self.fc1 = nn.Sequential(
            nn.Linear(2,50),
            nn.ReLU(),
            nn.Linear(50,2)
        )

    def forward(self,x):
        x = self.fc1(x)
        x = F.softmax(x, dim=1)
        return x

#Inputs
X = np.column_stack((sex,age))
X = torch.from_numpy(X).type(torch.FloatTensor)
y = torch.from_numpy(y).type(torch.LongTensor)

#Initialize the model
model = Net()
#Define loss criterion
criterion = nn.CrossEntropyLoss()
#Define the optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

epochs = 1000
losses = []
for i in range(epochs):
    y_pred = model.forward(X)
    #Compute Cross entropy loss
    loss = criterion(y_pred,y)
    #Add loss to the list
    losses.append(loss.item())
    #Clear the previous gradients
    optimizer.zero_grad()
    #Compute gradients
    loss.backward()
    #Adjust weights
    optimizer.step()
    _, predicted = torch.max(y_pred.data, 1)
    if i % 50 == 0:
        print(loss.item())
And the train loss looks like this
0.9273738861083984
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
0.6992899179458618
EDIT Thank you for your comments. Sorry I didn't explain my question clearly. This is part of my network, and my input data contains two parts: the first part is some signal data and I use a CNN model to train it, and it works well; the second part is what I mentioned above. My goal is to merge the two models to improve my accuracy. I've tried normalization and it looks like it works. I want to know: is it always necessary to do normalization when pre-processing data? Thank you!
An alternative. If the age has discrete values in the range (25-60), then one possible way would be to learn embeddings for those two attributes, sex and age. For example, class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.sex_embed = nn.Embedding(2, 20) self.age_embed = nn.Embedding(36, 50) self.fc1 = nn.Sequential( nn.Linear(70, 35), nn.ReLU(), nn.Linear(35, 2) ) def forward(self, x): # write the forward In the above example, I assume age values will be integer (25, 26, ..., 60), so for each possible value, we can learn a vector representation. So, I propose to learn a 20d representation of sex and 50d representation of age. You can change the dimensions and do experiments to find an optimal value.
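A sketch of how the forward pass might look under these assumptions (the age values are shifted to a 0-based index for the embedding lookup):
def forward(self, sex, age):
    # sex: LongTensor of 0/1; age: LongTensor with values in [25, 60]
    s = self.sex_embed(sex)        # (batch, 20)
    a = self.age_embed(age - 25)   # (batch, 50)
    x = torch.cat([s, a], dim=1)   # (batch, 70)
    return self.fc1(x)             # (batch, 2) logits for CrossEntropyLoss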
https://stackoverflow.com/questions/58525908/
How can I shuffle the labels of a dataset?
I have downloaded the MNIST dataset, using the following command: train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True) I now need to run some experiments on this dataset (MNIST), but shuffling the labels of the training set. How can I shuffle/reassign them randomly? I have tried the following: train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), target_transform=lambda y: torch.randint(0, 10, (1,)).item(), download=True) But I have noticed that what comes after the lambda function makes the labels shuffle during the training process, e.g. they change at every epoch. This way, I won't reach 100% training accuracy, which is what I am aiming for. How can I shuffle these labels in a way that is completely random, making sure that these labels won't change during the training process? Thank you!!
In case your goal is to create a random mapping of labels you would need to define the mapping before defining the target transform to keep the transform constant. Something like the following should do the trick import random label_mapping = list(range(10)) random.shuffle(label_mapping) train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), target_transform=lambda y: label_mapping[y], download=True) In order to get a new shuffle each epoch you would want to redefine the label mapping, training dataset, and dataloader each epoch. Update To instead generate a random label which is independent of the true label but consistent for a given index then you probably need to either do some very careful seeding or reimplement some functionality of the dataset class. For example, the latter case might look something like this import random class RandomMNIST(dsets.MNIST): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.targets = [random.randint(0, 9) for _ in range(len(self.data))] train_dataset = RandomMNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True) or equivalently import random train_dataset = dsets.MNIST(root='./data', train=True, transform=transforms.ToTensor(), download=True) train_dataset.targets = [random.randint(0, 9) for _ in range(len(train_dataset))]
https://stackoverflow.com/questions/58527130/
Deleting Rows in Torch Tensor
I have a torch tensor as follows - a = tensor( [[0.2215, 0.5859, 0.4782, 0.7411], [0.3078, 0.3854, 0.3981, 0.5200], [0.1363, 0.4060, 0.2030, 0.4940], [0.1640, 0.6025, 0.2267, 0.7036], [0.2445, 0.3032, 0.3300, 0.4253]], dtype=torch.float64) If the first value of each row is less than 0.2 then the whole row needs to be deleted. Thus I need the output like - tensor( [[0.2215, 0.5859, 0.4782, 0.7411], [0.3078, 0.3854, 0.3981, 0.5200], [0.2445, 0.3032, 0.3300, 0.4253]], dtype=torch.float64) I have tried to loop through the tensor and append the valid value to a new empty tensor but was not successful. Is there any way to get the results efficiently?
Code
a = torch.Tensor(
    [[0.2215, 0.5859, 0.4782, 0.7411],
     [0.3078, 0.3854, 0.3981, 0.5200],
     [0.1363, 0.4060, 0.2030, 0.4940],
     [0.1640, 0.6025, 0.2267, 0.7036],
     [0.2445, 0.3032, 0.3300, 0.4253]])

y = a[a[:, 0] >= 0.2]  # keep rows whose first value is at least 0.2
print(y)
Output
tensor([[0.2215, 0.5859, 0.4782, 0.7411],
        [0.3078, 0.3854, 0.3981, 0.5200],
        [0.2445, 0.3032, 0.3300, 0.4253]])
https://stackoverflow.com/questions/58530117/
Why is the input size of the MultiheadAttention in Pytorch Transformer module 1536?
When using the torch.nn.modules.transformer.Transformer module/object, the first layer is the encoder.layers.0.self_attn layer that is a MultiheadAttention layer, i.e. from torch.nn.modules.transformer import Transformer bumblebee = Transformer() bumblee.parameters [out]: <bound method Module.parameters of Transformer( (encoder): TransformerEncoder( (layers): ModuleList( (0): TransformerEncoderLayer( (self_attn): MultiheadAttention( (out_proj): Linear(in_features=512, out_features=512, bias=True) ) (linear1): Linear(in_features=512, out_features=2048, bias=True) (dropout): Dropout(p=0.1, inplace=False) (linear2): Linear(in_features=2048, out_features=512, bias=True) (norm1): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (norm2): LayerNorm((512,), eps=1e-05, elementwise_affine=True) (dropout1): Dropout(p=0.1, inplace=False) (dropout2): Dropout(p=0.1, inplace=False) ) And if we print out the size of the layer, we see: for name in bumblebee.encoder.state_dict(): print(name, '\t', bumblebee.encoder.state_dict()[name].shape) [out]: layers.0.self_attn.in_proj_weight torch.Size([1536, 512]) layers.0.self_attn.in_proj_bias torch.Size([1536]) layers.0.self_attn.out_proj.weight torch.Size([512, 512]) layers.0.self_attn.out_proj.bias torch.Size([512]) layers.0.linear1.weight torch.Size([2048, 512]) layers.0.linear1.bias torch.Size([2048]) layers.0.linear2.weight torch.Size([512, 2048]) layers.0.linear2.bias torch.Size([512]) layers.0.norm1.weight torch.Size([512]) layers.0.norm1.bias torch.Size([512]) layers.0.norm2.weight torch.Size([512]) layers.0.norm2.bias torch.Size([512]) It seems like 1536 is 512 * 3 and somehow the layers.0.self_attn.in_proj_weight parameter might be storing all three QKV tensors in the transformer architecture in one matrix. From https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/activation.py#L649 class MultiheadAttention(Module): def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None): super(MultiheadAttention, self).__init__() self.embed_dim = embed_dim self.kdim = kdim if kdim is not None else embed_dim self.vdim = vdim if vdim is not None else embed_dim self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim self.num_heads = num_heads self.dropout = dropout self.head_dim = embed_dim // num_heads assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads" if self._qkv_same_embed_dim is False: self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim)) self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim)) self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim)) else: self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim)) And the note in the docstring of the MultiheadAttention says: Note: if kdim and vdim are None, they will be set to embed_dim such that query, key, and value have the same number of features. Is that correct?
From the nn.Transformer definition with the default values, EncoderLayer is instantiated with d_model=512, nhead=8. The MultiheadAttention is instantiated with d_model, nhead equal to those values and k_dim, v_dim are left to the default value of None. If they are None, self._qkv_same_embed_dim at this line evaluates to True. When that happens, as you correctly pointed out self.in_proj_weight is defined as a Tensor of shape (3 x embed_dim, embed_dim). In short: yes, that's correct.
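To see the packing explicitly, a small sketch that splits the combined projection back into its three parts (the attribute path is taken from the printout in the question):
attn = bumblebee.encoder.layers[0].self_attn
q_w, k_w, v_w = attn.in_proj_weight.chunk(3, dim=0)  # each of shape (512, 512)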
https://stackoverflow.com/questions/58532911/
PyTorch, apply different functions element-wise
I defined a tensor like this t_shape = [4, 1] data = torch.rand(t_shape) I want to apply different functions to each row. funcs = [lambda x: x+1, lambda x: x**2, lambda x: x-1, lambda x: x*2] # each function for each row. I can do it with the following code d = torch.tensor([f(data[i]) for i, f in enumerate(funcs)]) How can I do it in a proper way with more advanced APIs defined in PyTorch?
I think your solution is good, but it won't work for arbitrary tensor shapes. You can slightly modify the solution as follows.
t_shape = [4, 10, 10]
data = torch.rand(t_shape)
funcs = [lambda x: x+1,
         lambda x: x**2,
         lambda x: x-1,
         lambda x: x*2]

# only change the following 2 lines
d = [f(data[i]) for i, f in enumerate(funcs)]
d = torch.stack(d, dim=0)
https://stackoverflow.com/questions/58533136/
Can I train two networks, one network which contains another network, at the same time?
I define two networks in my model. def Net_One(): conv2d conv2d ... def Net_Two(): Net_One(input) conv2d fc So my question is: when I train Net_Two, using back propagation optimization, will pytorch automatically train Net_One or not? Why?
First, by convention model class names should be CapWords: NetOne, NetTwo, but this doesn't hurt anything, it's just a convention. As for your question, it depends on the processing inside NetTwo. If the final loss of NetTwo has nothing to do with NetOne, then back propagation will not flow through NetOne, and therefore will not update NetOne's parameters. Otherwise, back propagation will compute NetOne's gradients and update its weights. For code samples:
# NetTwo's loss has nothing to do with NetOne:
class NetOne(nn.Module):
    def __init__(self):
        super(NetOne, self).__init__()
        ...
    def forward(self, inputs):
        ...

class NetTwo(nn.Module):
    def __init__(self):
        super(NetTwo, self).__init__()
        ...
    def forward(self, inputs):
        # in this processing, temp is never used by NetTwo's layers
        ...
        temp = NetOne(inputs)
        inputs = conv2d(inputs)
        ...
In the code above, temp is never used, so NetOne won't be updated when NetTwo is updating. But if temp is used by NetTwo.forward(), it'll be updated, like the following:
class NetTwo(nn.Module):
    def __init__(self):
        super(NetTwo, self).__init__()
        ...
    def forward(self, inputs):
        # in this processing, temp is used by NetTwo's layers
        ...
        temp = NetOne(inputs)
        inputs = conv2d(temp)
        ...
Did this answer your question?
https://stackoverflow.com/questions/58535447/
How do you efficiently sum the occurrences of a value in one array at positions in another array
I'm looking for an efficient, for-loop-avoiding solution to an array-related problem I'm having. I want to use a huge 1D array (A -> size = 250.000) of values between 0 and 40 for indexing in one dimension, and an array (B) of the same size with values between 0 and 9995 for indexing in a second dimension. The result should be an array of size (41, 9996) where each index counts how many times a value from array A occurs together with the corresponding value from array B. Example:
A = [0, 3, 2, 4, 3]
B = [1, 2, 2, 0, 2]
which should result in:
[[0, 1, 0],
 [0, 0, 0],
 [0, 0, 1],
 [0, 0, 2],
 [1, 0, 0]]
The dirty way is too slow as the amount of data is huge. What you would be able to do is:
out = np.zeros((41, 9996))
for i in A:
    for j in B:
        out[i,j] += 1
which will take 238.000 * 238.000 loops... I've tried this, which works partially:
out = np.zeros((41, 9996))
out[A,B] += 1
which generates a result with 1 everywhere, regardless of the number of times the values occur. Does anyone have a clue how to fix this? Thanks in advance!
You are looking for a sparse tensor: import torch A = [0, 3, 2, 4, 3] B = [1, 2, 2, 0, 2] idx = torch.LongTensor([A, B]) torch.sparse.FloatTensor(idx, torch.ones(idx.shape[1]), torch.Size([5,3])).to_dense() Output: tensor([[0., 1., 0.], [0., 0., 0.], [0., 0., 1.], [0., 0., 2.], [1., 0., 0.]]) You can also do the same with scipy sparse matrix: import numpy as np from scipy.sparse import coo_matrix coo_matrix((np.ones(len(A)), (np.array(A), np.array(B))), shape=(5,3)).toarray() output: array([[0., 1., 0.], [0., 0., 0.], [0., 0., 1.], [0., 0., 2.], [1., 0., 0.]]) Sometimes it is better to leave the matrix in its sparse representation, rather than forcing it to be "dense" again.
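For completeness, a sketch of the pure-NumPy fix for the out[A, B] += 1 attempt: np.add.at performs unbuffered in-place addition, so repeated index pairs accumulate correctly.
import numpy as np

A = np.array([0, 3, 2, 4, 3])
B = np.array([1, 2, 2, 0, 2])

out = np.zeros((5, 3))
np.add.at(out, (A, B), 1)   # counts duplicates, unlike out[A, B] += 1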
https://stackoverflow.com/questions/58541463/
How to use BERT just for ENTITY extraction from a Sequence without classification in the NER task?
My requirement here is given a sentence(sequence), I would like to just extract the entities present in the sequence without classifying them to a type in the NER task. I see that BertForTokenClassification for NER does the classification. Can this be adapted for just the extraction? Can BERT just be used to do entity extraction/identification?
Regardless of BERT, NER tagging is usually done with the IOB format (inside, outside, beginning) or something similar (often the end is also explicitly tagged). The inside and beginning tags contain the entity type. Something like this:
Alex B-PER
is O
going O
to O
Los B-LOC
Angeles I-LOC
If you modify your training data such that there is only one entity type, the model will only learn to detect the entities without knowing what type each entity is.
Alex B
is O
going O
to O
Los B
Angeles I
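A one-liner sketch for collapsing the tags in your training data (assuming plain IOB label strings like 'B-PER'):
def collapse_entity_types(tag):
    # 'B-PER' -> 'B', 'I-LOC' -> 'I', 'O' -> 'O'
    return tag.split('-')[0]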
https://stackoverflow.com/questions/58541811/
Custom LSTM model in Pytorch showing input size mismatch
I have a custom bidirectional LSTM model where the custom part is - extract the forward and backward last hidden state - concat those states - create a fully connected layer and pass it through softmax layer. The code looks like below: class customModel(nn.Module): def __init__(self, input_size, hidden_size, num_layers, num_classes): super(customModel, self).__init__() self.hidden_size = hidden_size self.num_layers = num_layers self.bilstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=False, bidirectional=True) self.fcl = nn.Linear(hidden_size, num_classes) def forward(self, x): # Set initial hidden and cell states h0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) c0 = torch.zeros(self.num_layers, x.size(0), self.hidden_size).to(device) # Forward propagate LSTM out, hidden = self.bilstm(x, (h0, c0)) # out: tensor of shape (batch_size, seq_length, hidden_size) #concat hidden state of forward and backword fw_bilstm = out[-1, :, :self.hidden_size] bk_bilstm = out[0, :, :self.hidden_size] concat_fw_bw = torch.cat((fw_bilstm, bk_bilstm), dim = 1) fc = nn.Linear(concat_fw_bw, num_classes) x = F.relu(fc(x)) return F.softmax(x) I use below parameters and input input_size = 2 hidden_size = 32 num_layers = 1 num_classes = 2 input_embedding = [ torch.FloatTensor([[-0.8264], [0.2524]]), torch.FloatTensor([[-0.3259], [0.3564]]) ] Then I create a model object model = customModel(input_size, hidden_size, num_layers, num_classes) Which then I use like below: for item in input_embedding: print(item.size()) for epoch in range(1): pred = model(item) print (pred) When I run it, I see for this line out, hidden = self.bilstm(x, (h0, c0)), it shows error RuntimeError: input must have 3 dimensions, got 2 I am not sure why the model is thinking that input must have 3 dimensions when I explicitly specified input_size=2 What am I missing?
You seem to be missing a (batch or sequence) dimension in your input. There is a difference between nn.LSTM and nn.LSTMCell. The former -- which is the one you use -- takes whole sequences as inputs. Therefore it needs 3-dimensional inputs of shape (seq_len, batch, input_size). Let's say you want to give those 4 sequences of letters (which you encode as one-hot vectors) as inputs in the form of a batch:
x0 = [a,b,c]
x1 = [c,d,e]
x2 = [e,f,g]
x3 = [h,i,j]
### input.size() should give you the following: (3,4,8)
The seq_len parameter is the size of the sequences: here 3. The input_size parameter is the size of each input vector: here, the input would be a one-hot vector of size 8. The batch is the number of sequences you put together: here there are 4 sequences. NB: It can be easier to grasp by putting the batch dimension first and setting batch_first=True. Also: if (h_0, c_0) is not provided, both h_0 and c_0 default to zero, so it's not useful to create them.
https://stackoverflow.com/questions/58549839/
After Deleting some Python Files, I can't install pytorch any more via pip
Unfortunately I deleted some Python files. I'm on macOS Catalina and I want to install PyTorch with the command: pip3 install torch If I enter this in my terminal I get
Building wheel for torch (setup.py) ... error ERROR: Command errored out with exit status 1: command: /Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"'; __file__='"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-wheel-qh35w11o --python-tag cp38 cwd: /private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/ Complete output (30 lines): running bdist_wheel running build running build_deps Traceback (most recent call last): File "<string>", line 1, in <module> File "/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py", line 225, in <module> setup(name="torch", version="0.1.2.post2", File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/setuptools/__init__.py", line 145, in setup return distutils.core.setup(**attrs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/core.py", line 148, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 192, in run self.run_command('build') File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/command/build.py", line 135, in run self.run_command(cmd_name) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap' ERROR: Failed building wheel for torch Running setup.py clean for torch ERROR: Command errored out with exit status 1: command: /Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"'; __file__='"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' clean --all cwd:
/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch Complete output (2 lines): running clean error: [Errno 2] No such file or directory: '.gitignore' ERROR: Failed cleaning build dir for torch Failed to build torch Installing collected packages: torch Running setup.py install for torch ... error ERROR: Command errored out with exit status 1: command: /Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"'; file='"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-record-cerpeh7h/install-record.txt --single-version-externally-managed --compile cwd: /private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/ Complete output (23 lines): running install running build_deps Traceback (most recent call last): File "", line 1, in File "/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py", line 225, in setup(name="torch", version="0.1.2.post2", File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/setuptools/init.py", line 145, in setup return distutils.core.setup(**attrs) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/core.py", line 148, in setup dist.run_commands() File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py", line 966, in run_commands self.run_command(cmd) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py", line 99, in run self.run_command('build_deps') File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/cmd.py", line 313, in run_command self.distribution.run_command(command) File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/distutils/dist.py", line 985, in run_command cmd_obj.run() File "/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap' ---------------------------------------- ERROR: Command errored out with exit status 1: /Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"'; file='"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-kb_zrdjk/torch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-record-cerpeh7h/install-record.txt --single-version-externally-managed --compile Check the logs for full command output. Is there a way to solve this problem? Do you need further Information? 
UPDATE I changed the environment setup (without Anaconda) and now I get this error message: ERROR: Command errored out with exit status 1: /Library/Frameworks/Python.framework/Versions/3.8/bin/python3.8 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-zelcwulm/torch/setup.py'"'"'; file='"'"'/private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-install-zelcwulm/torch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /private/var/folders/sb/4h99w30940zcbdvy_csrksxc0000gn/T/pip-record-i1_7akf_/install-record.txt --single-version-externally-managed --compile Check the logs for full command output.
Try running: pip3 install --upgrade --force-reinstall pip If that doesn't work, you can always try reinstalling Python from https://www.python.org/downloads
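A hedged note on the likely root cause (an inference from the traceback, not something stated in the question): the version string 0.1.2.post2 and the ModuleNotFoundError: No module named 'tools.nnwrap' indicate that pip found no prebuilt wheel for your interpreter (Python 3.8 was very new at the time) and fell back to building an ancient torch source distribution, which always fails this way. A sketch of what to check, with the version pin being a hypothetical example rather than a guaranteed fix:

python3 --version            # confirm which interpreter pip3 installs for
pip3 install --upgrade pip   # newer pip resolves wheels more reliably
pip3 install torch==1.4.0    # pick a release that publishes wheels for your Python version

If no release publishes wheels for your Python version, installing a slightly older Python (e.g. 3.7) for which wheels exist is usually the simplest way out.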
https://stackoverflow.com/questions/58554327/
Do all variables in the loss function have to be tensors with grads in pytorch?
I have the following function def msfe(ys, ts): ys=ys.detach().numpy() #output from the network ts=ts.detach().numpy() #Target (true labels) pred_class = (ys>=0.5) n_0 = sum(ts==0) #Number of true negatives n_1 = sum(ts==1) #Number of true positives FPE = sum((ts==0)[[bool(p) for p in (pred_class==1)]])/n_0 #False positive error FNE = sum((ts==1)[[bool(p) for p in (pred_class==0)]])/n_1 #False negative error loss= FPE**2+FNE**2 loss=torch.tensor(loss,dtype=torch.float64,requires_grad=True) return loss and I wonder if the autograd in Pytorch works properly, since ys and ts do not have the grad flag. So my question is: do all the variables (FPE, FNE, ys, ts, n_1, n_0) have to be tensors before optimizer.step() works, or is it okay that only the final value (loss) is?
All of the variables you want to optimise via optimizer.step() need to have gradient. In your case it would be y predicted by network, so you shouldn't detach it (from graph). Usually you don't change your targets, so those don't need gradients. You shouldn't have to detach them though, tensors by default don't require gradient and won't be backpropagated. Loss will have gradient if its ingredients (at least one of them) have gradient. Overall you rarely need to take care of it manually. BTW. don't use numpy with PyTorch, there is rarely ever the case to do so. You can perform most of the operations you can do on numpy array on PyTorch's tensor. BTW2. There is no such thing as Variable in pytorch anymore, only tensors which require gradient and those that don't. Non-differentiability 1.1 Problems with existing code Indeed, you are using functions which are not differentiable (namely >= and ==). Those will give you trouble only in the case of your outputs, as those require gradient (you can use == and >= for targets though). Below I have attached your loss function and outlined problems in it in the comments: # Gradient can't propagate if you detach and work in another framework # Most Python constructs should be fine, detaching will ruin it though. def msfe(outputs, targets): # outputs=outputs.detach().numpy() # Do not detach, no need to do that # targets=targets.detach().numpy() # No need for numpy either pred_class = outputs >= 0.5 # This one is non-differentiable # n_0 = sum(targets==0) # Do not use sum, there is pytorch function for that # n_1 = sum(targets==1) n_0 = torch.sum(targets == 0) # Those are not differentiable, but... n_1 = torch.sum(targets == 1) # It does not matter as those are targets # FPE = sum((targets==0)[[bool(p) for p in (pred_class==1)]])/n_0 # Do not use Python bools # FNE = sum((targets==1)[[bool(p) for p in (pred_class==0)]])/n_1 # Stay within PyTorch # Those two below are non-differentiable due to == sign as well FPE = torch.sum((targets == 0.0) * (pred_class == 1.0)).float() / n_0 FNE = torch.sum((targets == 1.0) * (pred_class == 0.0)).float() / n_1 # This is obviously fine loss = FPE ** 2 + FNE ** 2 # Loss should be a tensor already, don't do things like that # Gradient will not be propagated, you will have a new tensor # Always returning gradient of `1` and that's all # loss = torch.tensor(loss, dtype=torch.float64, requires_grad=True) return loss 1.2 Possible solution So, you need to get rid of 3 non-differentiable parts. You could in principle try to approximate it with continuous outputs from your network (provided you are using sigmoid as activation). Here is my take: def msfe_approximation(outputs, targets): n_0 = torch.sum(targets == 0) # Gradient does not flow through it, it's okay n_1 = torch.sum(targets == 1) # Same as above FPE = torch.sum((targets == 0) * outputs).float() / n_0 FNE = torch.sum((targets == 1) * (1 - outputs)).float() / n_1 return FPE ** 2 + FNE ** 2 Notice that to minimize FPE outputs will try to be zero on the indices where targets are zero. Similarly for FNE, if targets are 1, network will try to output 1 as well. Notice similarity of this idea to BCELoss (Binary CrossEntropy).
And lastly, example you can run this on, just for sanity check: if __name__ == "__main__": model = torch.nn.Sequential( torch.nn.Linear(30, 100), torch.nn.ReLU(), torch.nn.Linear(100, 200), torch.nn.ReLU(), torch.nn.Linear(200, 1), torch.nn.Sigmoid(), ) optimizer = torch.optim.Adam(model.parameters()) targets = torch.randint(high=2, size=(64, 1)) # random targets inputs = torch.rand(64, 30) # random data for _ in range(1000): optimizer.zero_grad() outputs = model(inputs) loss = msfe_approximation(outputs, targets) print(loss) loss.backward() optimizer.step() print(((model(inputs) >= 0.5) == targets).float().mean())
https://stackoverflow.com/questions/58560316/
need help understanding pytorch blitz math notation
I just came across this notation in the pytorch blitz tutorial and I don't know what the vertical line is. Does anyone have any suggestions on the notation?
The vertical line means the value of the left side variable given that the right side variable is a particular value. So, your given example means z_i is 27 when x_i is 1. Basically, it means 'LHS holds given RHS'
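For context, if this refers to the autograd section of the blitz tutorial (an assumption, since the screenshot is not reproduced here), the computation there is y = x + 2 and z = 3 * y * y, so z_i = 3(x_i + 2)^2, and z_i | x_i = 1 evaluates to 3 * (1 + 2)^2 = 27, which is exactly what the notation expresses.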
https://stackoverflow.com/questions/58575476/
How to create a completely (uniformly) random dataset on PyTorch
I need to run some experiments on custom datasets using pytorch. The question is, how can I create a dataset using torch.utils.data.DataLoader? I have two lists, one is called Values and has a datapoint tensor at every entry, and the other one is called Labels, that has the corresponding label. What I did is the following: for i in range(samples): dataset[i] = [values[i],labels[i]] So I have a list with datapoint and respective label, and then tried the following: dataset = torch.tensor(dataset).float() dataset = torch.utils.data.TensorDataset(dataset) data_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=100, shuffle=True, num_workers=4, pin_memory=True) But, first of all, I get the error "Not a sequence" in the torch.tensor command, and second, I'm not sure this is the right way of creating one. Any suggestion? Thank you very much!
You do not need to overload DataLoader, but rather create a Dataset for your data. For instance, from torch.utils.data import Dataset class MyDataset(Dataset): def __init__(self, values, labels): super(MyDataset, self).__init__() self.values = values # list of datapoint tensors self.labels = labels # list of corresponding labels def __len__(self): return len(self.values) # number of samples in the dataset def __getitem__(self, index): return self.values[index], self.labels[index]
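A minimal usage sketch, assuming values is the list of datapoint tensors and labels the list of labels from the question:

dataset = MyDataset(values, labels)
data_loader = torch.utils.data.DataLoader(dataset, batch_size=100, shuffle=True, num_workers=4, pin_memory=True)

for batch_values, batch_labels in data_loader:
    ...  # DataLoader indexes MyDataset and collates 100 (value, label) pairs per batch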
https://stackoverflow.com/questions/58579211/
PyTorch embedding layer raises "expected...cuda...but got...cpu" error
I'm working on translating a PyTorch model from CPU (where it works) to GPU (where it so far doesn't). The error message (clipped to the important bits) is as follows: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-12-a7bb230c924c> in <module> 1 model = FeedforwardTabularModel() 2 model.cuda() ----> 3 model.fit(X_train_sample.values, y_train_sample.values) <ipython-input-11-40b1edae7417> in fit(self, X, y) 100 for epoch in range(self.n_epochs): 101 for i, (X_batch, y_batch) in enumerate(batches): --> 102 y_pred = model(X_batch).squeeze() 103 # scheduler.batch_step() # Disabled due to a bug, see above. 104 loss = self.loss_fn(y_pred, y_batch) [...] /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1482 # remove once script supports set_grad_enabled 1483 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1484 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1485 1486 RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select Here is the full model definition: import torch from torch import nn import torch.utils.data # ^ https://discuss.pytorch.org/t/attributeerror-module-torch-utils-has-no-attribute-data/1666 class FeedforwardTabularModel(nn.Module): def __init__(self): super().__init__() self.batch_size = 512 self.base_lr, self.max_lr = 0.001, 0.003 self.n_epochs = 5 self.cat_vars_embedding_vector_lengths = [ (1115, 80), (7, 4), (3, 3), (12, 6), (31, 10), (2, 2), (25, 10), (26, 10), (4, 3), (3, 3), (4, 3), (23, 9), (8, 4), (12, 6), (52, 15), (22, 9), (6, 4), (6, 4), (3, 3), (3, 3), (8, 4), (8, 4) ] self.loss_fn = torch.nn.MSELoss() self.score_fn = torch.nn.MSELoss() # Layer 1: embeddings. self.embeddings = [] for (in_size, out_size) in self.cat_vars_embedding_vector_lengths: emb = nn.Embedding(in_size, out_size) self.embeddings.append(emb) # Layer 1: dropout. self.embedding_dropout = nn.Dropout(0.04) # Layer 1: batch normalization (of the continuous variables). self.cont_batch_norm = nn.BatchNorm1d(16, eps=1e-05, momentum=0.1) # Layers 2 through 9: sequential feedforward model. self.seq_model = nn.Sequential(*[ nn.Linear(in_features=215, out_features=1000, bias=True), nn.ReLU(), nn.BatchNorm1d(1000, eps=1e-05, momentum=0.1), nn.Dropout(p=0.001), nn.Linear(in_features=1000, out_features=500, bias=True), nn.ReLU(), nn.BatchNorm1d(500, eps=1e-05, momentum=0.1), nn.Dropout(p=0.01), nn.Linear(in_features=500, out_features=1, bias=True) ]) def forward(self, x): # Layer 1: embeddings. inp_offset = 0 embedding_subvectors = [] for emb in self.embeddings: index = torch.tensor(inp_offset, dtype=torch.int64).cuda() inp = torch.index_select(x, dim=1, index=index).long().cuda() out = emb(inp) out = out.view(out.shape[2], out.shape[0], 1).squeeze() embedding_subvectors.append(out) inp_offset += 1 out_cat = torch.cat(embedding_subvectors) out_cat = out_cat.view(out_cat.shape[::-1]) # Layer 1: dropout. out_cat = self.embedding_dropout(out_cat) # Layer 1: batch normalization (of the continuous variables). out_cont = self.cont_batch_norm(x[:, inp_offset:]) out = torch.cat((out_cat, out_cont), dim=1) # Layers 2 through 9: sequential feedforward model. out = self.seq_model(out) return out def fit(self, X, y): self.train() # TODO: set a random seed to invoke determinism. # cf. 
https://github.com/pytorch/pytorch/issues/11278 X = torch.tensor(X, dtype=torch.float32).cuda() y = torch.tensor(y, dtype=torch.float32).cuda() # The build of PyTorch on Kaggle has a bug that prevents us from using # CyclicLR with ADAM. Cf. GH#19003. # optimizer = torch.optim.Adam(model.parameters(), lr=max_lr) # scheduler = torch.optim.lr_scheduler.CyclicLR( # optimizer, base_lr=base_lr, max_lr=max_lr, # step_size_up=300, step_size_down=300, # mode='exp_range', gamma=0.99994 # ) optimizer = torch.optim.Adam(model.parameters(), lr=(self.base_lr + self.max_lr) / 2) batches = torch.utils.data.DataLoader( torch.utils.data.TensorDataset(X, y), batch_size=self.batch_size, shuffle=True ) for epoch in range(self.n_epochs): for i, (X_batch, y_batch) in enumerate(batches): y_pred = model(X_batch).squeeze() # scheduler.batch_step() # Disabled due to a bug, see above. loss = self.loss_fn(y_pred, y_batch) optimizer.zero_grad() loss.backward() optimizer.step() print( f"Epoch {epoch + 1}/{self.n_epochs}, Loss {loss.detach().numpy()}" ) def predict(self, X): self.eval() with torch.no_grad(): y_pred = model(torch.tensor(X, dtype=torch.float32).cuda()) return y_pred.squeeze() def score(self, X, y): y_pred = self.predict(X) y = torch.tensor(y, dtype=torch.float32).cuda() return self.score_fn(y, y_pred) model = FeedforwardTabularModel() model.cuda() model.fit(X_train_sample.values, y_train_sample.values) This type of error typically occurs when there is a tensor in the model that should be on GPU but is on CPU instead. But as far as I can tell, I've already placed .cuda() calls at all of the necessary places: every time a torch.tensor is declared, and running model.cuda() before model.fit. What is causing this error?
Someone on a separate forum provided the solution: Pytorch requires your to do self.module_name = module for things to work correctly. It's okay to keep them in a list. Just do something like setattr(self, 'emb_{}'.format(i), emb) for each step in that loop. Because I was managing my embedding layers in a list, whereas PyTorch requires all layers be registered as an attribute on the model object, they were not automatically moved over to GPU memory when model.cuda() was called. Tricky!
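For reference, a sketch of the idiomatic fix using nn.ModuleList, which registers each embedding as a submodule so that model.cuda() moves them all (sizes as in the question):

# Layer 1: embeddings, registered properly this time.
self.embeddings = nn.ModuleList(
    nn.Embedding(in_size, out_size)
    for (in_size, out_size) in self.cat_vars_embedding_vector_lengths
)

The forward pass can stay unchanged: nn.ModuleList iterates like a plain Python list, but its contents show up in model.parameters() and follow .cuda()/.cpu() calls.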
https://stackoverflow.com/questions/58581464/
Multi dimensional inputs in pytorch Linear method?
When building a simple perceptron neural network we usually pass a 2D input matrix of shape (batch_size, features) to a 2D weight matrix, similar to this simple neural network in numpy. I always assumed a Perceptron/Dense/Linear layer of a neural network only accepts an input of 2D format and outputs another 2D output. But recently I came across this pytorch model in which a Linear layer accepts a 3D input tensor and outputs another 3D tensor (o1 = self.a1(x)). import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class Net(nn.Module): def __init__(self): super().__init__() self.a1 = nn.Linear(4,4) self.a2 = nn.Linear(4,4) self.a3 = nn.Linear(9,1) def forward(self,x): o1 = self.a1(x) o2 = self.a2(x).transpose(1,2) output = torch.bmm(o1,o2) output = output.view(len(x),9) output = self.a3(output) return output x = torch.randn(10,3,4) y = torch.ones(10,1) net = Net() criterion = nn.MSELoss() optimizer = optim.Adam(net.parameters()) for i in range(10): net.zero_grad() output = net(x) loss = criterion(output,y) loss.backward() optimizer.step() print(loss.item()) These are the questions I have: Is the above neural network a valid one, i.e., will the model train correctly? Even after passing a 3D input x = torch.randn(10,3,4), why doesn't pytorch's nn.Linear show any error, and why does it give a 3D output?
Newer versions of PyTorch allow nn.Linear to accept an N-D input tensor; the only constraint is that the last dimension of the input tensor must equal in_features of the linear layer. The linear transformation is then applied to the last dimension of the tensor. For instance, if in_features=5 and out_features=10 and the input tensor x has dimensions 2-3-5, then the output tensor will have dimensions 2-3-10.
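A quick sketch checking this shape behaviour:

import torch
import torch.nn as nn

linear = nn.Linear(5, 10)  # in_features=5, out_features=10
x = torch.randn(2, 3, 5)   # last dimension matches in_features
print(linear(x).shape)     # torch.Size([2, 3, 10])

Conceptually, the leading dimensions are treated as one big batch: the same weight matrix is applied to every length-5 vector along the last dimension.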
https://stackoverflow.com/questions/58587057/
How to solve size mismatch of Multi Head Attention in pytorch?
I am learning how to code Multi Head Attention in pytorch now, and I can't solve the problem of a size mismatch when the input tensor has 4 dims. I refer to the def and class codes in http://nlp.seas.harvard.edu/2018/04/03/attention.html Sorry for the inconvenience. Can you give me advice? #attention def and class def clones(module, N): "Produce N identical layers." return nn.ModuleList([copy.deepcopy(module) for _ in range(N)]) def attention(query, key, value, mask=None, dropout=None): "Compute 'Scaled Dot Product Attention'" d_k = query.size(-1) scores = torch.matmul(query, key.transpose(-2, -1)) \ / math.sqrt(d_k) if mask is not None: scores = scores.masked_fill(mask == 0, -1e9) p_attn = F.softmax(scores, dim = -1) if dropout is not None: p_attn = dropout(p_attn) return torch.matmul(p_attn, value), p_attn # MultiHead Attention class class MultiHeadedAttention(nn.Module): def __init__(self, h, d_model, dropout=0.1): "Take in model size and number of heads." super(MultiHeadedAttention, self).__init__() assert d_model % h == 0 # We assume d_v always equals d_k self.d_k = d_model // h self.h = h self.linears = clones(nn.Linear(d_model, d_model), 4) self.attn = None self.dropout = nn.Dropout(p=dropout) def forward(self, query, key, value, mask=None): "Implements Figure 2" if mask is not None: # Same mask applied to all h heads. mask = mask.unsqueeze(1) nbatches = query.size(0) # 1) Do all the linear projections in batch from d_model => h x d_k query, key, value = \ [l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2) for l, x in zip(self.linears, (query, key, value))] # 2) Apply attention on all the projected vectors in batch. x, self.attn = attention(query, key, value, mask=mask, dropout=self.dropout) # 3) "Concat" using a view and apply a final linear. x = x.transpose(1, 2).contiguous() \ .view(nbatches, -1, self.h * self.d_k) return self.linears[-1](x) # create test_4_dim tensor X=torch.randn(10,5,64,64) X=X.view(X.shape[0],X.shape[1],X.shape[2]*X.shape[3]) #X:torch.Size([10, 5, 4096]) query_=X.transpose(2,1) key_=X value_=X print("query:",query_.size()) print("key:",key_.size()) print("value:",value_.size()) #query: torch.Size([10, 4096, 5]) #key: torch.Size([10, 5, 4096]) #value: torch.Size([10, 5, 4096]) multihead_testmodel= MultiHeadedAttention(h=4,d_model=4096,dropout=0.1) #print(multihead_model) output=multihead_testmodel(query=query_,key=key_,value=value_) print("model output:",output.size()) #size mismatch, m1: [40960 x 5], m2: [4096 x 4096] at #../aten/src/TH/generic/THTensorMath.cpp:197 In the case of tensor size torch.randn(5,64,64), this code has no error. X=torch.randn(5,64,64) #X=X.view(X.shape[0],X.shape[1],X.shape[2]*X.shape[3]) query_=X.transpose(2,1) key_=X value_=X print("query:",query_.size()) print("key:",key_.size()) print("value:",value_.size()) #query: torch.Size([5, 64, 64]) #key: torch.Size([5, 64, 64]) #value: torch.Size([5, 64, 64]) multihead_model= MultiHeadedAttention(h=4,d_model=64,dropout=0.1) temp_output=multihead_model(query=query_,key=key_,value=value_) print(temp_output.size()) #torch.Size([5, 64, 64])
Looks like the code expects to get the same dimensions for query, key, and value, so if you don't transpose it fixes the issue: query_ = X key_ = X value_ = X You're right that there needs to be a transpose for the attention to work, but the code already handles this by calling key.transpose(-2, -1) in the attention implementation.
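A sketch of the corrected call for the 4-dim case from the question (same flattening as before, just no transpose):

X = torch.randn(10, 5, 64, 64)
X = X.view(X.shape[0], X.shape[1], X.shape[2] * X.shape[3])  # torch.Size([10, 5, 4096])

multihead_testmodel = MultiHeadedAttention(h=4, d_model=4096, dropout=0.1)
output = multihead_testmodel(query=X, key=X, value=X)
print(output.size())  # torch.Size([10, 5, 4096])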
https://stackoverflow.com/questions/58588347/
What is the meaning of in-place in dropout
def dropout(input, p=0.5, training=True, inplace=False) inplace: If set to True, will do this operation in-place. I would like to ask what is the meaning of in-place in dropout. What does it do? Any performance changes when performing this operation? Thanks
Setting inplace=True will drop values in the input tensor itself, whereas with inplace=False you need to save the result of dropout(input) in some other variable to retrieve it. Example: import torch import torch.nn as nn inp = torch.tensor([1.0, 2.0, 3, 4, 5]) outplace_dropout = nn.Dropout(p=0.4) print(inp) output = outplace_dropout(inp) print(output) print(inp) # Notice that the input doesn't get changed here inplace_dropout = nn.Dropout(p=0.4, inplace=True) inplace_dropout(inp) print(inp) # Notice that the input is changed now Regarding performance: inplace=True avoids allocating a new tensor for the output, which can save memory, at the cost of overwriting the input (and in-place operations can raise errors during backprop if the original values are needed for gradient computation). PS: This is not related to what you have asked, but try not to use input as a variable name, since input is a Python built-in. I am aware that the Pytorch docs also do that, and it is kinda funny.
https://stackoverflow.com/questions/58589128/
PyTorch Confusion Matrix Plot
I have run into a problem plotting a confusion matrix: the upper and lower rows are rendered incorrectly. When I plot it, it looks like this. I think there is nothing wrong with my code, since I took it from this YouTube video exactly. def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues): """ This function prints and plots the confusion matrix. Normalization can be applied by setting `normalize=True`. """ if normalize: cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis] print("Normalized confusion matrix") else: print('Confusion matrix, without normalization') print(cm) plt.imshow(cm, interpolation='nearest', cmap=cmap) plt.title(title) plt.colorbar() tick_marks = np.arange(len(classes)) plt.xticks(tick_marks, classes, rotation=45) plt.yticks(tick_marks, classes) fmt = '.2f' if normalize else 'd' thresh = cm.max() / 2. for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])): plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black") plt.tight_layout() plt.ylabel('True label') plt.xlabel('Predicted label') # Graphical analytics cm = confusion_matrix(train_set.targets, train_preds.argmax(dim=1)) names = ('T-shirt/top','Trouser','Pullover','Dress','Coat','Sandal','Shirt','Sneaker','Bag','Ankle boot') plt.figure(figsize=(10,10)) plot_confusion_matrix(cm, names)
You can manually change the range of the y axis: plt.ylim(-0.5, len(names) - 0.5) For some reason the heuristic for estimating the axis range does not get that you are not only interested in the points you are plotting but also in the (-0.5; +0.5) surrounding on both axes. The lowest points have y coordinate 0; the top-most points have y coordinate len(names) - 1.
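One hedged usage note: plt.imshow puts row 0 at the top by default, so depending on your matplotlib version (3.1.1 had a known regression that clipped heatmap-style plots this way) you may want the limits in reversed order to undo the clipping without flipping the matrix vertically:

plot_confusion_matrix(cm, names)
plt.ylim(len(names) - 0.5, -0.5)  # bottom limit first keeps row 0 at the top
plt.show()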
https://stackoverflow.com/questions/58589349/
How can I replace the entries of a tensor with the appropriate ranking of each entry?
suppose that I have the following tensor: >> i = 3 >> j = 5 >> k = 2 >> sor = torch.randn(i,j,k) >> sor Out[20]: tensor([[[ 0.5604, -0.9675], [-1.0953, -0.5615], [ 0.4250, -0.9176], [-1.6188, -1.0217], [-0.0778, 1.9407]], [[-0.1034, -0.7925], [-0.2955, 0.8058], [-0.5349, 1.1040], [ 1.1240, 0.8249], [ 0.0827, -1.2471]], [[ 0.5924, 0.4777], [-2.4640, -1.9527], [-0.4519, 0.4788], [-0.2308, -0.2368], [-1.6786, 0.1360]]]) suppose that for every fixed i and j, I want to compute the numeric rank of elements across k, and replace the elements of the tensor sor with those ranks. For instance, from the example above, I want to change the entry [ 0.5604, -0.9675], which is sor[0,0,:] , into [1, 2], since 0.5604 > -0.9675 Thank you,
I think you are looking for torch.argsort: torch.argsort(sor, dim=2) Out[ ]: tensor([[[1, 0], [0, 1], [1, 0], [0, 1], [0, 1]], [[1, 0], [0, 1], [0, 1], [1, 0], [1, 0]], [[1, 0], [0, 1], [0, 1], [1, 0], [0, 1]]])
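Note that torch.argsort returns sorting indices rather than ranks. If you want literal 1-based ranks where the largest entry gets rank 1 (as in the [1, 2] example from the question), a common trick, sketched here under that reading, is a double argsort:

ranks = torch.argsort(torch.argsort(sor, dim=2, descending=True), dim=2) + 1
# for sor[0, 0, :] == tensor([0.5604, -0.9675]) this gives tensor([1, 2])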
https://stackoverflow.com/questions/58590420/
pytorch question regarding backward argument used in blitz tutorial
A pytorch question, regarding backward(). In the pytorch blitz tutorial copied and pasted below, they pass in a vector [0.1, 1.0, 0.0001] to backward(). I can intuitively guess why the shape of the vector [0.1, 1.0, 0.0001] passed in is [3], but I do not understand where the values 0.1, 1.0, 0.0001 come from. Another tutorial I looked at passes in ones, so that backward on a vector is done like this: L.backward(torch.ones(L.shape)) # copied from blitz tutorial Now in this case y is no longer a scalar. torch.autograd could not compute the full Jacobian directly, but if we just want the vector-Jacobian product, simply pass the vector to backward as argument: v = torch.tensor([0.1, 1.0, 0.0001], dtype=torch.float) y.backward(v) print(x.grad) If anyone can explain the reasoning for [0.1, 1.0, 0.0001], I would appreciate it.
As the document says, implicitly, grad cannot be created for non-scalar outputs. y is a non-scalar tensor, and you cannot call y.backward() directly. But you can pass a vector to backward to get the vector-Jacobian product. If you don't want to change the grads, you can pass in a vector whose elements are all ones. x = torch.tensor([2.,3.,4.], requires_grad=True) y = x**2 y.backward() # error y.backward(torch.tensor([1.,1.,1.])) # works x.grad # tensor([4.,6.,8.]) # y.backward(torch.tensor([2.,2.,2.])) # change the passed vector. # x.grad # tensor([8.,12.,16.])
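To address where 0.1, 1.0, 0.0001 come from: they are arbitrary weights v for the vector-Jacobian product, so x.grad[i] = sum_j v[j] * dy_j/dx_i; the tutorial authors most likely just chose values of different magnitudes for illustration. A worked check with the example above (y_i = x_i**2, so dy_i/dx_i = 2*x_i):

x = torch.tensor([2., 3., 4.], requires_grad=True)
y = x ** 2
y.backward(torch.tensor([0.1, 1.0, 0.0001]))
print(x.grad)  # tensor([0.4000, 6.0000, 0.0008]) == 2 * x * v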
https://stackoverflow.com/questions/58595195/
Optimizer parameters missing in Pytorch
I have 2 nets sharing one optimizer using different learning rate. Simple code shown as below: optim = torch.optim.Adam([ {'params': A.parameters(), 'lr': args.A}, {'params': B.parameters(), 'lr': args.B}]) Is this right? I ask this because when I check parameters in optimizer (using code below), I found only 2 parameters. for p in optim.param_groups: outputs = '' for k, v in p.items(): if k is 'params': outputs += (k + ': ' + str(v[0].shape).ljust(30) + ' ') else: outputs += (k + ': ' + str(v).ljust(10) + ' ') print(outputs) Only 2 parameters are printed: params: torch.Size([16, 1, 80]) lr: 1e-05 betas: (0.9, 0.999) eps: 1e-08 weight_decay: 0 amsgrad: False params: torch.Size([30, 10]) lr: 1e-05 betas: (0.9, 0.999) eps: 1e-08 weight_decay: 0 amsgrad: False Actually, 2 nets have more than 100 parameters. I thought all parameters will be printed. Why is this happening? Thank you!
You only print the first tensor of each param group: if k == 'params': outputs += (k + ': ' + str(v[0].shape).ljust(30) + ' ') # only v[0] is printed! Try and print all the parameters: if k == 'params': outputs += (k + ': ') for vp in v: outputs += (str(vp.shape).ljust(30) + ' ') (As a side note, comparing strings with is relies on interning and should be avoided; use == instead, as above.)
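A quick sanity check that every parameter of both nets made it into the optimizer (a sketch using the names from the question):

total = sum(len(g['params']) for g in optim.param_groups)
expected = len(list(A.parameters())) + len(list(B.parameters()))
print(total, expected)  # the two counts should match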
https://stackoverflow.com/questions/58600407/
How can I vectorize these nested loops in Python?
I am trying to vectorize the following: n = torch.zeros_like(x) for i in range(x.shape[0]): for j in range(x.shape[1]): for k in range(x.shape[2]): n[i, j, k] = p[i, x[i, j, k], j, k] I tried doing something like n = p[:, x, ...] but I just get an error that I ran out of memory, which isn't very helpful. I think the problem with this is that instead of getting the value of x at the correct index it is trying to index the entirety of x, but I am not sure how I would go about fixing that if that is the problem.
This looks like a perfect use-case for broadcasted fancy indices. np.ogrid is a valuable tool here, or you can manually reshape your ranges: i, j, k = np.ogrid[:x.shape[0], :x.shape[1], :x.shape[2]] n = p[i, x, j, k] This black magic works because the index into ogrid returns three arrays that broadcast into the same shape as x. Therefore the final extraction from p will have that shape. The indexing is trivial after that. Another way to write it is: i = np.arange(x.shape[0]).reshape(-1, 1, 1) j = np.arange(x.shape[1]).reshape(1, -1, 1) k = np.arange(x.shape[2]).reshape(1, 1, -1) n = p[i, x, j, k]
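Since the question works with torch tensors, here is the same broadcasting pattern in pure PyTorch (a sketch equivalent to the numpy version above; PyTorch's advanced indexing broadcasts integer index tensors the same way):

i = torch.arange(x.shape[0]).view(-1, 1, 1)
j = torch.arange(x.shape[1]).view(1, -1, 1)
k = torch.arange(x.shape[2]).view(1, 1, -1)
n = p[i, x, j, k]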
https://stackoverflow.com/questions/58601366/
how to add list of arrays (tensors)
I am defining a simple conv2d function to calculate the cross-correlation between input and kernel (both 2D tensors) as below: import torch def conv2D(X, K): h = K.shape[0] w = K.shape[1] ĥ = X.shape[0] - h + 1 ŵ = X.shape[1] - w + 1 Y = torch.zeros((ĥ, ŵ)) for i in range (ĥ): for j in range (ŵ): Y[i, j] = (X[i: i+h, j: j+w]*K).sum() return Y When X and K are rank-3 tensors, I calculate the conv2d for each channel and then add them together as below: def conv2D_multiple(X, K): cross = [] result = 0 for x, k in zip(X, K): cross.append(conv2D(x,k)) for t in cross: result += t return result To test my function: X_2 = torch.tensor([[[0, 1, 2], [3, 4, 5], [6, 7, 8]], [[1, 2, 3], [4, 5, 6], [7, 8, 9]]], dtype=torch.float32) K_2 = torch.tensor([[[0, 1], [2, 3]], [[1, 2], [3, 4]]], dtype=torch.float32) conv2D_multiple(X_2, K_2) The result is: tensor([[ 56., 72.], [104., 120.]]) The result is as expected; however, I believe my second for loop inside the conv2D_multiple(X, K) function is redundant. My question is how to sum (element wise) the tensors (arrays) in the list so that I can omit the second for loop.
Since your conv2D operates on a per slice behaviour, what you can do is allocate a 3D tensor so that when you use the first for loop, you store the results by taking each result and populating each slice. You can then sum along the dimension of the slices using PyTorch's built-in torch.sum operator on the tensor to get the same result. To make it palatable, I'll make the slice dimension dim=0. Therefore, replace cross from being an initial empty list to a Torch tensor that is 3D to allow you to store the intermediate results, then compress along the slice dimension by summing. We can get away with doing this as your initial implementation stored the intermediate results as a list of 2D tensors. To make it easier, go to 3D and allow PyTorch to sum along the slice axis. This will require that you define the correct dimensions for this 3D tensor first prior to looping: def conv2D_multiple(X, K): h = K.shape[1] w = K.shape[2] ĥ = X.shape[1] - h + 1 ŵ = X.shape[2] - w + 1 c = X.shape[0] cross = torch.zeros((c, ĥ, ŵ), dtype=torch.float32) for i, (x, k) in enumerate(zip(X, K)): cross[i] = conv2D(x,k) result = cross.sum(dim=0) return result Notice that for each slice you're iterating over between the input and kernel, instead of appending to a new list we directly place this into a slice in the intermediate tensor. Once you store these results, sum along the slice axis to finally compress it into what you expect. Running the new function above with your example inputs generates the same result. If this isn't a desired result for you, another way is to simply take the list of tensors you created, build the intermediate tensor out of that by stacking them all together using torch.stack and sum. By default it stacks along the first axis (dim=0): def conv2D_multiple(X, K): cross = [] result = 0 for x, k in zip(X, K): cross.append(conv2D(x,k)) cross = torch.stack(cross) result = cross.sum(dim=0) return result
https://stackoverflow.com/questions/58602039/
How to share the common parts of two models in pytorch?
I have a problem implementing a model with pytorch. I want to build two models, parts of which are shared, with the encoder part shared like this: Model1: input_1 -> encoder -> decoder_1 -> output_1 Model2: input_2 -> encoder -> decoder_2 -> output_2 What I want to do is make the two models use the encoder part together, but the decoder part is not the same. I read about parameter sharing, but it seems to be somewhat different from the requirements here. My own idea is to build a model that includes encoder, decoder_1, decoder_2 and then choose which decoder to use based on the input. I'm not sure about this method; if possible, can you give simple examples of sharing the common parts of two models?
You could do something like: import torch.nn as nn class SharedModel(nn.Module): def __init__(self, mode): super(SharedModel, self).__init__() self.mode = mode # use 1 or 2 self.encoder = ... self.decoder_1 = ... self.decoder_2 = ... def forward(self, x): x = self.encoder(x) if self.mode == 1: x = self.decoder_1(x) elif self.mode == 2: x = self.decoder_2(x) else: raise ValueError("Unknown mode.") return x
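An alternative sketch, in case you prefer two separate model objects (Encoder, Decoder1, and Decoder2 are hypothetical stand-ins for your actual modules): since parameters are shared whenever the same module instance is reused, you can build both models around one encoder object:

encoder = Encoder()
model1 = nn.Sequential(encoder, Decoder1())
model2 = nn.Sequential(encoder, Decoder2())
# model1[0] and model2[0] are the same object, so gradients from
# both models accumulate into the same encoder weights

Both approaches share weights equally well; the single-module version above just keeps everything in one class.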
https://stackoverflow.com/questions/58603478/
Change the image size and range
I want to ask about data transforms: if I have an image of size 28 * 28 and I want to resize it to be 32 * 32, I know that this could be done with transforms.Resize(), but I'm not sure how. Also, for the normalization: I want it to be within the range of [-1,1], whereas previously I had it within [0,1] using transforms.Normalize((0.485,0.456,0.406),(0.229,0.224,0.225))
Don't rage, it's gonna be fine. Resizing MNIST to 32x32 height x width can be done like so: import tempfile import torchvision dataset = torchvision.datasets.MNIST( root=tempfile.gettempdir(), download=True, train=True, # Simply put the size you want in Resize (can be tuple for height, width) transform=torchvision.transforms.Compose( [torchvision.transforms.Resize(32), torchvision.transforms.ToTensor()] ), ) print(dataset[0][0].shape) # 1, 32, 32 (channels, width, height) When it comes to normalization, you can see PyTorch's per-channel normalization source here. It depends whether you want it per-channel or in another form, but something along those lines should work (see wikipedia for the formula of the normalization; here it's applied per-channel): import dataclasses import typing import torch @dataclasses.dataclass class Normalize: maximum: typing.Tuple minimum: typing.Tuple low: int = -1 high: int = 1 def __call__(self, tensor): maximum = torch.as_tensor(self.maximum, dtype=tensor.dtype, device=tensor.device) minimum = torch.as_tensor(self.minimum, dtype=tensor.dtype, device=tensor.device) return self.low + ( (tensor - minimum[:, None, None]) * (self.high - self.low) ) / (maximum[:, None, None] - minimum[:, None, None]) You would have to provide a Tuple of minimum values and a Tuple of maximum values (one value per channel for both), just like for standard PyTorch's torchvision normalization. You could calculate those from data; for MNIST you could calculate them like this: def per_channel_op(data, op=torch.max): per_sample, _ = op(data, axis=0) per_width, _ = op(per_sample, axis=1) per_height, _ = op(per_width, axis=1) return per_height # Unsqueeze to add superficial channel for MNIST # Divide cause they are uint8 type by default data = dataset.data.unsqueeze(1).float() / 255 # Maximum over samples maximum = per_channel_op(data) # value per channel, here minimum = per_channel_op(data, op=torch.min) # only one value cause MNIST And finally, to apply normalization on MNIST (watch out, as those will only have -1, 1 values as all pixels are black and white; this will act differently on datasets like CIFAR etc.): dataset = torchvision.datasets.MNIST( root=tempfile.gettempdir(), download=True, train=True, # Simply put the size you want in Resize (can be tuple for height, width) transform=torchvision.transforms.Compose( [ torchvision.transforms.Resize(32), torchvision.transforms.ToTensor(), # Apply with Lambda your custom transformation torchvision.transforms.Lambda(Normalize((maximum,), (minimum,))), ] ), )
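A hedged shortcut for the common [-1, 1] case: if the tensor is already in [0, 1] (which ToTensor guarantees), the built-in per-channel normalization with mean 0.5 and std 0.5 maps it straight to [-1, 1], since (x - 0.5) / 0.5 = 2x - 1:

transform = torchvision.transforms.Compose([
    torchvision.transforms.Resize(32),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize((0.5,), (0.5,)),  # one value per channel; MNIST has one channel
])

The custom Normalize class above is only needed when your data is not already in a known fixed range.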
https://stackoverflow.com/questions/58606442/
how to get jacobian with pytorch for log probability of multivariate normal distribution
I've drawn samples from a multivariate normal distribution and would like to get the gradient of their log probability with respect to the mean. Since there are many samples, this requires a Jacobian: import torch mu = torch.ones((2,), requires_grad=True) sigma = torch.eye(2) dist = torch.distributions.multivariate_normal.MultivariateNormal(mu, sigma) num_samples=10 samples = dist.sample((num_samples,)) logprobs = dist.log_prob(samples) Now I would like to get the derivative of each entry in logprobs with respect to each entry in mu. A simple solution is a python loop: grads = [] for logprob in logprobs: grad = torch.autograd.grad(logprob, mu, retain_graph=True) grads.append(grad) If you stack the grads, the result is the desired Jacobian. Is there also built-in and vectorized support for this? Related questions/internet resources: This is a huge topic, there are lots of related posts. Nevertheless, I think that this specific question (regarding distributions) hasn't been answered yet: This question is basically the same as mine (but without example code and solution attempt), sadly it is unanswered: Pytorch custom function jacobian gradient This question shows the calculation of a jacobian in pytorch, but I don't think the solution is applicable to my problem: Pytorch most efficient Jacobian/Hessian calculation It requires stacking the input in a way that seems incompatible with the distribution. I couldn't make it work. This gist has some code snippets for Jacobians. In principle they are similar to the approach from the question above.
PyTorch 1.5.1 introduced a torch.autograd.functional.jacobian function. This computes the Jacobians of a function w.r.t. the input tensors. Since jacobian requires a python function as the first argument, using it requires some code restructuring. import torch torch.manual_seed(0) # for repeatable results mu = torch.ones((2,), requires_grad=True) sigma = torch.eye(2) num_samples = 10 def f(mu): dist = torch.distributions.multivariate_normal.MultivariateNormal(mu, sigma) samples = dist.sample((num_samples,)) logprobs = dist.log_prob(samples) return logprobs grads = torch.autograd.functional.jacobian(f, mu) print(grads) tensor([[-1.1258, -1.1524], [-0.2506, -0.4339], [ 0.5988, -1.5551], [-0.3414, 1.8530], [ 0.4681, -0.1577], [ 1.4437, 0.2660], [ 1.3894, 1.5863], [ 0.9463, -0.8437], [ 0.9318, 1.2590], [ 2.0050, 0.0537]])
https://stackoverflow.com/questions/58612113/
PyTorch DataLoader returns the batch as a list with the batch as the only entry. How is the best way to get a tensor from my DataLoader
I currently have the following situation where I want to use DataLoader to batch a numpy array: import numpy as np import torch import torch.utils.data as data_utils # Create toy data x = np.linspace(start=1, stop=10, num=10) x = np.array([np.random.normal(size=len(x)) for i in range(100)]) print(x.shape) # >> (100,10) # Create DataLoader input_as_tensor = torch.from_numpy(x).float() dataset = data_utils.TensorDataset(input_as_tensor) dataloader = data_utils.DataLoader(dataset, batch_size=100, ) batch = next(iter(dataloader)) print(type(batch)) # >> <class 'list'> print(len(batch)) # >> 1 print(type(batch[0])) # >> class 'torch.Tensor'> I expect the batch to already be a torch.Tensor. As of now I index the batch like so, batch[0], to get a Tensor, but I feel this is not really pretty and makes the code harder to read. I found that the DataLoader takes a batch processing function called collate_fn. However, setting data_utils.DataLoader(..., collate_fn=lambda batch: batch[0]) only changes the list to a tuple (tensor([ 0.8454, ..., -0.5863]),) where the only entry is the batch as a Tensor. You would help me a lot by helping me find out how to elegantly transform the batch to a tensor (even if this would include telling me that indexing the single entry in batch is okay).
Sorry for the inconvenience with my answer. Actually, you don't have to create a Dataset from your tensor; you can pass a torch.Tensor directly, as it implements __getitem__ and __len__, so this is sufficient: import numpy as np import torch import torch.utils.data as data_utils # Create toy data x = np.linspace(start=1, stop=10, num=10) x = np.array([np.random.normal(size=len(x)) for i in range(100)]) # Create DataLoader dataset = torch.from_numpy(x).float() dataloader = data_utils.DataLoader(dataset, batch_size=100) batch = next(iter(dataloader))
https://stackoverflow.com/questions/58612401/
KeyError when enumerating over dataloader
I'm trying to iterate over a pytorch dataloader initialized as follows: trainDL = torch.utils.data.DataLoader(X_train,batch_size=BATCH_SIZE, shuffle=True, **kwargs) where X_train is a pandas dataframe like this one: So, I'm not being able to do the following statement, since I'm getting a KeyError in the 'enumerate': for batch_idx, (data, _) in enumerate(trainDL): {stuff} has anyone a clue of what's happening? EDIT: The error I get is: KeyError Traceback (most recent call last) ~/.local/share/virtualenvs/Pipenv-l_wD1rT4/lib/python3.6/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2896 try: -> 2897 return self._engine.get_loc(key) 2898 except KeyError: pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: 40592 During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) <ipython-input-63-95142e0748bb> in <module> ----> 1 for batch_idx, (data, _) in enumerate(trainDL): 2 print(".") ~/.local/share/virtualenvs/Pipenv-l_wD1rT4/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self) 344 def __next__(self): 345 index = self._next_index() # may raise StopIteration --> 346 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 347 if self._pin_memory: 348 data = _utils.pin_memory.pin_memory(data) ~/.local/share/virtualenvs/Pipenv-l_wD1rT4/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] ~/.local/share/virtualenvs/Pipenv-l_wD1rT4/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) 42 def fetch(self, possibly_batched_index): 43 if self.auto_collation: ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] 45 else: 46 data = self.dataset[possibly_batched_index] ~/.local/share/virtualenvs/Pipenv-l_wD1rT4/lib/python3.6/site-packages/pandas/core/frame.py in __getitem__(self, key) 2993 if self.columns.nlevels > 1: 2994 return self._getitem_multilevel(key) -> 2995 indexer = self.columns.get_loc(key) 2996 if is_integer(indexer): 2997 indexer = [indexer] ~/.local/share/virtualenvs/Pipenv-l_wD1rT4/lib/python3.6/site-packages/pandas/core/indexes/base.py in get_loc(self, key, method, tolerance) 2897 return self._engine.get_loc(key) 2898 except KeyError: -> 2899 return self._engine.get_loc(self._maybe_cast_indexer(key)) 2900 indexer = self.get_indexer([key], method=method, tolerance=tolerance) 2901 if indexer.ndim > 1 or indexer.size > 1: pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/index.pyx in pandas._libs.index.IndexEngine.get_loc() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.get_item() KeyError: 40592
You have to create torch.utils.data.Dataset wrapping your dataset. For example: from torch.utils.data import Dataset class PandasDataset(Dataset): def __init__(self, dataframe): self.dataframe = dataframe def __len__(self): return len(self.dataframe) def __getitem__(self, index): return self.dataframe.iloc[index] Pass this object to DataLoader instantiated by your pandas dataframe and you should be fine. Example usage with DataLoader: import pandas as pd df = pd.read_csv("data.csv") dataset = PandasDataset(df) dataloader = torch.utils.data.DataLoader(dataset, batch_size=16) for sample in dataloader: ...
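One hedged caveat: __getitem__ here returns a pandas Series, which the default collate function may not know how to batch. If you hit collation errors, returning a numpy array (or splitting features and label explicitly) usually fixes it; a sketch, assuming all columns are numeric:

def __getitem__(self, index):
    row = self.dataframe.iloc[index]
    return row.values.astype('float32')  # numpy arrays collate into float tensors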
https://stackoverflow.com/questions/58612453/
PyTorch C++ Extensions: Accessing data for Half Tensors
I'm trying to write a C++/CUDA extension for PyTorch using the C++ Tensor API, and I would like my code to work with both float32 and float16 (half precision). I'm not sure how to access the data pointer for half tensors coming from Python. Here's how I do it for float tensors: // Access data pointer for float Tensor A torch::Tensor A; float* ptr = A.data<float>(); Here's what I've tried for half tensors: // CUDA float 16 type // undefined symbol: _ZNK2at6Tensor4dataI6__halfEEPT_v A.data<__half>(); // PyTorch float16 type // error: no instance of function template "at::Tensor::data" A.data<torch::ScalarType::Half>(); // Casting to __half* // This compiles but throws an error if the requested pointer type doesn't match the Tensor type: // RuntimeError: expected scalar type Float but found Half (__half*)(A.data<float>()); I tried looking into the C++ api source code, but couldn't find anything else that looks like a float16 type. System information: Python 3.6.2 PyTorch 1.0.1
The correct type turned out to be at::Half.
https://stackoverflow.com/questions/58613427/
How to make prediction from train Pytorch and PytorchText model?
General speaking, after I have successfully trained a text RNN model with Pytorch, using PytorchText to leverage data loading on an origin source, I would like to test with other data sets (a sort of blink test) that are from different sources but the same text format. First I defined a class to handle the data loading. class Dataset(object): def __init__(self, config): # init what I need def load_data(self, df: pd.DataFrame, *args): # implementation below # Data format like `(LABEL, TEXT)` def load_data_but_error(self, df: pd.DataFrame): # implementation below # Data format like `(TEXT)` Here is the detail of load_data which I load data that trained successfully. TEXT = data.Field(sequential=True, tokenize=tokenizer, lower=True, fix_length=self.config.max_sen_len) LABEL = data.Field(sequential=False, use_vocab=False) datafields = [(label_col, LABEL), (data_col, TEXT)] # split my data to train/test train_df, test_df = train_test_split(df, test_size=0.33, random_state=random_state) train_examples = [data.Example.fromlist(i, datafields) for i in train_df.values.tolist()] train_data = data.Dataset(train_examples, datafields) # split train to train/val train_data, val_data = train_data.split(split_ratio=0.8) # build vocab TEXT.build_vocab(train_data, vectors=Vectors(w2v_file)) self.word_embeddings = TEXT.vocab.vectors self.vocab = TEXT.vocab test_examples = [data.Example.fromlist(i, datafields) for i in test_df.values.tolist()] test_data = data.Dataset(test_examples, datafields) self.train_iterator = data.BucketIterator( (train_data), batch_size=self.config.batch_size, sort_key=lambda x: len(x.title), repeat=False, shuffle=True) self.val_iterator, self.test_iterator = data.BucketIterator.splits( (val_data, test_data), batch_size=self.config.batch_size, sort_key=lambda x: len(x.title), repeat=False, shuffle=False) Next is my code (load_data_but_error) to load others source but causing error TEXT = data.Field(sequential=True, tokenize=tokenizer, lower=True, fix_length=self.config.max_sen_len) datafields = [('title', TEXT)] examples = [data.Example.fromlist(i, datafields) for i in df.values.tolist()] blink_test = data.Dataset(examples, datafields) self.blink_test = data.BucketIterator( (blink_test), batch_size=self.config.batch_size, sort_key=lambda x: len(x.title), repeat=False, shuffle=True) When I was executing code, I had an error AttributeError: 'Field' object has no attribute 'vocab' which has a question at here but it doesn't like my situation as here I had vocab from load_data and I want to use it for blink tests. My question is what the correct way to load and feed new data with a trained PyTorch model for testing current model is?
What I needed was: to keep the TEXT field built in load_data and reuse it in load_data_but_error by assigning it to a class variable (e.g. self.TEXT), and to add train=True to the data.BucketIterator in the load_data_but_error function.
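A sketch of what that looks like, using the names from the question:

# in load_data, after TEXT.build_vocab(...):
self.TEXT = TEXT  # keep the fitted field (and its vocab) on the instance

# in load_data_but_error, reuse the fitted field instead of creating a new one:
datafields = [('title', self.TEXT)]
examples = [data.Example.fromlist(i, datafields) for i in df.values.tolist()]
blink_test = data.Dataset(examples, datafields)
self.blink_test = data.BucketIterator(blink_test, batch_size=self.config.batch_size, sort_key=lambda x: len(x.title), repeat=False, shuffle=True, train=True)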
https://stackoverflow.com/questions/58613467/
Custom distance loss function in Pytorch?
I want to implement the following distance loss function in pytorch. I was following this https://discuss.pytorch.org/t/custom-loss-functions/29387/4 thread from the pytorch forum np.linalg.norm(output - target) # where output.shape = [1, 2] and target.shape = [1, 2] So I have implemented the loss function like this def my_loss(output, target): loss = torch.tensor(np.linalg.norm(output.detach().numpy() - target.detach().numpy())) return loss with this loss function, calling backwards gives runtime error RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn My entire code looks like this model = nn.Linear(2, 2) x = torch.randn(1, 2) target = torch.randn(1, 2) output = model(x) loss = my_loss(output, target) loss.backward() <----- Error here print(model.weight.grad) PS: I am aware of the pairwise loss of pytorch but due to some limitation of it, I have to implement it myself. Following the pytorch source code I have tried the following, class my_function(torch.nn.Module): # forgot to define backward() def forward(self, output, target): loss = torch.tensor(np.linalg.norm(output.detach().numpy() - target.detach().numpy())) return loss model = nn.Linear(2, 2) x = torch.randn(1, 2) target = torch.randn(1, 2) output = model(x) criterion = my_function() loss = criterion(output, target) loss.backward() print(model.weight.grad) And I get the Run time error RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn How can I implement the loss function correctly?
This happens because, in the loss function, you are detaching tensors. You had to detach because you wanted to use np.linalg.norm. This breaks the graph and you get the error that tensors don't have a grad fn. You can replace loss = torch.tensor(np.linalg.norm(output.detach().numpy() - target.detach().numpy())) with the equivalent torch operation: loss = torch.norm(output - target) This should work fine.
https://stackoverflow.com/questions/58613698/
PyTorch Softmax Output Doesn't Sum to 1
Cross posting my question from the PyTorch forum: I started receiving negative KL divergences between a target Dirichlet distribution and my model’s output Dirichlet distribution. Someone online suggested that this might be indicative that the parameters of the Dirichlet distribution don’t sum to 1. I thought this was ridiculous since the output of the model is passed through output = F.softmax(self.weights(x), dim=1) But after looking into it more closely, I found that torch.all(torch.sum(output, dim=1) == 1.) returns False! Looking at the problematic row, I see that it is tensor([0.0085, 0.9052, 0.0863], grad_fn=<SelectBackward>). But torch.sum(output[5]) == 1. produces tensor(False). What am I misusing about softmax such that output probabilities do not sum to 1? This is PyTorch version 1.2.0+cpu. Full model is copied below: import torch import torch.nn as nn import torch.nn.functional as F def assert_no_nan_no_inf(x): assert not torch.isnan(x).any() assert not torch.isinf(x).any() class Network(nn.Module): def __init__(self): super().__init__() self.weights = nn.Linear( in_features=2, out_features=3) def forward(self, x): output = F.softmax(self.weights(x), dim=1) assert torch.all(torch.sum(output, dim=1) == 1.) assert_no_nan_no_inf(x) return output
This is most probably due to floating point numerical errors from finite precision. Instead of checking strict equality, you should check that the deviation is within an acceptable limit (e.g. via the mean square error or a norm). For example: I get torch.norm(output.sum(dim=1)-1)/N to be less than 1e-8, where N is the batch size.
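A sketch of the tolerance-based check, replacing the strict equality in the model's assert:

sums = output.sum(dim=1)
assert torch.allclose(sums, torch.ones_like(sums))  # default rtol/atol absorb float32 rounding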
https://stackoverflow.com/questions/58615923/
Install pytorch from the source using pip
I am trying to install pytorch on a remote server. It has CentOS 6.5 and according to this link pytorch has stopped support for CentOS 6. So, I am trying to install it from source. The recommended method to install is via anaconda, but the thing is I am getting plenty of issues while installing anaconda as it is messing with the remote server's paths a lot, so I have decided to use pip. But I have issues converting some conda commands to pip, as below: conda install -c pytorch magma-cuda90 The above command is mentioned before the pytorch cloning step and it gives me the error Could not open requirements file: [Errno 2] No such file or directory: 'pytorch' The other issue I am facing is below: export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"} What should be the alternative for CMAKE_PREFIX_PATH in pip?
Depending on your Python version, you can try to install from the wheel files. pip install https://download.pytorch.org/whl/cu101/torch-1.3.0-cp36-cp36m-manylinux1_x86_64.whl --user # For torch pip install https://download.pytorch.org/whl/cu101/torchvision-0.4.1-cp36-cp36m-linux_x86_64.whl --user # For torchvision If it fails you may want to check your glibc version: ldd --version because PyTorch is supported on Linux distributions that use glibc >= v2.17. For your question: What should be the alternative for CMAKE_PREFIX_PATH in pip? CMAKE_PREFIX_PATH acts as a build directive to indicate where to find modules needed for the build. In your case (installation as non-root with the --user flag) it's probably: ~/.local/lib/python3.6/site-packages You can verify the exact location with the following command: python -c "import site; print(site.getsitepackages()[0])" As a side note, your compilation will most likely fail if you still don't have the minimum required version of glibc.
https://stackoverflow.com/questions/58616298/
How to fix this error: list indices must be integers or slices, not tuple
Suppose I have a list of tensors called outputs >> outputs[2][0][0,:,:] Out[20]: tensor([[ 14.0448, -5.1494, -0.1780, ..., 10.1937, -8.9158, -5.3964], [ 32.0382, -0.5201, 29.9942, ..., -18.8268, -23.1068, 23.9745], [-24.5911, 14.7233, -6.3053, ..., -5.8131, -3.3088, 0.5685], [ 0.8842, -14.8318, 6.7204, ..., 17.7127, 7.3332, 3.7249], [ 2.0654, -16.5236, 38.3582, ..., -23.1663, -5.1202, 13.6506]], grad_fn=<SliceBackward>) >> outputs[2][1][0,:,:] Out[24]: tensor([[-0.1260, 0.0463, -0.3362, ..., 0.1089, -0.2454, 0.0140], [ 0.5050, -0.0750, -0.1639, ..., -0.0020, -0.0521, -0.3224], [-0.5311, 0.4526, 0.0079, ..., -0.0654, -0.1255, -0.0012], [ 0.0728, -0.1219, 0.0905, ..., 0.1354, 0.2730, -0.1186], [-0.0680, -0.5570, 0.0295, ..., -0.2411, -0.1690, 0.0331]], grad_fn=<SliceBackward>) When I try to do: >> outputs[2][0:1][0,:,:] python generates an error, and the error message is TypeError: list indices must be integers or slices, not tuple How can I fix this error? Thank you,
outputs[2][0] and outputs[2][1] both return an object (tensor I suppose). outputs[2][0:1] returns a list of those objects. What I think you are looking for is something like outputs[2][0:1][:,0,:,:] or [a[0,:,:] for a in outputs[2][0:1]]
https://stackoverflow.com/questions/58629848/
Using torch.nn.Embedding for GloVe: should we fine-tune the embeddings or just use them as they are?
While transfer learning / fine-tuning recent language models, such as BERT and XLNet, is by far a very common practice, how does this work for GloVe? Basically, I see two options when using GloVe to get dense vector representations that can be used by downstream NNs. 1) Fine-tune GloVe embeddings (in pytorch terms, gradient enabled) 2) Just use the embeddings without gradient. For instance, given GloVe's embeddings matrix, I do embed = nn.Embedding.from_pretrained(torch.tensor(embedding_matrix, dtype=torch.float)) ... dense = nn.Linear(...) Is it best practice to solely use GloVe to get vector representations (and only train the dense layer and potentially other layers), or would one fine-tune the embeddings matrix, too?
You should absolutely fine-tune your word embedding matrix. Here is the thing: when you initialize the word embedding matrix with the GloVe word embeddings, your word embeddings will already capture most of the semantic properties of the data. However, you want your word embeddings to be tailored to the task you're solving, i.e., task-specific (check Yang). Now, assuming that you don't have enough data in your dataset, you can't learn the word embedding matrix on your own (if you initialize the word embedding matrix with random vectors). Because of that, you want to initialize it with vectors that have been trained on huge datasets and are general. One really important thing to keep in mind: because the rest of your model is going to be initialized randomly, when you start training your word embedding matrix may suffer from catastrophic forgetting (check the work of Howard and Ruder and Kirkpatrick et al.), i.e., the gradients will be huge because your model will drastically underfit the data for the first few batches, and you will lose the initial vectors completely. You can overcome this by: For the first several epochs don't fine-tune the word embedding matrix, just keep it as it is: embeddings = nn.Embedding.from_pretrained(glove_vectors, freeze=True). After the rest of the model has learned to fit your training data, decrease the learning rate, unfreeze your embedding module embeddings.weight.requires_grad = True, and continue training. By following the above-mentioned steps, you will get the best of both worlds. In other words, your word embeddings will still capture semantic properties while being tailored to your own downstream task. Finally, there are works (check Ye Zhang for example) showing that it is fine to fine-tune immediately, but I would opt for the safer option.
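A minimal sketch of the two-phase recipe above (the GloVe matrix here is a random stand-in; a real run would load the actual vectors): import torch import torch.nn as nn class TextClassifier(nn.Module): def __init__(self, glove_vectors): super().__init__() self.embedding = nn.Embedding.from_pretrained(glove_vectors, freeze=True) self.fc = nn.Linear(glove_vectors.size(1), 2) def forward(self, tokens): # mean-pool the token embeddings, then classify return self.fc(self.embedding(tokens).mean(dim=1)) model = TextClassifier(torch.randn(10000, 300)) # stand-in for the real GloVe matrix # Phase 1: embeddings frozen; optimize only the trainable parameters optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3) # ... train for a few epochs ... # Phase 2: unfreeze, lower the learning rate, rebuild the optimizer, continue training model.embedding.weight.requires_grad = True optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)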
https://stackoverflow.com/questions/58630101/
SSH into Docker? or docker on SSH? and I need command
I'm new to DL, and Docker, and not even familiar with Linux and networking things (SSH, ports, DNS: parts of them exist only in my mind). Thus, I'd be very happy with a specific explanation plus commands (or reference sites). My basic questions are: What is the superior concept between Docker and SSH? (Running SSH inside Docker? Or running Docker over SSH? Or are both possible?) Which specific commands should I use if I want to use SSH + Docker + PyTorch + Jupyter notebook + visdom? 2-1) I connect over SSH first (to my lab's server, where I am not usually the root user, so if I want to run a Python file there I often face "permission denied"); let's say the SSH address is 123.456.789.999. 2-2) Use Docker after connecting over SSH (however, what I found in many posts is about running Docker FIRST and then accessing it with SSH; how is that different?) 2-2-1) For this, I know I have to pull an image which includes PyTorch and Jupyter notebook; I've done it. 2-2-2) I need to RUN DOCKER using that image with the proper COMMAND LINE. What confuses me is here. $ docker run -it --name [name] -p 8888:8888 [docker_image_with_pytorch] This is what I found. I assume that to use Jupyter notebook (let's say I want to use port 4444 instead of 8888, and visdom on 5555 instead of 8097) I need to map ports from host to container twice; is the following right? $ docker run -it --name [name] -p 4444:8888 -p 5555:8097 [docker_image_with_pytorch] Finally I need to link SSH (let's say SSH port 22 as usual, ip 123.456.789.999, id=heyjude). For SSH, I also found the command below. $ ssh -L <host port>:localhost:<remote port> user@remote But is it usual to run this command after running Docker, instead of adding options when I first run Docker? Also, if I use that command for setting up SSH, I'm confused about what I need to input (host port = 22 for SSH? Is localhost a literal string? Is the remote port arbitrary?). Below is my assumption. $ ssh -L <22>:localhost:<12345> [email protected] I know it's messy and you may see how my thinking is twisted; it would be very helpful to have an explanation from scratch. Thank you.
Your question is somewhat unclear. I'm going to take a guess at what you're trying to solve. Assuming (!) that you have a container image that includes PyTorch and Jupyter and all their dependencies, it's likely that Jupyter will serve content to you via a web server (over HTTP, I suspect) on port :8888. If you docker run -it ..., which is equivalent to docker run --interactive --tty ..., you should see log output from the process(es) running in the container. These logs should contain relevant information. To access the Jupyter Notebook once the container is running on your local workstation, you should be able to just browse http://localhost:8888. You probably do not need to use SSH if you're running everything locally. If you are running, e.g., the docker container on a remote host, you may SSH into the remote host first and run the commands there, e.g. docker run ..., but you may alternatively simply configure your Docker client to access the remote Docker Engine. Somewhat akin to SSH, when using Docker containers, you may execute commands in the container. But you are able to do this using docker exec ...; you don't need to use SSH to interact with containers. A container image has one or more statically defined ports that the container will use to expose its services (over TCP|UDP). When you run the container, you may map the container ports to different ports on your host. This may be out of necessity (if the container port is already being used on your host) or just for convenience. To do this you use --publish=[HOST-PORT]:[CONTAINER-PORT]. For a given container image, you cannot change the [CONTAINER-PORT] but you may use any available [HOST-PORT]. In your example --publish=4444:8888 would mean that the Jupyter (?) service is now accessible on your local machine via localhost:4444. Docker forwards traffic from your host's :4444 to the container's :8888.
https://stackoverflow.com/questions/58631882/
Without onnx, how to convert a pytorch model into a tensorflow model manually?
Since ONNX supports a limited set of models, I tried to do this conversion by assigning parameters directly, but the resulting tensorflow model failed to show the desired accuracy. Details are described as follows: The source model is LeNet trained on the MNIST dataset. I first extracted each module and its parameters by model.named_parameters() and saved them into a dictionary where the key is the module's name and the value is the parameters. Then, I built and initialized a tensorflow model with the same architecture. Finally, I assigned each layer's parameters of the pytorch model to the tensorflow model. However, the accuracy of the resulting tensorflow model is only about 20%. Thus, my question is: is it possible to convert the pytorch model by this method? If yes, what's the possible issue causing the bad result? If no, then please kindly explain the reasons. PS: assume the assignment procedure is right.
As the comment by jodag mentioned, there are many differences between operator representations in Tensorflow and PyTorch that might cause discrepancies in your workflow. We would recommend using the following method: Use the ONNX exporter in PyTorch to export the model to the ONNX format. import torch.onnx # Argument: model is the PyTorch model # Argument: dummy_input is a torch tensor torch.onnx.export(model, dummy_input, "LeNet_model.onnx") Use the onnx-tensorflow backend to convert the ONNX model to Tensorflow. import onnx from onnx_tf.backend import prepare onnx_model = onnx.load("LeNet_model.onnx") # load onnx model tf_rep = prepare(onnx_model) # prepare tf representation tf_rep.export_graph("LeNet_model.pb") # export the model
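If you want to sanity-check the conversion afterwards, here is a rough sketch (assuming model and dummy_input from the export step above, the model in eval() mode, and that onnx-tf's prepared representation accepts a NumPy array via run()): import numpy as np import onnx from onnx_tf.backend import prepare model.eval() torch_out = model(dummy_input).detach().numpy() # PyTorch reference output tf_rep = prepare(onnx.load("LeNet_model.onnx")) tf_out = tf_rep.run(dummy_input.numpy())[0] # first (and only) output np.testing.assert_allclose(torch_out, tf_out, rtol=1e-3, atol=1e-5)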
https://stackoverflow.com/questions/58637390/
PyTorch: Different Forward Methods for Train and Test/Validation
I'm currently trying to extend a model that is based on FairSeq/PyTorch. During training I need to train two encoders: one with the target sample, and the original one with the source sample. So the current forward function looks like this: def forward(self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs): encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) decoder_out = self.decoder(prev_output_tokens, encoder_out=encoder_out, **kwargs) return decoder_out And based on this idea I want something like this: def forward_test(self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs): encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) decoder_out = self.decoder(prev_output_tokens, encoder_out=encoder_out, **kwargs) return decoder_out def forward_train(self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs): encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) autoencoder_out = self.encoder(tgt_tokens, src_lengths=src_lengths, **kwargs) concat = some_concatination_func(encoder_out, autoencoder_out) decoder_out = self.decoder(prev_output_tokens, encoder_out=concat, **kwargs) return decoder_out Is there any way to do this? Edit: These are the constraints that I have, since I need to extend FairseqEncoderDecoderModel: @register_model('transformer_mass') class TransformerMASSModel(FairseqEncoderDecoderModel): def __init__(self, encoder, decoder): super().__init__(encoder, decoder) Edit 2: The parameters passed to the forward function in Fairseq can be altered by implementing your own Criterion, see for example CrossEntropyCriterion, where sample['net_input'] is passed to the __call__ function of the model, which invokes the forward method.
First of all you should always use and define forward, not some other methods that you call on the torch.nn.Module instance. Definitely do not overload eval() as shown by trsvchn, as it's the evaluation method defined by PyTorch (see here). This method allows layers inside your model to be put into evaluation mode (e.g. specific changes to layers like inference mode for Dropout or BatchNorm). Furthermore, you should call the module with the __call__ magic method. Why? Because hooks and other PyTorch-specific stuff is registered that way properly. Secondly, do not use some external mode string variable as suggested by @Anant Mittal. That's what the training attribute in PyTorch is for; it's the standard way to differentiate whether the model is in eval mode or train mode. That being said, you are best off doing it like this: import torch class Network(torch.nn.Module): def __init__(self): super().__init__() ... # You could split it into two functions but both should be called by forward def forward( self, src_tokens=None, src_lengths=None, prev_output_tokens=None, **kwargs ): encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) if self.training: # self.training is toggled by net.train() / net.eval() return self.decoder(prev_output_tokens, encoder_out=encoder_out, **kwargs) autoencoder_out = self.encoder(tgt_tokens, src_lengths=src_lengths, **kwargs) concat = some_concatination_func(encoder_out, autoencoder_out) return self.decoder(prev_output_tokens, encoder_out=concat, **kwargs) You could (and arguably should) split the above into two separate methods, but that's not too bad as the function is rather short and readable that way. Just stick to PyTorch's way of handling things if easily possible and not some ad-hoc solutions. And no, there will be no problem with backpropagation; why would there be one?
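A tiny sketch of how the flag is toggled (any nn.Module carries it): import torch.nn as nn m = nn.Linear(2, 2) # any Module will do m.train() assert m.training is True # forward() would take the training branch m.eval() assert m.training is False # forward() would take the inference branch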
https://stackoverflow.com/questions/58655207/
pytorch nn.Sequential(*list) TypeError: list is not a Module subclass
When I use pytorch to train a model, I tried to print the whole net structure, so I packed all the layers in a list and then used nn.Sequential(*list), but it doesn't work, raising TypeError: list is not a Module subclass
Please provide the list of layers that you have created; are you sure you haven't made any mistake there? Try checking if your list is actually [..] and not [[..]] (a nested list). The other thing that I noticed is that you have list as a variable name, which isn't a good idea: list is a Python built-in name, and shadowing it can cause surprises. I tried writing a sample code of unpacking a list and it works fine for me. import torch import torch.nn as nn net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2)) layers = [nn.Linear(2, 2), nn.Linear(2, 2)] net = nn.Sequential(*layers) print(net) This ran without any error, and the result was: Sequential( (0): Linear(in_features=2, out_features=2, bias=True) (1): Linear(in_features=2, out_features=2, bias=True) ) Hope this helps. :)
https://stackoverflow.com/questions/58656046/
PyTorch Data Augmentation is taking too long
For a task that involves regression, I need to train my models to generate density maps from RGB images. To augment my dataset I have decided to flip all the images horizontally. For that matter, I also have to flip my ground truth images and I did so. dataset_for_augmentation.listDataset(train_list, shuffle=True, transform=transforms.Compose([ transforms.RandomHorizontalFlip(p=1), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]), target_transform=transforms.Compose([ transforms.RandomHorizontalFlip(p=1), transforms.ToTensor() ]), train=True, resize=4, batch_size=args.batch_size, num_workers=args.workers), But here is the problem: for some reason, PyTorch's transforms.RandomHorizontalFlip function takes only PIL images (numpy is not allowed) as input. So I decided to convert the type to PIL Image. img_path = self.lines[index] img, target = load_data(img_path, self.train, resize=self.resize) if type(target[0][0]) is np.float64: target = np.float32(target) img = Image.fromarray(img) target = Image.fromarray(target) if self.transform is not None: img = self.transform(img) target = self.target_transform(target) return img, target And yes, this operation needs an enormous amount of time. Considering I need this operation to be carried out for thousands of images, 23 seconds per batch (it should have been under half a second at most) is not tolerable. 2019-11-01 16:29:02,497 - INFO - Epoch: [0][0/152] Time 27.095 (27.095) Data 23.150 (23.150) Loss 93.7401 (93.7401) I would appreciate any suggestions to speed up my augmentation process
You don't need to change the DataLoader to do that. You can use ToPILImage(): transform=transforms.Compose([ transforms.ToPILImage(), # check mode assumption in the documentation transforms.RandomHorizontalFlip(p=1), transforms.ToTensor(), transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]) Anyway, I would avoid converting to PIL. It seems completely unnecessary. If you want to flip all images, then why not do that using NumPy only? img_path = self.lines[index] img, target = load_data(img_path, self.train, resize=self.resize) if type(target[0][0]) is np.float64: target = np.float32(target) # assuming width axis=1 -- see my comment below img = np.flip(img, axis=1) target = np.flip(target, axis=1) if self.transform is not None: img = self.transform(img) target = self.target_transform(target) return img, target And remove the transforms.RandomHorizontalFlip(p=1) from the Compose. As ToTensor(...) also handles ndarray, you are good to go. Note: I am assuming the width axis is equal to 1, since ToTensor expects it to be there. From the docs: Converts a PIL Image or numpy.ndarray (H x W x C) ...
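One caveat worth adding (an assumption based on behavior that depends on the PyTorch version): np.flip returns a view with a negative stride, which torch.from_numpy, and hence ToTensor, may reject, so it can be safer to make the array contiguous first: import numpy as np import torch img = np.random.rand(60, 60, 3).astype(np.float32) # hypothetical stand-in image flipped = np.flip(img, axis=1) # a view with a negative stride flipped = np.ascontiguousarray(flipped) # or flipped.copy() t = torch.from_numpy(flipped) # now safe to convert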
https://stackoverflow.com/questions/58656332/
How to compute pairwise distance between point set and lines in PyTorch?
The point set A is an Nx3 matrix, and from two point sets B and C with the same size Mx3 we can get the lines BC between them. Now I want to compute the distance from each point in A to each line in BC. B is Mx3 and C is Mx3; the lines connect the points in corresponding rows, so BC is an Mx3 set of lines. The basic method is computed as follows: D = torch.zeros((N, M), dtype=torch.float32) for i in range(N): p = A[i] # 1x3 for j in range(M): p1 = B[j] # 1x3 p2 = C[j] # 1x3 D[i,j] = torch.norm(torch.cross(p1 - p2, p - p1)) / torch.norm(p1 - p2) Is there any faster method to do this work? Thanks.
You can remove the for loops by doing this (it should speed things up at the cost of memory, unless M and N are small): diff_B_C = B - C diff_A_C = A[:, None] - C norm_lines = torch.norm(diff_B_C, dim=-1) cross_result = torch.cross(diff_B_C[None, :].expand(N, -1, -1), diff_A_C, dim=-1) norm_cross = torch.norm(cross_result, dim=-1) D = norm_cross / norm_lines Of course, you don't need to do it step by step. I just tried to be clear with the variable names. Note: if you don't provide dim to torch.cross, it will use the first dimension of size 3, which would give the wrong results if N=3 (from the docs): If dim is not given, it defaults to the first dimension found with the size 3. If you are wondering, you can check here why I chose expand instead of repeat.
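A quick self-check of the vectorized version against the loop (note that cross(B-C, A-C) and cross(B-C, A-B) have the same norm, since A-C = (A-B) + (B-C) and the cross product of a vector with itself is zero): import torch N, M = 4, 6 A, B, C = torch.randn(N, 3), torch.randn(M, 3), torch.randn(M, 3) # loop version from the question D_loop = torch.zeros(N, M) for i in range(N): for j in range(M): D_loop[i, j] = torch.norm(torch.cross(B[j] - C[j], A[i] - B[j])) / torch.norm(B[j] - C[j]) # vectorized version diff_B_C = B - C diff_A_C = A[:, None] - C cross_result = torch.cross(diff_B_C[None, :].expand(N, -1, -1), diff_A_C, dim=-1) D_vec = torch.norm(cross_result, dim=-1) / torch.norm(diff_B_C, dim=-1) assert torch.allclose(D_loop, D_vec, atol=1e-5)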
https://stackoverflow.com/questions/58660031/
Loss doesn't decrease in Pytorch CNN
I'm doing a CNN with Pytorch for a task, but it won't learn and improve the accuracy. I made a version working with the MNIST dataset so I could post it here. I'm just looking for an answer as to why it's not working. The architecture is fine, I implemented it in Keras and I had over 92% accuracy after 3 epochs. Note: I reshaped the MNIST into 60x60 pictures because that's how the pictures are in my "real" problem. import numpy as np from PIL import Image import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import DataLoader from torch.autograd import Variable from keras.datasets import mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() def resize(pics): pictures = [] for image in pics: image = Image.fromarray(image).resize((dim, dim)) image = np.array(image) pictures.append(image) return np.array(pictures) dim = 60 x_train, x_test = resize(x_train), resize(x_test) # because my real problem is in 60x60 x_train = x_train.reshape(-1, 1, dim, dim).astype('float32') / 255 x_test = x_test.reshape(-1, 1, dim, dim).astype('float32') / 255 y_train, y_test = y_train.astype('float32'), y_test.astype('float32') if torch.cuda.is_available(): x_train = torch.from_numpy(x_train)[:10_000] x_test = torch.from_numpy(x_test)[:4_000] y_train = torch.from_numpy(y_train)[:10_000] y_test = torch.from_numpy(y_test)[:4_000] class ConvNet(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 32, 3) self.conv2 = nn.Conv2d(32, 64, 3) self.conv3 = nn.Conv2d(64, 128, 3) self.fc1 = nn.Linear(5*5*128, 1024) self.fc2 = nn.Linear(1024, 2048) self.fc3 = nn.Linear(2048, 1) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2)) x = x.view(x.size(0), -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.dropout(x, 0.5) x = torch.sigmoid(self.fc3(x)) return x net = ConvNet() optimizer = optim.Adam(net.parameters(), lr=0.03) loss_function = nn.BCELoss() class FaceTrain: def __init__(self): self.len = x_train.shape[0] self.x_train = x_train self.y_train = y_train def __getitem__(self, index): return x_train[index], y_train[index].unsqueeze(0) def __len__(self): return self.len class FaceTest: def __init__(self): self.len = x_test.shape[0] self.x_test = x_test self.y_test = y_test def __getitem__(self, index): return x_test[index], y_test[index].unsqueeze(0) def __len__(self): return self.len train = FaceTrain() test = FaceTest() train_loader = DataLoader(dataset=train, batch_size=64, shuffle=True) test_loader = DataLoader(dataset=test, batch_size=64, shuffle=True) epochs = 10 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 for images, labels in train_loader: optimizer.zero_grad() log_ps = net(images) loss = loss_function(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 with torch.no_grad(): for images, labels in test_loader: log_ps = net(images) test_loss += loss_function(log_ps, labels) ps = torch.exp(log_ps) top_p, top_class = ps.topk(1, dim=1) equals = top_class.type('torch.LongTensor') == labels.type(torch.LongTensor).view(*top_class.shape) accuracy += torch.mean(equals.type('torch.FloatTensor')) train_losses.append(running_loss/len(train_loader)) test_losses.append(test_loss/len(test_loader)) print("[Epoch: {}/{}] ".format(e+1, epochs), "[Training Loss: {:.3f}] ".format(running_loss/len(train_loader)), "[Test Loss: 
{:.3f}] ".format(test_loss/len(test_loader)), "[Test Accuracy: {:.3f}]".format(accuracy/len(test_loader)))
First the major issues... 1. The main issue with this code is that you're using the wrong output shape and the wrong loss function for classification. nn.BCELoss computes the binary cross entropy loss. This is applicable when you have one or more targets which are either 0 or 1 (hence the binary). In your case the target is a single integer between 0 and 9. Since there are only a small number of potential target values, the most common approach is to use categorical cross-entropy loss (nn.CrossEntropyLoss). The "theoretical" definition of cross entropy loss expects the network outputs and the targets to both be 10 dimensional vectors where the target is all zeros except in one location (one-hot encoded). However for computational stability and space efficiency reasons, pytorch's nn.CrossEntropyLoss directly takes the integer as a target. However, you still need to provide it with a 10 dimensional output vector from your network. # pseudo code (ignoring batch dimension) loss = nn.functional.cross_entropy(<output 10d vector>, <integer target>) To fix this issue in your code we need to have fc3 output a 10 dimensional feature, and we need the labels to be integers (not floats). Also, there's no need to use .sigmoid on fc3 since pytorch's cross-entropy loss function internally applies log-softmax before computing the final loss value. 2. As pointed out by Serget Dymchenko, you need to switch the network to eval mode during inference and train mode during train. This mainly affects dropout and batch_norm layers since they behave differently during training and inference. 3. A learning rate of 0.03 is probably a little too high. It works just fine with a learning rate of 0.001 and in a couple experiments I saw the training diverge at 0.03. To accommodate these fixes a number of changes needed to be made. The minimal corrections to the code are shown below. I commented any lines which were changed with #### followed by a short description of the change. import numpy as np from PIL import Image import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import DataLoader from torch.autograd import Variable from keras.datasets import mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() def resize(pics): pictures = [] for image in pics: image = Image.fromarray(image).resize((dim, dim)) image = np.array(image) pictures.append(image) return np.array(pictures) dim = 60 x_train, x_test = resize(x_train), resize(x_test) # because my real problem is in 60x60 x_train = x_train.reshape(-1, 1, dim, dim).astype('float32') / 255 x_test = x_test.reshape(-1, 1, dim, dim).astype('float32') / 255 #### float32 -> int64 y_train, y_test = y_train.astype('int64'), y_test.astype('int64') #### no reason to test for cuda before converting to numpy #### I assume you were taking a subset for debugging?
No reason to not use all the data x_train = torch.from_numpy(x_train) x_test = torch.from_numpy(x_test) y_train = torch.from_numpy(y_train) y_test = torch.from_numpy(y_test) class ConvNet(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 32, 3) self.conv2 = nn.Conv2d(32, 64, 3) self.conv3 = nn.Conv2d(64, 128, 3) self.fc1 = nn.Linear(5*5*128, 1024) self.fc2 = nn.Linear(1024, 2048) #### 1 -> 10 self.fc3 = nn.Linear(2048, 10) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2)) x = x.view(x.size(0), -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.dropout(x, 0.5) #### removed sigmoid x = self.fc3(x) return x net = ConvNet() #### 0.03 -> 1e-3 optimizer = optim.Adam(net.parameters(), lr=1e-3) #### BCELoss -> CrossEntropyLoss loss_function = nn.CrossEntropyLoss() class FaceTrain: def __init__(self): self.len = x_train.shape[0] self.x_train = x_train self.y_train = y_train def __getitem__(self, index): #### .unsqueeze(0) removed return x_train[index], y_train[index] def __len__(self): return self.len class FaceTest: def __init__(self): self.len = x_test.shape[0] self.x_test = x_test self.y_test = y_test def __getitem__(self, index): #### .unsqueeze(0) removed return x_test[index], y_test[index] def __len__(self): return self.len train = FaceTrain() test = FaceTest() train_loader = DataLoader(dataset=train, batch_size=64, shuffle=True) test_loader = DataLoader(dataset=test, batch_size=64, shuffle=True) epochs = 10 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 #### put net in train mode net.train() for idx, (images, labels) in enumerate(train_loader): optimizer.zero_grad() log_ps = net(images) loss = loss_function(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 #### put net in eval mode net.eval() with torch.no_grad(): for images, labels in test_loader: log_ps = net(images) test_loss += loss_function(log_ps, labels) #### removed torch.exp() since exponential is monotone, taking it doesn't change the order of outputs. Similarly with torch.softmax() top_p, top_class = log_ps.topk(1, dim=1) #### convert to float/long using proper methods. what you have won't work for cuda tensors. equals = top_class.long() == labels.long().view(*top_class.shape) accuracy += torch.mean(equals.float()) train_losses.append(running_loss/len(train_loader)) test_losses.append(test_loss/len(test_loader)) print("[Epoch: {}/{}] ".format(e+1, epochs), "[Training Loss: {:.3f}] ".format(running_loss/len(train_loader)), "[Test Loss: {:.3f}] ".format(test_loss/len(test_loader)), "[Test Accuracy: {:.3f}]".format(accuracy/len(test_loader))) Results of training are now... 
[Epoch: 1/10] [Training Loss: 0.139] [Test Loss: 0.046] [Test Accuracy: 0.986] [Epoch: 2/10] [Training Loss: 0.046] [Test Loss: 0.042] [Test Accuracy: 0.987] [Epoch: 3/10] [Training Loss: 0.031] [Test Loss: 0.040] [Test Accuracy: 0.988] [Epoch: 4/10] [Training Loss: 0.022] [Test Loss: 0.029] [Test Accuracy: 0.990] [Epoch: 5/10] [Training Loss: 0.017] [Test Loss: 0.066] [Test Accuracy: 0.987] [Epoch: 6/10] [Training Loss: 0.015] [Test Loss: 0.056] [Test Accuracy: 0.985] [Epoch: 7/10] [Training Loss: 0.018] [Test Loss: 0.039] [Test Accuracy: 0.991] [Epoch: 8/10] [Training Loss: 0.012] [Test Loss: 0.057] [Test Accuracy: 0.988] [Epoch: 9/10] [Training Loss: 0.012] [Test Loss: 0.041] [Test Accuracy: 0.991] [Epoch: 10/10] [Training Loss: 0.007] [Test Loss: 0.048] [Test Accuracy: 0.992] Some other issues that will improve your performance and code. 4. You're never moving the model to the GPU. This means you won't be getting GPU acceleration. 5. torchvision is designed with all the standard transforms and datasets and is built to be used with PyTorch. I recommend using it. This also removes the dependency on keras in your code. 6. Normalize your data by subtracting the mean and dividing by the standard deviation to improve performance of your network. With torchvision you can use transforms.Normalize. This won't make a big difference in MNIST because it's already too easy. But in more difficult problems it turns out to be important. Further improved code is shown below (much faster on GPU). import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim from torch.utils.data import DataLoader from torchvision.datasets import MNIST from torchvision import transforms dim = 60 class ConvNet(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 32, 3) self.conv2 = nn.Conv2d(32, 64, 3) self.conv3 = nn.Conv2d(64, 128, 3) self.fc1 = nn.Linear(5 * 5 * 128, 1024) self.fc2 = nn.Linear(1024, 2048) self.fc3 = nn.Linear(2048, 10) def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2)) x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2)) x = x.view(x.size(0), -1) x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = F.dropout(x, 0.5) x = self.fc3(x) return x net = ConvNet() if torch.cuda.is_available(): net.cuda() optimizer = optim.Adam(net.parameters(), lr=1e-3) loss_function = nn.CrossEntropyLoss() train_dataset = MNIST('./data', train=True, download=True, transform=transforms.Compose([ transforms.Resize((dim, dim)), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])) test_dataset = MNIST('./data', train=False, download=True, transform=transforms.Compose([ transforms.Resize((dim, dim)), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])) train_loader = DataLoader(dataset=train_dataset, batch_size=64, shuffle=True, num_workers=8) test_loader = DataLoader(dataset=test_dataset, batch_size=64, shuffle=False, num_workers=8) epochs = 10 steps = 0 train_losses, test_losses = [], [] for e in range(epochs): running_loss = 0 net.train() for images, labels in train_loader: if torch.cuda.is_available(): images, labels = images.cuda(), labels.cuda() optimizer.zero_grad() log_ps = net(images) loss = loss_function(log_ps, labels) loss.backward() optimizer.step() running_loss += loss.item() else: test_loss = 0 accuracy = 0 net.eval() with torch.no_grad(): for images, labels in test_loader: if torch.cuda.is_available(): images, labels = images.cuda(), labels.cuda() log_ps = net(images)
test_loss += loss_function(log_ps, labels) top_p, top_class = log_ps.topk(1, dim=1) equals = top_class.flatten().long() == labels accuracy += torch.mean(equals.float()).item() train_losses.append(running_loss/len(train_loader)) test_losses.append(test_loss/len(test_loader)) print("[Epoch: {}/{}] ".format(e+1, epochs), "[Training Loss: {:.3f}] ".format(running_loss/len(train_loader)), "[Test Loss: {:.3f}] ".format(test_loss/len(test_loader)), "[Test Accuracy: {:.3f}]".format(accuracy/len(test_loader))) Updated results of training... [Epoch: 1/10] [Training Loss: 0.125] [Test Loss: 0.045] [Test Accuracy: 0.987] [Epoch: 2/10] [Training Loss: 0.043] [Test Loss: 0.031] [Test Accuracy: 0.991] [Epoch: 3/10] [Training Loss: 0.030] [Test Loss: 0.030] [Test Accuracy: 0.991] [Epoch: 4/10] [Training Loss: 0.024] [Test Loss: 0.046] [Test Accuracy: 0.990] [Epoch: 5/10] [Training Loss: 0.020] [Test Loss: 0.032] [Test Accuracy: 0.992] [Epoch: 6/10] [Training Loss: 0.017] [Test Loss: 0.046] [Test Accuracy: 0.991] [Epoch: 7/10] [Training Loss: 0.015] [Test Loss: 0.034] [Test Accuracy: 0.992] [Epoch: 8/10] [Training Loss: 0.011] [Test Loss: 0.048] [Test Accuracy: 0.992] [Epoch: 9/10] [Training Loss: 0.012] [Test Loss: 0.037] [Test Accuracy: 0.991] [Epoch: 10/10] [Training Loss: 0.013] [Test Loss: 0.038] [Test Accuracy: 0.992]
https://stackoverflow.com/questions/58666904/
Pytorch Hardware Requirement
What is the minimum Compute Capability required by the latest PyTorch version? I have an Nvidia GeForce 820M with compute capability 2.1. How can I run PyTorch models on my GPU if it isn't supported natively?
Looking at this page, PyTorch (even somewhat older versions) supports CUDA from version 7.5 upwards. Whereas, looking at this page, CUDA 7.5 requires a minimum Compute Capability of 2.0. So, on paper, your machine should support some older version of PyTorch which allows CUDA 7.5 or preferably 8.0 (as of writing this answer, the latest version requires at least CUDA 9.2). However, PyTorch also requires cuDNN. cuDNN 6.0 works with CUDA 7.5, but cuDNN 6.0 requires a Compute Capability of 3.0. So, most likely, PyTorch won't work on your machine. (Thanks for pointing out the cuDNN part, Robert Crovella.)
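As a quick way to see what your build reports for your card (a sketch; note that a PyTorch build that does not support the GPU's compute capability will typically report CUDA as unavailable or print a warning at startup): import torch print(torch.cuda.is_available()) if torch.cuda.is_available(): print(torch.cuda.get_device_capability(0)) # e.g. (2, 1) for the GeForce 820M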
https://stackoverflow.com/questions/58672185/
Can not get pytorch working with tensorboard
I"m going through this tutorial to set up pytorch (v1.3.0 through conda) with tensorboard https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html# but on the step from torch.utils.tensorboard import SummaryWriter # default `log_dir` is "runs" - we'll be more specific here writer = SummaryWriter('runs/fashion_mnist_experiment_1') I keep getting the error --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) C:\ProgramData\Anaconda3\envs\fastai_v1\lib\site-packages\torch\utils\tensorboard\__init__.py in 1 try: ----> 2 from tensorboard.summary.writer.record_writer import RecordWriter # noqa F401 3 except ImportError: ModuleNotFoundError: No module named 'tensorboard.summary'; 'tensorboard' is not a package During handling of the above exception, another exception occurred: ImportError Traceback (most recent call last) c:\Users\matt\Documents\code\playground\tensorboard.py in ----> 1 from torch.utils.tensorboard import SummaryWriter 2 3 # default `log_dir` is "runs" - we'll be more specific here 4 writer = SummaryWriter('runs/fashion_mnist_experiment_1') C:\ProgramData\Anaconda3\envs\fastai_v1\lib\site-packages\torch\utils\tensorboard\__init__.py in 2 from tensorboard.summary.writer.record_writer import RecordWriter # noqa F401 3 except ImportError: ----> 4 raise ImportError('TensorBoard logging requires TensorBoard with Python summary writer installed. ' 5 'This should be available in 1.14 or above.') 6 from .writer import FileWriter, SummaryWriter # noqa F401 ImportError: TensorBoard logging requires TensorBoard with Python summary writer installed. This should be available in 1.14 or above. Does anyone have any suggestions?
The error log says, among other things, ImportError: TensorBoard logging requires TensorBoard with Python summary writer installed. This should be available in 1.14 or above. So, when it tries to import TensorBoard, it's unable to do so because it's missing it in the search path. You can install the latest version (without specifying any version number), as in: $ conda install -c conda-forge tensorboard Apart from that, you might also need to install protobuf: $ conda install -c conda-forge protobuf These installations should fix the ImportErrors.
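After installing, a quick sanity check (assuming a recent TensorBoard release that exposes __version__; __file__ should resolve into site-packages rather than a local file of the same name): import tensorboard print(tensorboard.__version__) # should be 1.14 or above print(tensorboard.__file__) # should point into site-packages, not a local tensorboard.py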
https://stackoverflow.com/questions/58686400/
How do I augment data after splitting the training dataset into train and validation sets for CIFAR10 using PyTorch?
When classifying CIFAR10 in PyTorch, there are normally 50,000 training samples and 10,000 testing samples. However, if I need to create a validation set, I can do it by splitting the training set into 40,000 train samples and 10,000 validation samples. I used the following code train_transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))]) test_transform = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))]) cifar_train_L = CIFAR10('./data',download=True, train= True, transform = train_transform) cifar_test = CIFAR10('./data',download=True, train = False, transform= test_transform) train_size = int(0.8*len(cifar_train_L)) val_size = len(cifar_train_L) - train_size cifar_train, cifar_val = torch.utils.data.random_split(cifar_train_L,[train_size,val_size]) train_dataloader = torch.utils.data.DataLoader(cifar_train, batch_size= BATCH_SIZE, shuffle= True, num_workers=2) test_dataloader = torch.utils.data.DataLoader(cifar_test,batch_size= BATCH_SIZE, shuffle= True, num_workers= 2) val_dataloader = torch.utils.data.DataLoader(cifar_val,batch_size= BATCH_SIZE, shuffle= True, num_workers= 2) Normally, when augmenting data in PyTorch, different augmenting processes are used under the transforms.Compose function (i.e., transforms.RandomHorizontalFlip()). However, if I use these augmentation processes before splitting the training set and validation set, the augmented data will also be included in the validation set. Is there any way I can fix this problem? In short, I want to manually split the training dataset into train and validation sets, and I want to use the data augmentation technique on the new training set.
You can give the validation split its own transform, but be careful: torch.utils.data.random_split returns Subset objects that share one underlying dataset, so setting cifar_val.transforms = test_transform silently does nothing (Subset has no transforms attribute), and overriding cifar_val.dataset.transform would change the transform for the training subset as well. A robust approach is to create two CIFAR10 instances with different transforms and split both with the same indices, as sketched below.
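A sketch of that approach, reusing train_transform and test_transform from the question: import torch from torchvision import datasets # two dataset objects over the same files, each with its own transform train_full = datasets.CIFAR10('./data', train=True, download=True, transform=train_transform) val_full = datasets.CIFAR10('./data', train=True, download=True, transform=test_transform) # one shared random permutation so the two subsets never overlap indices = torch.randperm(len(train_full)).tolist() train_size = int(0.8 * len(train_full)) cifar_train = torch.utils.data.Subset(train_full, indices[:train_size]) cifar_val = torch.utils.data.Subset(val_full, indices[train_size:])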
https://stackoverflow.com/questions/58687028/
Pytorch BiLSTM POS Tagging Issue: RuntimeError: input.size(-1) must be equal to input_size. Expected 6, got 12
I have an NLP dataset, and following the official PyTorch tutorial, I converted the dataset into word_to_idx and tag_to_idx mappings, like: word_to_idx = {'I': 0, 'have': 1, 'used': 2, 'transfers': 3, 'on': 4, 'three': 5, 'occasions': 6, 'now': 7, 'and': 8, 'each': 9, 'time': 10} tag_to_idx = {'PRON': 0, 'VERB': 1, 'NOUN': 2, 'ADP': 3, 'NUM': 4, 'ADV': 5, 'CONJ': 6, 'DET': 7, 'ADJ': 8, 'PRT': 9, '.': 10, 'X': 11} I want to complete the POS-tagging task with a BiLSTM. Here is my BiLSTM code: class LSTMTagger(nn.Module): def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size): super(LSTMTagger, self).__init__() self.hidden_dim = hidden_dim self.word_embeddings = nn.Embedding(vocab_size, tagset_size) # The LSTM takes word embeddings as inputs, and outputs hidden states self.lstm = nn.LSTM(embedding_dim, hidden_dim, bidirectional=True) # The linear layer that maps from hidden state space to tag space self.hidden2tag = nn.Linear(in_features=hidden_dim * 2, out_features=tagset_size) def forward(self, sentence): embeds = self.word_embeddings(sentence) lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1)) tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1)) # tag_scores = F.softmax(tag_space, dim=1) tag_scores = F.log_softmax(tag_space, dim=1) return tag_scores Then I run the training code in PyCharm, like: EMBEDDING_DIM = 6 HIDDEN_DIM = 6 NUM_EPOCHS = 3 model = LSTMTagger(embedding_dim=EMBEDDING_DIM, hidden_dim=HIDDEN_DIM, vocab_size=len(word_to_idx), tagset_size=len(tag_to_idx)) loss_function = nn.NLLLoss() optimizer = optim.SGD(model.parameters(), lr=0.1) # See what the scores are before training with torch.no_grad(): inputs = prepare_sequence(training_data[0][0], word_to_idx) tag_scores = model(inputs) print(tag_scores) print(tag_scores.size()) However, it raises an error at the lines tag_scores = model(inputs) and lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1)). The error is: Traceback (most recent call last): line 140, in <module> tag_scores = model(inputs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) line 115, in forward lstm_out, _ = self.lstm(embeds.view(len(sentence), 1, -1)) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__ result = self.forward(*input, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 559, in forward return self.forward_tensor(input, hx) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 539, in forward_tensor output, hidden = self.forward_impl(input, hx, batch_sizes, max_batch_size, sorted_indices) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 519, in forward_impl self.check_forward_args(input, hx, batch_sizes) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 490, in check_forward_args self.check_input(input, batch_sizes) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 153, in check_input self.input_size, input.size(-1))) RuntimeError: input.size(-1) must be equal to input_size. Expected 6, got 12 I don't know how to debug it. Could somebody help me fix this issue? Thanks in advance!
The error is in this line: self.word_embeddings = nn.Embedding(vocab_size, tagset_size) Instead of the embedding dimension, you pass the number of tags, which is 12, while the LSTM layer expects inputs of size 6 (the embedding dimension). The corrected line is sketched below.
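Keeping the names from the question, the fix inside __init__ would look like this: # embed each word into an embedding_dim-dimensional vector (6 here), # matching the input_size the LSTM was constructed with self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)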
https://stackoverflow.com/questions/58688056/
Recommendation for Best Neural Network Type (in TensorFlow or PyTorch) For Fitting Problems
I am looking to develop a simple neural network in PyTorch or TensorFlow to predict one numeric value based on several inputs. For example, if one has data describing the interior comfort parameters for a building, the NN should predict the numeric value for the energy consumption. The documented examples and tutorials for both PyTorch and TensorFlow are generally focused on classification and time-dependent series (which is not my case). Any idea on which NN available in those libraries is best for this kind of problem? I'm just looking for a hint about the type, not code. Thanks!
The type of problem you are talking about is called a regression problem. In such problems, you would have a single output neuron with a linear activation (or no activation), and you would use MSE or MAE to train your network; a minimal sketch follows below. If your problem is a time series (where you are using previous values to predict the current/next value) then you could try multi-variate time series forecasting using LSTMs. If your problem is not a time series, then you could just use a vanilla feed-forward neural network. This article explains the concepts of data correlation really well and you might find it useful in deciding what type of neural network to use based on the type of data and output you have.
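A minimal sketch of such a regression network in PyTorch (n_features is a hypothetical number of input comfort parameters): import torch.nn as nn n_features = 8 # hypothetical number of comfort parameters model = nn.Sequential( nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1), # single output neuron, no activation: a regression head ) loss_fn = nn.MSELoss() # or nn.L1Loss() for MAE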
https://stackoverflow.com/questions/58690771/
Why am I getting a Pytorch Runtime Error on Test Set
I have a model that is a binary image classification model with the resnext model. I keep getting a run time error when it gets to the test set. Error message is RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight' I am sending my test set tensors to my GPU like my train model. I've looked at the following and I'm doing what was suggested here as stated above. Here is my model code: resnext = models.resnext50_32x4d(pretrained=True) resnext = resnext.to(device) for param in resnext.parameters(): param.requires_grad = True resnext.classifier = nn.Sequential(nn.Linear(2048, 1000), nn.ReLU(), nn.Dropout(0.4), nn.Linear(1000, 2), nn.Softmax(dim = 1)) criterion = nn.CrossEntropyLoss() optimizer = torch.optim.Adam(resnext.classifier.parameters(), lr=0.001) import time start_time = time.time() epochs = 1 max_trn_batch = 5 max_tst_batch = 156 y_val_list = [] policy_list = [] train_losses = [] test_losses = [] train_correct = [] test_correct = [] for i in range(epochs): for i in tqdm(range(0, max_trn_batch)): trn_corr = 0 tst_corr = 0 # Run the training batches for b, (X_train, y_train, policy) in enumerate(train_loader): #print(y_train, policy) X_train = X_train.to(device) y_train = y_train.to(device) if b == max_trn_batch: break b+=1 # Apply the model y_pred = resnext(X_train) loss = criterion(y_pred, y_train) # Tally the number of correct predictions predicted = torch.max(y_pred.data, 1)[1] batch_corr = (predicted == y_train).sum() trn_corr += batch_corr # Update parameters optimizer.zero_grad() loss.backward() optimizer.step() # Print interim results if b%1 == 0: print(f'epoch: {i:2} batch: {b:4} [{100*b:6}/63610] loss: {loss.item():10.8f} \ accuracy: {trn_corr.item()/(100*b):7.3f}%') train_losses.append(loss) train_correct.append(trn_corr) # Run the testing batches with torch.no_grad(): for b, (X_test, y_test, policy) in enumerate(test_loader): policy_list.append(policy) X_test.to(device) y_test.to(device) if b == max_tst_batch: break # Apply the model y_val = resnext(X_test) y_val_list.append(y_val.data) # Tally the number of correct predictions predicted = torch.max(y_val.data, 1)[1] tst_corr += (predicted == y_test).sum() loss = criterion(y_val, y_test) test_losses.append(loss) test_correct.append(tst_corr) print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed Here is the full traceback: --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-84-48bce2e8d4fa> in <module> 60 61 # Apply the model ---> 62 y_val = resnext(X_test) 63 y_val_list.append(y_val.data) 64 # Tally the number of correct predictions C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) C:\ProgramData\Anaconda3\lib\site-packages\torchvision\models\resnet.py in forward(self, x) 194 195 def forward(self, x): --> 196 x = self.conv1(x) 197 x = self.bn1(x) 198 x = self.relu(x) C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs) 545 result = self._slow_forward(*input, **kwargs) 546 else: --> 547 result = self.forward(*input, **kwargs) 548 for hook in self._forward_hooks.values(): 549 hook_result = hook(self, input, result) 
C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in forward(self, input) 341 342 def forward(self, input): --> 343 return self.conv2d_forward(input, self.weight) 344 345 class Conv3d(_ConvNd): C:\ProgramData\Anaconda3\lib\site-packages\torch\nn\modules\conv.py in conv2d_forward(self, input, weight) 338 _pair(0), self.dilation, self.groups) 339 return F.conv2d(input, weight, self.bias, self.stride, --> 340 self.padding, self.dilation, self.groups) 341 342 def forward(self, input): RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight' Again, my tensors and the model are sent to the GPU so I'm not sure what is going on. Does anyone see my mistake?
[...] my tensors and the model are sent to the GPU [...] Not the test Tensors. It is a simple mistake: X_test.to(device) y_test.to(device) should be X_test = X_test.to(device) y_test = y_test.to(device)
https://stackoverflow.com/questions/58697040/
How to reduce the dimensions of a tensor with neural networks
I have a 3D tensor of size [100,70,42] (batch, seq_len, features) and I would like to get a tensor of size [100,1,1] by using a neural network based on linear transformations (nn.Linear in Pytorch). I have implemented the following code class Network(nn.Module): def __init__(self): super(Network, self).__init__() self.fc1 = nn.Linear(42, 120) self.fc2 = nn.Linear(120,1) def forward(self, input): model = nn.Sequential(self.fc1, nn.ReLU(), self.fc2) output = model(input) return output However, upon training this only gives me an output of the shape [100,70,1], which is not the desired one. Thanks!
nn.Linear acts only on the last axis. If you want to apply a linear layer over the last two dimensions, you must reshape your input tensor: class Network(nn.Module): def __init__(self): super(Network, self).__init__() self.fc1 = nn.Linear(70 * 42, 120) # notice input shape self.fc2 = nn.Linear(120,1) def forward(self, input): input = input.reshape((-1, 70 * 42)) # added reshape model = nn.Sequential(self.fc1, nn.ReLU(), self.fc2) output = model(input) output = output.reshape((-1, 1, 1)) # OP asked for 3-dim output return output
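A quick shape check of the reshaped network (a sketch using the class above): import torch net = Network() x = torch.randn(100, 70, 42) # (batch, seq_len, features) print(net(x).shape) # torch.Size([100, 1, 1])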
https://stackoverflow.com/questions/58699012/
Is there a tensor.item() equivalent for a tensor containing a list in pytorch?
In pytorch, if I define a one-element tensor as follows: >>> import torch >>> target1 = torch.tensor([5]) I'm able to pull out the value of its one element like this: >>> target1.item() 5 What I would like to know is if when my tensor is defined as: target2 = torch.tensor([[5], [5], [5], [5]]) Is there some way (similar or not to .item() above) to pull out all of its entries into a list like: >>> target2.(something) [5, 5, 5, 5] I can't seem to find any function in the documentation that supports an operation like this.
You can use target2.numpy().ravel() or target2.view(-1).numpy() or target2.view(target2.numel()).numpy() Out[1]: array([5, 5, 5, 5], dtype=int64)
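As a side note (an assumption on my part that this is what you are after): if the goal is a plain Python list, tensor.tolist() is arguably the closest multi-element analogue of item(), returning Python scalars with no NumPy round-trip: import torch target2 = torch.tensor([[5], [5], [5], [5]]) print(target2.flatten().tolist()) # [5, 5, 5, 5] print(target2.view(-1).tolist()) # same result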
https://stackoverflow.com/questions/58701205/
runtimeerror size mismatch at tensor
error message: RuntimeError: size mismatch, m1: [64 x 3200], m2: [512 x 1] at C:/w/1/s/windows/pytorch/aten/src\THC/generic/THCTensorMathBlas.cu:290 The code is the following.: class Generator(nn.Module): def __init__(self): super(Generator, self).__init__() self.label_emb = nn.Embedding(opt.n_classes, opt.latent_dim) self.init_size = opt.img_size // 4 # Initial size before upsampling self.l1 = nn.Sequential(nn.Linear(opt.latent_dim, 128 * self.init_size ** 2)) self.conv_blocks = nn.Sequential( nn.BatchNorm2d(128), nn.Upsample(scale_factor=2), nn.Conv2d(128, 128, 3, stride=1, padding=1), nn.BatchNorm2d(128, 0.8), nn.LeakyReLU(0.2, inplace=True), nn.Upsample(scale_factor=2), nn.Conv2d(128, 64, 3, stride=1, padding=1), nn.BatchNorm2d(64, 0.8), nn.LeakyReLU(0.2, inplace=True), nn.Conv2d(64, opt.channels, 3, stride=1, padding=1), nn.Tanh(), ) def forward(self, noise, labels): gen_input = torch.mul(self.label_emb(labels), noise) out = self.l1(gen_input) out = out.view(out.shape[0], 128, self.init_size, self.init_size) img = self.conv_blocks(out) return img class Discriminator(nn.Module): def __init__(self): super(Discriminator, self).__init__() def discriminator_block(in_filters, out_filters, bn=True): """Returns layers of each discriminator block""" block = [nn.Conv2d(in_filters, out_filters, 3, 2, 1), nn.LeakyReLU(0.2, inplace=True), nn.Dropout2d(0.25)] if bn: block.append(nn.BatchNorm2d(out_filters, 0.8)) return block self.conv_blocks = nn.Sequential( *discriminator_block(opt.channels, 16, bn=False), *discriminator_block(16, 32), *discriminator_block(32, 64), *discriminator_block(64, 128), ) # The height and width of downsampled image ds_size = opt.img_size // 2 ** 4 # Output layers self.adv_layer = nn.Sequential(nn.Linear(128 * ds_size ** 2, 1), nn.Sigmoid()) self.aux_layer = nn.Sequential(nn.Linear(128 * ds_size ** 2, opt.n_classes), nn.Softmax()) def forward(self, img): out = self.conv_blocks(img) out = out.view(out.shape[0], -1) validity = self.adv_layer(out) label = self.aux_layer(out) return validity, label # Loss functions adversarial_loss = torch.nn.BCELoss() auxiliary_loss = torch.nn.CrossEntropyLoss() # Initialize generator and discriminator generator = Generator() discriminator = Discriminator() os.makedirs("../../data/mnist", exist_ok=True) labels_path = 'C:/project/PyTorch-GAN/ulna/train-labels-idx1-ubyte.gz' images_path = 'C:/project/PyTorch-GAN/ulna/train-images-idx3-ubyte.gz' label_name = [] with gzip.open(labels_path, 'rb') as lbpath: labels = np.frombuffer(lbpath.read(), dtype="int32", offset=8) with gzip.open(images_path, 'rb') as imgpath: images = np.frombuffer(imgpath.read(), dtype="int32", offset=16).reshape(len(labels),70,70,1) hand_transform2 = transforms.Compose([ transforms.Resize((70, 70)), transforms.Grayscale(1), transforms.ToTensor(), transforms.Normalize([0.5], [0.5]) ]) #images=cv2.resize(images, (70, 70),1) dataset1 = datasets.ImageFolder('C:/project/PyTorch-GAN/ulna/ulna', transform=hand_transform2) dataloader = torch.utils.data.DataLoader( dataset1, batch_size=opt.batch_size, shuffle=True, ) The Traceback is the following.: Traceback (most recent call last): File "acgan.py", line 225, in <module> real_pred, real_aux = discriminator(real_imgs) File "C:\Users\S\AppData\Local\conda\conda\envs\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "acgan.py", line 110, in forward validity = self.adv_layer(out) File 
"C:\Users\S\AppData\Local\conda\conda\envs\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\S\AppData\Local\conda\conda\envs\venv\lib\site-packages\torch\nn\modules\container.py", line 92, in forward input = module(input) File "C:\Users\S\AppData\Local\conda\conda\envs\venv\lib\site-packages\torch\nn\modules\module.py", line 541, in __call__ result = self.forward(*input, **kwargs) File "C:\Users\S\AppData\Local\conda\conda\envs\venv\lib\site-packages\torch\nn\modules\linear.py", line 87, in forward return F.linear(input, self.weight, self.bias) File "C:\Users\S\AppData\Local\conda\conda\envs\venv\lib\site-packages\torch\nn\functional.py", line 1370, in linear ret = torch.addmm(bias, input, weight.t()) RuntimeError: size mismatch, m1: [64 x 3200], m2: [512 x 1] at C:/w/1/s/windows/pytorch/aten/src\THC/generic/THCTensorMathBlas.cu:290 What I want to practice is the GAN code. The entire GAN code before modification can be found at the following link: https://github.com/eriklindernoren/PyTorch-GAN/blob/master/implementations/acgan/acgan.py Input image is an x-ray image adjusted to 70x70, and output image is a fake x-ray image that is newly created by learning input x-ray image. The code worked well when practised with the minist database. I'm afraid I don't have a clue about the problem of code. please help me! Thank you.
It seems that opt.img_size might still be set to 32, as if you were using CIFAR. As you resize to 70, it should be set to 70. Anyway, another problem will arise because ds_size = opt.img_size // 2 ** 4 is not valid for opt.img_size=70 (the arithmetic is sketched below). If you want a hard-coded solution, set ds_size=5. This fixes the Discriminator, but the same thing will happen to the Generator. If you don't understand how to fix this properly, I would recommend taking some time to read about how these models work. If you want to use the code as is, I would recommend using an img_size that is a multiple of 16, e.g., opt.img_size=80, and you would have no problem. To avoid other problems, you may want to use transforms.Resize((opt.img_size, opt.img_size)) instead of hard-coding the size there.
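To see why 70 breaks the shortcut formula, here is a sketch of the arithmetic for the four stride-2 convolution blocks (kernel 3, padding 1) in the Discriminator: def conv_out(n, k=3, s=2, p=1): # output size of one discriminator block return (n + 2 * p - k) // s + 1 n = 70 for _ in range(4): # four stride-2 blocks n = conv_out(n) print(n) # 5 -> the real feature map is 5x5, so 128*5*5 = 3200 (the m1 in the error) print(70 // 2 ** 4) # 4 -> what ds_size computes for img_size=70, hence the mismatch print(80 // 2 ** 4) # 5 -> with img_size=80 the shortcut and the real size agree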
https://stackoverflow.com/questions/58709494/
What "exactly" happens inside embedding layer in pytorch?
From multiple searches and pytorch documentation itself I could figure out that inside embedding layer there is a lookup table where the embedding vectors are stored. What I am not able to understand: what exactly happens during training in this layer? What are the weights and how the gradients of those weights are computed? My intuition is that at least there should be a function with some parameters that produces the keys for the lookup table. If so, then what is that function? Any help in this will be appreciated. Thanks.
That is a really good question! The embedding layer of PyTorch (same goes for Tensorflow) serves as a lookup table just to retrieve the embeddings for each of the inputs, which are indices. Consider the following case: you have a sentence where each word is tokenized. Therefore, each word in your sentence is represented with a unique integer (index). In case the list of indices (words) is [0, 5, 9], and you want to encode each of the words with a 50 dimensional vector (embedding), you can do the following: # The list of tokens tokens = torch.tensor([0,5,9], dtype=torch.long) # Define an embedding layer, where you know upfront that in total you # have 10 distinct words, and you want each word to be encoded with # a 50 dimensional vector embedding = torch.nn.Embedding(num_embeddings=10, embedding_dim=50) # Obtain the embeddings for each of the words in the sentence embedded_words = embedding(tokens) Now, to answer your questions: During the forward pass, the values for each of the tokens in your sentence are going to be obtained in a similar way as NumPy's indexing works. Because in the backend this is a differentiable operation, during the backward pass (training) PyTorch is going to compute the gradients for each of the embeddings and readjust them accordingly. The weights are the embeddings themselves. The word embedding matrix is actually a weight matrix that will be learned during training. There is no actual function per se. As we defined above, the sentence is already tokenized (each word is represented with a unique integer), and we can just obtain the embeddings for each of the tokens in the sentence. Finally, as I mentioned the example with the indexing many times, let us try it out. # Let us assume that we have a pre-trained embedding matrix pretrained_embeddings = torch.rand(10, 50) # We can initialize our embedding module from the embedding matrix embedding = torch.nn.Embedding.from_pretrained(pretrained_embeddings) # Some tokens tokens = torch.tensor([1,5,9], dtype=torch.long) # Token embeddings from the lookup table lookup_embeddings = embedding(tokens) # Token embeddings obtained with indexing indexing_embeddings = pretrained_embeddings[tokens] # Voila! They are the same np.testing.assert_array_equal(lookup_embeddings.numpy(), indexing_embeddings.numpy())
https://stackoverflow.com/questions/58718612/
Average error and standard deviation of error within epoch not correctly updating - PyTorch
I am attempting to use stochastic gradient descent, but I am unsure as to why my error/loss is not decreasing. The information I use from the train dataframe is the index (each sequence) and the binding affinity, and the goal is to predict the binding affinity. Here is what the head of the dataframe looks like:

For training, I build a one-hot encoding of a sequence and calculate a score with another matrix, and the goal is to get this score as close to the binding affinity as possible (for any given peptide). How I calculate the score, together with my training loop, is shown in the code below, but I don't think an explanation is necessary to work out why my error fails to decrease.

#ONE-HOT ENCODING
AA = ['A','R','N','D','C','Q','E','G','H','I','L','K','M','F','P','S','T','W','Y','V']
loc = ['N','2','3','4','5','6','7','8','9','10','11','C']
aa = "ARNDCQEGHILKMFPSTWYV"

def p_one_hot(seq):
    c2i = dict((c, i) for i, c in enumerate(aa))
    int_encoded = [c2i[char] for char in seq]
    onehot_encoded = list()
    for value in int_encoded:
        letter = [0 for _ in range(len(aa))]
        letter[value] = 1
        onehot_encoded.append(letter)
    return torch.Tensor(np.transpose(onehot_encoded))

#INITIALIZE TENSORS
a = Var(torch.randn(20, 1), requires_grad=True)  # initialize similarity matrix - random array of 20 numbers
freq_m = Var(torch.randn(12, 20), requires_grad=True)
freq_m.data = (freq_m.data - freq_m.min().data) / (freq_m.max().data - freq_m.min().data)  # 0 to 1 scaling

optimizer = optim.SGD([torch.nn.Parameter(a), torch.nn.Parameter(freq_m)], lr=1e-6)
loss = nn.MSELoss()

#TRAINING LOOP
epochs = 100
for i in range(epochs):
    #RANDOMLY SAMPLE DATA
    train = all_seq.sample(frac=.03)
    names = train.index.values.tolist()
    affinities = train['binding_affinity']
    print('Epoch: ' + str(i))
    #forward pass
    iteration_loss = []
    for j, seq in enumerate(names):
        sm = torch.mm(a, a.t())  # make similarity matrix square symmetric
        freq_m.data = freq_m.data / freq_m.data.sum(1, keepdim=True)  # each row must sum to 1 (probabilities of each amino acid at each position)
        affin_score = affinities[j]
        new_m = torch.mm(p_one_hot(seq), freq_m)
        tss_m = new_m * sm
        tss_score = tss_m.sum()
        sms = sm
        fms = freq_m
        error = loss(tss_score, torch.FloatTensor(torch.Tensor([affin_score])))
        iteration_loss.append(error.item())
        optimizer.zero_grad()
        error.backward()
        optimizer.step()
    mean = statistics.mean(iteration_loss)
    stdev = statistics.stdev(iteration_loss)
    print('Epoch Average Error: ' + str(mean) + '. Epoch Standard Deviation: ' + str(stdev))
    iteration_loss.clear()

After each epoch, I print the average of all errors for that epoch, as well as the standard deviation. Each epoch runs through about 45,000 sequences. However, after 10 epochs I'm still not seeing any improvement in my error, and I'm unsure why. Here is the output I am seeing:

Are there any ideas as to what I'm doing wrong? I'm new to PyTorch, so any help is appreciated. Thank you!
It turns out that wrapping the tensors in torch.nn.Parameter() when building the optimizer creates new tensors, so the optimizer updates those copies instead of the tensors actually used in the forward pass, and the updates never take hold. Removing the wrapping now shows a decreasing error.
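For reference, a minimal sketch of the corrected optimizer setup, reusing the variable names from the question. Since torch.nn.Parameter(t) creates a new tensor, passing the leaf tensors directly keeps the optimizer pointed at the values used in the forward pass.

import torch
from torch import optim

a = torch.randn(20, 1, requires_grad=True)
freq_m = torch.randn(12, 20, requires_grad=True)

# Register the very tensors used in the forward pass, not copies of them;
# plain tensors with requires_grad=True are valid optimizer parameters
optimizer = optim.SGD([a, freq_m], lr=1e-6)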
https://stackoverflow.com/questions/58722615/
"no module named torch". But installed pytorch 1.3.0 with conda in Ubuntu 18.04.02 Server Edition
I installed PyTorch with conda:

(base) (3.8.0/envs/my_virtual_env-3.8.0) marco@pc:~/facenet_pytorch/examples$ conda install pytorch torchvision cpuonly -c pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: done

# All requested packages already installed.

I updated conda:

(base) (3.8.0/envs/my_virtual_env-3.8.0) marco@pc:~/facenet_pytorch/examples$ conda update conda
Collecting package metadata (current_repodata.json): done
Solving environment: done

# All requested packages already installed.

I installed mkl=2019:

(base) (3.8.0/envs/my_virtual_env-3.8.0) marco@pc:~/facenet_pytorch/examples$ conda install mkl=2019
Collecting package metadata (current_repodata.json): done
Solving environment: done

# All requested packages already installed.

(base) (3.8.0/envs/my_virtual_env-3.8.0) marco@pc:~/facenet_pytorch/examples$ conda list | grep torch
cpuonly          1.0    0            pytorch
facenet-pytorch  0.1.0  pypi_0       pypi
pytorch          1.3.0  py3.7_cpu_0  [cpuonly]  pytorch
torchfile        0.1.0  pypi_0       pypi
torchvision      0.4.1  py37_cpu     [cpuonly]  pytorch

But it still says "No module named 'torch'":

(base) (3.8.0/envs/my_virtual_env-3.8.0) marco@pc:~/facenet_pytorch/examples$ python3
Python 3.8.0 (default, Oct 30 2019, 16:20:23)
[GCC 7.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
>>>

I discovered that the problem appears only with the Python 3.8.0 version:

(base) marco@pc:~/facenet_pytorch$ python3
Python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>>

Ubuntu 18.04.02 Server Edition.

Or maybe it's just a matter of Python environments, as you said. But I do not understand why it doesn't work after simply activating the conda environment with "conda activate".

Marco
First, create a conda environment (you can use any Python version):

conda create -n pytorch_env python=3

Activate the environment:

conda activate pytorch_env

Now install PyTorch:

conda install pytorch-cpu torchvision -c pytorch

Go to the Python shell and import it:

import torch
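As a quick sanity check (a generic snippet, not specific to this setup), you can confirm from inside the activated environment that the interpreter and the installed PyTorch actually match:

import sys
print(sys.executable)     # should point inside the pytorch_env environment
import torch
print(torch.__version__)  # e.g. 1.3.0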
https://stackoverflow.com/questions/58732358/