st180100
outputs and true_value must have the same batch_size. [[0.4820, 0.5180]] has shape [1, 2] (batch_size, num_labels), while true_value has shape [2] (num_labels). Therefore you must change true_value's shape to [1, 2] (batch_size, num_labels)!
st180101
Adding more info to @larcane's explanation: true_value should be a LongTensor, which means the type of the elements inside the tensor is long. Your true_value should contain only one value, in this case [0] or [1]. Please make it clear whether your output should map to [0, 1] or [1, 0].
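A minimal sketch of the shapes nn.CrossEntropyLoss expects for this case; the logit values are placeholders:

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
outputs = torch.tensor([[0.4820, 0.5180]])  # [batch_size=1, num_labels=2], float logits
true_value = torch.tensor([1])              # [batch_size=1], long class index (0 or 1)
loss = criterion(outputs, true_value)
print(loss)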
st180102
Thank you, larcane and thecho7! I finally succeeded in training the model, thanks. The correct shapes of outputs and label (when batch_size = 1) are [1x2] and [1x1] (when using CrossEntropyLoss). And here is my training code for those who may need it (binary classification):

criterion = nn.CrossEntropyLoss()
for idx, (raw_data, true_value) in enumerate(train_module):
    raw_data = raw_data.to(device).float()
    true_value = true_value.to(device)
    true_value = true_value.reshape(1)
    optimizer.zero_grad()
    outputs = model(raw_data)
    loss = criterion(outputs.float(), true_value.long())
    loss.backward()
    optimizer.step()
st180103
Hello. I am trying to use an augmented and a non-augmented dataset in alternating epochs (for example: augmented in one epoch, not augmented in a different epoch), but I couldn't figure out how to do it. My approach was loading the DataLoader again in each epoch, but I think it's wrong, because when I print the indexes in __getitem__ in my Dataset, there are a lot of duplicate indexes. Here is my code for training: CodePile | Easily Share Piles of Code Here is my code for the dataset: CodePile | Easily Share Piles of Code How can I achieve what I want? Thanks in advance.
st180104
Solved by mMagmer in post #6: in your code, it's for creating the val and train set; this is one way of splitting your dataset into train and val sets. If you set sampler to None, the dataloader chooses samples from all samples in your dataset. transform doesn't work this way by default; in most cases the dataset's __getitem__ applies the transform t…
st180105
hi, you're not clear about what you want; at least I don't get it. Also, re-creating the dataloader cannot cause duplicate indexes.

import torch

class Dataset1(torch.utils.data.Dataset):
    def __init__(self, lenght):
        self.idx = torch.arange(lenght)**2

    def __len__(self):
        return len(self.idx)

    def __getitem__(self, i):
        return i, self.idx[i]

train_sampler = torch.utils.data.SubsetRandomSampler(torch.arange(100))
Training_Data = Dataset1(100)
train_loader = torch.utils.data.DataLoader(Training_Data, batch_size=4, sampler=train_sampler,
                                           num_workers=2, pin_memory=False)
idx = []
for x, y in train_loader:
    print(x, y)
    idx.append(x)
all_index, _ = torch.sort(torch.cat(idx))
print(all_index)

output:

tensor([91, 55, 82, 31]) tensor([8281, 3025, 6724, 961])
tensor([71, 23, 29, 25]) tensor([5041, 529, 841, 625])
tensor([70, 10, 26, 93]) tensor([4900, 100, 676, 8649])
tensor([75, 96, 36, 85]) tensor([5625, 9216, 1296, 7225])
tensor([48, 86, 38, 22]) tensor([2304, 7396, 1444, 484])
tensor([44, 46, 98, 3]) tensor([1936, 2116, 9604, 9])
tensor([12, 33, 27, 99]) tensor([ 144, 1089, 729, 9801])
tensor([43, 65, 16, 2]) tensor([1849, 4225, 256, 4])
tensor([63, 97, 51, 47]) tensor([3969, 9409, 2601, 2209])
tensor([69, 45, 76, 56]) tensor([4761, 2025, 5776, 3136])
tensor([32, 11, 68, 64]) tensor([1024, 121, 4624, 4096])
tensor([52, 39, 18, 5]) tensor([2704, 1521, 324, 25])
tensor([ 0, 79, 92, 35]) tensor([ 0, 6241, 8464, 1225])
tensor([77, 54, 21, 60]) tensor([5929, 2916, 441, 3600])
tensor([81, 1, 57, 58]) tensor([6561, 1, 3249, 3364])
tensor([ 7, 13, 84, 94]) tensor([ 49, 169, 7056, 8836])
tensor([67, 14, 80, 89]) tensor([4489, 196, 6400, 7921])
tensor([83, 20, 53, 37]) tensor([6889, 400, 2809, 1369])
tensor([62, 66, 15, 78]) tensor([3844, 4356, 225, 6084])
tensor([59, 90, 17, 42]) tensor([3481, 8100, 289, 1764])
tensor([72, 41, 95, 50]) tensor([5184, 1681, 9025, 2500])
tensor([73, 28, 74, 49]) tensor([5329, 784, 5476, 2401])
tensor([19, 34, 61, 30]) tensor([ 361, 1156, 3721, 900])
tensor([ 8, 87, 6, 9]) tensor([ 64, 7569, 36, 81])
tensor([40, 88, 24, 4]) tensor([1600, 7744, 576, 16])
tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17,
        18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
        36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
        54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
        72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
        90, 91, 92, 93, 94, 95, 96, 97, 98, 99])
st180106
Thank you so much for your answer. I just want to understand how sampling works. In one iteration I want my program to select a batch that has augmented images, and in another iteration I want it to select a batch that doesn't have augmented images.
st180107
Also, I researched sampling and couldn't understand anything about it. I mean, what is the random sampler doing? What is its purpose? If you could answer this too, it would be amazing.
st180108
In your code, it's for creating the val and train set; this is one way of splitting your dataset into train and val sets. If you set sampler to None, the dataloader chooses samples from all samples in your dataset.

muhammedcanpirincci: i want my program to select batch that doesnt have augmented image.

transform doesn't work this way by default; in most cases the dataset's __getitem__ applies the transform to a single sample. If you want, you can return both the transformed and the original image in __getitem__ and use the one that you want, as in the sketch below:

def __getitem__(self, i):
    img, t = ...  # load image and label
    return img, self.transform(img), t
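A hedged sketch of that idea as a full dataset; the file list, labels, and transform are assumptions, and both views are assumed to end up as tensors of the same size so the default collate can stack them:

import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from PIL import Image

class BothViewsDataset(Dataset):
    # hypothetical dataset: 'paths' and 'labels' are lists you already have
    def __init__(self, paths, labels, transform):
        self.paths = paths
        self.labels = labels
        self.transform = transform          # augmentation pipeline ending in ToTensor()
        self.plain = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        img = Image.open(self.paths[i]).convert('RGB')
        return self.plain(img), self.transform(img), self.labels[i]

# in the training loop, pick the augmented or the plain view per epoch:
# loader = DataLoader(BothViewsDataset(paths, labels, my_transform), batch_size=32, shuffle=True)
# for epoch in range(num_epochs):
#     for plain, augmented, target in loader:
#         batch = augmented if epoch % 2 == 0 else plain
#         ...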
st180109
Thank you so much. So the sampler is getting the data I want, either with random indexes or in sequential order.
st180110
torch.utils.data.SubsetRandomSampler by default samples randomly with equal probability for each sample and without replacement.
st180111
What do you mean by "random with equal probability for each sample"? I thought if we have a dataset like this: x,y,x,x,y,y,x,x,y,x,x,y,x,x,x,x,y,y,y,x,x,y,y,y,x,y we are sampling it randomly for each batch, and in the for loop each batch will look like this: x,x,x,y (if the batch size is 4, with indexes chosen randomly).
st180112
Yes. But you can use torch.utils.data.WeightedRandomSampler (https://pytorch.org/docs/stable/data.html#torch.utils.data.WeightedRandomSampler) to set a different weight for each sample.
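A small sketch of how that could look for a binary-labelled dataset; the label list is an assumption about how your data is stored:

import torch
from torch.utils.data import WeightedRandomSampler, DataLoader

labels = torch.tensor([0, 0, 0, 1, 0, 1, 0, 0])          # hypothetical per-sample labels
class_counts = torch.bincount(labels)                     # samples per class
weights = 1.0 / class_counts[labels].float()              # rarer class gets a larger weight
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
# loader = DataLoader(dataset, batch_size=4, sampler=sampler)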
st180113
My data is stored in class-specific files; each file contains all the data for one class. I'm currently loading it with a custom IterableDataset and a multi-worker DataLoader. As my dataset is quite large, each worker is responsible for a fixed set of n_files/n_workers files, as I do not want to load every file into memory on every worker. The problem is that, as each file is class-specific, each worker only produces batches containing the classes it has been assigned (based on worker ID). Each worker has its own copy of the dataset and collate_fn, so it batches within the worker. How to shuffle an iterable dataset discusses how to shuffle using torch.utils.data.datapipes.iter.combinatorics.ShuffleIterDataPipe (which isn't in the docs?). It applies to any iterator, but applying it to the Dataset shuffles within each worker and thus within each batch, while applying it to the DataLoader only shuffles the order in which batches are yielded. The shuffle flag in the DataLoader also just throws me an error about not selecting a mode. Is there some argument in DataLoader that can let me mix and re-collect some buffer of completed batches? Ideally I don't want to just iterate, collect some n_worker batches and manually shuffle them. Thanks! Here's a simple example of what I'm doing:

class CustomDataset(IterableDataset):
    def __init__(self, files: List[str]):
        self.files = files

    def __iter__(self):
        worker_info = torch.utils.data.get_worker_info()
        if worker_info is None:
            files_chunk = self.files
        else:
            n_workers = worker_info.num_workers
            n_files = len(self.files)
            chunk_size = n_files // n_workers
            chunk_start = chunk_size * worker_info.id
            files_chunk = self.files[chunk_start: chunk_start + chunk_size]
        files_data = [open(f, 'r') for f in files_chunk]
        for line in chain.from_iterable(zip(*files_data)):
            yield line

dataloader = DataLoader(dataset, batch_size=args.batch_size, num_workers=n_cpus, pin_memory=True)
st180114
I've managed to find a solution that works, even if it is a bit ugly.

dataloader = DataLoader(dataset, batch_size=1, num_workers=n_cpus,
                        collate_fn=lambda batch: {k: v[0] for k, v in default_collate(batch).items()},
                        prefetch_factor=1)
shuffled = combinatorics.ShufflerIterDataPipe(dataloader, buffer_size=2 * args.batch_size)
dataloader = DataLoader(shuffled, batch_size=args.batch_size, num_workers=0, collate_fn=custom_collate_fn)

The lambda is there to remove the extra dimension added at the front when the first dataloader runs with batch_size=1. ShufflerIterDataPipe then shuffles the single instances before they are batched by the second DataLoader. In this case, by running it with num_workers=0 I can also run GPU operations in there, though I've found it to conflict with pin_memory=True.
st180115
I’d like to ask how the “dataset” argument of torch.utils.data.DataLoader is accessed by worker processes during multi-process data loading. My dataset object contains a database handle that can’t be shared across multiple processes. Single-process data loading works as intended. During multi-process data loading, all processes erroneously attempt to use the same database connection handle. I’ve implemented custom copying and pickling that create dedicated database connections for each object. This works as intended when I deepcopy or pickle the object myself. However, it doesn’t seem to solve this problem. How do I open dedicated database connections as my dataset object is sent to, or accessed by worker processes? Thanks for any pointers. Update I’ve found a hacky solution by using torch.utils.data.get_worker_info to detect whether the current dataset object is inside a worker process, which triggers a one-time close+reopen of the database connection. This results in each loading process using a dedicated database connection as intended. I’d much rather do this when the dataset object is handed over to the worker process, but still haven’t had any luck implementing that.
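One way to hook into that hand-over point is the DataLoader's worker_init_fn, which runs once inside each worker process after the dataset has been copied there. A sketch below, using sqlite3 purely as a stand-in for whatever database client you actually use, and assuming the dataset stores its connection path in a db_path attribute:

import sqlite3
import torch
from torch.utils.data import DataLoader

def worker_init_fn(worker_id):
    info = torch.utils.data.get_worker_info()
    dataset = info.dataset                            # this worker's own copy of the dataset object
    dataset.conn = sqlite3.connect(dataset.db_path)   # open a fresh, per-worker connection

# loader = DataLoader(dataset, batch_size=32, num_workers=4, worker_init_fn=worker_init_fn)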
st180116
Hi! First of all, I have been reading posts, GitHub issues and threads for a few hours. I learned that multiprocessing on Windows and/or Jupyter (Google Colab) seems to be a pain or not to work at all. After a lot of trial and error, following a lot of advice, it seems to work now for me, giving me an immense speed improvement. But sadly only with a downloaded dataset. If I try it with my own, it freezes up immediately and I have to restart the runtime, all while the CPU sits at 0%. I also read a lot of posts that are similar to this one, and none seem to have the same issue. I believe there is an error in my Dataset class that I am unable to find. I hope someone can help me out. This test works perfectly fine:

import time
from tqdm import tqdm
import torch
import torchvision

def train(data_loader):
    start = time.time()
    for _ in tqdm(range(10)):
        for x in data_loader:
            pass
    end = time.time()
    return end - start

if __name__ == '__main__':
    train_dataset = torchvision.datasets.FashionMNIST(
        root=".",
        train=True,
        download=True,
        transform=torchvision.transforms.ToTensor()
    )
    batch_size = 32
    train_loader1 = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True,
                                                pin_memory=True, num_workers=0)
    train_loader2 = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True,
                                                pin_memory=True, num_workers=8)
    train_loader3 = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True,
                                                pin_memory=True, num_workers=8, persistent_workers=True)
    train_loader4 = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True,
                                                pin_memory=True, num_workers=10, persistent_workers=True)
    print(train(train_loader1))
    print(train(train_loader2))
    print(train(train_loader3))
    print(train(train_loader4))

This gets stuck immediately after data_loader1, so as soon as I switch on multiple workers. It freezes and the CPU chills at 0%. It looks like the iterator just never returns a value? Similar to Multiple Dataloader Workers in multi Threading?

import time
import numpy as np
import torch
from torch.utils.data import DataLoader  # gives easier dataset management by creating mini batches etc.
from tqdm import tqdm  # for a nice progress bar

def train(data_loader):
    start = time.time()
    for _ in tqdm(range(10)):
        for x in data_loader:
            pass
    end = time.time()
    return end - start

if __name__ == '__main__':
    batch_size = 64
    root_dir = R'[REDACTED]\MiniStoneShaderDataset'
    dataset = smallShaderDataset(csv_file='Labels.csv', root_dir=root_dir)
    train_dataset, test_dataset = torch.utils.data.random_split(dataset, [99, 9900])
    train_loader1 = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=0,
                               pin_memory=True)
    train_loader2 = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=4,
                               pin_memory=True, persistent_workers=True)
    train_loader3 = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, num_workers=8,
                               pin_memory=True, persistent_workers=True)
    print(train(train_loader1))
    print(train(train_loader2))
    print(train(train_loader3))

Using this Dataset class:

import os
import pandas as pd
import torch
from torch.utils.data import Dataset
import numpy as np
#from skimage import io

class smallShaderDataset(Dataset):
    def __init__(self, csv_file, root_dir, transform=None):
        self.annotations = pd.read_csv(os.path.join(root_dir, csv_file))
        self.root_dir = root_dir
        self.transform = transform

    def __len__(self):
        return len(self.annotations)

    def __getitem__(self, index):
        data_path = os.path.join(self.root_dir, 'LearnDataCombined', self.annotations.iloc[index, 0] + ".npy")
        input = np.load(data_path)
        parameters = self.annotations.iloc[index, 1:]
        parameters = np.array([parameters], dtype=float).flatten()
        input = np.array(input, dtype=float)
        sample = {'input': input, 'parameters': parameters}
        return input, parameters
        #return sample

Everything works (slowly) with num_workers = 0. There is probably something wrong here which I am unable to find in full tunnel-vision mode. I really hope someone can help me find the issue. If it really is a bug in the Dataset, why does the downloaded set work fine? Thank you for any help! Greetings
st180117
def train(data_loader):
    start = time.time()
    for _ in tqdm(range(10)):
        for x in data_loader:
            pass
    end = time.time()
    return end - start

This part is super confusing. Is it intentional that you want to iterate over the DataLoader 10 times?
st180118
Yes, it's intentional, just so the results are easier to compare; it is also less vulnerable to single data points skewing the results.
st180119
Have you tried to del train_loaderN after each step? You enabled pin_memory and persistent_workers for each DataLoader, so these threads and child processes won't be cleaned up.
st180120
I feel like that's how it should be, because they are reused. I read something about Windows and interactive IDEs on GitHub, but I am not quite sure anymore. I will try your suggestion, but I think it actually never gets that far: it does not run out of memory or anything, it just stops executing.
st180121
Curious: are you running your code in a notebook? And does it work with a smaller number of workers?
st180122
Yes, in Colab. It does not work with any number of workers above 0. It DOES work with a different dataset though, but I described that quite thoroughly in the original question above.
st180123
What I can recommend trying is to use a Tensor rather than a pandas DataFrame for your self.annotations. I am not sure if this is something related to spawning processes with a pandas DataFrame when using multiprocessing in Colab.
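For example, the DataFrame could be converted to plain Python/NumPy objects once in __init__; a sketch below as a hypothetical variant of the class above, with the column layout (name column first, numeric parameters after) assumed:

import os
import numpy as np
import pandas as pd
import torch

class smallShaderDatasetNoPandas(torch.utils.data.Dataset):
    def __init__(self, csv_file, root_dir):
        df = pd.read_csv(os.path.join(root_dir, csv_file))
        self.names = df.iloc[:, 0].tolist()                        # plain list of strings
        self.params = df.iloc[:, 1:].to_numpy(dtype=np.float64)    # plain ndarray, no DataFrame kept around
        self.root_dir = root_dir

    def __len__(self):
        return len(self.names)

    def __getitem__(self, index):
        path = os.path.join(self.root_dir, 'LearnDataCombined', self.names[index] + '.npy')
        return np.load(path).astype(np.float64), self.params[index]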
st180124
I am using torch 1.9.0 in Python 3.7.6 on Windows 10. I am finding that each DataLoader worker, when starting up, imports the script from which the DataLoader is being called. I tried to make a minimal example below. Is this the expected behavior? It really surprises me, since it seems to assume the caller script can be re-imported with no side-effects in every case. Or am I using DataLoader incorrectly?

import inspect
import torchvision
import torch.utils.data as tu_data

def get_dl():
    batch_size = 1
    num_workers = 4
    print('batch_size:', batch_size)
    print('num_workers:', num_workers)
    transform = torchvision.transforms.ToTensor()
    mnist_data = torchvision.datasets.MNIST('./data/', transform=transform, download=True)
    data_loader = tu_data.DataLoader(mnist_data, batch_size=batch_size, shuffle=False,
                                     num_workers=num_workers, pin_memory=True)
    count = 0
    print('Entering data_loader loop....')
    for datum in data_loader:
        print('count:', count)
        count += 1
        if count > 5:
            break

if __name__ == '__main__':
    get_dl()
else:
    print('MYSTERY IMPORT!', flush=True)
    print('__name__ is', __name__)
    print('Importer is:', inspect.currentframe().f_back.f_code.co_name)  # Print out how we got here.

The output looks like this (I am not worried about the warning):

C:\Users\peria\Anaconda3\envs\segmenter\lib\site-packages\torchvision\datasets\mnist.py:498: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:180.)
  return torch.from_numpy(parsed.astype(m[2], copy=False)).view(*s)
Entering data_loader loop....
MYSTERY IMPORT!
__name__ is __mp_main__
Importer is: _run_code
MYSTERY IMPORT!
__name__ is __mp_main__
Importer is: _run_code
MYSTERY IMPORT!
__name__ is __mp_main__
Importer is: _run_code
MYSTERY IMPORT!
__name__ is __mp_main__
Importer is: _run_code
count: 0
count: 1
count: 2
count: 3
count: 4
count: 5
st180125
Yes, I think this is expected, as Windows uses spawn instead of fork as described in the Windows FAQ: "The implementation of multiprocessing is different on Windows, which uses spawn instead of fork. So we have to wrap the code with an if-clause to protect the code from executing multiple times." Refactor your code into the following structure:

import torch

def main():
    for i, data in enumerate(dataloader):
        # do something here
        pass

if __name__ == '__main__':
    main()
st180126
Thank you for answering a FAQ; I didn't yet understand how to search for this particular thing in the FAQs. My example has the recommended structure, just by luck. But now I see that putting everything in main() (or get_dl() as I did) completely prevents the "side effects" I was worried about. We just have to make sure there are no executable statements outside of main(), and that takes care of it. And I also guess the DataLoader needs to import some modules for each worker; that's why it does the import. Since it is importing a whole script, it might import some things it does not need, but that does no harm. p.s. I am always glad to see your responses to a question I have searched for; I always learn something from them.
st180127
Hi, I have an image of dimension 1024x1024 which, after flattening, comes out to be 1048576x1. Apart from this I have a JSON file containing the coordinate information (3x1). In all there are 17,000 entries in that JSON file. As such, I am appending the 3x1 vector to the flattened image; this will create one sample of dimension 1048579x1. The final dataset dimension will be 1048579x17,000. Is there an efficient way to create this and load it appropriately in PyTorch? I tried reducing the size of the image to 256x256, but my current code leads to a memory issue (even when it's run on Colab Pro). My current code (which was written in NumPy) is as follows:

import json
import cv2
import numpy as np

image = cv2.imread('blender_files/Model_0/image_0/model_0_0.jpg', 0)
shape = image.shape
image_1 = cv2.resize(image, (256, 256))
flat_img = image_1.ravel()
print(flat_img.shape)

with open('blender_files/Model_0/image_0/vertices_0_0.json') as f:
    json1_str = f.read()
json1_data = json.loads(json1_str)
print(len(json1_data))

local_coordinates = []
for i in range(len(json1_data)):
    local_coord = json1_data[i]['local_coordinate']
    local_coord = np.array(local_coord)
    new_arr = np.hstack((flat_img, local_coord))
    new_arr = new_arr.tolist()
    local_coordinates.append(new_arr)
#local_coords = np.array(json1_data['local_coordinate'])
#print(local_coords.shape)
#new_arr = np.hstack((flat_img, local_coords))
print(new_arr.shape)
local_coordinates = np.array(local_coordinates)
print(local_coordinates.shape)
st180128
In my experience, it is not a good approach to stack such a large tensor at once. I recommend you store a tensor of size (256x256 + 3, 1) which represents a single data point. Storing each data point separately has the advantages of easy management, fast loading, and so forth. Think about why such huge datasets (well-known open datasets) manage all data points (images) separately.
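A sketch of that layout, with made-up paths: each sample becomes one small .npy file, and the Dataset loads a single file per index:

import numpy as np
import torch
from torch.utils.data import Dataset

# one-time conversion (variable names follow the snippet above, paths are hypothetical):
# for i, coord in enumerate(coords):
#     sample = np.hstack((flat_img, coord))            # shape (256*256 + 3,)
#     np.save(f'samples/sample_{i:05d}.npy', sample)

class PerFileDataset(Dataset):
    def __init__(self, files):
        self.files = files                               # list of per-sample .npy paths

    def __len__(self):
        return len(self.files)

    def __getitem__(self, i):
        sample = torch.from_numpy(np.load(self.files[i])).float()
        return sample[:-3], sample[-3:]                  # flattened image part, 3x1 coordinate part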
st180129
So, just to rephrase, you are saying that once I get that tensor X, I should simply save it in an npy file. And thanks for the comment; in hindsight I think that makes more sense. Hence my final dataset will contain about 17,000 npys, right? Also, is it better to downsample the image from 1024 to 256? Because I feel that during training, having a 1048579 vector might be a problem.
st180130
Exactly. And downsampling is up to your model. However, I haven't ever seen a 1048579-dimensional vector used for training. I recommend you use other models which utilize the 3x1 vector effectively; they perhaps have a name like 'Conditional ~~' something.
st180131
I'm trying to manually split my training data into individual batches in a way where I can access the desired batch by indexing. Hence, I can't rely on DataLoader to do the batch splitting, since it's unindexable. I've tried some approaches to achieve this as per this link, but I've been getting some weird behavior. So what is the proper way to implement this?
st180132
@Omar_AlSuwaidi, you should be able to use a simple list for achieving the indexer:

import math
import torch
import torch.nn as nn

X = torch.rand(1000, 10, 4)
batch_size = 64
num_batches = math.ceil(X.size()[0] / batch_size)
X_list = [X[batch_size*y:batch_size*(y+1), :, :] for y in range(num_batches)]
print(X_list[0].size())
# torch.Size([64, 10, 4])
st180133
Hey, thanks for the response. Yeah, but unfortunately my training data does not exist as type torch.Tensor; rather it comes from a datasets object. If you can view the link in the question you'll get a better idea of what I'm talking about.
st180134
@Omar_AlSuwaidi I don't think there is any difference at the data level, because I did a 1:1 comparison:

seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)

def train_func(x):
    x = torch.Tensor(plt.imread(x))
    print(x.size())
    return x

train_data = datasets.Caltech101(root='data',
                                 transform=transforms.Compose([
                                     transforms.Resize((128, 128)),
                                     transforms.Grayscale(),
                                     transforms.ToTensor(),
                                 ]),
                                 download=True)

BS = 4
num_batches = len(train_data) // BS
print("Num batches is {0}".format(num_batches))
sequence = list(range(len(train_data)))
np.random.shuffle(sequence)  # To shuffle the training data
subsets = [Subset(train_data, sequence[i * BS: (i + 1) * BS]) for i in range(num_batches)]
train_loader = [DataLoader(sub, batch_size=BS) for sub in subsets]  # Create multiple batches, each with BS samples

BS = 4
num_batches = len(train_data) // BS
print("Num batches is {0}".format(num_batches))
np.random.shuffle(train_data)  # To shuffle the training data
train_loader1 = [DataLoader(train_data[i*BS: (i+1)*BS], batch_size=BS) for i in range(num_batches)]

Comparison code:

for i, loader in enumerate(train_loader):
    loader1 = list(train_loader1[i])
    for j, (x, y) in enumerate(loader):
        if np.sum((x != loader1[j][0]).detach().numpy().reshape(-1)) != 0:
            print("Mismatch at Loader {0} Data Index {1}".format(i, j))
print("Completed")

They are pristine. Please share the entire script, as the error is probably in the way the models are getting initialized and trained. You need to ensure that there is a 1:1 match at model initialization and training as well.
st180135
Hey, thanks for checking back. Well, yeah, that's the whole point: dividing train_data into batches manually using both methods gives drastically different results during training (the training procedure is quite simple, it's exactly as shown in the link for both methods)! The first method utilizes the Subset class to divide train_data into batches, while the second method casts train_data directly into a list and then indexes multiple batches out of it. While they both are indeed the same at the data level (the order of the images in each batch is identical), training any model with the same weight initialization and random seeds gives very different results (method 1 always gives better results for some reason). If you read the "UPDATE:" in the link, you might find the behavior mentioned there quite interesting; even though the images from both methods get transformed using T_train, it seems that something weird is going on with method 2. I believe the information below "UPDATE:" is the key to this whole discussion, but I'm not sure what exactly.
st180136
@Omar_AlSuwaidi In that case, can you provide the T_train function? I feel this is a case of different random seeds. There are two additional seeds that you need to set apart from the ones you have set.
st180137
Yeah, sure!

T_train = transforms.Compose([transforms.RandomHorizontalFlip(),
                              transforms.RandomCrop(32, 4),
                              transforms.ToTensor(),
                              transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

But if the cause were different random seeds, then one shouldn't expect such a drastic difference in accuracy. Moreover, the time taken during training is very different for the two (it seems like the second method is always much faster regardless of what's in T_train; also, method 2's performance gets worse when you add in the random H-flips and random crops).
st180138
Hello everyone, I'm a newbie to PyTorch. There are two data simulation approaches in my training: one works fast, and one works much slower. The worker id is used to distinguish the two approaches, e.g. there are 10 workers in total, workers 0 to 8 use the fast simulation, and worker 9 uses the slow simulation. Since multiple processes are used in the DataLoader, it is supposed that the DataLoader and the training process work like the producer-consumer pattern: once a data batch is produced by a worker, it is added to a queue. On the other side, the training process gets data batches from the queue, and waits if the queue is empty. However, I found that the training time is the same as if all workers used the slow approach. So I deduce that the workers are not run as independent subprocesses, but in a loop, and the data simulation is slowed down by worker 9. My DataLoader is initialized like this:

dataloader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE, num_workers=NUM_WORKERS)

My question is: is there any way to keep the process from getting stuck behind the slow workers? Is there any parameter I am missing? Thank you very much!
st180139
For example, I have tried this toy dataset:

import numpy as np
import torch
import time

class ToyDataset(torch.utils.data.IterableDataset):
    def __init__(self, numworkers):
        super().__init__()
        self.numworkers = numworkers
        self.batchidx = 0

    def __iter__(self):
        return self

    def __next__(self):
        id = 0
        info = torch.utils.data.get_worker_info()
        if info is not None:
            id = info.id
        if id < self.numworkers - 1:
            time.sleep(0.5)
            print('fast worker {:d}, batch {:d}'.format(id, self.batchidx))
        else:
            time.sleep(5)
            print('-slow worker {:d}, batch {:d}'.format(id, self.batchidx))
        retval = self.batchidx
        self.batchidx += 1
        return retval

BATCH_SIZE = 1
NUM_WORKERS = 2
TIMEOUT = 0
PREFETCH_FACTOR = 2

dataset = ToyDataset(NUM_WORKERS)
dataloader = torch.utils.data.DataLoader(
    dataset,
    batch_size=BATCH_SIZE,
    num_workers=NUM_WORKERS,
    timeout=TIMEOUT,
    prefetch_factor=PREFETCH_FACTOR)

it = iter(dataloader)
for i in range(1000):
    try:
        data = next(it)
        print('----------')
    except RuntimeError:
        print('timeout')

The output is something like this:

fast worker 0, batch 5
-slow worker 1, batch 3
fast worker 0, batch 6
-slow worker 1, batch 4
fast worker 0, batch 7
-slow worker 1, batch 5

The data generation is always slowed down by the slow worker.
st180140
Hey everyone, I am still a PyTorch noob. I want to do incremental learning and want to split my training dataset (CIFAR-10) into 10 equal parts (or 5, 12, 20, …), each part with the same target distribution. I already tried to do it with sklearn (train_test_split), but it can only split the data in half:

from sklearn.model_selection import train_test_split

transform = transforms.Compose([transforms.ToTensor(),
                                transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)

targets = trainset.targets  # class labels of the dataset
data1_idx, data2_idx = train_test_split(
    np.arange(len(targets)), test_size=0.5, shuffle=True, stratify=targets)

data1_sampler = torch.utils.data.SubsetRandomSampler(data1_idx)
data2_sampler = torch.utils.data.SubsetRandomSampler(data2_idx)
data1_loader = torch.utils.data.DataLoader(trainset, batch_size=4, sampler=data1_sampler)
data2_loader = torch.utils.data.DataLoader(trainset, batch_size=4, sampler=data2_sampler)

How would you do it in PyTorch? Maybe you can point me to some example code.
st180141
I think sklearn.model_selection.StratifiedKFold might be useful, as it allows you to create multiple splits in a stratified fashion.
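A sketch of how the ten stratified parts could be built from the CIFAR-10 targets and wrapped in loaders; the trainset variable is taken from the snippet above, and the test fold of each split is used as one of the ten disjoint parts:

import numpy as np
import torch
from sklearn.model_selection import StratifiedKFold

targets = np.array(trainset.targets)                          # CIFAR-10 class labels
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
loaders = []
for _, part_idx in skf.split(np.zeros((len(targets), 1)), targets):
    sampler = torch.utils.data.SubsetRandomSampler(part_idx)
    loaders.append(torch.utils.data.DataLoader(trainset, batch_size=4, sampler=sampler))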
st180142
No, I don’t think there are plans to reimplement these scikit-learn methods as they wouldn’t benefit from Autograd and are already easily available and well tested. What’s your use case or potential advantage you are seeing in copying them?
st180143
I agree with you. I was just imagining code that is fully implemented in PyTorch, lol. Thanks for your comment.
st180144
Thank you for answering the question! I will try this. I have a follow-up question: are 10 dataloaders the best way to do this? Because then there is a lot of copy-paste code — maybe there is a cleaner way? Also a second follow-up question: if I don't want the same distribution for each part (so not stratified) but want a random distribution, then this is still easy to do (just changing the argument of the sklearn function). But what if I want to create my own distributions? E.g. let's say data part 1 consists of 50% of class 3, 10% of class 7, and the remaining 8 classes have 5% each. Data part 2 consists of 15% of class 4, 15% of class 5, 15% of class 9, 15% of class 10, and the remaining 6 classes have 5% each. Data part 3 consists of 30% of class 9, 20% … and so on, I think you know what I mean. Is there a way to create this detailed data split in PyTorch or sklearn? I guess the best way to do this data split/data preparation is not in PyTorch but with NumPy, Pandas, vanilla Python, whatever, and then load the 10 data parts into PyTorch. Can you confirm that?
st180145
danman: Using 10 dataloaders is the best way to do this? Because then there is much copy-paste code — maybe there is a cleaner way?

Maybe appending the loaders into e.g. a list would be cleaner and avoid code duplication.

danman: I guess the best way to do this data split/data preparation is not in PyTorch but with Numpy, Pandas, Vanilla Python whatever.

Yes, I would claim the easiest approach would be to reuse an already implemented method in any of these mentioned packages. One approach could be to create a WeightedRandomSampler in PyTorch using the desired class distribution and create the 10 loaders with them.
st180146
Hi PyTorch community, I am training a model on a very wide dataset (~500,000 features). To read the data from disk I use dask to load an xarray.core.dataarray.DataArray object, so as not to load all the data into memory at once. I can load subsets of the data into memory as a numpy array like this: xarray[0:64,:].values. This loads 64 samples into memory in about 2 seconds. I then want to feed the data to the model one batch at a time, using a batch size of 64, since this should give an estimated epoch time of 3 minutes given the total sample size (~6000). My problem is when I try to implement this functionality in the PyTorch Dataset module. I want to create a class in which a DataLoader loads the data from disk into memory one batch at a time with a specified batch size. To do that I made the following Dataset class and wrapped it in a DataLoader:

class load_wide(Dataset):
    def __init__(self, xarray, labels):
        self.xarray = xarray
        self.labels = labels

    def __getitem__(self, item):
        data = self.xarray[item, :].values
        labels = self.labels[item]
        return data, labels

    def __len__(self):
        return len(self.xarray)

I then load the data like this:

train_ds = load_plink(xarray_train, labels_train)
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True)

for i, data in enumerate(train_dl):
    feats, labels = data
    preds = net(feats)
    loss = criterion(preds, labels)

This works fine, but it takes about 90 seconds to load a batch, resulting in unreasonable training time. What went wrong in my implementation of data loading that causes it to load the data so slowly? Using the DataLoader causes a 45-fold slowdown of data loading. Can someone explain the cause of this? Also, how can you simultaneously evaluate the model on a validation set that is loaded one batch at a time?
st180147
I'm not familiar with the internal implementation of xarray, but what seems to be different is the shuffling. Could you test your simple code snippet (xarray[0:64,:].values) with random indices instead of contiguous ones and compare the loading speed?
st180148
ptrblck: xarray[0:64,:].values

This was the exact cause of the issue. It seems like xarray is not a good fit to combine with PyTorch's DataLoader class. I will look for alternative ways of loading the data. Thank you!
st180149
Hi PyTorch community, I have a held-one-patient-out experiment. I have designed a CNN+RNN net. During training, since I have an imbalanced dataset, I used a sampler within the DataLoader. Now I am adding some labelled samples (e.g. 10) from the target subject to my training set with the objective of improving the performance on my task. Although I would expect this to be beneficial for my model, in some situations I see the performance drop. So my question is: how can I be sure the DataLoader is also using the added target samples? I would expect adding samples from the target subject to be at least as good as not adding them, but never worse. Any help would be more than appreciated. I train my model from scratch. I stop the training at 20 epochs. I use BatchNorm with the PyTorch momentum param equal to 0.01. This is a snippet of my code:

import torch
torch.manual_seed(0)  # the same seed, to ensure the same weight init

train_df = pd.DataFrame()
train_df = pd.concat([train_df, train_df_aux, seeds])
# train_df_aux is the training data without any labelled samples from the target subject
# seeds are the labelled samples from the target subject; this is also an unbalanced subset,
# since I collect labelled samples until X positive-class samples are found.
train_df.reset_index(drop=True, inplace=True)

# DATA LOADERS
train_data = torch.utils.data.ConcatDataset([train_data_ori, train_data_trf1])
# train_data_trf1 is the train data with augmentation strategies.
sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights))
# weights is constructed using customized functions to have balanced batches during training.

# During training
kwargs = {'num_workers': hparams["num_workers"], 'pin_memory': True} if use_cuda else {}
train_loader = DataLoader(train_data, batch_size=hparams["batch_size"], sampler=sampler, **kwargs)

Thank you in advance!
st180150
If you just iterate through the DataLoader and print out each sample, do you see the labelled samples that you added?
st180151
Hi nivek! thank you for your reply. Yes, I have checked, and I see the added samples!
st180152
So I'm trying to manually split my training data into batches such that I can easily access them via indexing, and not rely on DataLoader to split them up for me, since that way I won't be able to access the individual batches by indexing. So I tried the following:

train_data = datasets.ANY(root='data', transform=T_train, download=True)
BS = 200
num_batches = len(train_data) // BS
sequence = list(range(len(train_data)))
np.random.shuffle(sequence)  # To shuffle the training data
subsets = [Subset(train_data, sequence[i * BS: (i + 1) * BS]) for i in range(num_batches)]
train_loader = [DataLoader(sub, batch_size=BS) for sub in subsets]  # Create multiple batches, each with BS samples

This works just fine during training. However, when I attempted another way to manually split the training data, I got different end results, even with all the same parameters and the following settings:

device = torch.device('cuda')
torch.manual_seed(0)
np.random.seed(0)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.cuda.empty_cache()

I only split the training data the following way this time:

train_data = list(datasets.ANY(root='data', transform=T_train, download=True))  # Cast into a list
BS = 200
num_batches = len(train_data) // BS
np.random.shuffle(train_data)  # To shuffle the training data
train_loader = [DataLoader(train_data[i*BS: (i+1)*BS], batch_size=BS) for i in range(num_batches)]

But this gives me different results than the first approach, even though (I believe) both approaches are identical in manually splitting the training data into batches. I even tried not shuffling at all and loading the data just as it is, but I still got different results (85.2% vs 81.98% accuracy). I even manually checked that the loaded images from the batches match, and are the same using both methods. Not only that, when I load the training data the conventional way as follows:

BS = 200
train_loader = DataLoader(train_data, batch_size=BS, shuffle=True)

I get even more drastic results! Can somebody please explain to me why these differences arise, and how to fix them?
st180153
You have confirmed that “the loaded images from the batches match” between the first two methods. What happens afterward? Do you set your seed (e.g. torch.manual_seed(0)) before training?
st180154
Hey, yeah, the order of the images in each batch is the same using both approaches. And before training I've set the following:

device = torch.device('cuda')
torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
np.random.seed(0)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.cuda.empty_cache()

Moreover, I would like to share an update that might shed some light: the T_train transformation contains some random transformations (H_flip, crop), and when using it along with the first train_loader, the time taken during training was 24.79s/it, while the second train_loader took 10.88s/it (even though both have the exact same number of parameter updates/steps). So I decided to remove the random transformations from T_train; then the time taken using the first train_loader dropped to 16.99s/it, while the second train_loader remained at 10.87s/it. So somehow the second train_loader still took the same time (with or without the random transformations). Thus, I decided to bring back the random transformations in T_train and visualize the image outputs from the second train_loader to verify whether the random transformations were being applied, and indeed they were! So this is really confusing and I'm not quite sure why they're giving different results.
st180155
I am unsure why your second train_loader still has the random transformations after you remove them. Perhaps you need to clear all the variables and re-run everything? Aside from that, the other pitfall may be nondeterministic algorithms: https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
st180156
Hi all, I have a question for which I am apparently not able to find any answer. Let's say I have a dataset which is relatively small, and I want to be able to test on the entire dataset to reduce the bias of my model. I thought I could use something like k-fold cross-validation, but no matter where I look, I only find the case where, for each fold, the data is split into only a training and a test set. What I am interested in is having, for each fold, training, validation AND test data, where I train the model on the training data, determine when to stop training on the validation data, and then test on the test data. I would then report the average of the scores obtained on the different test sets. However, I am unsure how to split the data and whether the mentioned procedure is actually correct. Would this, for example, be correct? (I fix [p] to be the validation set and test on the other, different test sets.)

Dataset = [o, d, p, x]
Fold1: Train = [o, d], Validation = [p], Test = [x]
Fold2: Train = [o, x], Validation = [p], Test = [d]
Fold3: Train = [d, x], Validation = [p], Test = [o]

Thanks for the help!
st180157
@rasbt published this great post a while ago where he explains cross-validation in detail (you should also check other resources on his blog). To perform the actual splitting you could use e.g. scikit-learn's methods.
st180158
Hi ptrblck, thanks for the useful link! But I am still not sure if what I want to do is correct. I want to use k-fold CV for model evaluation; I am not interested in any hyper-parameter selection. However, because I am using neural networks, I still need a validation set to decide when to stop training. In the slides of the link you posted, this case is covered:

Dataset = [o, d, p, x]
Fold1: Train = [o, d, p], Test = [x]
Fold2: Train = [d, p, x], Test = [o]
Fold3: Train = [p, x, o], Test = [d]
Fold4: Train = [x, o, d], Test = [p]

from which I could estimate the performance of my model with fixed hyper-parameters as the average of the performance on the test sets across the different folds. But no validation set is used here, and I am wondering if I should keep the validation set fixed across the different folds (e.g. [p] as in my previous post), or if it should vary as well, or whether I should use some inner loop. What do you think?
st180159
IMO, cross-validation is flawed by definition, so if possible the test set should be fully separated from the training loop. If not, then I'd choose scenario 3: run an outer loop picking the test subset, then run an inner loop on the remaining data, training the model via cross-validation. Still, I'd first look into the possibility of generating additional samples, either fully synthetic or augmented.
st180160
Hi Sergey, unfortunately generating additional samples is not a possible option for my application. So do you think this approach is good?

Dataset = [o, d, p, x]
Fold1: Train = [o, d], Validation = [p], Test = [x]
Fold2: Train = [o, x], Validation = [p], Test = [d]
Fold3: Train = [d, x], Validation = [p], Test = [o]

I only want to use [p] for checking when to stop training the network on [o, d], for example. Or is it better to do it like this?

Dataset = [o, d, p, x]
Fold1: Train = [o, d], Validation = [p], Test = [x]
Fold2: Train = [d, p], Validation = [x], Test = [o]
Fold3: Train = [p, x], Validation = [o], Test = [d]
Fold4: Train = [x, o], Validation = [d], Test = [p]

Thank you!
st180161
Both of the approaches you have suggested give an unbiased estimate of the test set error, which is the most important aspect, but your second approach probably has lower variance, since you are also switching up the validation set. Keep in mind that your procedure differs from cross-validation, and I would not call it that, or else you will confuse people. At least in your second approach, what you are doing is nested resampling, where the outer resampling is cross-validation and the inner resampling is holdout (see here for more).
st180162
The dataset interface runs problem-free for a couple of epochs, then outputs this error: urllib.error.HTTPError: HTTP Error 503: Service Unavailable when running the line with the scikit-image method for opening a URL: im = np.array(io.imread(self.img_data[idx]['coco_url'])). I'm training a single class ('toothbrush'), and the model has 2 labels, background + class. Toothbrush has label 80, so I change 80 to 1 when training. None of the errors I could relate to are reported, just this one. EDIT: another error that pops up (same method): ConnectionResetError: [Errno 104] Connection reset by peer
st180163
I think it is possible that the server the URLs are pointing to has a quota or is overloaded, such that your request gets an HTTP Error 503. Is it possible for you to download the data in advance and load it from disk?
st180164
Yeah, that’s right. After a bit of investigation I discovered that I used this solution a year ago!
st180165
Looking at the documentation and source code for torch.utils.data.Dataset, I’m not seeing a natural way to extract response variables (e.g. labels / targets, like per-example class labels) other than via __getitem__(), which also extracts the predictor variables. I think this is a problem with the interface failing to separate concerns, but perhaps there’s a solution. I’m researching open set recognition, so I often need to partition datasets “by class”, which means determining which class each data point belongs to. I’ve recently been working with a subset of ImageNet 1K, and I’m often working on a server where the file system is networked, so it’s extremely expensive to load images from disk, and it’s difficult to justify doing so when I don’t actually need the images, but rather just the class labels. My current solution has been to build individual wrapper classes for each torchvision.datasets.XXX dataset class that I’ve been using to expose a function for extracting just the labels, often having to refer to the corresponding dataset’s source code to see how it stores and loads labels. But this a) depends on implementation details and thus can break from any torchvision update, and b) requires a lot of extra work. Is there an easier, more universal way to extract just response variables, annotations, etc. from a dataset without also extracting predictor variables? Or is this impossible given the current nature of the torch.utils.data.Dataset interface?
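For what it's worth, many torchvision datasets already expose the labels as an attribute, so one implementation-dependent shortcut is to read those directly instead of going through __getitem__; attribute names differ per dataset, which is exactly the fragility described above. A small sketch, with the ImageFolder path being a placeholder:

from torchvision import datasets

# CIFAR-10 keeps per-example labels in .targets; no image is decoded here
ds = datasets.CIFAR10(root='data', train=True, download=True)
labels = ds.targets                                    # list of ints

# ImageFolder-style datasets (e.g. an ImageNet subset on disk) expose .samples / .targets:
# folder_ds = datasets.ImageFolder('path/to/imagenet_subset')
# labels = [label for _, label in folder_ds.samples]   # (path, class_index) pairs, images untouched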
st180166
I am trying to use TransformerConv to train a model which takes in graphs of different sizes. This is to be used for a graph classification task. My code snippet is as follows:

self.conv1 = TransformerConv(in_channels=-1, out_channels=256)

def forward(self, x_onein, x_twoin, lambd):
    x_one, edge_index_one, batch_one = x_onein.x, x_onein.edge_index, x_onein.batch
    x_one = self.conv1(x_one, edge_index_one)
    return x_one

My input data comes in batches of various sizes, and each graph has between tens of nodes and 1200 nodes. Because of this, I require the first input layer to be able to take in graphs of different sizes. When I run the code, I get the error:

RuntimeError: Trying to create tensor with negative dimension -1: [256, -1]

This is strange, since I followed the documentation: "in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities." Any advice on this matter would be greatly appreciated.
st180167
I can't reproduce the issue using:

x1 = torch.randn(4, 8)
x2 = torch.randn(2, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [0, 0, 1, 1]])
conv = TransformerConv(in_channels=-1, out_channels=256)
out = conv(x1, edge_index)

so maybe your torch_geometric version is too old to support the lazy initialization of the layers (I don't know when it was introduced)?
st180168
Thank you for the prompt reply. My PyTorch version is 1.7.1 and my PyTorch Geometric version is 1.6.3. I shall try to update them to the latest versions and try again.
st180169
I’ve executed my code snippet with PyTorch 1.10.1 and Geometric 2.0.3, so let’s see if updating helps.
st180170
Thank you for the help. Updating did help, and I am able to run the program the way I wanted now. Lazy inputs seem to be a more recent feature.
st180171
I have a multidimensional dataset of shape (1827, 5). From that dataset I extract the variable I want to predict, and I'm left with my X and y variables: X has size (1827, 4) and y has size (1827). Then I further split them into train and test datasets, giving 20% to the test dataset. Now the shapes I have are these:

print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)

torch.Size([1461, 4])
torch.Size([1461])
torch.Size([366, 4])
torch.Size([366])

The batch size I have to use (I cannot change that) is 50. My question is, when reshaping a tensor so as to fit my model, shouldn't I take the original dimensions into account? By dividing the train dataset into batches of 50, I will surely put data that belongs to the same observation into different batches, simply because 50 is not divisible by 4 exactly. My network architecture is this:

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.hidden1 = torch.nn.Linear(50, 25)  # hidden layer
        self.hidden2 = torch.nn.Linear(25, 25)  # hidden layer
        self.out = torch.nn.Linear(25, 1)       # output layer

    def forward(self, x):
        z = F.relu(self.hidden1(x))  # activation function for first hidden layer
        z = F.relu(self.hidden2(z))  # activation function for second hidden layer
        z = self.out(z)              # linear output
        return z

The first layer takes 50 as input because the batch size is 50; the output layer outputs 1 because I'm doing regression. For the in-between layers I chose 25 because I read that a good number is the median between the input and output. This problem came to be because I tried training the model with the data as is and I got this error:

mat1 and mat2 shapes cannot be multiplied (50x4 and 50x25)

I know that by changing the first layer to take 4 inputs it will be solved, but I'm wondering if it's better to just reshape the data to (50, 25) instead.
st180172
Solved by ptrblck in post #2 This seems to be wrong, since the layer dimensions are not depending on the batch size. You should defined the layers using their expected input features, not the number of samples they would see during training/inference. Based on your initial description you are dealing with 1827 samples where …
st180173
elemeo: Im the first layer takes 50 as input because the batch size is 50

This seems to be wrong, since the layer dimensions do not depend on the batch size. You should define the layers using their expected input features, not the number of samples they would see during training/inference. Based on your initial description you are dealing with 1827 samples where each sample has 4 features. If that's the case, use self.hidden1 = torch.nn.Linear(4, 25) and let the DataLoader create the batches, each with a shape of [50, 4] (except the last one, which might be smaller).
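A minimal sketch of the corrected layer sizes with a loader producing [50, 4] batches; the random data here just stands in for the real features and targets:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(1461, 4)          # placeholder for X_train
y = torch.randn(1461)             # placeholder for y_train
loader = DataLoader(TensorDataset(X, y), batch_size=50, shuffle=True)

model = nn.Sequential(nn.Linear(4, 25), nn.ReLU(),
                      nn.Linear(25, 25), nn.ReLU(),
                      nn.Linear(25, 1))

xb, yb = next(iter(loader))
print(xb.shape, model(xb).shape)  # torch.Size([50, 4]) torch.Size([50, 1])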
st180174
Alright, so the first hidden layer is based on the input features; however, how was the 25 chosen? Also, since the first layer outputs a shape of 25, the second layer will have an input of 25 with what output? Other than that, which I understand, I'm a bit confused about what happens to the variable information when a batch splits an observation across multiple batches.
st180175
elemeo: however how was the 25 chosen ?

I've reused the out_features from your code snippet, as you've picked this number of output features.

elemeo: Also, since the first layer outputs a shape of 25 then the second layer will have an input of 25 with what output ?

Yes, that's correct. The number of output features again depends on your choice.

elemeo: what happens to variable information when a batch splits an observation into multiple batches.

Most layers process the samples of a batch independently, so you would only see the expected abs. error due to the limited numerical precision in their outputs. Exceptions are e.g. batchnorm layers, which calculate the stats to normalize the inputs from the entire batch.
st180176
Hi, I have a working data loader; however, while I am training on the GPU cluster, the average time per epoch is too long (approx. 25 mins). I am using PNG images for training with an average image size of around (2500x5000). I want to make the training faster. Please suggest what more I can do in order to speed up the training. Attached is the code of the data loader.

class MammographyCBISDDSM(Dataset):
    def __init__(self, excel_file, category, transform=None):
        """
        Args:
            excel_file (string): Path to the excel file with annotations.
            category (string): 'Classification' for Benign and Malignant.
                               'Subtypes' for Subtype Classification
            transform (callable, optional): Optional transform to be applied
        """
        self.mammography = pd.read_csv(excel_file, sep=';')
        self.category = category
        self.transform = transform

    def __len__(self):
        return len(self.mammography)

    def class_name_to_labels(self, idx):
        if self.category == 'Classification':
            class_name = self.mammography.iloc[idx, 9]
            if class_name == 'MALIGNANT':
                labels = 0.0
            elif class_name == 'BENIGN':
                labels = 1.0
            elif class_name == 'BENIGN_WITHOUT_CALLBACK':
                labels = 2.0
            return labels

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()
        img_folder = self.mammography.iloc[idx, 11]
        image_array = cv2.imread(img_folder)
        image_array = image_array.pixel_array
        image_array = image_array * 1.0 / image_array.max()
        image_array = torch.from_numpy(image_array.astype(np.float32))
        image_array = image_array.repeat(3, 1, 1)
        if self.transform:
            image_array = self.transform(image_array)
        labels = self.class_name_to_labels(idx)
        labels = torch.from_numpy(np.array(labels))
        return image_array, labels
st180177
This great post explains common issues causing a data loading bottleneck and has some suggestions to speed it up.
st180178
Thanks for the link; I went through the suggestions and they made sense. As I am new to Python and the PyTorch environment, it would be great if you could tell me the more precise reason in this case and what I can do to overcome the issue! Thanks once again!
st180179
Probably the biggest bottleneck here is reading the images from disk (you can use timeit to check whether this is true). First, I would check if I have enough RAM to pre-load all the images into the dataset. If this is not an option, I'd move all the data to an NVMe/SSD drive. Also, I would convert and save the PNGs as numpy arrays anyway, even on an HDD, since you don't have to invoke the cv2 file parser every time an image is loaded. Last thing: 2500x5000 is a really big size. Is there any particular reason to use the images as-is without scaling down?
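A sketch of such a one-time conversion pass; the folder names are made up, and __getitem__ would then call np.load(...) instead of cv2.imread(...):

import os
import cv2
import numpy as np

src_dir = 'pngs'          # hypothetical folder with the original PNGs
dst_dir = 'npy_cache'     # hypothetical folder for the converted arrays
os.makedirs(dst_dir, exist_ok=True)

for name in os.listdir(src_dir):
    if name.endswith('.png'):
        img = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_GRAYSCALE)
        np.save(os.path.join(dst_dir, name.replace('.png', '.npy')), img)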
st180180
dataloader = datasets.CIFAR10
trainset = dataloader(root='./data', train=True, download=True, transform=transform_train)

Now I want to get one sample of trainset to train on, for debugging my code. How do I write that?
st180181
Solved by nivek in post #2 Based on your code, you should be able to get the first element by running this: print(next(iter(trainset))) You may find the examples on this page helpful.
st180182
Based on your code, you should be able to get the first element by running this:

print(next(iter(trainset)))

You may find the examples on this page helpful.
st180183
I am running the code below to create data loaders for graph data:

batch_size = 128

train_list = []
for idx, batch in enumerate(zip(X_train[train_idx], class_v[train_idx], adj_train[train_idx])):
    edge_index, _ = dense_to_sparse(torch.from_numpy(batch[2]).float())
    train_list.append(Data(x=torch.from_numpy(batch[0]).float(),
                           y=torch.from_numpy(batch[1]).float(),
                           edge_index=edge_index))
batch_train_loader = DataLoader(train_list, batch_size=batch_size)

val_list = []
for idx, batch in enumerate(zip(X_train_val[val_idx], class_v_val[val_idx], adj_train_val[val_idx])):
    edge_index, _ = dense_to_sparse(torch.from_numpy(batch[2]).float())
    val_list.append(Data(x=torch.from_numpy(batch[0]).float(),
                         y=torch.from_numpy(batch[1]).float(),
                         edge_index=edge_index))
batch_val_loader = DataLoader(val_list, batch_size=batch_size)

When I try to read this data, I get an error that the data cannot be on two devices:

idx = 0
for data in batch_train_loader:
    idx += 1
    print(data.x.shape, data.y.shape, data.edge_index.shape)
    if idx == 3:
        break

RuntimeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_17752/355153217.py in <module>
----> 2 for data in batch_train_loader:
~\miniconda3\lib\site-packages\torch\utils\data\dataloader.py in __next__(self)
--> 521     data = self._next_data()
~\miniconda3\lib\site-packages\torch\utils\data\dataloader.py in _next_data(self)
--> 561     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
~\miniconda3\lib\site-packages\torch\utils\data\_utils\fetch.py in fetch(self, possibly_batched_index)
---> 47     return self.collate_fn(data)
~\miniconda3\lib\site-packages\torch_geometric\loader\dataloader.py in __call__(self, batch)
---> 39     return self.collate(batch)
~\miniconda3\lib\site-packages\torch_geometric\loader\dataloader.py in collate(self, batch)
---> 19     return Batch.from_data_list(batch, self.follow_batch, self.exclude_keys)
~\miniconda3\lib\site-packages\torch_geometric\data\batch.py in from_data_list(cls, data_list, follow_batch, exclude_keys)
---> 63     batch, slice_dict, inc_dict = collate(cls, data_list=data_list, ...)
~\miniconda3\lib\site-packages\torch_geometric\data\collate.py in collate(cls, data_list, increment, add_batch, follow_batch, exclude_keys)
---> 76     value, slices, incs = _collate(attr, values, data_list, stores, increment)
~\miniconda3\lib\site-packages\torch_geometric\data\collate.py in _collate(key, values, data_list, stores, increment)
--> 144     values = [value + inc for value, inc in zip(values, incs)]

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!

Not sure why this should happen. I am also seeing a considerable slowdown in my training loop.
st180184
Are you setting the default tensor type to a CUDATensor or is any class pushing the data to the GPU (e.g. Data)? Could you check the .device attribute of the tensors while iterating the train_list?
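To narrow this down, here is a small sketch (assuming train_list is built as in the post above) that prints the .device of each tensor before the loader collates them, and moves anything that ended up on the GPU back to the CPU:

# Inspect where each tensor actually lives before batching.
for i, data in enumerate(train_list[:5]):
    print(i, data.x.device, data.y.device, data.edge_index.device)

# If some tensors are already on cuda:0, keep everything on the CPU for
# collation and move the finished batch to the GPU inside the training loop.
train_list = [d.to('cpu') for d in train_list]
batch_train_loader = DataLoader(train_list, batch_size=batch_size)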
st180185
Hello, For some datasets in torchtext I see the parameter filter_pred, which filters out part of the dataset. I’m using torchtext.Datasets, which gives an IterableDataset that you can pass to the DataLoader. Is it possible to simply apply a filter like filter_pred? This is my solution, which I don’t like; if you have something better, please share.

from torch.utils.data import IterableDataset

class MyIterableDataset(IterableDataset):
    def __init__(self, iterableDataset, pred_filter=None):
        self.iterableDataset = iterableDataset
        self.pred_filter = pred_filter

    def __iter__(self):
        def it(myDataIter):
            for x in myDataIter:
                if self.pred_filter and not self.pred_filter(x):
                    continue
                yield x

        myiter = iter(it(self.iterableDataset))
        return myiter
st180186
Does this work for you?

from torch.utils.data import IterableDataset

class MyIterableDataset(IterableDataset):
    def __init__(self, iterableDataset, pred_filter=None):
        self.iterableDataset = iterableDataset
        self.pred_filter = pred_filter

    def __iter__(self):
        # Grab a fresh iterator over the wrapped dataset so that next()
        # below also works when the dataset is only iterable, not an iterator.
        self._source_iter = iter(self.iterableDataset)
        return self

    def __next__(self):
        # Keep pulling items until one passes the predicate (or StopIteration ends the epoch).
        while True:
            item = next(self._source_iter)
            if self.pred_filter is None or self.pred_filter(item):
                return item
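For reference, a minimal usage sketch; base_dataset and the predicate are placeholders for whatever you actually use, and the example assumes the dataset yields (label, text) pairs:

from torch.utils.data import DataLoader

# base_dataset: any torchtext-style iterable dataset yielding (label, text)
filtered = MyIterableDataset(base_dataset,
                             pred_filter=lambda ex: len(ex[1].split()) > 5)
loader = DataLoader(filtered, batch_size=32)

for batch in loader:
    ...  # only examples passing the predicate reach the training loop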
st180187
Hello there, I want to make a custom dataset or dataloader, I just don’t know which is the best way to do it. This is a self-supervised task, where you want to blank out part of the input and then use it as a “label”. Let’s say I have .npy files, each one of shape (600, 30000), and I want to do the following: read the file, pick a row from the 600, take a slice around that row, say (32, 30000), return y = x[row] and x where x[row] = 0 (blanked out). The next batch should be another x, y pair where you just pick another row; repeat this for all the rows of the file, then move on to the next file. After thoroughly reading the docs I thought the best way to do that is to make an IterableDataset, but whenever I try to increase the batch_size, e.g. to 56 in order to get a batch of (56, 32, 30000), it takes too long (~12 s for a single batch), which is an eternity of waiting for the GPU. Code for the iterable dataset:

import os
import numpy as np
import torch

class MyIterableDataset(torch.utils.data.IterableDataset):
    def __init__(self, data_path, N_sub, batch_size, channel_min=1700, channel_max=2300):
        self.data_path = data_path
        self.filenames = [x for x in os.listdir(data_path) if x.endswith(".npy")]
        self.channel_min = channel_min
        self.channel_max = channel_max
        self.N_sub = N_sub
        self.batch_size = batch_size

    def sliding_window(self):
        for file in self.filenames:
            data = np.load(f"{self.data_path}{file}", mmap_mode='r')[self.channel_min:self.channel_max]
            for row in range(data.shape[0]):
                low_index = int(row - self.N_sub / 2)
                high_index = int(row + self.N_sub / 2)
                # if target is close to zero, then pick range [0, N_sub], target is not centered.
                if int(row - self.N_sub / 2) <= 0:
                    low_index = 0
                    high_index = self.N_sub
                # if target is close to max channel, pick range [Nsub, max_channel], target is not centered again.
                if int(row + self.N_sub / 2) >= self.channel_max - self.channel_min:
                    high_index = self.channel_max - self.channel_min
                    low_index = (self.channel_max - self.channel_min) - self.N_sub
                # Normalization, this causes minimal data leakage
                data = data / data.std()
                # Copy because assigning values to slices messes things up
                y_ = data[row].copy()
                # Zero out the target channel, the model will predict this.
                data[row] = 0
                x_ = data[low_index:high_index]
                # Keep only frequencies from f_min to f_max.
                # x_ = taper_filter(x_, self.f_min, self.f_max, self.sampleRate)
                # y_ = taper_filter(y_, self.f_min, self.f_max, self.sampleRate)
                x_ = torch.tensor(x_.astype(np.float32).copy())
                y_ = torch.tensor(y_.astype(np.float32).copy())
                yield x_, y_

    def __iter__(self):
        return self.sliding_window()

    def __len__(self):
        return len(self.filenames)

I thought of parallelizing this, I just can’t find an example related to it, so: any code/examples/advice you see fit is welcome. Is this the way to go? Or am I overcomplicating things? Thanks a lot in advance, have a great day!
st180188
You can try multi-process data loading, and you may find the example here helpful.
st180189
What you can do is provide multiple workers to the DataLoader. And, to prevent duplicate data across processes, you can give the Dataset instance within each worker part of your self.filenames based on the worker_id, as in the sketch below.
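A minimal sketch of that idea, assuming sliding_window is changed to take the list of files it should iterate over (the even split by slicing is just one possible scheme):

def __iter__(self):
    worker_info = torch.utils.data.get_worker_info()
    if worker_info is None:
        # Single-process loading: this worker handles every file.
        files = self.filenames
    else:
        # Multi-process loading: each worker gets a disjoint subset of files,
        # so no window is produced twice across workers.
        files = self.filenames[worker_info.id::worker_info.num_workers]
    return self.sliding_window(files)

With that in place, DataLoader(dataset, batch_size=56, num_workers=4) lets four processes stream windows in parallel while the GPU trains.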
st180190
Thanks for the reply. I’m still trying to implement this to work on my problem. One question though, if I move to multi-GPU in the future, will something like that be scalable?
st180191
Yes, you can use the torch.distributed package. You can find more details on this page. We recommend following the instructions under “3. Use single-machine multi-GPU DistributedDataParallel” on that page.
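As a rough sketch only (the linked tutorial is the authoritative recipe), a single-machine multi-GPU setup with DistributedDataParallel could look like the following; MyModel and data_path are placeholders, and each rank additionally takes its own share of the files on top of the per-worker split shown earlier:

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def run(rank, world_size):
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    dataset = MyIterableDataset(data_path, N_sub=32, batch_size=56)
    # Shard files across ranks so the GPUs do not see duplicate samples.
    dataset.filenames = dataset.filenames[rank::world_size]
    loader = torch.utils.data.DataLoader(dataset, batch_size=56, num_workers=2)

    model = DDP(MyModel().to(rank), device_ids=[rank])
    # ... usual training loop over `loader`, moving each batch to `rank` ...

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    torch.multiprocessing.spawn(run, args=(world_size,), nprocs=world_size)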
st180192
I am currently training a model which is a mix of graph neural networks and an LSTM. However, that means that for each training sample I need to pass in a list of graphs. The current Batch class in torch_geometric supports batching with torch_geometric.data.Batch.from_data_list(), but this only allows one graph per data point. How else can I go about batching the graphs when each data point consists of multiple graphs?
st180193
Do you want to post the question/request to the torch_geometric repo? We are not familiar with that.
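In the meantime, one possible direction (just a sketch, not an official torch_geometric API for this use case) is a custom collate_fn that flattens each sample's list of graphs into one Batch and remembers how many graphs belong to each sample; my_dataset is a placeholder for your dataset of (graph list, target) pairs, and targets are assumed to be same-shaped tensors:

import torch
from torch.utils.data import DataLoader
from torch_geometric.data import Batch

def collate_graph_sequences(samples):
    # Each sample is assumed to be (list_of_Data_graphs, target).
    graphs, lengths, targets = [], [], []
    for graph_list, target in samples:
        graphs.extend(graph_list)
        lengths.append(len(graph_list))
        targets.append(target)
    flat_batch = Batch.from_data_list(graphs)   # one flat batch of graphs
    return flat_batch, lengths, torch.stack(targets)

loader = DataLoader(my_dataset, batch_size=8, collate_fn=collate_graph_sequences)

After the GNN runs on flat_batch, lengths tells you which consecutive graph embeddings form the sequence for each training sample before they go into the LSTM.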
st180194
Hi, I was wondering if I could get a better understanding of data augmentation in PyTorch. From what I know, data augmentation is used to increase the number of data points when we are running low on them, so we use transforms to turn our data points into different variants. I am using data transformations like this:

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

transform_img = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    # transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

class dataload(Dataset):
    def __init__(self, x, transform=None):
        self.data = x
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        img = Image.open(self.data[i])
        # img = img.transpose((2, 0, 1))
        # img = torch.from_numpy(img).float()
        # Parse the class index from the file name and build a one-hot label.
        tmp = np.int32(self.data[i].split('/')[-1].split('_')[0][1])
        label = np.zeros(67)
        label[tmp] = 1
        label = torch.from_numpy(label).float()
        if self.transform:
            img = self.transform(img)
        return img, label

train_dataloader = dataload(filenames, transform=transform_img)

Now, it seems to work, but I don’t get one thing: it does the transformation, but it doesn’t increase the number of data points. I was hoping that each label would have 2 extra images since we are doing that transformation, but it doesn’t seem to do that. The total number of training samples is still the same. So am I getting something wrong about augmentation, or have I implemented this in the wrong way?
st180195
This way I’d call it alteration, not augmentation. Augmentation is when you are creating additional training samples. You need to move the transformations to __init__, transform all the x’s, and add the result to the original data. Also take a look at the timm library; its CutMix and MixUp implementations helped me a lot in a recent project.
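As an aside, MixUp itself is only a few lines if you want to try it before pulling in timm; this is a generic sketch applied to a batch with one-hot (or otherwise soft) labels, not timm's implementation:

import torch

def mixup_batch(images, targets, alpha=0.2):
    # Sample the mixing coefficient and a random pairing within the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    # With one-hot targets (as in the dataset above) labels mix the same way.
    mixed_targets = lam * targets + (1.0 - lam) * targets[perm]
    return mixed_images, mixed_targets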
st180196
I want to create more training samples because I have little training data. I have 2000 images, so I was looking to increase that to about four times as many through augmentation.
st180197
Right, but in your code you’re not creating additional samples, you’re modifying existing ones. (I just realized your x (or self.data) in __init__ is actually paths to files, so you can’t transform x in __init__ without additionally saving the transformed images.) Then the easiest way here is to run multiple training loops: one based on the dataset without a transform, another based on the dataset with the first transform, then another one, etc.
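A related option that keeps a single training loop is to concatenate several copies of the dataset, each with a different transform. A sketch, reusing the dataload class from above:

from torch.utils.data import ConcatDataset, DataLoader
from torchvision import transforms

base = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
flip = transforms.Compose([transforms.Resize((224, 224)),
                           transforms.RandomHorizontalFlip(p=1.0),
                           transforms.ToTensor()])
crop = transforms.Compose([transforms.RandomResizedCrop(224), transforms.ToTensor()])

# 2000 files x 3 transforms -> 6000 samples per epoch.
full_dataset = ConcatDataset([dataload(filenames, transform=t) for t in (base, flip, crop)])
loader = DataLoader(full_dataset, batch_size=32, shuffle=True)

Each image then appears three times per epoch, once per transform, which gives the "more samples" behaviour asked about above.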
st180198
So is there any direct approach to data augmentation that does it in one go, like Keras?
st180199
I don’t think Keras does anything different. Another option I can think of is to change the structure of self.data so that it contains not only the file paths but also what kind of augmentation (if any) should be applied during __getitem__, as in the sketch below.
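A minimal sketch of that idea, keeping the label parsing from the original post; each (file, transform) pair becomes its own sample, so the dataset length grows by the number of augmentations:

class AugmentedDataset(Dataset):
    def __init__(self, filenames, augmentations):
        # One entry per (file, augmentation) pair, so
        # len(dataset) == len(filenames) * len(augmentations).
        self.data = [(path, aug) for path in filenames for aug in augmentations]

    def __len__(self):
        return len(self.data)

    def __getitem__(self, i):
        path, aug = self.data[i]
        img = aug(Image.open(path))
        # Same file-name-based one-hot label as in the original post.
        tmp = np.int32(path.split('/')[-1].split('_')[0][1])
        label = np.zeros(67)
        label[tmp] = 1
        return img, torch.from_numpy(label).float()

# e.g. three versions of every image: plain, flipped, randomly cropped
dataset = AugmentedDataset(filenames, [base, flip, crop])

Here base, flip, and crop are the transforms from the previous sketch; each should end in ToTensor() so the samples batch cleanly.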