st82168
Solved by spanev in post #2 Did you implement the forward method in your model class as:

def forward(self, x):
    # method body

? The error means that model does not have a member function (method) overriding the nn.Module one.
st82169
Did you implement the forward method in your model class as:

def forward(self, x):
    # method body

? The error means that your model does not have a member function (method) overriding the nn.Module one.
st82170
I am trying to build an LSTM network that will predict the next frame in a sequence based on the current frame and the action taken (there is an action for every frame). I currently encode all frames into a latent vector of size 128, and the actions are represented by an array of size 10. How would I format the input to the LSTM network? For example, if I have a video consisting of 3191 frames, I will have a tensor of shape (3191 x 128) for all encoded frames. Would appending the action array associated with each frame to the latent vector work? Or is there a way of inputting encoded frames and actions separately into the LSTM?
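For illustration, the concatenation approach I have in mind would look roughly like this (a minimal sketch; the hidden size is arbitrary and the random tensors stand in for the real encodings):

import torch
import torch.nn as nn

frames = torch.randn(3191, 128)   # encoded frames, as described above
actions = torch.randn(3191, 10)   # one action vector per frame
inputs = torch.cat([frames, actions], dim=1).unsqueeze(0)  # (1, 3191, 138): a batch of one sequence

lstm = nn.LSTM(input_size=138, hidden_size=256, batch_first=True)
out, (h, c) = lstm(inputs)        # out: (1, 3191, 256)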
st82171
I am trying to test my model with different batch sizes and I am getting different accuracies for different batch sizes. Here is my test snippet (one_hot is False!):

for idx, data in enumerate(test_loader):
    test_x, label = data['input'], data['label']
    if cuda:
        test_x = test_x.cuda(device=device)
        label = label.cuda(device=device)
    # forward
    out_x, pred = model.forward(test_x)
    loss = criterion(out_x, label)
    un_confusion_meter.add(predicted=pred, target=label)
    confusion_meter.add(predicted=pred, target=label)
    # get accuracy metric
    if 'one_hot' in kwargs.keys():
        if kwargs['one_hot']:
            batch_correct = (torch.argmax(label, dim=1).eq(pred.long())).double().sum().item()
        else:
            batch_correct = (label.eq(pred.long())).double().sum().item()
    correct_count += batch_correct
    total_count += np.float(batch_size)
    net_loss.append(loss.item())
    if idx % log_after == 0:
        print('log: on {}'.format(idx))

mean_loss = np.asarray(net_loss).mean()
mean_accuracy = correct_count * 100 / total_count
print(correct_count, total_count)
print('$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$')
print('log: test:: total loss = {:.5f}, total accuracy = {:.5f}%'.format(mean_loss, mean_accuracy))
print('$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$')

I have tried to fix all sorts of problems. The model is in .eval() mode and the function is decorated with torch.no_grad() as well. I can’t find any solution to this. Thanks
st82172
Do you have any random operations using the functional API, e.g. F.dropout()? If so, could you check to set the training parameter accordingly, as it might not be set by model.eval()? How are you calculating pred inside your model? As a small side note: you shouldn’t use model.forward() but rather call the model directly with model(test_x).
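For example, with the functional API the flag has to be passed explicitly (a minimal illustration, not code from this thread):

import torch.nn.functional as F

def forward(self, x):
    # without training=self.training this dropout would stay active even after model.eval()
    x = F.dropout(x, p=0.5, training=self.training)
    return x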
st82173
Thank you so much for such a quick reply ptrblck! This is my model definition:

class VGG_5(nn.Module):
    """
    The following is an implementation of the lasagne based binarized VGG network,
    but with floating point weights
    """
    def __init__(self):
        super(VGG_5, self).__init__()
        # need some pretrained help!
        graph = models.vgg11(pretrained=True)
        graph_layers = list(graph.features)
        for i, layer in enumerate(graph_layers):
            print('{}.'.format(i), layer)
        drop_rate = 0.5
        activator = nn.Tanh()
        self.feauture_exctractor = nn.Sequential(
            nn.Conv2d(in_channels=5, out_channels=64, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=2),
            nn.BatchNorm2d(num_features=64, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=2),
            nn.BatchNorm2d(num_features=64, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Dropout2d(drop_rate),
            # nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1),
            graph_layers[3],  # pretrained on imagenet
            nn.BatchNorm2d(num_features=128, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=2),
            nn.BatchNorm2d(num_features=128, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Dropout(drop_rate),
            # nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, padding=1),
            graph_layers[6],  # pretrained on imagenet
            nn.MaxPool2d(kernel_size=2),
            nn.BatchNorm2d(num_features=256, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=2),
            nn.BatchNorm2d(num_features=256, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Dropout2d(drop_rate),
        )
        self.fc = nn.Sequential(
            nn.Linear(in_features=256 * 2 * 2, out_features=512),
            nn.BatchNorm1d(num_features=512, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Dropout(drop_rate),
            nn.Linear(in_features=512, out_features=512),
            nn.BatchNorm1d(num_features=512, eps=1e-4, momentum=0.2),
            activator,  # nn.ReLU(),
            nn.Linear(in_features=512, out_features=10),
            # nn.BatchNorm1d(num_features=10),
        )

    def forward(self, *input):
        x, = input
        x = self.feauture_exctractor(x)
        x = x.view(-1, 256*2*2)
        x = self.fc(x)
        return x, torch.argmax(input=x, dim=1)

so no F.anything(). And here is my entire testing code (the branch that runs here is the outermost else):

@torch.no_grad()
def eval_net(**kwargs):
    model = kwargs['model']
    cuda = kwargs['cuda']
    device = kwargs['device']
    if cuda:
        model.cuda(device=device)
    if 'criterion' in kwargs.keys():
        writer = kwargs['writer']
        val_loader = kwargs['val_loader']
        criterion = kwargs['criterion']
        global_step = kwargs['global_step']
        correct_count, total_count = 0, 0
        net_loss = []
        model.eval()  # put in eval mode first
        print('evaluating with batch size = 1')
        for idx, data in enumerate(val_loader):
            test_x, label = data['input'], data['label']
            if cuda:
                test_x = test_x.cuda(device=device)
                label = label.cuda(device=device)
            # forward
            out_x, pred = model.forward(test_x)
            loss = criterion(out_x, label)
            net_loss.append(loss.item())
            # get accuracy metric
            if kwargs['one_hot']:
                batch_correct = (torch.argmax(label, dim=1).eq(pred.long())).double().sum().item()
            else:
                batch_correct = (label.eq(pred.long())).double().sum().item()
            correct_count += batch_correct
            total_count += np.float(pred.size(0))
        mean_accuracy = correct_count / total_count * 100
        mean_loss = np.asarray(net_loss).mean()
        # summarize mean accuracy
        writer.add_scalar(tag='val. loss', scalar_value=mean_loss, global_step=global_step)
        writer.add_scalar(tag='val. over_all accuracy', scalar_value=mean_accuracy, global_step=global_step)
        print('$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$')
        print('log: validation:: total loss = {:.5f}, total accuracy = {:.5f}%'.format(mean_loss, mean_accuracy))
        print('$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$')
    else:
        # model, images, labels, pre_model, save_dir, sum_dir, batch_size, lr, log_after, cuda
        pre_model = kwargs['pre_model']
        base_folder = kwargs['base_folder']
        batch_size = kwargs['batch_size']
        log_after = kwargs['log_after']
        criterion = nn.CrossEntropyLoss()
        un_confusion_meter = tnt.meter.ConfusionMeter(10, normalized=False)
        confusion_meter = tnt.meter.ConfusionMeter(10, normalized=True)
        model.load_state_dict(torch.load(pre_model))
        print('log: resumed model {} successfully!'.format(pre_model))
        _, _, test_loader = get_dataloaders(base_folder=base_folder, batch_size=batch_size)
        net_accuracy, net_loss = [], []
        correct_count = 0
        total_count = 0
        print('batch size = {}'.format(batch_size))
        model.eval()  # put in eval mode first
        for idx, data in enumerate(test_loader):
            test_x, label = data['input'], data['label']
            if cuda:
                test_x = test_x.cuda(device=device)
                label = label.cuda(device=device)
            # forward
            out_x, pred = model(test_x)
            loss = criterion(out_x, label)
            un_confusion_meter.add(predicted=pred, target=label)
            confusion_meter.add(predicted=pred, target=label)
            # get accuracy metric
            if 'one_hot' in kwargs.keys():
                if kwargs['one_hot']:
                    batch_correct = (torch.argmax(label, dim=1).eq(pred.long())).double().sum().item()
                else:
                    batch_correct = (label.eq(pred.long())).sum().item()
            correct_count += batch_correct
            total_count += np.float(batch_size)
            net_loss.append(loss.item())
            if idx % log_after == 0:
                print('log: on {}'.format(idx))
        mean_loss = np.asarray(net_loss).mean()
        mean_accuracy = correct_count * 100 / total_count
        print(correct_count, total_count)
        print('$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$')
        print('log: test:: total loss = {:.5f}, total accuracy = {:.5f}%'.format(mean_loss, mean_accuracy))
        print('$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$')
        with open('normalized.pkl', 'wb') as this:
            pkl.dump(confusion_meter.value(), this, protocol=pkl.HIGHEST_PROTOCOL)
        with open('un_normalized.pkl', 'wb') as this:
            pkl.dump(un_confusion_meter.value(), this, protocol=pkl.HIGHEST_PROTOCOL)

I can’t see any problem with this thing.
st82174
And by the way, my accuracy keeps jumping with different batch sizes, from 93% to 98.31%. I trained it with a batch size of 256 and tested it with 256, 257, 200, 1, 300 and 512; all give somewhat different results, while 1, 200 and 300 give 98.31%. Strange… (and I fixed it to call model() directly rather than its forward function as well)
st82175
Could you please tell me the input shape? I’m currently trying to reproduce the issue and apparently [batch_size, 5, 128, 128] is working. Is it the right shape, as the output looks strange?
st82176
It’s apparently [batch_size, 5, 64, 64]? EDIT: I assume you are using the else branch in your code. Could you add the cast to batch_correct like in the upper branch:

# change
batch_correct = (label.eq(pred.long())).sum().item()
# to
batch_correct = (label.eq(pred.long())).double().sum().item()  # or .float()
st82177
Yes, I have five channels in my images (batch_size, 5, 64, 64) because they are coming from a Sentinel satellite.

ptrblck:

batch_correct = (label.eq(pred.long())).double().sum().item()

Actually, I have already used double() before, because I read on this forum that comparing PyTorch tensors returns a byte tensor, so it is a good idea to cast them to double. But it still doesn’t work. Just checked it now with batch size 1 (98.31%) and batch size 512 (94.26%).
st82178
And I asked a similar question some time ago, but the problem was different back then. Test accuracy with different batch sizes vision This is a newbie question I am asking here, but for some reason, when I change the batch size at test time, the accuracy of my model changes. Decreasing the batch size reduces the accuracy, until a batch size of 1 leads to 11% accuracy, although the same model gives me 97% accuracy with a test batch size of 512 (I trained it with batch size 512). I am using a pretrained ResNet-50 model and finetuning it on my own images, and I am also using .train() and .eval() at train and test times properly. The b…
st82179
Input shape -> (batch_size, 5, 64, 64). Fixed the .double() thing too. Still doesn’t work.
st82180
OK, I see. The float/double cast was only necessary in older versions, but I assumed it could be the mistake as I’ve found it in the other branch. Regarding the other thread, it looks like you are performing some random transformations on your dataset. Are you performing the same random transforms on the eval set currently? Also, could you check if this code snippet returns True?

model = VGG_5()
model.eval()
x = torch.randn(10, 5, 64, 64)
output_all, pred_all = model(x)
output_1, pred_1 = model(x[:5])
output_2, pred_2 = model(x[5:])
output_stacked = torch.cat((output_1, output_2), dim=0)
print(torch.allclose(output_all, output_stacked))

If so, the issue is probably located in the data, not the model.
st82181
Okay. So I have three sets, training, validation and test, and those transformations are only applied to the training set. Before data loading I call random.seed(74) so that I get the same train/test split every time, because I am reading from a folder directly without having train/test split image filenames declared.

def get_dataloaders(base_folder, batch_size, one_hot=False):
    print('inside dataloading code...')

    class dataset(Dataset):
        def __init__(self, data_dictionary, bands, mode='train'):
            super(dataset, self).__init__()
            self.example_dictionary = data_dictionary
            # with open(mode+'.txt', 'wb') as this:
            #     this.write(json.dumps(self.example_dictionary))
            self.bands = bands  # bands are a list of bands to use as data, pass them as a list []
            self.mode = mode
            self.max = 0

        def __getitem__(self, k):
            example_path, label_name = self.example_dictionary[k]
            # print(example_path, label_name)
            # example is a tiff image, need to use gdal
            this_example = gdal.Open(example_path)
            this_label = all_labels[label_name]
            if one_hot:
                label_arr = np.zeros(10)
                label_arr[this_label] = 1
                # print(this_label, label_arr)
            example_array = this_example.GetRasterBand(self.bands[0]).ReadAsArray()
            for i in self.bands[1:]:
                example_array = np.dstack((example_array,
                                           this_example.GetRasterBand(i).ReadAsArray())).astype(np.int16)
            # transforms
            if self.mode == 'train':
                example_array = np.squeeze(seq.augment_images(
                    (np.expand_dims(example_array, axis=0))), axis=0)
            # range of vals = [0, 1]
            example_array = np.clip((example_array.astype(np.float)/4096), a_min=0, a_max=1)  # just to bring those values down
            # range of vals = [-1, 1]
            example_array = 2*example_array - 1
            # max value in test set is 28000
            # this_max = example_array.max()
            # if this_max > self.max:
            #     self.max = this_max
            # print(example_array.max(), example_array.min(), example_array.mean())
            example_array = toTensor(image=example_array)
            if one_hot:
                return {'input': example_array, 'label': torch.LongTensor(label_arr)}
            return {'input': example_array, 'label': this_label}

        def __len__(self):
            return len(self.example_dictionary)

    # create training set examples dictionary
    all_examples = {}
    for folder in sorted(os.listdir(base_folder)):
        # each folder name is a label itself
        # new folder, new dictionary!
        class_examples = []
        inner_path = os.path.join(base_folder, folder)
        for image in [x for x in os.listdir(inner_path) if x.endswith('.tif')]:
            image_path = os.path.join(inner_path, image)
            # for each index as key, we want to have its path and label as its items
            class_examples.append(image_path)
        all_examples[folder] = class_examples

    # split them into train and test
    train_dictionary, val_dictionary, test_dictionary = {}, {}, {}
    for class_name in all_examples.keys():
        class_examples = all_examples[class_name]
        # print(class_examples)
        random.shuffle(class_examples)
        total = len(class_examples)
        train_count = int(total * 0.8)
        train_ = class_examples[:train_count]
        test = class_examples[train_count:]
        total = len(train_)
        train_count = int(total * 0.9)
        train = train_[:train_count]
        validation = train_[train_count:]
        for example in train:
            train_dictionary[len(train_dictionary)] = (example, class_name)
        for example in test:
            test_dictionary[len(test_dictionary)] = (example, class_name)
        for example in validation:
            val_dictionary[len(val_dictionary)] = (example, class_name)

    # create dataset class instances
    bands = [4, 3, 2, 5, 8]  # these are [NIR, Vegetation Red Edge, Red, Green, Blue] bands
    train_data = dataset(data_dictionary=train_dictionary, bands=bands, mode='train')
    val_data = dataset(data_dictionary=val_dictionary, bands=bands, mode='eval')
    test_data = dataset(data_dictionary=test_dictionary, bands=bands, mode='test')
    print('train examples =', len(train_dictionary), 'val examples =', len(val_dictionary),
          'test examples =', len(test_dictionary))
    train_dataloader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True, num_workers=4)
    val_dataloader = DataLoader(dataset=val_data, batch_size=batch_size, shuffle=True, num_workers=4)
    test_dataloader = DataLoader(dataset=test_data, batch_size=batch_size, shuffle=True, num_workers=4)
    return train_dataloader, val_dataloader, test_dataloader

I check for self.mode and only apply those transformations to the training set. I don’t think it’s creating any problems. And I ran your snippet and it returns True!
st82182
and I checked all my dataloaders before passing, I also viewed them with the corresponding labels and it was all okay. let me check again.
st82183
If the code snippet returns True, it might show that your model is working OK. Is the shuffling of the dataset somehow seeded, i.e. are you getting the same split every time?
st82184
Okay, so I found a huuge mistake, but fixing it still doesn’t solve my problem. My data split was different every time (I found out by comparing those dictionaries), but now I have fixed it like this:

def get_dataloaders(base_folder, batch_size, one_hot=False):
    print('inside dataloading code...')

    class dataset(Dataset):
        # ... (dataset class body identical to the previous post) ...

    # create training set examples dictionary
    all_examples = {}
    for folder in sorted(os.listdir(base_folder)):
        # each folder name is a label itself
        # new folder, new dictionary!
        class_examples = []
        inner_path = os.path.join(base_folder, folder)
        # this was a problem for a long time now.. because of not sorting it
        all_images_of_current_class = [x for x in os.listdir(inner_path) if x.endswith('.tif')]
        all_images_of_current_class.sort(key=lambda f: int(filter(str.isdigit, f)))
        # if folder == 'Forest':
        #     print(all_images_of_current_class)
        for image in all_images_of_current_class:
            # print(image)
            image_path = os.path.join(inner_path, image)
            # for each index as key, we want to have its path and label as its items
            class_examples.append(image_path)
        all_examples[folder] = class_examples

    # split them into train and test
    train_dictionary, val_dictionary, test_dictionary = {}, {}, {}
    for class_name in all_examples.keys():
        class_examples = all_examples[class_name]
        # random.shuffle(class_examples)  # this doesn't work
        random.Random(4).shuffle(class_examples)  # but this does
        total = len(class_examples)
        train_count = int(total * 0.8)
        train_ = class_examples[:train_count]
        test = class_examples[train_count:]
        total = len(train_)
        train_count = int(total * 0.9)
        train = train_[:train_count]
        validation = train_[train_count:]
        for example in train:
            train_dictionary[len(train_dictionary)] = (example, class_name)
        for example in test:
            test_dictionary[len(test_dictionary)] = (example, class_name)
        for example in validation:
            val_dictionary[len(val_dictionary)] = (example, class_name)

    # test dataset
    with open('train1.txt', 'wb') as train_check:
        for k in range(len(train_dictionary)):
            train_check.write('{}\n'.format(train_dictionary[k][0]))

    # create dataset class instances
    bands = [4, 3, 2, 5, 8]  # these are [NIR, Vegetation Red Edge, Red, Green, Blue] bands
    train_data = dataset(data_dictionary=train_dictionary, bands=bands, mode='train')
    val_data = dataset(data_dictionary=val_dictionary, bands=bands, mode='eval')
    test_data = dataset(data_dictionary=test_dictionary, bands=bands, mode='test')
    print('train examples =', len(train_dictionary), 'val examples =', len(val_dictionary),
          'test examples =', len(test_dictionary))
    train_dataloader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True, num_workers=4)
    val_dataloader = DataLoader(dataset=val_data, batch_size=batch_size, shuffle=True, num_workers=4)
    test_dataloader = DataLoader(dataset=test_data, batch_size=batch_size, shuffle=True, num_workers=4)
    return train_dataloader, val_dataloader, test_dataloader, test_dictionary

and tested it like this to compare data generated across runs, and the difference is zero now:

def check_data_sanity():
    train, val, _, test1 = get_dataloaders(base_folder='/home/annus/Desktop/projects/forest_cover_change/eurosat/images/tif/',
                                           batch_size=16)
    train, val, _, test2 = get_dataloaders(base_folder='/home/annus/Desktop/projects/forest_cover_change/eurosat/images/tif/',
                                           batch_size=16)
    train, val, _, test3 = get_dataloaders(base_folder='/home/annus/Desktop/projects/forest_cover_change/eurosat/images/tif/',
                                           batch_size=16)
    # shared_items = {k: test1[k] for k in test1 if k in test2 and test1[k] == test2[k]}
    # print(len(shared_items), len(test1), len(test2))

    def get_dict_diff(d1, d2):
        return len(set(d1.values()) - set(d2.values()))

    import pickle
    # with open('test1.pkl', 'wb') as ts1:
    #     pickle.dump(test1, ts1, protocol=pickle.HIGHEST_PROTOCOL)
    with open('test1.pkl', 'rb') as ts1:
        test1_old = pickle.load(ts1)
    print(get_dict_diff(test2, test1_old))

I then tested my model again and I am still getting different accuracies. But I am convinced that it must be a problem with the way I load the data, and that it is still getting a different split every time.
st82185
I have tested the dataloader’s test set by getting the test set multiple times in the same run as well as across different runs, and there is no difference now. Here is the updated test:

def check_data_sanity():
    train, val, _, test1 = get_dataloaders(base_folder='/home/annus/Desktop/projects/forest_cover_change/eurosat/images/tif/',
                                           batch_size=16)
    train, val, _, test2 = get_dataloaders(base_folder='/home/annus/Desktop/projects/forest_cover_change/eurosat/images/tif/',
                                           batch_size=16)
    train, val, _, test3 = get_dataloaders(base_folder='/home/annus/Desktop/projects/forest_cover_change/eurosat/images/tif/',
                                           batch_size=16)

    def get_dict_diff(d1, d2):
        return len(set(d1.values()) - set(d2.values()))

    # compare on the same run
    print(get_dict_diff(test1, test2))
    print(get_dict_diff(test2, test3))
    print(get_dict_diff(test3, test1))

    # compare across runs
    import pickle
    with open('test1.pkl', 'rb') as ts1:
        test1_old = pickle.load(ts1)
    print(get_dict_diff(test2, test1_old))
st82186
Okay, so finally, after all of this, I decided to save pickle files for my train and test split to make sure the same data loads every time, and tested again, but I still get a different accuracy every time. This is my new loader code:

def get_dataloaders(base_folder, batch_size, one_hot=False):
    print('inside dataloading code...')

    class dataset(Dataset):
        # ... (dataset class body identical to the previous posts) ...

    """
    Okay so here is how we do it. We save the train, test and validation dictionaries
    if they don't exist, and once they do, we load the preexisting ones to help us!
    """
    # check if we already have the data saved with us...
    count_data = 0  # count tells us what to do
    if os.path.exists('train_loader.pkl'):
        count_data += 1
        with open('train_loader.pkl', 'rb') as train_l:
            train_dictionary = p.load(train_l)
        print('INFO: Loaded pre-saved train data...')
    if os.path.exists('val_loader.pkl'):
        count_data += 1
        with open('val_loader.pkl', 'rb') as val_l:
            val_dictionary = p.load(val_l)
        print('INFO: Loaded pre-saved eval data...')
    if os.path.exists('test_loader.pkl'):
        count_data += 1
        with open('test_loader.pkl', 'rb') as test_l:
            test_dictionary = p.load(test_l)
        print('INFO: Loaded pre-saved test data...')

    # create training set examples dictionary
    if count_data != 3:
        all_examples = {}
        for folder in sorted(os.listdir(base_folder)):
            # each folder name is a label itself; new folder, new dictionary!
            class_examples = []
            inner_path = os.path.join(base_folder, folder)
            # this was a problem for a long time now.. because of not sorting it
            all_images_of_current_class = [x for x in os.listdir(inner_path) if x.endswith('.tif')]
            all_images_of_current_class.sort(key=lambda f: int(filter(str.isdigit, f)))
            for image in all_images_of_current_class:
                image_path = os.path.join(inner_path, image)
                # for each index as key, we want to have its path and label as its items
                class_examples.append(image_path)
            all_examples[folder] = class_examples

        # split them into train and test
        train_dictionary, val_dictionary, test_dictionary = {}, {}, {}
        for class_name in all_examples.keys():
            class_examples = all_examples[class_name]
            # random.shuffle(class_examples)  # this doesn't work
            random.Random(4).shuffle(class_examples)  # but this does
            total = len(class_examples)
            train_count = int(total * 0.8)
            train_ = class_examples[:train_count]
            test = class_examples[train_count:]
            total = len(train_)
            train_count = int(total * 0.9)
            train = train_[:train_count]
            validation = train_[train_count:]
            for example in train:
                train_dictionary[len(train_dictionary)] = (example, class_name)
            for example in test:
                test_dictionary[len(test_dictionary)] = (example, class_name)
            for example in validation:
                val_dictionary[len(val_dictionary)] = (example, class_name)

        # test dataset
        with open('train1.txt', 'wb') as train_check:
            for k in range(len(train_dictionary)):
                train_check.write('{}\n'.format(train_dictionary[k][0]))
        print(map(len, [train_dictionary, val_dictionary, test_dictionary]))

    # create dataset class instances
    bands = [4, 3, 2, 5, 8]  # these are [NIR, Vegetation Red Edge, Red, Green, Blue] bands
    train_data = dataset(data_dictionary=train_dictionary, bands=bands, mode='train')
    val_data = dataset(data_dictionary=val_dictionary, bands=bands, mode='eval')
    test_data = dataset(data_dictionary=test_dictionary, bands=bands, mode='test')
    print('train examples =', len(train_dictionary), 'val examples =', len(val_dictionary),
          'test examples =', len(test_dictionary))
    train_dataloader = DataLoader(dataset=train_data, batch_size=batch_size, shuffle=True, num_workers=4)
    val_dataloader = DataLoader(dataset=val_data, batch_size=batch_size, shuffle=True, num_workers=4)
    test_dataloader = DataLoader(dataset=test_data, batch_size=batch_size, shuffle=True, num_workers=4)

    # save the created datasets
    if count_data != 3:
        with open('train_loader.pkl', 'wb') as train_l:
            p.dump(train_dictionary, train_l, protocol=p.HIGHEST_PROTOCOL)
        with open('test_loader.pkl', 'wb') as test_l:
            p.dump(test_dictionary, test_l, protocol=p.HIGHEST_PROTOCOL)
        with open('val_loader.pkl', 'wb') as val_l:
            p.dump(val_dictionary, val_l, protocol=p.HIGHEST_PROTOCOL)
        print('INFO: saved data pickle files for later use')

    return train_dataloader, val_dataloader, test_dataloader  # , test_dictionary
st82187
Would it be possible to upload the test_loader so that I could run your code? If it’s too big, maybe a small part would be sufficient (e.g. 1000 samples). Currently I can’t locate the source of this issue.
st82188
Okay. So here are my labels as indices:

all_labels = {
    'AnnualCrop': 0,
    'Forest': 1,
    'HerbaceousVegetation': 2,
    'Highway': 3,
    'Industrial': 4,
    'Pasture': 5,
    'PermanentCrop': 6,
    'Residential': 7,
    'River': 8,
    'SeaLake': 9
}

and I have uploaded my test images and the corresponding pickle file that contains their labels too: https://drive.google.com/file/d/1HJ8auOSDAVNoV4Jll2izz8bMIAvJ3v0o/view?usp=sharing
st82189
I’ve downloaded the data and did some checks. The model output of small stacked batches compared to one large batch (257) is the same. The losses for batch_size=1 and the unreduced losses for batch_size=257 are the same. I can’t check the accuracy as I don’t have the state dict, but so far the solution is deterministic. Also, I had to remove some code, which should be dead anyway due to mode='test', e.g. the transform.
st82190
Thank you so much for your time ptrblck! I still can’t find out what the error is. By the way, when you have some free time you can test my trained model on this thing as well: https://drive.google.com/file/d/1x6ebxznusnUEKVQahgIDHUPzrolVofYd/view?usp=sharing
st82191
Hi everyone, I have several questions about ANNs in PyTorch. I have been using PyTorch for image and video processing for a long time. Now I want to use it in a different area. I have three different types of datasets, about temperature, voltage and current. I need to train on them to get results very close to the real results. Here is my question: can I train on these 3 types of numpy arrays, and how?
st82192
This is the error message I get. In the first line, I output the shapes of predicted and target. From my understanding, the error arises from those shapes not being the same, but here they clearly are.

torch.Size([6890, 3]) torch.Size([6890, 3])
Traceback (most recent call last):
  File "train.py", line 251, in <module>
    main()
  File "train.py", line 230, in main
    train(net, training_dataset, targets, device, criterion, optimizer, epoch, args.epochs)
  File "train.py", line 101, in train
    loss = criterion(predicted, target.detach().cpu().numpy())
  File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 443, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "/home/hb119056/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2244, in mse_loss
    if not (target.size() == input.size()):
TypeError: 'int' object is not callable

I hope all the relevant context information is given. Thanks for any suggestions!
st82193
Solved by ptrblck in post #2 You are passing the target as a numpy array, which works differently for size() (it returns the number of elements as an int, thus raising this error). Remove the numpy() call and the code should work fine.
st82194
You are passing the target as a numpy array, which works differently for size() (it returns the number of elements as an int, thus raising this error). Remove the numpy() call and the code should work fine.
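In other words, the fix is a one-line change (sketch):

# before: target is a numpy array, so target.size is an int attribute and target.size() fails
loss = criterion(predicted, target.detach().cpu().numpy())
# after: keep the target as a tensor
loss = criterion(predicted, target.detach())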
st82195
There are two required parameters for the nn.Transformer module: src (the sequence to the encoder, required) and tgt (the sequence to the decoder, required). Does src (tgt) correspond to the length of a sentence or to the size of the vocabulary? I already tried different options, but the transformer absolutely does not want to learn.
st82196
I have vocab, sentence, word. src - the sequence to the encoder - what does this apply to? (screenshot: src.png)
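(For reference, a minimal shape sketch of the nn.Transformer inputs; the sizes are illustrative. S and T are the source and target sequence lengths, N the batch size and E the embedding dimension, so src and tgt are embedded sentences, not vocabularies:)

import torch
import torch.nn as nn

model = nn.Transformer(d_model=512, nhead=8)
src = torch.rand(10, 32, 512)  # (S, N, E): source sentence length 10, batch 32
tgt = torch.rand(20, 32, 512)  # (T, N, E): target sentence length 20
out = model(src, tgt)          # (T, N, E)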
st82197
Hi, I am trying to implement simple/general attention in PyTorch. So far the model seems to be working, but what I am interested in is getting the attention weights, so that I can visualize them. Here’s what I am doing: creating dummy sequence data where the 5th sequence step is set as the target, so all the model needs to do is understand that the 5th step in the data is the target and give a higher attention weight to the 5th step. Strangely, when I try to plot my attention weights, the highest weight is observed at the last step! So I would like to know two things: Given the code, is my implementation of attention correct? If it is right, why are the attention weights shifted to the last sequence step? Here’s the entire code that I am playing with (hope it’s readable and understandable):

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

INPUT_DIMS = 10
TIME_STEPS = 10
ATTENTION_COL = 5

def get_data_recurrent(n, time_steps, input_dim, attention_column=10):
    """
    Data generation. x is purely random except that its first value equals the target y.
    In practice, the network should learn that the target = x[attention_column].
    Therefore, most of its attention should be focused on the value addressed by attention_column.
    :param n: the number of samples to retrieve.
    :param time_steps: the number of time steps of your series.
    :param input_dim: the number of dimensions of each element in the series.
    :param attention_column: the column linked to the target. Everything else is purely random.
    :return: x: model inputs, y: model targets
    """
    x = np.random.standard_normal(size=(n, time_steps, input_dim))
    y = np.random.randint(low=0, high=2, size=(n, 1))
    x[:, attention_column, :] = np.tile(y[:], (1, input_dim))
    return x, y

X_train, y_train = get_data_recurrent(300000, input_dim=INPUT_DIMS, time_steps=TIME_STEPS,
                                      attention_column=ATTENTION_COL)

class simple_lstm(nn.Module):
    def __init__(self, input_size, hidden_size, output_units):
        super(simple_lstm, self).__init__()
        self.lstm = nn.LSTM(input_size=10, hidden_size=hidden_size, batch_first=True)
        self.dense1 = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, (hn, cn) = self.lstm(x)
        hn = hn.squeeze(0)
        hidden_state = hn
        attention_scores = torch.bmm(out, hidden_state.unsqueeze(2)).squeeze(2)
        soft_attention_weights = F.softmax(attention_scores, 1)
        attention_output = torch.bmm(out.transpose(1, 2), soft_attention_weights.unsqueeze(2)).squeeze(2)
        out = self.dense1(attention_output)
        return out, soft_attention_weights

# Model training
torch_model = simple_lstm(input_size=INPUT_DIMS, hidden_size=32, output_units=1)
optimiser = torch.optim.Adam(params=torch_model.parameters())
criterion = nn.MSELoss()
torch_train = torch.utils.data.TensorDataset(torch.tensor(X_train, dtype=torch.float),
                                             torch.tensor(y_train, dtype=torch.float))
torch_train_loader = torch.utils.data.DataLoader(torch_train, batch_size=64)

num_epochs = 1
saw = []
for epoch in range(num_epochs):
    for i, (X_tr, y_tr) in enumerate(torch_train_loader):
        optimiser.zero_grad()
        output, att = torch_model(X_tr)
        saw.append(att.data.numpy().mean(axis=0))
        loss = criterion(output, y_tr)
        loss.backward()
        optimiser.step()
    print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch + 1, num_epochs, loss.item()))

# Plot attention weights
att = []
for i in range(1, 300):
    X_test, y_test = get_data_recurrent(1, input_dim=INPUT_DIMS, time_steps=TIME_STEPS,
                                        attention_column=ATTENTION_COL)
    preds, attention = torch_model(torch.tensor(X_test, dtype=torch.float))
    att.append(attention.data.numpy())
arr = np.mean(np.array(att), axis=0)
pd.DataFrame(arr.squeeze(0), columns=['attention (%)']).plot(kind='bar',
                                                             title='Attention Mechanism as '
                                                                   'a function of input')

Here are the attention weights from the model: (screenshot: bar plot of attention weights) I was expecting to see a higher weight for the 5th sequence step, and not for the last one. Could someone please guide me on this?
st82198
I have a TensorFlow function. I want to convert it to PyTorch. However, I get a big difference between the TensorFlow and PyTorch results. Could you please check my code and let me know what the problem is?

tensorflow result 0.23463342
Pytorch result 0.035501156

The colab is at Here. This is my target TensorFlow code:

eps = 1e-5
ndims = 3
win = [9] * ndims

####################################
# TENSORFLOW
####################################
def ncc_tensorflow(I, J):
    # get convolution function
    conv_fn = getattr(tf.nn, 'conv%dd' % ndims)
    # compute CC squares
    I2 = I*I
    J2 = J*J
    IJ = I*J
    # compute filters
    sum_filt = tf.ones([*win, 1, 1])
    padding = 'SAME'
    strides = [1] * (ndims + 2)
    # compute local sums via convolution
    I_sum = conv_fn(I, sum_filt, strides, padding)
    J_sum = conv_fn(J, sum_filt, strides, padding)
    I2_sum = conv_fn(I2, sum_filt, strides, padding)
    J2_sum = conv_fn(J2, sum_filt, strides, padding)
    IJ_sum = conv_fn(IJ, sum_filt, strides, padding)
    # compute cross correlation
    win_size = np.prod(win)
    u_I = I_sum/win_size
    u_J = J_sum/win_size
    cross = IJ_sum - u_J*I_sum - u_I*J_sum + u_I*u_J*win_size
    I_var = I2_sum - 2 * u_I * I_sum + u_I*u_I*win_size
    J_var = J2_sum - 2 * u_J * J_sum + u_J*u_J*win_size
    cc = cross*cross / (I_var*J_var + eps)
    # return negative cc.
    return tf.reduce_mean(cc)

And I reproduce it in PyTorch with:

####################################
# PYTORCH
####################################
def ncc_torch(I, J):
    # compute CC squares
    I2 = I*I
    J2 = J*J
    IJ = I*J
    # compute filters
    batch_size, channels, _, _, _ = I.shape
    sum_filt = torch.ones((batch_size, channels, *win)).float()
    strides = [1] * (ndims)
    # compute local sums via convolution
    I_sum = F.conv3d(I, sum_filt, stride=strides, padding=1)
    J_sum = F.conv3d(J, sum_filt, stride=strides, padding=1)
    I2_sum = F.conv3d(I2, sum_filt, stride=strides, padding=1)
    J2_sum = F.conv3d(J2, sum_filt, stride=strides, padding=1)
    IJ_sum = F.conv3d(IJ, sum_filt, stride=strides, padding=1)
    # compute cross correlation
    win_size = np.prod(win)
    u_I = I_sum / win_size
    u_J = J_sum / win_size
    cross = IJ_sum - u_J*I_sum - u_I*J_sum + u_I*u_J*win_size
    I_var = I2_sum - 2 * u_I * I_sum + u_I*u_I*win_size
    J_var = J2_sum - 2 * u_J * J_sum + u_J*u_J*win_size
    cc = cross*cross / (I_var*J_var + eps)
    return torch.mean(cc)

The unit test is:

# Unit test
I = torch.rand(1,18,18,18,1)  # BDHWC
J = torch.rand(1,18,18,18,1)
with tf.Session() as sess:
    tf_ncc = ncc_tensorflow(I, J)
    tf_result = sess.run(tf_ncc)
    print('tensorflow result ', tf_result)
I = I.permute(0,4,1,2,3)  # BCDHW
J = J.permute(0,4,1,2,3)  # BCDHW
print('Pytorch result', ncc_torch(I, J).numpy())
st82199
Solved by ptrblck in post #2 Skimming through your code it looks like the padding in your PyTorch approach might be wrong. If I’m not mistaken you are using a 3D ([9x9x9]) conv kernel. To get the same volumetric output as your input, you should use padding=4 for stride=1.
st82200
Skimming through your code it looks like the padding in your PyTorch approach might be wrong. If I’m not mistaken you are using a 3D ([9x9x9]) conv kernel. To get the same volumetric output as your input, you should use padding=4 for stride=1.
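In code, the change would be (a one-line sketch):

I_sum = F.conv3d(I, sum_filt, stride=1, padding=4)  # 9x9x9 kernel: padding = (9 - 1) // 2 = 4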
st82201
Great. It looks closer than before, using input size 2x81x81x81x1 and your kernel size suggestion:

tensorflow result 0.05861372
Pytorch result 0.029243657

However, the code also uses:

win = [kernel_size] * 3
win_size = np.prod(win)

So if I use a kernel size of 4, it will change the result of np.prod(win). Do you think we still need to keep the same win_size value for both TF and PyTorch, i.e. 9*9*9, and just modify the size in the conv3d?
st82202
I’m not sure I understand the last question properly, but I would suggest to stick as close as possible to the reference implementation in order to get the same results.
st82203
John1231983:

sum_filt = torch.ones((batch_size, channels, *win)).float()

I think @SimonW is mentioning that the above snippet should be:

sum_filt = torch.ones((1, channels, *win)).float()
st82204
Thanks. That is my typo. I fixed it, but the result is the same because my example batch size is 1. I guess the main problem is the padding type.
st82205
@ptrblck it worked. Sorry, I misread your comment. The padding should be 4, not the kernel size.
st82206
I used nn.DataParallel to increase the batch size of a large model. While running, I am getting an error stating: "Mismatch between the batch size of the input and target". I used nn.DataParallel(model, device_list).cuda(). The batch size shown in the error message is 3 times the actual batch size (I am using 3 GPUs). I checked some of the examples and my understanding is that nothing else needs to be done in order to replicate the model onto multiple GPUs and share the batches among them. I would like to know if something else needs to be done, or whether my understanding of the whole approach is plain wrong! Thanks in advance.
st82207
Make sure that the first dimension is always batch while inputting to your model.
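A minimal sketch of the expected usage (the sizes are illustrative): nn.DataParallel scatters dim 0 across the GPUs and gathers the outputs back, so the loss should compare the full batch against a target of the same size:

import torch
import torch.nn as nn

model = nn.DataParallel(nn.Linear(10, 2), device_ids=[0, 1, 2]).cuda()
x = torch.randn(96, 10).cuda()              # dim 0 is the batch; each GPU sees 32 samples
out = model(x)                              # outputs are gathered back to (96, 2)
target = torch.randint(0, 2, (96,)).cuda()  # target matches the full batch, not 3 * 96
loss = nn.CrossEntropyLoss()(out, target)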
st82208
I want to see the code for nn.Dropout. While staying in Google Colab, I ctrl+click on nn.Dropout, and it takes me to dropout.py, which has the Dropout class and returns:

F.dropout(input, self.p, self.training, self.inplace)

So I ctrl+click on F.dropout, and it takes me to functional.py, where something like this is returned:

return (_VF.dropout_(input, p, training)
        if inplace
        else _VF.dropout(input, p, training))

Now I am stuck, because this is builtin, and there is no option to go beyond this; the only way is to search for it on GitHub. This happens multiple times for a lot of builtin_function_or_method objects. For example, searching for view or permute: all are builtin_function_or_method, so I cannot access their implementation from a Colab notebook.
st82209
I don’t think there is an easy method using a Python notebook / REPL. However, why would you like to see the underlying (optimized) C++ code?
st82210
To see how dropout is implemented. This happens not only for dropout but for other builtins too; for example torch.group_norm: I want to see how it is implemented, but it is builtin.
st82211
I was wondering if anyone can point me to resources on how PyTorch scales on extremely large parallel jobs. I am a researcher in a group working on ML for computational physics, with all my code in PyTorch. Currently we are evaluating ML frameworks which can scale to super-massive distributed jobs, i.e. > 5000 GPUs. Any idea what the largest distributed job using PyTorch is so far (both in the public domain and internally at Facebook)? Are there any fundamental limitations which prevent it from scaling at that level? If so, can I expect any improvements with the 1.0 release? I would like to avoid re-writing all my code in TensorFlow for scaling, so I'd really appreciate some input. Thanks in advance…
st82212
For ImageNet, converged model accuracy entirely depends on the total batch size used for training. In other words, the largest ResNet batch size without model accuracy loss I have personally used is 4096 images, even though other people have published results going as large as 18-24K. So imagine: if you put a batch size of 32 per GPU, with 4096 images you will be distributing your job across 128 GPUs. From my own experience, from a pure HPC performance and scalability point of view, I have done 64 nodes * 8 GPUs/node (512 GPUs) training myself with pretty good linear scalability. Of course, the scalability depends on your model size and your HPC cluster network setup. So what is your network interconnect, InfiniBand or Ethernet, at what speed, and what model are you trying to train?
st82213
Thanks for your input. Certainly interesting that you see scalability up to 512 GPUs; we are looking at a really large scale, and 5000 GPUs would be the lower bound. Some facts about the hardware: IBM Power9 with NVIDIA Volta V100 GPUs; the GPUs have NVLink (Gen 2) interconnect, with 4 GPUs/node; Mellanox EDR InfiniBand. I want to train a custom architecture similar to a convolutional recurrent network, and we will be training on several terabytes of data. Apart from the inherent issues of communication latency etc., is there any limitation in PyTorch as such when I scale up? I am having difficulty finding resources to learn distributed GPU training with PyTorch (apart from the documentation). What approach did you use? I am currently experimenting with Horovod at a much smaller scale, but I'd prefer using PyTorch's native distributed module, if I can only get some nice tutorials to study. Thanks again for your insights.
st82214
I am using Horovod for distributed training. You only need to change a few lines to make your code distributed.
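The typical changes look roughly like this (a minimal sketch; the model and learning rate are placeholders, not from this thread):

import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())  # pin one GPU per process

model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())  # scale lr with worker count
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)  # sync initial weights across workers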
st82215
It's been several months since your last post. Can you enlighten us about what you ultimately did? Did you use PyTorch? How did you go about it? What did you do? Was it good or not? I'd really appreciate it.
st82216
Hi, I have got this error:

Traceback (most recent call last):
  File "train.py", line 15, in <module>
    model = CreateModel(opt)
  File "model/model_Loader.py", line 15, in CreateModel
    model.init_weights()
  File "model/DRGAN.py", line 204, in init_weights
    self.G.apply(weights_init_normal)
  File "/home/pirl/anaconda3/envs/dr-gan-torch1-2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 293, in apply
    module.apply(fn)
  File "/home/pirl/anaconda3/envs/dr-gan-torch1-2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 293, in apply
    module.apply(fn)
  File "/home/pirl/anaconda3/envs/dr-gan-torch1-2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 293, in apply
    module.apply(fn)
  [Previous line repeated 2 more times]
  File "/home/pirl/anaconda3/envs/dr-gan-torch1-2/lib/python3.6/site-packages/torch/nn/modules/module.py", line 294, in apply
    fn(self)
  File "model/Component.py", line 49, in weights_init_normal
    init.uniform_(m.weight.data, 1.0, 0.02)
  File "/home/pirl/anaconda3/envs/dr-gan-torch1-2/lib/python3.6/site-packages/torch/nn/init.py", line 88, in uniform_
    return no_grad_uniform(tensor, a, b)
  File "/home/pirl/anaconda3/envs/dr-gan-torch1-2/lib/python3.6/site-packages/torch/nn/init.py", line 14, in no_grad_uniform
    return tensor.uniform_(a, b)
RuntimeError: Expected a_in <= b_in to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)

Line 15 in train.py is model = CreateModel(opt), line 15 in model/model_Loader.py is model.init_weights(), and in model/DRGAN.py the init_weights method is:

203 def init_weights(self):
204     self.G.apply(weights_init_normal)
205     self.D.apply(weights_init_normal)

I got this PyTorch code from GitHub (zhangjunh/DR-GAN-by-pytorch, an implementation of Disentangled Representation Learning GAN for Pose-Invariant Face Recognition) and tried to run it. According to the GitHub page, the required PyTorch version is 0.2; however, my torch version is 1.2, the latest version. I want to run it with the current version without changing the version. Any help will be greatly appreciated!!!
st82217
Solved by alex.veuthey in post #2 I’m having trouble understanding the logic in the original code. In both version 0.2 and 1.2 of PyTorch, calling .uniform_(a, b) on a tensor requires a <= b, as the error states, and the call uses a = 1.0 and b = 0.02. This might be a mistake in the original code, but I’m not familiar with GAN init…
st82218
I’m having trouble understanding the logic in the original code. In both version 0.2 and 1.2 of PyTorch, calling .uniform_(a, b) on a tensor requires a <= b, as the error states, and the call uses a = 1.0 and b = 0.02. This might be a mistake in the original code, but I’m not familiar with GAN initialization standards / approaches for batch norm layers. Have a look here for the torchvision-provided ResNet model.
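(For reference, a common DCGAN-style initialization for batch norm layers draws from a normal distribution instead, which may be what the original author intended; this is an assumption, not something confirmed by that repository:)

from torch.nn import init

def weights_init_normal(m):
    classname = m.__class__.__name__
    if classname.find('BatchNorm') != -1:
        init.normal_(m.weight.data, 1.0, 0.02)  # mean 1.0, std 0.02; no a <= b constraint
        init.constant_(m.bias.data, 0.0)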
st82219
Sorry for the late comment. Thanks for your advice, I finally solved the problem. I changed the values of a & b from:

init.uniform_(m.weight.data, 1.0, 0.02)

to:

init.uniform_(m.weight.data, 0.02, 1.0)

After that, I installed torchvision in a different way:

conda install torchvision -c pytorch

instead of pip install torchvision, and it worked. I really appreciate your help!!!
st82220
I am using the grid_sample function:

torch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros')

I want to construct a random grid that is trained with the network. The script looks like:

class Network(nn.Module):
    def __init__(self):
        self.rnd_grid_tensor = ...  # (requires_grad=True)

    def forward(self, input):
        output = torch.nn.functional.grid_sample(input, self.rnd_grid_tensor,
                                                 mode='bilinear', padding_mode='zeros')
        return output

How do I generate rnd_grid_tensor so that the values in the grid follow a Gaussian distribution? Thanks. This is my solution, but I am not sure whether it is correct:

from torch.autograd import Variable

def gaussian(ins, is_training, mean, stddev):
    if is_training:
        noise = Variable(ins.data.new(ins.size()).normal_(mean, stddev))
        return ins + noise
    return ins

grid = torch.rand((1,2,16,16), requires_grad=True)
grid = gaussian(grid, True, 1, 0.1)
st82221
Do you want a random displacement field where the displacements follow a Gaussian distribution? If so, what you need to do is generate a tensor with the displacements:

disp_field = torch.FloatTensor(1,2,16,16).normal_(mean, std).requires_grad_()

Then add it to an identity grid, which you can create using affine_grid():

id_grid = torch.nn.functional.affine_grid(torch.FloatTensor([[[1, 0, 0],[0, 1, 0]]]), size=(1,2,16,16))

and then pass the sum to grid_sample():

output = torch.nn.functional.grid_sample(input, id_grid + disp_field)

Just note that the mean and standard deviation will be in grid units of a half-image (i.e. a displacement of 1 is equivalent to (size - 1)/2 pixels). So if you have the mean and std in pixels, you have to divide them both by (size - 1)/2. I hope this helps.
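For instance, converting a standard deviation given in pixels to grid units could look like this (a sketch assuming a 16-pixel grid and a hypothetical std of 2 pixels):

import torch

size = 16
std_pixels = 2.0
std_grid = std_pixels / ((size - 1) / 2.0)  # grid units are half-image lengths
disp_field = torch.FloatTensor(1, 2, size, size).normal_(0.0, std_grid).requires_grad_()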
st82222
I have code to generate a 2D grid:

B, C, H, W = x.size()
# mesh grid
xx = torch.arange(0, W).view(1,-1).repeat(H,1)
yy = torch.arange(0, H).view(-1,1).repeat(1,W)
xx = xx.view(1,1,H,W).repeat(B,1,1,1)
yy = yy.view(1,1,H,W).repeat(B,1,1,1)
grid = torch.cat((xx,yy),1).float()

How do I obtain a 3D grid by adding zz to the code? This is what I tried:

B, C, D, H, W = input.size()
xx = torch.arange(0, W).view(1, 1,-1).repeat(D, H, 1)
yy = torch.arange(0, H).view(1, -1,1).repeat(D, 1, W)
zz = torch.arange(0, D).view(-1, 1,1).repeat(1, H, W)
xx = xx.view(1,1,D,H,W).repeat(B,1,1,1,1)
yy = yy.view(1,1,D,H,W).repeat(B,1,1,1,1)
zz = zz.view(1,1,D,H,W).repeat(B,1,1,1,1)
grid = torch.cat((xx,yy,zz),1).float().to('cuda')

But I feel it is wrong somewhere. Could you verify it?
st82223
@ptrblck: Yes, because it cares about the order. I can use it to make a grid, but when I feed it to the grid_sample() function, the output looks rotated. For example:

xx = torch.arange(0, W).view(1, 1,-1).repeat(D, H, 1)

or

xx = torch.arange(0, D).view(-1, 1,1).repeat(1, H, W)
st82224
The easiest way to obtain a grid for use with the grid_sample() function is probably to use torch.nn.functional.affine_grid(). You give it an affine matrix, and it returns a grid that you can then pass to grid_sample(). If you want an identity grid (no transformation), you can just pass it the identity affine matrix, which for 3D data is:

aff = torch.FloatTensor([[[1, 0, 0, 0],[0, 1, 0, 0],[0, 0, 1, 0]]])

For example:

aff = aff.expand(B, 3, 4)  # expand to the number of batches you need
grid = torch.nn.functional.affine_grid(aff, size=(B,C,D,H,W))
torch.nn.functional.grid_sample(data_3d, grid)

So yeah, you could probably get the same effect using your code sample if you do it just right, but this should be much easier. I hope this helps.
st82225
Assume I have a multi-GPU system. Let tensor "a" be on one of the GPUs, and tensor "b" be on the CPU. How can I move "b" to the same GPU that "a" resides on? Unfortunately, b.type_as(a) always moves b to GPU 0. Thanks.
st82226
Thanks @ptrblck. The problem I have with the Tensor.new function is: If "a" is on GPU and "b" on CPU, then a.new(b) does not work (error: …constructor received an invalid combination of arguments…). a.new(b.numpy()) works, though, but I am afraid that it is inefficient. And if "a" and "b" are already on the same device, then a.new(b) will unnecessarily create a new copy of "b". I am looking for a function like b.type_as(a) that automatically moves the data to the same device as "a".
st82227
As far as I understand, you would like to move b to the same device as a. This should work:

a = torch.randn(10, 10).cuda()
print(a)
b = a.new(a)
print(b)
c = a.new(10, 10)
print(c)
st82228
@ptrblck The problem is that "b" does not necessarily have the same shape and type as "a". For example, "a" could be a 10⨉10 float tensor, while "b" is a 13⨉19⨉23 int tensor.
st82229
You can pass the standard arguments to new as to a new tensor:

a = torch.randn(10, 10).cuda()
print(a)
b = a.new(13, 19, 23).long()
print(b)

Would this work? EDIT: You could of course pass a numpy array or something else to the constructor. Could you post your use case? I have the feeling I'm not really understanding your problem and thus posting useless approaches.
st82230
Thanks @ptrblck. I have a huge project, and in most places I used type_as in order to move the data to the proper device. Then I wanted to run several instances of the program on different GPUs of the machine at the same time. The problem was that type_as always uses GPU 0. Right now I am using the approach explained in "Select GPU device through env vars", and it solves my problem.
st82231
This may be new functionality from the Tensor API, but to move tensor a to the device of tensor b I use:

a = a.to(b.device)
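A small usage sketch (the shapes and device are illustrative):

import torch

a = torch.randn(3, 4)                  # on CPU
b = torch.randn(5, device='cuda:1')    # on some GPU (assuming one is available)
a = a.to(b.device)                     # a moves to cuda:1; shape and dtype stay unchanged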
st82232
Procedure 1:

loss1 = crit(pred, target)
loss1.backward()
loss2 = crit(pred, target)
loss2.backward()
optim.step()

Procedure 2:

loss1 = crit(pred, target)
loss1.backward()
optim.step()
optim.zero_grad()
loss2 = crit(pred, target)
loss2.backward()
optim.step()
optim.zero_grad()

Are these two procedures the same in nature, meaning they will yield the same gradient update? I am trying to switch from EfficientNet-B0 to B5 for a project, but my GPU can only handle a batch size of 3 on B5. If I run with batch size 3, the loss does not converge as well as before with B0. So I am thinking about gradient accumulation. Is the above the right way to do it? Also, what are the other ways to mitigate the batch size issue? I have been trying to install apex for the last 3 hours but failed! :'(
st82233
This would yield the same results if you are using an optimizer without any running estimates. E.g. this example returns the same updated values for SGD, but will fail for e.g. Adam:

torch.manual_seed(2809)
model = nn.Linear(10, 1, bias=False)
w0 = model.weight.clone()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

data = torch.randn(1, 10)
target = torch.randn(1, 1)

# 1)
output = model(data)
loss1 = criterion(output, target)
loss1.backward(retain_graph=True)
loss2 = criterion(output, target)
loss2.backward()
optimizer.step()
print(w0 - model.weight)
optimizer.zero_grad()

# 2)
torch.manual_seed(2809)
model = nn.Linear(10, 1, bias=False)
# make sure weight is equal
print((w0 == model.weight).all())
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

output = model(data)
loss1 = criterion(output, target)
loss1.backward(retain_graph=True)
optimizer.step()
optimizer.zero_grad()
loss2 = criterion(output, target)
loss2.backward()
optimizer.step()
optimizer.zero_grad()
print(w0 - model.weight)

Have a look at this post for some possible workflows. What error are you getting while trying to install apex?
st82234
Hi, is there a way to get the same result with an Adam optimizer before and after gradient accumulation?
The Apex error I am getting:

…
Compiling cuda extensions with nvcc:
NVIDIA ® Cuda compiler driver
Copyright © 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:04_Central_Daylight_Time_2018
Cuda compilation tools, release 10.0, V10.0.130
from C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0/bin
…
multi_tensor_sgd_kernel.cu
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1379): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1437): note: see reference to class template instantiation ‘ska::flat_hash_map<K,V,H,E,A>’ being compiled
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1383): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1391): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1473): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1510): note: see reference to class template instantiation ‘ska::flat_hash_set<T,H,E,A>’ being compiled
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1478): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1482): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1486): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
c:/users/rafi/anaconda3/envs/myenv2/lib/site-packages/torch/include\c10/util/flat_hash_map.h(1490): error C3203: ‘templated_iterator’: unspecialized class template can’t be used as a template argument for template parameter ‘_Ty1’, expected a real type
error: command ‘C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin\nvcc.exe’ failed with exit status 2
Running setup.py install for apex … error
st82235
If you would like to simulate a larger batch size, the first approach should be the valid one.
The build error might be related to an older VS version, as described here.
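For completeness, a minimal gradient-accumulation loop along those lines (step count and names are illustrative):

accumulation_steps = 8                     # simulated batch size = 3 * 8 = 24
optimizer.zero_grad()
for i, (data, target) in enumerate(loader):
    output = model(data)
    # scale the loss so the accumulated gradient matches a large-batch average
    loss = criterion(output, target) / accumulation_steps
    loss.backward()                        # gradients accumulate in param.grad
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()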
st82236
Hi, I am trying to trace a network with torch.jit.trace(); it takes 2 tensors (z and x) as input and produces one tensor as output (y). I can successfully trace models with 1 input and 1 output as follows:

sample_input = torch.rand(1, 3, 256, 256)
traced_module_feature = torch.jit.trace(model, sample_input)
traced_module_feature.save("traced_model.pt")

However, I have no clue how to trace models that take 2 inputs. I tried the following, which produces the error given below:

z = torch.rand(1, 3, 127, 127)
x = torch.rand(1, 3, 256, 256)
traced_module_rpn = torch.jit.trace(model, { 'zf': z, 'xf': x })

and the output is

Traceback (most recent call last):
  File "export_to_cpp.py", line 57, in <module>
    main()
  File "export_to_cpp.py", line 52, in main
    traced_module_rpn = torch.jit.trace(model.rpn_model, { 'zf':z, 'xf':x } )
  File "/home/tlm/anaconda3/envs/svt2_pyth1/lib/python3.7/site-packages/torch/jit/__init__.py", line 565, in trace
    module._create_method_from_trace('forward', func, example_inputs)
RuntimeError: Only tensors and (possibly nested) tuples of tensors are supported as inputs or outputs of traced functions (toIValue at /opt/conda/conda-bld/pytorch-nightly_1538562647654/work/torch/csrc/jit/pybind_utils.h:74)
frame #0: <unknown function> + 0x3fe53f (0x7f98c5fb353f in /home/tlm/anaconda3/envs/svt2_pyth1/lib/python3.7/site-packages/torch/_C.cpython-37m-x86_64-linux-gnu.so)
frame #1: <unknown function> + 0x463a2b (0x7f98c6018a2b in /home/tlm/anaconda3/envs/svt2_pyth1/lib/python3.7/site-packages/torch/_C.cpython-37m-x86_64-linux-gnu.so)
frame #2: <unknown function> + 0x1a665d (0x7f98c5d5b65d in /home/tlm/anaconda3/envs/svt2_pyth1/lib/python3.7/site-packages/torch/_C.cpython-37m-x86_64-linux-gnu.so)
<omitting python frames>
frame #19: __libc_start_main + 0xf0 (0x7f98da43b830 in /lib/x86_64-linux-gnu/libc.so.6)

I am using pytorch-1.0-rc1 with cuda-9.0 and python 3.7. Thank you
st82237
And this is the network that I am trying to trace:

import torch
import torch.nn as nn
import torch.nn.functional as F


class RPN(nn.Module):
    def __init__(self):
        super(RPN, self).__init__()

    def forward(self, z_f, x_f):
        raise NotImplementedError


def conv2d_group(x, kernel):
    batch = kernel.size()[0]
    pk = kernel.view(-1, x.size()[1], kernel.size()[2], kernel.size()[3])
    px = x.view(1, -1, x.size()[2], x.size()[3])
    po = F.conv2d(px, pk, groups=batch)
    po = po.view(batch, -1, po.size()[2], po.size()[3])
    return po


class UPChannelRPN(RPN):
    def __init__(self, anchor_num=5, feature_in=256, feature_out=256):
        super(UPChannelRPN, self).__init__()

        self.anchor_num = anchor_num
        self.feature_in = feature_in
        self.feature_out = feature_out

        self.cls_output = 2 * self.anchor_num
        self.loc_output = 4 * self.anchor_num

        self.template_cls_conv = nn.Conv2d(self.feature_in, self.feature_out * self.cls_output, kernel_size=3)
        self.template_loc_conv = nn.Conv2d(self.feature_in, self.feature_out * self.loc_output, kernel_size=3)

        self.search_cls_conv = nn.Conv2d(self.feature_in, self.feature_out, kernel_size=3)
        self.search_loc_conv = nn.Conv2d(self.feature_in, self.feature_out, kernel_size=3)

        self.loc_adjust = nn.Conv2d(self.loc_output, self.loc_output, kernel_size=1)

    def forward(self, z_f, x_f):
        cls_kernel = self.template_cls_conv(z_f)
        loc_kernel = self.template_loc_conv(z_f)

        cls_feature = self.search_cls_conv(x_f)
        loc_feature = self.search_loc_conv(x_f)

        pred_cls = conv2d_group(cls_feature, cls_kernel)
        pred_loc = self.loc_adjust(conv2d_group(loc_feature, loc_kernel))
        return pred_cls, pred_loc

I am trying to trace UPChannelRPN(). This network does not include any conditionals (to the best of my knowledge), and therefore I think it should be possible to trace it using torch.jit.trace.
st82238
The trace works when I use:

traced_module_rpn = torch.jit.trace(model.rpn_model, [Variable(z), Variable(x)])

but with this warning:

rpn.py:18: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  po = F.conv2d(px, pk, groups=batch)

where line 18 of rpn.py is po = F.conv2d(px, pk, groups=batch).
Any help in understanding this warning message is highly appreciated.
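My reading of the warning (not an official explanation): batch = kernel.size()[0] converts a tensor dimension into a plain Python int, so the tracer records groups as a fixed constant instead of a value computed from the input. The trace is then only guaranteed correct for the batch size used while tracing:

# traced with batch size 1, so groups=1 is baked into the graph;
# feeding a different batch size will not recompute groups
z2 = torch.rand(4, 3, 127, 127)   # illustrative larger batch
x2 = torch.rand(4, 3, 256, 256)
out = traced_module_rpn(z2, x2)   # may be silently wrong

If the batch size never changes, the warning can be ignored; otherwise re-trace per batch size, or script the function instead of tracing it.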
st82239
tlm:

traced_module_rpn = torch.jit.trace(model.rpn_model, [Variable(z), Variable(x)])

I think you can try a tuple type instead of the list.
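For example (a sketch; just wrapping the example inputs in a tuple):

z = torch.rand(1, 3, 127, 127)
x = torch.rand(1, 3, 256, 256)
traced_module_rpn = torch.jit.trace(model.rpn_model, (z, x))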
st82240
Hello, I would like to see the trace graphically through torch.jit.trace:

traced_net = torch.jit.trace(net, inputs, optimize=True, check_trace=True, check_inputs=None, check_tolerance=1e-05)
print(traced_net)
make_dot_from_trace(traced_net)  # <-- this line fails

AttributeError: ‘TopLevelTracedModule’ object has no attribute ‘set_graph’

If there is no graph, then what?
st82241
Any insights on how to solve this? I have a similar problem: trying to trace a model with an RPN head that takes 2 inputs.
st82242
Hi! I am trying to run a Python script that uses torch (Python 3.6.8) and I get the following error:

symbol lookup error: /home/workspace/Ubuntu1804/libtorch_python.so: undefined symbol: PyThread_tss_alloc

I get the same error when I import torch in IPython 3.6 in my terminal, and I haven’t been able to find many references to this specific type of symbol lookup error.
st82243
This post described the same undefined symbol. Could you try to create a new, clean conda environment and reinstall PyTorch?
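For example (the environment name is arbitrary):

conda create -n pytorch_clean python=3.6
conda activate pytorch_clean
conda install pytorch torchvision -c pytorch

As a possibly relevant detail: PyThread_tss_alloc was only added in CPython 3.7, so a PyTorch build targeting Python 3.7 being loaded by a 3.6 interpreter would produce exactly this kind of undefined-symbol error.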
st82244
Dear community, could you tell me what is going wrong if my reward is getting worse over time and the loss is increasing? I am using a simple dueling network architecture with linear layers.
[attached figure: grafik.png]
st82245
I’ve got a machine with a Tesla C2075 and a Quadro 5000. Both of these cards have compute capability 2.0, and from this post it appears I need to be using CUDA 8.0 or lower. Additionally, my graphics cards do not support cuDNN. When I follow the instructions on the “Get Started Locally” page, it installs perfectly and even returns True for torch.cuda.is_available(), but alas it errors out saying my GPU is too old to be supported when I try to move something into GPU memory. My guess is it has something to do with the fact that it installs this version of PyTorch, which references cuDNN: py3.7_cuda8.0.61_cudnn7.1.2_2.
My next idea was to try building from source, as it appears from this post that I don’t strictly need cuDNN support to use CUDA. However, when I try to build PyTorch from source with the steps specified in the getting started page, which specifically indicate CUDA 8.0, CMake encounters an error when it detects CUDA 8.0, saying “PyTorch requires CUDA 9.0 and above.” Unless I’m missing something, it seems like that documentation is not in sync with the latest release.
I think it’d be worth my while to get this working if it is feasible to do so, but now that a custom build has refused to compile, I’m not sure what to try next.
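One hedged idea, sketched from memory rather than tested: since current master requires CUDA 9.0+, you could check out an older release tag that still accepted CUDA 8.0 and build that, restricting the build to your card’s architecture. The tag and flag names below are assumptions to verify against that tag’s setup.py:

git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git checkout v1.0.1   # illustrative: a release from the CUDA 8.0 era
git submodule sync && git submodule update --init --recursive
TORCH_CUDA_ARCH_LIST="2.0" NO_CUDNN=1 python setup.py install

Whether any tag still compiles kernels for compute capability 2.0 would need to be verified; Fermi support was dropped from the prebuilt binaries early on.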
st82246
I am using a Mac and Jupyter Notebook (Anaconda). Following the PyTorch website, I’ve tried to install PyTorch with:

conda install pytorch torchvision -c pytorch

and

pip3 install torch torchvision

When I import torch in a Jupyter notebook, the error shows:

ModuleNotFoundError: No module named 'torch'

However, in the terminal, if I enter the python3 environment with $ python3 and then import torch, there is no error. Then I tried conda list pytorch in the terminal and found:

# Name    Version    Build      Channel
pytorch   1.2.0      py2.7_0    pytorch

So I finally noticed that in the Jupyter notebook, if I use Python 2 and import torch, there is no error. But I don’t know how to install PyTorch in the Python 3 environment. I’ve already followed the PyTorch website (pip3 install torch torchvision), but it still does not work.
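The usual cause is that the notebook kernel points at a different interpreter than the one PyTorch was installed into. One common fix (names illustrative) is to create a Python 3 environment, install PyTorch there, and register it as a Jupyter kernel:

conda create -n py3torch python=3.7
conda activate py3torch
conda install pytorch torchvision -c pytorch
pip install ipykernel
python -m ipykernel install --user --name py3torch --display-name "Python 3 (py3torch)"

Then select that kernel from the notebook’s Kernel menu.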
st82247
Hello, I am trying to train a production network where the agent has to decide between 3 actions:
0: continue production mode
1: exchange filter
2: harvest production

I used a linear network to process the state signal, consisting of:
x: product mass, from 0 to 4
y: side product (currently not important), from 1 to 3
z: filter level, from 1 to 0.2
d: time after production start, bounded to 9 days
t: overall time, max 183 days

So basically you have to continue production for some days until you reach the perfect harvest point, where you get the reward for the accumulated product mass. The starting state is (0, 1, 1, 0, 0). After each action there is a state transition. When you harvest, you come back to the initial state (0, 1, z, 0, t) and need to start a new production. When you harvest, you get a reward for the product and you have to use your filter, which results in a decay. When you continue production, you can grow your product for one day. If the filter is below the 0.2 threshold, you should exchange the filter, which resets z to 1.

Should I just use the vector input? Or would a convolution make more sense? Why?

I am trying to train the perfect production policy over the time horizon of 183 days. I tried PPO and Rainbow reinforcement learning implementations, using a linear input layer matching the state vector shape. While PPO learns to harvest at the best time, it did not manage to learn to exchange the filters efficiently. The Rainbow implementation finds a local optimum after some time, which is way below the baseline. My Rainbow implementation gets stuck in a local minimum, and I am not sure if it will escape it. There should be reward in the range of 4.0e9.
[attached figure: grafik.png]

Any suggestions how to tackle the problem? I would appreciate any help.
st82248
Hi, I’m not sure if I should use InstanceNorm1d or BatchNorm1d in my network, and I’d be grateful for some help.
I have an output x of shape (N, L), where N is the number of elements in the batch and L is the number of activations. I’d like to perform normalization for each l in L, where the statistics are computed across x[:, l] and there are separate parameters gamma and beta for each l.
Based on the docs it seems to me that both of the following layers will achieve the desired effect:

torch.nn.BatchNorm1d(L, affine=True)
torch.nn.InstanceNorm1d(L, affine=True)

and there would only be a difference if I had an output of shape (N, C, L). Is this correct? Thanks!
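A quick empirical check (my own sketch): for a 2D input, BatchNorm1d computes statistics over the batch dimension separately per feature, which matches the description above, while InstanceNorm1d expects (N, C, L) and normalizes each sample over L, which is a different operation:

import torch
import torch.nn as nn

N, L = 8, 16
x = torch.randn(N, L)

bn = nn.BatchNorm1d(L, affine=True)
out_bn = bn(x)                             # stats over dim 0, separately per l

inorm = nn.InstanceNorm1d(1, affine=True)
out_in = inorm(x.unsqueeze(1)).squeeze(1)  # stats over L, separately per sample
print(torch.allclose(out_bn, out_in))      # generally False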
st82249
I did the residual connection for max_pool, but I don’t understand how to make the alignment between the layers.
[attached figure: resid.png]
st82250
Is there an efficient way to apply a function such as torch.inverse to a tensor of size (n, m, m), where the function is applied to each of the (m, m) matrices? It seems that this does the job:

def apply(func, M):
    tList = [func(m) for m in torch.unbind(M, dim=0)]
    res = torch.stack(tList, dim=0)
    return res

apply(torch.inverse, torch.randn(100, 200, 200))

but I am wondering if there is a more efficient approach. TensorFlow functions seem to generically achieve this, as explained here: https://www.tensorflow.org/api_docs/python/tf/matrix_inverse, but I am not sure whether their method uses a for loop or parallelizes the process. It would be interesting to see a benchmark between the approach used in PyTorch and the one in TensorFlow. I will put the results of the benchmark later today.
st82251
We don’t support batch inverse right now; looks like TF does. I wonder if they do anything more sophisticated than a for-loop internally.
st82252
Do tensor comprehensions allow us to apply a function across an axis much faster?
st82253
I was also using unbind and stack as the equivalent of apply-along-axis in numpy, but the biggest problem was that it doubled the processing time. The only way around this problem is to somehow express the function as a matrix operation. Luckily, I was able to replace the axis operation with a series of matrix multiplications.
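For reference, more recent PyTorch releases support batched inverse natively, so the unbind/stack loop is no longer needed once you are on such a version:

M = torch.randn(100, 200, 200)
M_inv = torch.inverse(M)   # inverts each (200, 200) matrix in the batch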
st82254
If we consider the architecture of the transformer, we see two inputs: one input (src) for the encoder, another (tgt) for the decoder. In my network, a sequence of numbers is sent to the encoder input (like a sentence of words). I have nothing to feed to the decoder. At the output of the decoder, I should have three neurons. As a target, I tell the network which neuron gives the correct answer (as in softmax classification tasks). I can’t figure out what to feed to the decoder input. Can I submit torch.zeros(1, batch, d_model) there? Help me solve my problem.
st82255
Anyone who has worked with the nn.Transformer module, please help me figure this out. I am ready for a paid consultation. My model is not learning; something in my code is not correct.
st82256
I’m not sure I understand the question completely. Is it not possible to feed the output of the encoder into the decoder?
st82257
No. If you look at the documentation on nn.Transformer, you will see that it accepts two required parameters, src and tgt: src goes to the encoder, tgt to the decoder. Their dimensions are (S, N, E) and (T, N, E). In my example, src is available. I don’t have a tgt, but since this parameter is required, I feed tg = torch.zeros(1, batch, 128).to(device) to the decoder as tgt. At the output, I have a softmax over 3 classes. My network cannot learn even the simplest example; I have an error somewhere.

import numpy as np
import torch
import torch.nn as nn

wn1 = 160
batch = 30
epochs = 300
learning_rate = 0.00005


class Trans(nn.Module):
    def __init__(self, d_model=128, nhead=8, num_encoder_layers=4,
                 num_decoder_layers=4, dim_feedforward=512):
        super().__init__()
        self.fc1 = nn.Linear(9, 128)
        self.tr = nn.Transformer(d_model, nhead, num_encoder_layers,
                                 num_decoder_layers, dim_feedforward)
        self.fc = nn.Linear(d_model, 3)

    def forward(self, src, tgt):
        out = self.fc1(src)
        out = self.tr(out, tgt)
        out = out.reshape(out.size(1), -1)
        out_nograd = self.fc(out)
        return out_nograd


device = torch.device("cuda:0")
net = Trans()
net.to(device)
for p in net.parameters():
    if p.dim() > 1:
        nn.init.xavier_uniform_(p)

tg = torch.zeros(1, batch, 128).to(device)  # tgt: (T, N, E)
# net.load_state_dict(torch.load('save_cuda_2year_trans_v2_2.pth'))
criterion = nn.CrossEntropyLoss(reduction='sum')
optimizer = torch.optim.AdamW(net.parameters(), lr=learning_rate)

st_s = np.interp(np.loadtxt('src.txt', delimiter=';'), [0, 10], [-1, 1])
st_t = np.loadtxt('target.txt')
len_st = len(st_t)
b = (len_st - wn1) // batch
len_batch = b * batch + wn1

for epoch in range(epochs):
    for wn_start in range(0, len_batch - wn1, batch):
        wn_tick = wn_start + wn1
        wn_all = []
        los_l = []
        for b_iter in range(batch):
            wn_all.append(st_s[wn_start + b_iter:wn_tick + b_iter, :])
            los_l.append(st_t[wn_tick + b_iter])
        los_l = np.array(los_l, dtype=np.int)
        los_l = torch.from_numpy(los_l)
        los_l = los_l.type(torch.long).to(device)
        wn_all = np.array(wn_all, dtype=np.float32)
        wn_all = torch.from_numpy(wn_all)
        wn_all = torch.transpose(wn_all, dim0=1, dim1=0).to(device)
        outputs = net(wn_all, tg)
        loss1 = criterion(outputs, los_l)
        optimizer.zero_grad()
        loss1.backward()
        optimizer.step()

I am missing '<sos>' and '<eos>'.
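One hedged side note (a sketch, not a confirmed fix for this code): for a pure classification task the decoder is not strictly needed; an encoder-only stack with pooling on top sidesteps the tgt question entirely:

# inside the model:
self.encoder_layer = nn.TransformerEncoderLayer(d_model=128, nhead=8, dim_feedforward=512)
self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=4)

# in forward:
out = self.encoder(self.fc1(src))    # (S, N, 128)
return self.fc(out.mean(dim=0))      # pool over the sequence -> (N, 3) logits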
st82258
I have tif images with a data type of unsigned 16-bit int. PIL can read these images without problems and with the correct type. But when my custom dataset reader reads such a tif image and then tries to convert it to a tensor, for the usual normalization and then usage in the network, things go wrong. I read the image, which has values from zero to the max value of uint16, and then use the standard transformation:

image_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((value,), (value,))
])

This code results in errors. The ToTensor transform converts images with values between [0, 255] in certain formats to a tensor with values between [0, 1] (thus a float tensor), but for other images it keeps the same data type and just converts the values. In this case it takes the data type closest to an unsigned int16, which is a signed int16… This results in overflows and incorrect data.

So the question is how to do this in an easy (using torch) and fast way? The way I do it is to first convert to a numpy array, then to signed float32, then to a float tensor that can be used as normal:

image_fp = open("filepath", "rb")
image = PIL.Image.open(image_fp)
im_arr = np.array(image)
im_arr32 = im_arr.astype(np.float32)
im_tensor = torch.tensor(im_arr32)
im_tensor = im_tensor.unsqueeze(0)

And this results in this ugly lambda:

image_transform = transforms.Compose([
    transforms.Lambda(lambda image: torch.tensor(numpy.array(image).astype(numpy.float32)).unsqueeze(0)),
    transforms.Normalize((value,), (value,))
])

(All these conversions will impact the loading time with large datasets.)

[edit] The difference that can save a lot of time is to use

transforms.Lambda(lambda image: torch.from_numpy(numpy.array(image).astype(numpy.float32)).unsqueeze(0))

instead of the torch.tensor function.
st82259
Your approach seems valid. You could maybe save an unnecessary copy by using torch.from_numpy instead of creating a new tensor.
st82260
Yes, I made a little script with a for loop where I read a tif image and then did the conversion. This was repeated 100,000 times (with nothing else in the loop) and timed on Linux. I saved about 30 seconds by using from_numpy (torch.tensor ~2 min, torch.from_numpy ~1.5 min for 100k repetitions).
st82261
I am training a simple model with three input features and one output (both inputs and outputs are numerical). The printed outputs are sometimes nan, sometimes [0.] for every training example. I adjusted the number of layers and nodes, but it didn’t help. I tried the model with another random dataset and it gave some reasonable outputs. So maybe the data is not imported correctly?

Dataset:

import torch
import torch.nn as nn
import torch.nn.functional as F
import pandas as pd
import numpy as np
from torch.utils.data import DataLoader, Dataset
from torch.autograd import Variable
import torch.optim as optim


class DatasetCDFarm(Dataset):
    def __init__(self, file_path):
        df = pd.read_excel(file_path, header=0)
        df_array = df.to_numpy()  # transform df to numpy array
        self.len = df_array.shape[0]
        self.x = torch.from_numpy(df_array[:, 0:3])  # prices of barley, rapeseed, wheat
        self.y = torch.from_numpy(df_array[:, [3]])  # profit

    def __getitem__(self, index):
        return self.x[index], self.y[index]

    def __len__(self):
        return self.len


train_dataset = DatasetCDFarm('nn_CDFarm_torch_profit_train.xlsx')
train_loader = DataLoader(dataset=train_dataset, batch_size=16, shuffle=True)


class Net(nn.Module):
    def __init__(self, input_size, hidden1_size, hidden2_size, output_size):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden1_size)    # first layer
        self.relu1 = nn.ReLU()
        self.fc2 = nn.Linear(hidden1_size, hidden2_size)  # second layer
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden2_size, output_size)   # output layer
        self.relu3 = nn.ReLU()

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu1(out)
        out = self.fc2(out)
        out = self.relu2(out)
        out = self.fc3(out)
        out = self.relu3(out)
        return out


Net = Net(3, 60, 60, 1)
print(Net)

criterion = nn.MSELoss()
optimizer = optim.SGD(Net.parameters(), lr=0.01, momentum=0.9)


def train(epoch):
    Net.train()
    for batch_id, data in enumerate(train_loader):
        inputs, labels = data
        inputs = Variable(inputs).float()
        labels = Variable(labels).float()
        print(epoch, batch_id, "inputs", inputs.data, "labels", labels.data)
        out = Net(inputs)
        print('out', type(out), out)
        print('labels', type(labels), labels)
        loss = criterion(out, labels)
        print(epoch, batch_id, loss.data)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


for epoch in range(1, 100):
    train(epoch)
st82262
Problem solved by using another optimizer, e.g. Adam. The reason was that I did not standardize the target variable, which caused exploding gradients; the SGD optimizer was not performing well on this problem. And of course, remove the last ReLU.
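A minimal sketch of those fixes, reusing the names from the code above (learning rate illustrative):

# Adam instead of SGD
optimizer = optim.Adam(Net.parameters(), lr=1e-3)

# standardize the target so its scale does not blow up the gradients
y_mean, y_std = train_dataset.y.mean(), train_dataset.y.std()
train_dataset.y = (train_dataset.y - y_mean) / y_std

# and in Net.forward, return self.fc3(out) directly: a trailing ReLU clamps
# the regression output to [0, inf) and can get stuck at exactly 0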
st82263
Let’s say I have a model that I only want to run until the 8-th layer. I register a forward hook on that layer to save the output, and all works well. The problem is that the network will still run until completion, which makes things slower than necessary. Is there a way to stopping it early? Or should I raise an exception in the forward hook and catch it outside, to get out of the network forward function?
st82264
Do you have any control over the architecture of the network? In the python script for that network I imagine you could just delete the lines after the layer whose output you want.
st82265
Sure, but I want to do this automatically, without manually deleting lines of code every time
st82266
You could add an argument to the forward method of your model.

def forward(self, input, num_layers):
    # automatically use only num_layers
st82267
Yes, sure. But then I would need to manually modify the forward method of every class I am interested in. What I want is to have a layer of abstraction; I don’t care what module it is or how it’s defined, I want to stop execution after a certain forward hook is triggered.
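A sketch of the exception approach floated in the first post (the layer name and the storage dict are illustrative):

class StopForward(Exception):
    pass

features = {}

def hook(module, inp, out):
    features['out'] = out
    raise StopForward()   # abort the rest of the forward pass

handle = model.layer8.register_forward_hook(hook)   # 'layer8' is hypothetical
try:
    model(x)
except StopForward:
    pass
feat = features['out']
handle.remove()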