st48768
Solved by ptrblck in post #2
st48769
snowe: But: not using the actual mean and standard deviation will not result in an overall mean of 0 and variance 1 as often desired, right?

Yes, but the proposed method might still work “good enough” for the tutorial.

snowe: Does anyone have a paper or book explaining this (I need a reference for my project)?

I think you should be able to find a proper explanation in Goodfellow et al., Deep Learning, and I’m sure Bishop, Pattern Recognition and Machine Learning explains it as well (which is also a general recommendation to take a look at).
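For reference, a minimal sketch of computing the actual per-channel statistics (the tensor shape and dummy data here are assumptions for illustration; for a real dataset you would stack or stream your training images):

import torch

# Per-channel mean/std over a stack of images (dummy data stands in for the dataset)
images = torch.rand(100, 3, 32, 32)   # assumed shape: [N, C, H, W]
mean = images.mean(dim=(0, 2, 3))     # per-channel mean
std = images.std(dim=(0, 2, 3))       # per-channel std
print(mean, std)                      # values you would pass to transforms.Normalize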
st48770
I have a symmetric tensor of size [B, N, N, C] and I would like to eliminate the diagonal elements. What is the most efficient way to do this? My solution: use

idx1 = torch.triu_indices(N, N, 0)
idx2 = torch.triu_indices(N, N, 1)

then turn them into lists and subtract to get the diagonal indices, and use those to subtract the diagonal terms from the original tensor. But I’m wondering: would turning the indices into lists break backprop? Also, I’m not sure this is the most efficient way to do it.
st48771
Interesting, does it have to be in-place? How about creating an inverted identity matrix and then multiplying element-wise?

your_tensor *= (1 - torch.eye(N, N))

Also check out the fill_diagonal_ function, it might fit perfectly. Roy
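A minimal sketch of this idea for the original [B, N, N, C] shape (the sizes are assumptions for illustration). Broadcasting the mask keeps autograd intact, so there is no need to turn indices into lists:

import torch

B, N, C = 2, 4, 3
t = torch.randn(B, N, N, C, requires_grad=True)

mask = 1 - torch.eye(N)              # (N, N) with zeros on the diagonal
out = t * mask.view(1, N, N, 1)      # broadcast over batch and channel dims

out.sum().backward()                 # gradients flow through the multiplication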
st48772
I am working on a denoising autoencoder. I use the ConcatDataset below to concatenate noisy and original images. I am running into a problem where ConcatDataset(train_dataset_noisy, train_dataset_original) produces a tuple as a list. How can I index the tuple to extract the tensor?

class ConcatDataset(torch.utils.data.Dataset):
    def __init__(self, *datasets):
        self.datasets = datasets

    def __getitem__(self, i):
        return tuple(d[i] for d in self.datasets)

    def __len__(self):
        return min(len(d) for d in self.datasets)

Error:

AttributeError                            Traceback (most recent call last)
in ()
      6
      7 for epoch in range(1, max_epoch+1):
----> 8     train(epoch, device=device)
      9     test(epoch, device=device)

in train(epoch, device)
      6
      7     optimizer.zero_grad()
----> 8     images = images.to(device)
      9     output = AE(images)
     10     loss = loss_fn(output, images)  # Here is a typical loss function (mean squared error)

AttributeError: 'list' object has no attribute 'to'
st48773
Does your model accept multiple inputs? If images are separate inputs, do:

inputs = [x.to(device) for x in images]
outputs = AE(*inputs)

If images is a single input, i.e. a batch, use the collate_fn to process the list of samples to form a batch. Roy
st48774
Sorry for my lack of understanding. My model accepts a single input. Since my ConcatDataset returns a tuple containing the tensor and the label, I am still confused whether I need to make changes to the ConcatDataset or change my model to form a batch. I looked for the collate_fn function but couldn't find it (torch.utils.data).

class our_AE(nn.Module):
    def __init__(self):
        super(our_AE, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, 7)
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 7),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid()
        )

    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

AE = our_AE().to(device)
optimizer = optim.Adam(AE.parameters(), lr=1e-4)
loss_fn = nn.MSELoss(reduction='sum')

def train(epoch, device):
    AE.train()
    for batch_idx, (images, _) in enumerate(train_loader):
        optimizer.zero_grad()
        images = images.to(device)
        output = AE(images)
        loss = loss_fn(output, images)  # Here is a typical loss function (mean squared error)
        loss.backward()
        optimizer.step()
        if batch_idx % 10 == 0:  # We record our output every 10 batches
            train_losses.append(loss.item() / batch_size_train)  # item() is to get the value of the tensor directly
            train_counter.append((batch_idx * 64) + ((epoch - 1) * len(train_loader.dataset)))
        if batch_idx % 100 == 0:  # We visualize our output every 100 batches
            print(f'Epoch {epoch}: [{batch_idx * len(images)}/{len(train_loader.dataset)}] Loss: {loss.item() / batch_size_train}')

def test(epoch, device):
    AE.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for images, _ in test_loader:
            images = images.to(device)
            output = AE(images)
            test_loss += loss_fn(output, images).item()
    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)
    test_counter.append(len(train_loader.dataset) * epoch)
    print(f'Test result on epoch {epoch}: Avg loss is {test_loss}')

train_losses = []
train_counter = []
test_losses = []
test_counter = []
max_epoch = 3

for epoch in range(1, max_epoch+1):
    train(epoch, device=device)
    test(epoch, device=device)
st48775
Sorry, I’ll try to explain my previous response: you mentioned your error is AttributeError: 'list' object has no attribute 'to'. From what I see, the only place you have a (possibly problematic) to is in images = images.to(device). Can you describe images? It appears to be a list, which gives 2 possible options:

1. Does this list represent a batch, i.e. a list of tensor samples? If so, you need to turn this list into a tensor before calling images.to(device).
2. Does this list represent multiple inputs to the forward function, i.e. your forward function looks like forward(self, x1, x2, x3)? Then you need to do:

inputs = [x.to(device) for x in images]
outputs = AE(*inputs)

From the new code you posted, it seems like your model’s forward is forward(self, x), so no multiple inputs. Then I guess images is a list of samples (a mini-batch), and all you need to do is turn it into a tensor before calling to(device).

Side note 1: collate_fn is an argument of the DataLoader class. The doc says: collate_fn (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset. It is a function you provide to handle the list of samples and turn them into a tensor batch that your model can take as input.

Side note 2: the default collate_fn expects all the images in a batch to have the same size because it uses torch.stack() to pack the images. If the images provided by the Dataset have variable size, you have to provide your custom collate_fn.
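A minimal sketch of such a custom collate_fn (the function name and target size are assumptions for illustration), for a Dataset that yields (image, label) pairs of varying spatial size:

import torch
import torch.nn.functional as F

def my_collate(batch):
    # batch: list of (image, label) tuples as returned by the Dataset
    images = [F.interpolate(img.unsqueeze(0), size=(28, 28)).squeeze(0)
              for img, _ in batch]    # bring every image to a common size
    labels = torch.tensor([label for _, label in batch])
    return torch.stack(images), labels

# usage: DataLoader(dataset, batch_size=4, collate_fn=my_collate)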
st48776
Hi, I want to compute the pairwise cosine_similarity of a tensor with shape == (N, 200), and I will get the similarity matrix with shape == (N, N). Moreover, I want to compute it on the GPU. Could anyone give me some advice? I would really appreciate it!
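A minimal sketch of one common approach (the sizes are assumptions for illustration): L2-normalize the rows, then a single matrix product yields the full similarity matrix on the GPU:

import torch
import torch.nn.functional as F

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(1000, 200, device=device)
x_norm = F.normalize(x, p=2, dim=1)   # each row gets unit L2 norm
sim = x_norm @ x_norm.t()             # (N, N) pairwise cosine similarity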
st48777
The istft is not entirely the inverse operation of stft when center=False.

hl = 512
n_fft = 1024
dummy_spec = torch.stft(torch.randn(1, 1536), n_fft, hop_length=hl, center=False, onesided=True)
print(torch.istft(dummy_spec, n_fft, hop_length=hl, center=False, onesided=True).shape)

output >>> torch.Size([1, 1024])

The expected output shape should be (1, 1536). In librosa, the same operation yields the correct result:

dummy_lib_spec = librosa.stft(np.random.randn(1536), n_fft, hop_length=hl, center=False)
print(librosa.istft(dummy_lib_spec, hop_length=hl, center=False).shape)

output >>> (1536,)

I am using PyTorch 1.6.0.
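For comparison, a minimal sketch with the same calls but center=True, where the round trip does recover the original length:

import torch

hl = 512
n_fft = 1024
dummy_spec = torch.stft(torch.randn(1, 1536), n_fft, hop_length=hl, center=True, onesided=True)
print(torch.istft(dummy_spec, n_fft, hop_length=hl, center=True, onesided=True).shape)
# torch.Size([1, 1536])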
st48778
After git cloning the latest source code and running "python setup.py install", it says:

In file included from /home/liangstein/anaconda3/include/python3.6m/pyport.h:194:0,
                 from /home/liangstein/anaconda3/include/python3.6m/Python.h:53,
                 from /home/liangstein/src/pytorch/torch/csrc/python_headers.h:10,
                 from /home/liangstein/src/pytorch/torch/csrc/DataLoader.h:3,
                 from /home/liangstein/src/pytorch/torch/csrc/DataLoader.cpp:1:
/home/liangstein/src/pytorch/aten/src/ATen/core/aten_interned_strings.h:614:9: error: expected unqualified-id before 'sizeof'
_(aten, signbit) \

blablabla

[ 98%] Building CXX object test_api/CMakeFiles/test_api.dir/parallel.cpp.o
In file included from /home/liangstein/src/pytorch/third_party/googletest/googletest/include/gtest/gtest.h:59:0,
                 from /home/liangstein/src/pytorch/test/cpp/api/tensor.cpp:1:
/home/liangstein/src/pytorch/test/cpp/api/tensor.cpp: In function 'void test_TorchTensorCtorSingleDimFloatingType_expected_dtype(c10::ScalarType)':
/home/liangstein/src/pytorch/test/cpp/api/tensor.cpp:427:34: warning: 'bool at::Tensor::is_variable() const' is deprecated: Tensor.is_variable() is deprecated; everything is a variable now. (If you want to assert that variable has been appropriately handled already, use at::impl::variable_excluded_from_dispatch()) [-Wdeprecated-declarations]
ASSERT_TRUE(tensor.is_variable())

blablabla

gmake: *** [all] Error 2

Building wheel torch-1.8.0a0
-- Building version 1.8.0a0
cmake -DBUILD_PYTHON=True -DBUILD_TEST=True -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/home/liangstein/src/pytorch/torch -DCMAKE_PREFIX_PATH=/home/liangstein/SOTAML/lib/python3.6/site-packages -DCUDNN_INCLUDE_DIR=/home/liangstein/cudnn8_cuda10.2/include -DCUDNN_LIBRARY=/home/liangstein/cudnn8_cuda10.2/lib64 -DNUMPY_INCLUDE_DIR=/home/liangstein/SOTAML/lib/python3.6/site-packages/numpy-1.19.2-py3.6-linux-x86_64.egg/numpy/core/include -DPYTHON_EXECUTABLE=/home/liangstein/SOTAML/bin/python -DPYTHON_INCLUDE_DIR=/home/liangstein/anaconda3/include/python3.6m -DPYTHON_LIBRARY=/home/liangstein/anaconda3/lib/libpython3.6m.so.1.0 -DTORCH_BUILD_VERSION=1.8.0a0 -DUSE_NUMPY=True /home/liangstein/src/pytorch
cmake --build . --target install --config Release -- -j 8

Traceback (most recent call last):
  File "setup.py", line 724, in <module>
    build_deps()
  File "setup.py", line 317, in build_deps
    cmake=cmake)
  File "/home/liangstein/src/pytorch/tools/build_pytorch_libs.py", line 62, in build_caffe2
    cmake.build(my_env)
  File "/home/liangstein/src/pytorch/tools/setup_helpers/cmake.py", line 346, in build
    self.run(build_args, my_env)
  File "/home/liangstein/src/pytorch/tools/setup_helpers/cmake.py", line 141, in run
    check_call(command, cwd=self.build_dir, env=env)
  File "/home/liangstein/anaconda3/lib/python3.6/subprocess.py", line 291, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--target', 'install', '--config', 'Release', '--', '-j', '8']' returned non-zero exit status 2.

Environment: CentOS 7, gcc 5.5, cmake 3.15, python 3.6.5 (Anaconda 5.5), MKL 2019, CUDA 10.2, cuDNN 8, OpenMPI 1.10.7
st48779
Hi, I've run into a problem. I have searched posts of this kind and revised my code based on the suggestions, such as avoiding +=, adding .clone(), or using inplace=False. However, it doesn't work, and I don't know the cause. My code is as follows. The initial part is simple:

self.criterion = nn.CrossEntropyLoss(reduction='none')
self.optimizer = optim.SGD(target_model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)

First I want to get the output, label, loss, and last layer's gradient from a model called target_model:

def get_data(self, inputs, targets):
    outputs = self.target_model(inputs)
    losses = self.criterion(outputs, targets)
    gradient = []
    for loss in losses:
        loss.backward(retain_graph=True)
        gradient_list = reversed(list(self.target_model.named_parameters()))
        for name, parameter in gradient_list:
            if 'weight' in name:
                gradients = parameter.grad.clone()
                gradients = gradients.unsqueeze_(0)
                gradient.append(gradients.unsqueeze_(0))
                break
    gradient = torch.cat(gradient, dim=0)
    losses = losses.unsqueeze_(1)
    targets = targets.unsqueeze_(1).float()
    return outputs, gradient, losses, targets

And I use these to train another classifier:

def train(self):
    self.attack_model.train()
    train_loss = 0
    correct = 0
    total = 0
    for inputs, targets, members in self.train_loader:
        inputs, targets, members = inputs.to(self.device), targets.to(self.device), members.to(self.device)
        outputs, gradient, losses, targets = self.get_data(inputs, targets)
        results = self.model(outputs, losses, gradient, targets)
        with torch.autograd.set_detect_anomaly(True):
            loss_2 = self.criterion(results, members).mean()
            self.optimizer.zero_grad()
            loss_2.backward()
            self.optimizer.step()

My model has five parts: the first four each produce a result, and I concatenate those results together as the input to the fifth network to get the final result.

class model(nn.Module):
    def __init__(self):
        super(model, self).__init__()
        self.Output_Component = nn.Sequential(
            nn.Dropout(p=0.2),
            nn.Linear(100, 128),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(128, 64),
        )
        self.Label_Component = nn.Sequential(
            nn.Dropout(p=0.2),
            nn.Linear(1, 128),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(128, 64),
        )
        self.Loss_Component = nn.Sequential(
            nn.Dropout(p=0.2),
            nn.Linear(1, 128),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(128, 64),
        )
        self.Gradient_Component = nn.Sequential(
            nn.Conv2d(1, 3, 5),
            nn.ReLU(),
            nn.Flatten(),
            nn.Dropout(p=0.2),
            nn.Linear(3 * 96 * 4092, 128),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(128, 64),
        )
        self.Encoder_Component = nn.Sequential(
            nn.Dropout(p=0.2),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(256, 128),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Dropout(p=0.2),
            nn.Linear(64, 2),
        )

    def forward(self, a, b, c, d):
        Output_Component_result = self.Output_Component(a)
        Loss_Component_result = self.Loss_Component(b)
        Gradient_Component_result = self.Gradient_Component(c)
        Label_Component_result = self.Label_Component(d)
        final_inputs = torch.cat((Output_Component_result, Loss_Component_result, Gradient_Component_result, Label_Component_result), 1)
        final_result = self.Encoder_Component(final_inputs)
        return final_result
st48780
Bug: PyTorch’s DataLoader gives a “Broken pipe” error on the Linux platform (not Windows). Using num_workers=0 suppresses the error, but that is not a satisfying solution (more of a workaround) because it largely reduces the efficiency of the code. If it is not a bug, hopefully a guide on how to correct the following code can be given. Thanks.

To Reproduce

Steps to reproduce the behavior:

import os
import sys
import time
import glob
import numpy as np
import torch
import utils
import logging
import argparse
import torch.nn as nn
import torch.utils
import torch.nn.functional as F
import torchvision.datasets as dset
import torch.backends.cudnn as cudnn
import torchvision.transforms as transforms

def _data_transforms_cifar10():
    CIFAR_MEAN = [0.49139968, 0.48215827, 0.44653124]
    CIFAR_STD = [0.24703233, 0.24348505, 0.26158768]

    train_transform = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(CIFAR_MEAN, CIFAR_STD),
    ])

    valid_transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(CIFAR_MEAN, CIFAR_STD),
    ])
    return train_transform, valid_transform

def main():
    train_portion = 0.5
    train_transform, valid_transform = _data_transforms_cifar10()
    train_data = dset.CIFAR10(root='.', train=True, download=True, transform=train_transform)

    num_train = len(train_data)
    indices = list(range(num_train))
    split = int(np.floor(train_portion * num_train))

    train_queue = torch.utils.data.DataLoader(
        train_data, batch_size=64,
        sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[:split]),
        pin_memory=True, num_workers=2)

    valid_queue = torch.utils.data.DataLoader(
        train_data, batch_size=64,
        sampler=torch.utils.data.sampler.SubsetRandomSampler(indices[split:num_train]),
        pin_memory=True, num_workers=2)

    train(train_queue, valid_queue)

def train(train_queue, valid_queue):
    for step, (input, target) in enumerate(train_queue):
        input_search, target_search = next(iter(valid_queue))

if __name__ == '__main__':
    main()

Stack trace:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/multiprocessing/queues.py", line 240, in _feed
    send_bytes(obj)
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/multiprocessing/queues.py", line 240, in _feed
    send_bytes(obj)
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 200, in send_bytes
    self._send_bytes(m[offset:offset + size])
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 404, in _send_bytes
    self._send(header + buf)
  File "/usr/local/lib/python3.6/multiprocessing/connection.py", line 368, in _send
    n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe

Expected behavior: runs without any error.

Environment

Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with:

wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py

PyTorch version: 1.6.0+cu101
Is debug build: False
CUDA used to build PyTorch: 10.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 9.9 (stretch) (x86_64)
GCC version: (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
Clang version: Could not collect
CMake version: Could not collect
Python version: 3.6 (64-bit runtime)
Is CUDA available: True
CUDA runtime version: 10.1.243
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
GPU 4: Tesla V100-SXM2-32GB
GPU 5: Tesla V100-SXM2-32GB
GPU 6: Tesla V100-SXM2-32GB
GPU 7: Tesla V100-SXM2-32GB
Nvidia driver version: 418.87.00
cuDNN version: /usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.1
[pip3] torch==1.6.0+cu101
[pip3] torchvision==0.7.0+cu101
[conda] Could not collect
st48781
I don’t know why the error is raised, but it seems to be caused by recreating the iterator for valid_queue inside the training DataLoader loop in combination with pinned memory. If I initialize the iterator before entering the loop, the code works fine, as it also does if I disable pinned memory for valid_queue. CC @vincentqb have you seen this issue or a similar one before?
st48782
What do you mean by “initialize the iterator before entering the loop”? How should I write this code?
st48783
I think this worked for me as a workaround:

def train(train_queue, valid_queue):
    i = iter(valid_queue)
    for step, (input, target) in enumerate(train_queue):
        input_search, target_search = next(i)
st48784
I am following this tutorial and I have the same structure as in the tutorial. However, near the end, when I type the make command, nothing happens. My file example-app.cpp is the same as in the tutorial and so is the CMakeLists.txt. This is a picture of my terminal: [screenshot: terminal where make produces no output] Is there something to install? I have the compiler and everything, so I do not see what is missing in my installation. This is what I would expect to happen when I type "make", so I could execute the program: [screenshot: expected make output]
st48785
Solved by fgauthier in post #6
st48786
What is ls showing in this folder? There should be some generated cmake files as well as the Makefile, which should build the application via make or cmake --build . --config Release.
st48787
This is what's in my folder, but still, when I execute the command you just said, nothing happens… I do not think my traced_network should be the problem since I did not use it at this point… [screenshot: folder contents] If you need more information please tell me. My example-app.cpp is copy-pasted from the tutorial, as was my CMakeLists.txt. Was there any change to make in those files? Maybe that's my error?
st48788
Do you get any output if you run make? I rebuilt the tutorial yesterday and don’t know why you don’t get any output.
st48789
Hi, sorry for answering late, I’ve been pretty busy this week. Here is what happened when I call make: [screenshot: make output] I really don’t know what else to do… Would it help you if I show you my Makefile?
st48790
Well I started the project in another folder and it worked instantly. No clue why it would not work before. Anyway thanks for the help!
st48791
Warm hello to everyone! I am trying to implement semantic segmentation using ResNet+UNet, but for some reason my results are blurry. Any ideas on the source of the error? I have an original colored image which I split into different colors using K-Means, and then use these colors to make masks/channels.
st48792
Here’s the situation. I have a very large 2d tensor which is mostly sparse (like, <0.1% density); everything else is filled with nan. I want to calculate the minimum value across the first dimension of the tensor: essentially torch.min(t, dim=1). But doing this over the entire shape of the tensor seems extremely wasteful when you’re only really looking for the min of 2-4 values out of tens of thousands. I already need to grab the real elements into an array for separate processing, which flattens them into a 1d array, something like a = b[~torch.isnan(b)]. I can also calculate how many real entries were in each row, so I get another tensor which tells me how many elements I need to take the min of for each row: [3, 2, 4, 2, 4, 3, 2, …]. The expected value for all of these is roughly similar. I can convert this to index ranges by taking the cumulative sum [0, 3, 5, 9, 11, …], so the first row of the original tensor is now stored in elements 0:3 of the new tensor, the second row in 3:5, the third in 5:9, etc. After doing all of this, essentially what I’m looking for is some parallel version of torch.min where I feed in the ranges for each kernel to check through as a tensor. For example, min_fancy(a, [0, 3, 5, 9, 11, …]) would go through elements 0:3, find the min, and store it in row 0; go through 3:5 and store it in row 1; etc. I imagine this will involve writing a custom C++ extension, but I figured I should ask if there is a built-in method I am missing or if anyone has better ideas of how to do this, since I’m not the most familiar with writing PyTorch extensions.
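For what it’s worth, a minimal sketch of a built-in route (assumption: a recent PyTorch with scatter_reduce_, available from roughly 1.12/1.13 on): treat this as a segment-wise min over the flattened non-nan values, so the nan entries are never touched:

import torch

t = torch.full((5, 8), float('nan'))
t[0, 2], t[0, 5], t[1, 1], t[3, 0], t[3, 7] = 3., 1., 4., 1., 5.

mask = ~torch.isnan(t)
values = t[mask]                        # 1d array of real entries
rows = mask.nonzero(as_tuple=True)[0]   # row id of each entry

mins = torch.full((t.size(0),), float('inf'))
mins.scatter_reduce_(0, rows, values, reduce='amin', include_self=True)
print(mins)                             # rows with no entries stay at inf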
st48793
Demo:

a = torch.randn(10, 10)
b = a[1:, :]

a.shape
Out[45]: torch.Size([10, 10])
b.shape
Out[46]: torch.Size([9, 10])

id(a)
Out[47]: 139889952764416
id(a)
Out[48]: 139889952764416
id(b)
Out[49]: 139889948003072
id(b)
Out[50]: 139889948003072

id(a.storage())
Out[51]: 139889948211136
id(a.storage())
Out[52]: 139889948120832
id(b.storage())
Out[53]: 139889952743616
id(b.storage())
Out[54]: 139892027430848

Why is id(a.storage()) or id(b.storage()) different every time? And how can I tell if a and b here take up the same memory space?
st48794
Solved by SimonW in post #2
st48795
The Python storage object is constructed when calling .storage(), so the id is different each time. Test x.storage().data_ptr() instead.
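A minimal sketch of that check, continuing the demo above:

import torch

a = torch.randn(10, 10)
b = a[1:, :]
print(a.storage().data_ptr() == b.storage().data_ptr())  # True: shared storage
print(a.data_ptr() == b.data_ptr())                      # False: b starts one row in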
st48796
Hi guys, I am trying to learn PyTorch by using it for the Titanic Kaggle competition. I completed the intro CIFAR-10 tutorial and decided to try to build a simple fully connected neural net. The problem is that I keep running into this error: 'Assertion `cur_target >= 0 && cur_target < n_classes' failed.' I think I am loading the data correctly (I tried to apply the Data Loading and Processing tutorial) and I believe I have correctly set up the neural net. Of course, there's always the chance that I'm missing something completely obvious and it's a quick fix. I have attached my jupyter notebook. Btw, I am running this on macOS High Sierra with Python 3.6 and Anaconda, without CUDA. Thanks! Brian

import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

train_file = open("./data/train.csv", "r")
train = pd.read_csv(train_file)

def process(df):
    df_X = df.join(pd.get_dummies(train['Pclass'], prefix="class"))
    df_X = df_X.join(pd.get_dummies(train['Sex']))
    df_X = df_X.join(pd.get_dummies(train['Embarked'], prefix="port"))
    df_X = df_X.join(pd.get_dummies(train['SibSp'], prefix="sibsp"))
    df_X = df_X.join(pd.get_dummies(train['Parch'], prefix="parch"))
    df_Y = train['Survived']
    df_X.drop(['Survived', 'Name', 'PassengerId', 'Sex', 'Embarked', 'SibSp', 'Parch', 'Cabin', 'Ticket'], axis=1, inplace=True)
    return df_X, df_Y

train_X, train_Y = process(train)
print(train_X.shape)
print(train_Y.shape)

(891, 25)
(891,)

train_X.head()

   Pclass   Age     Fare  class_1  class_2  class_3  female  male  port_C  port_Q  ...  sibsp_4  sibsp_5  sibsp_8  parch_0  parch_1  parch_2  parch_3  parch_4  parch_5  parch_6
0       3  22.0   7.2500        0        0        1       0     1       0       0  ...        0        0        0        1        0        0        0        0        0        0
1       1  38.0  71.2833        1        0        0       1     0       1       0  ...        0        0        0        1        0        0        0        0        0        0
2       3  26.0   7.9250        0        0        1       1     0       0       0  ...        0        0        0        1        0        0        0        0        0        0
3       1  35.0  53.1000        1        0        0       1     0       0       0  ...        0        0        0        1        0        0        0        0        0        0
4       3  35.0   8.0500        0        0        1       0     1       0       0  ...        0        0        0        1        0        0        0        0        0        0

5 rows × 25 columns

train_Y.head()

0    0
1    1
2    1
3    1
4    0
Name: Survived, dtype: int64

import os
from torch.utils.data import Dataset, DataLoader

class TitanicDataset(Dataset):
    def __init__(self, X, Y):
        self.X = X
        self.Y = Y

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        X = self.X.iloc[idx].as_matrix().astype('double')
        X = X.reshape(-1, 25)
        Y = self.Y.iloc[idx].astype('double')
        return torch.from_numpy(X).double(), int(Y)

from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(25, 100)
        self.fc2 = nn.Linear(100, 100)
        self.fc3 = nn.Linear(100, 50)
        self.fc4 = nn.Linear(50, 20)
        self.fc5 = nn.Linear(20, 10)
        self.fc6 = nn.Linear(10, 1)

    def forward(self, x):
        x = self.fc1(x).clamp(min=0)
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        x = F.relu(self.fc5(x))
        x = self.fc6(x)
        return x

net = Net()

import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

dataset = TitanicDataset(train_X, train_Y)
dataloader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=4)

print(dataloader.dataset[0][0].shape)
print(dataloader.dataset[0][0].size())
print(type(dataloader.dataset[0][0]))
print(type(dataloader.dataset[0][1]))

torch.Size([1, 25])
torch.Size([1, 25])
<class 'torch.DoubleTensor'>
<class 'int'>

for epoch in range(2):  # loop over the dataset multiple times
    running_loss = 0.0
    for i, data in enumerate(dataloader, 0):
        # get the inputs
        inputs, labels = data
        # wrap them in Variable
        inputs, labels = Variable(inputs.float()), Variable(labels.float())
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        labels = labels.long()
        loss = criterion(outputs[:,0], labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.data[0]
        if i % 100 == 99:  # print every 100 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 100))
            running_loss = 0.0

print('Finished Training')

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-375-f4e44e3f5cbb> in <module>()
     15         outputs = net(inputs)
     16         labels = labels.long()
---> 17         loss = criterion(outputs[:,0], labels)
     18         loss.backward()
     19         optimizer.step()

/Users/brian/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

/Users/brian/anaconda/lib/python3.6/site-packages/torch/nn/modules/loss.py in forward(self, input, target)
    480         _assert_no_grad(target)
    481         return F.cross_entropy(input, target, self.weight, self.size_average,
--> 482                                self.ignore_index)
    483
    484

/Users/brian/anaconda/lib/python3.6/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index)
    744         True, the loss is averaged over non-ignored targets.
    745     """
--> 746     return nll_loss(log_softmax(input), target, weight, size_average, ignore_index)
    747
    748

/Users/brian/anaconda/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index)
    670     dim = input.dim()
    671     if dim == 2:
--> 672         return _functions.thnn.NLLLoss.apply(input, target, weight, size_average, ignore_index)
    673     elif dim == 4:
    674         return _functions.thnn.NLLLoss2d.apply(input, target, weight, size_average, ignore_index)

/Users/brian/anaconda/lib/python3.6/site-packages/torch/nn/_functions/thnn/auto.py in forward(ctx, input, target, *args)
     45         output = input.new(1)
     46         getattr(ctx._backend, update_output.name)(ctx._backend.library_state, input, target,
---> 47                                                   output, *ctx.additional_args)
     48         return output
     49

RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /Users/soumith/miniconda2/conda-bld/pytorch_1503975723910/work/torch/lib/THNN/generic/ClassNLLCriterion.c:62
st48797
I fixed it! Although it didn’t have anything to do with squeeze() or unsqueeze(), I still thank you @quoniammm because I was unaware of squeeze() and unsqueeze() and they seem very useful! Turns out it was a quick fix. I had to change self.fc6 = nn.Linear(10, 1) to self.fc6 = nn.Linear(10, 2) in my Net class.
st48798
I had the same error. The error is saying that the labels must be 0-indexed. So, for example, if you have 20 classes and the labels are 1-indexed, the 20th label would be 20, so the cur_target < n_classes assert would fail. If they are 0-indexed, the 20th label is 19, so the cur_target < n_classes assert passes.
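A minimal sketch of the corresponding fix (assuming the labels sit in an integer tensor): shift them into [0, n_classes) before the loss:

import torch

labels = torch.tensor([1, 5, 20])   # 1-indexed labels for 20 classes
labels = labels - 1                 # now in [0, 19], valid for nn.CrossEntropyLoss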
st48799
Thank you for this tip! I was having the exact same problem: I had classes [0…8] in my dataset, and in my network I forgot to pay attention to the number of unique classes (9 classes).
st48800
I had the same error and realized the output number was wrong as well. In the final layer, ensure your output matches the number of classifications your model will be making.
st48801
brianyu: ‘Assertion `cur_target >= 0 && cur_target < n_classes’

I also ran into this error when computing a loss on the GPU, but the error was "Unable to get repr for <class 'torch.Tensor'>". After I moved the prediction and the ground truth to the CPU and called F.cross_entropy again, I saw the informative error, "cur_target…".
st48802
I’m classifying music files into 10 classes and I labelled them 1, 2, 3, …, 9, 0. I used this dataset for softmax regression, where it worked perfectly for any number of classes, but when I used a neural network to classify into the first 3 classes it didn’t work and gave the same error. Any comments on why it didn’t give an error for softmax regression with 3 classes but did give an error for a neural network with 3-class classification? Thank you in advance!
st48803
Thank you, this was the answer I needed. I built a wrapper class for loading CIFAR-10 and CIFAR-100; with a look at the output layer I saw I was set up for CIFAR-10 but had loaded CIFAR-100.
st48804
Hi, I have the same problem. Look, this is my model:

VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace=True)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace=True)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace=True)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace=True)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace=True)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace=True)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace=True)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace=True)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace=True)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace=True)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace=True)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=64, bias=True)
    (1): ReLU()
    (2): Dropout(p=0.25, inplace=False)
    (3): Linear(in_features=64, out_features=32, bias=True)
    (4): ReLU()
    (5): Dropout(p=0.25, inplace=False)
    (6): Linear(in_features=32, out_features=6, bias=True)
  )
)

When I print(dataset.class_to_idx) I get {'0-9': 0, '10-19': 1, '20-29': 2, '30-39': 3, '40-49': 4, '50-inf': 5}
st48805
What kind of dataset are you using and what data does your target contain? The printed mapping looks a bit weird and I am not sure how you’ve created it.
st48806
I had this problem as I had classes 0,1,2,…,9 and 10, and my network had only 10 outputs instead of 11.
st48807
Here is a recreation of the problem. The code below works fine:

>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)

But this won't work:

>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(100)
>>> output = loss(input, target)

This is because target can be [0, 5, 99]. The target is screaming at you that your 3rd sample is the 100th class. But if you look at your input, there are only 5 classes, meaning the target values should be in the range [0, 4]. Cheers
st48808
I realize there are similar questions, particularly this one: Could someone explain batch_first=True in LSTM. However, my question pertains to a specific tutorial I found online: https://medium.com/dair-ai/building-rnns-is-fun-with-pytorch-and-google-colab-3903ea9a3a79 I do not understand why they are not using batch_first=True. [image: code snippet from the tutorial] I realized that the data being fed to the model is of the form [64, 28, 28], because they are working with MNIST data and they specified a batch size of 64:

- 64 is the batch size
- 28 (number of rows) is seq_len
- 28 (number of cols) is features

Could someone please explain why they are not using batch_first=True? Thank you very much for your help. Edit: removed a part of the question for simplicity.
st48809
Solved by ptrblck in post #2
st48810
Most likely for performance reasons. The input will be permuted in the forward method via:

# transforms X to dimensions: n_steps X batch_size X n_inputs
X = X.permute(1, 0, 2)
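For context, a minimal sketch of the two equivalent setups (shapes taken from the MNIST example above; the hidden size is an arbitrary assumption):

import torch
import torch.nn as nn

x = torch.randn(64, 28, 28)               # (batch, seq_len, features)

rnn_bf = nn.RNN(28, 50, batch_first=True)
out_bf, _ = rnn_bf(x)                     # consumes batch-first input directly

rnn = nn.RNN(28, 50)                      # default expects (seq_len, batch, features)
out, _ = rnn(x.permute(1, 0, 2))          # the permute done in the tutorial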
st48811
When I run my code, this message is displayed: How can I fix it? I’m writing a neural network with pytorch-lightning and dgl with multiple optimizers, and I’m training with ddp on 1 gpu. Traceback (most recent call last): File "main.py", line 50, in <module> trainer.fit(model, train_loader, val_loader) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 439, in fit results = self.accelerator_backend.train() File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 146, in train results = self.ddp_train(process_idx=self.task_idx, model=model) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 279, in ddp_train results = self.train_or_test() File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 66, in train_or_test results = self.trainer.train() File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 482, in train self.train_loop.run_training_epoch() File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 541, in run_training_epoch batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 678, in run_training_batch self.trainer.hiddens File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 760, in training_step_and_backward result = self.training_step(split_batch, batch_idx, opt_idx, hiddens) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 304, in training_step training_step_output = self.trainer.accelerator_backend.training_step(args) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 156, in training_step output = self.trainer.model(*args) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__ result = self.forward(*input, **kwargs) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/overrides/data_parallel.py", line 176, in forward output = self.module.training_step(*inputs[0], **kwargs[0]) File "/afs/ece.cmu.edu/usr/xujinl/dynamic_grpah_pooling_rl/gcn_w_dyn_pool.py", line 195, in training_step self.manual_backward(L_est, opts[1], retain_graph=True) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/core/lightning.py", line 1081, in manual_backward self.trainer.train_loop.backward(loss, optimizer, -1, *args, **kwargs) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/trainer/training_loop.py", line 781, in backward self.trainer.accelerator_backend.backward(result, optimizer, opt_idx, *args, **kwargs) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 98, in backward closure_loss.backward(*args, **kwargs) File 
"/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/tensor.py", line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/autograd/__init__.py", line 100, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: has_marked_unused_parameters_ INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1591914880026/work/torch/csrc/distributed/c10d/reducer.cpp:327, please report a bug to PyTorch. (mark_variable_ready at /opt/conda/conda-bld/pytorch_1591914880026/work/torch/csrc/distributed/c10d/reducer.cpp:327) frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x4e (0x7f27ea039b5e in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libc10.so) frame #1: c10d::Reducer::mark_variable_ready(c10d::Reducer::VariableIndex) + 0x9ba (0x7f2817a1b3aa in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #2: c10d::Reducer::autograd_hook(c10d::Reducer::VariableIndex) + 0x2d0 (0x7f2817a1b910 in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #3: <unknown function> + 0x8a395c (0x7f2817a1095c in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #4: torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 0x60d (0x7f281412d00d in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #5: torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&, bool) + 0x3d2 (0x7f281412eed2 in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #6: torch::autograd::Engine::thread_init(int) + 0x39 (0x7f2814127549 in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libtorch_cpu.so) frame #7: torch::autograd::python::PythonEngine::thread_init(int) + 0x38 (0x7f2817677638 in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/libtorch_python.so) frame #8: <unknown function> + 0xc819d (0x7f2819ed219d in /afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/torch/lib/../../../.././libstdc++.so.6) frame #9: <unknown function> + 0x7ea5 (0x7f2838eebea5 in /lib64/libpthread.so.0) frame #10: clone + 0x6d (0x7f2838c148cd in /lib64/libc.so.6) Exception ignored in: <function tqdm.__del__ at 0x7f27dcf30b90> Traceback (most recent call last): File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/tqdm/std.py", line 1122, in __del__ File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/tqdm/std.py", line 1335, in close File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/tqdm/std.py", line 1514, in display File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/tqdm/std.py", line 1125, in __repr__ File "/afs/ece.cmu.edu/usr/xujinl/anaconda3/envs/CSD/lib/python3.7/site-packages/tqdm/std.py", line 1475, in format_dict TypeError: cannot unpack non-iterable NoneType object
st48812
Solved by ptrblck in post #2
st48813
Are you seeing the same issue without using Lightning? Also, could you post an executable code snippet so that we could reproduce this issue?
st48814
I’ve confirmed it’s an error on pytorch-lightning side. Thank you for the suggestion!
st48815
Hi, if anyone has experience experimenting with hard negative mining in, say, object detection, I need some insight on its implementation: should it be applied to the background class only? Can it be applied to every object class? If yes, how do we decide the top-k threshold for every epoch?
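For reference, a minimal sketch of the usual pattern (assumptions: class 0 is the background, and an SSD-style 3:1 negative:positive ratio): keep all positives, plus only the k negatives with the highest loss:

import torch
import torch.nn.functional as F

def hard_negative_mine(logits, targets, neg_pos_ratio=3):
    losses = F.cross_entropy(logits, targets, reduction='none')  # per-sample loss
    pos_mask = targets > 0
    neg_losses = losses[~pos_mask]
    k = min(neg_pos_ratio * int(pos_mask.sum()), neg_losses.numel())
    hard_negatives, _ = neg_losses.topk(k)                       # hardest negatives only
    return losses[pos_mask].sum() + hard_negatives.sum()

loss = hard_negative_mine(torch.randn(100, 21), torch.randint(0, 21, (100,)))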
st48816
I am also getting this from a clean git clone. I am using macOS (Mojave 10.14.6) and I just updated Xcode to the latest version. I did a fresh git clone of the PyTorch repository and started following the README directions. This is what I got:

Undefined symbols for architecture x86_64:
  "__cvtu32_mask16", referenced from:
      _xnn_f32_clamp_ukernel__avx512f in libXNNPACK.a(avx512f.c.o)
      _xnn_f32_dwconv_ukernel_up16x25__avx512f in libXNNPACK.a(up16x25-avx512f.c.o)
      _xnn_f32_dwconv_ukernel_up16x4__avx512f in libXNNPACK.a(up16x4-avx512f.c.o)
      _xnn_f32_dwconv_ukernel_up16x9__avx512f in libXNNPACK.a(up16x9-avx512f.c.o)

Here’s the exact order of what I did following the fresh git clone:

conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi  # omitted `typing` because I’m on Python 3.7
git submodule sync
git submodule update --init --recursive
export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
USE_CUDA=0 MACOSX_DEPLOYMENT_TARGET=10.14 CC=clang CXX=clang++ python setup.py install

Clang:

clang --version
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin

GCC (same as clang):

gcc --version
Configured with: --prefix=/Library/Developer/CommandLineTools/usr --with-gxx-include-dir=/Library/Developer/CommandLineTools/SDKs/MacOSX10.14.sdk/usr/include/c++/4.2.1
Apple LLVM version 10.0.1 (clang-1001.0.46.4)
Target: x86_64-apple-darwin18.7.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
st48817
Thanks for the reply! That did not fix it. I stumbled upon this documentation from Doxygen, which I’m aware PyTorch uses per CONTRIBUTING.md. You can see the _cvtu32_mask16 function is specified there. This is definitely beyond my realm of expertise ><. I commented on an issue on the GitHub project but I wasn’t sure if that was the right place. It appears someone else is getting the same issue, and it happened very recently.
st48818
Could you share the link to the issue where you commented? I have the same issue when building on macOS 10.13, but it looks fine on macOS 10.15.
st48819
I believe I have figured out why. The documentation where you found _cvtu32_mask16 indicates that _cvtu32_mask16 is an llvm (clang) function, and the error message indicates it is XNNPACK that is linking this function. So I dove into the XNNPACK source code and found this line: https://github.com/google/XNNPACK/blob/master/src/xnnpack/intrinsics-polyfill.h#L36 When I check out pytorch tags/v1.5.0, the same line says Apple Clang pre-10, so the condition is (__apple_build_version__ < 10000000). In this case, XNNPACK does not define its own _cvtu32_mask16 implementation but defers it to clang, tries to link it, and finally fails because the clang installed on macOS 10.13/10.14 does not have _cvtu32_mask16 implemented in its libraries. That also explains why macOS 10.15 works: the clang version on macOS 10.15 is 11.
st48820
zw19906: _cvtu32_mask16

So can we compile clang 11 on macOS 10.13.6 and then use it for compilation? From my point of view, this could be a simple approach that avoids side effects. Any other suggestions? I really want to get this working on 10.13.6 with a portable GPU.
st48821
@zw19906 @Stevers sorry for the late response. I figured out how to build torch on macOS 10.13.6 + CUDA 10.1 (update 2) + cuDNN 7.6.5. I summarized it in pytorch/pytorch#46803 and google/XNNPACK#1081 for your reference. But, honestly speaking, I think XNNPACK might have more potential bugs in its code generation template. Anyway, it makes the torch build for AVX pass…
st48822
Well… I have 2 global problems; maybe I have other problems too, but I didn't find them and they're not as important (except optimization). I tried to train models like: base upscale (2x/4x) and remove JPG defects. (All details of the code are at the bottom.)

Problem 1: After I trained the model and tested it, the output looks normal in general, BUT it has artifacts on the edges of the image. Example (input/output/target, from tests): [images: input/output/target crops showing edge artifacts] These artifacts are small, ~5-20 px, and I could live with that, but I don't upscale the whole image (I can't do that with a big image like Full HD to 4K); I upscale parts of the image. So, in this situation, these small artifacts matter more. Example: [image: upscaled image assembled from parts, with visible seams]

Problem 2: I have artifacts that look like Gaussian noise. I think it is because my dataset is not the best; I made it myself, and it does not have many images with some colors (so artifacts occur on images that have seldom-seen colors). But the artifacts may come from something else, which is why I mention this. Now I have my dataset (100 000 images) + another dataset (350 000 images), but I don't always use them all, because it takes too long. Example (model output/target with Gaussian noise), after 100 000 images with 5 epochs (the target is bigger because I resize inputs for the model so that the maximum is an 800x800 image; otherwise I get an OOM error): [images: model output vs. target showing noise]

My question for both problems: Can I fix this, and how? Do I just need more training, or something else? Or do I have mistakes in the layers or in preparing my dataset? I can't check all of this myself, because the processing would take 30 days on my computer with my code. XD

IMPORTANT: If you saw such a discussion with the same question and a solution, you don't have to explain it to me, just give me the link and I'll check it. And if you see some mistakes with optimization (I know there are some), please point them out if it's not difficult for you. It's very important for me, because… come on, 30 DAYS…

My code

"ngf" I always set to 64. Model (example for 4x).
The model's layers always look like:

    self.g0 = nn.Sequential(
        # nn.Dropout2d(p=0.2),
        nn.Conv2d(3, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(True),
    )
    self.g1 = nn.Sequential(
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(True),
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(True)
    )
    self.main1 = nn.Sequential(
        nn.ConvTranspose2d(ngf * 4, ngf * 4, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(True),
    )
    self.g2 = nn.Sequential(
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(True),
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(True),
    )
    self.main2 = nn.Sequential(
        nn.ConvTranspose2d(ngf * 4, ngf, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(ngf),
        nn.ReLU(True),
        nn.Conv2d(ngf, 3, 3, stride=1, padding=1, bias=False),
        nn.Tanh()
    )

The forward function looks like this:

    def forward(self, X):
        X = self.g0(X)
        X = self.g1(X) + X
        X = self.main1(X)
        X = self.g2(X) + X
        X = self.main2(X)
        return X

And the training step:

    def training_step(self, batch, batch_idx):
        x, _ = batch
        x = x.view(self.hparams['batch_size'], x.size()[-3], x.size()[-2], x.size()[-1]).to(self.device)
        # x = F.interpolate(x, size=(int(y[0]), int(y[1])), mode='bicubic', align_corners=False)
        g = self.generator(F.interpolate(x, size=(x.size()[-2] // self.increase, x.size()[-1] // self.increase), mode='bicubic', align_corners=False))

        #########################____GRADS____############################
        self.generator.zero_grad()
        g_real_loss = criterion(g, F.interpolate(x, size=(x.size()[-2] // self.increase * self.increase, x.size()[-1] // self.increase * self.increase), mode='bicubic', align_corners=False))
        g_real_loss.backward(retain_graph=True)
        self.opt_g.step()
        ###################################################################

Example for "remove defects". The model's layers:

    self.ngf = ngf
    self.g0 = nn.Sequential(
        # nn.Dropout2d(p=0.2),
        nn.Conv2d(3, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(inplace=False),
    )
    self.g1 = nn.Sequential(
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(inplace=False),
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(inplace=False)
    )
    self.main1 = nn.Sequential(
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(inplace=False),
        # state size. (ngf*8) x 4 x 4
    )
    self.g2 = nn.Sequential(
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(inplace=False),
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(inplace=False),
    )
    self.main2 = nn.Sequential(
        nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
        nn.BatchNorm2d(ngf * 4),
        nn.ReLU(inplace=False),
        nn.Conv2d(ngf * 4, 3, 3, stride=1, padding=1, bias=False),
        nn.Tanh()
    )
And the training step. Here I have 3 versions of this step: one where I just train on the whole resized image, one where I cut the image and train on those parts, and one where I cut the image, put the parts through forward, but compute the loss from the criterion over all parts against the whole target image (I cut it twice: first to avoid the OOM error, and then to train by parts). I chose this way to reduce the resource consumption of the forward pass. I commented out parts of the old step versions:

    def training_step_crop(self, batch, batch_idx):
        x, y = batch
        big_res = max(round(x.size()[-2] / self.hparams['max_input'][0] + 0.5), round(x.size()[-1] / self.hparams['max_input'][1] + 0.5))
        for r0 in range(big_res):
            for r1 in range(big_res):
                big_tmp_x = torch.chunk(x, big_res, dim=-2)[r0]
                big_tmp_x = torch.chunk(big_tmp_x, big_res, dim=-1)[r1]
                big_tmp_y = torch.chunk(y, big_res, dim=-2)[r0]
                big_tmp_y = torch.chunk(big_tmp_y, big_res, dim=-1)[r1]
                res = max(round(big_tmp_x.size()[-2] / self.hparams['max_crop'][0] + 0.5), round(big_tmp_x.size()[-1] / self.hparams['max_crop'][1] + 0.5))
                outs = []
                for it0 in range(res):
                    out0 = []
                    for it1 in range(res):
                        tmp_x = torch.chunk(big_tmp_x, res, dim=-2)[it0]
                        tmp_x = torch.chunk(tmp_x, res, dim=-1)[it1]
                        # tmp_y = torch.chunk(y, res, dim=-2)[it0]
                        # tmp_y = torch.chunk(tmp_y, res, dim=-1)[it1]
                        # print('Memory:', get_gpu_memory_map())  # I checked memory...
                        tmp_x = tmp_x.view(1, 3, tmp_x.size()[-2], tmp_x.size()[-1]).to(self.device)
                        # tmp_y = tmp_y.view(1, 3, tmp_y.size()[-2], tmp_y.size()[-1]).to(self.device)
                        tmp_x = self.generator(tmp_x)
                        out0.append(tmp_x.cpu())
                        #########################__GENERATOR__#############################
                        # self.generator.zero_grad()
                        # loss = self.criterion(tmp_x, tmp_y)
                        # loss.backward(retain_graph=True)
                        # self.opt.step()
                        ###################################################################
                        del tmp_x
                        # del tmp_y
                    outs.append(torch.cat(out0, 3))
                    del out0
                outs = torch.cat(outs, 2)  # .clone()
                #####################################
                self.generator.zero_grad()
                loss = self.criterion(outs.cpu(), big_tmp_y)
                loss.backward()
                self.opt.step()
                #####################################
                del outs
                torch.cuda.empty_cache()
                # with torch.cuda.device('cuda'):
                #     torch.cuda.empty_cache()

My dataset (roughly the same for both models). RandomPilTransforms is my class of identical flips and deformations for a sequence of images (so here it's useless, but still…). I save the image to JPG with low quality and load it again to simulate JPG defects, because I didn't find a function that does this without saving. If you know such a function, I'll be glad to hear about it.
    class MyDataset(Dataset):
        def __init__(self, path, input_size, quality=50, max_len=False, save_path=r'data\cash'):
            RPTargs = {
                'perspective': {
                    'deformation': 0.5,
                    'chance': 0.5,
                    'resample': Image.BICUBIC,
                    'fill': 0,
                    'fillcolor': None
                },
                'flip_horizontal': 0.5,
                'flip_vertical': 0.5,
                'rotate_right': 0.5,
                'rotate_left': 0.5
            }
            self.transform = RandomPilTransforms(**RPTargs)
            self.path = path
            self.quality = quality
            self.save_path = save_path
            self.piltotensor = transforms.ToTensor()
            self.input_size = input_size
            self.names = []
            for dirpath, _, filenames in os.walk(path):
                for f in filenames:
                    self.names.append(os.path.abspath(os.path.join(dirpath, f)))
            random.shuffle(self.names, random.seed())
            print('')
            print('All dataset:', len(self.names))
            self.len = len(self.names)
            if max_len:
                self.len = min(max_len, self.len)
            print('Using dataset:', self.len)
            print('')

        def __getitem__(self, index):
            try:
                y = Image.open(self.names[index]).convert('RGB')
            except:
                del self.names[index]
                return self.__getitem__(index)
            if y.size[0] * y.size[1] > self.input_size[0] * self.input_size[1]:
                if y.size[0] > y.size[1]:
                    y = y.resize((self.input_size[0], self.input_size[0] * y.size[1] // y.size[0]))
                else:
                    y = y.resize((self.input_size[1] * y.size[0] // y.size[1], self.input_size[1]))
            y = self.transform([y])[0]
            x = self.noisy(y)
            # y = cv2.convertScaleAbs(np.asarray(y))
            x = self.piltotensor(x)
            y = self.piltotensor(y)
            return x, y

        def noisy(self, image):
            image.save(self.save_path + r'\cash_img.jpg', quality=self.quality)
            image = Image.open(self.save_path + r'\cash_img.jpg').convert('RGB')
            return image

        def __len__(self):
            return self.len
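On the “simulate JPG defects without saving” question: a minimal sketch of one way to do it, round-tripping the PIL image through an in-memory buffer (io.BytesIO acts as a file object for PIL's save/open; the function name is an assumption for illustration):

import io
from PIL import Image

def jpeg_degrade(image, quality=50):
    buf = io.BytesIO()
    image.save(buf, format='JPEG', quality=quality)   # encode to JPEG in memory
    buf.seek(0)
    return Image.open(buf).convert('RGB')             # decode the degraded image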
*Now I have my own dataset (100 000 images) plus another dataset (350 000 images), but I don't always use the second one, because training on it takes too long.* Example (model output/target with Gaussian noise), after 100 000 images and 5 epochs: *the target is bigger because I resize the inputs so the model receives at most an 800x800 image (otherwise I get an OOM error)* ![image_№image_№96-965159_wallpaper-anime-hot-anime-girl-overlord-voracity-by..jpg_2.png_1|690x386](upload://aJUJwPXClKuq00JjJqiB5yXUndG.png) ![96-965159_wallpaper-anime-hot-anime-girl-overlord-voracity-by.|690x388](upload://a4AbJCtP3BZpQfDtd9KipfFSJIE.jpeg) ![image_№tof3.png_1|500x500](upload://1fYEJwTWamdX1CFFYk41kT7nK6f.png) ![tof3|690x429](upload://4yAsO73ZULJPKZ7g02xOjUPcSnP.jpeg)

**My question for both problems:** Can I fix this, and how? Do I just need more training, or something else? Or do I have mistakes in my layers or in the way I prepare my dataset? I can't check it myself, because with my code on my computer one run over the full dataset takes 30 days. XD

**IMPORTANT:** If you have seen a discussion of the same question with a solution, you don't have to explain it to me; just give me the link and I'll check it. And if you see any optimization mistakes (I know there are some), please point them out if it's not difficult for you. It's very important for me, because... come on, 30 DAYS...

**My code**

I always set "ngf" to 64. Model (example for 4x upscaling); the model's layers always look like this:

self.g0 = nn.Sequential(
    #nn.Dropout2d(p=0.2),
    nn.Conv2d(3, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(True),
)
self.g1 = nn.Sequential(
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(True),
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(True)
)
self.main1 = nn.Sequential(
    nn.ConvTranspose2d(ngf * 4, ngf * 4, 4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(True),
)
self.g2 = nn.Sequential(
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),  # channels must match the preceding conv output
    nn.ReLU(True),
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(True),
)
self.main2 = nn.Sequential(
    nn.ConvTranspose2d(ngf * 4, ngf, 4, stride=2, padding=1, bias=False),
    nn.BatchNorm2d(ngf),
    nn.ReLU(True),
    nn.Conv2d(ngf, 3, 3, stride=1, padding=1, bias=False),
    nn.Tanh()
)

The forward function looks like this:

def forward(self, X):
    X = self.g0(X)
    X = self.g1(X) + X
    X = self.main1(X)
    X = self.g2(X) + X
    X = self.main2(X)
    return X

And the training step:

def training_step(self, batch, batch_idx):
    x, _ = batch
    x = x.view(self.hparams['batch_size'], x.size()[-3], x.size()[-2], x.size()[-1]).to(self.device)
    #x = F.interpolate(x, size=(int(y[0]), int(y[1])), mode='bicubic', align_corners=False)
    g = self.generator(F.interpolate(x, size=(x.size()[-2] // self.increase, x.size()[-1] // self.increase), mode='bicubic', align_corners=False))
    #########################____GRADS____############################
    self.generator.zero_grad()
    g_real_loss = criterion(g, F.interpolate(x, size=(x.size()[-2] // self.increase * self.increase, x.size()[-1] // self.increase * self.increase), mode='bicubic', align_corners=False))
    g_real_loss.backward(retain_graph=True)
    self.opt_g.step()
    ###################################################################

Example for "remove defects". Model's layers:

self.ngf = ngf
self.g0 = nn.Sequential(
    #nn.Dropout2d(p=0.2),
    nn.Conv2d(3, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(inplace=False),
)
self.g1 = nn.Sequential(
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(inplace=False),
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(inplace=False)
)
self.main1 = nn.Sequential(
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(inplace=False),
)
self.g2 = nn.Sequential(
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(inplace=False),
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(inplace=False),
)
self.main2 = nn.Sequential(
    nn.Conv2d(ngf * 4, ngf * 4, 3, stride=1, padding=1, bias=False),
    nn.BatchNorm2d(ngf * 4),
    nn.ReLU(inplace=False),
    nn.Conv2d(ngf * 4, 3, 3, stride=1, padding=1, bias=False),
    nn.Tanh()
)

And the training step. Here I have 3 versions of this step: one where I just train on the whole resized image; one where I cut the image and train on those parts separately; and one where I cut the image and put the parts through forward one by one, but compute the loss between all the reassembled parts and the whole target image (I cut twice: first to avoid the OOM error, and then to train by parts). I chose this approach to reduce the resources consumed by the forward pass. *I commented out the parts belonging to the old step versions.*

def training_step_crop(self, batch, batch_idx):
    x, y = batch
    big_res = max(round(x.size()[-2] / self.hparams['max_input'][0] + 0.5),
                  round(x.size()[-1] / self.hparams['max_input'][1] + 0.5))
    for r0 in range(big_res):
        for r1 in range(big_res):
            big_tmp_x = torch.chunk(x, big_res, dim=-2)[r0]
            big_tmp_x = torch.chunk(big_tmp_x, big_res, dim=-1)[r1]
            big_tmp_y = torch.chunk(y, big_res, dim=-2)[r0]
            big_tmp_y = torch.chunk(big_tmp_y, big_res, dim=-1)[r1]
            res = max(round(big_tmp_x.size()[-2] / self.hparams['max_crop'][0] + 0.5),
                      round(big_tmp_x.size()[-1] / self.hparams['max_crop'][1] + 0.5))
            outs = []
            for it0 in range(res):
                out0 = []
                for it1 in range(res):
                    tmp_x = torch.chunk(big_tmp_x, res, dim=-2)[it0]
                    tmp_x = torch.chunk(tmp_x, res, dim=-1)[it1]
                    #tmp_y = torch.chunk(y, res, dim=-2)[it0]
                    #tmp_y = torch.chunk(tmp_y, res, dim=-1)[it1]
                    #print('Memory:', get_gpu_memory_map())  # I checked memory...
                    tmp_x = tmp_x.view(1, 3, tmp_x.size()[-2], tmp_x.size()[-1]).to(self.device)
                    #tmp_y = tmp_y.view(1, 3, tmp_y.size()[-2], tmp_y.size()[-1]).to(self.device)
                    tmp_x = self.generator(tmp_x)
                    out0.append(tmp_x.cpu())
                    #########################__GENERATOR__#############################
                    #self.generator.zero_grad()
                    #loss = self.criterion(tmp_x, tmp_y)
                    #loss.backward(retain_graph=True)
                    #self.opt.step()
                    ###################################################################
                    del tmp_x
                    #del tmp_y
                outs.append(torch.cat(out0, 3))
                del out0
            outs = torch.cat(outs, 2)  #.clone()
            #####################################
            self.generator.zero_grad()
            loss = self.criterion(outs.cpu(), big_tmp_y)
            loss.backward()
            self.opt.step()
            #####################################
            del outs
            torch.cuda.empty_cache()
            #with torch.cuda.device('cuda'):
            #    torch.cuda.empty_cache()

My dataset (~the same for both models): "RandomPilTransforms" is my class that applies the same flips and deformations to a sequence of images (so here it's useless, but still...). I save the image as a low-quality JPG and load it again to simulate JPG compression defects, because I didn't find a function that does this without saving. If you know of such a function, I'll be glad to hear about it.
class MyDataset(Dataset):
    def __init__(self, path, input_size, quality=50, max_len=False, save_path=r'data\cash'):
        RPTargs = {
            'perspective': {
                'deformation': 0.5,
                'chance': 0.5,
                'resample': Image.BICUBIC,
                'fill': 0,
                'fillcolor': None
            },
            'flip_horizontal': 0.5,
            'flip_vertical': 0.5,
            'rotate_right': 0.5,
            'rotate_left': 0.5
        }
        self.transform = RandomPilTransforms(**RPTargs)
        self.path = path
        self.quality = quality
        self.save_path = save_path
        self.piltotensor = transforms.ToTensor()
        self.input_size = input_size
        self.names = []
        for dirpath, _, filenames in os.walk(path):
            for f in filenames:
                self.names.append(os.path.abspath(os.path.join(dirpath, f)))
        random.shuffle(self.names, random.seed())
        print('')
        print('All dataset:', len(self.names))
        self.len = len(self.names)
        if max_len:
            self.len = min(max_len, self.len)
        print('Using dataset:', self.len)
        print('')

    def __getitem__(self, index):
        try:
            y = Image.open(self.names[index]).convert('RGB')
        except Exception:
            del self.names[index]
            return self.__getitem__(index)
        if y.size[0] * y.size[1] > self.input_size[0] * self.input_size[1]:
            if y.size[0] > y.size[1]:
                y = y.resize((self.input_size[0], self.input_size[0] * y.size[1] // y.size[0]))
            else:
                y = y.resize((self.input_size[1] * y.size[0] // y.size[1], self.input_size[1]))
        y = self.transform([y])[0]
        x = self.noisy(y)
        #y = cv2.convertScaleAbs(np.asarray(y))
        x = self.piltotensor(x)
        y = self.piltotensor(y)
        return x, y

    def noisy(self, image):
        image.save(self.save_path + r'\cash_img.jpg', quality=self.quality)
        image = Image.open(self.save_path + r'\cash_img.jpg').convert('RGB')
        return image

    def __len__(self):
        return self.len
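For reference, the same JPG round-trip can be done entirely in memory with PIL and io.BytesIO, avoiding the disk write; a minimal sketch (the function name is a placeholder, not part of the original pipeline):

import io
from PIL import Image

def jpeg_degrade(image, quality=50):
    # encode to JPEG in an in-memory buffer, then decode again,
    # so the compression artifacts appear without touching the disk
    buf = io.BytesIO()
    image.save(buf, format='JPEG', quality=quality)
    buf.seek(0)
    return Image.open(buf).convert('RGB')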
st48823
Hi, I'm using pytorch with an AMD card and ROCm; I can train my model, but when I try to run detection with it I run into an out-of-memory error: RuntimeError: HIP out of memory. Tried to allocate 138.00 MiB (GPU 0; 7.98 GiB total capacity; 1.55 GiB already allocated; 6.41 GiB free; 1.57 GiB reserved in total by PyTorch) It seems to me, however, that there is memory available, so why does it fail to allocate it? Is it a ROCm bug? Thanks, Andrea
st48824
I rebooted the system and now it works; I guess some core component had crashed and the system was in an unclean state. Bye, Andrea
st48825
Hi there, I am new to PyTorch with very little understanding of programming classes. While everything has worked out so far, I don't understand the following problem: In order to allow my feed-forward network to process data with different input feature sizes, I was trying to implement an encoder that:

1. processes each feature of a tensor of shape [730, 10] separately, i.e. maps each [730, 1] slice with hidden size 64 to [730, 64],
2. concatenates all of them together to [730, 64, 10],
3. activates the result.

Because I want the number of input features to be variable, I then apply adaptive pooling with an arbitrary output size to this tensor. While the network seems to work at first glance, the loss that it outputs is: tensor(nan, grad_fn=) Am I missing something in the class implementation, such that I can't do proper backpropagation? I would really appreciate any help on this! The class I created looks like this:

class MLPmod(nn.Module):
    def __init__(self, hidden_features, dimensions, activation):
        super(MLPmod, self).__init__()
        self.hidden_features = hidden_features
        self.activation = activation()
        self.encoder = nn.Linear(1, hidden_features)
        self.avgpool = nn.AdaptiveAvgPool1d(hidden_features)
        self.classifier = self.mlp(dimensions, activation)

    def forward(self, x):
        x = self.encode(x)
        x = self.avgpool(x).view(x.shape[0],-1)
        x = self.classifier(x)
        return(x)

    def mlp(self, dimensions, activation):
        network = nn.Sequential()
        network.add_module(f"hidden0", nn.Linear(self.hidden_features*self.hidden_features, dimensions[0]))
        network.add_module(f'activation0', activation())
        for i in range(len(dimensions)-1):
            network.add_module(f'hidden{i+1}', nn.Linear(dimensions[i], dimensions[i+1]))
            if i < len(dimensions)-2:
                network.add_module(f'activation{i+1}', activation())
        return(network)

    def encode(self, x):
        x = x.unsqueeze(1)
        latent = torch.empty(x.shape[0], self.hidden_features, 1)
        for feature in range(x.shape[-1]):
            latent = torch.cat((latent, self.encoder(x[:,:,feature]).unsqueeze(2)),dim=2)
        latent = self.activation(latent)
        return(latent)
st48826
Solved by ptrblck in post #2 In these lines of code: latent = torch.empty(x.shape[0], self.hidden_features, 1) for feature in range(x.shape[-1]): latent = torch.cat((latent, self.encoder(x[:,:,feature]).unsqueeze(2)),dim=2) you are appending an empty and thus uninitialized tensor to itself. Since you are not initiali…
st48827
In these lines of code: latent = torch.empty(x.shape[0], self.hidden_features, 1) for feature in range(x.shape[-1]): latent = torch.cat((latent, self.encoder(x[:,:,feature]).unsqueeze(2)),dim=2) you are appending an empty and thus uninitialized tensor to itself. Since you are not initializing the values of the tensor, it might contain invalid values such as Infs/NaNs etc. I would recommend to append the output of the encoder to a list and use torch.stack afterwards.
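For example, a minimal sketch of that pattern, reusing the names from the posted encode method:

def encode(self, x):
    x = x.unsqueeze(1)
    outs = []
    for feature in range(x.shape[-1]):
        # each encoded feature has shape [batch, hidden_features]
        outs.append(self.encoder(x[:, :, feature]))
    # stack along a new last dim -> [batch, hidden_features, num_features]
    latent = torch.stack(outs, dim=2)
    return self.activation(latent)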
st48828
Hello, the following code ceases to be reproducible when the weights in cross entropy are non-integers. Here’s the example: import numpy as np from collections import Counter import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import Dataset, DataLoader h, w, in_ch, out_ch = 32, 32, 3, 5 class Dtst(Dataset): def __init__(self, N=20): self.X = [torch.randn([in_ch, h, w], dtype=torch.float32) for _ in range(N)] self.Y = [torch.randint(low=0, high=out_ch, size=(h,w), dtype=torch.int64) for _ in range(N)] def __getitem__(self, ix): return self.X[ix], self.Y[ix] def __len__(self): return len(self.Y) class Network(nn.Module): def __init__(self): super().__init__() self.layer1 = nn.Conv2d(in_channels=in_ch, out_channels=10, kernel_size=3, padding=1) self.drop = nn.Dropout2d(p=0.1) self.layer2 = nn.Conv2d(in_channels=10, out_channels=out_ch, kernel_size=3, padding=1) def forward(self, x): out = self.layer2(self.drop(self.layer1(x))) return out seed = 4 torch.backends.cudnn.enabled = True torch.backends.cudnn.benchmark = True torch.backends.cudnn.deterministic = True torch.manual_seed(seed) torch.cuda.manual_seed(seed) torch.cuda.manual_seed_all(seed) np.random.seed(seed) dtst = Dtst() model = Network() device = 'cuda' model.to(device) class_weights = ((torch.arange(out_ch)+1).type(torch.FloatTensor)**0.5).to(device) loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights) opt = torch.optim.Adam(model.parameters()) preds_dict = dict() for e in range(1500): dtldr = DataLoader(dtst, batch_size=4) for x,y in dtldr: preds = model(x.to(device)) loss = loss_fn(preds, y.to(device)) loss.backward() opt.step() preds_argmax = preds.argmax(dim=1).flatten() preds_dict.update(Counter(preds_argmax.tolist())) print(sorted(preds_dict.items(), key=lambda x: x[1])) print(model.layer1.weight.data.norm(2).item()) It’s a very simple network with a very basic Dataset, and a simple train loop. This code is not reproducible. But when I remove the (**0.5) part from the class_weights it becomes reproducible. I.e., if the class weight values are actual floats, not integers cast to floats, then the code is not reproducible. Also, the problem exists only on cuda. If the device is set to ‘cpu’, the code is reproducible again. I run this on Ubuntu 18. My environment is the following: pytorch 1.6.0 cudatoolkit 10.1.243 numpy 1.19.1
st48829
I've filed this same issue on GitHub as well; it seems to be getting more attention there. Here's the link: https://github.com/pytorch/pytorch/issues/46024
st48830
I’d like to augment the training and validation dataset that I currently have. I’m not sure whereabouts to put this code in the main code: transforms.Compose([ transforms.Resize((229,229)), transforms.RandomResizedCrop((229,229)), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=[unknown], std=[unknown]) ]) MAIN CODE: class STLData(Dataset): def __init__(self,trn_val_tst = 0, transform=None): data = np.load('hw3.npz') if trn_val_tst == 0: #trainloader self.images = data['arr_0'] self.labels = data['arr_1'] elif trn_val_tst == 1: #valloader self.images = data['arr_2'] self.labels = data['arr_3'] else: #testloader self.images = data['arr_4'] self.labels = data['arr_5'] self.images = np.float32(self.images)/1.0 self.transform = transform def __len__(self): return len(self.labels) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() sample = self.images[idx,:] labels = self.labels[idx] if self.transform: sample = self.transform(sample) return sample, labels train_set = STLData(trn_val_tst=0, transform=torchvision.transforms.ToTensor()) val_set = STLData(trn_val_tst=1, transform=torchvision.transforms.ToTensor()) test_set = STLData(trn_val_tst=2, transform=torchvision.transforms.ToTensor()) batch_size = 100 n_workers = multiprocessing.cpu_count() trainloader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=n_workers) valloader = torch.utils.data.DataLoader(val_set, batch_size=batch_size, shuffle=True, num_workers=n_workers) testloader = torch.utils.data.DataLoader(test_set, batch_size=batch_size, shuffle=True, num_workers=n_workers)
st48831
You could pass the transforms.Compose as the transform argument to the STLData, if you would like to use it for this dataset. Note that torchvision.transforms work on PIL.Images by default (in the nightly, more transformations can also be applied to tensors directly), so you might need to transform the numpy arrays to PIL.Images first.
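For instance, a sketch of how that could look, assuming the stored arrays are in HWC layout with values in [0, 255] (the posted dataset stores float32, hence the cast before building the PIL image):

import numpy as np
from PIL import Image
import torchvision.transforms as transforms

train_transform = transforms.Compose([
    # numpy HWC array -> PIL.Image (assumes values fit into uint8)
    transforms.Lambda(lambda arr: Image.fromarray(arr.astype(np.uint8))),
    transforms.Resize((229, 229)),
    transforms.RandomResizedCrop((229, 229)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    # transforms.Normalize(...) can be appended once mean/std are chosen
])

train_set = STLData(trn_val_tst=0, transform=train_transform)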
st48832
Good evening, I have been implementing some raycasting operations in numpy and thought of moving them to PyTorch to make use of GPU parallelization. However, I am struggling to use some functions, like the cross product, over lists/tensors of tensors. For example, I want to compute the cross product between 10 vectors and 6000 other vectors. With numpy I would broadcast them and get something along these lines:

pvec = np.cross(directions[None, :, :], v0v2[:, None, :])

With PyTorch this seems to be a problem, as cross requires same-size tensors, and broadcasting apparently is not available for this method. Any idea on how to do something similar efficiently? Also, what would be a good way to run some computations in parallel? For example, the same 10 operations over a lot of vectors. In CUDA I see how the kernel would work, executing the same code, but how to do it directly in PyTorch does not seem so clear. Is it even possible? Yours, Justin
st48833
You could manually broadcast the tensors as shown in this example: # numpy directions = np.random.randn(3, 3) v0v2 = np.random.randn(3, 3) pvec = np.cross(directions[None, :, :], v0v2[:, None, :]) # PyTorch with manual broadcasting d, v = torch.from_numpy(directions[None, :, :]), torch.from_numpy(v0v2[:, None, :]) d = d.expand(v.size(0), -1, -1) v = v.expand(-1, d.size(1), -1) p = torch.cross(d, v, dim=2) # Compare print(torch.allclose(torch.from_numpy(pvec), p))
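Alternatively, torch.broadcast_tensors can perform the expansion in one call; a short sketch of the same computation:

# broadcast [1, N, 3] against [M, 1, 3] -> two [M, N, 3] views
d, v = torch.broadcast_tensors(
    torch.from_numpy(directions).unsqueeze(0),
    torch.from_numpy(v0v2).unsqueeze(1),
)
p = torch.cross(d, v, dim=2)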
st48834
The following is a part of my code. It works well on a single GPU, but I need to use multiple GPUs, and I found that something goes wrong when using .cuda() inside forward with DataParallel, even if I use something like epsilon = self.normal.sample(self.mu.size()).cuda(self.mu.device()) (which still can't send the tensor to the right GPU) or use register_buffer (this only works in the original __init__). I seriously need your help!!!

class Gaussian(object):
    def __init__(self, mu, rho):
        super().__init__()
        self.mu = mu
        self.rho = rho
        self.normal = torch.distributions.Normal(0, 1)

    @property
    def sigma(self):
        return torch.log1p(torch.exp(self.rho))

    def sample(self):
        epsilon = self.normal.sample(self.mu.size()).cuda()  # This is where the error happens!
        return self.mu + self.sigma * epsilon


class SharableLinear(nn.Module):
    """Modified linear layer."""
    __constants__ = ['bias', 'in_features', 'out_features']

    def __init__(self, in_features, out_features, bias=True, ratio=0.5):
        super(SharableLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = Parameter(torch.Tensor(out_features, in_features), requires_grad=True)
        nn.init.normal_(self.weight, 0, 0.01)
        if bias:
            self.bias = Parameter(torch.Tensor(out_features), requires_grad=True)
            nn.init.constant_(self.bias, 0)
        else:
            self.register_parameter('bias', None)

        fan_in, _ = _calculate_fan_in_and_fan_out(self.weight)
        total_var = 2 / fan_in
        noise_var = total_var * ratio
        mu_var = total_var - noise_var

        noise_std, mu_std = math.sqrt(noise_var), math.sqrt(mu_var)
        rho_init = np.log(np.exp(noise_std) - 1)
        self.weight_rho = nn.Parameter(torch.Tensor(out_features, 1).uniform_(rho_init, rho_init))
        self.weight_gaussian = Gaussian(self.weight, self.weight_rho)

    def forward(self, input, sample=False):
        if sample:
            weight = self.weight_gaussian.sample()  # I have to resample the weight inside forward, which means .cuda() has to be used
        else:
            weight = self.weight
        return F.linear(input, weight, self.bias)
st48835
Hi, The DataParallel is splitting your model to run on multiple GPUs, so different copies of your model will be located on different GPUs. But when you do .cuda(), this is the same as .cuda(0), and so all the copies that don't live on GPU 0 will have problems, as you give them a Tensor on the wrong GPU. You can replace it with .to(self.mu.device) to be sure to always place it on the same device as the other Tensors for that copy.
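With that change, the sample method from the question becomes, for instance:

def sample(self):
    # create epsilon on whatever device this replica's mu lives on
    epsilon = self.normal.sample(self.mu.size()).to(self.mu.device)
    return self.mu + self.sigma * epsilon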
st48836
Hi, Many thanks for your reply! When I changed .cuda() to .cuda(self.mu.device) or .to(self.mu.device), it still raises RuntimeError: arguments are located on different GPUs. epsilon = self.normal.sample(self.mu.size()).to(self.mu.device) Here are some details.

File "/home/bzg/anaconda3/envs/torch1.2/lib/python3.7/site-packages/torch/_utils.py", line 369, in reraise
    raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 1 on device 1.
    return F.linear(input, weight, self.bias)
  File "/home/bzg/anaconda3/envs/torch1.2/lib/python3.7/site-packages/torch/nn/functional.py", line 1371, in linear
    output = input.matmul(weight.t())
RuntimeError: arguments are located on different GPUs at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/generic/THCTensorMathBlas.cu:260
st48837
My closest attempt is to make Gaussian inherit from nn.Module and turn sample into forward():

class SharableLinear(nn.Module):
    def forward(self, input, sample=False):
        if sample:
            weight = self.weight_gaussian.forward()
        else:
            weight = self.weight

class Gaussian(nn.Module):
    def __init__(self, mu, rho):
        super().__init__()
        self.mu = mu
        self.rho = rho
        self.normal = torch.distributions.Normal(0, 1)

    @property
    def sigma(self):
        return torch.log1p(torch.exp(self.rho))

    def forward(self):
        epsilon = self.normal.sample(self.mu.size()).cuda()
        return self.mu + 0.1 * self.sigma * epsilon

This time there is no error and the code can run, but a warning remains: (screenshot of the warning) This could still have a bad effect. Do you have any ideas?
st48838
If it were the loss generated on different GPUs, I could simply do loss.mean(), but here I have no idea how to handle this problem.
st48839
The warning seems to say that your forward returns a scalar, which cannot be concatenated directly, so each one is made into a 1D Tensor with 1 element and then concatenated. This is fine.
st48840
Thank you for your reply! The existence of this warning still worries me; maybe I'll just have to set it aside for now.
st48841
You can call .view(1) or .unsqueeze(0) on your return value from the forward to get something that is 1D and silence the warning.
st48842
Thanks again, but I need to return a 2D tensor, as self.mu is 2D and sigma is 1D. Besides, using self.sigma.expand(self.mu.size()) or sigma.unsqueeze(1) still doesn't fix the warning, and I don't even know which part of the code this warning refers to. The changed code still works well on a single GPU, so the problem must be in DataParallel; maybe I should learn more about its mechanism first. The only information I have is: (screenshot of the warning)
st48843
In a case where my data fits in memory as a numpy array, I noticed that batching the data through __getitem__ of the Dataset interface is much slower than indexing it manually with numpy. I am sure it's because the DataLoader builds batches sample by sample, calling __getitem__ to fetch each sample. Is there any workaround to build batches faster while still using the standard Dataset/DataLoader interface?
st48844
Hi @veda101, could you clarify a small detail? You mentioned that the data fits in memory, and so, do you read the entire data in __init__? Or do you read it lazily in __getitem__?
st48845
I can, for instance, load all the data into memory in the __init__ and then access each row of the dataset in __getitem__, but because __getitem__ fetches each row one by one, it is definitely slower than fetching with a numpy slice like data[0:batch_size].
st48846
You could use BatchSampler to pass a batch of indices to __getitem__ and create multiple samples in a single call, if that fits your use case.
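For illustration, a minimal sketch of that pattern (the dataset class, array shape, and batch size are assumed placeholders); with batch_size=None the DataLoader disables automatic batching, so each list of indices produced by the BatchSampler is passed straight to __getitem__:

import numpy as np
import torch
from torch.utils.data import BatchSampler, DataLoader, Dataset, RandomSampler

class SliceDataset(Dataset):
    def __init__(self, data):
        self.data = data  # the whole numpy array held in memory

    def __getitem__(self, indices):
        # indices is a list of ints, so the whole batch is one numpy fancy-index
        return torch.from_numpy(self.data[indices])

    def __len__(self):
        return len(self.data)

dataset = SliceDataset(np.random.randn(10000, 32).astype(np.float32))
sampler = BatchSampler(RandomSampler(dataset), batch_size=256, drop_last=False)
loader = DataLoader(dataset, sampler=sampler, batch_size=None)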
st48847
Hi all, I am working on semantic segmentation using the UNET architecture. I initially started off by trying to predict the 3 RGB channels of the target. I know that semantic segmentation expects each class channel to have binary values, but I gave it a shot anyway. The results I obtain are OK, as shown, but also blurry/fuzzy, and I would like to find the source of this error and improve on it. Is there a way to improve the prediction of the RGB channels without having to split the image into multiple colors? Note that the top picture shows the target, or label, that we want to predict, while the bottom shows the prediction output from the UNET architecture.
st48848
I was wondering how to do L1 weight regularization on the first Linear layer, for feature engineering. Out of curiosity, I want to see what an MLP thinks the top N features are. I read a post on this, and I'm probably mistaken, but the usual advice seems wrong: past answers recommend for W in model.parameters(), so in my case, where my model's first Linear layer is L1, it would be for W in model.L1.parameters(). But this includes the bias term! Most posts are guilty of this; however, I saw one that is in line with my expectation. So what's going on here, who is mistaken? I think that regularizing the bias probably isn't too bad: it will be tiny, and it doesn't matter much if normalization is applied directly afterwards.
st48849
See if this works for you (applying L1 regularization to layer L1):

for name, param in model.named_parameters():
    if 'L1' in name and 'weight' in name:
        L1_reg = L1_reg + torch.norm(param, 1)

This can also be modified for L2 regularization.
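The accumulated penalty is then added to the task loss before the backward pass; a minimal sketch, where l1_lambda, output, and target are assumed placeholders:

l1_lambda = 1e-4  # assumed regularization strength

# accumulate the L1 norm of the first layer's weights only (bias excluded)
L1_reg = torch.tensor(0.)
for name, param in model.named_parameters():
    if 'L1' in name and 'weight' in name:
        L1_reg = L1_reg + torch.norm(param, 1)

loss = criterion(output, target) + l1_lambda * L1_reg
loss.backward()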
st48850
Hi, I'm trying to implement a vanilla autoencoder for the STL10 image dataset, but I'm facing some issues. The output of the model looks rather blurry and doesn't really capture much of the original image. This could be due to a bad model size (approx. 3M parameters) or a poor architecture. What I do not understand is the 'blockiness' of the output (attached figure): it looks like the output image consists of 9 square segments with visible borders, and it happens for every output image. Any ideas where such behaviour is coming from? (screenshot of the blocky reconstructions next to the originals) I'm using BCELoss, the Adam optimizer, a batch size of 128, and the model architecture looks like this:

self.downscale_conv = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=0),
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=5, stride=2, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(),
    nn.Conv2d(128, 128, kernel_size=5, stride=2, padding=1),
    nn.BatchNorm2d(128),
    nn.ReLU(),
    nn.Conv2d(128, 64, kernel_size=5, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(),
)
self.linearEnc = nn.Sequential(
    nn.Linear(1024, 900),
    nn.LeakyReLU(),
)
self.linearDec = nn.Sequential(
    nn.Linear(900, 1024),
    nn.LeakyReLU(),
)
self.upscale_conv = nn.Sequential(
    nn.ConvTranspose2d(64, 128, kernel_size=3, stride=1, padding=0),
    nn.BatchNorm2d(128),
    nn.ReLU(),
    nn.ConvTranspose2d(128, 128, kernel_size=3, stride=2, padding=0),
    nn.BatchNorm2d(128),
    nn.ReLU(),
    nn.ConvTranspose2d(128, 128, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(128),
    nn.LeakyReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(),
    nn.ConvTranspose2d(64, 64, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(64),
    nn.LeakyReLU(),
    nn.Conv2d(64, 3, kernel_size=6, stride=1, padding=2),
    nn.Sigmoid(),
)
st48851
Hi @lauriat, I am getting similarly blurry results for segmentation even when using a more complex network. Is your prediction target only 3 channels (RGB)? I initially tried the same approach, where the 3 channels to predict were the original RGB channels, and I think this might be one of the main reasons why. The BCELoss loss function for segmentation expects each channel to be binary (either 0 or 1), where 1 marks the location of the specific class (or color, in your case) in the image; RGB images, however, contain fractional values between 0 and 1 at each pixel. I am now trying a different approach: breaking/splitting the image into multiple channels, where each channel represents a specific color, and using these as the prediction target. I have not yet obtained results but will notify you if I make some progress.
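A sketch of that per-color splitting, assuming the label image uses a small fixed palette and the palette colors match the stored pixel values exactly (names are placeholders):

import torch

def rgb_to_masks(target, palette):
    # target: [3, H, W] RGB label image; palette: [num_colors, 3] distinct colors
    masks = [(target == color.view(3, 1, 1)).all(dim=0).float() for color in palette]
    return torch.stack(masks)  # [num_colors, H, W], one binary channel per color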
st48852
Hello, I'm getting an error that I'm having trouble solving. It's a binary classification problem on tabular data. It's something to do with the dimension (I think) in the log_softmax. Below is the error.

Traceback (most recent call last):
  File "<ipython-input-4-626b771bc4cb>", line 39, in <module>
    outputs = model1(x)
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "<ipython-input-4-626b771bc4cb>", line 14, in forward
    x = F.log_softmax(x,dim=2)
  File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\functional.py", line 1591, in log_softmax
    ret = input.log_softmax(dim)
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)

Here is the model:

class model(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(D, 10)
        self.fc2 = nn.Linear(10, 5)
        self.fc3 = nn.Linear(5, 2)

    def forward(self, x_train):
        x = self.fc1(x_train)
        x = self.fc2(x)
        x = self.fc3(x)
        x = F.log_softmax(x, dim=2)
        return x

model1 = model()

# Loss and optimizer
criterion = nn.NLLLoss()
optimizer = torch.optim.Adam(model1.parameters())

Here is the training loop:

# Train the model
n_epochs = 1000

# Stuff to store
train_losses = np.zeros(n_epochs)
test_losses = np.zeros(n_epochs)

for it in range(n_epochs):
    # zero the parameter gradients
    optimizer.zero_grad()

    # Forward pass
    outputs = model1(x)
    loss = criterion(outputs, y)

    # Backward and optimize
    loss.backward()
    optimizer.step()

    # Get test loss
    outputs_test = model1(test_x)
    loss_test = criterion(outputs_test, test_y)

    # Save losses
    train_losses[it] = loss.item()
    test_losses[it] = loss_test.item()

    if (it + 1) % 50 == 0:
        print(f'Epoch {it+1}/{n_epochs}, Train Loss: {loss.item():.4f}, Test Loss: {loss_test.item():.4f}')

Thanks for any help.
st48853
Solved by ptrblck in post #4 Your target tensor might have an additional unnecessary dimension as [batch_size, 1], so remove dim1 via target = target.squeeze(1) if that’s the case.
st48854
X in the forward function of your network is a 2-D tensor of shape (batch_size, 2), right? So you want to call softmax along the class dimension, so pass dim=1 in your log_softmax. It's the index of the dimension along which you want softmax to operate, not the number of values in it.
st48855
Thank you. I now get this error after changing the dim: Traceback (most recent call last): File "C:\Users\JORDAN.HOWELL.GITDIR\Documents\GitHub\Inspection Model Contest\q_r_train_torch.py", line 155, in <module> loss = criterion(outputs, y) File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\modules\loss.py", line 211, in forward return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction) File "C:\Users\JORDAN.HOWELL.GITDIR\AppData\Local\Continuum\anaconda3\envs\torch_env\lib\site-packages\torch\nn\functional.py", line 2218, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) RuntimeError: 1D target tensor expected, multi-target not supported My y is a tensor of 1’s or 0’s
st48856
Your target tensor might have an additional unnecessary dimension as [batch_size, 1], so remove dim1 via target = target.squeeze(1) if that’s the case.
st48857
I have a tensor of shape (50,50,2) that is filled with the values of a grid in (x, y), like this:

r = 0.5
# Sample the 2D space
min = -10
max = 10
complexnum = 50j
X, Y = np.mgrid[min:max:complexnum, min:max:complexnum] * r
T = np.sin(3*X) * np.sin(3*Y)
XY = np.concatenate((X.reshape((X.shape[0], X.shape[0], 1)), Y.reshape((X.shape[0], X.shape[0], 1))), 2)
XY = torch.from_numpy(XY.astype(np.float32)).to(device)
T = T.reshape((T.shape[0], T.shape[0], 1))
T = torch.from_numpy(T.astype(np.float32)).to(device)

I created a neural network that takes XY as input and T as output; in effect it is learning a function f(x, y). Now I need a way to train that NN using batches. The total number of samples should be 2500 = 50 x 50, and I want to divide the XY and T tensors into batches. I have tried DataLoader and creating a Dataset class, with no luck. Is there a way I can achieve this? Are my tensor shapes correct? Sorry for this very basic question; I have been struggling to work with higher dimensions. Are the questions badly explained? I would have guessed someone would reply by now.
st48858
If the total number of samples should be 2500, you should use this shape in dim0 via: XY = XY.view(-1, 2) # should have shape [2500, 2] now T = T.view(-1, 1) # should have shape [2500, 1] now Afterwards you can pass these tensors to a TensorDataset and this dataset to a DataLoader. Let me know, if you get stuck.
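A minimal sketch of those two steps (the batch size is an arbitrary choice):

from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(XY.view(-1, 2), T.view(-1, 1))
loader = DataLoader(dataset, batch_size=128, shuffle=True)

for xy_batch, t_batch in loader:
    pass  # e.g. pred = model(xy_batch); loss = criterion(pred, t_batch)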
st48859
Another question: if I want to transform that grid into a list of (x, y) points, would I do the same? I found that if I do XY.view(-1, 2), I get repeating (x, y) pairs.
st48860
Thank you for your reply. Maybe I didn't understand the whole problem myself before explaining, but now I do. What I want is to create batches from parts of the grid. If you think of it as an image: the image is divided into subsets, and then each subset is randomly sampled to create new sub-images filled with random samples of those subdivisions. So, for example, I want to make batches of shape [12,12,2] and [12,12,1] that contain random samples of the dataset. I think I would have to create and slice it in my own way; I tried to find something similar, but I couldn't find anything like it. For now, I have done something close to what you told me to do. Thank you for the reply; if you know how I could start doing this, it could improve my current code.
st48861
I'm not sure if I fully understand your use case, but would you like to create windows or patches from a larger image? E.g., given an input image of [channels=3, height=224, width=224], would you like to create multiple smaller patches of e.g. [channels=3, height=12, width=12] using a sliding window approach? If so, then you should use tensor.unfold to create these patches. Here is an example.
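A short sketch of the unfold approach on the example shape (non-overlapping 12x12 windows, so the step equals the window size):

x = torch.randn(3, 224, 224)                     # [channels, height, width]
patches = x.unfold(1, 12, 12).unfold(2, 12, 12)  # [3, 18, 18, 12, 12]
patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, 3, 12, 12)
print(patches.shape)                             # torch.Size([324, 3, 12, 12])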
st48862
Yeah, I think I haven't mastered how to properly use grids. I'm trying to train a NN that can learn f(x, y), so that for each pair (x, y) there is a value associated with it; this specific function: T = np.sin(3*X) * np.sin(3*Y). I wanted to do this example first, to see if I can make it work before using it on more complicated inputs such as images. That is why I tried to use np.mgrid to sample (x, y) pairs uniformly and use them as my training set. Can you think of any way I could do it more easily?
st48863
Yes, exactly: to use them later as mini-batches in training. And in case you are wondering why I'm trying to do this: I'm working on a new algorithm for academic purposes.
st48864
Adrian_Briceno_Aguil: That is why I tried to use np.mgrid to produce sample (x,y) uniformly and use it as my training set. Could you think of any way I could do it more easily?

I think you are using a valid approach. Are you concerned about the ordering of the data after the reshape? If so, you could print some data and check whether the result is as expected. After my view operation, the [50, 50] numpy array would be flattened to [2500, 1], so that each "point" is now a sample.
st48865
Yeah, that is one of my concerns. Since I want to train a NN on a specific image, I wanted to see if I could keep the grid structure for training instead of flattening it, because I want to cut the grid into overlapping quadrants, and somehow that has been quite difficult to do with a 1D array. Haha, I'm just trying to avoid redoing my whole code.
st48866
If I import torch from Python 3.8.0, I get the following error: OSError: libstdc++.so.6: cannot open shared object file: No such file or directory (screenshot of the traceback) The strangest thing about it, however, is that I do have that shared object file, and I can even load it using ctypes.CDLL("libstdc++.so.6", mode=ctypes.RTLD_GLOBAL), exactly as pytorch tries and fails to do! After doing so, torch imports fine! What is going on, and how can I avoid having to put the ctypes.CDLL call before every import of torch, in every file I use torch in? (screenshot of the working session) Thanks, Jack
st48867
Upon further inspection, it seems that pytorch doesn't load libstdc++.so.6 directly, but rather loads libtorch_global_deps.so. Why can I load libstdc++.so.6 directly, but not when importing libtorch_global_deps.so? Thanks, Jack