st48068 | The signal resample function of SciPy (scipy.signal.resample) uses a Fourier method for up/downsampling. In many cases, the performance of Fourier-based methods is observed to be better than nearest, linear, etc. interpolation (which are available under torch.nn.functional.interpolate).
My implementation of the same can be found at:
My GitHub code
This resample has one extra piece of functionality compared to the scipy version: it can interpolate over multiple axes, whereas the scipy version can only work with one axis. With this, one can just pass multiple dimension sizes as a list to “num” and the list of axes to “axis”. |
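As a rough illustration of the Fourier-resampling idea described above (not the linked implementation; just a minimal single-axis sketch using torch.fft, where the function name and the amplitude rescaling are my own assumptions):
import torch

def fourier_resample(x, num, dim=-1):
    # Resample real-valued x to length `num` along `dim` by truncating or
    # zero-padding its spectrum, similar in spirit to scipy.signal.resample.
    dim = dim % x.ndim
    n = x.shape[dim]
    X = torch.fft.rfft(x, dim=dim)
    k = num // 2 + 1                                         # rfft bins needed for the new length
    if k <= X.shape[dim]:
        X = X.narrow(dim, 0, k)                              # drop high frequencies (downsample)
    else:
        pad_shape = list(X.shape)
        pad_shape[dim] = k - X.shape[dim]
        X = torch.cat([X, X.new_zeros(pad_shape)], dim=dim)  # zero-pad the spectrum (upsample)
    return torch.fft.irfft(X, n=num, dim=dim) * (num / n)    # rescale amplitude

x = torch.sin(torch.linspace(0, 6.28, 100))
y = fourier_resample(x, 150)                                 # 100 samples -> 150 samples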
st48069 | The standard batching method is to provide multiple inputs to a single network. Does PyTorch have support for batching calls with the same input to multiple networks, provided those networks have the same architecture, just different weights? |
st48070 | If you want to feed the same input (or batch of inputs) to multiple networks, you can create a list of modules and optimizers. Just use these inside the training loop, like you would use a single network. Something like this should work.
models = [model1, model2, ..., model10]
optimizers = [optimizer1, optimizer2, ..., optimizer10]

for model in models:
    model.train()

for batch in train_loader:
    inputs, targets = batch['input'].to(device), batch['target'].to(device)
    for c in range(len(models)):
        optimizers[c].zero_grad()
        outputs = models[c](inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizers[c].step()
        running_loss += loss.item() |
st48071 | This runs each model on the input sequentially. What I’m looking for is a way to run multiple models on the same input in parallel on the GPU (the same way that one model can run on multiple inputs in parallel instead of sequentially). |
st48072 | You can read the data just once (with the DataLoader). Then, create multiple models. Next, push each one to the desired device via to('cuda:id') and simply pass the data to each model.
Now, the training happens on different devices, so it should be executed in parallel.
If you want to train a model using multiple devices, set CUDA_VISIBLE_DEVICES. |
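A minimal sketch of this suggestion, assuming two visible GPUs and using tiny stand-in models (the real models would share one architecture but have different weights):
import torch
import torch.nn as nn

model_a = nn.Linear(10, 2).to('cuda:0')
model_b = nn.Linear(10, 2).to('cuda:1')

x = torch.randn(8, 10)             # read the batch once (e.g. from the DataLoader)
out_a = model_a(x.to('cuda:0'))    # CUDA kernels are launched asynchronously,
out_b = model_b(x.to('cuda:1'))    # so the two devices can compute concurrently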
st48073 | Hi everyone,
New guy here. I’m a complete beginner to programming but I’m excited and eager to start learning. I was referred to Pytorch and Keras when I expressed an interest in AI programming but I’m curious if this is where I’d start as a complete beginner?
Can I just jump into Pytorch or should I work on getting an understanding of Python first? |
st48074 | Hi,
You should definitely find an introduction tutorial on python online first.
Then for pytorch, the 60min blitz is the best place to start I think: https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html 7 |
st48075 | Hi! I am a beginner in deep learning. I have implemented the following model. The problem I am facing now is that the weights and biases are not being updated. When I plot the losses, they are periodic (they look the same every epoch). Can anyone please help me see what is causing this? Thank you!
class MyModel(torch.nn.Module):
    def __init__(self):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(MyModel, self).__init__()
        # Pooling layers
        self.pool = torch.nn.MaxPool1d(20, stride=2)
        self.pool_avg = torch.nn.AvgPool1d(127)
        # Time (left) convolution
        self.time1 = torch.nn.Conv1d(in_channels=1, out_channels=64, kernel_size=16, stride=1, padding=17)
        self.time2 = torch.nn.Conv1d(in_channels=64, out_channels=128, kernel_size=16, stride=1, padding=17)
        self.time3 = torch.nn.Conv1d(in_channels=128, out_channels=256, kernel_size=16, stride=1, padding=17)
        # Frequency (right) convolution
        self.freq1 = torch.nn.Conv1d(in_channels=1, out_channels=64, kernel_size=16, stride=1, padding=17)
        self.freq2 = torch.nn.Conv1d(in_channels=64, out_channels=128, kernel_size=16, stride=1, padding=17)
        self.freq3 = torch.nn.Conv1d(in_channels=128, out_channels=256, kernel_size=16, stride=1, padding=17)
        # Fully connected layers
        self.linear1 = torch.nn.Linear(512, 256)
        self.linear2 = torch.nn.Linear(256, 128)
        self.linear3 = torch.nn.Linear(128, 64)
        # Final layer
        self.final = torch.nn.Softmax(dim=1)

    def forward(self, time_domian, freq_domain, clean_result):
        """
        In the forward function we accept a Tensor of input data and we must return
        a Tensor of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Tensors.
        """
        # input dimension
        time_domian = time_domian.unsqueeze(1)
        freq_domain = freq_domain.unsqueeze(1)
        # Time (left) convolution
        # print(f"len time domain: {time_domian.shape}")
        time1_out = self.time1(time_domian)
        time1_out = self.pool(time1_out)
        # print(f"len time1_out: {time1_out.shape}")
        time2_out = self.time2(time1_out)
        time2_out = self.pool(time2_out)
        # print(f"len time2_out: {time2_out.shape}")
        time3_out = self.time3(time2_out)
        time3_out = self.pool(time3_out)
        # print(f"len time3_out: {time3_out.shape}")
        # Frequency (right) convolution
        freq1_out = self.freq1(freq_domain)
        freq1_out = self.pool(freq1_out)
        freq2_out = self.freq2(freq1_out)
        freq2_out = self.pool(freq2_out)
        freq3_out = self.freq3(freq2_out)
        freq3_out = self.pool(freq3_out)
        # print(f"len freq3_out: {freq3_out.shape}")
        # Connection
        conv_out = torch.cat((time3_out, freq3_out), dim=1)
        conv_out_ave = torch.squeeze(self.pool_avg(conv_out))
        # print(f"len conv_out: {conv_out.shape}")
        # print(f"len conv_out_ave: {conv_out_ave.shape}")
        # Fully connected layers
        fc1_out = self.linear1(conv_out_ave).clamp(min=0)  # relu
        # print(f"len fc1_out: {fc1_out.shape}")
        fc2_out = self.linear2(fc1_out).clamp(min=0)  # relu
        # print(f"len fc2_out: {fc2_out.shape}")
        fc3_out = self.linear3(fc2_out).clamp(min=0)  # relu
        # print(f"len fc3_out: {fc3_out.shape}")
        # print(fc3_out)
        # Final layer
        final_out = torch.max(self.final(fc3_out), dim=1)[0]
        # print(f"len final_out: {final_out.shape}")
        # print(final_out)
        return final_out |
st48076 | The torch.max operation in:
final_out = torch.max(self.final(fc3_out), dim=1)[0]
will allow the gradient to pass to the max. value only and will set all other values to zero, so I’m not sure if that’s really what you want.
If you are dealing with e.g. a multi-class classification, pass the raw logits to nn.CrossEntropyLoss instead of using torch.max or applying softmax. |
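A minimal, self-contained sketch of that suggestion (random tensors stand in for the model outputs and labels):
import torch
import torch.nn as nn

logits = torch.randn(8, 5, requires_grad=True)    # raw outputs of the last Linear layer, no softmax
targets = torch.randint(0, 5, (8,))               # class indices, dtype long
loss = nn.CrossEntropyLoss()(logits, targets)     # applies log-softmax internally
loss.backward()
preds = logits.argmax(dim=1)                      # argmax only for computing accuracy, not the loss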
st48077 | Do you mean passing “fc3_out” directly to loss_fn?
Also, the code below is how I am performing forward and backward propagation. I wonder if this part of the code is causing the gradients not to update:
y_pred = model(time, freq, clean_result=False)
# convert to 1 & 0
for i in range(len(y_pred)):
    if y_pred[i] >= 0.5:
        y_pred[i] = 1
    else:
        y_pred[i] = 0
# Compute loss
loss = loss_fn(y_pred, y)
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step() |
st48078 | What loss function are you using?
The loop in which you are converting probabilities to predictions is causing the problem! |
st48079 | dhyey:
What loss function are you using
Can you please elaborate on “the loop in which you are converting probabilities to predictions is causing the problem”? Thank you!
I am using the following loss function:
loss_fn = torch.nn.BCELoss()
# define learning rate and optimizer
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) |
st48080 | So you need to remove
for i in range(len(y_pred)):
    if y_pred[i] >= 0.5:
        y_pred[i] = 1
    else:
        y_pred[i] = 0
because by doing this you are converting probabilities to labels, but BCELoss expects probabilities in y_pred.
Also, for this you need to apply a sigmoid function to the last layer's output,
and the last layer should be a linear layer that transforms the data to n*1 dimensions |
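A minimal sketch of the suggested fix for the binary case; here I use nn.BCEWithLogitsLoss (which fuses the sigmoid and BCE) instead of a separate sigmoid + nn.BCELoss, and random tensors stand in for the real features and targets:
import torch
import torch.nn as nn

head = nn.Linear(64, 1)                           # final layer: 64 features -> 1 logit
features = torch.randn(8, 64)
targets = torch.randint(0, 2, (8, 1)).float()

logits = head(features)
loss = nn.BCEWithLogitsLoss()(logits, targets)    # sigmoid + BCE in one, numerically stable
loss.backward()

with torch.no_grad():                             # threshold only for metrics, outside the graph
    preds = (torch.sigmoid(logits) >= 0.5).float()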
st48081 | Also, are you doing binary classification or multilabel?
If binary, your last layer should be a linear layer: after the 128 -> 64 mapping, add a 64 -> 1 mapping.
If multi-class:
use loss_function = CrossEntropyLoss,
remove the layer which is doing max(self.final),
and make sure that y contains the encoded labels according to the targets and that the dtype of y is torch.long |
st48082 | I wonder why PyTorch uses PIL and not cv2.
As this article (https://www.kaggle.com/vfdev5/pil-vs-opencv) says, cv2 is three times faster than PIL.
I did an experiment and I really think so.
But why does PyTorch use PIL? |
st48083 | If PyTorch could remove its dependency on PIL and use OpenCV 3 instead, it would help me work with multi-channel images that have more than 3 channels. At the moment, if you pass a 10-ch image or 10-ch mask, PIL truncates it to a 3-ch image.
See these issues:
github.com/python-pillow/Pillow - Issue: “PIL cannot handle multi-channel images, they get truncated to 3 channels by default” (opened by edowson on 2018-06-07)
github.com/python-pillow/Pillow - Issue: “Tracking Issue for high bit depth multichannel images” (opened by wiredfool on 2016-05-05) |
st48084 | The only thing that uses PIL is the torchvision package. Rewriting it to support cv2 is probably not such a huge effort, since you just have to replace function calls here and there - if you have experience with cv2 you could probably try that yourself! |
st48085 | If you want a performance increase, you could install Pillow-SIMD instead of Pillow. This is a drop-in replacement without any code changes. |
st48086 | antspy:
Rewriting it to support cv2 is probably not such a huge effort, since you just have to replace function calls here and there - if you have experience with cv2 you could probably try that yourself!
Thanks for the heads up, I’ll try that out. |
st48087 | If you finish it, will you make it public like on github? That will be beneficial to many others. |
st48088 | deJQK:
If you finish it, will you make it public like on github? That will be beneficial to many others.
Yes, I will. I ran into this issue with PIL last week, trying to work with multi-channel images. Replacing TorchVision’s dependency on PIL sounds like the right thing to do. It looks like the TorchVision library already supports multiple imaging backends, so it should be a matter of adding a cv2 backend.
GitHub: pytorch/vision - Datasets, Transforms and Models specific to Computer Vision
I’ll let you know once I’m done. |
st48089 | I have rewritten the “transforms” in the torchvision package with cv2.
GitHub: YU-Zhiyang/opencv_transforms_torchvision - an opencv reimplementation of the transforms in torchvision. |
st48090 | Maybe because cv2 is a heavier module and PIL is a lighter one. There is no point in going for a heavier module if we can get the work done with a lighter one. |
st48091 | With the recent addition of torchvision.io, now might be a good time to consider adding an option to use opencv as the image backend. |
st48092 | I was making a NN with just categorical features, so I used nn.Embedding, after which I applied a linear layer.
I found that the output distribution does not have a standard deviation of 1; it seems to me this is because embeddings are initialized with a normal(0, 1) distribution and linear layers with a uniform distribution.
Hence, if the std is not 1, then with increasing network depth the std of the output must tend towards 0, hence vanishing gradients.
So should I change the type of initialization, or will it work fine with no problems? Because this seems a bit odd to me.
I tried a deep network of 8 layers and found that the output std is 0.09, and so is the std of the gradients, so it looks like vanishing gradients to me. |
st48093 | Can someone explain to me the point of the number of heads in MultiheadAttention?
What happens if I increase or decrease them? Would it change the number of learnable parameters?
What is the intuition behind increasing or decreasing the number of heads in MultiheadAttention? |
st48094 | I’ll try the intuition part…
You can think of all the heads as a panel of people, where each head is a different person with its own thoughts and view of the situation (the head’s weights).
Each person gives their output, and then there is a leader that takes all the outputs of the panel into account and gives the final verdict; that leader is the final feed-forward part of the multi-head attention, which concatenates all the outputs from the heads and feeds them to a linear layer to produce the final output.
Adding more heads will add more parameters.
As a side note, more heads does not mean a better model; it’s a hyperparameter, and it depends on the challenge.
Roy. |
st48095 | RoySadaka:
Adding more heads will add more parameters.
This part is not correct, I think: embed_dim is split into num_heads groups, so the parameter shapes are the same, but these groups are processed independently using reshaping (source). |
st48096 | @googlebot
Sorry, you are correct: the PyTorch implementation (following the “Attention Is All You Need” paper) will have the same parameter count regardless of the number of heads.
Just to note, there are other implementations of MultiHeadAttention where the parameter count scales with the number of heads.
Roy |
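A quick way to check this claim for nn.MultiheadAttention (the two printed numbers are expected to be identical; the sizes chosen here are arbitrary):
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

print(n_params(nn.MultiheadAttention(embed_dim=512, num_heads=4)))   # same count
print(n_params(nn.MultiheadAttention(embed_dim=512, num_heads=8)))   # same count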
st48097 | @RoySadaka @googlebot Thanks for the help.
Hmm, so a larger or smaller number of heads doesn’t necessarily mean better or worse generalization? |
st48098 | More heads might get you more generalization, and I suggest you try it out, but there’s a chance it will not yield better results.
For example (true story):
I created a model that uses 4 heads, and adding more heads actually degraded the accuracy, tested both with the PyTorch implementation and with another implementation (one that adds more parameters for more heads).
Reducing the number of heads also hurt accuracy, so 4 is the magic number for my model and data.
The number of heads is a hyperparameter that you need to explore to find what fits your problem best.
Roy |
st48099 | I would like to use an iter on DataLoader instead of a for loop. At the beginning of each training step I use the following code:
train_loader = DataLoader(dataset=train_dataset, batch_size=8, shuffle=True, num_workers=4)
train_loader_iter = iter(train_loader)
while(True):
    try:
        data_dict = next(train_loader_iter)
    except StopIteration:
        print('Refreshing iterator...')
        train_loader_iter = iter(train_loader)
        data_dict = next(train_loader_iter)
        print('Iterator refreshed...')
However when my iterator should refresh (i.e. I read ‘Refreshing iterator…’) the process hangs and gets stuck infinitely. Am I doing something wrong or this is just not possible? I’m on Pytorch 1.6.0 and my dataset is a simple subclass of the Dataset class. |
st48100 | Your code is looping infinitely over the data (that’s why it looks like your process is hung). You can limit the number of times you ‘refresh’ your iterator (eg: 100) by replacing while True with for _ in range(100). |
st48101 | That is not the problem I’m having - looping over the data is exactly what I want. The problem is that the first time I refresh the iterator my process gets stuck. It would also happen if I swapped the “while(True)” with “for _ in range(2)” |
st48102 | Hmm, I can’t reproduce this issue on an example dataset. Can you check if you have the same problem with another dataset? |
st48103 | Ubuntu: 16.04 server
Python 3.6
PyTorch: 0.2.0_3
Error: RuntimeError: unable to write to file </torch_18693_1954506624> at /pytorch/torch/lib/TH/THAllocator.c:271
I have encountered this error when running PyTorch code on an Ubuntu server.
When debugging the code, I found that the error occurred in the DataLoader.
The dataset’s __getitem__ method returned (img, label), where img’s type is ndarray. I also tried returning img as a Tensor, but in that case the process blocked.
The code runs properly locally, but fails on the server.
What should I do to fix that?
Thanks! |
st48104 | Are you using Docker?
I had a similar issue and had to add the --ipc=host flag.
Please note that PyTorch uses shared memory to share data between processes, so if torch multiprocessing is used (e.g. for multithreaded data loaders) the default shared memory segment size that container runs with is not enough, and you should increase shared memory size either with --ipc=host or --shm-size command line options to nvidia-docker run. |
st48105 | Hi, I created an environment with conda (PyTorch 1.2.0, CUDA 10.0). After training for 2 epochs this problem happens. How can I solve it? |
st48106 | You might not have enough shared memory, so you could try to increase it on your system (or docker, if you are using it).
I would also recommend to update to the latest stable PyTorch version (1.5) just in case you are hitting an older bug.
If you are using multiple workers in your DataLoader, you could also try to set num_workers=0 for the sake of debugging. |
st48107 | Thanks! I killed the other processes and ran only this PyTorch task, and the problem disappeared. The reason was that my system did not have enough shared memory. Thanks for your reply! |
st48108 | Is there a way to override the location of /dev/shm (shared memory) for PyTorch?
Reference for sklearn: https://stackoverflow.com/questions/40115043/no-space-left-on-device-error-while-fitting-sklearn-model
Example: %env JOBLIB_TEMP_FOLDER=/tmp
Please suggest some alternatives |
st48109 | I’m not aware of a way to do so and would recommend to increase the shared memory, if your setup doesn’t provide a sufficiently large amount. |
st48110 | Unfortunately for me increasing the shared memory is not possible. Please suggest alternatives. |
st48111 | I don’t know alternatives to shared memory for multiprocessing IPC.
The fallback would be to use the main thread as for the data loading via num_workers=0, but this would also reduce the performance. |
st48112 | Hi,
When I check the folder of my images with the code below, there are no corrupted images,
but when I use a custom dataset and start to train the network, this error happens during training (it does not stop the training, which goes on after that, but I do not know if it affects the parameters being trained, and I wonder how I can handle this).
Thanks,
Epoch: 0
Corrupt JPEG data: 24 extraneous bytes before marker 0xd9
[===========================================================>…] Step: 147ms | Tot: 11s720ms | Loss: 1.285 | Acc: 57.554% (640/111 12/12 2
[====================================================>…] Step: 883ms | Tot: 4s311ms | Loss: 0.665 | Acc: 60.587% (289/47 5/5 5
Saving…
annotations = pd.read_csv(datapath, sep='\t')
root_dir = '/home/ubuntu/files'
for index in range(len(annotations)):
    img_path = os.path.join(root_dir, annotations.loc[index, 'image_path'])
    try:
        img = Image.open(img_path)
        img.verify()
    except:
        print('Bad file:', index)  # print out the names of corrupt files |
st48114 | I would try to track down the image file, which is raising this error and either remove it or load and save it again to hopefully get rid of the JPEG corruption. |
st48115 | Thanks. I did the same thing. Using the mogrify command line tool (provided by ImageMagick), I could detect which image caused this error. |
st48116 | Hey,
I want to implement a certain improvement for a Vanilla RNN.
The regular format of the network is
h_t = tanh( W_ih x_t + b_ih + W_hh h_t-1 + b_hh)
I want to make the following change:
h_t = tanh( W_ih x_t + b_ih + C(W_hh) h_t-1 + b_hh)
Where C is a linear function and C(W_hh) is a linear transformation of W_hh
How would you recommend implementing this change?
Thanks
|
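One possible way to implement this, as a hedged sketch: write a custom RNN cell and pick a concrete linear map C. Here C(W_hh) = A @ W_hh with a learnable matrix A, which is only one of several reasonable choices for a "linear transformation of W_hh":
import torch
import torch.nn as nn

class TransformedRNNCell(nn.Module):
    # h_t = tanh(W_ih x_t + b_ih + C(W_hh) h_{t-1} + b_hh), with C(W) = A @ W (assumed form of C)
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.w_ih = nn.Parameter(torch.randn(hidden_size, input_size) * 0.1)
        self.b_ih = nn.Parameter(torch.zeros(hidden_size))
        self.w_hh = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.1)
        self.b_hh = nn.Parameter(torch.zeros(hidden_size))
        self.A = nn.Parameter(torch.eye(hidden_size))      # the linear map C

    def forward(self, x, h):
        c_whh = self.A @ self.w_hh                          # C(W_hh)
        return torch.tanh(x @ self.w_ih.t() + self.b_ih + h @ c_whh.t() + self.b_hh)

cell = TransformedRNNCell(8, 16)
h = torch.zeros(4, 16)
for t in range(10):                                         # unroll over time steps manually
    h = cell(torch.randn(4, 8), h)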
st48117 | Hello !
Here is my simple autoencoder code :
It seems to work well on my laptop, without GPU acceleration. However when I run it on a computer (remotely using SSH) with a RTX 2080, I get an error
File “autoencoder.py”, line 87, in
batch = batch.to(device)
RuntimeError: CUDA error: an illegal memory access was encountered
from pathlib import Path
import os
import torch
from torchvision.utils import make_grid
from torch.utils.data import Dataset, DataLoader
from sklearn.preprocessing import normalize
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
import numpy as np
import datetime
from datamaestro import prepare_dataset
ds = prepare_dataset("com.lecun.mnist");
train_images, train_labels = ds.train.images.data(), ds.train.labels.data()
test_images, test_labels = ds.test.images.data(), ds.test.labels.data()
writer = SummaryWriter("runs/runs"+datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
savepath = Path("model.pch")
#dataset
class MNISTDataset(Dataset):
    def __init__(self, data, label):
        super().__init__()
        self.data, self.label = data, label
        self.data.reshape(data.shape[0], -1)
        self.data = self.data / np.max(self.data)
        self.data = torch.tensor(data).reshape(data.shape[0], 784).float()
        self.label = torch.tensor(label)

    def __getitem__(self, index):
        return self.data[index], self.label[index]

    def __len__(self):
        return len(self.data)

class AutoEncoder(torch.nn.Module):
    def __init__(self, size_in=784, size_out=392):
        super().__init__()
        self.encoder = nn.Linear(size_in, size_out, bias=False)

    def forward(self, x):
        x = self.encoder(x)
        x = F.linear(x, self.encoder.weight.t())  # decoder
        return x
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
if torch.cuda.is_available():
    print('CUDA is available')
autoencod = AutoEncoder()
eps = 10e-6
BS = 50
nb_epochs = 1000
dataset = MNISTDataset(train_images, train_labels)
data_train = DataLoader(dataset, shuffle = True, batch_size = BS)
optimizer = torch.optim.SGD(params = autoencod.parameters(), lr = eps)
loss = torch.nn.MSELoss()
autoencod = autoencod.to(device)
for epoch in range(nb_epochs):
    for batch, labels in data_train:
        batch = batch.to(device)
        encod_decod = autoencod(batch.to(device))
        l = loss(encod_decod, batch)
        l.backward()
        optimizer.step()
        optimizer.zero_grad()
    with torch.no_grad():
        t_l = loss(dataset.data, autoencod(dataset.data))
        print(t_l)
        writer.add_scalar(' AutoEncoder MCELoss train :', t_l, epoch)
Thanks for any help ! |
st48119 | Hi, I am not sure, but probably here you don’t need to put batch on device twice.
TeaWaterSleep:
batch = batch.to(device)
encod_decod = autoencod(batch.to(device))
Also at this line your model (autoencod) is on device, but data is not
TeaWaterSleep:
t_l = loss(dataset.data, autoencod(dataset.data)) |
st48120 | Based on my understanding, PyTorch provides two APIs for profiling our application.
One is the torch.autograd.profiler.profile API. It has use_cuda flag, and we can choose to set it for either CPU or CUDA mode.
Another API for profiling is torch.utils.bottleneck (which provides both CPU and CUDA mode profiling according to the documentation https://pytorch.org/docs/stable/bottleneck.html 15).
And I expected them to give the same, or at least very close, profiling results, but they actually don’t. Below is my snippet for profiling and the results I got.
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-o', '--option', default=1, type=int)
    args = parser.parse_args()
    data = torch.randn(1, 3, 224, 224).cuda()
    net = resnet18().cuda()
    net.eval()
    # Warm up run
    for _ in range(10):
        net(data)
    if args.option == 0:
        for _ in range(100):
            out = net(data)
    elif args.option == 1:
        # Profiling 1
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        timing = 0
        for _ in range(100):
            start.record()
            out = net(data)
            end.record()
            torch.cuda.synchronize()
            timing += start.elapsed_time(end)
        print(timing)
    elif args.option == 2:
        # Profiling 2
        with torch.autograd.profiler.profile(use_cuda=True) as prof:
            for _ in range(100):
                net(data)
        print(prof.key_averages().table(sort_by='cuda_time_total'))
    elif args.option == 3:
        # Profiling 3
        with torch.autograd.profiler.profile() as prof:
            for _ in range(100):
                net(data)
        print(prof.key_averages().table(sort_by='cpu_time_total'))
torch.utils.bottleneck API
python -m torch.utils.bottleneck prof.py -o 0
CPU mode ...
Self CPU time total: 218.316ms
CUDA time total: 0.000us
CUDA mode ...
Self CPU time total: 207.595ms
CUDA time total: 207.673ms
torch.autograd.profiler.profile API
python prof.py -o 3
CPU mode ...
Self CPU time total: 295.712ms
python prof.py -o 2
CUDA mode ...
Self CPU time total: 499.643ms !!!!!!
CUDA time total: 1.802s !!!!!!
My question is why there is such a big profiling time discrepancy when I use torch.autograd.profiler.profile(use_cuda=True)? |
st48121 | Could you add synchronizations after the first CUDA call?
I’m not sure, if the autograd.profiler will accumulate the time needed for the CUDA context creation (and all other calls which were executed before and are not finished). |
st48122 | Hey @ptrblck, thanks for your suggestion. Unfortunately that does not change much.
I also attempted to use nvvp to profile the execution on the Nvidia side, and the kernel takes roughly 190ms for compute as shown in the below graph.
[screenshot: nvvp timeline view of the kernel execution]
That makes me really confused about how to interpret each API’s result. |
st48123 | This would approx. fit the first output, and could mean that the second approach is accumulating the time of all operations. |
st48124 | --------------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- ---------------
Name Self CPU total % Self CPU total CPU total % CPU total CPU time avg CUDA total % CUDA total CUDA time avg Number of Calls
--------------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- ---------------
conv2d 3.07% 13.165ms 42.23% 181.103ms 90.551us 19.28% 278.898ms 139.449us 2000
convolution 3.44% 14.748ms 39.16% 167.937ms 83.969us 18.64% 269.652ms 134.826us 2000
_convolution 6.31% 27.039ms 35.72% 153.189ms 76.595us 17.82% 257.776ms 128.888us 2000
cudnn_convolution 27.42% 117.591ms 27.42% 117.591ms 58.796us 15.84% 229.185ms 114.593us 2000
batch_norm 3.47% 14.885ms 38.57% 165.399ms 82.700us 8.48% 122.629ms 61.315us 2000
_batch_norm_impl_index 11.03% 47.299ms 35.10% 150.514ms 75.257us 7.81% 112.969ms 56.485us 2000
cudnn_batch_norm 18.00% 77.173ms 18.00% 77.173ms 38.586us 3.93% 56.870ms 28.435us 2000
relu_ 8.11% 34.773ms 8.11% 34.773ms 20.455us 2.08% 30.042ms 17.672us 1700
contiguous 8.14% 34.914ms 8.14% 34.914ms 2.885us 1.97% 28.449ms 2.351us 12100
add_ 3.78% 16.191ms 3.78% 16.191ms 20.239us 1.02% 14.801ms 18.501us 800
to 2.89% 12.375ms 3.12% 13.363ms 133.634us 0.91% 13.126ms 131.262us 100
adaptive_avg_pool2d 0.46% 1.954ms 1.63% 6.982ms 69.815us 0.48% 6.873ms 68.725us 100
max_pool2d 0.20% 859.574us 0.85% 3.662ms 36.618us 0.35% 5.001ms 50.011us 100
addmm 0.99% 4.265ms 0.99% 4.265ms 42.647us 0.32% 4.568ms 45.685us 100
max_pool2d_with_indices 0.65% 2.802ms 0.65% 2.802ms 28.022us 0.31% 4.461ms 44.609us 100
view 0.67% 2.873ms 0.67% 2.873ms 9.578us 0.20% 2.943ms 9.809us 300
mean 0.63% 2.697ms 0.63% 2.697ms 26.970us 0.20% 2.822ms 28.224us 100
flatten 0.16% 684.070us 0.51% 2.188ms 21.880us 0.15% 2.153ms 21.531us 100
reshape 0.15% 649.266us 0.35% 1.504ms 15.039us 0.10% 1.491ms 14.906us 100
empty 0.23% 988.832us 0.23% 988.832us 9.888us 0.07% 985.961us 9.860us 100
unsigned short 0.21% 909.201us 0.21% 909.201us 9.092us 0.06% 872.228us 8.722us 100
--------------------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- --------------- ---------------
Self CPU time total: 428.835ms
CUDA time total: 1.447s
The CUDA time total is just the sum of everything in the table above. And I assume the operations in the table’s CUDA columns are for compute; if they were for the API calls, the time should fall under the CPU time columns, right? |
st48125 | I also met a similar problem: the profiling result is quite different from what I record manually, and I don’t know the exact meaning of “Self CPU time total / CUDA time total”. |
st48126 | Yes, I met the same problem. The difference between what the profiler printed and what I recorded using the following code is actually big:
with torch.autograd.profiler.profile(use_cuda=True) as prof:
    with torch.autograd.profiler.record_function("model_test"):
        out_test = model_test(img)
print('test time is ', prof)
prof.export_chrome_trace("/trace.json".format(count))
I tested with model with only one conv layer, and did 1 batch warm up before this…
The printed prof gives me results:
Self CPU time total: 367.306us
CUDA time total: 2.113ms
whereas the record tracking log gives me:
cpu time is the same, but cuda time is so different. tracing log gives me:
cuda: wall duration 0.505 ms, self time 0.063 ms
@ptrblck do you by chance know why? Thanks! |
st48127 | Not directly, but did you properly synchronize your code in your manual profiling? |
st48128 | Hi, I did put a cuda synchronize before; is that enough?
Unfortunately, the difference between the tracing log and the printed report is still big (only the CUDA total). It seems to me that in the printed report the CUDA total time just sums up all the items in the table without considering that some of them are sub-operations of other operations (for example, the first item in the table is the whole inference time), so it over-counts the time. Does that make sense to you? Thanks!
torch.cuda.synchronize()
with torch.autograd.profiler.profile(use_cuda=True) as prof:
    out_test = model_test(img)
print('test time is ', prof) |
st48129 | I trained the model with DistributedDataParallel and saved a weight file.
Then I am trying to load the pth file into the model and run eval:
# multi gpu load
self.model = EfficientDet(num_classes=args.num_class,
                          network=args.network,
                          W_bifpn=EFFICIENTDET[args.network]['W_bifpn'],
                          D_bifpn=EFFICIENTDET[args.network]['D_bifpn'],
                          D_class=EFFICIENTDET[args.network]['D_class']
                          )
if torch.cuda.is_available():
    self.model = self.model.cuda()
if args.distributed:
    print('args.distributed...FF')
    self.model = self.model.to(args.rank)
    torch.cuda.set_device(0)
    self.model = torch.nn.parallel.DistributedDataParallel(self.model
                                                           ,device_ids=[args.rank]
                                                           ,output_device=[args.rank]
                                                           ,find_unused_parameters=True)
    self.model = self.model.module
    #self.model = self.model.cuda()
if(self.weights is not None):
    print('load state dic...', self.weights)
    checkpoint = torch.load(
        self.weights, map_location=lambda storage, loc: storage)
    state_dict = checkpoint['state_dict']
    self.model.load_state_dict(state_dict)
if torch.cuda.is_available():
    self.model = self.model.cuda()
self.model.eval()
Then got the following error
Loaded pretrained weights for efficientnet-b0
args.distributed...FF
Traceback (most recent call last):
File "demokogas.py", line 174, in <module>
detect = Detect(weights=args.weight)
File "demokogas.py", line 88, in __init__
,find_unused_parameters=True)
File "/home/jake/venv/lib/python3.6/site-packages/torch/nn/parallel/distributed.py", line 305, in __init__
self.process_group = _get_default_group()
File "/home/jake/venv/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 285, in _get_default_group
raise RuntimeError("Default process group has not been initialized, "
RuntimeError: Default process group has not been initialized, please make sure to call init_process_group. |
st48130 | Have a look at this post, which dealt with the same error and which might have been solved using the proposed solutions. |
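Aside from the linked post: if the goal is only single-GPU inference, a common workaround (not taken from the linked post) is to skip DistributedDataParallel entirely and strip the "module." prefix that DDP adds to parameter names when the checkpoint is saved. A self-contained toy version of that pattern:
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                                              # toy stand-in for the real model

# Simulate a checkpoint saved from a DDP-wrapped model: keys carry a "module." prefix.
ddp_style_sd = {'module.' + k: v for k, v in model.state_dict().items()}

# Strip the prefix so the weights load into a plain (non-DDP) model.
clean_sd = {k[len('module.'):] if k.startswith('module.') else k: v
            for k, v in ddp_style_sd.items()}
model.load_state_dict(clean_sd)
model.eval()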
st48131 | Hi,
I’m working with VAEs, 3 individual ones: vae1, vae2, vae3. In vae3 I’m loading the pre-trained encoder of vae2 and the pre-trained decoder of vae1. After the decoder I add some layers as well.
My problem now is: I want to freeze the decoder, so that just the encoder and the last layers are trained.
I’ve read a lot about freezing layers, but I’m still not sure if it is enough to go through the decoder (consisting of Conv2d, ConvTranspose2d, BatchNorm, LeakyReLU) and just set requires_grad=False… The optimizer is initialized with model.parameters(). Is this the proper way? Will the encoder and the last layers be correctly optimized?
Thank you!
EDIT: for clarification: the architecture is basically a variational autoencoder, which consists of a trainable encoder, a frozen decoder and some trainable layers after the decoder. |
st48132 | Sorry, I cannot completely understand the structure of your network, but I think your main question is how to freeze the part of the network with conv layers and BN layers.
You can achieve it as follows (see the sketch after this list):
Set the decoder to eval mode: decoder.eval()
This is used to freeze BN layers (and dropout). In BN layers, besides parameters, there are buffers which are not optimized by the optimizer but are updated automatically during forwarding in training mode. Please see the explanation at How to properly fix batchnorm layers.
Exclude the decoder parameters from the optimizer.
(Optional) Set requires_grad=False. I think this is mainly to speed up training and save memory. If not, please tell me. |
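A runnable sketch of these three steps with tiny stand-in modules (the real encoder, decoder and extra layers would of course be the VAE parts from the question):
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.LeakyReLU())
decoder = nn.Sequential(nn.ConvTranspose2d(8, 3, 3, padding=1), nn.BatchNorm2d(3), nn.LeakyReLU())
head = nn.Conv2d(3, 3, 1)                       # the extra trainable layers after the decoder

decoder.eval()                                  # step 1: freeze BN buffers (running mean/var)
for p in decoder.parameters():
    p.requires_grad = False                     # step 3 (optional): skip grad computation for the decoder

params = list(encoder.parameters()) + list(head.parameters())   # step 2: exclude decoder params
optimizer = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(2, 3, 16, 16)
out = head(decoder(encoder(x)))                 # gradients still flow through the frozen decoder
out.mean().backward()
optimizer.step()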
st48133 | Thanks for your response.
I edited the post, maybe the architecture is clearer now.
Setting the whole decoder into evaluation mode with decoder.eval(): are gradients properly calculated for these inner layers, and is backpropagation done correctly from the last layer back to the encoder?
What is the benefit of telling the optimizer not to optimize the decoder anymore? I thought that the decoder wouldn’t be optimized anyway due to the frozen weights/biases. |
st48134 | The only way to “freeze weights and biases” is to exclude parameters from the optimizer. eval() has no effect on the convolutional layers. It is used to freeze BN and dropout. Besides, excluding parameters can save GPU memory.
.eval() does not affect the gradient calculation, but I am not sure if the gradients can be properly calculated when the inner layers are set to requires_grad=False |
st48135 | Suppose I have a tensor output, which is the result of my network and a tensor, ‘target’, which is my desired output:
output.shape = torch.Size([B, C, N])
target.shape = torch.Size([B, C, N])
I am interested that the network predicts a given N correctly, and not the particular permutation of C that is given by the network as output, as this cannot be ordered.
For this reason, I would like to calculate the loss for each possible permutation of C in output and input respectively, taking the minimum possible overall loss.
To demonstrate in normal code what I want to do, it would be written in normal Python script as follows:
import torch
def Loss(target, output):
    loss = 0
    min_loss = 0
    # Calculate minimum MSE and add to loss value
    for b in range(target.shape[0]):
        for c_i in range(target.shape[1]):
            for c_ii in range(target.shape[1]):
                loss_temp = torch.sum((target[b, c_i] - output[b, c_ii])**2)
                if(c_ii == 0 or loss_temp < min_loss):
                    min_loss = loss_temp
            loss = loss + min_loss
    # Calculate mean over batches
    loss = loss / target.shape[0]
    return loss
Is there a more elegant, PyTorch-oriented way of perform this operation? |
st48136 | I have made an attempt, with the following method:
utilize itertools.permutations to index one of the tensors;
repeat the other tensor along a new axis;
calculate sum of square difference along last axes
find minimum along the new, permutation axis
take average over batch.
def Loss(target, input):
    # calculate indices
    idx = torch.from_numpy(np.array(list(itertools.permutations(range(input.shape[-2])))))
    # index tensor with indices
    input_perms = input[:, idx, :]
    # repeat other tensor to same length
    target_perms = target.unsqueeze(1).repeat(1, len(idx), 1, 1)
    # calculate sum of squares
    losses = (input_perms - target_perms)**2
    losses = losses.flatten(start_dim=-2)
    loss_len = losses.shape[-1]
    losses = torch.sum(losses, dim=-1)
    # calculate the minimum along the permutation axis and then the mean along the batch axis. Remember, we still need to divide by the number of entries in each of the parts of the tensor we summed!
    min_loss = torch.mean(losses.min(dim=-1, keepdim=True)[0]) / loss_len
    return min_loss |
st48137 | I am trying to follow the pytorch multiprocessing docs 2 to set up a basic worker pool of processes playing games for my RL algorithm. The game playing only requires forwards passes on my neural net, so I don’t need gradients (I’m using torch.no_grad()).
I’m calling model.share_memory() before passing my model to the workers using torch.multiproccessing.Pool.apply_async. What I find though is that although the model is passed very quickly to the other processes, the forward pass on those other processes becomes much slower, to the point that it is always much slower than a single-process implementation.
I tried profiling my processes to see what was causing the slowdown, but it looks like the basic pytorch neural net forward passes were taking up the vast majority of the time:
%Own %Total OwnTime TotalTime Function (filename:line)
37.00% 37.00% 685.1s 685.1s linear (torch/nn/functional.py:1676)
22.00% 22.00% 464.5s 464.5s layer_norm (torch/nn/functional.py:2048)
4.00% 4.00% 166.2s 166.2s softmax (torch/nn/functional.py:1498)
13.00% 13.00% 165.3s 165.3s multi_head_attention_forward (torch/nn/functional.py:4130)
4.00% 4.00% 164.9s 164.9s multi_head_attention_forward (torch/nn/functional.py:4108)
The slowdown gets worse and worse the more processes I add to the pool (overall throughput continues to go down). I also can tell that the problem isn’t a one-time overhead, because just doing a single forward pass in each worker is not that much slower, but with many forward passes the slowdown becomes extremely noticeable.
I have no idea what could be causing this slowdown. Is this expected that inference running from shared memory would be dramatically slower? I have also tried changing the multiprocessing context to “fork” or “spawn” to no avail. |
st48138 | Hello, all,
I’m trying to get a local install of fastai running…I was hoping that using the fastai docker images would spare me having to install and manage the fastai and pytorch libraries myself, but I’m running into a segfault in pytorch, which I’m not sure how to fix.
My setup:
CPU: Intel® Core™ i7 CPU 950 @ 3.07GHz
RAM: 12GB
Video card: GeForce RTX 2070 SUPER
OS: Ubuntu 20.04 (clean, just re-installed)
Nvidia driver: Ubuntu-provided nvidia-450
Followed instructions to install docker and nvidia-docker extensions.
Running torch.cuda.is_available() in the fastai docker container returns True.
The error:
In Jupyter the kernel crashes & is restarted at the first “learn.fine_tune(1)” line.
In dmesg, there’s a line that says: traps: python[1910] trap invalid opcode ip:7fc9d0d63869 sp:7fff30e315a0 error:0 in libtorch_cpu.so[7fc9cfa41000+6754000]
If it helps, I had fastai v1 working on this hardware back in March, but got derailed by life and just picked it back up now. So, I know this setup can work, but something’s changed since March that’s not agreeing with my setup.
Has anyone seen this before, or have an idea of what I did wrong? |
st48139 | Solved by g-clef in post #11
Thanks everyone for your help.
Fastai patched this weekend to support torch 1.7, and that seems to have been enough to support the master branch of pytorch. I manually built torch and torchvision from master, with ENABLE_NNPACK=0 set as an environment variable to avoid the “Unsupported Hardware” er… |
st48140 | Are you seeing this issue only with the FastAI installation / docker image or also if you install the PyTorch binaries?
If I’m not mistaken these kind of errors are raised if your CPU encounters unsupported instructions, e.g. avx instructions on older CPUs. |
st48141 | I tried installing pytorch via conda locally (outside the docker container) and I’m seeing the same thing, yeah. Is there a way to configure/compile pytorch to not use those newer instructions?
I admit the CPU itself is a bit old (1st gen core i7, Nehalem)…I repurposed my old game machine and swapped out the GPU for something recent and powerful. I had hoped the CPU wouldn’t matter that much if the GPU was recent.
Thanks for your help. |
st48142 | If you build from source, cmake might automatically detect the CPU capability and might disable e.g. AVX, if it’s not supported.
Could you try that and see if it would be working? |
st48143 | I tried that, and got…mixed results.
The build finished properly when I ran it on master, but master is calling itself version 1.8.0, and FastAI apparently isn’t compatible with that version…it’s expecting 1.6. I tried loading it anyway, and it raises an exception about “FakeLoader” not having a “persistent_workers” attribute.
If I checkout the v1.6.0 tag, the build fails with errors like:
../caffe2/quantization/server/conv_dnnlowp_op.cc:1211:55: error: ‘depthwise_3x3x3_per_channel_quantization_pad_1’ was not declared in this scope
depthwise_3x3x3_per_channel_quantization_pad_1(
Is there something special I should be doing to build 1.6 instead of master?
Thanks again for the help. |
st48144 | Did you update all submodules after the 1.6 branch checkout?
Also, did you clean the build via python setup.py clean?
These types of error are often raised if the build is trying to reuse some temp. files from the previous builds. |
st48145 | Rather than mess with cleanup, I just deleted the folder & re-cloned it. I did:
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git pull
git checkout tags/v1.6.0
git pull
python3 setup.py build
and the end result was the same:
…/caffe2/quantization/server/conv_dnnlowp_op.cc: In instantiation of ‘void caffe2::ConvDNNLowPOp<T, ReluFused>::ConvNHWCCore_(const T*, std::vector*) [with T = unsigned char; bool ReluFused = false]’:
…/caffe2/quantization/server/conv_dnnlowp_op.cc:1752:16: required from here
…/caffe2/quantization/server/conv_dnnlowp_op.cc:1211:55: error: ‘depthwise_3x3x3_per_channel_quantization_pad_1’ was not declared in this scope
depthwise_3x3x3_per_channel_quantization_pad_1(
Any ideas what else I could try?
Thanks again. |
st48146 | I’ve got a similar CPU (Westmere Xeon) and have the same problem. This appears to be PyTorch issue 43300 (https://github.com/pytorch/pytorch/issues/43300 3). I’ve tried the latest 1.7.0 nightly build and while I don’t get this problem, I get a different one that’s probably also AVX related.
FWIW, my new issue with the nightly is that NNPACK, while running nnp_initialize returns “unsupported hardware”
The exact error is:
[W NNPACK.cpp:80] Could not initialize NNPACK! Reason: Unsupported hardware.
For a CPU as old as yours and mine, I think we’re going to have to build from source to ensure that no AVX support sneaks in. |
st48147 | I just tested the release of 1.7.0 on my Westmere Xeon and the NNPACK error is still present. I’ve thrown in the towel and have some AVX hardware to replace it, but I think anyone who wants to continue with pre-AVX hardware will need to build from source to avoid the SIGILL issues. |
st48148 | Thanks everyone for your help.
Fastai patched this weekend to support torch 1.7, and that seems to have been enough to support the master branch of pytorch. I manually built torch and torchvision from master, with ENABLE_NNPACK=0 set as an environment variable to avoid the “Unsupported Hardware” error, and the master branch of fastai. That seems to be working for me. |
st48149 | I have data that is Minmax scaled i.e. in the range [0,1]
For some purpose I need this data to be Standard scaled, i.e. in range [-1, 1]
But, I do not have access to data before it was Minmax scaled
Can I simply do the operation:
x = (x - mean) / std
where,
mean = mean of data in range [0,1]
std = standard deviation of data in range [0,1]
I mean, if I do this operation for all data points that are now in the range [0, 1],
will this be equivalent to doing standard scaling on the original data (before it was minmax scaled)? |
st48150 | It seems I had totally misunderstood these concepts.
I did a little bit more studying. Could you verify these:
1. Standard scaling
x = (x - mean) / std
makes the data have zero mean and unit variance/standard deviation
2. Minmax scaling
makes the data have a range between min and max
3. Z-score normalization
same as standard scaling
4. [0, 1] scaling
same as minmax scaling but with the min/max range set to 0 and 1 respectively
And it turns out that standard scaling the minmax-scaled data is equivalent to standard scaling the original data |
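A quick numerical check of that last claim (standardizing min-max-scaled data gives the same result as standardizing the original data, since min-max scaling is a positive affine map; the data here is random for illustration):
import torch

x = torch.randn(1000) * 5 + 3                    # "original" data
x_mm = (x - x.min()) / (x.max() - x.min())       # min-max scaled copy

z_orig = (x - x.mean()) / x.std()                # standardize the original
z_mm = (x_mm - x_mm.mean()) / x_mm.std()         # standardize the min-max scaled version

print(torch.allclose(z_orig, z_mm, atol=1e-5))   # True: the two are equivalent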
st48151 | Hi,
Yes, you are right.
Actually, it is common to first scale data to [0, 1] and then compute the mean and std to get the z-score.
But something I would like to mention is that using the z-score does not necessarily convert your data to [-1, 1]. Please see the post below:
Understanding transform.Normalize( ) vision
Hi,
There is main difference here. If you use mean=0.5 and std=0.5, your output value will be between [-1, 1] based on the normalization formula : (x - mean) / std which also called z-score. This score can be out of [-1, 1] when used mean and std of dataset, for instance ImageNet.
The definition says that we need to use population mean and std but it is usually unavailable, sample mean/std can be used as an estimation.
Bests |
st48152 | Hi!
I would appreciate it if you could give me a detailed explanation of what affine does in nn.Conv2d() or nn.BatchNorm2d().
Thank you! |
st48153 | It is just a scale & shift: y = x*w + b; for batch norm it is done channel-wise, i.e. x[B,C,H,W] * w[1,C,1,1] + b[1,C,1,1].
Conv operations don’t have this functionality, as the kernel and bias parameters implicitly do the scale & shift. |
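A small sketch showing what affine adds to a BatchNorm layer in terms of learnable parameters (the layer size here is arbitrary):
import torch.nn as nn

bn_affine = nn.BatchNorm2d(16, affine=True)    # learnable per-channel weight (gamma) and bias (beta)
bn_plain = nn.BatchNorm2d(16, affine=False)    # normalization only, no learnable scale/shift

print(sum(p.numel() for p in bn_affine.parameters()))   # 32 (16 weights + 16 biases)
print(sum(p.numel() for p in bn_plain.parameters()))    # 0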
st48154 | I changed torch.tensor(x) to torch.tensor(x).clone().detach(), but the problem is not solved.
Do you know what I am doing wrong here?
Thanks in advance!
for epoch in range(num_epochs):
    outputs = []
    outputs = torch.tensor(outputs, requires_grad=True)
    outputs = outputs.clone().detach().cuda()
    for fold in range(0, len(training_data), 5):  # we take 5 images
        xtrain = training_data[fold : fold+5]
        xtrain = torch.tensor(xtrain, requires_grad=True).clone().detach().float().cuda()
        xtrain = xtrain.view(5, 3, 120, 120, 120)
        # Clear gradients
        #optimizer.zero_grad()
        # Forward propagation
        optimizer.zero_grad()
        v = model(xtrain)
        v = torch.tensor(v, requires_grad=True).clone().detach()
        outputs = torch.cat((outputs, v), dim=0)
    # Calculate softmax and cross entropy loss
    targets = torch.Tensor(targets).clone().detach()
    labels = targets.cuda()
    outputs = torch.tensor(outputs, requires_grad=True)
    _, predicted = torch.max(outputs, 1)  # take the maximum value [0.96 0.04] ==> 0 (class index)
    accuracy = accuracyCalc(predicted, targets)
    labels = labels.long()
    labels = labels.view(-1)
    loss = nn.CrossEntropyLoss()
    loss = loss(outputs, labels)
    # Calculating gradients
    loss.backward()
    # Update parameters
    optimizer.step()
    loss_list_train.append(loss.clone())
    accuracy_list_train.append(accuracy/100)
    np.save('Datasets/brats/accuracy_list_train.npy', np.array(accuracy_list_train))
    np.save('Datasets/brats/loss_list_train.npy', np.array(loss_list_train))
    print('Iteration: {}/{} Loss: {} Accuracy: {} %'.format(epoch+1, num_epochs, loss.clone(), accuracy))
print('Model training : Finished')
result :
UserWarning: To copy construct from a tensor, it is recommended to use source
Tensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True),
rather than torch.tensor(sourceTensor) |
st48156 | The warning points to wrapping a tensor in torch.tensor, which is not recommended.
Instead of torch.tensor(outputs) use outputs.clone().detach() or the same with .requires_grad_(True), if necessary. |
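A tiny illustration of the recommended pattern (the tensors are arbitrary stand-ins):
import torch

t = torch.randn(3, requires_grad=True)
v = t * 2

a = v.clone().detach()                        # copy that is detached from the graph
b = v.clone().detach().requires_grad_(True)   # detached copy that starts a new graph
# c = torch.tensor(v)                         # this is what triggers the warning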
st48157 | For a particular application, I am porting the code from keras to pytorch. The input is of the size [ bs x timesteps x features ], lstm output is [ bs x time step x hidden ]. Now I want to reduce this to [ bs x time step x out_features](time distributed layer on keras)
Using linear,
nn.Linear(in_features=hidden, out_features=out_features)
Is this the right way to do this if I want to preserve time information or do I need to reshape the data using contiguous in any way to achieve it?
Any help appreciated. |
st48158 | For anyone who might need help with this in the future: the linear layer as mentioned in the post works correctly. Using an MSE loss on such data works normally, while with a cross entropy loss there might be an issue.
On data of shape time steps x features, I did not find a way to specify the dim to apply softmax over. Instead you can specify it in log_softmax and then use NLLLoss (you get the same loss overall with a proper softmax normalization). |
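A minimal sketch of this setup (shapes and sizes are made up for illustration):
import torch
import torch.nn as nn
import torch.nn.functional as F

bs, T, hidden, out_features = 4, 10, 32, 5
lstm_out = torch.randn(bs, T, hidden)                    # [bs, timesteps, hidden]

linear = nn.Linear(hidden, out_features)
logits = linear(lstm_out)                                # applied per time step -> [bs, T, out_features]

log_probs = F.log_softmax(logits, dim=-1)                # softmax over the feature dimension
targets = torch.randint(0, out_features, (bs, T))
loss = F.nll_loss(log_probs.permute(0, 2, 1), targets)   # NLLLoss expects [bs, C, T]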
st48159 | Hello!
I am trying to zero out some filter weights of a pytorch model before and after training. Once located the correct layers and filters, I go ahead and replace that precise key in the OrderedDictionary that is state_dict with a value of torch.zeros(correct size). By changing the value in the state_dict, am I satisfactorily changing the whole model, making it ready for training with my intended change (in other words, does the change propagate also to model.parameters() or anything that is use in train.py)? If not, what’s the best way of doing so.
Thanks a lot! |
st48160 | If you don’t create a deepcopy of the state_dict, it should work:
model = models.resnet18()
sd = model.state_dict()
sd['fc.weight'].zero_()
print(model.fc.weight)
> Parameter containing:
tensor([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], requires_grad=True)
However, the better approach would be probably to zero out the parameters directly in the model by wrapping it in a with torch.no_grad() statement and manipulating the parameters as you wish. |
st48161 | Thank you for the quick response! If I were to use the with torch.no_grad(), how would I change the model parameters directly as you mentioned? For instance, if I had a tensor where I want to zero out the weights of the filter at index 34 in it, how would I use torch.no_grad() and model.parameters() to do so? Thanks in advance! |
st48162 | This code snippet should work:
model = models.resnet18()
with torch.no_grad():
    model.conv1.weight[34].zero_()
print(model.conv1.weight[33:35]) |
st48163 | modifying by reference doesn’t work for me, this method worked:
How to change weight after traning nueralNetwork
You can get the parameters with state_dict = model.state_dict(), and state_dict will hold all the trainable parameters.
You can do whatever changes you want to the content of the dict.
At last you just use model.load_state_dict(state_dict) to load all the updated state_dict back to the model. |
st48164 | Hi everyone
I have a stupid question:
Does anyone know what the form of the loss function in a denoising autoencoder should be?
Should it be like below?
loss = criterion(model(noisy_data), noise_less_data)
Basically, model(noisy_data) means the model will be trained with inputs that are corrupted data, and the loss function calculates the difference (here MSE) between the output of the model and the data that are not noisy?
That makes no sense to me, because if we already have access to the noiseless data, then what’s the point of building a denoising autoencoder? |
st48165 | The goal would be to train the model to be able to denoise new data.
The same question might apply to why we would like to train a model to classify dogs and cats, if we already have the labels. |
st48166 | That totally makes sense.
One more question:
Do you know if there is any method (model) that can do this in an unsupervised way? I mean, denoising noisy data without ever seeing noiseless data?
Thanks |
st48167 | This is most likely not the state-of-the-art technique anymore, but have a look at Deep Image Prior. |