st31868
|
I have changed the code:
def forward(self, x):
    for i in range(len(self.stack)):
        torch.cuda.synchronize()
        t1 = time.time()
        x = self.stack[i](x)
        torch.cuda.synchronize()
        t2 = time.time()
        print(t2 - t1, 'layer:{}'.format(i))
    return x
and ran more iterations to get the running times as follows:
iteration: 0
1.0876386165618896 layer:0
0.0024077892303466797 layer:1
0.002411365509033203 layer:2
0.002465963363647461 layer:3
0.002351045608520508 layer:4
1.097477674484253 total
iteration: 1
0.002420186996459961 layer:0
0.0024178028106689453 layer:1
0.002460479736328125 layer:2
0.0024292469024658203 layer:3
0.0023517608642578125 layer:4
0.01389312744140625 total
iteration: 2
0.0024149417877197266 layer:0
0.002396106719970703 layer:1
0.0003743171691894531 layer:2
0.00017976760864257812 layer:3
7.557868957519531e-05 layer:4
0.0074841976165771484 total
iteration: 3
0.0002980232238769531 layer:0
0.0004563331604003906 layer:1
0.0024309158325195312 layer:2
0.0023963451385498047 layer:3
0.0023696422576904297 layer:4
0.008256196975708008 total
...
I am wondering whether this is related to the so-called warm-up iteration?
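For reference, the slow first pass is typically the warm-up cost: CUDA context creation, cuDNN algorithm selection, and memory-pool allocation all happen during the first iterations. A minimal timing sketch along those lines (benchmark is an illustrative helper, not from this thread, and assumes a CUDA model and input):
import time
import torch

def benchmark(model, x, warmup=5, iters=20):
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):          # warm-up iterations are not timed
            model(x)
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()
    return (time.time() - t0) / iters    # mean time per iteration in seconds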
|
st31869
|
Hello all,
Consider a MNIST dataloader with batch size 128:
train_loader = data.DataLoader(datasets.MNIST('./data', train=True, transform=transforms.ToTensor()), batch_size=128, shuffle=True)
When a batch of 128 images is processed during training, will this data loader always need to go to the disk for fetching the next batch of 128 images into the RAM?
In case it has to go to the disk every time, how can we fetch it all at once (as MNIST is a small dataset where most of it could fit directly into RAM) and keep it in the RAM until all training iterations are complete – assuming that the training still needs batch size 128?
Thanks and Regards,
Sumit
|
st31870
|
Solved by ptrblck in post #4
It depends on the implementation of the corresponding dataset and you could check it in the source code (as done for MNIST using my link).
E.g. the often used ImageFolder dataset uses lazy loading, to save memory, while the CIFAR datasets also load the data and target into the RAM as seen here.
Cu…
|
st31871
|
The data and target tensors in MNIST will be directly stored into the RAM, as they are quite small, as seen in these lines of code.
|
st31872
|
Thank you very much for the answer and the code reference.
Just to follow up, how is the data and target storage in RAM managed by the dataloader when the datasets become bigger, e.g. CIFAR-100 (medium scale) to ImageNet (large scale)? I mean, what logic is used by the dataloader to figure out how much of it is to be kept in RAM, considering the available memory and batch size? I sometimes need to use custom datasets (some small scale, some large), so such conceptual clarification and a reference to some code parts will help me create an efficient dataloader for them based on their size.
|
st31873
|
It depends on the implementation of the corresponding dataset and you could check it in the source code (as done for MNIST using my link).
E.g. the often used ImageFolder dataset uses lazy loading to save memory, while the CIFAR datasets also load the data and target into the RAM as seen here.
Custom Dataset implementations can be written in either way, since you are defining how the data is loaded.
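A minimal sketch of the eager style for a custom Dataset (the class and variable names are illustrative): everything is kept in RAM for the whole run, whereas a lazy variant would open files inside __getitem__ instead:
from torch.utils.data import Dataset

class InMemoryDataset(Dataset):
    def __init__(self, images, targets):
        # tensors are loaded once and kept in RAM, like MNIST/CIFAR in torchvision
        self.images = images
        self.targets = targets
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return self.images[idx], self.targets[idx]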
|
st31874
|
Hi, I wonder how to pad other values in the grid_sample function instead of 0, could you give me some advice?
|
st31875
|
I think I have a solution. We can first map the tensor through y = x - 255; when we apply grid_sample to y, we can then use z = y + 255 to get a padding effect of 255.
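A sketch of that shift trick (the helper name is illustrative; it relies on grid_sample padding with zeros, so shifting by the desired padding value before sampling and shifting back afterwards yields that value in the padded regions):
import torch.nn.functional as F

def grid_sample_with_pad_value(x, grid, pad_value=255.0):
    y = x - pad_value                                    # padded zeros now stand for pad_value
    out = F.grid_sample(y, grid, padding_mode='zeros', align_corners=False)
    return out + pad_value                               # shift back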
|
st31876
|
We decided to implement the CNN architecture, which looks like this:
# implement the MNISTNet network architecture
class FashionMNISTNet(nn.Module):
# define the class constructor
def __init__(self):
# call super class constructor
super(FashionMNISTNet, self).__init__()
# specify convolution layer 1
self.layer_1 = nn.Sequential(
nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3, padding=1),
nn.BatchNorm2d(32),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
# specify convolution layer 2
self.layer_2 = nn.Sequential(
nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3),
nn.BatchNorm2d(64),
nn.ReLU(),
nn.MaxPool2d(2)
)
self.linear1 = nn.Linear(64*6*6, 600)
self.drop = nn.Dropout2d(0.25)
self.linear2 = nn.Linear(600, 120)
self.linear3 = nn.Linear(120, 10)
# add a softmax to the last layer
# self.logsoftmax = nn.LogSoftmax(dim=1) # the softmax
# define network forward pass
def forward(self, images):
x = self.layer_1(images)
x = self.layer_2(x)
x = x.view(x.size(0), -1)
x = self.linear1(x)
x = self.drop(x)
x = self.linear2(x)
x = self.linear3(x)
# define layer 3 forward pass
# x = self.logsoftmax(self.linear3(x))
# return forward pass result
return x
When we try to run the network training with the same code (except mini_batch_size = 100), every train_epoch_loss displays "nan", and the error shows this:
[image: error screenshot, 1478×441]
Does anyone know how to fix this error?
Thank you very much!
|
st31877
|
You'll need to reshape/unsqueeze your inputs as [10000, 1, 28, 28]. You need to do this because the network expects inputs of shape [_, channel, height, width].
|
st31878
|
You could do it in your forward function as images=images.unsqueeze(dim=1). However, it would be better if you include this step in your data preprocessing pipeline.
|
st31879
|
pchandrasekaran:
images=images.unsqueeze(dim=1)
You mean here:
[image: code screenshot, 776×598]
It does not seem to work, could you be more precise?
I am fairly new to the coding world.
|
st31880
|
def forward(self, images):
images=images.unsqueeze(dim=1)
x = self.layer_1(images)
x = self.layer_2(x)
x = x.view(x.size(0), -1)
.
.
.
You need to perform the step before you begin passing it into the network.
|
st31881
|
Thanks, now this works:
# define network forward pass
def forward(self, images):
images=images.unsqueeze(dim=1)
x = self.layer_1(images)
x = self.layer_2(x)
x = x.view(x.size(0), -1)
x = self.linear1(x)
x = self.drop(x)
x = self.linear2(x)
x = self.linear3(x)
# define layer 3 forward pass
# x = self.logsoftmax(self.linear3(x))
# return forward pass result
return x
But it still leads to an issue here:
# init collection of training epoch losses
train_epoch_losses = []
# set the model in training mode
model.train()
# train the MNISTNet model
for epoch in range(num_epochs):
# init collection of mini-batch losses
train_mini_batch_losses = []
# iterate over all-mini batches
for i, (images, labels) in enumerate(fashion_mnist_train_dataloader):
# push mini-batch data to computation device
images = images.to(device)
labels = labels.to(device)
# run forward pass through the network
outputs = model(images)
# reset graph gradients
model.zero_grad()
# determine classification loss
loss = nll_loss(outputs, labels)
# run backward pass
loss.backward()
# update network parameters
optimizer.step()
# collect mini-batch reconstruction loss
train_mini_batch_losses.append(loss.data.item())
# determine mean mini-batch loss of epoch
train_epoch_loss = np.mean(train_mini_batch_losses)
# print epoch loss
now = datetime.utcnow().strftime("%Y%m%d-%H:%M:%S")
print('[LOG {}] epoch: {} train-loss: {}'.format(str(now), str(epoch), str(train_epoch_loss)))
# set filename of actual model
model_name = 'fashion_mnist_model_epoch_{}.pth'.format(str(epoch))
# save current model to GDrive models directory
torch.save(model.state_dict(), os.path.join(models_directory, model_name))
# determine mean mini-batch loss of epoch
train_epoch_losses.append(train_epoch_loss)
with this error message:
[image: error screenshot, 1467×450]
|
st31882
|
I compared PyTorch to PyTorch JIT.
Before inference, I did a warm-up of 4 iterations.
PyTorch load:
models.resnet152(pretrained=True).cuda().eval()
PyTorch JIT load:
torch.jit.load('./resnet152_model.pt').cuda().eval()
Then both cases are faster.
The reason for the PyTorch JIT result is that the optimized graphs are cached along with the bytecode by the JIT compiler.
But why is PyTorch's result faster?
|
st31883
|
I am new to PyTorch. I want to install all the packages in Spyder. Can someone give me all the commands or a link from where I can install everything?
|
st31884
|
Download Anaconda: link
Go to Anaconda Navigator and install Spyder.
Install PyTorch in a conda environment: link
|
st31885
|
On the PyTorch install page, select CUDA and the install command will do everything for you.
|
st31886
|
ImportError: cannot import name 'USE_GLOBAL_DEPS' from 'torch._utils_internal' (C:\ProgramData\Anaconda3\lib\site-packages\torch\_utils_internal.py)
I am facing this error.
|
st31887
|
Try changing the directory. You have to provide the complete process that got you to the error; it is difficult to tell without that.
|
st31888
|
ImportError Traceback (most recent call last)
in
----> 1 import torch
~\anaconda3\lib\site-packages\torch\__init__.py in
     21
     22 from ._utils import _import_dotted_name
---> 23 from ._utils_internal import get_file_path, prepare_multiprocessing_environment, \
     24     USE_RTLD_GLOBAL_WITH_LIBTORCH, USE_GLOBAL_DEPS
     25 # TODO(torch_deploy) figure out how to freeze version.py in fbcode build
ImportError: cannot import name 'USE_GLOBAL_DEPS' from 'torch._utils_internal' (C:\Users\ANTHONY\anaconda3\lib\site-packages\torch\_utils_internal.py)
I am having this error. I reinstalled it, but the same error persists.
|
st31889
|
Is there a plan for when different Python versions will be deprecated? I am specifically curious about when Python 3.6 will be deprecated, as it reaches EOL at the end of this year.
Thanks for the help
|
st31890
|
Hello all, I am trying to train a ResNet-50 model on ImageNet. My requirement is that I only want to train a subset of the weights, using a mask. I have successfully done this with general ResNet models on CIFAR-10 using the following sequence of code:
optimizer.zero_grad()
loss.backward()
for name, p in model.named_parameters():
if 'weight' in name:
p.grad.data = p.grad.data * masks[name]
optimizer.step()
Where masks is a dictionary containing layerwise masks with values of 1s and 0s.
Now I am trying to incorporate the same idea into distributed ImageNet training. I am using NVIDIA's ResNet repo, which utilizes AMP. I tried including the following code block in NVIDIA's implementation, in this function at line 209:
scaler.scale(loss).backward()
for name, p in model_and_loss.named_parameters():
if 'weight' in name:
p.grad.data = p.grad.data * masks[name]
However, with the above code, all of the weights are still getting trained after the gradient step. The optimizer step is (I think) done here:
optimizer_step = ((i + 1) % batch_size_multiplier) == 0
loss = step(input, target, optimizer_step=optimizer_step)
if ema is not None:
ema(model_and_loss, epoch*steps_per_epoch+i)
I am unsure how to achieve this in the NVIDIA implementation, and also unsure how the gradient step is being taken.
My goal was to zero out a subset of the gradients before taking the optimizer.step(); however, with AMP it does not seem to be straightforward, especially in NVIDIA's repo.
My question is: is there a different way to accomplish this in the training code here?
Moreover, during the AMP gradient update, are the same gradients from parameter.grad used, or am I missing something?
Thanks a lot for reading this. Please let me know if I need to clarify something; I will try to respond quickly. I appreciate the help.
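For what it's worth, a minimal sketch of masking gradients under torch.cuda.amp (this is generic amp usage, not NVIDIA's repo code, and assumes the thread's model, masks, optimizer and a GradScaler named scaler):
scaler.scale(loss).backward()
scaler.unscale_(optimizer)                     # grads are now in their true, unscaled values
for name, p in model.named_parameters():
    if 'weight' in name and p.grad is not None:
        p.grad.mul_(masks[name])               # zero out the masked entries
scaler.step(optimizer)                         # skips the step if inf/NaN grads were found
scaler.update()
optimizer.zero_grad()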
|
st31891
|
Hello, I am developing a model to apply to FMD (Flickr Material Database), but training on that database alone only leads to 30% accuracy. Now I want to pre-train the model on ImageNet, but I don't know how to do it. I haven't even discovered how to download it in a simple way yet. How should I do it?
Also, since I don't have GPUs I am using Colab, which has small storage (64 GB) compared to ImageNet's size (around 150 GB zipped). So I think my way is to upload the dataset to my edu Drive account, which has unlimited storage (for now).
Thank you!
|
st31892
|
If you are just prototyping you might be able to just use a pretrained torchvision model:
https://pytorch.org/vision/stable/models.html
Downloading from the original website is often tricky, but the academic torrents version works well if you have a way to mount or access > 150GB of storage.
|
st31893
|
How do I code the following graph structure?
Which kind of data structure is suitable in this case for both forward inference and backward propagation?
Would Tutorial — NetworkX 2.5 documentation, PyTorch Geometric Documentation — pytorch_geometric 1.7.0 documentation, or https://www.dgl.ai/ be suitable?
[image: graph structure, 976×1015]
github.com/D-X-Y/AutoDL-Projects, "Questions about DARTS" (issue opened May 1, 2021 by promach, labeled "question"):
1. For the DARTS complexity analysis, does anyone have any idea how to derive the (k+1)…k/2 expression? Why 2 input nodes? How will the calculated value change if graph isomorphism is considered? Why "2+3+4+5" learnable edges? If there is a lack of connection, the paper should not add 1, which does not actually contribute to the learnable edge configurations at all?
2. Why do the weights for normal cells and reduction cells need to be trained separately, as shown in Figures 4 and 5 below?
3. How to arrange the nodes such that the NAS search will actually converge with minimum error? Note: not all nodes are connected to every other node.
4. Why is GDAS (https://arxiv.org/pdf/1910.04465.pdf#page=7) 10 times faster than DARTS?


gist.github.com
https://gist.github.com/promach/b6f526c56e20f029d68e6f9041c3f5c0#file-gdas-py-L106-L107
gdas.py
# https://github.com/D-X-Y/AutoDL-Projects/issues/99
import torch
import torch.utils.data
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
This file has been truncated.
|
st31894
|
Hello, I am training a CNN model in a federated learning setting. I update my local models and then average the weights I get from the local model updates to get the weights of my global_model. Since I use loss.backward() during my local model training, I can get the gradients by using:
updates_list = []
for item in model.parameters():
    updates_list.append(copy.deepcopy(item.grad))
print(updates_list)
But I cannot use .grad for my global_model. My question is: how do I get the updated gradients of my global_model?
|
st31895
|
I'm not sure if I understand the use case correctly, but in case you would like to use the gradients in updates_list, you could assign these values to the .grad attributes of the parameters of the global_model and call optimizer.step() afterwards.
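A minimal sketch of that suggestion (assuming updates_list is ordered exactly like global_model.parameters() and the optimizer was built over those parameters):
with torch.no_grad():
    for p, g in zip(global_model.parameters(), updates_list):
        p.grad = g.clone()        # assign the collected/averaged gradients
optimizer.step()                  # updates global_model using the assigned .grad values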
|
st31896
|
Hello,
While trying to debug what seemed to be a deadlock, I realized that I could not share a tensor with sub-processes if it was more than 128 kB.
# Works fine
buffer = ch.zeros(32768).share_memory_()
# sub-processes hang when try to read/write the tensor (even protected by a lock)
# (Main process can read it fine)
buffer = ch.zeros(32769).share_memory_()
Is there a configuration option that would allow me to allocate more shared memory in a single tensor?
Thank you for your help
|
st31897
|
It seems to be a bug since it was working in version 1.4.0. Filed a bug report here: Operating on more than 128kb of shared memory hangs · Issue #58962 · pytorch/pytorch · GitHub
|
st31898
|
Hi,
My use case is to take an FP32 pre-trained PyTorch model, convert it to FP16 (both the weights and the computation that is amenable to FP16), and then trace the model. Later, I will read this model in TVM (a deep learning compiler) and use it to generate code for ARM servers. ARM servers have instructions to speed up FP16 computation. Please let me know if this is possible today.
Note that I need to use TorchScript (as TVM's input is a traced model). Also, the goal here is to get a speedup, so my hope is that the resulting traced model has some operators (like conv2d, dense) whose input dtypes are FP16.
@ptrblck
|
st31899
|
autocast is not supported yet in scripted models, but in case your model doesn’t use any data dependent control flow, you could try to trace the model inside the autocast context.
I don't know how the TorchScript model would be provided to TVM or what kind of instructions ARM servers are using.
|
st31900
|
I see. Thanks for the reply @ptrblck
I will read more about autocast tomorrow and will try tracing it.
Meanwhile, I also used the following script (using .half()), which gave me an FP16 graph:
import torchvision
import torch
import tvm
from tvm import relay
is_fp16 = True
model = torchvision.models.resnet18(pretrained=True)
x = torch.rand(1, 3, 224, 224)
if is_fp16:
model = model.cuda().half()
x = x.cuda().half()
scripted_model = torch.jit.trace(model, (x))
scripted_model.save("rn50_fp16.pt")
else:
scripted_model = torch.jit.trace(model, (x))
scripted_model.save("rn50_fp32.pt")
input_name = "input0"
input_shape = (1, 3, 224, 224)
shape_list = [(input_name, input_shape)]
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
mod = relay.transform.InferType()(mod)
print(mod)
My use case is inference. The above code converted the whole model to FP16. But I am worried that it might be doing too much in FP16 and may need to go back to FP32 for some operators like sum or batch_norm.
|
st31901
|
That’s a reasonable concern and is the reason for the implementation of the amp.autocast, which would use FP32 precision, where necessary. For your use case you could certainly try to call .half() on the entire model and check the results manually.
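A rough sketch of such a manual check, comparing the FP16 outputs against the FP32 reference (assumes a CUDA device; the tolerance you accept depends on the model):
import torch
import torchvision

model_fp32 = torchvision.models.resnet18(pretrained=True).cuda().eval()
model_fp16 = torchvision.models.resnet18(pretrained=True).cuda().eval().half()
x = torch.rand(1, 3, 224, 224).cuda()
with torch.no_grad():
    out32 = model_fp32(x)
    out16 = model_fp16(x.half()).float()
print((out32 - out16).abs().max())   # inspect the worst-case deviation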
|
st31902
|
Makes sense. So, I followed up on autocast. This is what my code looks like
model = torchvision.models.resnet18()
x = torch.rand(1, 3, 224, 224).cuda()
model = model.cuda()
with torch.cuda.amp.autocast(enabled=True):
scripted_model = torch.jit.trace(model, (x))
scripted_model.save("rn50_amp.pt")
It failed with the following error - RuntimeError: Cannot insert a Tensor that requires grad as a constant. Consider making it a parameter or input, or detaching the gradient
I then added no_grad
model = torchvision.models.resnet18()
x = torch.rand(1, 3, 224, 224).cuda()
model = model.cuda()
with torch.cuda.amp.autocast(enabled=True):
with torch.no_grad():
scripted_model = torch.jit.trace(model, (x))
scripted_model.save("rn50_amp.pt")
But this also failed, with a different error:
torch.jit._trace.TracingCheckError: Tracing failed sanity checks!
ERROR: Graphs differed across invocations!
I am not sure if my usage is correct.
|
st31903
|
Thanks for the update. I guess that some (new) graph passes might be running into these issues, but I’m unsure which optimizations are performed. Until scripting + amp is fully supported, you could manually cast the model and check the outputs.
|
st31904
|
Thanks @ptrblck for the prompt responses. You have been really helpful. I will manually cast the models till then.
|
st31905
|
Hello PyTorch-Community,
I am new to deep learning and I am currently taking a class on the topic in college. For this, we (me and a classmate) are supposed to find a topic that interests us and try deep learning in its sphere.
We are trying to build a neural net that can be used to forecast the price of various cryptocurrencies based on their previous values, the number of trades, etc.
We have managed to set it up, and an epoch loss is calculated by the program, but it always stays the same or the changes are very minor. We think the issue is that the value of cryptocurrencies is very hard to forecast based on our data, but maybe we have missed something?
I have included the code below. Thanks a lot in advance for any help!
run.py (the main script):
from CryptoPredicter.util.pred_data import PredicterDataset
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
import custom_lstm as lstm
import matplotlib.pyplot as plt
seq_size = 100
batch_size = 100
# See documentation on PredictorDataset
data = PredicterDataset(seq_size, interval="Hourly", refresh=False, data_to_consider=100, coin_name='ETH')
# Get input size from data
input_size = data.__getitem__(0)[0].shape[1]
# Creating DataLoader
loader = DataLoader(dataset=data, batch_size=batch_size)
net = OurNet(input_size)
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.5)
# optimizer = optim.LBFGS(net.parameters(), lr=0.01)
epoch_loss_list = []
for epoch in range(50):
running_loss = 0
for inputs, labels in loader:
optimizer.zero_grad()
outputs = net(inputs.float())
loss = criterion(torch.squeeze(outputs), labels.float())
loss.backward()
optimizer.step()
running_loss += loss.item()
epoch_loss = running_loss / len(loader)
epoch_loss_list.append(epoch_loss)
print('Epoch loss: ' + str(running_loss / len(loader)))
OurNet.py (where we instantiate our custom LSTM):
import torch
import torch.nn as nn
import math
class CustomLSTM(nn.Module):  # you could also use PyTorch's built-in LSTM
def __init__(self, input_sz: int, hidden_sz: int):
super().__init__()
self.input_size = input_sz
self.hidden_size = hidden_sz
self.U_f = nn.Parameter(torch.Tensor(input_sz, hidden_sz)) # depends on the input
self.V_f = nn.Parameter(torch.Tensor(hidden_sz, hidden_sz))
self.b_f = nn.Parameter(torch.Tensor(hidden_sz))
self.U_i = nn.Parameter(torch.Tensor(input_sz, hidden_sz)) # depends on the input
self.V_i = nn.Parameter(torch.Tensor(hidden_sz, hidden_sz))
self.b_i = nn.Parameter(torch.Tensor(hidden_sz))
self.U_o = nn.Parameter(torch.Tensor(input_sz, hidden_sz)) # depends on the input
self.V_o = nn.Parameter(torch.Tensor(hidden_sz, hidden_sz))
self.b_o = nn.Parameter(torch.Tensor(hidden_sz))
self.U_g = nn.Parameter(torch.Tensor(input_sz, hidden_sz)) # depends on the input
self.V_g = nn.Parameter(torch.Tensor(hidden_sz, hidden_sz))
self.b_g = nn.Parameter(torch.Tensor(hidden_sz))
self.init_weights()
def init_weights(self):
stdv = 1.0 / math.sqrt(self.hidden_size)
for weight in self.parameters():
weight.data.uniform_(-stdv, stdv)
def forward(self, x):
bs, seq_sz, _ = x.shape # assumes x.shape represents (batch_size, sequence_size, input_size)
hidden_seq = []
# c_t and h_t must be initialized here
c_t = torch.zeros(bs, self.hidden_size)
h_t = torch.zeros(bs, self.hidden_size)
for t in range(seq_sz):
x_t = x[:, t, :]
# h_t = torch.sigmoid(x_t @ self.W_f + h_t @ self.U_f + self.b_f)
f_t = torch.sigmoid(x_t @ self.U_f + h_t @ self.V_f + self.b_f) # @ is matrix multiplication
i_t = torch.sigmoid(x_t @ self.U_i + h_t @ self.V_i + self.b_i)
o_t = torch.sigmoid(x_t @ self.U_o + h_t @ self.V_o + self.b_o)
g_t = torch.tanh((x_t @ self.U_g + h_t @ self.V_g + self.b_g))
c_t = f_t * c_t + i_t * g_t
h_t = o_t * torch.tanh(c_t)
hidden_seq.append(h_t.unsqueeze(
0)) # transform h_t from shape (batch_size, input_size) to shape (1, batch_size, input_size)
# reshape hidden_seq
hidden_seq = torch.cat(hidden_seq,
dim=0) # concatenate list of tensors into one tensor along dimension 0 (sequence_size, batch_size, input_size)
hidden_seq = hidden_seq.transpose(0,
1).contiguous() # exchange new dimension with batch dimension so that new tensor has required shape (batch_size, sequence_size, input_size)
return h_t, hidden_seq
downloader.py (we use it to download the currency data):
import json
import os
import pathlib
import pickle
import shutil
import ssl
from os import listdir
import numpy as np
import pandas as pd
from urllib import request
from bs4 import BeautifulSoup
from sklearn import preprocessing
base_url = 'available upon request'
download_base_url = 'available upon request'
path = str(pathlib.Path(__file__).parent.absolute()) + os.sep + 'data' + os.sep
samples_path = path + "Samples" + os.sep
mil_seconds_of_one_day = 86400000
MOZILLA_HEADER = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36',
'Accept-Language': 'de-DE,de;q=0.9,en-US;q=0.8,en;q=0.7',
'Accept-Encoding': 'gzip, deflate, br'}
ssl._create_default_https_context = ssl._create_unverified_context
def prepare_data(interval, refresh, data_to_consider):
download(interval, refresh)
### Create sample_path
if not os.path.exists(samples_path + interval):
os.makedirs(samples_path + interval)
# List of all available coins and their data
all_coins = listdir(path + interval)
# Getting newest unix for calculation data to consider and list of all avialable timestamps
all_ts = get_newest_ts_and_all_ts(path + interval + os.sep + all_coins[0])
# Calculate the min timestamp that should be considered
min_timestamp = get_min_unix_timestamp(data_to_consider, max(all_ts))
# Filter all relevant timestamps
ts_list = list(filter(lambda all_ts: all_ts > min_timestamp, all_ts))
# Creating samples data
print('Taking %s timestamps in consideration' % (len(ts_list) - 1))
# Check if samples already exist else delete data
if check_if_samples_exist(interval, len(ts_list) - 1, refresh):
print('Sample data already exist')
return
count = 0
print('Creating samples ...')
for ts in ts_list:
# Leave out first timestamp because we have no y data for this
if ts == max(all_ts):
continue
count += 1
percent = str(((count / (len(ts_list) - 1)) * 100))
print('... %s/%s -> %s' % (count, len(ts_list) - 1, str(((count / (len(ts_list) - 1)) * 100))))
dataf = pd.DataFrame(columns=['name', 'open', 'high', 'low', 'close', 'Volume', 'Volume USDT', 'tradecount'])
for coin in all_coins:
with open(path + interval + os.sep + coin, 'r') as f:
df = pd.read_csv(f, sep=',', header=1)
feature = df.loc[df['unix'] == ts].values[:, 3:]
if not len(feature) == 1:
raise RuntimeError('No or to much data found')
dataf.loc[len(dataf)] = [coin] + feature[0].tolist()
# Normalize
min_max_scaler = preprocessing.MinMaxScaler()
dataf.iloc[:, 1:] = min_max_scaler.fit_transform(dataf.iloc[:, 1:])
with open(samples_path + interval + os.sep + str(int(ts / 1000)) + ".pkl", 'wb') as f:
pickle.dump(dataf, f, pickle.HIGHEST_PROTOCOL)
def check_if_samples_exist(interval, ts, refresh):
if os.path.exists(samples_path + interval):
if len(listdir(samples_path + interval)) == ts and not refresh:
return True
return False
def get_newest_ts_and_all_ts(file):
with open(file, 'r') as file:
return pd.read_csv(file, header=1)['unix']
raise EOFError("Could not find newest date. Please check your data.")
def get_min_unix_timestamp(data_to_consider, newest_ts):
min_unix_timestamp = newest_ts - mil_seconds_of_one_day * data_to_consider
return min_unix_timestamp
def download(interval, refresh):
### Check if download is needed
if os.path.exists(path + interval) and not refresh:
print(interval + " - Data already exist")
return
### Delete old data
if os.path.exists(path + interval):
shutil.rmtree(path + interval)
### Make sure path exist
os.makedirs(path + interval)
### Get all URLs to download
urls = get_download_urls(interval)
i = 0
### Download data
print("Downloading %s data ..." % interval)
for url in urls:
i += 1
print("Downloading ... %s/%s" % (str(i), str(len(urls))))
download_url = download_base_url + url
request.urlretrieve(download_url, path + interval + os.sep + get_name(download_url))
print("... download done!")
def get_download_urls(interval):
### search for all links and see if link contains USDT. If yes -> it's data we want to download
page = request.urlopen(base_url).read()
soup = BeautifulSoup(page, 'html.parser')
links = soup.findAll('a')
download_links = [link['href'] for link in links if
"USDT".upper() in link['href'].upper()
and interval in link.text
and "futures".upper() not in link['href'].upper()]
return download_links
def get_name(link):
### just give us the name of the crypto coin
return link.split("_")[1][:-4]
pred_data.py (we use to transform the data into a dataframe):
import json
import os
import pathlib
import pickle
import random
from datetime import datetime
import pandas as pd
from os import listdir
import torch
from torch.utils.data import Dataset
from CryptoPredicter.util.Downloader import prepare_data
from CryptoPredicter.util.interval_to_unix import int_to_unix
path_data = str(pathlib.Path(__file__).parent.absolute()) + os.sep + 'data' + os.sep
path_sample = path_data + 'Samples' + os.sep
class PredicterDataset(Dataset):
'''
seq_size = How many data should be delivered for one run.
interval = Daily, Hourly100 or Minute -> describes the data detail gradient.
refresh = Set to "True" if data should be reloaded from internet.
data_to_consider = Defines how many days in the past should be considered.
coin_name = The short name of the coin that should be predicted; accepted input values -> ['ADA', 'BNB', 'BTC', 'BTT', 'DASH', 'EOS', 'ETC', 'ETH', 'LINK', 'LTC', 'NEO', 'QTUM', 'TRX', 'XLM', 'XMR', 'XRP', 'ZEC']
'''
def __init__(self, seq_size, interval="Minute", refresh=False, data_to_consider=100, coin_name='BTC'):
prepare_data(interval, refresh, data_to_consider)
self.seq_size = seq_size
self.paths = listdir(path_sample + interval)
self.interval = interval
self.y_data = get_y_data(coin_name, interval)
self.coin_name = coin_name
random.shuffle(self.paths)
def __len__(self):
return len(self.paths)
def __getitem__(self, idx):
data_link = path_sample + os.sep + self.interval + os.sep + str(self.paths[idx])
with open(data_link, 'rb') as f:
item = pickle.load(f)
x = torch.tensor(item[['open', 'high', 'low', 'close', 'Volume', 'Volume USDT', 'tradecount']].values)
x = x[len(x) - self.seq_size: len(x)]
timestamp = int(self.paths[idx][:-4]) * 1000
try:
y = float(self.y_data.loc[self.y_data['unix'] == timestamp]['diff'].values[0])
except IndexError as e:
return self.__getitem__(idx + 1)
return x, y
def get_y_data(coin_name, interval):
with open(path_data + interval + os.sep + coin_name) as f:
# Load dataframe of the needed coin
df = pd.read_csv(f, sep=',', header=1).copy()
# Set unix timestamp to t-1
df['unix'] = df['unix'] - int_to_unix[interval]
# Calculate diff
df['diff'] = df['close'] - df['open']
# Drop not needed columns
df = df.drop(columns=df.columns[1:-1].values.tolist())
return df
Any help is kindly appreciated!
|
st31906
|
Hi all,
my computer has a GeForce GT 710 graphics card (OS: Linux Mint). My understanding is that I need to build PyTorch from source to avoid the "CUDA error: no kernel image is available for execution on this device" warning.
I have so far followed the instructions specified on GitHub (GitHub - pytorch/pytorch: Tensors and Dynamic neural networks in Python with strong GPU acceleration). However, I am getting an error while compiling:
CMake Warning at modules/observers/CMakeLists.txt:12 (add_library):
Cannot generate a safe runtime search path for target caffe2_observers
because files in some directories may conflict with libraries in implicit
directories:
runtime library [libcudart.so.10.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
/home/mareike/miniconda3/envs/ginjinn/lib
runtime library [libnvToolsExt.so.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
/home/mareike/miniconda3/envs/ginjinn/lib
runtime library [libcufft.so.10] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
/home/mareike/miniconda3/envs/ginjinn/lib
runtime library [libcurand.so.10] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
/home/mareike/miniconda3/envs/ginjinn/lib
runtime library [libcublas.so.10] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
/home/mareike/miniconda3/envs/ginjinn/lib
Some of these libraries may not be found correctly.
CMake Error at modules/observers/CMakeLists.txt:12 (add_library):
The install of the caffe2_observers target requires changing an RPATH from
the build tree, but this is not supported with the Ninja generator unless
on an ELF-based platform. The CMAKE_BUILD_WITH_INSTALL_RPATH variable may
be set to avoid this relinking step.
CMake Error at modules/observers/CMakeLists.txt:12 (add_library):
The install of the caffe2_observers target requires changing an RPATH from
the build tree, but this is not supported with the Ninja generator unless
on an ELF-based platform. The CMAKE_BUILD_WITH_INSTALL_RPATH variable may
be set to avoid this relinking step.
– Generating done
CMake Generate step failed. Build files cannot be regenerated correctly.
My best idea so far is to set CMAKE_BUILD_WITH_INSTALL_RPATH to TRUE. However, I am not too knowledgeable about programming, let alone cmake.
Other information that may be of use: I run gcc 9.3.0, g++ 9.3.0, cudatoolkit 10.1.243, and cudnn 7.6.5. I have everything including dependencies (numpy,… again as specified on github) installed in a conda environment. However, the error also occurs when I install everything globally.
Can someone help me to resolve this?
Best,
Mareike
|
st31907
|
Hello,
I have semantic segmentation code; this code helps me test the results of 25 images (using a confusion matrix). But I want to plot the ROC curve for the test dataset, and I am unable to do this. Please check my shared code and let me know how to properly draw the ROC curve using this code.
import os
import cv2
import torch
import numpy as np
from glob import glob
from model import AI_Net
from operator import add
from crf import apply_crf
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
""" Load the checkpoint """
model = AI_Net()
model = model.to(device)
model.load_state_dict(torch.load('datasets/models/A_Net/Fold_1_Model.pth', map_location=device))
model.eval()
def calculate_metrics(y_true, y_pred):
y_true = y_true.cpu().numpy()
y_pred = y_pred.cpu().numpy()
y_pred = y_pred > 0.5
y_pred = y_pred.reshape(-1)
y_pred = y_pred.astype(np.uint8)
y_true = y_true > 0.5
y_true = y_true.reshape(-1)
y_true = y_true.astype(np.uint8)
## Score
score_fpr, score_tpr, _ = roc_curve(y_true, y_pred)
score_roc_auc = roc_auc_score(y_true, y_pred)
return [score_fpr, score_tpr, score_roc_auc]
""" Load dataset """
root_path = 'datasets/UDIAT/'
test_x = sorted(glob(os.path.join(root_path, "test/images", "*.png")))
test_y = sorted(glob(os.path.join(root_path, "test/masks", "*.png")))
metrics_score = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
for i, (x, y) in enumerate(zip(test_x, test_y)):
name = y.split("/")[-1].split(".")[0]
## Image
image = cv2.imread(x, cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (128, 128))
ori_img1 = image
image1 = np.expand_dims(image, axis=0)
image1 = image1/255.0
image1 = np.expand_dims(image1, axis=0)
image1 = image1.astype(np.float32)
image1 = torch.from_numpy(image1)
image1 = image1.to(device)
## Mask
mask = cv2.imread(y, cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (128, 128))
ori_mask1 = mask
mask1 = np.expand_dims(mask, axis=0)
mask1 = mask1/255.0
mask1 = np.expand_dims(mask1, axis=0)
mask1 = mask1.astype(np.float32)
mask1 = torch.from_numpy(mask1)
mask1 = mask1.to(device)
# Shape
#print('image shape:', image1.shape, 'roi shape:', mask1.shape)
with torch.no_grad():
pred_y1 = torch.sigmoid(model(image1))
score = calculate_metrics(mask1, pred_y1)
metrics_score = list(map(add, metrics_score, score))
tpr = metrics_score[0]/len(test_x)
fpr = metrics_score[1]/len(test_x)
roc_auc = metrics_score[2]/len(test_x)
algorithm = 'CNN'
dataset = 'BUS'
rocCurve = plt.figure()
plt.plot(fpr, tpr, '-', label=algorithm + '_' + dataset + '(AUC = %0.4f)' % roc_auc)
plt.title('ROC curve', fontsize=14)
plt.xlabel("FPR (False Positive Rate)", fontsize=15)
plt.ylabel("TPR (True Positive Rate)", fontsize=15)
plt.legend(loc="lower right")
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.show()
|
st31908
|
Solved by ptrblck in post #4
I guess the inputs to roc_curve are wrong, so you would have to make sure they fit the expected arrays as described in the docs:
y_true ndarray of shape (n_samples,)
True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos_label should be explicitly given.
y_score ndarray of sh…
|
st31909
|
You could use the ROC implementations from other libraries such as sklearn.metrics.roc_curve. To do so you could transform the predictions and targets to numpy arrays via tensor.numpy() and apply the mentioned method.
|
st31910
|
Hello,
To store the y_true and y_pred results of all iterations, I added all_y_true and all_y_pred. But when I try to plot the ROC curve, it shows ValueError: continuous format is not supported, at line → 11 fpr, tpr, _ = roc_curve(y_true, y_pred)
Here is my code:
all_y_true = []
all_y_pred = []
for i, (x, y) in enumerate(zip(test_x, test_y)):
name = y.split("/")[-1].split(".")[0]
## Image
image = cv2.imread(x, cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (128, 128))
image = np.expand_dims(image, axis=0)
image = image/255.0
image = np.expand_dims(image, axis=0)
image = image.astype(np.float32)
image = torch.from_numpy(image)
image = image.to(device)
## Mask
mask = cv2.imread(y, cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (128, 128))
mask = np.expand_dims(mask, axis=0)
mask = mask/255.0
mask = np.expand_dims(mask, axis=0)
mask = mask.astype(np.float32)
mask = torch.from_numpy(mask)
#mask = mask.to(device)
all_y_true = np.append(all_y_true, mask)
# Shape
#print('image shape:', image1.shape, 'roi shape:', mask1.shape)
with torch.no_grad():
pred_y1 = torch.sigmoid(model(image))
y_pred = pred_y1.cpu().numpy()
y_pred = y_pred > 0.5
y_pred = y_pred.reshape(-1)
y_pred = y_pred.astype(np.uint8)
#y_pred.flatten()
all_y_pred = np.append(all_y_pred, y_pred)
# Calculate image-level ROC AUC score
y_true = all_y_true
y_pred = all_y_pred
fpr, tpr, _ = roc_curve(y_true, y_pred)
roc_auc = roc_auc_score(y_true, y_pred)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr, tpr, label='CNN (area = {:.3f})'.format(roc_auc))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
|
st31911
|
I guess the inputs to roc_curve are wrong, so you would have to make sure they fit the expected arrays as described in the docs:
y_true ndarray of shape (n_samples,)
True binary labels. If labels are not either {-1, 1} or {0, 1}, then pos_label should be explicitly given.
y_score ndarray of shape (n_samples,)
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers).
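Put together, a minimal sketch of what the docs ask for in this segmentation setting (test_images and test_masks are hypothetical preprocessed tensors; the key points are to keep continuous probabilities as y_score and to flatten everything to shape (n_samples,)):
import numpy as np
import torch
from sklearn.metrics import roc_curve, roc_auc_score

all_y_true, all_y_score = [], []
with torch.no_grad():
    for image, mask in zip(test_images, test_masks):
        prob = torch.sigmoid(model(image))                 # probabilities, not thresholded
        all_y_score.append(prob.cpu().numpy().reshape(-1))
        all_y_true.append((mask.cpu().numpy().reshape(-1) > 0.5).astype(np.uint8))
y_true = np.concatenate(all_y_true)      # (n_samples,), values in {0, 1}
y_score = np.concatenate(all_y_score)    # (n_samples,), continuous scores
fpr, tpr, _ = roc_curve(y_true, y_score)
roc_auc = roc_auc_score(y_true, y_score)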
|
st31912
|
I resolved that error, but now I am getting this one:
ValueError: multiclass format is not supported → line 12: fpr, tpr, _ = roc_curve(y_true, y_pred)
|
st31913
|
This indicates a wrong shape of one of the inputs, so you would have to make sure to use the described shapes from my previous post.
|
st31914
|
Thanks very much. I transformed my y_true and y_score into acceptable shapes, and the issue is resolved.
|
st31915
|
I would like to test the impact of randomly resetting (re-initializing) certain neurons during training, in a Pytorch-based model with a number of linear layers like this:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.linear1 = nn.Linear(D, M)
self.linear2 = nn.Linear(M, M)
self.linear3 = nn.Linear(M, M)
self.linear4 = nn.Linear(M, K)
def forward(self, x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = F.relu(self.linear3(x))
x = self.linear4(x)
x = F.log_softmax(x, dim=1)
return x
Ideally, I would like to implement a function/code snippet that resets the weights of one or more random neurons in each layer at the start of each epoch. The idea is that each neuron has a small probability of being re-initialized each epoch, to see what the effects are on performance in general, as well as on common issues such as dying ReLUs and vanishing gradients for sigmoid and tanh.
In other words: instead of single-epoch dropout, you'd have a permanent replacement of an existing neuron with a 'fresh' new trainable neuron.
I tried implementing this via calls to parameters or state_dict, but so far I only get permission and anomaly errors when trying this. I'm also not sure whether this should be implemented in the module itself or in the training part.
Any suggestions would be highly welcome!
|
st31916
|
There are several ways to initialize the parameters of a model in deep learning: torch.nn.init — PyTorch 1.8.1 documentation
So based on the code you proposed above, I suggest something like:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linears = nn.ModuleList([
            nn.Linear(D, M),
            nn.Linear(M, M),
            nn.Linear(M, M),
            nn.Linear(M, K)
        ])
# ...
# forward
# ...
def init_linear(linear):
with torch.no_grad():
# Choose your approach here, I just suggest some
nn.init.normal_(linear.weight, mean=0, std=1)
#nn.init.xavier_uniform_(linear.weight)
nn.init.constant_(linear.bias, 0.)
# Or when you want to initialize the whole model
def init_model(model):
with torch.no_grad():
for name, param in model.named_parameters():
# Choose your approach here, I just suggest some
if 'weight' in name:
torch.nn.init.xavier_uniform_(param.data)
elif 'bias' in name:
param.data.fill_(0)
# elif :
# ...
And during training:
for _ in range(MAX_EPOCH):
    # randomly choose the indices of the layers you want to reinitialize,
    # e.g. (requires `import random`):
    to_init = random.sample(range(len(model.linears)), k=1)
    for i in to_init:
        init_linear(model.linears[i])
    # ...
    # training
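A per-neuron variant of the same idea could look like this (a sketch, assuming a "neuron" is one output row of a Linear layer's weight plus its bias entry; the weight init mirrors nn.Linear's default, while the bias is simply zeroed):
import math
import torch
from torch import nn

def reset_random_neurons(linear: nn.Linear, p: float = 0.01):
    with torch.no_grad():
        for j in range(linear.out_features):
            if torch.rand(1).item() < p:                   # each neuron resets with probability p
                nn.init.kaiming_uniform_(linear.weight[j:j + 1], a=math.sqrt(5))
                if linear.bias is not None:
                    linear.bias[j] = 0.0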
|
st31917
|
Can I build a multi-layer RNN with a different hidden size per layer using PyTorch?
For example, a 3-layer RNN with feature sizes of 512, 256, and 128 at each layer respectively?
|
st31918
|
chenyuntc:
Yes, but you need to figure out the input and output of RNN/LSTM/GRU.
By 'layer' I mean the layers of a stacked RNN. The PyTorch RNN module only takes a single 'hidden_size' parameter, and all stacked layers have exactly the same hidden size. But is it possible to make the layers have different hidden sizes?
|
st31919
|
How about this:
import torch
from torch import nn
from torch.autograd import Variable
# layer1
# input_dim=10, output_dim=20
rnn1 = nn.LSTM(10, 20, 1)
input = Variable(torch.randn(5, 3, 10))
output1, hn = rnn1(input)
# layer2
# input_dim=20 output_dim=30
rnn2 = nn.LSTM(20, 30, 1)
output2, hn2 = rnn2(output1)
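The same idea wrapped into a module, as a sketch (names and the default sizes are illustrative; each layer's output sequence feeds the next layer and the per-layer (h_n, c_n) states are discarded):
from torch import nn

class StackedLSTM(nn.Module):
    def __init__(self, input_size, hidden_sizes=(512, 256, 128)):
        super().__init__()
        sizes = [input_size] + list(hidden_sizes)
        self.layers = nn.ModuleList(
            nn.LSTM(sizes[i], sizes[i + 1]) for i in range(len(hidden_sizes))
        )
    def forward(self, x):
        # x: (seq_len, batch, input_size)
        for lstm in self.layers:
            x, _ = lstm(x)
        return x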
|
st31920
|
Could you please elaborate on that? I do not understand why we have to discard hn in the second layer, while the scheme says that h2_t depends on both h2_(t-1) and h1_t. Is there a way to handle both states in PyTorch?
|
st31921
|
h2_t-1 is included in the same layer and h1_t is included in the output of the previous layer. You don’t need hn to make your network work.
|
st31922
|
I am writing a unit test and was wondering whether to use .item() or not. Both give the same result, so my question is: is it correct if I do it the first way (which looks cleaner to me)?
import unittest
import torch
class TestTensor(unittest.TestCase):
def test_values(self):
x = torch.arange(1, 10) > 8
self.assertTrue(x.any()) # 1-way
self.assertTrue(x.any().item()) # 2-way
|
st31923
|
I am trying to generate a heatmap for an image based on the weights from the last conv layer of a pretrained DenseNet-121, but when I try to multiply the weights by the output of the model it gives me this error:
"ValueError: operands could not be broadcast together with shapes (1024,) (7,7)"
This is the code:
import os
import torch
import urllib
from PIL import Image
from torchvision import transforms
import numpy as np
import cv2
# load the model
model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet121', pretrained=True)
# or any of these variants
# model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet169', pretrained=True)
# model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet201', pretrained=True)
# model = torch.hub.load('pytorch/vision:v0.9.0', 'densenet161', pretrained=True)
model.eval()
filename = 'dog.jpg'
# get input image and apply preprocessing
input_image = Image.open(filename)
preprocess = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0)
# output = model(input_batch)
# print(output.shape)
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
# probabilities = torch.nn.functional.softmax(output[0], dim=0)
# print(probabilities)
named_layers = dict(model.named_modules())
# for layer in named_layers:
# print(layer)
# print(named_layers['features.denseblock4.denselayer16'])
class densenet_last_layer(torch.nn.Module):
def __init__(self, model):
super(densenet_last_layer, self).__init__()
self.features = torch.nn.Sequential(
*list(model.children())[:-1]
)
def forward(self, x):
x = self.features(x)
x = torch.nn.functional.relu(x, inplace=True)
return x
conv_model = densenet_last_layer(model)
# print(conv_model)
input_batch = torch.autograd.Variable(input_batch)
conv_output = conv_model(input_batch)
# print(conv_output.shape)
conv_output = conv_output.cpu().data.numpy()
# conv_output = np.squeeze(conv_output)
# for state in model.state_dict():
# print(state)
weights = model.state_dict()['classifier.weight']
weights = weights.cpu().numpy()
bias = model.state_dict()['classifier.bias']
bias = bias.cpu().numpy()
# print(conv_output.shape)
heatmap = None
for i in range(0, len(weights)):
map = conv_output[0, i, :, :]
if i == 0:
heatmap = weights[i] * map
else:
heatmap += weights[i] * map
# ---- Blend original and heatmap
npHeatmap = heatmap.cpu().data.numpy()
transCrop = 224
imgOriginal = cv2.imread(filename, 1)
imgOriginal = cv2.resize(imgOriginal, (transCrop, transCrop))
cam = npHeatmap / np.max(npHeatmap)
cam = cv2.resize(cam, (transCrop, transCrop))
heatmap = cv2.applyColorMap(np.uint8(255 * cam), cv2.COLORMAP_JET)
img = heatmap * 0.5 + imgOriginal
cv2.imwrite(os.getcwd(), img)
print(model.state_dict()['classifier.weight'])
|
st31924
|
Based on the error message the heatmap and imgOriginal are two numpy arrays in different shapes, which cannot be accumulated:
heatmap = np.random.randn(1024,)
imgOriginal = np.random.randn(7, 7)
img = heatmap * 0.5 + imgOriginal
> ValueError: operands could not be broadcast together with shapes (1024,) (7,7)
so you would have to make sure the heatmap has the same shape as imgOriginal (or can be broadcasted).
I’m also unsure, what exactly is happening in these lines of code:
heatmap = weights[i] * map
as it seems you are indexing the weight matrix of the last linear layer and are multiplying it with a channel of an output activation of a conv layer.
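For reference, the usual CAM computation for a single class is a weighted sum over the channel dimension; a minimal sketch (features is the (1, C, H, W) conv output, weights the (num_classes, C) classifier weight matrix, class_idx a hypothetical target class):
def class_activation_map(features, weights, class_idx):
    cam = (weights[class_idx][:, None, None] * features[0]).sum(dim=0)   # (H, W)
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)       # normalize to [0, 1] before colormapping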
|
st31925
|
I am trying to generate a heatmap but always get corrupted images. I am not sure about the logic of generating a CAM, but I will explain the steps I take so that you can hopefully help me:
1-) First I load my trained model, then remove the last classification layer from it to get the last conv layer.
2-) Then I convert the image to a tensor so I can feed it to the last convolutional layer and get the output feature map.
3-) I loop through the weights of the classification layer of my trained model and multiply the classification weights by the output feature map.
That is the logic I use. I am not sure it is right because I am new to deep learning and don't have a lot of experience with it.
I would be very thankful for your help.
Here is the code:
import os
import numpy as np
import time
import sys
from PIL import Image
import heatmap_test as ht
import cv2
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
from densenet import densenet121
from densenet import densenet169
from densenet import densenet201
from densenet import DenseNet
from torchvision.transforms import ToTensor
class densenet_last_layer(torch.nn.Module):
def __init__(self, model):
super(densenet_last_layer, self).__init__()
# remove last classification layer
self.features = torch.nn.Sequential(
*list(model.children())[:-1]
)
def forward(self, x):
x = self.features(x)
x = torch.nn.functional.relu(x, inplace=True)
return x
class DrawHeatmap():
def __init__(self, pathModel, img, pathOutputFile):
model = densenet121(False)
modelCheckpoint = torch.load(pathModel, map_location=torch.device('cpu'))
model.load_state_dict(modelCheckpoint['state_dict'], strict=False)
named_layers = dict(model.named_modules())
model.eval()
weights = None
bias = None
for name, params in model.named_parameters():
print(name)
if name == 'classifier.weight':
weights = params
if name == 'classifier.bias':
bias = params
conv_model = densenet_last_layer(model)
# ---- Initialize the image transform - resize + normalize
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
transformList = []
# transformList.append(transforms.Resize(transCrop))
transformList.append(transforms.ToTensor())
# transformList.append(normalize)
transformSequence = transforms.Compose(transformList)
imageData = Image.open(img).convert('RGB')
imageData = transformSequence(imageData)
imageData = imageData.unsqueeze(0)
input_img = torch.autograd.Variable(imageData)
output = conv_model(input_img)
# print(output.shape)
#
# output = output.cpu().data.numpy()
# output = np.squeeze(output)
print(output.shape)
heatmap = None
for i in range(0, len(weights)):
map = output[0, i, :, :]
if i == 0:
heatmap = weights[i] * map
else:
heatmap += weights[i] * map
npHeatmap = heatmap.cpu().data.numpy()
imgOriginal = cv2.imread(img, 1)
imgOriginal = cv2.resize(imgOriginal, (224, 224))
cam = npHeatmap / np.max(npHeatmap)
cam = cv2.resize(cam, (224, 224))
heatmap = cv2.applyColorMap(np.uint8(255 * cam), cv2.COLORMAP_JET)
img = heatmap * 0.5 + imgOriginal
cv2.imwrite(pathOutputFile, img)
pathInputImage = 'C:/Users/mahmo/Desktop/best projects/Stanford Code/cxr-code/00000001_000.png'
pathModel = 'C:/Users/mahmo/Desktop/best projects/Stanford Code/cxr-code/m-25012018-123527.pth.tar'
pathOutputImage = 'C:/Users/mahmo/Desktop/best projects/Stanford Code/cxr-code/heatmap_test.png'
h = DrawHeatmap(pathModel, pathInputImage, pathOutputImage)
# ht.draw_CAM(pathModel, pathInputImage, True)
|
st31926
|
model.load_state_dict(checkpoint['model'])
Error(s) in loading state_dict for Cnn14:
Unexpected key(s) in state_dict: "spectrogram_extractor.stft.conv_real.weight", "spectrogram_extractor.stft.conv_imag.weight", "logmel_extractor.melW".
class Cnn14(nn.Module):
def __init__(self, classes_num=527):
super(Cnn14, self).__init__()
self.bn0 = nn.BatchNorm2d(64)
self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
self.fc1 = nn.Linear(2048, 2048, bias=True)
self.fc_audioset = nn.Linear(2048, classes_num, bias=True)
self.init_weight()
def init_weight(self):
init_bn(self.bn0)
def forward(self, input):
x = input.unsqueeze(1) # (batch_size, 1, time_steps, mel_bins)
x = x.transpose(1, 3)
x = self.bn0(x)
x = x.transpose(1, 3)
x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block5(x, pool_size=(1, 1), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training) #(batch_size, 2048, T/16, mel_bins/16)
'''
x = torch.mean(x, dim=3)
(x1, _) = torch.max(x, dim=2)
x2 = torch.mean(x, dim=2)
x = x1 + x2
x = F.dropout(x, p=0.5, training=self.training)
x = F.relu_(self.fc1(x))
embedding = F.dropout(x, p=0.5, training=self.training)
clipwise_output = torch.sigmoid(self.fc_audioset(x))
output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
'''
return x
print(checkpoint['model'])
'spectrogram_extractor.stft.conv_real.weight',
tensor([[[ 0.0000e+00, 9.4124e-06, 3.7649e-05, ..., 8.4709e-05,
3.7649e-05, 9.4124e-06]],
[[ 0.0000e+00, 9.4122e-06, 3.7646e-05, ..., 8.4695e-05,
3.7646e-05, 9.4122e-06]],
[[ 0.0000e+00, 9.4117e-06, 3.7638e-05, ..., 8.4652e-05,
3.7638e-05, 9.4117e-06]],
...,
[[ 0.0000e+00, -9.4117e-06, 3.7638e-05, ..., -8.4652e-05,
3.7638e-05, -9.4117e-06]],
[[ 0.0000e+00, -9.4122e-06, 3.7646e-05, ..., -8.4695e-05,
3.7646e-05, -9.4122e-06]],
[[ 0.0000e+00, -9.4124e-06, 3.7649e-05, ..., -8.4709e-05,
3.7649e-05, -9.4124e-06]]], device='cuda:0')),
('spectrogram_extractor.stft.conv_imag.weight',
I faced a problem while doing transfer learning. There is an error because the weights of the pre-trained model do not match the weights of the Cnn14 model to be trained. Should the weights be the same when I do transfer learning?
The following is the structure of the model loaded by the torch.load command:
class Cnn14(nn.Module):
def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin,
fmax, classes_num):
super(Cnn14, self).__init__()
window = 'hann'
center = True
pad_mode = 'reflect'
ref = 1.0
amin = 1e-10
top_db = None
# Spectrogram extractor
self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size,
win_length=window_size, window=window, center=center, pad_mode=pad_mode,
freeze_parameters=True)
# Logmel feature extractor
self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size,
n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db,
freeze_parameters=True)
# Spec augmenter
self.spec_augmenter = SpecAugmentation(time_drop_width=64, time_stripes_num=2,
freq_drop_width=8, freq_stripes_num=2)
self.bn0 = nn.BatchNorm2d(64)
self.conv_block1 = ConvBlock(in_channels=1, out_channels=64)
self.conv_block2 = ConvBlock(in_channels=64, out_channels=128)
self.conv_block3 = ConvBlock(in_channels=128, out_channels=256)
self.conv_block4 = ConvBlock(in_channels=256, out_channels=512)
self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024)
self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048)
self.fc1 = nn.Linear(2048, 2048, bias=True)
self.fc_audioset = nn.Linear(2048, classes_num, bias=True)
self.init_weight()
def init_weight(self):
init_bn(self.bn0)
init_layer(self.fc1)
init_layer(self.fc_audioset)
def forward(self, input, mixup_lambda=None):
"""
Input: (batch_size, data_length)"""
x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins)
x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins)
x = x.transpose(1, 3)
x = self.bn0(x)
x = x.transpose(1, 3)
if self.training:
x = self.spec_augmenter(x)
# Mixup on spectrogram
if self.training and mixup_lambda is not None:
x = do_mixup(x, mixup_lambda)
x = self.conv_block1(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg')
x = F.dropout(x, p=0.2, training=self.training)
x = torch.mean(x, dim=3)
(x1, _) = torch.max(x, dim=2)
x2 = torch.mean(x, dim=2)
x = x1 + x2
x = F.dropout(x, p=0.5, training=self.training)
x = F.relu_(self.fc1(x))
embedding = F.dropout(x, p=0.5, training=self.training)
clipwise_output = torch.sigmoid(self.fc_audioset(x))
output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding}
return output_dict
Help me
|
st31927
|
Solved by I-Love-U in post #2
Maybe you should load the parameters like this:
def load_pretrained_params(model, model_state_path: str):
pretrained_dict = torch.load(model_state_path, map_location="cpu")
model_dict = model.state_dict()
# 1. filter out unnecessary keys
if list(pretrained_dict.keys())[0].startswith…
|
st31928
|
Maybe you should load the parameters like this:
def load_pretrained_params(model, model_state_path: str):
pretrained_dict = torch.load(model_state_path, map_location="cpu")
model_dict = model.state_dict()
# 1. filter out unnecessary keys
if list(pretrained_dict.keys())[0].startswith("module."):
pretrained_dict = {k[7:]: v for k, v in pretrained_dict.items() if k[7:] in model_dict}
else:
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
# 2. overwrite entries in the existing state dict
model_dict.update(pretrained_dict)
# 3. load the new state dict
model.load_state_dict(model_dict)
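Since the checkpoint in this thread appears to nest the weights under a 'model' key (see print(checkpoint['model']) above), you may need to pass that inner dict instead of a file path; a hypothetical adaptation:
checkpoint = torch.load('path/to/checkpoint.pth.tar', map_location='cpu')   # illustrative path
model = Cnn14(classes_num=527)
pretrained_dict = checkpoint['model']
model_dict = model.state_dict()
model_dict.update({k: v for k, v in pretrained_dict.items() if k in model_dict})
model.load_state_dict(model_dict)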
|
st31929
|
How can I get the current learning rate being used by my optimizer?
Many of the optimizers in the torch.optim class use variable learning rates. You can provide an initial one, but they should change depending on the data. I would like to be able to check the current rate being used at any given time.
This question is basically a duplicate of this one, but I don't think that one was very satisfactorily answered. Using Adam, for example, when I print:
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
'''...training...'''
for param_group in optimizer.param_groups:
    print(param_group['lr'])
I always see the initial learning rate no matter how many epochs of data I run.
However, if I want to restart interrupted training progress or even just debug my loss, it makes sense to know where the optimizer left off.
How can I do this?
|
st31930
|
Solved by TheShadow29 in post #2
You could save the optimizer state dict. Something like this suggested here: https://github.com/pytorch/pytorch/issues/2830#issuecomment-336194949 should work.
After that you could just load it back.
I am guessing the reason you see the same lr is that the lr for adam has not changed. The effectiv…
|
st31931
|
You could save the optimizer state dict. Something like what is suggested here: https://github.com/pytorch/pytorch/issues/2830#issuecomment-336194949 should work.
After that you could just load it back.
I am guessing the reason you see the same lr is that the lr for Adam has not changed. The effective lr, which has components from the moment estimates, is different.
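A minimal sketch of that save/restore round trip (the file name is just a placeholder):
# Sketch: persist and restore both model and optimizer state so that
# interrupted training can resume with the same Adam moment estimates.
torch.save({
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
}, "checkpoint.pth")

checkpoint = torch.load("checkpoint.pth")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])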
|
st31932
|
That makes sense; thanks for the link. Is there a way to show the effective learning rate, short of calculating it on my own?
|
st31933
|
For most optimizers all parameter groups use the same lr, so you can just do:
print(optimizer.param_groups[0]['lr'])
If you’re using an lr_scheduler you can do the same, or use:
print(lr_scheduler.get_lr())
|
st31934
|
Nit: get_lr() might not yield the current learning rate, so you should use get_last_lr().
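For example, a small sketch with a StepLR schedule (the schedule parameters are assumptions for illustration):
# Sketch: query the scheduler for the learning rate that was actually applied last.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(30):
    # ... training loop ...
    scheduler.step()
    print(epoch, scheduler.get_last_lr())  # list with one lr per param group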
|
st31935
|
Maybe you could add some additional methods for the optimizer instance:
# -*- coding: utf-8 -*-
# @Time    : 2020/12/19
# @Author  : Lart Pang
# @GitHub  : https://github.com/lartpang
import types

from torch import nn
from torch.optim import SGD, Adam, AdamW


class OptimizerConstructor:
    def __init__(self, model, initial_lr, mode, group_mode, cfg):
        """
        A wrapper of the optimizer.

        :param model: nn.Module
        :param initial_lr: int
        :param mode: str
        :param group_mode: str
        :param cfg: A dict corresponding to your ``mode`` except the ``params`` and ``lr``.
        """
        self.mode = mode
        self.initial_lr = initial_lr
        self.group_mode = group_mode
        self.cfg = cfg
        self.params = self.group_params(model)

    def construct_optimizer(self):
        if self.mode == "sgd":
            optimizer = SGD(params=self.params, lr=self.initial_lr, **self.cfg)
        elif self.mode == "adamw":
            optimizer = AdamW(params=self.params, lr=self.initial_lr, **self.cfg)
        elif self.mode == "adam":
            optimizer = Adam(params=self.params, lr=self.initial_lr, **self.cfg)
        else:
            raise NotImplementedError
        return optimizer

    def group_params(self, model):
        ....

    def __call__(self):
        optimizer = self.construct_optimizer()
        optimizer.lr_groups = types.MethodType(get_lr_groups, optimizer)
        optimizer.lr_string = types.MethodType(get_lr_strings, optimizer)
        return optimizer


def get_lr_groups(self):
    return [group["lr"] for group in self.param_groups]


def get_lr_strings(self):
    return ",".join([f"{group['lr']:10.3e}" for group in self.param_groups])
|
st31936
|
Hello All,
I am trying to create a dataset for multi-class segmentation. Image sizes are 48,64,64 and the segmentation masks have 9 different labels.
.
.
.
    imgnp = imgnp[None, ...]
    segD = np.zeros((num_labels, 48, 64, 64))
    for i in range(0, num_labels):  # this loop starts from label 1
        seg_one = segnp == labels[i]
        segD[i, :, :, :] = seg_one[0:segnp.shape[0], 0:segnp.shape[1], 0:segnp.shape[2]]
    imgD = imgnp.astype('float32')
    segD = segD.astype('float32')
    return imgD, segD
The output is an image of shape 1,48,64,64 and a one-hot encoded binary segmentation mask with 9 channels for the 9 tissue labels: 9,48,64,64.
My question: is this an efficient way to create the dataset, or are there any mistakes or better ways to do it?
*N.B.: the chunk of code above is from def __getitem__.
Thanks in advance.
|
st31937
|
Use scatter_:
import torch
from torch import Tensor


def create_one_hot(x: Tensor, n_class: int):
    """
    :param x: [B, D1, D2, D3] integer label map with 0 <= x.min() and x.max() < n_class
    :return: [B, n_class, D1, D2, D3]
    """
    B, D1, D2, D3 = x.shape
    out = torch.zeros((B, n_class, D1, D2, D3), dtype=torch.float, device=x.device)
    # scatter_ expects an int64 index tensor with the same number of dims as out
    x = x.long()[:, None, :, :, :]
    out.scatter_(dim=1, index=x, src=torch.ones_like(x, dtype=out.dtype))
    return out
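As a rough sketch of how this could replace the per-label loop in __getitem__ (converting from NumPy is an assumption, and it presumes the stored labels are already the integers 0–8; otherwise map them to that range first):
# Sketch: one-hot encode the (48, 64, 64) label volume with the helper above.
seg = torch.from_numpy(segnp.astype("int64"))          # (48, 64, 64)
seg_onehot = create_one_hot(seg[None], n_class=9)[0]   # (9, 48, 64, 64)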
|
st31938
|
For creating a one-hot tensor, I have implemented five functions…
github.com/lartpang/PyLoss/blob/main/losspy/utils/convert_onehot.py
Hope it works for you.
|
st31939
|
Is there a way to take the element-wise max between two tensors, as in tf.maximum? My current work-around is
def max(t1, t2):
    combined = torch.cat((t1.unsqueeze(2), t2.unsqueeze(2)), dim=2)
    return torch.max(combined, dim=2)[0].squeeze(2)
but it’s a bit clunky.
|
st31940
|
http://pytorch.org/docs/torch.html#torch.max
The third version of torch.max is exactly what you want.
|
st31941
|
Now, we can use torch.max like this: https://pytorch.org/docs/stable/torch.html#torch.max
>>> a = torch.randn(4)
>>> a
tensor([ 0.2942, -0.7416, 0.2653, -0.1584])
>>> b = torch.randn(4)
>>> b
tensor([ 0.8722, -1.7421, -0.4141, -0.5055])
>>> torch.max(a, b)
tensor([ 0.8722, -0.7416, 0.2653, -0.1584])
|
st31942
|
How can I do an element-wise max between multiple filters? I am getting the following error for 3 tensors:
a = torch.ones(1,4,1,1)*2
b = torch.ones(1,4,1,1)*3
c = torch.ones(1,4,1,1)*4
#d = torch.ones(1,4,1,1)*5
max_ = torch.max(a,b,c)
print(max_)
TypeError: max() received an invalid combination of arguments - got (Tensor, Tensor, Tensor), but expected one of:
* (Tensor input)
* (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)
* (Tensor input, Tensor other, *, Tensor out)
* (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)
|
st31943
|
Here is a simple method:
# 1 reduce
import functools
max_tensor = functools.reduce(torch.max, [a, b, c])
# 2 for
tensor_list = [a, b, c]
max_tensor = tensor_list[0]
for tensor in tensor_list[1:]:
    max_tensor = torch.max(max_tensor, tensor)
|
st31944
|
Thanks for your solution @I-Love-U. But how should one extend this to an arbitrary number of arrays? The first method does not work for 4 arrays.
|
st31945
|
https://pytorch.org/docs/stable/generated/torch.maximum.html#torch.maximum
Does this help here?
|
st31946
|
An overload of torch.max behaves the same as torch.maximum: torch.max — PyTorch 1.8.1 documentation
So we can focus on torch.max.
|
st31947
|
Can you give more information about the error?
Here are some examples, and as we can see, this code works well.
tensor_list = [torch.randn(2, 3) for _ in range(5)]
tensor_list
Out[9]:
[tensor([[-0.6082, -0.9290, -0.4921],
[ 0.3344, -0.9338, -0.8563]]),
tensor([[-0.3530, -0.5673, 2.6954],
[ 1.5262, 2.3859, 0.3481]]),
tensor([[ 0.5392, 0.9646, -1.5962],
[-2.2931, 0.6707, -0.4896]]),
tensor([[-1.3532, -0.5953, 1.6039],
[ 0.2937, 0.3643, 1.3153]]),
tensor([[ 1.1544, 0.7681, -1.0410],
[-0.1305, -0.8855, -0.3516]])]
import functools
functools.reduce(torch.max, tensor_list)
Out[14]:
tensor([[1.1544, 0.9646, 2.6954],
[1.5262, 2.3859, 1.3153]])
max_tensor = tensor_list[0]
for tensor in tensor_list[1:]:
    max_tensor = torch.max(max_tensor, tensor)
max_tensor
Out[19]:
tensor([[1.1544, 0.9646, 2.6954],
[1.5262, 2.3859, 1.3153]])
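Another option that avoids the explicit loop or reduce is to stack the same-shaped tensors along a new dimension and reduce over it (a small sketch; torch.amax needs PyTorch 1.7+, otherwise torch.stack(...).max(dim=0)[0] works the same way):
# Sketch: element-wise max over an arbitrary number of same-shaped tensors.
tensor_list = [torch.randn(2, 3) for _ in range(5)]
max_tensor = torch.stack(tensor_list, dim=0).amax(dim=0)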
|
st31948
|
How can I calculate sensitivity, precision, recall and F1 score for my binary dataset? The output values are 0 and 1.
model = Net()
optimizer = optim.Adam(model.parameters(), lr=0.001)
loss_func = nn.NLLLoss()
criterion = nn.CrossEntropyLoss()
epochs = 2
loss_list = []

model.train()
for epoch in range(epochs):
    total_loss = []
    for i, data in enumerate(train_ldr, 0):
        # get the inputs; data is a list of [inputs, labels]
        X_train, Y_train = data.values()
        X_train = X_train.unsqueeze(0)
        X_train = X_train.unsqueeze(1)
        optimizer.zero_grad()
        # Forward pass
        output = model(X_train)
        # Calculating loss
        loss = criterion(output, Y_train)
        # Backward pass
        loss.backward()
        # Optimize the weights
        optimizer.step()
        total_loss.append(loss.item())
    loss_list.append(sum(total_loss) / len(total_loss))
    print('Training [{:.0f}%]\tLoss: {:.4f}'.format(
        100. * (epoch + 1) / epochs, loss_list[-1]))

model.eval()
criterion = nn.CrossEntropyLoss()
with T.no_grad():
    correct = 0
    for i, data in enumerate(test_ldr, 0):
        X_test, Y_test = data.values()
        X_test = X_test.unsqueeze(0)
        X_test = X_test.unsqueeze(1)
        output = model(X_test)
        pred = output.argmax(dim=1, keepdim=True)
        correct += pred.eq(Y_test.view_as(pred)).sum().item()
        loss = criterion(output, Y_test)
        total_loss.append(loss.item())
    print('Performance on test data:\n\tLoss: {:.4f}\n\tAccuracy: {:.1f}%'.format(
        sum(total_loss) / len(total_loss),
        correct / len(test_ldr) * 100)
    )
|
st31949
|
Solved by sh0416 in post #2
In this case, I recommend you use the scikit-learn package for computing some evaluation metric.
It would be great unless you have to accelerate the evaluation process due to the large data.
|
st31950
|
In this case, I recommend using the scikit-learn package for computing these evaluation metrics.
It works well unless you need to accelerate the evaluation process because of very large data.
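A minimal sketch of what that could look like with the evaluation loop above (it assumes you collect predictions and targets on the CPU first; the unsqueeze calls mirror the shapes used in the question):
# Sketch: accumulate predictions and targets, then let scikit-learn
# compute the binary-classification metrics.
from sklearn.metrics import precision_recall_fscore_support, confusion_matrix

all_preds, all_targets = [], []
with torch.no_grad():
    for data in test_ldr:
        X_test, Y_test = data.values()
        X_test = X_test.unsqueeze(0).unsqueeze(1)
        output = model(X_test)
        all_preds.extend(output.argmax(dim=1).cpu().tolist())
        all_targets.extend(Y_test.cpu().tolist())

precision, recall, f1, _ = precision_recall_fscore_support(
    all_targets, all_preds, average="binary")
tn, fp, fn, tp = confusion_matrix(all_targets, all_preds).ravel()
sensitivity = tp / (tp + fn)  # same as recall for the positive class
print(precision, recall, f1, sensitivity)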
|
st31951
|
I would like to ask how to implement a Graph Convolutional Network and a CRF for sequence labeling in PyTorch.
|
st31952
|
Hello there,
In my recent research I tried to minimize the cross-dataset differences between CT scans made by different CT machines in different countries, with the goal of making the data perform better when used together to train a classifier.
For this I tried using a classic VGG16 autoencoder and an imaging-technology classifier to train this autoencoder before actually training the medical classifier.
Our results are sadly quite unstable and we cannot achieve good performance on same-dataset and cross-dataset images at the same time.
Is the approach generally flawed, or do you have some recommendations and tips?
|
st31953
|
Hello everyone,
I’m trying to run classification on the CIFAR10 dataset using a custom CNN which looks like this:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Sequential(nn.Conv2d(3, 16, kernel_size=3, stride=2),
                                  nn.BatchNorm2d(16), nn.ReLU(inplace=True),
                                  nn.Conv2d(16, 32, kernel_size=3, stride=2),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True),
                                  nn.Conv2d(32, 64, kernel_size=3, stride=2),
                                  nn.BatchNorm2d(64), nn.ReLU(inplace=True),
                                  nn.Conv2d(64, 10, kernel_size=3),
                                  nn.BatchNorm2d(10), nn.Flatten())

    def forward(self, x):
        x = self.conv(x)
        return x
The training and testing go well; I’m getting around 73% accuracy. Then I save the model using torch.save(model.state_dict(), save_path).
From another script I load the model like this:
def load_model(path):
    model = Net()
    model.load_state_dict(torch.load(path))
    print(model)
    model.eval()
    return model
Thing is, I want to do detection on CIFAR10 but directly from images without using DataLoader or Dataset classes. I saved images from the DataLoader into class_i.png files and then load them like this:
def load_imgs(path):
    imgs = []
    labels = []
    img_files = os.listdir(path)
    for i in range(len(img_files)):
        label = re.search(r"\w*(?=_)", img_files[i]).group(0)
        img_path = os.path.join(path, img_files[i])
        img = cv2.imread(img_path, cv2.IMREAD_COLOR)
        img = np.asarray(img)
        imgs.append(img)
        labels.append(label)
    return imgs, labels
Finally, I’m running classification on these images by firstly converting them to tensors:
def classification(model, imgs):
    if torch.cuda.is_available():
        print(f"Using CUDA device {torch.cuda.get_device_name(0)}")
        device = torch.device("cuda:0")
    else:
        print("No CUDA device found, using CPU")
        device = torch.device("cpu")
    to_tensor = torchvision.transforms.ToTensor()
    tensor_imgs = [to_tensor(img).float() for img in imgs]
    pred = []
    with torch.no_grad():
        for img in tensor_imgs:
            img = img.to(device)
            model = model.to(device)
            output = model(img[None, ...])
            pred.append(output.argmax(dim=1, keepdim=True).cpu().squeeze().item())
    return pred
But when comparing the predicted classes with the original labels, I’m getting less than 25% accuracy. My guess is that the problem is somewhere in the way I pass images for inference. Unfortunately, I am strictly limited to the OpenCV library for loading the images.
What would be the correct way to run classification on imported images without using data loaders?
|
st31954
|
Solved by denishem in post #5
I managed to find my error. It was due to color channels order since OpenCV loads images as BGR and not RGB. Converting images uisng cv2.cvtColor(img, cv2.BGR2RGB) fixed it.
|
st31955
|
Loading images directly without a DataLoader should work as long as you are applying the same processing (you wouldn’t need data augmentation).
What kind of transformations did you use during the training, i.e. did you resize the images etc.?
To isolate the issue to the data loading you could load some images during training and compare the predictions to the same images returned by the DataLoader (comparing the raw tensor values could also give you more information about the differences).
|
st31956
|
These are the transforms that I apply for training and validation:
transform_train = transforms.Compose([
    transforms.RandomAffine(5, translate=(0.1, 0.1)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
I realized I made a mistake for inference by not using the same transforms. Applying transform_test on images before inference now gives me around 50% for a run on 200 random images, which is the figure I got after the first epoch during training.
This is how my code looks right now:
def classification(model, imgs):
    if torch.cuda.is_available():
        print(f"Using CUDA device {torch.cuda.get_device_name(0)}")
        print(DIV)
        device = torch.device("cuda:0")
    else:
        print("No CUDA device found, using CPU")
        print(DIV)
        device = torch.device("cpu")
    tensor_imgs = [transform_test(img).unsqueeze_(0) for img in imgs]
    pred = []
    with torch.no_grad():
        for img in tensor_imgs:
            img = img.to(device)
            model = model.to(device)
            output = model(img)
            pred.append(output.data.cpu().numpy().argmax())
    return pred
I didn’t quite understand your suggestion: should I save the images that I use for training and run inference on them for comparison? For the training part I am using a DataLoader for both the training and test datasets.
|
st31957
|
denishem:
I didn’t quite understand your suggestion, should I save the images that I use for training and run inference on them for comparison? For the training part I am using DataLoader for both training and test datasets.
This would be one possibility to further isolate it.
Issues like these are often caused by either a different data loading and processing pipeline (majority of the issues) or by an invalid model loading (e.g. using strict=False as a “workaround” to load an invalid state_dict).
To further isolate the root cause I would suggest to start with the data loading and verify that the training and test script create the “same” input data. Note that the tensors wouldn’t be exactly the same, if the training pipeline uses random transformations, so for debugging purposes you could disable them.
The model can be checked by comparing the output for a static input (e.g. torch.ones) and make sure both scripts return an allclose output.
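A tiny sketch of that static-input check (the all-ones CIFAR10-shaped batch is just an assumed example input):
# Sketch: run the same constant input through the model in both scripts
# and compare the resulting outputs, e.g. with torch.allclose.
x = torch.ones(1, 3, 32, 32)
with torch.no_grad():
    out = model(x)
print(out)  # save or log this tensor and compare it between the two scripts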
|
st31958
|
I managed to find my error. It was due to the color channel order, since OpenCV loads images as BGR and not RGB. Converting the images using cv2.cvtColor(img, cv2.COLOR_BGR2RGB) fixed it.
|
st31959
|
For my purposes I want to apply Group Normalization to three specific channel groups of a tensor. These groups are contiguous, so for a tensor with dimensions (N, C, H, W), group 1 would be (N, :C/3, H, W), group 2 (N, C/3:2C/3, H, W) and group 3 (N, 2C/3:C, H, W). I tried looking in the code base but did not find what the logic of group creation is.
Can someone explain to me the logic for the formation of groups in GroupNorm? Does it match my contiguous scenario?
|
st31960
|
Solved by ptrblck in post #2
Your assumption seems to be correct based on this small experiment:
N, H, W = 16, 224, 224
# increasing groups
norm = nn.GroupNorm(num_groups=3, num_channels=6)
x = torch.cat((
torch.randn(N, 2, H, W) * 2 + 5,
torch.randn(N, 2, H, W) * 4 + 10,
torch.randn(N, 2, H, W) * 6 + 15,
), dim=1…
|
st31961
|
Your assumption seems to be correct based on this small experiment:
N, H, W = 16, 224, 224

# increasing groups
norm = nn.GroupNorm(num_groups=3, num_channels=6)
x = torch.cat((
    torch.randn(N, 2, H, W) * 2 + 5,
    torch.randn(N, 2, H, W) * 4 + 10,
    torch.randn(N, 2, H, W) * 6 + 15,
), dim=1)
out = norm(x)
for i, o in enumerate(out.split(1, dim=1)):
    print("channel {}: mean: {:.5f}, {:.5f}".format(i, o.mean(), o.std()))

> channel 0: mean: 0.00021, 0.99910
  channel 1: mean: -0.00021, 1.00089
  channel 2: mean: -0.00071, 0.99964
  channel 3: mean: 0.00071, 1.00036
  channel 4: mean: -0.00172, 1.00064
  channel 5: mean: 0.00172, 0.99936

# mix channels
norm = nn.GroupNorm(num_groups=3, num_channels=6)
x = torch.cat((
    torch.randn(N, 1, H, W) * 2 + 5,
    torch.randn(N, 1, H, W) * 1/2 - 5,
    torch.randn(N, 1, H, W) * 4 + 10,
    torch.randn(N, 1, H, W) * 1/4 - 10,
    torch.randn(N, 1, H, W) * 6 + 15,
    torch.randn(N, 1, H, W) * 1/6 - 15,
), dim=1)
out = norm(x)
for i, o in enumerate(out.split(1, dim=1)):
    print("channel {}: mean: {:.5f}, {:.5f}".format(i, o.mean(), o.std()))

> channel 0: mean: 0.96000, 0.38414
  channel 1: mean: -0.96000, 0.09611
  channel 2: mean: 0.96217, 0.38453
  channel 3: mean: -0.96217, 0.02406
  channel 4: mean: 0.96211, 0.38544
  channel 5: mean: -0.96211, 0.01068
|
st31962
|
Thank you for your reply. Yes, it seems almost certain the group formation is done for contiguous channels.
|
st31963
|
Hi all,
I am currently getting familiar with PyTorch. While running convolutional neural network code, I tested the conv2d() function on the CPU and used “top” to check performance.
When I set the number of threads to 1, I found that “%sy” in the output of “top” reached 40-60% (sy: system CPU time, i.e. the % of CPU time spent in kernel space).
Is this normal? And can anyone explain why kernel space is used?
cpu_num = 1
print("cpu_num: ", cpu_num)
os.environ['OMP_NUM_THREADS'] = str(cpu_num)
os.environ['OPENBLAS_NUM_THREADS'] = str(cpu_num)
os.environ['MKL_NUM_THREADS'] = str(cpu_num)
os.environ['VECLIB_MAXIMUM_THREADS'] = str(cpu_num)
os.environ['NUMEXPR_NUM_THREADS'] = str(cpu_num)
torch.set_num_threads(cpu_num)
Many thanks!
|
st31964
|
I am trying to use transfer learning. I want to freeze the parameters of the model apart from the batch norm layers. I am using the code below but get a “too many values to unpack” error.
for name, param in model_transfer.parameters():
    if "bn" not in name:
        param.requires_grad = False
Error:
ValueError                                Traceback (most recent call last)
----> 1 for name,param in model_transfer.parameters():
      2     if ("bn" not in name):
      3         param.requires_grad = False

ValueError: too many values to unpack (expected 2)
How to resolve it?
|
st31965
|
Solved by albanD in post #2
Hi,
If you want both the name and the parameter, you need to use model_transfer.named_parameters().
|
st31966
|
Hi,
If you want both the name and the parameter, you need to use model_transfer.named_parameters().
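A small sketch of the corrected loop; note that matching "bn" in the parameter name only works if the batch norm layers are actually named that way in your model, so an alternative based on module types is also shown (both are assumptions about your model's structure):
# Sketch 1: fix the original loop by using named_parameters().
for name, param in model_transfer.named_parameters():
    if "bn" not in name:
        param.requires_grad = False

# Sketch 2: freeze everything, then unfreeze batch norm layers by type,
# which does not depend on how the layers are named.
for param in model_transfer.parameters():
    param.requires_grad = False
for module in model_transfer.modules():
    if isinstance(module, (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d, torch.nn.BatchNorm3d)):
        for param in module.parameters():
            param.requires_grad = True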
|
st31967
|
bing:
ValueError: too many values to unpack (expected 2)
Python functions can return multiple values, and these values can be unpacked into variables directly. Other programming languages such as C++ or Java do not support this as directly by default.
The ValueError: too many values to unpack occurs during a multiple-assignment where you either don’t have enough objects to assign to the variables, or you have more objects to assign than variables. If, for example, myfunction() returned an iterable with three items instead of the expected two, then you would have more objects than variables to assign them to.
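A one-line illustration of that situation (not from the original posts):
# Three objects, two variables -> ValueError: too many values to unpack (expected 2)
a, b = (1, 2, 3)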
|