id | text
---|---|
st46468 | Thanks, I am aware of torch.clamp. But, ignoring the [0,1] range, what if I want to apply a specific function like sigmoid or the square of the weights? Is there a way to do this via hooks etc.? |
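A minimal sketch of one way to do this, assuming a recent PyTorch that ships torch.nn.utils.parametrize (the SigmoidWeight module name is just illustrative):
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class SigmoidWeight(nn.Module):
    def forward(self, w):
        # the effective weight is always the sigmoid of the underlying raw parameter
        return torch.sigmoid(w)

layer = nn.Linear(4, 4)
parametrize.register_parametrization(layer, "weight", SigmoidWeight())
# layer.weight now always lies in (0, 1); return w * w instead for a squared-weight constraint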
st46469 | I have a binary classification model whose last linear layer outputs only positive values (don’t ask why, that’s a different matter). When I pass the final layer’s output to torch.sigmoid, all the results are above 50%, because the final linear layer only outputs positive values. How can I fix this and output a probability? Is there any “positive only” sigmoid in PyTorch? |
st46470 | Hi Richard!
Richard_S:
I have a binary classification model whose last linear layer outputs only positive values
This is the core issue you need to address – tweaking sigmoid() to
undo whatever damage you’ve done would be a sideshow.
(don’t ask why, that’s a different matter)
Well, it actually does matter, because whatever you’ve done is
breaking the interpretation of the output of your model as reasonable
logits (that become reasonable probabilities when passed through
sigmoid()).
My advice: Tell us what you are actually doing and what your
motivation is. My guess is that you’ll be able to use standard
techniques to build your classifier. But if your problem has some
unusual properties that prevent you from using standard techniques,
you’ll likely get better advice from the forum if you describe your
problem and what makes it atypical.
the final linear layer is only outputting positive values, how can I fix this and output a probability? Is there any “positive only” sigmoid in PyTorch? |
(As an aside, and against my better judgment, I will comment on some
mathematical structure. But please don’t actually try doing this. You
want to convert “positive-only” things that look sort of like logits into
“normal” logits that range from -inf to inf, and that therefore become
“normal” probabilities that range from 0 to 1 when passed through
sigmoid(). The log() function maps the positive half real line to the
whole real line, so you could use the following conversion:
normal_logit = log (positive_only_logit).)
Good luck.
K. Frank |
st46471 | 1. When I run the pip3 command or a similar command, it says:
torch-1.0.1-cp37-cp37m-win_amd64.whl is not a supported wheel on this platform.
2. When I run conda install pytorch-cpu torchvision-cpu -c pytorch
it says:
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.
SSLError(MaxRetryError('HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/main/noarch/repodata.json.bz2 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))')) |
st46472 | It seems to be a conda bug. Could you try the suggestions posted in this issue 338? |
st46473 | I built a UNet in PyTorch and in Keras; however, it seems much slower in PyTorch. I used Nvidia 1080Ti and Tesla V100 GPU cards. I searched for why PyTorch is slower and found that PyTorch should be faster than Keras, so I wonder if I made some mistakes in my code. Could someone show me how to accelerate the training? Here is my PyTorch code:
'''
import argparse
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
from torch import nn
import torch.nn.functional as F
from torch import optim
import numpy as np
from skimage import io
import torch
import os
from torch.utils.tensorboard import SummaryWriter
from sklearn.model_selection import train_test_split
from SSIM import SSIM
import torchvision.transforms as transforms
import torchvision
from torch.autograd import Variable
from unet_model import *
from patchify import *

def normlize(im):
    for i in range(im.size(0)):
        im[i] = (im[i] - im[i].min())/(im[i].max() - im[i].min())
    return im

def standard(im,mean,var):
    return (im - mean)/var

def dataaugment(inputs,target):
    rotatetimes = np.random.randint(4)
    fliplr = np.random.randint(2)
    flipud = np.random.randint(2)
    inputs = torch.rot90(inputs,rotatetimes+1,(2,3))
    target = torch.rot90(target,rotatetimes+1,(2,3))
    batch, width, height = inputs.size(0),inputs.size(2),inputs.size(3)
    if fliplr:
        for i in range(batch):
            img_input = inputs[i][0]
            img_target = target[i][0]
            inputs[i][0] = torch.fliplr(img_input)
            target[i][0] = torch.fliplr(img_target)
    if flipud:
        for i in range(batch):
            img_input = inputs[i][0]
            img_target = target[i][0]
            inputs[i][0] = torch.flipud(img_input)
            target[i][0] = torch.flipud(img_target)
    return inputs, target

parser = argparse.ArgumentParser(description='Debackground')
parser.add_argument('--batch_size',type=int,default=16)
parser.add_argument('--epochs',type=int,default=100)
args = parser.parse_args()

cuda = torch.cuda.is_available()
device = torch.device("cuda" if cuda else "cpu")

input_ = io.imread('input_actin.tif')
gt = io.imread('gt_actin.tif')
input_ = torch.tensor(input_,dtype=torch.float32).unsqueeze_(dim=1)
gt = torch.tensor(gt,dtype=torch.float32).unsqueeze_(dim=1)
input_ = normlize(input_)
gt = normlize(gt)
x_train, x_test, y_train, y_test = train_test_split(input_,gt,test_size=0.001)
print(x_train.device)
print(x_test.device)
train_ds = TensorDataset(x_train,y_train)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=args.batch_size, shuffle=True, num_workers=4)

def weight_init(module):
    if isinstance(module,nn.Conv2d):
        nn.init.xavier_normal_(module.weight)
    elif isinstance(module,nn.Linear):
        nn.init.xavier_normal_(module.weight)
    elif isinstance(module,nn.BatchNorm2d):
        nn.init.constant_(module.weight,1)
        nn.init.constant_(module.bias,1)

model = UNet(1,1)
criterion = nn.MSELoss()
learning_rate = 1e-3
if cuda:
    model = model.cuda()
    criterion.cuda()
optimizer = optim.Adam(model.parameters(),lr=learning_rate)
milestone = [25,50,75]
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestone,gamma=0.5)
writer = SummaryWriter('runs/lightsheet_experiment')
step = 0
for epoch in range(args.epochs):
    for j,(data,label) in enumerate(train_dl,0):
        model.train()
        model.zero_grad()
        optimizer.zero_grad()
        if cuda:
            data = data.cuda()
            label = label.cuda()
        pred = model(data)
        loss = 1000*criterion(pred,label)
        loss.backward()
        optimizer.step()
        scheduler.step()
        print("[epoch %d step %d loss %.4f]"%(epoch,j,loss.item()))
        if step%10==0:
            writer.add_scalar('train_loss', loss.item(),step)
        step +=1
    with torch.no_grad():
        for jj,(x_test,y_test) in enumerate(test_dl,0):
            noise_x = Variable(x_test, volatile=True)
            target_y = Variable(y_test, volatile=True)
            if torch.cuda.is_available():
                noise_x = noise_x.cuda()
                target_y = target_y.cuda()
            y_val = model(noise_x)
            val_loss = SSIM()(y_val,target_y)
            ssim += val_loss.item()
            recurrent += 1
        ssim = ssim/recurrent
        writer.add_scalar('ssim', ssim,epoch)
        if (epoch+1)%50==0:
            clean_grid = torchvision.utils.make_grid(normlize(y_test),nrow=4)
            writer.add_image('clean image'+str(epoch+1),clean_grid,dataformats='CHW')
            dirty_grid = torchvision.utils.make_grid(normlize(x_test),nrow=4)
            writer.add_image('dirty image'+str(epoch+1),dirty_grid,dataformats='CHW')
            debackground_grid = torchvision.utils.make_grid(normlize(y_val),nrow=4)
            writer.add_image('debackground image'+str(epoch+1),debackground_grid,dataformats='CHW')
        print("[epoch %d val_loss %.4f]"%(epoch,ssim))
        del val_loss
        del y_val
    torch.save(model.state_dict(), os.path.join(os.getcwd(), 'net_latest.pth'))
    if (epoch+1)%10==0:
        path = os.path.join(os.getcwd(),'model','deback_epoch%d.pth'%(epoch+1))
        torch.save(model.state_dict(),path)
''' |
st46474 | bruce:
with torch.no_grad():
Also add model.eval() after the above statement to switch from train mode to eval mode. |
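A minimal sketch of that placement, with names taken loosely from the code above:
with torch.no_grad():
    model.eval()                      # disable dropout, use running batch-norm statistics
    for x_test, y_test in test_dl:
        y_val = model(x_test.cuda())
        # ... compute SSIM / other validation metrics ...
model.train()                         # switch back before the next training epoch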
st46475 | @bruce Is the data loading time the same as it used to be in Keras? This also contributes to slow training; checking the data loading (batch loading) time might help. |
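One quick way to check this (a sketch reusing the train_dl from the code above): time a full pass over the loader with no model work at all.
import time

start = time.time()
for data, label in train_dl:
    pass                              # only fetch batches, no forward/backward
print('pure data loading: %.1f s per epoch' % (time.time() - start))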
st46476 | I am attempting to use libtorch on AWS Lambda, which has a limit of 250MB. However, my libtorch_cpu.so is 290MB, so I am not able to upload it to Lambda.
I have a CPU build from here:
https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-1.7.0%2Bcpu.zip |
st46477 | My main concern is how and where I should keep the log_loss metric in my PyTorch training and evaluation loops in order to calculate the mean column-wise log_loss value. As I am doing multilabel binary classification, there are 206 prediction columns in total.
This is how I am writing my evaluation loop,
def eval_loop_fn(data_loader, model, device):
    running_loss = 0.0
    score = 0.0
    model.eval()
    for batch_index, dataset in enumerate(data_loader):
        tabular_data = dataset["tabular_data"]
        output = dataset["output"]
        tabular_data = tabular_data.to(device, dtype=torch.float)
        targets = output.to(device, dtype=torch.float)
        outputs = model(tabular_data)
        loss = loss_fn(outputs, targets)
        running_loss += loss.item()
    valid_loss = running_loss / float(len(val_data))
    xm.master_print('validation Loss: {:.4f} '.format(valid_loss)) |
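One possible way to get that metric inside the loop above (a sketch; outputs are assumed to be raw logits of shape [batch, 206] and targets 0/1 floats of the same shape):
import torch.nn.functional as F

# per-column binary cross-entropy (log loss), then the mean over the 206 columns
per_column = F.binary_cross_entropy_with_logits(outputs, targets, reduction='none').mean(dim=0)
mean_column_wise_log_loss = per_column.mean().item()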
st46478 | I just tried to install PyTorch on a Mac with 64-bit Python in PyCharm,
pip install torch torchvision torchaudio
the result
ERROR: Could not find a version that satisfies the requirement torchaudio (from versions: none)
ERROR: No matching distribution found for torchaudio
Is there something wrong with the setting or what ? |
st46479 | Solved by kang-im in post #2 |
st46480 | I found how to install Torch:
pip install --pre torch -f "https://download.pytorch.org/whl/nightly/${WHEEL_DIR}torch_nightly.html" |
st46481 | Hi,
I am a bit confused about where to exactly apply dropout in CNN network.
In the below model I applied dropout in both of the Conv layers and also in the linear layer.
But I am not sure where I need to apply it: after ReLU or before ReLU in the linear layers?
And I am also not sure whether I placed dropout in the correct place in the Conv layers.
I am experimenting with MC-dropout outputs of the CNN model for uncertainty metrics.
I got different mean confidence values and uncertainty values when I used dropout before or after F.relu for fc1.
class CNN_dropout(nn.Module):
    def __init__(self):
        super(CNN_dropout, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)
        self.drop_layer = nn.Dropout(p=0.5)

    def last_hidden_layer_output(self, x):
        x = F.max_pool2d(F.relu(self.drop_layer(self.conv1(x))), 2)
        x = F.max_pool2d(F.relu(self.drop_layer(self.conv2(x))), 2)
        x = x.view(-1, 320)
        x = F.relu(self.drop_layer(self.fc1(x)))
        return x

    def forward(self, x):
        x = self.last_hidden_layer_output(x)
        x = self.fc2(x)
        return x
My experiment results differ when I switch
from
def last_hidden_layer_output(self, x):
    x = F.max_pool2d(F.relu(self.drop_layer(self.conv1(x))), 2)
    x = F.max_pool2d(F.relu(self.drop_layer(self.conv2(x))), 2)
    x = x.view(-1, 320)
    x = F.relu(self.drop_layer(self.fc1(x)))
    return x
to
def last_hidden_layer_output(self, x):
    x = F.max_pool2d(F.relu(self.drop_layer(self.conv1(x))), 2)
    x = F.max_pool2d(F.relu(self.drop_layer(self.conv2(x))), 2)
    x = x.view(-1, 320)
    x = F.relu(self.fc1(x))
    x = self.drop_layer(x)
    return x
Once I am sure where exactly to apply dropout in the linear layer, maybe I also need to change the place of dropout in the convolution layers too?
To me, these seem to be a better choice but I am not sure unfortunately.
x = F.relu(self.drop_layer(self.fc1(x)))
x = F.max_pool2d(F.relu(self.drop_layer(self.conv1(x))), 2) |
st46482 | Or is it like this?
x = self.drop_layer(F.max_pool2d(F.relu(self.conv1(x)), 2))
and
x = self.drop_layer(F.relu(self.fc1(x))) |
st46483 | Hi!
I’m experiencing one or more problems with my training.
First problem:
training freeze:
Experienced at random even after hours of training (up to 12h, 5 epochs).
After it happens the cpu/gpu usage is very low but the process is still running.
No warning, or errors.
Second problem:
training shutdown:
Experienced only one time after trying to restart training.
error:
malloc(): mismatching next->prev_size (unsorted)
Aborted (core dumped)
Unfortunately I’ve modified a lot of components of the model since the last functioning version, so I’m not able to identify the source exactly. I suspect it has something to do with a modified version of collate_fn.
def default_collate_mod(batch):
    r"""Puts each data field into a tensor with outer dimension batch size"""
    elem = batch[0]
    elem_type = type(elem)
    if isinstance(elem, torch.Tensor):
        out = None
        elem_size = elem.size()
        if not all(elem.shape == elem_size for elem in batch):
            # you can feed this to a linear layer and then separate again in batches
            return ([el.shape[0] for el in batch], torch.cat(batch))
        if torch.utils.data.get_worker_info() is not None:
            # If we're in a background process, concatenate directly into a
            # shared memory tensor to avoid an extra copy
            numel = sum([x.numel() for x in batch])
            storage = elem.storage()._new_shared(numel)
            out = elem.new(storage)
        return torch.stack(batch, 0, out=out)
    elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
            and elem_type.__name__ != 'string_':
        if elem_type.__name__ == 'ndarray' or elem_type.__name__ == 'memmap':
            # array of string classes and object
            if np_str_obj_array_pattern.search(elem.dtype.str) is not None:
                raise TypeError(default_collate_err_msg_format.format(elem.dtype))
            return default_collate_mod([torch.as_tensor(b) for b in batch])
        elif elem.shape == ():  # scalars
            return torch.as_tensor(batch)
    elif isinstance(elem, float):
        return torch.tensor(batch, dtype=torch.float64)
    elif isinstance(elem, int_classes):
        return torch.tensor(batch)
    elif isinstance(elem, string_classes):
        return batch
    elif isinstance(elem, container_abcs.Mapping):
        return {key: default_collate_mod([d[key] for d in batch]) for key in elem}
    elif isinstance(elem, tuple) and hasattr(elem, '_fields'):  # namedtuple
        return elem_type(*(default_collate_mod(samples) for samples in zip(*batch)))
    elif isinstance(elem, container_abcs.Sequence):
        # check to make sure that the elements in batch have consistent size
        it = iter(batch)
        elem_size = len(next(it))
        if not all(len(elem) == elem_size for elem in it):
            # Don't display the warning, just return the list
            return batch
        transposed = zip(*batch)
        return [default_collate_mod(samples) for samples in transposed]

    raise TypeError(default_collate_err_msg_format.format(elem_type))
versions:
pytorch 1.7
cuda 11.1
Edit:
It appears that there is also some problem with the memory of my GPU; maybe that is what caused the second error?
The amount of free memory looks lower than expected:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.32.00 Driver Version: 455.32.00 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 2060 On | 00000000:06:00.0 On | N/A |
| 42% 47C P2 28W / 160W | 3724MiB / 5931MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1093 G /usr/lib/xorg/Xorg 100MiB |
| 0 N/A N/A 1516 G /usr/bin/plasmashell 73MiB |
| 0 N/A N/A 2506 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 2557 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 3736 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 5249 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 5366 G /usr/lib/firefox/firefox 2MiB |
| 0 N/A N/A 5423 G /usr/lib/firefox/firefox 2MiB |
+-----------------------------------------------------------------------------+
thanks |
st46484 | Do you mean RAM or GPU memory?
I’m actually encountering a Segmentation Fault (core dumped). My fear is that it’s related to this problem: https://github.com/pytorch/pytorch/issues/31758.
I’m running the script in debug; I’ll update the post as soon as I get the error.
Also, I got the error just by running an empty training loop, just loading the data into RAM. |
st46485 | System ram issues could manifest as all kinds of weird/random errors, your malloc error is suspicious in that regard. Could also only happen when power supply is under stress. |
st46486 | gdb reports No stack.
I’ll run a ram memtest.
Do you have any advice on the settings of the memtester, MB and number of iterations?
I have 32 GB of ram. |
st46487 | that’s program specific, just run it for some time (at least 5 minutes, I’d guess) |
st46488 | I got this; can this be the reason for my issue?
Loop 37/50:
Stuck Address : ok
Random Value : ok
Compare XOR : ok
Compare SUB : ok
Compare MUL : ok
Compare DIV : ok
Compare OR : ok
Compare AND : ok
Sequential Increment: ok
Solid Bits : testing 41FAILURE: 0xffffffffffffffff != 0xffffffffffffdfff at offset 0x00f5dbf8.
Block Sequential : ok
Checkerboard : ok
Bit Spread : ok
Bit Flip : ok
Walking Ones : ok
Walking Zeroes : ok
8-bit Writes : ok
16-bit Writes : ok
Loop 45/50:
Stuck Address : ok
Random Value : ok
Compare XOR : ok
Compare SUB : ok
Compare MUL : ok
Compare DIV : ok
Compare OR : ok
Compare AND : ok
Sequential Increment: ok
Solid Bits : ok
Block Sequential : ok
Checkerboard : ok
Bit Spread : ok
Bit Flip : testing 76FAILURE: 0xffffffffffffddff != 0xfffffffffffffdff at offset 0x003f0c00.
Walking Ones : ok
Walking Zeroes : ok
8-bit Writes : ok
16-bit Writes : ok |
st46489 | Yes. Try playing with BIOS memory settings (CL/frequency); otherwise you’ll need to find the problematic hardware part. |
st46490 | I recently added some RAM and checked the BIOS settings; probably it was defective from the start.
Thanks for your help. |
st46491 | I am unfortunately back, this time with nothing wrong on memtest.
The names of the files printed below come from this part of the code; they are printed just to be sure that it was not a problem related to corrupted data (I can load those files just fine):
try:
    with lz4.frame.open(f'{self.root_dir}/{name}', 'rb') as f:
        data = pickle.load(f)
except:
    print(name)
It gets stuck without doing anything; then, when I stop the program:
Traceback (most recent call last):
File "train_AD.py", line 332, in <module>
sample_5383326.lz4
sample_12725641.lz4
sample_4159972.lz4
sample_2696410.lz4
train_net(args,cfg, DEBUG, model)
File "train_AD.py", line 211, in train_net
for data in train_dataloader:
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1068, in _next_data
idx, data = self._get_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1034, in _get_data
success, data = self._try_get_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 872, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/usr/lib/python3.8/multiprocessing/queues.py", line 107, in get
if not self._poll(timeout):
File "/usr/lib/python3.8/multiprocessing/connection.py", line 257, in poll
return self._poll(timeout)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 424, in _poll
r = wait([self], timeout)
File "/usr/lib/python3.8/multiprocessing/connection.py", line 931, in wait
ready = selector.select(timeout)
File "/usr/lib/python3.8/selectors.py", line 415, in select
fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt |
st46492 | If particular files fail, it may be some (bit) corruption not detected by pickle. Otherwise, try using non-zero timeout argument for DataLoader and make sure exceptions are not swallowed up (for example, with jupyter stderr is not printed in browser, I think). You may also check if this is multiprocessing related by disabling workers. |
st46493 | I’m just running the script in the terminal; do I have to worry about exceptions being swallowed up?
Also, I’m not sure about the way I coded the try/except. Is there a specific exception I should handle so that other important exceptions are not missed?
Edit: the error below is probably just due to not having enough RAM. I will see if some error happens with 0 workers.
With 8 workers the error below happens within 10 seconds, probably the time required to load a single batch; I am not sure if it is the same error as the previous one.
With 0 workers I got no error within 16 minutes of execution. I will leave it running and see if something happens.
I don’t feel like it is guaranteed to never happen even after hours and hours of training, since I previously managed to run the code for several hours with 4 workers.
I haven’t experimented enough with this, but it is clear that I will never fetch the first batch with 8 workers, so it looks strongly dependent on the number of workers.
Traceback (most recent call last):
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 872, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/usr/lib/python3.8/multiprocessing/queues.py", line 116, in get
return _ForkingPickler.loads(res)
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 88, in rebuild_tensor
t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/_utils.py", line 133, in _rebuild_tensor
return t.set_(storage, storage_offset, size, stride)
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 4157) is killed by signal: Killed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "train_AD.py", line 333, in <module>
train_net(args,cfg, DEBUG, model)
File "train_AD.py", line 211, in train_net
for data in train_dataloader:
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1068, in _next_data
idx, data = self._get_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1034, in _get_data
success, data = self._try_get_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 885, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 4157) exited unexpectedly
Below with a timeout different from zero:
Traceback (most recent call last):
File "train_AD.py", line 334, in <module>
train_net(args,cfg, DEBUG, model)
File "train_AD.py", line 212, in train_net
for data in train_dataloader:
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1068, in _next_data
idx, data = self._get_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1021, in _get_data
raise RuntimeError('DataLoader timed out after {} seconds'.format(self._timeout))
RuntimeError: DataLoader timed out after 1 seconds |
st46494 | Lorenzo_Atzeni:
File "/usr/lib/python3.8/multiprocessing/queues.py", line 116, in get
return _ForkingPickler.loads(res)
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/multiprocessing/reductions.py", line 88, in rebuild_tensor
t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/_utils.py", line 133, in _rebuild_tensor
return t.set_(storage, storage_offset, size, stride)
It is either OOM or corrupted unpickled data. In the latter case, it is hard to guess the reason, assuming that files are fine. |
st46495 | That is OOM. To be honest I think every error is OOM.
The problem is kinda solved (I hope) by reducing the number of workers to 3 (12 h of training right now).
With 3 workers I have 5 GB of free memory, not really sure why it would need so much free memory, but with 4 workers it crashes after hours.
It’s not a memory leak either, since the memory usage doesn’t change over time.
I don’t think it’s worth investigating any further, since it works fine with 3 workers and I wouldn’t even know where to start.
Thanks for the help. |
st46496 | I’m sorry for making this issue more confusing than it’s supposed to be.
In the end, with 3 workers I had to restart my PC, because even the UI froze.
I’m currently running the script with 0 workers, and for now it’s working. With 3 workers I was also experiencing lag spikes, and what looked like a UI refresh (I don’t know how to describe it better).
Following what you advised me, here is what happens if I use a timeout in the DataLoader:
Traceback (most recent call last):
File "train_AD.py", line 332, in <module>
train_net(args,cfg, DEBUG, model)
File "train_AD.py", line 215, in train_net
for inputs,x_2,target_availabilities,target_positions,targert_yaws in train_dataloader:
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1068, in _next_data
idx, data = self._get_data()
File "/home/lorenzo/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1021, in _get_data
raise RuntimeError('DataLoader timed out after {} seconds'.format(self._timeout))
RuntimeError: DataLoader timed out after 3 seconds |
st46497 | I suggested timeouts only because they reveal worker crashes; if you also have memory leaks or something, you should investigate that. You’re probably timing out as the system runs out of resources (swap would be my first guess). |
st46498 | RAM and GPU memory are pretty much half free. They don’t seem to increase over time. Also, the only thing I do in the DataLoader is load an lz4 archive and modify numpy arrays.
The freezing happened to me before, just because the RAM was using swap memory; this doesn’t seem to be the case now though.
Going from 3 workers and batch 128 to 4 workers and batch 32 doesn’t seem to help with workers being timed-out. |
st46499 | Could be a GPU issue with growing memory fragmentation or shared use (i.e. for display output). Worker timeouts are probably just symptoms of something; I’d set that to 0 or something like 60, and just monitor system performance. |
st46500 | Just the process of loading the data causes the workers to be timed out. The data isn’t loaded on the GPU yet. |
st46501 | Ah, I see. If you have a reproducible epoch one hanging, it should be possible to localize it by attaching a debugger (python or gdb), or maybe just with tracing messages. If it is a genuine deadlock, it can be hard to find a reason. But maybe it is some stupid linux quota setting or something. I can only guess at this point, sorry. |
st46502 | Whatever input I give, it predicts the same value; example:
Input:
[90, 91, 26, 62, 92, 93, 26, 94, 95, 96]
incumbering soil and washed into immediate and glittering popularity possibly
Masked Input:
[90, 91, 26, 62, 92, 93, 26, 1, 95, 96]
incumbering soil and washed into immediate and unnk popularity possibly
Output:
[90, 91, 26, 62, 92, 93, 26, 33, 95, 96]
incumbering soil and washed into immediate and the popularity possibly
As you can see, it always predicts the “the” token.
Model:
class Kemal(nn.Module):
    def __init__(self, src_vocab_size, embedding_size, num_heads, dim_forward, num_encoder_layers, max_len, src_pad_idx, dropout, device):
        super(Kemal, self).__init__()
        self.src_word_embedding = nn.Embedding(src_vocab_size, embedding_size)
        self.src_position_embedding = nn.Embedding(max_len, embedding_size)
        self.device = device
        self.encoder_norm = nn.LayerNorm(embedding_size)
        self.encoder_layer = nn.TransformerEncoderLayer(embedding_size, num_heads, dim_feedforward=dim_forward, dropout=dropout, activation='gelu')
        self.encoder = nn.TransformerEncoder(self.encoder_layer, num_encoder_layers, self.encoder_norm)
        self.fc = nn.Linear(embedding_size, src_vocab_size)
        self.src_pad_idx = src_pad_idx

    def make_src_pad_mask(self, src):
        src_mask = src.transpose(0, 1) == self.src_pad_idx
        return src_mask
        # (N, src_len)

    def forward(self, src):
        src_seq_lenght, N = src.shape
        src_mask = nn.Transformer.generate_square_subsequent_mask(None, src_seq_lenght).to(self.device)
        src_positions = (
            torch.arange(0, src_seq_lenght).unsqueeze(1).to(self.device)
        )
        embed_src = (self.src_word_embedding(src) + self.src_position_embedding(src_positions))
        src_padding_mask = self.make_src_pad_mask(src)
        out = self.encoder(embed_src, mask=src_mask, src_key_padding_mask=src_padding_mask)
        out = self.fc(out)
        return out
Thanks in advance |
st46503 | I forgot to add the training loop; here it is:
# Hyperparameters
src_vocab_size = len(vocab)
d_model = 512
nhead = 8
dim_forward = 1024
num_layers = 12
max_len = 10
pad_idx = 0
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
load_model = False
save_model = True
learning_rate = 1e-5
src_pad_idx = 0

model = Kemal(src_vocab_size, d_model, nhead, dim_forward, num_layers, max_len, pad_idx, 0.3, device).to(device)
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
#scheduler = optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
loss_list = []

model.train()
for epoch in range(500):
    for sequence in range(int(len(encoded_text[0:30])/10)):
        x, y = get_data(encoded_text[0:30], sequence)
        x = masking(x)
        x = torch.LongTensor(x).to(device)
        y = torch.LongTensor(y).to(device)
        x = x.reshape(1, -1)
        out = model(x)
        out = out.view(-1, len(vocab))
        optimizer.zero_grad()
        loss = criterion(out, y)
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
        if sequence % 1000 == 0 and sequence != 0:
            print("Loss: ", loss, "S: ", sequence)
        optimizer.step()
        #scheduler.step()
    if epoch % 1 == 0:
        if len(loss_list) > 0:
            print(f'Epoch: {epoch}, Step: {sequence}, Loss: {sum(loss_list) / len(loss_list)}')
    loss_list.append(loss) |
st46504 | Your training loop part looks fine to me.
Are you referring to an open-source codebase for this?
Can you overfit your model to a small dataset first and ensure the model is training perfectly before generalizing? You can also set the random seeds to ensure consistent masking. |
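For the seeding part, a minimal sketch (nothing here is specific to the model above):
import random
import numpy as np
import torch

def set_seed(seed=0):
    random.seed(seed)                 # controls the random masking positions
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(0)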
st46505 | Hello, thanks for the reply.
I tried to develop the model using Aladdin Persson’s and the PyTorch examples (word-level language model), but neither was designed for my purpose, so I tried to play with a mixture of them.
I tried overfitting with 2 sentences and 5000 epochs, but it’s the same.
I already do random masking:
def masking(src):
    mask_idx = random.randint(1, 10)
    src[mask_idx] = 1
    return src
Could it be that my model isn’t suitable for the purpose? |
st46506 | Hello,
how would I automate the connection between a backbone and a head?
I’m interested in developing some code for replacing the backbone of any model, if its backbone is known by the code.
So let’s say I have a model using a resnet backbone, I detected the backbone and the head, and I would like to replace resnet by mobilenet.
The problem I see is mainly the mismatch between dimensions; I guess I need a layer in between to make the connection. |
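A minimal sketch of one way to bridge the mismatch, assuming hypothetical channel counts c_new (what the new backbone outputs) and c_old (what the existing head expects):
import torch.nn as nn

class AdaptedModel(nn.Module):
    def __init__(self, backbone, head, c_new, c_old):
        super().__init__()
        self.backbone = backbone
        self.adapter = nn.Conv2d(c_new, c_old, kernel_size=1)   # 1x1 conv to match channel dims
        self.head = head

    def forward(self, x):
        features = self.backbone(x)
        return self.head(self.adapter(features))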
st46507 | Hi, I’m a newbie in PyTorch.
I’ve been wondering if there is any reference or ongoing (or finished) project about offloading tasks to an ARM processor.
I’ve been wondering this for the reason below.
As far as I’m aware, target devices such as GPUs, FPGAs, etc. are used for offloading the computation of some NN models.
The target devices are assumed to be connected to the motherboard via PCIe interfaces.
My project is about offloading the computation workload of an NN model to an ARM processor currently, and to other DSPs in the future.
So I’m working on building PyTorch to detect an ARM processor on the PCIe bus and include a library for ARM-processor offloading execution.
In summary, my questions are these.
Is there any project or reference that I can refer to about offloading a task/computation to an ARM processor via PCIe interfaces?
If not, what about the NEON architecture? I can find some NEON execution code in the PyTorch source code, so I wonder if it is possible to offload a task to a NEON processor on a board which is connected to the motherboard via a PCIe interface? |
st46508 | hi.
I’m using a pytorch profiler to analyze memory consumption.
I want to know the units.
Mb, Kb, b
Does this mean bytes or bits?
Megabytes or megabits? |
st46509 | Hey there, the answer is quite clear: we always consider bytes as the unit of memory consumption for our primitive datatypes. |
st46510 | First I train the autoencoder for a number of epochs on normal images and save the model. Then I load it and evaluate it on a validation dataset consisting of normal and abnormal images, and calculate the reconstruction loss for each image in the validation set. Then I add each reconstruction loss and the label of that image (normal or abnormal) to two arrays. Then I use sklearn’s SVM to fit the validation set. Then I predict the test images using the SVM and calculate the accuracy.
for data in dataloader:
    images, label, paths = data
    x = images.to(device)
    with torch.set_grad_enabled(False):
        x_reconstructed = model(x)
    for i in range(len(x)):
        reconstructed_loss = reconstruct_loss_fn(x_reconstructed[i], x[i])
        if test == "abnormal":
            label = 0
        if test == "normal":
            label = 1
        trainDdata.append(reconstructed_loss)
        targets.append(label)

model1 = svm.SVC()
trainDdata = (np.array(trainDdata)).reshape(-1, 1)
targets = (np.array(targets)).reshape(-1, 1)
model1.fit(trainDdata, targets.ravel())

testDdata = (np.array(testDdata)).reshape(-1, 1)
testTargets = (np.array(testTargets)).reshape(-1, 1)
targets2 = model1.predict(testDdata)
targets2 = (np.array(targets2)).reshape(-1, 1)
equals = targets2 == testTargets.reshape(*targets2.shape)
print(f"Accuracy = {np.mean(equals.astype(int))*100:.3f}%") |
I’m new to PyTorch and I have doubts about whether this is correct. |
st46511 | Hi,
I am training a multi-class multi-label model with torch.nn.BCEWithLogitsLoss() on 8M data points for 10 epochs. I have 54 classes and a single image can have multiple labels. During training the loss decreases nicely.
However, when I look at my trained model’s outputs for the last epoch, I see that the model is outputting negative values only. For example, for one sample with the following label:
target =
tensor([[0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 0., 0., 0.]])
I get the following output from the model:
output =
tensor([[-1.2380, -2.3283, -2.3025, -2.1275, -2.1020, -2.3684, -3.4669, -3.4503,
-2.1905, -1.8565, -3.4215, -3.5318, -3.5715, -4.3836, -4.5215, -6.2270,
-3.8660, -3.7280, -4.6043, -4.7601, -9.5219, -9.4969, -9.4392, -8.0596,
-6.0773, -5.7972, -4.2495, -4.4533, -4.2641, -4.1068, -4.9987, -4.9321,
-7.9726, -7.4475, -4.8016, -5.6634, -6.3762, -6.0103, -6.7561, -3.3259,
-3.8778, -6.7682, -6.5663, -4.0945, -3.0747, -5.5408, -5.6429, -5.9659,
-5.8574, -7.6435, -7.8895, -6.6514, -6.5506, -5.0583]],
device='cuda:0')
So if I do sigmoid on top of this, I won’t get any good prediction. |
st46512 | Hi Amir -
amirhf:
target =
tensor([[0., 0., 1., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 1., 0., 1., 0., 0., 0., 0., 0.]])
The short answer is that your dataset is “unbalanced,” so you
should try using the pos_weight argument when you construct
your BCEWithLogitsLoss loss criterion.
Looking at your target, and naively assuming that all positive
class labels appear about equally frequently, it appears that any
given class will be labelled positive only once in about every
nine images.
So your classifier could do a pretty good job by just predicting
negative for all of your classes all of the time.
It is the purpose of the pos_weight argument to address this
issue by weighting the less-frequent positive samples more heavily
than the more-frequent negative samples. Doing so will penalize
a model that simply tries to predict negative all the time.
It’s quite likely that some classes have positive labels more often
than others. pos_weight takes a vector of per-class weights so
that you can reweight each class separately. A common (and
reasonable) choice for the class weights is:
weights[i] = total_samples / number_of_positive_class_i_samples[i]
Best.
K. Frank |
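A minimal sketch of that weighting, assuming targets is a [num_samples, 54] tensor holding the 0/1 labels of the training set:
import torch

pos_counts = targets.sum(dim=0)                             # positives per class
pos_weight = targets.shape[0] / pos_counts.clamp(min=1.0)   # total_samples / positive_samples
criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)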
st46513 | Hi @KFrank,
Thank you for the analysis. However, I wonder how you calculated this:
Looking at your target, and naively assuming that all positive
class labels appear about equally frequently, it appears that any
given class will be labelled positive only once in about every
nine images.
The target that I have here is only for one image, meaning that some classes are present for only one image! |
st46514 | Hi Amir!
amirhf:
I wonder how you calculated this:
Yes, this was not meant to be a realistic calculation. I was illustrating
an oversimplified estimate of the class weights based on the single
data point you provided.
The (unrealistic) details: Out of the 54 binary labels in your target,
six were positive. If one assumes (without any evidence) that positive
labels occur equally frequently for all 54 classes, and further assumes
(again, without any evidence) that the single target you gave is in
some sense randomly representative of all of your targets, then one
would conclude that, across your ensemble of targets, any given
class is labelled positive about one time in nine.
Under these assumptions, you would want to use the same value
of 9.0 for the pos_weight for all of your 54 classes.
Of course, these assumptions are probably not correct, so you should
look at a representative sample of your training data to determine the
per-class pos_weight values.
Best.
K. Frank |
st46515 | In the torchsparse repo, there is a custom datatype SparseTensor:
class SparseTensor:
    def __init__(self, feats, coords, cur_tensor_stride=1):
        self.F = feats
        self.C = coords
        self.s = cur_tensor_stride
        self.coord_maps = {}
        self.kernel_maps = {}

    def check(self):
        if self.s not in self.coord_maps:
            self.coord_maps[self.s] = self.C

    def cuda(self):
        assert type(self.F) == torch.Tensor
        assert type(self.C) == torch.Tensor
        self.F = self.F.cuda()
        self.C = self.C.cuda()
        return self

    def detach(self):
        assert type(self.F) == torch.Tensor
        assert type(self.C) == torch.Tensor
        self.F = self.F.detach()
        self.C = self.C.detach()
        return self

    def to(self, device, non_blocking=True):
        assert type(self.F) == torch.Tensor
        assert type(self.C) == torch.Tensor
        self.F = self.F.to(device, non_blocking=non_blocking)
        self.C = self.C.to(device, non_blocking=non_blocking)
        return self

    def __add__(self, other):
        tensor = SparseTensor(self.F + other.F, self.C, self.s)
        tensor.coord_maps = self.coord_maps
        tensor.kernel_maps = self.kernel_maps
        return tensor
And I want to export an ONNX model, but when I ran torch.onnx.export, I got this error:
RuntimeError: Only tuples, lists and Variables supported as JIT inputs/outputs.
Dictionaries and strings are also accepted but their usage is not recommended.
But got unsupported type SparseTensor
This problem probably applies to other custom data types as well.
I also noticed this line in torch/onnx/__init__.py:
What do you mean by this?
Any non-Tensor arguments (including None) will be hard-coded into the exported model
Thanks in advance for any help! |
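One workaround for the input/output type error specifically (a sketch, not from the repo) is to export a thin wrapper that takes and returns plain tensors and rebuilds the SparseTensor inside forward; any custom ops used inside the model would still need their own ONNX support:
import torch

class ExportWrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, feats, coords):
        x = SparseTensor(feats, coords)   # rebuild the custom type from plain tensors
        out = self.model(x)
        return out.F, out.C               # hand plain tensors back to the exporter

# torch.onnx.export(ExportWrapper(model), (example_feats, example_coords), 'model.onnx')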
st46516 | Hi there,
I was wondering if someone could shed some light on the following questions:
Why is ReduceLROnPlateau the only scheduler without a get_lr() method?
How do I retrieve the learning rate in this case? Previously, without a scheduler, I would do optimizer.param_groups[0]['lr'], but now, after using the scheduler and printing optimizer.param_groups[0]['lr'], I see no change in the learning rate.
How do I save the optimizer state if I’m using a scheduler? Previous examples I’ve seen show optimizer.state_dict() to save the optimizer state; should we replace that with scheduler.state_dict()?
Thanks! |
st46517 | Solved by ptrblck in post #2 |
st46518 | The scheduler might track multiple param_groups as described here.
It should still work as shown in this example:
model = nn.Linear(10, 2)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, patience=10, verbose=True)

for i in range(25):
    print('Epoch ', i)
    scheduler.step(1.)
    print(optimizer.param_groups[0]['lr'])
No, you should still save the optimizer’s state_dict (and also call optimizer.step(), as the scheduler is not a replacement for the optimizer). Additionally, you could also store the scheduler’s state_dict. |
st46519 | Thanks for the great answer!
By the way, is there any way to obtain the learning rate from the ReduceLROnPlateau scheduler?
When I’m using other schedulers they all have a get_last_lr() function, but since ReduceLROnPlateau does not inherit from _LRScheduler, this function is not defined. Is there an equivalent of get_last_lr() for this scheduler?
Thanks a lot! |
st46520 | You could use the internal scheduler._last_lr attribute, the scheduler.state_dict() or alternatively you could check the learning rate in the optimizer via optimizer.param_groups[0]['lr'].
Note that the first two approaches would only work after the first scheduler.step() call. |
st46521 | Hello,
I have work with LSTM in PyTorch, but I have faced a new challenge.
Assume you have some sequences during training, each of which contains 10 time steps, like this:
a0, a1, a2, a3, a4, a5, a6, a7, a8, a9
In this form we can easily give these sequences to the LSTM and train it.
Now my problem is this: in some sequences, some time steps are not available. For example, the above sequence can look like this:
a0, a1, a2, a3, , a5, a6, , a8, a9
In this form, for this sequence, a4 and a7 are not available during training.
How should I handle this? |
st46522 | It depends on your exact data and task.
In text processing one often has to deal with “missing” words, i.e., words that are not available in the index (e.g., rare words, typos, named entities). Such missing words are generally represented by a special word or token. For example, “the loudd noise woke me up” becomes “the <unk> noise woke me up”. Maybe such a special “not available” token/value works for you as well.
Alternatively, is it a problem if the sequence is simply shorter? RNNs can handle sequences of different lengths. |
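For the shorter-sequence option, a minimal sketch using packed sequences (shapes and lengths are made up):
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = torch.nn.LSTM(input_size=1, hidden_size=16, batch_first=True)

padded = torch.randn(4, 10, 1)            # (batch, max_len, features), already zero-padded
lengths = torch.tensor([10, 8, 9, 8])     # true lengths after dropping the missing steps

packed = pack_padded_sequence(padded, lengths, batch_first=True, enforce_sorted=False)
out, (h, c) = lstm(packed)
out, _ = pad_packed_sequence(out, batch_first=True)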
st46523 | I have used DDP in my Transformer model, but when I execute, init_process_group is hanging.
command used: python -m torch.distributed.launch --nnodes=1 --node_rank=1 --nproc_per_node=1 --use_env standard.py
With the above command, my goal is to run my model on a single node with a single GPU.
The system has 8 GPUs, but I would like to use a single GPU, just to verify the DDP API is working. |
st46524 | Was DDP working on this machine before and are you able to use e.g. all 8 GPUs or are all calls hanging? |
st46525 | Hello,
I have a model whose architecture is found by a search method. Many such models are saved as checkpoints, but it’s challenging to restore and run them again because the exact architecture was programmatically determined. Each time I would want to load one of the checkpoints, it would require re-writing by hand the model architecture to exactly fit the particular one of the checkpoint file.
It would be wonderful if there were a way to do this:
module = SomeModule()
checkpoint = torch.load(checkpoint_file)
module.restoreArchitecture(checkpoint["architecture"])
module.load_state_dict(checkpoint["state"])
In principle this could be done by saving the architecture configuration and building a parser, though it would get hairy for custom objects and expressions, where markup isn’t as expressive as code.
Is there a canonical way of doing this?
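A common pattern (a sketch; build_model is a hypothetical factory that the search code would already have) is to store the architecture configuration next to the weights:
# saving: keep whatever the search produced alongside the weights
checkpoint = {
    "architecture": {"depth": 4, "width": 128},   # example config, not a real schema
    "state": model.state_dict(),
}
torch.save(checkpoint, "ckpt.pt")

# restoring later, without re-writing the architecture by hand
ckpt = torch.load("ckpt.pt")
model = build_model(**ckpt["architecture"])       # hypothetical factory
model.load_state_dict(ckpt["state"])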
It seems a reasonably popular request:
Load pretrained model without creating architecture manually (vision):
“How to load a base network such as Resnet152 or efficient net model. Do I need to recreate the architecture and copy the pretrained weights to my architecture? Is there a way to use the model without creating the architecture and copying the parameters?”
How to save/load Torch models?
“Is there any way to load a trained model without declaring the class definition before? I want the model architecture as well as parameters to be loaded.”
stackoverflow.com: How to save model architecture in PyTorch? |
st46526 | KLDivLoss can take a weight parameter but the docs don’t specify how it should be formatted. What should the format be? |
st46527 | It should be a 1D tensor having as many elements as you have classes. We’ll have to add that to the docs, thanks for reporting that! |
st46528 | Hi there
I have trained a 1D autoencoder NN, whose details you can find below:
Autoencoder(
(encoder): Sequential(
(0): Conv1d(1, 5, kernel_size=(5,), stride=(2,))
(1): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
(2): ReLU(inplace=True)
(3): Conv1d(5, 10, kernel_size=(5,), stride=(2,))
(4): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
(5): ReLU(inplace=True)
(6): Conv1d(10, 15, kernel_size=(5,), stride=(2,))
(7): MaxPool1d(kernel_size=3, stride=1, padding=0, dilation=1, ceil_mode=False)
(8): ReLU(inplace=True)
(9): Conv1d(15, 20, kernel_size=(4,), stride=(1,))
(10): ReLU(inplace=True)
)
(decoder): Sequential(
(0): ConvTranspose1d(20, 15, kernel_size=(1,), stride=(4,))
(1): ReLU(inplace=True)
(2): ConvTranspose1d(15, 10, kernel_size=(2,), stride=(4,))
(3): ReLU(inplace=True)
(4): ConvTranspose1d(10, 5, kernel_size=(9,), stride=(2,))
(5): ReLU(inplace=True)
(6): ConvTranspose1d(5, 1, kernel_size=(10,), stride=(2,))
(7): ReLU(inplace=True)
)
)
My loss decreases like a 1/f function; however, when I feed my NN some data, I get almost the same output for different inputs. Has anyone had a similar experience before?
I should mention that my training set was 65536 signals, each with a length of 94 points.
When I said feeding my data, I meant I did this:
for i in range(65536):
    out_put[i] = model(data_pixel2[i].unsqueeze(0))
where data_pixel2 is my input.
Appreciate any comments. |
st46529 | Could you try to remove the last ReLU from your decoder and rerun the code again?
I’ve seen similar issues, where the last non-linearity was messing with the training procedure. |
st46530 | thank you Patrick however it became worse than what it was before. Don’t you think if there would be a problem with the way that I normalized my input data? below you can see how I did that:
new_data = np.empty((256,256,94))
for i in range(256):
    for j in range(256):
        for k in range(94):
            new_data[i,j,k] = ((data[i,j,k])-(mean))/(max-min)
data is a tensor where I stored my signals, and new_data was used as the input of my autoencoder.
Regards |
st46531 | How did you calculate mean, min, and max?
Usually you would normalize the input signals using the channel stats, but your use case might be different. |
st46532 | thank you for your reply.
I did it in the below way:
mean = np.mean(data)
max = np.amax(data)
min = np.amin(data)
new_data = np.empty((256,256,94))
for i in range(256):
    for j in range(256):
        for k in range(94):
            new_data[i,j,k] = (data[i,j,k]-mean)/(max-min)
where data is my input, which has the shape (256,256,94); it’s a tiff file. |
st46533 | Since you are calculating the global statistics, you could simply use:
data = (data - mean) / (max - min)
Did you play around with some hyperparameters, e.g. the learning rate, and check if this would change the behavior?
If that doesn’t help, I would recommend trying to overfit your model on a small dataset, e.g. just 10 samples, and make sure your model predicts these samples perfectly.
If that doesn’t work, there might be another error in the code I haven’t found yet. |
st46534 | Yes I did; things didn’t change significantly, and for some sets of hyperparameters it became worse.
Sure, I’m going to try to overfit my model with just a few examples.
Would you please take a look at my training section to see if I am feeding my NN correctly?
learning_rate = 1e-5
num_epochs = 360
model = Autoencoder().cuda()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate,
                             weight_decay=1e-5)

# training loop
loss = 0
epoch_num = 0
error = []
for epoch in range(num_epochs):
    for data in train_loader:
        img = data
        img = Variable(img).cuda()
        output = model(img)
        loss = criterion(output, img)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    error.append(loss.item())
    if epoch%10 == 9:
        epoch_num += epoch_num
        plt.plot(error)
        print('\r Train Epoch : {}/{} \tLoss : {:.4f}'.format(epoch+1, num_epochs, loss/32))

model_save_name = 'Autoencoder 09 Jan 2020 2nd with new normalzation 2nd trial.pt'
path = F"/content/drive/My Drive/{model_save_name}"
torch.save(model.state_dict(), path)
plt.plot(error)
I should remind you that my input is a tensor of shape (65536,1,94), where 65536 is the number of signals, 1 is the number of channels, and 94 is the length of each signal.
Kind Regards, |
st46535 | I cannot see any obvious mistakes, besides the usage of the deprecated Variable, so you can just remove it.
Did you overfit the small sample successfully? |
st46536 | Thank you for your time.
Actually I tried that as well, but got similar results: the same output for all different input values. |
st46537 | I generated some artificial data to see if anything changes;
although the generated data was quite different from my actual data, surprisingly I got the same output, quite similar to what I had with the actual data. |
st46538 | How can I use a sent2vec-based word embedding to convert a document to a PyTorch tensor? My word embedding is in binary format. I do not know the input or output dimension of the embedding. |
st46539 | I have a document corpus (some 3000 documents) and a few query documents. What is the best way to find the relevant documents for a given query using PyTorch? Cosine similarity and tf-idf are not giving the desired results. |
st46540 | Hi,
is there a way to explore the graph of an unknown model in a way that we can recreate the model module by module?
I need to develop a framework that can modify unknown models.
Thank you |
st46541 | Solved by marvosyntactical in post #4 |
st46542 | Basically, the user should be able to provide a model found on the internet or elsewhere; they supply the module class so I can instantiate the model from it, and I should be able to make some modifications to it.
Usually we represent DL models as a graph of computation nodes; that is what I mean by the graph.
So I would like to explore the graph of computation and recreate an altered equivalent module. |
st46543 | Do you mean something like this?
def dfs(module: nn.Module, modify_module: Function, is_root=True) -> nn.Module:
    i = -1
    for i, (child_name, child_module) in enumerate(module.named_children()):
        dfs(child_module, modify_module, is_root=False)
    is_leaf = i == -1
    modify_module(module, is_root, is_leaf)
    return module |
st46544 | Seems like it yes!
So with this I can iterate through the graph modules and eventually recreate the same module, or an altered one? |
st46545 | Yes, you only need to copy.deepcopy your model before calling dfs if you don’t want to modify it in place. modify_module is your function that does what you want with a given child module; for example, you can add more child modules with .add_module, or modify the forward behavior by registering forward pre-hooks (executed before the forward call) and forward hooks (executed after the forward call), see here. |
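A small usage sketch of the dfs helper above (the modification itself is just illustrative):
import copy

def modify_module(module, is_root, is_leaf):
    if is_leaf:
        print("leaf module:", module)     # e.g. inspect, wrap, or replace leaf modules here

new_model = dfs(copy.deepcopy(model), modify_module)   # the original model stays untouched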
st46546 | I think there’s a button on replies that says something like “solved”, or some button at the bottom of the post you can click. |
st46547 | Hi,
I am trying to understand why sigmoid is a class in pytorch and not tanh.
Also, can we directly use torch.tanh as an activation function, or is there some better implementation of tanh as an activation function in PyTorch? |
st46548 | Hi,
There is no reason AFAIK.
Both the class and function version use the exact same implementation and the two exist just to make writing NN nicer.
If you write a custom forward function for your Module, you can use the function version. If you use constructs like Sequential, you will have to use the class version. |
st46549 | Hello Granth -
granth_jain:
I am trying to understand why sigmoid is a class in pytorch and not tanh.
I’m not quite sure what you’re asking, but just to be clear, pytorch does
have both a class and a function version of Tanh:
torch.nn.Tanh
torch.nn.functional.tanh
(Pytorch also has both a class and function version of Sigmoid.)
As an aside, note that pytorch’s sigmoid() is the so-called logistic
function, and that is essentially the same function as tanh().
Best.
K. Frank |
st46550 | Hi,
Are both torch.tanh and NN.tanh the same?
Which is better to use as an activation function? |
st46551 | Hello Granth -
granth_jain:
Are both torch.tanh and NN.tanh same?
I haven’t checked the code, but I am quite certain that what Alban
said is correct: Not only do:
torch.tanh (some_tensor)
torch.nn.functional.tanh (some_tensor)
torch.nn.Tanh() (some_tensor)
all perform the same computation (and return the same result), but
they all ultimately resolve to the same implementation to do so.
Which is better to be used as an activation function
The first two versions in my above code are functions. I believe that
the second is being deprecated in favor of the first (for reasons that
I don’t understand).
The third version is a class whose instances are function objects.
(Note, there is no torch.nn.tanh. There is a torch.nn.Tanh (a
class) and a torch.nn.functional.tanh (a function) (and the
function torch.tanh).)
If you like using functions, use torch.tanh (some_tensor),
although you can instantiate the function-object version on the fly
and then call it: torch.nn.Tanh() (some_tensor). (The on-the-fly
function-object approach has the de minimis inefficiency of
instantiating a new object every time the Tanh()() call is made.)
If you want or need to have an instance of a function object, for
example in order to include it as a “layer” in a Sequential, then
you will need to use torch.nn.Tanh.
But again, in terms of the function that gets computed (rather than
the packaging), they are all the same.
Best.
K. Frank |
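A tiny sketch showing the three spellings computing the same thing:
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(5)
a = torch.tanh(x)
b = F.tanh(x)          # function version (being deprecated in favor of torch.tanh)
c = nn.Tanh()(x)       # class version, instantiated on the fly
print(torch.allclose(a, b) and torch.allclose(a, c))   # True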
st46552 | Reposting of an old question I never got a reply to: Multi GPU Hook not correctly filling buffer. I know runnable minimal examples are the way to go, so now I have a full runnable minimal example of the code written. Please take a look and help me get to the bottom of this. This is what it looks like with two runs:
> CUDA_VISIBLE_DEVICES=1 python mainMinimalExampleMultiGPU.py
Conv Average Gradient:
[0.00044749726757895453, 0.0014000369330415242, -0.0008686411516918384]
fc Average Gradient:
[-0.004141018711068057, 0.0015833583892040112, 0.0011787552821185693, 0.0010372935249398085, -0.004048425233274684, 0.0006052607123126522, 0.0013055124756185216, 0.0007034393619838467, 0.0007521140892023609, 0.0010237101089629694]
> CUDA_VISIBLE_DEVICES=0,1 python mainMinimalExampleMultiGPU.py
Conv Average Gradient:
[0.0, 0.0, 0.0]
fc Average Gradient:
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
The following is the entire program, which was started from the MNIST example script.
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
def saveAverageD(grad_out, Values):
with torch.no_grad():
if(len(grad_out.shape) == 2):
Values[0].average = Values[0].average * 0.99 + grad_out.sum((0)) * 0.01
else:
Values[0].average = Values[0].average * 0.99 + grad_out.sum((0,2,3)) * 0.01
class valueTracker(nn.Module):
def __init__(self, out_channels):
super(valueTracker, self).__init__()
self.register_buffer('average', torch.zeros(out_channels, device=device, dtype=torch.double))
class averageSaveConv(nn.Module):
def __init__(self, startLayer, out_channels):
super(averageSaveConv, self).__init__()
self.values = nn.ModuleList([])
self.values.append(valueTracker(out_channels))
self.layer = startLayer.double()
def forward(self,x):
out = self.layer(x)
out.register_hook(lambda grad: saveAverageD(grad, self.values))
return out
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv = averageSaveConv(nn.Conv2d(1, 3, 5, 1),3)
self.fc = averageSaveConv(nn.Linear(432, 10),10)
def forward(self, x):
x = self.conv(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = torch.flatten(x, 1)
x = self.fc(x)
output = F.log_softmax(x, dim=1)
return output
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device).double(), target.to(device).long()
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
print('Conv Average Gradient:')
print(model.module.conv.values[0].average.tolist())
print('fc Average Gradient:')
print(model.module.fc.values[0].average.tolist())
exit()#just here for debugging
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
if args.dry_run:
break
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(device).double(), target.to(device).long()
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
def main():
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=2, metavar='N',
help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=14, metavar='N',
help='number of epochs to train (default: 14)')
parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
help='learning rate (default: 1.0)')
args = parser.parse_args()
torch.manual_seed(1)
kwargs = {'batch_size': args.batch_size}
if use_cuda:
kwargs.update({'num_workers': 1,
'pin_memory': True,
'shuffle': True},
)
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
dataset1 = datasets.MNIST('../data', train=True, download=True,
transform=transform)
dataset2 = datasets.MNIST('../data', train=False,
transform=transform)
train_loader = torch.utils.data.DataLoader(dataset1,**kwargs)
test_loader = torch.utils.data.DataLoader(dataset2, **kwargs)
model = Net()
model = torch.nn.DataParallel(model, device_ids=range(torch.cuda.device_count())).to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr)
for epoch in range(1, args.epochs + 1):
train(args, model, device, train_loader, optimizer, epoch)
test(model, device, test_loader)
if __name__ == '__main__':
main() |
st46553 | @ptrblck @albanD Any chance either of you can take a look at this? I got the minimal example up so hopefully it will be easy to replicate the problem. |
st46554 | Just bumping again since I keep posting this on weekends. Please @ptrblck @albanD, you’re my only hope. |
st46555 | Hi,
I did see the issue when you pinged the first time but I don’t think I have much to say about it.
I would advise that you try and reduce your code as much as possible. Given all the things that happen there, I have no idea what could be going wrong |
st46556 | Sorry, I kept a bit of extra code since I started with https://github.com/pytorch/examples/blob/master/mnist/main.py. and thought it would be easy to compare to that. I have further reduced the code below to only have the exact things to reproduce this problem. My network is 1 layer, the ‘main’ just inititalizes things and runs backprop on one random input. The only thing that is unique is my saveAverageD function and 2 custom modules. The custom modules and saveAverageD function were made by your and @ptrblck 's reccomendations over a number of previous posts and they work perfectly on 1 GPU. But as you can see it does not translate to DataParallel and 2 GPUs.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
def saveAverageD(grad_out, Values):
print('calling with:')
print(grad_out)
with torch.no_grad():
Values.average = Values.average * 0.99 + grad_out.sum((0,2,3)) * 0.01
class valueTracker(nn.Module):
def __init__(self, out_channels):
super(valueTracker, self).__init__()
self.register_buffer('average', torch.zeros(out_channels, device=device, dtype=torch.double))
class averageSaveConv(nn.Module):
def __init__(self, startLayer, out_channels):
super(averageSaveConv, self).__init__()
self.values = valueTracker(out_channels)
self.layer = startLayer.double()
def forward(self,x):
out = self.layer(x)
out.register_hook(lambda grad: saveAverageD(grad, self.values))
return out
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv = averageSaveConv(nn.Conv2d(1, 10, 28,1),10)
def forward(self, x):
x = self.conv(x)
x = torch.flatten(x, 1)
output = F.log_softmax(x, dim=1)
return output
if __name__ == '__main__':
#setup the net and data parallel and optimizer in the most basic way
model = Net()
model = torch.nn.DataParallel(model, device_ids=range(torch.cuda.device_count())).to(device)
optimizer = optim.SGD(model.parameters(), lr=0.1)
optimizer.zero_grad()
data, target = torch.rand((2,1,28,28),dtype=torch.float64).to(device), torch.zeros(2).to(device).long()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
print('Conv Average Gradient:')
print(model.module.conv.values.average.tolist()) |
st46557 | Just so everything is on this page this is a summary of what should be happening:
saveAverageD is a function to be called in a backward hook to keep track of the average value in grad_out
valueTracker just has a single buffer to store this average
averageSaveConv: this keeps track of a single layer and a single instance of valueTracker to be used for average saving
I edited the code to have an extra print. The following is now the output. Again, this works exactly as you helped me get to with 1 GPU but fails with 2 GPUs. As you can see, with 1 GPU it loads both random inputs together, computes the average, and saves it, with saveAverageD called once on the one GPU. On 2 GPUs it calls saveAverageD twice, each GPU gets one of the inputs, but then the average buffer is not tracked.
$ CUDA_VISIBLE_DEVICES=1 python mainMinimalExampleMultiGPU.py
calling with:
tensor([[[[-0.4504]],
[[ 0.0442]],
[[ 0.0555]],
[[ 0.0622]],
[[ 0.0781]],
[[ 0.0507]],
[[ 0.0317]],
[[ 0.0323]],
[[ 0.0349]],
[[ 0.0607]]],
[[[-0.4533]],
[[ 0.0368]],
[[ 0.0420]],
[[ 0.0848]],
[[ 0.0637]],
[[ 0.0569]],
[[ 0.0421]],
[[ 0.0330]],
[[ 0.0346]],
[[ 0.0594]]]], device='cuda:0', dtype=torch.float64)
Conv Average Gradient:
[-0.009037267504024003, 0.0008102439568684652, 0.0009758195369250783, 0.001469522990143697, 0.001417696908934414, 0.0010763320891902935, 0.0007385695544331516, 0.0006529860781865106, 0.0006942532124915154, 0.0012018431768508766]
$ CUDA_VISIBLE_DEVICES=0,1 python mainMinimalExampleMultiGPU.py
calling with:
calling with:
tensor([[[[-0.4665]],
[[ 0.0500]],
[[ 0.0467]],
[[ 0.0672]],
[[ 0.0635]],
[[ 0.0213]],
[[ 0.0543]],
[[ 0.0817]],
[[ 0.0586]],
[[ 0.0232]]]], device='cuda:1', dtype=torch.float64)
tensor([[[[-0.4679]],
[[ 0.0354]],
[[ 0.0464]],
[[ 0.0746]],
[[ 0.0570]],
[[ 0.0386]],
[[ 0.0673]],
[[ 0.0740]],
[[ 0.0512]],
[[ 0.0235]]]], device='cuda:0', dtype=torch.float64)
Conv Average Gradient:
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0] |
st46558 | Is my new code sample reduced enough? I don’t know how I can reduce it any further. |
st46559 | Ho I didn’t saw that you updated the code in Multi GPU Hook not correctly doing filling buffer and not the original post.
This looks ok. I don’t have a multigpu machine to run it though @ptrblck would you have a minute to check that please? |
st46560 | As mentioned in my previous post, I don’t think this will work out of the box without reducing the value somehow manually.
Hooks would be registered on each replica and would thus only be valid for that replica.
I don’t think that nn.DataParallel or DDP reduces hooks by default, but I might be wrong, so I still think that your best bet would be to use e.g. torch.nn.parallel.gather. |
st46561 | The only information I could find about parallel.gather is from this parallelism tutorial 2. I think you’re right that might be what I want to be using, But I don’t think I understand how to use it correctly. I’ve tried a few different ways to rework the code but when I call nn.parallel.gather(model.module.conv.values.average, 0) it still is always returning all 0’s with 2GPUs. Could you please provide any additional help? The backword hook is already in the forward call so it should be on both devices. I tried initializing the array after dataparallel as well. and I tried the dataprallel subclass from that tutorial fixed with the info here 1 |
st46562 | I think I got it! This seems super inefficient though. Can one of you please confirm this is what you meant? Is this the right way to do this and its whats happening behind the scenes with dataparallel anyway? I am specifically concerned about everything in Net(). I do know mathematically I wouldn’t have to ‘split’ every iteration if all I was actually doing in the non-minimal example was averaging. And does this mean if I want to use dataparallel with these buffers I need to manually wrap every layer rather than just calling dataparallel on the network? Sorry, im just generally getting the feeling I did not do this right.
This is the new output:
$ CUDA_VISIBLE_DEVICES=0,1 python mainMinimalExampleMultiGPU.py
calling with:
calling with:
[[[[-0.41571250258624853]], [[0.04142077242016727]], [[0.06950999356652776]], [[0.03586417237630698]], [[0.04916706555685479]], [[0.0479719964996224]], [[0.04534353418505202]], [[0.036737063334092955]], [[0.04897484751185222]], [[0.04072305713577202]]]]
[[[[-0.4097441719176553]], [[0.05437682468935652]], [[0.056508038808905765]], [[0.04807834249760404]], [[0.04681839171079582]], [[0.04986861282603519]], [[0.0284716049455237]], [[0.04319707706917308]], [[0.053327966629887764]], [[0.02909731274037328]]]]
Conv Average Gradient 0:
[-0.041571250258624855, 0.004142077242016727, 0.006950999356652776, 0.003586417237630698, 0.004916706555685479, 0.0047971996499622405, 0.004534353418505202, 0.0036737063334092955, 0.004897484751185222, 0.004072305713577202]
Conv Average Gradient 1:
[-0.04097441719176553, 0.005437682468935653, 0.005650803880890577, 0.004807834249760404, 0.004681839171079582, 0.004986861282603519, 0.00284716049455237, 0.0043197077069173076, 0.005332796662988777, 0.002909731274037328]
Conv Average Gradient:
[-0.04127283372519519, 0.00478987985547619, 0.0063009016187716765, 0.004197125743695551, 0.004799272863382531, 0.00489203046628288, 0.003690756956528786, 0.003996707020163302, 0.005115140707086999, 0.003491018493807265]
updated Conv Average Gradient 0:
[-0.04127283372519519, 0.00478987985547619, 0.0063009016187716765, 0.004197125743695551, 0.004799272863382531, 0.00489203046628288, 0.003690756956528786, 0.003996707020163302, 0.005115140707086999, 0.003491018493807265]
updated Conv Average Gradient 1:
[-0.04127283372519519, 0.00478987985547619, 0.0063009016187716765, 0.004197125743695551, 0.004799272863382531, 0.00489203046628288, 0.003690756956528786, 0.003996707020163302, 0.005115140707086999, 0.003491018493807265]
$ CUDA_VISIBLE_DEVICES=0 python mainMinimalExampleMultiGPU.py
calling with:
[[[[-0.41571250258624853]], [[0.04142077242016727]], [[0.06950999356652776]], [[0.03586417237630698]], [[0.04916706555685479]], [[0.0479719964996224]], [[0.04534353418505202]], [[0.036737063334092955]], [[0.04897484751185222]], [[0.04072305713577202]]], [[[-0.4097441719176553]], [[0.05437682468935652]], [[0.056508038808905765]], [[0.04807834249760404]], [[0.04681839171079582]], [[0.04986861282603519]], [[0.0284716049455237]], [[0.04319707706917308]], [[0.053327966629887764]], [[0.02909731274037328]]]]
Conv Average Gradient 0:
[-0.04127283372519519, 0.00478987985547619, 0.0063009016187716765, 0.004197125743695552, 0.004799272863382531, 0.00489203046628288, 0.003690756956528786, 0.003996707020163302, 0.005115140707087, 0.0034910184938072653]
Conv Average Gradient:
[-0.04127283372519519, 0.00478987985547619, 0.0063009016187716765, 0.004197125743695552, 0.004799272863382531, 0.00489203046628288, 0.003690756956528786, 0.003996707020163302, 0.005115140707087, 0.0034910184938072653]
updated Conv Average Gradient 0:
[-0.04127283372519519, 0.00478987985547619, 0.0063009016187716765, 0.004197125743695552, 0.004799272863382531, 0.00489203046628288, 0.003690756956528786, 0.003996707020163302, 0.005115140707087, 0.0034910184938072653]
and this is the new code for the minimal example:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import random
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
def saveAverageD(grad_out, Values):
print('calling with:')
print(grad_out.tolist())
with torch.no_grad():
Values.average = Values.average * 0.9 + grad_out.mean((0,2,3)) * 0.1
class valueTracker(nn.Module):
def __init__(self, out_channels):
super(valueTracker, self).__init__()
self.register_buffer('average', torch.zeros(out_channels, device=device, dtype=torch.double))
class averageSaveConv(nn.Module):
def __init__(self, startLayer, out_channels):
super(averageSaveConv, self).__init__()
self.layer = startLayer.double()
self.out_channels = out_channels
self.values=valueTracker(self.out_channels)
def forward(self,x):
out = self.layer(x)
out.register_hook(lambda grad: saveAverageD(grad, self.values))
return out
if(torch.cuda.device_count() == 1):
device_ids = [0]
else:
device_ids = [0,1]
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv = averageSaveConv(nn.Conv2d(1, 10, 28,1),10).to(device)
self.replicas = nn.parallel.replicate(self.conv, device_ids)
def forward(self, x):
output_device = 0
inputs = nn.parallel.scatter(x, device_ids)
replicas = self.replicas[:len(inputs)]
outputs = nn.parallel.parallel_apply(replicas, inputs)
x = nn.parallel.gather(outputs, output_device)
x = torch.flatten(x, 1)
output = F.log_softmax(x, dim=1)
return output
def gather(self):
self.conv.values.average = nn.parallel.gather([self.replicas[x].values.average for x in range(len(device_ids))], self.conv.values.average.device).reshape(len(device_ids),-1).mean(0)
def split(self):
for replica in self.replicas:
replica.values.average.copy_(self.conv.values.average)
if __name__ == '__main__':
random_seed = 4
random.seed(random_seed)
torch.manual_seed(random_seed)
torch.cuda.manual_seed(random_seed)
model = Net()
optimizer = optim.SGD(model.parameters(), lr=0.1)
optimizer.zero_grad()
data, target = torch.rand((2,1,28,28),dtype=torch.float64).to(device), torch.zeros(2).to(device).long()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
model.gather()
optimizer.step()
for id in device_ids:
print('Conv Average Gradient %d:' % id)
print(model.replicas[id].values.average.tolist())
print('Conv Average Gradient:')
print(model.conv.values.average.tolist())
model.split()
for id in device_ids:
print('updated Conv Average Gradient %d:' % id)
print(model.replicas[id].values.average.tolist()) |
st46563 | @ptrblck thoughts? The things in Net and manually calling my gather and split functions doesn’t seem like the correct usage of these functions. And does this mean I can’t use DataParallel anymore? |
st46564 | Hi have a question concerning ImageFolder which i want to split into train and validation dataset and balance it with a WeightedRandomSampler into a dataloader.
dataset = datasets.ImageFolder(path, transform)
First I split it with sklearn
from sklearn.model_selection import train_test_split
image_datasets = {}
train_idx, val_idx = train_test_split(list(range(len(dataset))), stratify=dataset.targets, test_size=0.2)
# wrap the index splits in Subsets of the original dataset
image_datasets['train'] = torch.utils.data.Subset(dataset, train_idx)
image_datasets['val'] = torch.utils.data.Subset(dataset, val_idx)
Then I want to sample the train dataset with a WeightedRandomSampler because it is unbalanced.
# get the labels for the subset
labels = np.array(image_datasets['train'].dataset.targets)[image_datasets['train'].indices]
# count the label occurrence
num_class_elements = np.bincount(labels)
# length of new dataset
num_epoch_elements = len(image_datasets['train'])
Then I create the WeightedRandomSampler:
numerator = 1.
denominator = torch.tensor(num_class_elements,dtype=torch.float) # can be converted with torch.tensor(..., dtype=torch.float)
# calculate weight per class
class_weights = numerator / denominator
# create vector where index contains class weight
element_weights = [class_weights[class_index] for class_index in labels]
# create sampler
sampler = torch.utils.data.sampler.WeightedRandomSampler(element_weights, num_epoch_elements, replacement=False)
Is it now safe to use the Subset combined with the sampler for the DataLoader?
train_dl = torch.utils.data.DataLoader(image_datasets['train'], batch_size=18, num_workers=2,
pin_memory=True, drop_last=False, shuffle=False, sampler=sampler)
Question here:
My concern is as follows. Maybe I can explain it with an example: the original dataset has 100 elements [0:99] and the training dataset has 80 of them, say indices [20:99]. The sampler will return indices from [0:79], which the Subset needs to map to its parent dataset’s indices [0:99], excluding [0:19].
So is the Subset handling this or not?
Is the Subset mapping the indices from the sampler to the corresponding indices of the parent Dataset?
Sampler says 0, which corresponds to train_ds[0], which calls train_ds.dataset[train_ds.indices[0]] # pseudocode |
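For reference, torch.utils.data.Subset does exactly this translation: its __getitem__ looks up the parent dataset through the stored indices. A simplified sketch of its behaviour:
import torch
from torch.utils.data import Dataset

class SubsetSketch(Dataset):
    # simplified view of what torch.utils.data.Subset does
    def __init__(self, dataset, indices):
        self.dataset = dataset
        self.indices = indices

    def __getitem__(self, idx):
        # idx comes from the sampler, in [0, len(indices));
        # it is translated into the parent dataset's index space
        return self.dataset[self.indices[idx]]

    def __len__(self):
        return len(self.indices)

So the sampler only ever produces indices in [0, len(subset)), and the Subset maps each one back onto the parent ImageFolder.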
st46565 | Consider the following code:
import torch
@torch.jit.script
def foo(x, xi, y, yi):
return torch.sum(x[xi] * y[yi], dim=1)
emb_size = 100
cooc_count = 10_000_000
word_count = 12543
device = "cuda"
xw = torch.randn(word_count, emb_size, device=device)
yw = torch.randn(word_count, emb_size, device=device)
xi = torch.randint(word_count, (cooc_count,), device=device)
yi = torch.randint(word_count, (cooc_count,), device=device)
dots = foo(xw, xi, yw, yi)
For those curious, this comes up when training word embeddings but that background is not really relevant, I’m more interested in this question in general.
When I run this on my computer the code crashes with a "CUDA out of memory" error, which makes sense because x[xi] and y[yi] expand to very large tensors of size (cooc_count, emb_size) with a bunch of repeated rows in them. This is an unnecessary allocation though, because those tensors are immediately collapsed into a smaller tensor of size (cooc_count,) by the multiplication and sum which implement the rowwise dot product.
I was hoping the JIT would realize this and optimize foo to do this dot product directly while indexing. Is this a known limitation? Is there a way to rewrite the expression that avoids the intermediate allocation? |
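One workaround (not something the JIT does for you) is to process the indices in chunks, so that only a (chunk_size, emb_size) intermediate exists at any time. A rough sketch; chunk_size is an arbitrary knob to tune against your memory budget:
def foo_chunked(x, xi, y, yi, chunk_size=1_000_000):
    # accumulate per-chunk row-wise dot products instead of
    # materializing the full (cooc_count, emb_size) intermediates
    outs = []
    for start in range(0, xi.numel(), chunk_size):
        end = start + chunk_size
        outs.append((x[xi[start:end]] * y[yi[start:end]]).sum(dim=1))
    return torch.cat(outs)

dots = foo_chunked(xw, xi, yw, yi)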
st46566 | I have, as an input, 2 sequences of vectors.
Sequence 1 is passed to an RNN. The RNN is initialized with a hidden state of zeros. I collect the output hidden state and save it. Let’s call this output H1.
I do the same with Sequence 2. I use the same RNN as before. Again, I initialize its hidden state to zeros, and collect the output hidden state: H2.
I then use another network to compare H1 and H2, get my predictions, and thus I can compute the loss and perform the backward + optim step.
Is this design correct? Can I reuse the same RNN before performing the backward pass? Is it ok even if I reset the hidden state to 0?
Thanks |
st46567 | Solved by googlebot in post #3
Yes, it is like any other module in that regard. It is also the same as [sub]batching the sequences (H1, H2) for the RNN module. |
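A minimal sketch of that setup; the module sizes and loss here are made up for illustration:
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
compare = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 1))

seq1 = torch.randn(8, 20, 16)   # (batch, time, features)
seq2 = torch.randn(8, 20, 16)

# same RNN for both sequences; omitting h0 gives a fresh zero hidden state each call
_, h1 = rnn(seq1)
_, h2 = rnn(seq2)

logits = compare(torch.cat([h1[-1], h2[-1]], dim=1))
loss = nn.functional.binary_cross_entropy_with_logits(logits, torch.ones(8, 1))
loss.backward()   # gradients flow through both uses of the shared RNN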