st45968
|
It should work and pass a zero gradient to the masked elements:
a = torch.randn(2, 2, requires_grad=True)
b = torch.randn(2, 2)
diff = a - b
diff[ diff < 0 ] = 0
print(diff)
> tensor([[0.6480, 0.0000],
[0.0000, 0.0272]], grad_fn=<IndexPutBackward>)
diff.mean().backward()
print(a.grad)
> tensor([[0.2500, 0.0000],
[0.0000, 0.2500]])
|
st45969
|
Hello,
I initialize my model and save the state-dict of it together with the optimizer.
I load this model for every rerun and it gives me similar results but not equal ones.
Is it even possible to rerun a model with the exact same results as before?
|
st45970
|
It should be possible as long as you are not using non-deterministic operations, as explained here.
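For reference, a minimal seeding setup often used for reproducible reruns (a sketch, not part of the original answer; the exact flags can vary with your PyTorch version and ops):
import random
import numpy as np
import torch

def seed_everything(seed: int = 42):
    # seed every RNG the run touches, before building the model/optimizer
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)                      # also seeds all CUDA devices
    torch.backends.cudnn.deterministic = True    # pick deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False       # disable autotuning (non-deterministic)

seed_everything(42)
# build the model, optimizer and dataloaders after seeding, then load the saved state_dicts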
|
st45971
|
Hi guys, I'm very new to PyTorch but have been using it successfully on a few small projects until I hit a problem, which is summarised in the MWE below. Should this not be returning 6? I am missing something here but am unsure what, and I am certain it is simple.
import torch

p = torch.tensor([1.], requires_grad=True)

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, target):
        ir = torch.pow(target, 2.)
        ctx.save_for_backward(target)
        return ir

    @staticmethod
    def backward(ctx, u):
        target, = ctx.saved_tensors
        return target * 2.

sq = Square.apply(p)
six = sq.pow(3.)
six.backward()
p.grad
This returns tensor([2.]), which is clearly the derivative from square, so why is it not also applying the cube part?
Thank you!!
|
st45972
|
Solved by Nikronic in post #2
|
st45973
|
Hi,
Because you are not using any gradient value from any operation after your custom autograd function.
kingstar101:
return target * 2.
If you check your backward method, you have an argument u which contains all the gradients from the operations after Square. Because you are not using it, no matter what you do, it just breaks all the backward edges right there.
What you are computing: grad = 2 * target
What you should compute: grad = u * (2 * target), i.e. the chain rule
So overall, the upstream gradient u has to be multiplied into your local derivative.
To fix this, just incorporate u into your computation of backward:
return u * (target * 2.)
Bests
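For completeness, here is the Function from the question with that fix applied end to end (a small sketch based on the code above):
import torch

class Square(torch.autograd.Function):
    @staticmethod
    def forward(ctx, target):
        ctx.save_for_backward(target)
        return torch.pow(target, 2.)

    @staticmethod
    def backward(ctx, u):
        target, = ctx.saved_tensors
        # chain rule: scale the local derivative by the incoming gradient u
        return u * (target * 2.)

p = torch.tensor([1.], requires_grad=True)
six = Square.apply(p).pow(3.)
six.backward()
print(p.grad)  # tensor([6.])  -- d/dp (p^2)^3 = 6 * p^5 = 6 at p = 1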
|
st45974
|
As pointed out in https://github.com/pytorch/examples/issues/164, the ImageNet training script gets almost zero GPU utilization. I have 4 Titan V GPUs and the data is stored locally, although not on an SSD; still, my disk throughput should be high enough that it is not this slow. I posted an issue on GitHub, but there has been no response for several days, and the previous post was not solved. I tried 8 workers and 20 workers; GPU usage is low in both cases. For 8 workers:
Epoch: [20][100/5005] Time 0.283 (1.396) Data 0.001 (1.006) Loss 2.5632 (2.3716) Prec@1 48.438 (47.486) Prec@5 67.969 (72.424)
Epoch: [20][110/5005] Time 0.239 (1.390) Data 0.001 (1.013) Loss 2.4275 (2.3647) Prec@1 49.609 (47.646) Prec@5 71.484 (72.579)
Epoch: [20][120/5005] Time 3.725 (1.422) Data 3.495 (1.056) Loss 2.0656 (2.3711) Prec@1 53.906 (47.556) Prec@5 75.781 (72.382)
Epoch: [20][130/5005] Time 3.500 (1.427) Data 3.267 (1.069) Loss 2.4683 (2.3707) Prec@1 45.312 (47.442) Prec@5 68.750 (72.343)
Epoch: [20][140/5005] Time 0.228 (1.396) Data 0.001 (1.046) Loss 2.2713 (2.3637) Prec@1 50.781 (47.565) Prec@5 72.266 (72.407)
for 20 workers:
Epoch: [20][530/5005] Time 0.227 (0.781) Data 0.001 (0.507) Loss 2.4582 (2.3638) Prec@1 42.969 (47.641) Prec@5 68.359 (72.389)
Epoch: [20][540/5005] Time 0.317 (0.772) Data 0.001 (0.498) Loss 2.2743 (2.3646) Prec@1 48.438 (47.633) Prec@5 75.000 (72.362)
Epoch: [20][550/5005] Time 0.225 (0.786) Data 0.001 (0.511) Loss 2.1320 (2.3634) Prec@1 50.000 (47.668) Prec@5 76.172 (72.384)
Epoch: [20][560/5005] Time 0.290 (0.776) Data 0.003 (0.502) Loss 2.4872 (2.3635) Prec@1 44.141 (47.633) Prec@5 67.969 (72.380)
Epoch: [20][570/5005] Time 0.250 (0.782) Data 0.002 (0.507) Loss 2.3034 (2.3634) Prec@1 47.266 (47.608) Prec@5 74.609 (72.364)
Epoch: [20][580/5005] Time 2.115 (0.776) Data 1.873 (0.502) Loss 2.3284 (2.3650) Prec@1 45.312 (47.570) Prec@5 72.656 (72.340)
Epoch: [20][590/5005] Time 0.399 (0.782) Data 0.002 (0.508) Loss 2.4217 (2.3645) Prec@1 45.703 (47.591) Prec@5 70.703 (72.348)
Epoch: [20][600/5005] Time 3.144 (0.778) Data 2.857 (0.504) Loss 2.3866 (2.3629) Prec@1 48.828 (47.632) Prec@5 71.875 (72.362)
Epoch: [20][610/5005] Time 0.236 (0.784) Data 0.002 (0.510) Loss 2.3191 (2.3638) Prec@1 51.953 (47.630) Prec@5 73.047 (72.362)
Epoch: [20][620/5005] Time 0.231 (0.776) Data 0.001 (0.502) Loss 2.4194 (2.3634) Prec@1 50.000 (47.652) Prec@5 71.875 (72.359)
Epoch: [20][630/5005] Time 0.298 (0.788) Data 0.001 (0.514) Loss 2.3440 (2.3624) Prec@1 47.266 (47.674) Prec@5 69.922 (72.368)
Epoch: [20][640/5005] Time 1.156 (0.782) Data 0.841 (0.507) Loss 2.5047 (2.3640) Prec@1 46.094 (47.629) Prec@5 69.531 (72.345)
Epoch: [20][650/5005] Time 0.230 (0.787) Data 0.002 (0.513) Loss 2.4881 (2.3637) Prec@1 46.484 (47.629) Prec@5 73.438 (72.354)
Epoch: [20][660/5005] Time 0.733 (0.780) Data 0.385 (0.506) Loss 2.3043 (2.3642) Prec@1 48.828 (47.620) Prec@5 74.219 (72.355)
Epoch: [20][670/5005] Time 0.222 (0.791) Data 0.001 (0.517) Loss 2.4218 (2.3640) Prec@1 50.000 (47.635) Prec@5 70.312 (72.358)
Epoch: [20][680/5005] Time 0.726 (0.784) Data 0.497 (0.510) Loss 2.0819 (2.3638) Prec@1 53.906 (47.653) Prec@5 75.391 (72.349)
Epoch: [20][690/5005] Time 0.224 (0.795) Data 0.002 (0.521) Loss 2.2428 (2.3634) Prec@1 49.219 (47.669) Prec@5 75.000 (72.358)
Epoch: [20][700/5005] Time 0.278 (0.787) Data 0.003 (0.513) Loss 2.4094 (2.3639) Prec@1 44.141 (47.653) Prec@5 70.312 (72.346)
Epoch: [20][710/5005] Time 0.436 (0.798) Data 0.003 (0.523) Loss 2.3120 (2.3633) Prec@1 50.000 (47.665) Prec@5 71.484 (72.351)
Epoch: [20][720/5005] Time 0.234 (0.790) Data 0.001 (0.516) Loss 2.5496 (2.3646) Prec@1 44.922 (47.650) Prec@5 69.141 (72.336)
Epoch: [20][730/5005] Time 0.232 (0.800) Data 0.001 (0.526) Loss 2.1596 (2.3641) Prec@1 51.953 (47.666) Prec@5 76.562 (72.350)
Epoch: [20][740/5005] Time 0.226 (0.793) Data 0.001 (0.519) Loss 2.4315 (2.3641) Prec@1 45.703 (47.657) Prec@5 71.094 (72.357)
Epoch: [20][750/5005] Time 0.244 (0.803) Data 0.001 (0.529) Loss 2.2962 (2.3637) Prec@1 45.703 (47.650) Prec@5 72.266 (72.376)
Epoch: [20][760/5005] Time 0.316 (0.796) Data 0.001 (0.522) Loss 2.4111 (2.3642) Prec@1 50.781 (47.631) Prec@5 72.656 (72.366)
Epoch: [20][770/5005] Time 0.245 (0.802) Data 0.001 (0.529) Loss 2.4344 (2.3643) Prec@1 48.828 (47.611) Prec@5 71.875 (72.360)
Epoch: [20][780/5005] Time 0.346 (0.795) Data 0.001 (0.522) Loss 2.3858 (2.3640) Prec@1 45.703 (47.617) Prec@5 71.094 (72.362)
Epoch: [20][790/5005] Time 0.290 (0.802) Data 0.002 (0.529) Loss 2.5051 (2.3643) Prec@1 44.922 (47.622) Prec@5 72.656 (72.356)
Epoch: [20][800/5005] Time 0.224 (0.795) Data 0.001 (0.522) Loss 2.2296 (2.3641) Prec@1 48.047 (47.624) Prec@5 74.219 (72.347)
Epoch: [20][810/5005] Time 0.239 (0.800) Data 0.002 (0.527) Loss 2.2256 (2.3643) Prec@1 49.609 (47.622) Prec@5 74.219 (72.345)
my pin_memory is set to True, my dataloader is configured as
train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.batch_size, shuffle=(train_sampler is None),
    num_workers=args.workers, pin_memory=True, sampler=train_sampler)

val_loader = torch.utils.data.DataLoader(
    datasets.ImageFolder(valdir, transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        normalize,
    ])),
    batch_size=args.batch_size, shuffle=False,
    num_workers=args.workers, pin_memory=True)
The only change I made is adding my own resnet101 module for training, without changing any other part.
@ptrblck, I saw you had some suggestions in GPU: high memory usage, low GPU volatile-util, but I don't know how to further check on the issue. Could you please help? Thanks!
|
st45975
|
Thanks for your response. I removed my custom code and used the original code with the default resnet18 training; GPU usage is still very low:
|Epoch: [0][0/5005]|Time 279.412 (279.412)|Data 8.915 (8.915)|Loss 7.0321 (7.0321)|Prec@1 0.000 (0.000)|Prec@5 0.391 (0.391)|
|---|---|---|---|---|---|
|Epoch: [0][10/5005]|Time 0.112 (26.322)|Data 0.001 (1.583)|Loss 7.0418 (7.0443)|Prec@1 0.000 (0.036)|Prec@5 0.781 (0.426)|
|Epoch: [0][20/5005]|Time 0.153 (14.640)|Data 0.087 (1.647)|Loss 7.1721 (7.0928)|Prec@1 0.000 (0.056)|Prec@5 0.391 (0.558)|
|Epoch: [0][30/5005]|Time 0.784 (10.761)|Data 0.723 (1.932)|Loss 6.9021 (7.0816)|Prec@1 0.391 (0.076)|Prec@5 0.781 (0.655)|
|Epoch: [0][40/5005]|Time 0.111 (8.569)|Data 0.001 (1.873)|Loss 6.9479 (7.0614)|Prec@1 0.391 (0.114)|Prec@5 0.781 (0.696)|
|Epoch: [0][50/5005]|Time 0.880 (7.415)|Data 0.813 (2.015)|Loss 6.8931 (7.0361)|Prec@1 0.000 (0.130)|Prec@5 0.391 (0.781)|
|Epoch: [0][60/5005]|Time 0.104 (6.499)|Data 0.001 (1.970)|Loss 6.8358 (7.0106)|Prec@1 0.000 (0.166)|Prec@5 1.172 (0.890)|
|Epoch: [0][70/5005]|Time 2.886 (5.963)|Data 2.786 (2.059)|Loss 6.8376 (6.9882)|Prec@1 0.000 (0.165)|Prec@5 1.172 (0.935)|
|Epoch: [0][80/5005]|Time 0.105 (5.458)|Data 0.001 (2.026)|Loss 6.7916 (6.9656)|Prec@1 0.000 (0.183)|Prec@5 1.562 (1.008)|
|Epoch: [0][90/5005]|Time 4.114 (5.140)|Data 4.039 (2.075)|Loss 6.7940 (6.9472)|Prec@1 0.000 (0.219)|Prec@5 1.172 (1.039)|
|Epoch: [0][100/5005]|Time 0.113 (4.808)|Data 0.001 (2.038)|Loss 6.7005 (6.9283)|Prec@1 0.000 (0.251)|Prec@5 0.781 (1.114)|
|Epoch: [0][110/5005]|Time 6.494 (4.620)|Data 6.420 (2.092)|Loss 6.7363 (6.9102)|Prec@1 0.391 (0.289)|Prec@5 1.562 (1.228)|
|Epoch: [0][120/5005]|Time 0.104 (4.387)|Data 0.001 (2.060)|Loss 6.7741 (6.8942)|Prec@1 0.391 (0.313)|Prec@5 1.562 (1.311)|
|Epoch: [0][130/5005]|Time 6.324 (4.253)|Data 6.260 (2.097)|Loss 6.6735 (6.8780)|Prec@1 1.172 (0.331)|Prec@5 2.734 (1.378)|
|Epoch: [0][140/5005]|Time 0.104 (4.076)|Data 0.001 (2.067)|Loss 6.5866 (6.8644)|Prec@1 0.391 (0.341)|Prec@5 3.125 (1.438)|
Actually, I'm also not sure what the printed line means: for Time 0.104 (6.499)|Data 0.001 (1.970), what is the unit, seconds?
|
st45976
|
Buy an SSD.
Hard drives (spinning magnetic disk) don’t do well with small random reads. Data loading is going to be a bottleneck.
|
st45977
|
I see. Currently, how can I estimate the training time? For the printed times I'm really confused about what the total time for each print is: should I add Time and Data together, and what is the unit? Thank you!
|
st45978
|
And there is always this warning:
/home/user/anaconda2/envs/pytorch/lib/python2.7/site-packages/PIL/TiffImagePlugin.py:747: UserWarning: Possibly corrupt EXIF data. Expecting to read 2555904 bytes but only got 0. Skipping tag 0
do I have to take care of this?
|
st45979
|
Ignore that warning. We saw that on our copy of ImageNet too, so it’s probably ImageNet’s problem.
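If the log noise is distracting, one option (a sketch; purely cosmetic, the images are still decoded normally) is to filter that specific Pillow warning:
import warnings

# hide only the EXIF message; everything else still warns as usual
warnings.filterwarnings("ignore", message="Possibly corrupt EXIF data")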
|
st45980
|
@twangnh, @SimonW, @colesbury
My GPU speed is badly affected by using the following custom dropout layer. Can anyone tell me how it can be improved?
import torch
import torch.nn as nn
import torch.optim as optim
import torch.utils.data as data
from torch.autograd import Variable
from torchvision import datasets, transforms
from tqdm import tqdm_notebook

class GaussianDropout(nn.Module):
    def __init__(self, alpha=1.0):
        super(GaussianDropout, self).__init__()
        self.alpha = torch.Tensor([alpha])

    def forward(self, x):
        """
        Sample noise   e ~ N(1, alpha)
        Multiply noise h = h_ * e
        """
        if self.train():
            # N(1, alpha)
            epsilon = torch.randn(x.size()) * self.alpha + 1
            epsilon = Variable(epsilon)
            if x.is_cuda:
                epsilon = epsilon.cuda()
            return x * epsilon
        else:
            return x

class VariationalDropout(nn.Module):
    def __init__(self, alpha=1.0, dim=None):
        super(VariationalDropout, self).__init__()
        self.dim = dim
        self.max_alpha = alpha
        # Initial alpha
        log_alpha = (torch.ones(dim) * alpha).log()
        self.log_alpha = nn.Parameter(log_alpha)

    def kl(self):
        c1 = 1.16145124
        c2 = -1.50204118
        c3 = 0.58629921
        alpha = self.log_alpha.exp()
        negative_kl = 0.5 * self.log_alpha + c1 * alpha + c2 * alpha**2 + c3 * alpha**3
        kl = -negative_kl
        return kl.mean()

    def forward(self, x):
        """
        Sample noise   e ~ N(1, alpha)
        Multiply noise h = h_ * e
        """
        if self.train():
            # N(0,1)
            epsilon = Variable(torch.randn(x.size()))
            if x.is_cuda:
                epsilon = epsilon.cuda()
            # Clip alpha
            self.log_alpha.data = torch.clamp(self.log_alpha.data, max=self.max_alpha)
            alpha = self.log_alpha.exp()
            # N(1, alpha)
            epsilon = epsilon * alpha
            return x * epsilon
        else:
            return x

def dropout(p=None, dim=None, method='standard'):
    if method == 'standard':
        return nn.Dropout(p)
    elif method == 'gaussian':
        return GaussianDropout(p/(1-p))
    elif method == 'variational':
        return VariationalDropout(p/(1-p), dim)
If I use the built-in PyTorch dropout nn.Dropout(p=dropout_rate), then GPU utilization is nearly 98%.
I have also observed that the above code works fine with a dense neural network. However, in a CNN architecture such as WideResNet, replacing nn.Dropout(p=dropout_rate) (line 27) with Gaussian or variational dropout reduces the overall GPU utilization.
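One plausible culprit (an assumption, not confirmed in the thread) is that torch.randn(x.size()) samples the noise on the CPU and the result is copied to the GPU on every forward pass. A minimal sketch of the Gaussian variant with the noise sampled directly on x's device:
import torch
import torch.nn as nn

class GaussianDropoutGPU(nn.Module):
    """Hypothetical variant: noise is created on the same device/dtype as x."""
    def __init__(self, alpha=1.0):
        super().__init__()
        self.alpha = alpha  # plain float, so no CPU tensor is involved

    def forward(self, x):
        if self.training:  # built-in flag; note that self.train() would always be truthy
            epsilon = torch.randn_like(x) * self.alpha + 1  # e ~ N(1, alpha) on x.device
            return x * epsilon
        return x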
|
st45981
|
I am facing a common problem when loading a pre-trained model using PyTorch: the Jupyter notebook crashes with "The kernel appears to have died. It will restart automatically".
I have followed the linked discussions, but there is no fix yet; any suggestions?
The environment specifications as follows:
OS : CentOS Linux release 7.8.2003 (Core)
Python : Python 3.6.12 :: Anaconda, Inc.
Conda version : conda 4.5.4
Available resources:
ibm-game-center Thu Nov 12 14:19:05 2020 410.48
[0] GeForce RTX 2080 Ti | 46’C, 0 % | 10 / 10989 MB |
[1] GeForce RTX 2080 Ti | 49’C, 0 % | 10 / 10989 MB |
Conda list :
packages in environment at /home/aiman/anaconda3:
Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py36he11e457_0
_libgcc_mutex 0.1 main
alabaster 0.7.10 py36h306e16b_0
anaconda 5.2.0 py36_3
anaconda-client 1.6.14 py36_0
anaconda-navigator 1.8.7 py36_0
anaconda-project 0.8.2 py36h44fb852_0
asn1crypto 0.24.0 py36_0
astroid 1.6.3 py36_0
astropy 3.0.2 py36h3010b51_1
attrs 18.1.0 py36_0
babel 2.5.3 py36_0
backcall 0.1.0 py36_0
backports 1.0 py36hfa02d7e_1
backports.shutil_get_terminal_size 1.0.0 py36hfea85ff_2
beautifulsoup4 4.6.0 py36h49b8c8c_1
bitarray 0.8.1 py36h14c3975_1
bkcharts 0.2 py36h735825a_0
blas 1.0 openblas
blaze 0.11.3 py36h4e06776_0
bleach 2.1.3 py36_0
blosc 1.14.3 hdbcaa40_0
bokeh 0.12.16 py36_0
boto 2.48.0 py36h6e4cd66_1
bottleneck 1.2.1 py36haac1ea0_0
bzip2 1.0.6 h14c3975_5
ca-certificates 2020.10.14 0
cairo 1.14.12 h7636065_2
certifi 2020.6.20 pyhd3eb1b0_3
cffi 1.11.5 py36h9745a5d_0
chardet 3.0.4 py36h0f667ec_1
click 6.7 py36h5253387_0
cloudpickle 0.5.3 py36_0
clyent 1.2.2 py36h7e57e65_1
colorama 0.3.9 py36h489cec4_0
conda 4.5.4 py36_0
conda-build 3.10.5 py36_0
conda-env 2.6.0 1
conda-verify 2.0.0 py36h98955d8_0
contextlib2 0.5.5 py36h6c84a62_0
cpuonly 1.0 0 pytorch
cryptography 2.2.2 py36h14c3975_0
curl 7.60.0 h84994c4_0
cycler 0.10.0 py36h93f1223_0
cython 0.28.2 py36h14c3975_0
cytoolz 0.9.0.1 py36h14c3975_0
dask 0.17.5 py36_0
dask-core 0.17.5 py36_0
datashape 0.5.4 py36h3ad6b5c_0
dbus 1.13.2 h714fa37_1
decorator 4.3.0 py36_0
distributed 1.21.8 py36_0
docutils 0.14 py36hb0f60f5_0
entrypoints 0.2.3 py36h1aec115_2
et_xmlfile 1.0.1 py36hd6bccc3_0
expat 2.2.5 he0dffb1_0
fastcache 1.0.2 py36h14c3975_2
filelock 3.0.4 py36_0
flask 1.0.2 py36_1
flask-cors 3.0.4 py36_0
fontconfig 2.12.6 h49f89f6_0
freetype 2.10.4 h5ab3b9f_0
gensim 3.8.3
get_terminal_size 1.0.0 haa9412d_0
gevent 1.3.0 py36h14c3975_0
glib 2.56.1 h000015b_0
glob2 0.6 py36he249c77_0
gmp 6.1.2 h6c8ec71_1
gmpy2 2.0.8 py36hc8893dd_2
graphite2 1.3.11 h16798f4_2
greenlet 0.4.13 py36h14c3975_0
gst-plugins-base 1.14.0 hbbd80ab_1
gstreamer 1.14.0 hb453b48_1
h5py 2.7.1 py36ha1f6525_2
harfbuzz 1.7.6 h5f0a787_1
hdf5 1.10.2 hba1933b_1
heapdict 1.0.0 py36_2
html5lib 1.0.1 py36h2f9c1c0_0
icu 58.2 h9c2bf20_1
idna 2.6 py36h82fb2a8_1
imageio 2.3.0 py36_0
imagesize 1.0.0 py36_0
intel-openmp 2020.2 254
ipykernel 4.8.2 py36_0
ipython 6.4.0 py36_0
ipython_genutils 0.2.0 py36hb52b0d5_0
ipywidgets 7.2.1 py36_0
isort 4.3.4 py36_0
itsdangerous 0.24 py36h93cc618_1
jbig 2.1 hdba287a_0
jdcal 1.4 py36_0
jedi 0.12.0 py36_1
jinja2 2.10 py36ha16c418_0
jpeg 9b h024ee3a_2
jsonschema 2.6.0 py36h006f8b5_0
jupyter 1.0.0 py36_4
jupyter_client 5.2.3 py36_0
jupyter_console 5.2.0 py36he59e554_1
jupyter_core 4.4.0 py36h7c827e3_0
jupyterlab 0.32.1 py36_0
jupyterlab_launcher 0.10.5 py36_0
kiwisolver 1.0.1 py36h764f252_0
lazy-object-proxy 1.3.1 py36h10fcdad_0
lcms2 2.11 h396b838_0
ld_impl_linux-64 2.33.1 h53a641e_7
libcurl 7.60.0 h1ad7b7a_0
libedit 3.1.20191231 h14c3975_1
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libgfortran 3.0.0 1
libgfortran-ng 8.2.0 hdf63c60_1
libpng 1.6.37 hbc83047_0
libsodium 1.0.16 h1bed415_0
libssh2 1.8.0 h9cfc8f7_4
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.1.0 h2733197_1
libtool 2.4.6 h544aabb_3
libxcb 1.13 h1bed415_1
libxml2 2.9.8 h26e45fe_1
libxslt 1.1.32 h1312cb7_0
llvmlite 0.23.1 py36hdbcaa40_0
locket 0.2.0 py36h787c0ad_1
lxml 4.2.1 py36h23eabaa_0
lz4-c 1.9.2 heb0550a_3
lzo 2.10 h49e0be7_2
markupsafe 1.0 py36hd9260cd_1
matplotlib 2.2.2 py36h0e671d2_1
mccabe 0.6.1 py36h5ad9710_1
mistune 0.8.3 py36h14c3975_1
mkl 2020.2 256
mkl-service 1.1.2 py36h17a0993_4
mkl_fft 1.0.1 py36h3010b51_0
mkl_random 1.0.1 py36h629b387_0
more-itertools 4.1.0 py36_0
mpc 1.0.3 hec55b23_5
mpfr 3.1.5 h11a74b3_2
mpmath 1.0.0 py36hfeacd6b_2
msgpack-python 0.5.6 py36h6bb024c_0
multipledispatch 0.5.0 py36_0
navigator-updater 0.2.1 py36_0
nbconvert 5.3.1 py36hb41ffb7_0
nbformat 4.4.0 py36h31c9010_0
ncurses 6.2 he6710b0_1
networkx 2.1 py36_0
ninja 1.10.1 py36hfd86e86_0
nltk 3.3.0 py36_0
nomkl 3.0 0
nose 1.3.7 py36hcdf7029_2
notebook 5.5.0 py36_0
numba 0.38.0 py36h637b7d7_0
numexpr 2.6.5 py36h7bf3b9c_0
numpy 1.13.1 py36_nomkl_0
numpy-base 1.14.3 py36h9be14a7_1
numpydoc 0.8.0 py36_0
odo 0.5.1 py36h90ed295_0
olefile 0.46 py_0
openblas 0.2.19 0
openpyxl 2.5.3 py36_0
openssl 1.1.1h h7b6447c_0
packaging 17.1 py36_0
pandas 0.23.0 py36h637b7d7_0
pandoc 1.19.2.1 hea2e7c5_1
pandocfilters 1.4.2 py36ha6701b7_1
pango 1.41.0 hd475d92_0
parso 0.2.0 py36_0
partd 0.3.8 py36h36fd896_0
patchelf 0.9 hf79760b_2
path.py 11.0.1 py36_0
pathlib2 2.3.2 py36_0
patsy 0.5.0 py36_0
pcre 8.42 h439df22_0
pep8 1.7.1 py36_0
pexpect 4.5.0 py36_0
pickleshare 0.7.4 py36h63277f8_0
pillow 8.0.1 py36he98fc37_0
pip 20.2.4 py36h06a4308_0
pixman 0.34.0 hceecf20_3
pkginfo 1.4.2 py36_1
pluggy 0.6.0 py36hb689045_0
ply 3.11 py36_0
prompt_toolkit 1.0.15 py36h17d85b1_0
psutil 5.4.5 py36h14c3975_0
ptyprocess 0.5.2 py36h69acd42_0
py 1.5.3 py36_0
PyArabic 0.6.10
pycodestyle 2.4.0 py36_0
pycosat 0.6.3 py36h0a5515d_0
pycparser 2.18 py36hf9f622e_1
pycrypto 2.6.1 py36h14c3975_8
pycurl 7.43.0.1 py36hb7f436b_0
pyflakes 1.6.0 py36h7bd6a15_0
pygments 2.2.0 py36h0d3125c_0
pylint 1.8.4 py36_0
pyodbc 4.0.23 py36hf484d3e_0
pyopenssl 18.0.0 py36_0
pyparsing 2.2.0 py36hee85983_1
pyqt 5.9.2 py36h751905a_0
pysocks 1.6.8 py36_0
pytables 3.4.3 py36h02b9ad4_2
pytest 3.5.1 py36_0
pytest-arraydiff 0.2 py36_0
pytest-astropy 0.3.0 py36_0
pytest-doctestplus 0.1.3 py36_0
pytest-openfiles 0.3.0 py36_0
pytest-remotedata 0.2.1 py36_0
python 3.6.12 hcff3b4d_2
python-dateutil 2.7.3 py36_0
pytorch 1.4.0 py3.6_cpu_0 [cpuonly] pytorch
pytz 2018.4 py36_0
pywavelets 0.5.2 py36he602eb0_0
pyyaml 3.12 py36hafb9ca4_1
pyzmq 17.0.0 py36h14c3975_0
qt 5.9.5 h7e424d6_0
qtawesome 0.4.4 py36h609ed8c_0
qtconsole 4.3.1 py36h8f73b5b_0
qtpy 1.4.1 py36_0
readline 8.0 h7b6447c_0
requests 2.18.4 py36he2e5f8d_1
rope 0.10.7 py36h147e2ec_0
ruamel_yaml 0.15.35 py36h14c3975_1
scikit-image 0.13.1 py36h14c3975_1
scikit-learn 0.19.1 py36h7aa7ec6_0
scipy 1.1.0 py36hfc37229_0
seaborn 0.8.1 py36hfad7ec4_0
send2trash 1.5.0 py36_0
setuptools 50.3.1 py36h06a4308_1
simplegeneric 0.8.1 py36_2
singledispatch 3.4.0.3 py36h7a266c3_0
sip 4.19.8 py36hf484d3e_0
six 1.15.0 py_0
smart-open 3.0.0
snappy 1.1.7 hbae5bb6_3
snowballstemmer 1.2.1 py36h6febd40_0
sortedcollections 0.6.1 py36_0
sortedcontainers 1.5.10 py36_0
sphinx 1.7.4 py36_0
sphinxcontrib 1.0 py36h6d0f590_1
sphinxcontrib-websupport 1.0.1 py36hb5cb234_1
spyder 3.2.8 py36_0
sqlalchemy 1.2.7 py36h6b74fdf_0
sqlite 3.33.0 h62c20be_0
statsmodels 0.9.0 py36h3010b51_0
sympy 1.1.1 py36hc6d1c1c_0
tblib 1.3.2 py36h34cf8b6_0
terminado 0.8.1 py36_1
testpath 0.3.1 py36h8cadb63_0
tk 8.6.10 hbc83047_0
toolz 0.9.0 py36_0
torchaudio 0.4.0 py36 pytorch
torchvision 0.5.0 py36_cpu [cpuonly] pytorch
tornado 5.0.2 py36_0
traitlets 4.3.2 py36h674d592_0
typing 3.7.4.3 py36_0
unicodecsv 0.14.1 py36ha668878_0
unixodbc 2.3.6 h1bed415_0
urllib3 1.22 py36hbe7ace6_0
wcwidth 0.1.7 py36hdf4376a_0
webencodings 0.5.1 py36h800622e_1
werkzeug 0.14.1 py36_0
wheel 0.35.1 py_0
widgetsnbextension 3.2.1 py36_0
wrapt 1.10.11 py36h28b7045_0
xlrd 1.1.0 py36h1db9f0c_1
xlsxwriter 1.0.4 py36_0
xlwt 1.3.0 py36h7b00a1f_0
xz 5.2.5 h7b6447c_0
yaml 0.1.7 had09818_2
zeromq 4.2.5 h439df22_0
zict 0.1.3 py36h3a3bf81_0
zlib 1.2.11 h7b6447c_3
zstd 1.4.5 h9ceee32_0
|
st45982
|
Solved by clived2 in post #9
|
st45983
|
Jupyter might hide the actual error message and just restart the kernel.
Could you run the script in a terminal via python script.py and check the error message?
|
st45984
|
Yes sir you’re correct. I get the below error when I import torch
import torch
File "/home/aiman/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 81, in <module>
from torch._C import *
ImportError: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by /lib64/libstdc++.so.6)
|
st45985
|
Did you build PyTorch from source or have you used a conda/pip binary?
Also, did you change something in your system, which might have broken PyTorch, in case it was working before?
|
st45986
|
Thank you sir for the valuable reply.
I have installed PyTorch using pip:
pip install torch==1.3.1+cu100 torchvision==0.4.2+cu100 -f https://download.pytorch.org/whl/torch_stable.html
It's a setup on a new system. I'm confused whether it's a PyTorch or a system issue.
|
st45987
|
I’m unsure what is causing this issue. Could you create a new virtual environment through pip or conda and reinstall PyTorch again?
|
st45988
|
I had this same issue on a pytorch install on an older notebook with only 2 gigs of ram when I was running torch 1.4.0. I removed 1.4.0 and replaced it with 1.1.0. This config behaved perfectly. I might also add that I am having the same problem on the notebook, when trying to import Tensorflow2
|
st45989
|
I have used random crop, rotation, and flipping as augmentation strategies in training. I want to know the number of images before and after augmentation. How do I do that?
from matplotlib import pyplot as plt
import torch
from torch import nn
import torch.nn.functional as F
from torch import optim
from torch.autograd import Variable
from torchvision import datasets, transforms, models
from PIL import Image
import numpy as np
from torch.utils import data
import os
import torchvision
from torch.utils.data.sampler import SubsetRandomSampler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
torch.cuda.empty_cache()
import pandas as pd
from torch.optim.lr_scheduler import StepLR
print("PyTorch Version: ",torch.__version__)
print("Torchvision Version: ",torchvision.__version__)
data_dir = "./BW"
train_dir=data_dir + './Train'
valid_dir=data_dir + './Valid'
test_dir=data_dir + './Test'
# Models to choose from [resnet18, resnet50, alexnet, vgg, squeezenet, densenet, inception]
model_name = "resnet18_1"
# Number of classes in the dataset
num_classes = 2
# Batch size for training (change depending on how much memory you have)
batch_size =16
# Number of epochs to train for
num_epochs =10000
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(size=256),
    transforms.Resize((224, 224)),
    transforms.RandomRotation(degrees=15),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])
])
test_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])
    # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])
validation_transforms = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225])
])
train_data = datasets.ImageFolder(train_dir, transform=train_transforms)
valid_data = datasets.ImageFolder(valid_dir, transform=validation_transforms)
test_data = datasets.ImageFolder(test_dir, transform=test_transforms)
# targets = datasets.ImageFolder.targets
num_workers = 0
print("Number of Samples in Train: ", len(train_data))
print("Number of Samples in Valid: ", len(valid_data))
print("Number of Samples in Test ", len(test_data))
train_loader = torch.utils.data.DataLoader(train_data, batch_size,
                                           num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(valid_data, batch_size,
                                           num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size,
                                          num_workers=num_workers, shuffle=False)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
|
st45990
|
The number of images remains the same after you do data augmentation, since it happens on the fly. The literal meaning of "augmentation" can be a little misleading; to get a better idea of why it is still called data augmentation, see this post: Data augmentation in PyTorch
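A quick way to see this, reusing the train_data from the snippet above (a sketch):
# the dataset length equals the number of files on disk, regardless of transforms
print(len(train_data))

# reading the same index twice gives two different random augmentations
img_a, _ = train_data[0]
img_b, _ = train_data[0]
print(torch.equal(img_a, img_b))  # almost always False with random transforms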
|
st45991
|
Let’s say I have a 4-dimensional tensor (batch x channel x time x space). I’d like to downsample the space axis and upsample the time axis. I can achieve this sequentially as follows:
tnsr = torch.randn(2, 3, 4, 5, requires_grad=True) # batch x channel x time x space
space_downsampler = nn.Conv2d(3, 3, (1, 3), stride=(1, 2), padding=(0, 1))
tnsr = space_downsampler(tnsr) # downsample space dimension
assert tnsr.shape == (2, 3, 4, 3)
time_upsampler = nn.ConvTranspose2d(3, 3, (3, 1), stride=(2, 1), padding=(1, 0))
tnsr = time_upsampler(tnsr) # upsample time dimension
assert tnsr.shape == (2, 3, 7, 3)
This solution uses a 1x3 filter with stride (1,2) followed by a 3x1 filter with fractional stride (0.5, 1). But I would prefer to use a single 3x3 filter that has stride (0.5, 2). This would mean that the 3x3 filter is applied with fractional striding along one axis and with normal 2x striding along the other axis.
How might I implement this? One idea that comes to mind would be to preprocess the tensor by “injecting zeros” into the time axis, so that a poor-man’s fractional stride can be achieved using a normal convolution module:
tnsr = torch.randn(2, 3, 4, 5, requires_grad=True) # batch x channel x time x space
tnsr_expanded = torch.zeros(2, 3, 7, 5)
tnsr_expanded[:, :, ::2] = tnsr
hybrid_conv = nn.Conv2d(3, 3, (3, 3), stride=(1, 2), padding=(1, 1))
tnsr = hybrid_conv(tnsr_expanded)
assert tnsr.shape == (2, 3, 7, 3)
But there is a problem with the above implementation: one cannot backpropagate gradients smoothly through the tnsr_expanded[:, :, ::2] = tnsr assignment (as far as I know). Here is my clumsy alternative, which does support backpropagation:
tnsr = torch.randn(2, 3, 4, 5, requires_grad=True) # batch x channel x time x space
zs = torch.zeros(2, 3, 5)
tnsr_expanded = torch.stack((tnsr[:,:,0], zs, tnsr[:,:,1], zs, tnsr[:,:,2], zs, tnsr[:,:,3]), dim=2)
hybrid_conv = nn.Conv2d(3, 3, (3, 3), stride=(1, 2), padding=(1, 1))
tnsr = hybrid_conv(tnsr_expanded)
assert tnsr.shape == (2, 3, 7, 3)
Any ideas for a more elegant solution?
EDIT:
Nevermind, it appears that I was mistaken about backpropagating gradients through the tnsr_expanded[:, :, ::2] = tnsr assignment: backpropagation works as expected.
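A quick sanity check of that EDIT, using the same shapes as above (a sketch):
import torch
import torch.nn as nn

tnsr = torch.randn(2, 3, 4, 5, requires_grad=True)  # batch x channel x time x space
tnsr_expanded = torch.zeros(2, 3, 7, 5)
tnsr_expanded[:, :, ::2] = tnsr                      # zero-injection along time

hybrid_conv = nn.Conv2d(3, 3, (3, 3), stride=(1, 2), padding=(1, 1))
out = hybrid_conv(tnsr_expanded)
out.sum().backward()
print(tnsr.grad.shape)  # torch.Size([2, 3, 4, 5]) -- gradients reach the original tensor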
|
st45992
|
Hi.
I am quite new to this, and still trying to figure things out. Any help is appreciated.
I am trying to create two separate LSTM networks which would take in different sequence lengths. For example, lstm1 would take a sequence length of 6 and lstm2 a sequence length of 12. Afterwards, the outputs of both networks should be combined to produce an output of sequence length 1.
Can this be done?
I have read about pack_padded_sequence, but I am not sure if it applies here.
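One possible arrangement (purely a sketch, not from the thread; all names and sizes are made up) is to run the two LSTMs side by side and concatenate their last hidden states:
import torch
import torch.nn as nn

class TwoScaleLSTM(nn.Module):
    """Hypothetical sketch: two LSTMs over different sequence lengths,
    last hidden states concatenated into a single one-step prediction."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.lstm_short = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.lstm_long = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(2 * hidden_dim, out_dim)

    def forward(self, x_short, x_long):          # (B, 6, in_dim), (B, 12, in_dim)
        _, (h_s, _) = self.lstm_short(x_short)   # h_s: (1, B, hidden_dim)
        _, (h_l, _) = self.lstm_long(x_long)
        combined = torch.cat([h_s[-1], h_l[-1]], dim=1)
        return self.head(combined)               # (B, out_dim): one step ahead

model = TwoScaleLSTM(in_dim=1, hidden_dim=16, out_dim=1)
out = model(torch.randn(8, 6, 1), torch.randn(8, 12, 1))
print(out.shape)  # torch.Size([8, 1])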
|
st45993
|
Hi guys,
I've run into a problem that I cannot solve. Say I have a tensor A with shape (batch_A, hidden_dim) and a tensor B with shape (batch_B, hidden_dim). I would like to reshape both into (batch_A, h, hidden_dim/h) and (batch_B, h, hidden_dim/h), concatenate them pairwise into (batch_A x batch_B, 2h, hidden_dim/h), and then run a Conv on the result. What would be the best way to do such an operation? As far as I know, I could use this:
A = A.reshape(-1, h, hidden_dim // h)
B = B.reshape(-1, h, hidden_dim // h)
A = A.unsqueeze(0).repeat(batch_B, 1, 1, 1)
B = B.unsqueeze(1).repeat(1, batch_A, 1, 1)
concat_tensor = torch.cat([A, B], dim=-2)
res = conv(concat_tensor)
Is there any better way to do this? repeat() consumes a lot of memory, which is not ideal for me because one of my batch sizes can be as large as 15,000.
Regards,
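One variant that avoids the repeat() copies (a sketch with made-up sizes; the Conv1d at the end is just a placeholder for the actual conv): expand() returns views, so only the final torch.cat allocates the big tensor:
import torch
import torch.nn as nn

batch_A, batch_B, hidden_dim, h = 4, 3, 8, 2
A = torch.randn(batch_A, hidden_dim).reshape(batch_A, h, hidden_dim // h)
B = torch.randn(batch_B, hidden_dim).reshape(batch_B, h, hidden_dim // h)

# expand creates broadcasted views (no copy); only torch.cat materialises memory
A_exp = A.unsqueeze(0).expand(batch_B, -1, -1, -1)   # (batch_B, batch_A, h, d)
B_exp = B.unsqueeze(1).expand(-1, batch_A, -1, -1)   # (batch_B, batch_A, h, d)
concat = torch.cat([A_exp, B_exp], dim=-2)           # (batch_B, batch_A, 2h, d)
concat = concat.reshape(batch_A * batch_B, 2 * h, hidden_dim // h)

conv = nn.Conv1d(2 * h, 2 * h, kernel_size=3, padding=1)
print(conv(concat).shape)  # torch.Size([12, 4, 4])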
|
st45994
|
I could not post all the code, but basically mypy does not pass because it does not recognize the amp.autocast() function and throws an error:
error: Module has no attribute "autocast"
from torch.cuda import amp
with amp.autocast():
# some code
Here is how I run mypy:
python -m mypy --ignore-missing-imports . || exit 1
Is it possible to deactivate this error?
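One common workaround (a sketch; whether silencing the check is acceptable here is up to you) is a targeted ignore comment on the offending line instead of a global flag:
from torch.cuda import amp

with amp.autocast():  # type: ignore[attr-defined]
    # some code
    pass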
|
st45995
|
In this tutorial: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html#quantization-aware-training
bias=False is used in the Conv layers.
I am wondering why we disable the bias in Conv for quantization-aware training?
Many thanks.
|
st45996
|
Hello,
would you know how I can adapt this code so that the sizes of the tensors match? I get this error: x = torch.cat([x1,x2],1) RuntimeError: Sizes of tensors must match except in dimension 0. Got 32 and 1 (The offending index is 0).
My images are size 416x416.
Thank you in advance for your help,
num_classes = 20

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.inc = models.inception_v3(pretrained=True)
        self.inc.aux_logits = False
        for child in list(self.inc.children())[:-5]:
            for param in child.parameters():
                param.requires_grad = False
        self.inc.fc = nn.Sequential()
        self.dens121 = models.densenet121(pretrained=True)
        for child in list(self.dens121.children())[:-6]:
            for param in child.parameters():
                param.requires_grad = False
        self.dens121 = nn.Sequential(*list(self.dens121.children())[:-1])
        self.SiLU = nn.SiLU()
        self.linear = nn.Linear(4096, num_classes)
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        x1 = self.SiLU(self.dens121(x))
        x1 = x1.view(-1, 2048)
        x2 = self.inc(x).view(-1, 2048)
        x = torch.cat([x1, x2], 1)
        return self.linear(self.dropout(x))
|
st45997
|
Solved by ptrblck in post #2
|
st45998
|
I cannot reproduce this issue by using an input tensor of [batch_size, 3, 416, 416] and am running into a shape mismatch error before.
Changing x.view(-1, 2048) to x.view(x.size(0), -1) and adapting the expected in_features in self.linear works:
num_classes = 20

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.inc = models.inception_v3(pretrained=True)
        self.inc.aux_logits = False
        for child in list(self.inc.children())[:-5]:
            for param in child.parameters():
                param.requires_grad = False
        self.inc.fc = nn.Sequential()
        self.dens121 = models.densenet121(pretrained=True)
        for child in list(self.dens121.children())[:-6]:
            for param in child.parameters():
                param.requires_grad = False
        self.dens121 = nn.Sequential(*list(self.dens121.children())[:-1])
        self.SiLU = nn.SiLU()
        self.linear = nn.Linear(175104, num_classes)
        self.dropout = nn.Dropout(0.2)

    def forward(self, x):
        x1 = self.SiLU(self.dens121(x))
        x1 = x1.view(x.size(0), -1)
        x2 = self.inc(x).view(x.size(0), -1)
        x = torch.cat([x1, x2], 1)
        return self.linear(self.dropout(x))

model = Net()
x = torch.randn(2, 3, 416, 416)
out = model(x)
|
st45999
|
Hi everyone
I’m training a model using torch and the clip_grad_norm_ function is returning a tensor with nan:
tensor(nan, device='cuda:0')
Is there any specific reason why this would happen? Thanks for the help.
|
st46000
|
Hi,
This might happen if the norm of your Tensors is 0, or if they have a single element?
|
st46001
|
Please excuse my late response. The tensor has more than one element, but I did notice that the elements in the tensor are very close to zero. Could this also cause the norm to be nan? And how would I get around this? Thanks
|
st46002
|
@albanD can correct me if I’m wrong but clip_grad_norm_ is an in-place operation and doesn’t return anything (None) which might be implicitly cast to nan. So use it like this (and do not assign it to anything):
clip_grad_norm_(model.parameters(), 1.0)
|
st46003
|
I'm not sure; from the doc it does modify the weights in place but also returns the total norm.
@Maks_Botlhale Which norm are you using? This is most likely due to the content of your weights yes
|
st46004
|
Hi
I’m using norm_type=2. Yes, the clip_grad_norm_(model.parameters(), 1.0) function does return the total_norm and it’s this total norm that’s nan.
|
st46005
|
Is any element in any parameter nan (or inf) by any chance? You can use p.isinf().any() to check.
|
st46006
|
I just checked for that; none of the elements in the parameters are infinite. See the screenshot below. I tried decreasing the learning rate and that didn't help; some people suggested changing the dropout rate, but that also didn't help. I also noticed that the validation loss is also nan. (screenshot: check_inf)
|
st46007
|
This is surprising…
The clip_grad_norm_ function is pretty simple and lives here: https://github.com/pytorch/pytorch/blob/1c6ace87d127f45502e491b6a15886ab66975a92/torch/nn/utils/clip_grad.py#L25-L41
Can you try to copy-paste that into your code and check whether it gives nan as well? Then you can add some prints there to see where the nan appears.
|
st46008
|
I copied and pasted that as suggested, and I am still getting nan values when it’s calculating the total norm. Line 36 of the code I copied calculates the total norm as:
total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type).to(device) for p in parameters]), norm_type)
I did the p.grad.detach() function on a separate line and I noticed that’s where the nan values start popping up.
|
st46009
|
Ho right (sorry I missed that…). It computes the grad norm, not the Tensors norm!
You need to check if the gradients of the parameters contain nans: p.grad.isinf().any()
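For example, a small sketch that loops over the model's parameters right after loss.backward() and reports any non-finite gradients (model is whatever module you are training):
# run right after loss.backward(), before clipping
for name, p in model.named_parameters():
    if p.grad is not None and not torch.isfinite(p.grad).all():
        print(name, "min:", p.grad.min().item(), "max:", p.grad.max().item())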
|
st46010
|
Yes, that function also returns False. See the screenshot below. (screenshot: check_grads)
|
st46011
|
Well if you said in your comment above that p.grad.detach() has nan, then the grad must have nans already.
|
st46012
|
You can try this quite simple example, maybe you can find a solution:
import torch
x = torch.tensor([1., 2.])
x.grad = torch.tensor([0.4, float("inf")])
torch.nn.utils.clip_grad_norm_(x, 5)
print(x.grad)
|
st46013
|
Hello, I am writing a TDNN to compare the performance of MSE and Minimum Error Entropy (MEE) loss on time-series data; the goal is to predict the next sample from 10 delays. I created a custom function that calculates the MEE loss, but I am not sure whether I need to write a custom autograd method or whether it will work as is. Also, I removed the negative log from the formulation of MEE, so I would need to do gradient ascent rather than descent; is there a way to make .backward() do this, or should I just negate the output?
Right now the training isn't working (and it takes forever), and I don't quite understand how writing a custom loss function works. Please let me know if you can offer any insight or if you spot any problems in my code. Additionally, I had to change my training loop, since the MEE formulation requires saving the outputs and targets across an epoch: it needs a set of errors in order to calculate the loss (with online learning I only get one). Previously, for the MSE method, I was using online learning and backpropagating for every window (i.e. 10 elements of the data array) of the time series.
Model:
class TDNN(nn.Module):
    def __init__(self, num_delays, hidden_size, num_outputs):
        super(TDNN, self).__init__()
        self.num_delays = num_delays
        self.hidden_size = hidden_size
        self.num_outputs = num_outputs
        self.fc0 = nn.Linear(self.num_delays, self.hidden_size)
        self.fc1 = nn.Linear(self.hidden_size, num_outputs)

    def forward(self, x):
        f1 = self.fc0(x)
        out = self.fc1(f1)
        return out
MEE Loss Function:
def Gaussian_Kernel(x, mu, sigma):
    prob = (1. / (torch.sqrt(2. * math.pi * (sigma**2)))) * torch.exp((-1.) * (((x**2) - mu) / (2. * (sigma**2))))
    return prob

train_list = torch.tensor(train_list)
variance = torch.var(train_list)**0.5
mean = torch.mean(train_list)

def InformationPotential(output, target, mu, sigma):
    error = output - target
    error_diff = []
    for i in range(0, error.size(0)):
        for j in range(0, error.size(0)):
            error_diff.append(error[i] - error[j])
    error_diff = torch.cat(error_diff)
    # print(error_diff)
    return (1. / (target.size(0)**2)) * torch.sum(Gaussian_Kernel(error_diff, 0, variance * (2**0.5)))
Training Function:
train_losses = []
train_counter = []
def train2(model2):
    model2.train()
    outputs = []
    targets = []
    for i, (data, target) in enumerate(train_loader):
        data, target = data.cuda(), target.cuda()
        # Compute the forward pass through the network up to the loss
        output = model2(data)
        outputs.append(output)
        targets.append(target)
    outputs = torch.cat(outputs)
    targets = torch.cat(targets)
    loss = InformationPotential(outputs, targets, 0, variance)
    # Backward and optimize
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print('Train Epoch: {}\tLoss: {:.6f}'.format(epoch, loss))
Test Function:
tot_losses = []
tot_counter = [i * len(train_loader.dataset) for i in range(EPOCHS + 1)]
def test2(model2, loader):
    with torch.no_grad():
        model2.eval()
        # N = 0
        tot_loss = 0.0
        predictions = []
        targets = []
        for i, (data, target) in enumerate(loader):
            data, target = data.cuda(), target.cuda()
            output = model2(data)
            targets.append(target.cpu())
            predictions.append(output.cpu())
        predictions = torch.cat(predictions)
        targets = torch.cat(targets)
        tot_loss += InformationPotential(predictions, targets, 0, variance)
        # pred = output.data.max(1, keepdim = True)[1]
        # correct += pred.eq(target.data.view_as(pred)).sum()
        tot_loss /= len(test_loader.dataset)
        tot_losses.append(tot_loss)
        return tot_loss, predictions, targets
Train/Test Loop:
print("Before training validation set performance: \n")
test_loss, _, _ = test2(model2, test_loader)
print("\nTest : Avg. Loss : " + str(test_loss))
print()
print("TRAINING")
for epoch in range(EPOCHS):
    print("---------------------------------------------------------------------------")
    train2(model)
    test_loss, _, _ = test2(model2, test_loader)
    print("\nTest : Avg. Loss : {:.4f}\n".format(test_loss))
    print()
    print("---------------------------------------------------------------------------")
print("FINISHED TRAINING")
Train/Test Loop Output:
Before training validation set performance:
Test : Avg. Loss : tensor(0.0005)
TRAINING
---------------------------------------------------------------------------
Train Epoch: 0 Loss: 0.630891
Test : Avg. Loss : tensor(0.0005)
---------------------------------------------------------------------------
---------------------------------------------------------------------------
Train Epoch: 1 Loss: 0.630746
Test : Avg. Loss : tensor(0.0005)
---------------------------------------------------------------------------
---------------------------------------------------------------------------
Thank you in advance for any help and insight!
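Regarding the gradient-ascent question: the usual trick (a sketch, not specific to this code) is to minimise the negated objective, so the standard optimizer step performs ascent on the original quantity:
# maximising the information potential == minimising its negative
loss = -InformationPotential(outputs, targets, 0, variance)
optimizer.zero_grad()
loss.backward()
optimizer.step()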
|
st46014
|
Solved by smth in post #2
|
st46015
|
we should now deprecate torch.nn.functional.tanh on master, as tensors and variables are now merged.
|
st46016
|
If I use nn.Tanh I have to declare it in my_model.__init__(), e.g.
self.tanh = nn.Tanh()
Whereas I can use nn.functional.tanh directly in my_model.forward(), e.g.
output = nn.functional.tanh(input)
If you deprecate nn.functional.tanh I could do
output = nn.Tanh()(input)
wherever I need the functional form, but it would be slower because of the class instantiation.
What has this to do with Variables and Tensors? I am confused.
|
st46017
|
jpeg729:
If you deprecate nn.functional.tanh I could do
output = nn.Tanh()(input)
Or you could still use torch.tanh():
output = input.tanh()
|
st46018
|
I am expecting an output with both positive and negative values. So, using
output = torch.tanh(model(input))
as the final output of the network should be fine?
Or are there any other variants available?
|
st46019
|
There are only a few activations like torch.tanh in torch,
but in torch.nn.functional I found a lot of functional activations.
Why is that?
|
st46020
|
I am trying to understand the code of the Graph Attention Network implementation, but I am stuck at the following chunk of code:
if isinstance(in_channels, int):
    self.lin_l = Linear(in_channels, heads * out_channels, bias=False)
    self.lin_r = self.lin_l
else:
    self.lin_l = Linear(in_channels[0], heads * out_channels, False)
    self.lin_r = Linear(in_channels[1], heads * out_channels, False)
and from the documentation we understand that:
in_channels (int or tuple): Size of each input sample. A tuple
corresponds to the sizes of source and target dimensionalities.
But the attention coefficient is between two graph nodes that have equal feature dimensionality, so what exactly are _l and _r needed for? Why can't you compute just one (as in the first branch of the if)?
|
st46021
|
Hi,
We are planning to use PyTorch JIT in production and our top-level modules use JIT script annotations. We need JIT scripting to be turned off during training and switched back on for model export (with inference). However, PYTORCH_JIT is an environment variable that is set statically.
How can we achieve this without explicitly using PYTORCH_JIT=0 statically while launching the process?
While exporting the model, we perform a correctness check and need to turn it off/on dynamically for the 2 models (with and without JIT) to be loaded. How can we achieve this?
Thanks!
|
st46022
|
@apaszke told me that PYTORCH_JIT=0 is only meant for debugging use. In your case, I would argue that you should do jit.script before exporting, where you can also compare both versions for correctness.
|
st46023
|
I assume you mean using the module normally during training and then calling torch.jit.trace at export time to create the JIT version. While this works for fully traceable models, in my use case I have mixed tracing and TorchScript, and I don't see any way to disable the TorchScript annotations; the JIT is used for script functions/methods even if torch.jit.trace was never called. Is there any other way to achieve this?
|
st46024
|
I have these methods:
def set_jit_enabled(enabled: bool):
    """ Enables/disables JIT """
    if torch.__version__ < "1.7":
        torch.jit._enabled = enabled
    else:
        if enabled:
            torch.jit._state.enable()
        else:
            torch.jit._state.disable()

def jit_enabled():
    """ Returns whether JIT is enabled """
    if torch.__version__ < "1.7":
        return torch.jit._enabled
    else:
        return torch.jit._state._enabled.enabled
When training, I call set_jit_enabled(False) before instantiating modules that have JIT annotations. I haven't found a proper way of doing this, but this hack works just fine.
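For illustration, the intended usage could look roughly like this (a sketch; MyScriptedModule, train and the export path are hypothetical placeholders, not names from the thread):
# training: build the module with scripting disabled, so it runs eagerly
set_jit_enabled(False)
model = MyScriptedModule()   # hypothetical module carrying JIT annotations
train(model)                 # hypothetical training loop

# export: re-enable scripting and rebuild the compiled version
set_jit_enabled(True)
scripted = MyScriptedModule()
scripted.load_state_dict(model.state_dict())
torch.jit.save(scripted, "model.pt")  # assuming the rebuilt module is a ScriptModule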
|
st46025
|
I have copied the GAT layer to a separate file. I want to first make it work, then experiment with it a bit. I am, however, having a problem with making it run. The problem seems to come from this line:
alpha = softmax(alpha, index, ptr, size_i)
in the message function. It seems to be required for the softmax function, as:
ptr (LongTensor, optional): If given, computes the softmax based on
sorted inputs in CSR representation. (default: :obj:None)
However, if I include a copy of the GAT layer (local copy on my drive) into a model and try to run it for node classification, I get:
~/my_test/custom_layers/layers.py in forward(self, x, edge_index, size, return_attention_weights)
135
136 # propagate_type: (x: OptPairTensor, alpha: OptPairTensor)
--> 137 out = self.propagate(edge_index, x=(x_l, x_r),
138 alpha=(alpha_l, alpha_r), size=size)
139
~/anaconda3/envs/py38/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py in propagate(self, edge_index, size, **kwargs)
255 # Otherwise, run both functions in separation.
256 if mp_type == 'edge_index' or self.__fuse__ is False:
--> 257 msg_kwargs = self.__distribute__(self.__msg_params__, kwargs)
258 out = self.message(**msg_kwargs)
259
~/anaconda3/envs/py38/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py in __distribute__(self, params, kwargs)
178 if data is inspect.Parameter.empty:
179 if param.default is inspect.Parameter.empty:
--> 180 raise TypeError(f'Required parameter {key} is empty.')
181 data = param.default
182 out[key] = data
TypeError: Required parameter ptr_i is empty.
I don't understand exactly what's going on.
EDIT: It works if I delete the parameter from the message function, and use just:
def message(self, x_j: Tensor, alpha_j: Tensor, alpha_i: OptTensor,
            index: Tensor, size_i: Optional[int]) -> Tensor:
    alpha = alpha_j if alpha_i is None else alpha_j + alpha_i
    alpha = F.leaky_relu(alpha, self.negative_slope)
    alpha = softmax(alpha, index, size_i)
    self._alpha = alpha
    alpha = F.dropout(alpha, p=self.dropout, training=self.training)
    return x_j * alpha.unsqueeze(-1)
However I am not sure if this is what I should do, since I am deleting something I don’t understand.
|
st46026
|
Hello,
I tried to train a combined ResNet + LSTM model on several GPUs, but the loss and the weights did not change. I am confused and have no idea why. Maybe someone has an idea.
class ResNet(nn.Module):
    def __init__(self):
        super(ResNet, self).__init__()
        resnet = load_pretrainednet()
        modules = list(resnet.children())[:-1]
        self.resnet = nn.Sequential(*modules)

    def forward(self, x):
        x1 = self.resnet(x)
        x1 = x1.view(x1.size(0), -1)
        print("Outside: input size", x.size(), "outputs_size", x1.size())
        return x1

class Combine(nn.Module):
    def __init__(self):
        super(Combine, self).__init__()
        self.cnn = ResNet()
        self.rnn = nn.LSTM(input_size=2048, hidden_size=21, num_layers=1, batch_first=True)
        self.linear = nn.Linear(21, 21)

    def forward(self, x):
        batch_size, C, H, W = x.size()
        c_in = x.view(batch_size, C, H, W)
        c_out = self.cnn(c_in)
        r_in = c_out.view(batch_size, 1, -1)
        self.rnn.flatten_parameters()
        r_out, (h_n, h_c) = self.rnn(r_in)
        r_out2 = self.linear(r_out[:, -1, :])
        return r_out2
def train_net(net, data_loader, num_images):
    criterion = nn.BCEWithLogitsLoss()
    optimizer = optim.Adam(net.parameters(), lr=0.000001)
    running_loss = 0.0
    running_corrects = 0.0
    net.train()  # Set model to training mode
    run_count = 0
    current_images = 0
    name, old_lstm_weight = list(net.module.rnn.named_parameters())[0]
    old_linear_weight = net.module.linear.weight
    for i, (inputs, labels, masks) in enumerate(data_loader, 1):
        print(run_count)
        torch.cuda.empty_cache()
        gc.collect()
        # input = input.unsqueeze()
        inputs = torch.cat((inputs), 0)
        labels = torch.cat((labels), 0)
        inputs = inputs.to(device)
        labels = labels.to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward
        # track history if only in train
        print('Current input length in total: {}'.format(inputs.size()[0]))
        if torch.cuda.is_available():
            print('Current inputsize per GPU: {}'.format(np.ceil(inputs.size()[0] / torch.cuda.device_count())))
        preds = net(inputs)
        for elem in nvsmi.get_gpu_processes():
            print(elem)
        labels[labels >= 0.5] = 1.0
        labels[labels < 0.5] = 0.0
        loss = criterion(preds, labels)
        loss.backward()
        for elm in net.module.parameters():
            print(elm[0][0])
            break
        optimizer.step()
        for elm in net.module.parameters():
            print(elm[0][0])
            break
        preds = torch.sigmoid(preds)  # torch.Size([N, C]) e.g. tensor([[0., 0.5, 0.]])
        preds[preds >= 0.5] = 1.0
        preds[preds < 0.5] = 0.0
        accuracy = (preds == labels).sum() / (labels.size()[0] * labels.size()[1] * 100.0)
        # zero the parameter gradients
        optimizer.zero_grad()
        # statistics
        running_loss += loss.item() * inputs.size(0)
        running_corrects += accuracy
        del preds
        torch.cuda.empty_cache()
        gc.collect()
        num_images += inputs.size(0)
        current_images += inputs.size(0)
        if run_count % 10 == 9:  # every 1000 mini-batches...
            # ...log the running loss
            writer.add_scalar('Training/NEW_Runs_Loss', running_loss / current_images, num_images)
            writer.add_scalar('Training/NEW_Runs_Acuracy', running_corrects.double() / current_images, num_images)
        run_count += 1
        # print('input size: ', len(inputs))
        # print('label size: ', len(labels))
        # plot_classes_preds(net, inputs, labels, mean, std)
        # print(list(net.module.rnn)[0])
        name, lstm_weight = list(net.module.rnn.named_parameters())[0]
        linear_weight = net.module.linear.weight
        if torch.equal(lstm_weight, old_lstm_weight):
            print(colored("LSTM weight didn't changed", 'red'))
        else:
            print(colored("LSTM weight changed", 'green'))
        if torch.equal(linear_weight, old_linear_weight):
            print(colored("Linear weight didn't changed", 'red'))
        else:
            print(colored("Linear weight changed", 'green'))
        old_lstm_weight = lstm_weight
        old_linear_weight = linear_weight
        # print(net.module.linear.weight)
    return net, running_loss, running_corrects, optimizer, num_images
Print output:
Current input length in total: 90
Current inputsize per GPU: 45.0
Outside: input size torch.Size([45, 3, 224, 224]) outputs_size torch.Size([45, 2048])
Outside: input size torch.Size([45, 3, 224, 224]) outputs_size torch.Size([45, 2048])
pid: 3375727 | gpu_id: 0 | gpu_uuid: GPU-96de9d91-de41-4be2-6c12-280909e98722 | gpu_name: Tesla V100-SXM2-32GB | used_memory: 9689.0MB
pid: 3375727 | gpu_id: 1 | gpu_uuid: GPU-bbcd5e54-bdfe-7c4c-3d6b-8312ba354811 | gpu_name: Tesla V100-SXM2-32GB | used_memory: 8939.0MB
tensor([[-0.0124, -0.0049, -0.0047, -0.0125, 0.0765, -0.0013, -0.0930],
[-0.0035, -0.0379, -0.0086, 0.1207, 0.1172, 0.2363, 0.0651],
[ 0.0040, 0.0597, 0.0610, 0.0591, 0.0746, 0.1351, 0.1906],
[ 0.1521, -0.0442, -0.1501, -0.2492, -0.2439, -0.1416, 0.1227],
[ 0.0078, 0.0360, -0.0127, -0.2912, -0.3637, -0.2218, 0.0186],
[ 0.0095, 0.0808, 0.2047, 0.1493, 0.0226, -0.0785, -0.0541],
[-0.0052, 0.0481, 0.1400, 0.3045, 0.2305, 0.0612, 0.1152]],
device='cuda:0', dtype=torch.float64, grad_fn=<SelectBackward>)
tensor([[-0.0124, -0.0049, -0.0047, -0.0125, 0.0765, -0.0013, -0.0930],
[-0.0035, -0.0379, -0.0086, 0.1207, 0.1172, 0.2363, 0.0651],
[ 0.0040, 0.0597, 0.0610, 0.0591, 0.0746, 0.1351, 0.1906],
[ 0.1521, -0.0442, -0.1501, -0.2492, -0.2439, -0.1416, 0.1227],
[ 0.0078, 0.0360, -0.0127, -0.2912, -0.3637, -0.2218, 0.0186],
[ 0.0095, 0.0808, 0.2047, 0.1493, 0.0226, -0.0785, -0.0541],
[-0.0052, 0.0481, 0.1400, 0.3045, 0.2305, 0.0612, 0.1152]],
device='cuda:0', dtype=torch.float64, grad_fn=<SelectBackward>)
LSTM weight didn't changed
Linear weight didn't changed
Thanks
|
st46027
|
Hi,
I build my docker image from PyTorch image: pytorch/pytorch:1.7.0-cuda11.0-cudnn8-devel
My server:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.23.04 Driver Version: 455.23.04 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 3090 Off | 00000000:3B:00.0 Off | N/A |
| 0% 32C P0 101W / 350W | 0MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 3090 Off | 00000000:AF:00.0 Off | N/A |
| 30% 32C P0 67W / 350W | 0MiB / 24268MiB | 2% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I've already installed the NVIDIA Container Toolkit and restarted Docker:
sudo systemctl restart docker
I can run nvidia-smi inside Docker container but when I try
sudo docker run --rm --gpus all khoa/pytorch:1.7 python -c 'import torch as t; print(t.cuda.is_available()); print(t.backends.cudnn.enabled)'
cuda.is_available() returns False
while backends.cudnn.enabled returns True:
/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /opt/conda/conda-bld/pytorch_1603729096996/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
False
True
Can anyone help me?
I can’t run my code because of this.
|
st46028
|
Solved by ptrblck in post #5
|
st46029
|
The container is working for me so I guess your docker setup isn’t working properly.
Are you able to run any other container shipped with CUDA applications?
nvidia-docker run -it --ipc=host pytorch/pytorch:1.7.0-cuda11.0-cudnn8-devel
root@d11e05a20388:/workspace# python -c "import torch; print(torch.cuda.is_available())"
True
|
st46030
|
pytorch/pytorch:1.7.0-cuda11.0-cudnn8-devel also doesn't work:
sudo docker run --rm --gpus all pytorch/pytorch:1.7.0-cuda11.0-cudnn8-devel python -c 'import torch as t; print(t.cuda.is_available())'
False
/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at /opt/conda/conda-bld/pytorch_1603729096996/work/c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
So weird; my other server (server B) has the same configuration and can run my image.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.23.04 Driver Version: 455.23.04 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 3090 Off | 00000000:3B:00.0 Off | N/A |
| 0% 38C P8 34W / 350W | 0MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 3090 Off | 00000000:AF:00.0 Off | N/A |
| 0% 32C P8 34W / 350W | 0MiB / 24268MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Meanwhile, server A has had this problem since I rebooted it.
I also think it is because of Docker, but I've tried
sudo systemctl restart docker
and it still doesn't work.
The difference between the sudo docker info outputs:
Server A:
Runtimes: runc nvidia
Kernel Version: 5.4.0-53-generic
Server B:
Runtimes: runc
Kernel Version: 5.4.0-52-generic
|
st46031
|
Following this guide, and since I'm using nvidia-docker2, I removed the nvidia runtime (Runtimes: runc nvidia) by deleting /etc/docker/daemon.json and rebooted server A.
But it still doesn't work.
This problem started because:
while training, a bug made a function divide by 0, so my server froze/crashed.
So I Ctrl+C'd the task and rebooted the server.
Now cuda.is_available() always returns False.
Please help me.
|
st46032
|
My best guess would be that (unwanted) updates might have been executed, which wiped the NVIDIA driver on your system after the restart. If that’s the case, you would have to reinstall them and recheck your container.
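As a quick sanity check after reinstalling the driver, something like the following could be run inside the container (just a generic diagnostic sketch, not specific to this image):
import torch

print(torch.__version__)          # PyTorch build
print(torch.version.cuda)         # CUDA version PyTorch was compiled against
print(torch.cuda.is_available())  # should be True once the driver is visible again
print(torch.cuda.device_count())  # number of GPUs the container can see

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    x = torch.randn(2, 2, device="cuda")  # tiny allocation to confirm a CUDA context can be created
    print(x @ x)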
|
st46033
|
Thank you.
I reinstalled the NVIDIA driver and it works.
I should have done it from the beginning.
|
st46034
|
Hi, I always get a naming error when loading my own dataset.
Why is the path printed twice?
FileNotFoundError: [Errno 2] No such file or directory: './dataset/train_GT./dataset/train/32.png'
My folder structure:
––dataset
——train
———0.jpg
———1.jpg
——train_GT
———0.png
———1.png
is there something wrong with the filename? I define GT_path as follows:
def __getitem__(self, index):
    image_path = self.image_paths[index]
    filename = image_path.split('_')[-1][:-len(".jpg")]
    GT_path = self.GT_paths + filename + '.png'
https://github.com/LeeJunHyun/Image_Segmentation 2
With regards. Any help would be greatly appreciated!
|
st46035
|
Solved by AndreasW in post #2
Why are you splitting with ('_')?
With
filename = image_path.split('_')[-1][:-len(".jpg")]
you get --> ./dataset/train/32
which you would not get with ('/'):
filename = image_path.split('/')[-1][:-len(".jpg")]
|
st46036
|
Why are you splitting with ('_')?
With
filename = image_path.split('_')[-1][:-len(".jpg")]
you get --> ./dataset/train/32
which you would not get with ('/'):
filename = image_path.split('/')[-1][:-len(".jpg")]
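As a side note, a slightly more robust way to build the ground-truth path is to use the os.path helpers instead of manual string splitting. A minimal sketch, assuming the same folder layout as above (gt_dir stands in for self.GT_paths):
import os

def gt_path_for(image_path, gt_dir="./dataset/train_GT"):
    # "./dataset/train/32.jpg" -> "32"
    filename = os.path.splitext(os.path.basename(image_path))[0]
    # "./dataset/train_GT/32.png"
    return os.path.join(gt_dir, filename + ".png")

print(gt_path_for("./dataset/train/32.jpg"))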
|
st46037
|
Hello. I am trying to load a video frame by frame using imageio, but I am facing a weird error when num_workers > 0. This is a short snippet to reproduce the error.
from skvideo import datasets
import imageio
import torch
from torch.utils.data import Dataset, DataLoader
class VideoDataset(Dataset):
def __init__(self, video_path):
super().__init__()
self.video_reader = imageio.get_reader(video_path)
self.metadata = self.video_reader.get_meta_data()
self.nframes = self.metadata["nframes"]
def __getitem__(self, idx):
img = self.video_reader.get_data(idx)
img = torch.from_numpy(img)
return img
def __len__(self):
return self.nframes
video_path = datasets.bikes()
video_data = VideoDataset(video_path)
video_loader = DataLoader(video_data, batch_size=4, num_workers=4)
for imgs in video_loader:
print(imgs.shape)
when I try to iteratively print the shape of the loaded tensors, it gets stuck midway and throws an error:
torch.Size([8, 272, 640, 3])
torch.Size([8, 272, 640, 3])
torch.Size([8, 272, 640, 3])
torch.Size([8, 272, 640, 3])
torch.Size([8, 272, 640, 3])
torch.Size([8, 272, 640, 3])
torch.Size([8, 272, 640, 3])
torch.Size([8, 272, 640, 3])
---------------------------------------------------------------------------
CannotReadFrameError Traceback (most recent call last)
<ipython-input-20-453bfa4da85a> in <module>()
----> 1 for imgs in video_loader:
2 print(imgs.shape)
Note that the code works fine when num_workers=0. I can’t figure out what’s going wrong.
|
st46038
|
class VideoDataset(Dataset):
def __init__(self, video_path):
super().__init__()
self.video_path = video_path
self.video_reader = imageio.get_reader(video_path)
self.metadata = self.video_reader.get_meta_data()
self.nframes = self.metadata["nframes"]
def __getitem__(self, idx):
video = imageio.get_reader(self.video_path)
img = video.get_data(idx)
img = torch.from_numpy(img)
return img
def __len__(self):
return self.nframes
then it works. (Presumably the reader object cannot be shared safely across the DataLoader worker processes, so each worker has to open its own reader.)
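A common variant of this fix (just a sketch, assuming the same imageio API as above) is to create the reader lazily on first access, so each DataLoader worker opens the file once instead of once per sample:
import imageio
import torch
from torch.utils.data import Dataset

class VideoDataset(Dataset):
    def __init__(self, video_path):
        super().__init__()
        self.video_path = video_path
        self.video_reader = None  # created lazily, once per worker process
        # open a temporary reader only to query the metadata
        reader = imageio.get_reader(video_path)
        self.nframes = reader.get_meta_data()["nframes"]
        reader.close()

    def __getitem__(self, idx):
        if self.video_reader is None:  # first access inside this worker
            self.video_reader = imageio.get_reader(self.video_path)
        img = self.video_reader.get_data(idx)
        return torch.from_numpy(img)

    def __len__(self):
        return self.nframes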
|
st46039
|
I am experiencing a really strange issue with out of memory errors on pytorch:
$ python -c "import torch; torch.randn(12,12).cuda()"
Traceback (most recent call last):
File "<string>", line 1, in <module>
RuntimeError: CUDA error: out of memory
Background information:
I just created a new environment with the following (from the documentation):
conda create -n pytorch-test
conda install -n pytorch-test -c pytorch pytorch cudatoolkit=11.0
And my GPU:
$ echo $CUDA_VISIBLE_DEVICES
2
$ nvidia-smi -i 2
Thu Nov 19 12:52:56 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38 Driver Version: 455.38 CUDA Version: 11.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 2 Tesla P100-PCIE... Off | 00000000:0E:00.0 Off | 2 |
| N/A 47C P0 29W / 250W | 2MiB / 16280MiB | 0% E. Process |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Any help is much appreciated, I'm kinda desperate here.
[EDIT]: This seems to be more of a hardware issue. I am experiencing the same problem with a new installation of tensorflow, following the exact same steps… Still, if you have any pointer to solve this, I'll take it.
|
st46040
|
Could you switch the GPU from EXCLUSIVE Process to the default via nvidia-smi -i 0 -c 0 and check if it’s changing the behavior?
|
st46041
|
Thank you for your answer. Unfortunately, I do not have sufficient permissions to do that.
Nonetheless, here is some more information that makes me think that your approach is probably the right one:
$ nvidia-smi -q | grep "Compute Mode"
Compute Mode : Exclusive_Process
Any way I can go around this problem without changing the compute mode?
|
st46042
|
Hi! I just started training my model with PyTorch. However, I ran into a problem when training the model on the GPU. The error message is shown below.
RuntimeError: Tensor for argument #2 'weight' is on CPU, but expected it to be on GPU (while checking arguments for cudnn_batch_norm)
And this is the code of my model:
class BN(nn.Module):
    def __init__(self, input):
        super(BN, self).__init__()
        self.input = input
        self.bn = nn.BatchNorm1d(input.size()[1], momentum=0.5)
        self.bn.cuda()
    def forward(self, x):
        x = self.bn(x.float())
        x = torch.as_tensor(x).long()
        return x
class DRLSTM(nn.Module):
    def __init__(self,
                 vocab_size,
                 embedding_dim,
                 hidden_size,
                 embeddings=None,
                 padding_idx=0,
                 dropout=0.5,
                 num_classes=3,
                 device="cpu"):
super(DRLSTM, self).__init__()
self.vocab_size = vocab_size
self.embedding_dim = embedding_dim
self.hidden_size = hidden_size
self.num_classes = num_classes
self.dropout = dropout
self.device = device
self.bn = nn.BatchNorm1d(64, momentum=0.8).to(self.device)
self.debug = False
self._word_embedding = nn.Embedding(self.vocab_size,
self.embedding_dim,
padding_idx=padding_idx,
_weight=embeddings)
# print ('embedding_dim: ')
# print (embedding_dim)
if self.dropout:
self._rnn_dropout = RNNDropout(p=self.dropout)
# self._rnn_dropout = nn.Dropout(p=self.dropout)
self._encoding = Seq2SeqEncoder(nn.LSTM,
self.embedding_dim,
self.hidden_size,
bidirectional=True)
# self._encoding1 = Seq2SeqEncoder(nn.LSTM,
# self.embedding_dim,
# int(self.hidden_size/2),
# bidirectional=True)
# self._encoding2 = Seq2SeqEncoder(nn.LSTM,
# self.embedding_dim,
# #self.hidden_size,
# self.hidden_size,
# bidirectional=True)
self._attention = SoftmaxAttention()
self._projection = nn.Sequential(nn.Linear(4 * 2 * self.hidden_size,
self.hidden_size),
nn.ReLU()
)
self._composition = Seq2SeqEncoder(nn.LSTM,
self.hidden_size,
self.hidden_size,
bidirectional=True)
self._composition1 = Seq2SeqEncoder(nn.LSTM,
self.hidden_size,
self.hidden_size,
bidirectional=True)
self._composition2 = Seq2SeqEncoder(nn.LSTM,
2 * self.hidden_size,
self.hidden_size,
bidirectional=True)
self._classification = nn.Sequential(nn.Dropout(p=self.dropout),
nn.Linear(2 * 4 * self.hidden_size,
self.hidden_size),
nn.Tanh(),
nn.Dropout(p=self.dropout),
nn.Linear(self.hidden_size,
self.num_classes))
# Initialize all weights and biases in the model.
self.apply(_init_model_weights)
def forward(self,
premises,
premises_lengths,
hypotheses,
hypotheses_lengths):
premises_mask = get_mask(premises, premises_lengths).to(self.device)
hypotheses_mask = get_mask(hypotheses, hypotheses_lengths) \
.to(self.device)
# BN1 = nn.BatchNorm1d(premises.size()[1], momentum=0.5).to(self.device)
# BN2 = nn.BatchNorm1d(hypotheses.size()[1], momentum=0.5).to(self.device)
# premises = BN1(premises.float())
# hypotheses = BN2(hypotheses.float())
# premises = self.bn(premises.float())
# hypotheses = self.bn(hypotheses.float())
# premises = torch.as_tensor(premises).long()
# hypotheses = torch.as_tensor(hypotheses).long()
bn1 = BN(premises)
premises = bn1(premises)
bn2 = BN(hypotheses)
hypotheses = bn2(hypotheses)
I wonder why this error occurs.
Thank you!
|
st46043
|
In DRLSTM you are using the device='cpu' argument as the default value and then applying it to self.bn, which would push this layer to the CPU. If you are passing a CUDA tensor to this module, this error will be raised.
The better approach is not to specify the device as an argument and just calling model.to(device) after creating the object.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier.
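A minimal sketch of that pattern (the layer sizes are made up, just to illustrate the idea):
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # create all submodules here, without any device handling
        self.bn = nn.BatchNorm1d(64, momentum=0.8)
        self.fc = nn.Linear(64, 3)

    def forward(self, x):
        return self.fc(self.bn(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = Model().to(device)  # moves all registered parameters and buffers at once
out = model(torch.randn(8, 64, device=device))
print(out.shape)  # torch.Size([8, 3])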
|
st46044
|
Hi, I am building a kind of generator network and I am getting the same error. I am new to the PyTorch framework.
The error is shown below:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
2056 return torch.batch_norm(
2057 input, weight, bias, running_mean, running_var,
-> 2058 training, momentum, eps, torch.backends.cudnn.enabled
2059 )
2060
RuntimeError: Tensor for argument #2 'weight' is on CPU, but expected it to be on GPU (while checking arguments for cudnn_batch_norm)
class netG(nn.Module):
def __init__(self,channel_rate=64,drop_rate=0.0):
super(netG, self).__init__()
self.channel_rate = channel_rate
self.nChannels = 4*self.channel_rate
self.drop_rate = drop_rate
self.dilation_factor = 1
self.head = nn.Conv2d(3,4*self.channel_rate,3,1,1)
def dense_block(self,x):
block = nn.Sequential(
nn.BatchNorm2d(self.nChannels),
nn.LeakyReLU(0.2),
nn.Conv2d(self.nChannels,4*self.channel_rate,1,1),
nn.BatchNorm2d(4*self.channel_rate),
nn.Conv2d(4*self.channel_rate,self.channel_rate,3,1,self.dilation_factor,self.dilation_factor),
nn.BatchNorm2d(self.channel_rate),
nn.Dropout2d(self.drop_rate)
)
return block(x)
def tail(self,x):
block = nn.Sequential(
nn.LeakyReLU(0.2),
nn.Conv2d(self.nChannels,4*self.channel_rate,1,1),
nn.BatchNorm2d(4*self.channel_rate),
nn.Dropout(self.drop_rate)
)
return block(x)
def last_layer(self,x):
block= nn.Sequential(
nn.Conv2d(self.nChannels,self.channel_rate,3,1,1),
nn.PReLU(),
nn.Conv2d(self.channel_rate,3,3,1,1),
nn.Tanh()
)
return block(x)
def forward(self,x):
x = self.head(x)
x1 = x
self.nChannels = x.size(1)
self.dilation_factor = 1
d1 = self.dense_block(x)
x = torch.cat((x,d1),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d2 = self.dense_block(x)
x = torch.cat((x,d2),1)
self.dilation_factor = 2
self.nChannels = x.size(1)
d4 = self.dense_block(x)
x = torch.cat((x,d4),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d5 = self.dense_block(x)
x = torch.cat((x,d5),1)
self.dilation_factor = 3
self.nChannels = x.size(1)
d6 = self.dense_block(x)
x = torch.cat((x,d6),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d7 = self.dense_block(x)
x = torch.cat((x,d7),1)
self.dilation_factor = 2
self.nChannels = x.size(1)
d8 = self.dense_block(x)
x = torch.cat((x,d8),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d9 = self.dense_block(x)
x = torch.cat((x,d9),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d10 = self.dense_block(x)
x = self.tail(x)
x = torch.cat((x,x1),1)
self.nChannels = x.size(1)
x = self.last_layer(x)
return x
|
st46045
|
You are recreating modules inside the forward method of netG, which won’t register them during the model creation and will reinitialize these modules with random values in each forward pass, which is most likely not what you want.
Create all modules in netG.__init__ and use them in netG.forward as:
class netG(nn.Module):
def __init__(self,channel_rate=64,drop_rate=0.0):
super(netG, self).__init__()
self.channel_rate = channel_rate
self.nChannels = 4*self.channel_rate
self.drop_rate = drop_rate
self.dilation_factor = 1
self.head = nn.Conv2d(3,4*self.channel_rate,3,1,1)
self.dense_block = nn.Sequential(...)
self.tail = nn.Sequential(...)
self.last_layer = nn.Sequential(...)
def forward(self, x):
x = self.head(x)
...
x = self.dense_block(x)
...
|
st46046
|
I tried that earlier, but the problem is that I want to use different nChannels and dilation factors, and if I create the modules inside __init__ they will be initialized with the same nChannels and dilation factor, which I don't want. Can you suggest any solution?
class netG(nn.Module):
def __init__(self,channel_rate=64,drop_rate=0.0):
super(netG, self).__init__()
self.channel_rate = channel_rate
self.nChannels = 4*self.channel_rate
self.drop_rate = drop_rate
self.head = nn.Conv2d(3,4*self.channel_rate,3,1,1)
self.dilation_factor = 1
self.dense_block = nn.Sequential(
nn.BatchNorm2d(self.nChannels),
nn.LeakyReLU(0.2),
nn.Conv2d(self.nChannels,4*self.channel_rate,1,1),
nn.BatchNorm2d(4*self.channel_rate),
nn.Conv2d(4*self.channel_rate,self.channel_rate,3,1,self.dilation_factor,self.dilation_factor),
nn.BatchNorm2d(self.channel_rate),
nn.Dropout2d(self.drop_rate)
)
self.tail = nn.Sequential(
nn.LeakyReLU(0.2),
nn.Conv2d(self.nChannels,4*self.channel_rate,1,1),
nn.BatchNorm2d(4*self.channel_rate),
nn.Dropout(self.drop_rate)
)
self.last_layer = nn.Sequential(
nn.Conv2d(self.nChannels,self.channel_rate,3,1,1),
nn.PReLU(),
nn.Conv2d(self.channel_rate,3,3,1,1),
nn.Tanh()
)
def forward(self,x):
x = self.head(x)
x1 = x
self.nChannels = x.size(1)
self.dilation_factor = 1
d1 = self.dense_block(x)
x = torch.cat((x,d1),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d2 = self.dense_block(x)
x = torch.cat((x,d2),1)
self.dilation_factor = 2
self.nChannels = x.size(1)
d4 = self.dense_block(x)
x = torch.cat((x,d4),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d5 = self.dense_block(x)
x = torch.cat((x,d5),1)
self.dilation_factor = 3
self.nChannels = x.size(1)
d6 = self.dense_block(x)
x = torch.cat((x,d6),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d7 = self.dense_block(x)
x = torch.cat((x,d7),1)
self.dilation_factor = 2
self.nChannels = x.size(1)
d8 = self.dense_block(x)
x = torch.cat((x,d8),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d9 = self.dense_block(x)
x = torch.cat((x,d9),1)
self.dilation_factor = 1
self.nChannels = x.size(1)
d10 = self.dense_block(x)
x = self.tail(x)
x = torch.cat((x,x1),1)
self.nChannels = x.size(1)
x = self.last_layer(x)
return x
|
st46047
|
You can change the dilation by directly accessing the attribute:
conv = nn.Conv2d(1, 1, 3, dilation=1)
x = torch.randn(1, 1, 24, 24)
out = conv(x)
print(out.shape)
> torch.Size([1, 1, 22, 22])
conv.dilation = (2, 2)
out = conv(x)
print(out.shape)
> torch.Size([1, 1, 20, 20])
However, changing the number of output channels would change the weight parameter, since you would be adding new kernels or would remove some, so how should this be performed?
|
st46048
|
That I didn't know, thanks. But the problem remains: I have made the blocks and want to access specific layers (Conv2d and BatchNorm), whose dilation and number of features are changing. Can I access them by making a new nn.Module subclass for each block (head, dense, tail), like multiple inheritance? But it would be the same problem again, since the nn.Module would be created again and again and initialized on the CPU automatically.
Can you suggest any method?
|
st46049
|
If your use case dictates changing the filters frequently (e.g. by adding more filters or removing some), I would use the functional API via F.conv2d and define the weight and bias tensors manually.
This would give you more flexibility than trying to manipulate the parameters inside the nn.Conv2d module.
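A minimal sketch of that idea (the shapes and init values are placeholders): register the weight and bias once as nn.Parameters in __init__, so they show up in model.parameters(), and pass whatever dilation you need to F.conv2d in the forward pass:
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlexibleConv(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # weight layout expected by F.conv2d: (out_channels, in_channels, kH, kW)
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_channels))

    def forward(self, x, dilation=1):
        # padding = dilation keeps the spatial size unchanged for a 3x3 kernel
        return F.conv2d(x, self.weight, self.bias,
                        padding=dilation, dilation=dilation)

conv = FlexibleConv(16, 32)
x = torch.randn(1, 16, 24, 24)
print(conv(x, dilation=1).shape)      # torch.Size([1, 32, 24, 24])
print(conv(x, dilation=3).shape)      # torch.Size([1, 32, 24, 24])
print(len(list(conv.parameters())))   # 2 -> the optimizer will see these parameters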
|
st46050
|
I have made a few changes, but now I am getting a CUDA memory allocation error
class netG(nn.Module):
def __init__(self,nChannels=256,channel_rate=64,drop_rate=0.0,dilation=1):
super(netG, self).__init__()
self.channel_rate = channel_rate
self.nChannels = 4*self.channel_rate
self.drop_rate = drop_rate
self.dilation = 1
self.head = nn.Conv2d(3,4*self.channel_rate,3,1,1)
def dense_block(self,x):
bn1 = nn.BatchNorm2d(self.nChannels),
act = nn.LeakyReLU(0.2),
conv1 = nn.Conv2d(self.nChannels,4*self.channel_rate,1,1),
bn2 = nn.BatchNorm2d(4*self.channel_rate),
conv2 = nn.Conv2d(4*self.channel_rate,self.channel_rate,3,1,self.dilation,self.dilation),
bn3 = nn.BatchNorm2d(self.channel_rate),
dropout = nn.Dropout2d(self.drop_rate)
def forward(x):
x = bn1(x)
x = act(x)
x = conv1(x)
x = bn2(x)
x = conv2(x)
x = bn3(x)
x = dropout(x)
return x
def tail(self,x):
bn1 = nn.BatchNorm2d(self.nChannels)
act = nn.LeakyReLU(0.2)
conv1 = nn.Conv2d(self.nChannels,4*self.channel_rate,1,1)
bn2 = nn.BatchNorm2d(4*self.channel_rate)
dropout = nn.Dropout(self.drop_rate)
def forward(x):
x = bn1(x)
x = act(x)
x = conv1(x)
x = bn2(x)
x = dropout(x)
return x
def last_layer(self,x):
conv1 = nn.Conv2d(self.nChannels,self.channel_rate,3,1,1),
act = nn.PReLU(),
conv2 = nn.Conv2d(self.channel_rate,3,3,1,1),
act1 = nn.Tanh()
def forward(x):
x = conv1(x)
x = act(x)
x = conv2(x)
x = act1(x)
return x
def forward(self,x):
x = self.head(x)
x1 = x
self.nChannels = x.size(1)
self.dilation = 1
d1 = self.dense_block(x)
x = torch.cat((x,d1),1)
self.nChannels = x.size(1)
self.dilation = 1
d2 = self.dense_block(x)
x = torch.cat((x,d2),1)
self.nChannels = x.size(1)
self.dilation = 2
d3 = self.dense_block(x)
x = torch.cat((x,d3),1)
self.nChannels = x.size(1)
self.dilation = 1
d4 = self.dense_block(x)
x = torch.cat((x,d4),1)
self.nChannels = x.size(1)
self.dilation = 3
d5 = self.dense_block(x)
x = torch.cat((x,d5),1)
self.nChannels = x.size(1)
self.dilation = 3
d6 = self.dense_block(x)
x = torch.cat((x,d6),1)
self.nChannels = x.size(1)
self.dilation = 1
d7 = self.dense_block(x)
x = torch.cat((x,d7),1)
self.nChannels = x.size(1)
self.dilation = 2
d8 = self.dense_block(x)
x = torch.cat((x,d8),1)
self.nChannels = x.size(1)
self.dilation = 1
d9 = self.dense_block(x)
x = torch.cat((x,d9),1)
self.nChannels = x.size(1)
self.dilation = 1
d10 = self.dense_block(x)
self.nChannels = x.size(1)
x = self.tail(d10)
x = torch.cat((x,x1),1)
self.nChannels = x.size(1)
x = last_layer(x)
return x
88 self.dilation = 3
89 d6 = self.dense_block(x)
---> 90 x = torch.cat((x,d6),1)
91
92 self.nChannels = x.size(1)
RuntimeError: CUDA out of memory. Tried to allocate 12.00 GiB (GPU 0; 15.90 GiB total capacity; 11.82 GiB already allocated; 3.24 GiB free; 11.85 GiB reserved in total by PyTorch)
|
st46051
|
You are running out of memory in the torch.cat operation, so try to reduce e.g. the batch size.
|
st46052
|
Even with batch_size=1 (a single sample) the error is there.
85 self.dilation = 3
86 d6 = self.dense_block(x)
---> 87 x = torch.cat((x,d6),1)
88
89 self.nChannels = x.size(1)
RuntimeError: CUDA out of memory. Tried to allocate 4.00 GiB (GPU 0; 7.43 GiB total capacity; 3.94 GiB already allocated; 2.86 GiB free; 3.94 GiB reserved in total by PyTorch)
|
st46053
|
How much memory does your GPU have? Your model might just be too big. You could then either reduce the spatial input shape (if possible), slim down the model, or use torch.utils.checkpoint to trade compute for memory.
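For reference, a minimal sketch of the checkpointing idea with a made-up block (not tied to the model above): the wrapped block recomputes its intermediate activations during backward instead of storing them:
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(
    nn.Conv2d(64, 64, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
    nn.ReLU(),
)

x = torch.randn(2, 64, 128, 128, requires_grad=True)
out = checkpoint(block, x)  # activations inside `block` are recomputed during backward
out.mean().backward()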
|
st46054
|
I think there is no way to manipulate the modules directly in the way I want, so I should go for the functional-API-based method only. Thanks.
|
st46055
|
Have a look at this:
class netG(nn.Module):
def __init__(self,nChannels=256,channel_rate=64,drop_rate=0.0,dilation=1):
super(netG, self).__init__()
self.channel_rate = channel_rate
self.nChannels = 4*self.channel_rate
self.drop_rate = drop_rate
self.dilation = 1
def forward(self,x):
#head
x = F.conv2d(x, nn.Parameter(torch.Tensor(3, 4*self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False)
x1 = x
#dense_block-1
self.nChannels = x.size(1)
self.dilation = 1
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-2
self.nChannels = x.size(1)
self.dilation = 1
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-3
self.nChannels = x.size(1)
self.dilation = 2
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-4
self.nChannels = x.size(1)
self.dilation = 1
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-5
self.nChannels = x.size(1)
self.dilation = 3
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-6
self.nChannels = x.size(1)
self.dilation = 1
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-7
self.nChannels = x.size(1)
self.dilation = 2
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-8
self.nChannels = x.size(1)
self.dilation = 1
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-9
self.nChannels = x.size(1)
self.dilation = 1
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
x = torch.cat([x,d],1)
#dense_block-10
self.nChannels = x.size(1)
self.dilation = 1
d = F.batch_norm(x,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.leaky_relu(d,0.2)
d = F.conv2d(d, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
d = F.batch_norm(d,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
d = F.conv2d(d, nn.Parameter(torch.Tensor(4*self.channel_rate,self.channel_rate, 3, 3).normal_(0,0.0001)),bias=False,padding=self.dialtion,dilation=self.dilation)
d = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
d = F.dropout2d(d,p=0.0)
#tail
x = F.batch_norm(d,torch.zeros(self.nChannels),torch.ones(self.nChannels),training=True,momentum=0.9)
x = F.conv2d(x, nn.Parameter(torch.Tensor(self.nChannels, 4*self.channel_rate, 1, 1).normal_(0,0.0001)),bias=False)
x = F.batch_norm(x,torch.zeros(4*self.channel_rate),torch.ones(4*self.channel_rate),training=True,momentum=0.9)
x = F.dropout2d(x,p=0.0)
x = torch.cat([x,x1],1)
#last_layer
x = F.conv2d(x,nn.Parameter(torch.Tensor(self.nChannels, self.channel_rate,3,3).normal_(0,0.0001)),bias=False)
x = F.prelu(x,nn.Parameter(torch.Tensor(self.channel_rate)))
x = F.conv2d(x,nn.Parameter(torch.Tensor(self.channel_rate,3,3,3).normal_(0,0.0001)),bias=False)
x = F.tanh(x)
return x
Let me know what the problem is: how should I pass the parameter list to the optimizer? I am getting this error:
ValueError: optimizer got an empty parameter list
|
st46056
|
I have successfully developed the model using the functional API. Thanks for the help.
@ptrblck
|
st46057
|
class ConvNet (nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 6)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
print('x_shape:',x.shape)
return x
n_total_steps = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i+1) % 2000 == 0:
print (f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
print('Finished Training')
PATH = './cnn.pth'
torch.save(model.state_dict(), PATH)
x_shape: torch.Size([14880, 6])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-10-0f959bb8ddbd> in <module>
9 # Forward pass
10 outputs = model(images)
---> 11 loss = criterion(outputs, labels)
12
13 # Backward and optimize
~\AppData\Roaming\Python\Python38\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\AppData\Roaming\Python\Python38\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
959
960 def forward(self, input: Tensor, target: Tensor) -> Tensor:
--> 961 return F.cross_entropy(input, target, weight=self.weight,
962 ignore_index=self.ignore_index, reduction=self.reduction)
963
~\AppData\Roaming\Python\Python38\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
2466 if size_average is not None or reduce is not None:
2467 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
2469
2470
~\AppData\Roaming\Python\Python38\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2259
2260 if input.size(0) != target.size(0):
-> 2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).'
2262 .format(input.size(0), target.size(0)))
2263 if dim == 2:
ValueError: Expected input batch_size (14880) to match target batch_size (32).
I just started studying PyTorch.
|
st46058
|
Change the flattening from x = x.view(-1, 16 * 5 * 5) to x = x.view(x.size(0), -1) to keep the batch size equal in case you have a miscalculation in the feature size.
If you are running into a shape mismatch error in the next linear layer, print the shape of x after the view operation and make sure the linear layer has its in_features set to the right value.
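A quick way to find the correct in_features is to run the conv/pool part once and print the flattened shape. A small sketch (the 32x32 input is just the resolution that the 16 * 5 * 5 value corresponds to, so replace it with your real input size):
import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(3, 6, 5)
conv2 = nn.Conv2d(6, 16, 5)
pool = nn.MaxPool2d(2, 2)

x = torch.randn(1, 3, 32, 32)   # replace with your real input resolution
x = pool(F.relu(conv1(x)))
x = pool(F.relu(conv2(x)))
x = x.view(x.size(0), -1)       # keep the batch dimension intact
print(x.shape)                  # torch.Size([1, 400]) -> fc1 needs in_features=400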
|
st46059
|
Hello,
I am working on a time series dataset using an LSTM. Each sequence has the dimension [S_i x 6], i.e. the sequences have different lengths. I first created a network (network1) and padded each sequence in the forward function so they have the same length. But unfortunately, the network could not really learn the structures in the data. So I decided not to pad the sequences and rewrote the network (network2) so that the forward pass contains a for-loop over each sequence in a batch, where, as mentioned before, the sequences have different lengths. And lo and behold, the network converges much better! Now my question is:
Questions:
What is really the effect of padding on the network?
Why does padding the sequences end in a worse convergence result?
Network 1: With padding
class DeepIO(nn.Module):
def __init__(self):
super(DeepIO, self).__init__()
self.rnn = nn.LSTM(input_size=6, hidden_size=512,
num_layers=2, bidirectional=True)
self.drop_out = nn.Dropout(0.25)
self.fc1 = nn.Linear(512, 256)
self.bn1 = nn.BatchNorm1d(256)
self.fc_out = nn.Linear(256, 7)
def forward(self, x):
"""
args:
x: a list of inputs of dimension [BxTx6]
"""
lengths = [x_.size(0) for x_ in x] # get the length of each sequence in the batch
x_padded = nn.utils.rnn.pad_sequence(x, batch_first=True) # padd all sequences
b, s, n = x_padded.shape
# pack padded sequece
x_padded = nn.utils.rnn.pack_padded_sequence(x_padded, lengths=lengths, batch_first=True, enforce_sorted=False)
# calc the feature vector from the latent space
out, hidden = self.rnn(x_padded)
# unpack the featrue vector
out, lens_unpacked = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
out = out.view(b, s, self.num_dir, self.hidden_size[0])
# many-to-one rnn, get the last result
y = out[:, -1, 0]
y = F.relu(self.fc1(y), inplace=True)
y = self.bn1(y)
y = self.drop_out(y)
y = self.out(y)
return y
Network 2: Without padding
class DeepIO(nn.Module):
def __init__(self):
super(DeepIO, self).__init__()
self.rnn = nn.LSTM(input_size=6, hidden_size=512,
num_layers=2, bidirectional=True)
self.drop_out = nn.Dropout(0.25)
self.fc1 = nn.Linear(512, 256)
self.bn1 = nn.BatchNorm1d(256)
self.fc_out = nn.Linear(256, 7)
def forward(self, x):
"""
args:
x: a list of inputs of dimension [BxTx6]
"""
outputs = []
# iterate in the batch through all sequences
for xx in x:
s, n = xx.shape
out, hiden = self.rnn(xx.unsqueeze(1))
out = out.view(s, 1, 2, 512)
out = out[-1, :, 0]
outputs.append(out.squeeze())
outputs = torch.stack(outputs)
y = F.relu(self.fc1(outputs), inplace=True)
y = self.bn1(y)
y = self.drop_out(y)
y = self.out(y)
return y
Thanks
Arash
|
st46060
|
Hi,
Sorry for not answering your question but how did you manage to design your neural network with variable sequence length?
I am working on audio data and I am working by padding them.
Thanks.
BR,
Shweta.
|
st46061
|
@shwe87 By just iterating through each sequence in the batch; see the for-loop in the forward function. I think it is possible because I do not pass the hidden state between sequences, i.e. no state sharing (stateless).
|
st46062
|
You could consider generating batches with sequences of the same length. I use it all the time for sequence classification but also for seq2seq models. You may want to have a look here 248, here 90 and here 67.
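A minimal sketch of that bucketing idea (just one possible way, assuming a list of [S_i x 6] tensors as the dataset): sort the indices by sequence length and chunk them into batches, so each batch contains (nearly) equal-length sequences and needs little to no padding:
import torch
from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import DataLoader

def length_bucketing_batches(lengths, batch_size):
    # indices sorted by sequence length, then chunked into batches
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    return [order[i:i + batch_size] for i in range(0, len(order), batch_size)]

# toy data: 10 sequences with random lengths and 6 features each
seqs = [torch.randn(torch.randint(5, 20, (1,)).item(), 6) for _ in range(10)]
batches = length_bucketing_batches([s.size(0) for s in seqs], batch_size=4)

loader = DataLoader(seqs, batch_sampler=batches,
                    collate_fn=lambda batch: pad_sequence(batch, batch_first=True))
for batch in loader:
    print(batch.shape)  # very little padding inside each batch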
|
st46063
|
@vdw thanks for the hint and the links. Yes, reordering sequences so they have the same length in each batch is a nice idea. But I am actually wondering what the effect of padding, packing and unpacking is on the performance of the network, i.e. not the computational performance but the loss/convergence. Why does padding result in worse performance? By the way, the project I am talking about can be found here (deeplio) 25. I would be thankful for any link or paper on this topic!
|
st46064
|
Hello,
From my understanding, the way you decode the last time step in your first network, y = out[:, -1, 0], is incorrect. The output of pad_packed_sequence contains zero padding so that all sequences in the batch have the same shape, so for shorter sequences position -1 is padding. Therefore, you have to rely on lens_unpacked to select the correct last time step to decode.
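For illustration, selecting the last valid (non-padded) time step with lens_unpacked could look roughly like this (a sketch with made-up shapes; for a bidirectional LSTM this picks the forward direction only):
import torch

batch_size, max_len, num_dir, hidden = 3, 10, 2, 512
out = torch.randn(batch_size, max_len, num_dir * hidden)  # from pad_packed_sequence
lens_unpacked = torch.tensor([10, 7, 4])                   # true sequence lengths

out = out.view(batch_size, max_len, num_dir, hidden)
idx = lens_unpacked - 1                                    # last valid index per sequence
last_forward = out[torch.arange(batch_size), idx, 0]       # [batch_size, hidden]
print(last_forward.shape)  # torch.Size([3, 512])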
|
st46065
|
I am facing a very weird OOM error with my current training. The OOM error happens systematically during the forward pass of the last (or second last, not sure) batch of an epoch. It happens independent of training size. It happens before validation.
I am logging the GPU memory consumption via nvidia-smi during training. During the training epoch the memory consumption stays constant, so I doubt it’s a typical memory leak (caused e.g. by a missing .detach() call). However, when running the last batch, the memory consumption suddenly starts to increase in the forward pass. This can be seen by printing the GPU memory consumption during different steps in the forward pass.
Is there something special happening at epoch end, when the queue size of the dataloader approaches zero? Did anybody else face this phenomenon?
P.s.: While I said the error happens systematically, the training did work at one time for a couple of epochs, so there does seem to be some randomness to it. However, it has raised the OOM error at epoch end approximately a dozen times now.
P.P.s.: Training pipeline is a modified monodepth2 and I’m getting the error at this point: https://github.com/nianticlabs/monodepth2/blob/master/trainer.py#L285 2 I am using only 1 GPU to train.
|
st46066
|
P.P.P.s.: It only occurs for num_workers>=2. It has also now happened a few times in the middle of the first epoch, not at the end…
|
st46067
|
Are you using static input shapes or are they dynamically changed?
gebbissimo:
Only occurs for num_workers>=2.
Are you pushing the data to the GPU inside the Dataset?
|