st103900 | After hours of debugging and attempting every possible solution out there, I was also calling DataParallel twice.
Thanks |
st103901 | Using PyTorch 0.1.12 and trying to multiply two matrices in the following sizes: [1,49,1] and [1,49,256], but when I do I get the error:
RuntimeError: inconsistent tensor size
I tried using .expand_as for both matrices but it didn’t work.
(I know that it works on newer versions) |
st103902 | Solved by ptrblck in post #2
I’ve tested it in my 0.1.12 environment and it should work like you tried it:
a = torch.randn(1, 49, 1)
b = torch.randn(1, 49, 256)
c = a.expand_as(b) * b |
st103903 | I’ve tested it in my 0.1.12 environment and it should work like you tried it:
a = torch.randn(1, 49, 1)
b = torch.randn(1, 49, 256)
c = a.expand_as(b) * b |
st103904 | Hi, I recently updated my PyTorch installation to the latest version, which is 0.4. I installed the newer version like this:
me@shishosama:/media/me/tmpstore/SimpNet_PyTorch$ pip install http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl
Collecting torch==0.4.0 from http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl
Downloading http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl (566.4MB)
100% |████████████████████████████████| 566.4MB 943kB/s
Installing collected packages: torch
Found existing installation: torch 0.3.1
Uninstalling torch-0.3.1:
Successfully uninstalled torch-0.3.1
Successfully installed torch-0.4.0
You are using pip version 9.0.1, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
me@shishosama:/media/me/tmpstore/SimpNet_PyTorch$ pip install torchvision
Requirement already satisfied: torchvision in /home/me/anaconda3/lib/python3.6/site-packages
Requirement already satisfied: six in /home/me/anaconda3/lib/python3.6/site-packages (from torchvision)
Requirement already satisfied: numpy in /home/me/anaconda3/lib/python3.6/site-packages (from torchvision)
Requirement already satisfied: torch in /home/me/anaconda3/lib/python3.6/site-packages (from torchvision)
Requirement already satisfied: pillow>=4.1.1 in /home/me/anaconda3/lib/python3.6/site-packages (from torchvision)
You are using pip version 9.0.1, however version 10.0.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
This is my training script (gist link) and, as you can see, it’s pretty simple and straightforward.
When the script reaches the evaluation part, it throws an error.
Here is the full log:
me@shisho:/media/me/tmpstore/SimpNet_PyTorch$ bash training_sequence.sh
=> creating model 'simple_imagenet_3p'
=> Model : simple_imagenet_3p(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=[3, 3], stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(2): ReLU(inplace)
(3): Conv2d(64, 128, kernel_size=[3, 3], stride=(2, 2), padding=(1, 1))
(4): BatchNorm2d(128, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(5): ReLU(inplace)
(6): Conv2d(128, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(7): BatchNorm2d(128, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(8): ReLU(inplace)
(9): Conv2d(128, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(10): BatchNorm2d(128, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(11): ReLU(inplace)
(12): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=(1, 1), ceil_mode=False)
(13): Conv2d(128, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(14): BatchNorm2d(128, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(15): ReLU(inplace)
(16): Conv2d(128, 128, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(17): BatchNorm2d(128, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(18): ReLU(inplace)
(19): Conv2d(128, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(20): BatchNorm2d(256, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(21): ReLU(inplace)
(22): Conv2d(256, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(23): BatchNorm2d(256, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(24): ReLU(inplace)
(25): Conv2d(256, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(26): BatchNorm2d(256, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(27): ReLU(inplace)
(28): Conv2d(256, 512, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(29): BatchNorm2d(512, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(30): ReLU(inplace)
(31): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=(1, 1), ceil_mode=False)
(32): Conv2d(512, 2048, kernel_size=[1, 1], stride=(1, 1))
(33): BatchNorm2d(2048, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(34): ReLU(inplace)
(35): Conv2d(2048, 256, kernel_size=[1, 1], stride=(1, 1))
(36): BatchNorm2d(256, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(37): ReLU(inplace)
(38): MaxPool2d(kernel_size=(2, 2), stride=(2, 2), padding=0, dilation=(1, 1), ceil_mode=False)
(39): Conv2d(256, 256, kernel_size=[3, 3], stride=(1, 1), padding=(1, 1))
(40): BatchNorm2d(256, eps=1e-05, momentum=0.05, affine=True, track_running_stats=True)
(41): ReLU(inplace)
)
(classifier): Linear(in_features=256, out_features=1000, bias=True)
)
=> parameter : Namespace(arch='simple_imagenet_3p', batch_size=128, data='/media/me/SSD/ImageNet_DataSet', epochs=150, evaluate=False, lr=0.1, momentum=0.9, prefix='2018-06-30-6885', print_freq=200, resume='./snapshots/imagenet/simplenets/5mil_3p/checkpoint.simple_imagenet_3p.2018-06-27-1781_2018-06-27_13-15-13.pth.tar', save_dir='./snapshots/imagenet/simplenets/5mil_3p/', start_epoch=86, train_dir_name='training_set_t12/', val_dir_name='imagenet_val/', weight_decay=1e-05, workers=12)
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 112, 112] 1,792
BatchNorm2d-2 [-1, 64, 112, 112] 128
ReLU-3 [-1, 64, 112, 112] 0
Conv2d-4 [-1, 128, 56, 56] 73,856
BatchNorm2d-5 [-1, 128, 56, 56] 256
ReLU-6 [-1, 128, 56, 56] 0
Conv2d-7 [-1, 128, 56, 56] 147,584
BatchNorm2d-8 [-1, 128, 56, 56] 256
ReLU-9 [-1, 128, 56, 56] 0
Conv2d-10 [-1, 128, 56, 56] 147,584
BatchNorm2d-11 [-1, 128, 56, 56] 256
ReLU-12 [-1, 128, 56, 56] 0
MaxPool2d-13 [-1, 128, 28, 28] 0
Conv2d-14 [-1, 128, 28, 28] 147,584
BatchNorm2d-15 [-1, 128, 28, 28] 256
ReLU-16 [-1, 128, 28, 28] 0
Conv2d-17 [-1, 128, 28, 28] 147,584
BatchNorm2d-18 [-1, 128, 28, 28] 256
ReLU-19 [-1, 128, 28, 28] 0
Conv2d-20 [-1, 256, 28, 28] 295,168
BatchNorm2d-21 [-1, 256, 28, 28] 512
ReLU-22 [-1, 256, 28, 28] 0
Conv2d-23 [-1, 256, 28, 28] 590,080
BatchNorm2d-24 [-1, 256, 28, 28] 512
ReLU-25 [-1, 256, 28, 28] 0
Conv2d-26 [-1, 256, 28, 28] 590,080
BatchNorm2d-27 [-1, 256, 28, 28] 512
ReLU-28 [-1, 256, 28, 28] 0
Conv2d-29 [-1, 512, 28, 28] 1,180,160
BatchNorm2d-30 [-1, 512, 28, 28] 1,024
ReLU-31 [-1, 512, 28, 28] 0
MaxPool2d-32 [-1, 512, 14, 14] 0
Conv2d-33 [-1, 2048, 14, 14] 1,050,624
BatchNorm2d-34 [-1, 2048, 14, 14] 4,096
ReLU-35 [-1, 2048, 14, 14] 0
Conv2d-36 [-1, 256, 14, 14] 524,544
BatchNorm2d-37 [-1, 256, 14, 14] 512
ReLU-38 [-1, 256, 14, 14] 0
MaxPool2d-39 [-1, 256, 7, 7] 0
Conv2d-40 [-1, 256, 7, 7] 590,080
BatchNorm2d-41 [-1, 256, 7, 7] 512
ReLU-42 [-1, 256, 7, 7] 0
Linear-43 [-1, 1000] 257,000
simplenetv1_imagenet_3p-44 [-1, 1000] 0
================================================================
Total params: 5,752,808
Trainable params: 5,752,808
Non-trainable params: 0
----------------------------------------------------------------
None
FLOPs: 3830.96M, Params: 5.75M
{'milestones': [30, 60, 90, 130, 150], 'gamma': 0.1, 'base_lrs': [0.1], 'last_epoch': -1}
{'milestones': [30, 60, 90, 130, 150], 'gamma': 0.1, 'base_lrs': [0.1], 'last_epoch': 86}
=> loading checkpoint './snapshots/imagenet/simplenets/5mil_3p/checkpoint.simple_imagenet_3p.2018-06-27-1781_2018-06-27_13-15-13.pth.tar'
=> loaded checkpoint './snapshots/imagenet/simplenets/5mil_3p/checkpoint.simple_imagenet_3p.2018-06-27-1781_2018-06-27_13-15-13.pth.tar' (epoch 86)
/home/me/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py:397: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
"please use transforms.RandomResizedCrop instead.")
/home/me/anaconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py:156: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
"please use transforms.Resize instead.")
==>>[2018-06-30 13:31:35] [Epoch=086/150] [Need: 00:00:00] [learning_rate=0.0010000000000000002] [Best : Accuracy(T1/T5)=65.74/86.39, Error=34.26/13.61]
imagenet_train.py:317: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
losses.update(loss.data[0], input.size(0))
Epoch: [86][0/10010] Time 5.993 (5.993) Data 2.479 (2.479) Loss 1.4199 (1.4199) Prec@1 64.062 (64.062) Prec@5 85.938 (85.938)
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 2555904 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
Epoch: [86][200/10010] Time 0.393 (0.421) Data 0.000 (0.013) Loss 1.5897 (1.6016) Prec@1 64.844 (63.040) Prec@5 78.906 (83.190)
Epoch: [86][400/10010] Time 0.399 (0.409) Data 0.000 (0.008) Loss 1.3367 (1.5933) Prec@1 67.969 (63.310) Prec@5 85.156 (83.241)
Epoch: [86][600/10010] Time 0.398 (0.405) Data 0.000 (0.007) Loss 1.4794 (1.5994) Prec@1 66.406 (63.111) Prec@5 84.375 (83.191)
Epoch: [86][800/10010] Time 0.396 (0.403) Data 0.000 (0.006) Loss 1.4389 (1.5995) Prec@1 71.094 (63.136) Prec@5 84.375 (83.192)
Epoch: [86][1000/10010] Time 0.401 (0.403) Data 0.000 (0.005) Loss 1.4466 (1.5962) Prec@1 68.750 (63.204) Prec@5 86.719 (83.206)
Epoch: [86][1200/10010] Time 0.395 (0.402) Data 0.000 (0.005) Loss 1.0079 (1.5964) Prec@1 77.344 (63.150) Prec@5 91.406 (83.184)
Epoch: [86][1400/10010] Time 0.398 (0.401) Data 0.000 (0.005) Loss 1.6049 (1.5974) Prec@1 62.500 (63.175) Prec@5 78.906 (83.177)
Epoch: [86][1600/10010] Time 0.397 (0.401) Data 0.000 (0.005) Loss 1.3969 (1.5955) Prec@1 66.406 (63.194) Prec@5 85.938 (83.189)
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 2555904 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
Epoch: [86][1800/10010] Time 0.397 (0.400) Data 0.000 (0.005) Loss 1.7823 (1.5972) Prec@1 57.812 (63.106) Prec@5 82.812 (83.157)
Epoch: [86][2000/10010] Time 0.399 (0.400) Data 0.000 (0.004) Loss 1.2100 (1.5991) Prec@1 69.531 (63.081) Prec@5 89.844 (83.143)
Epoch: [86][2200/10010] Time 0.399 (0.400) Data 0.000 (0.004) Loss 1.4692 (1.5992) Prec@1 71.875 (63.116) Prec@5 85.156 (83.132)
Epoch: [86][2400/10010] Time 0.403 (0.400) Data 0.000 (0.004) Loss 1.9854 (1.5993) Prec@1 63.281 (63.113) Prec@5 76.562 (83.130)
Epoch: [86][2600/10010] Time 0.396 (0.400) Data 0.000 (0.004) Loss 1.5382 (1.5992) Prec@1 61.719 (63.114) Prec@5 82.031 (83.137)
Epoch: [86][2800/10010] Time 0.398 (0.400) Data 0.000 (0.004) Loss 1.8942 (1.5994) Prec@1 57.812 (63.111) Prec@5 82.812 (83.150)
Epoch: [86][3000/10010] Time 0.398 (0.399) Data 0.000 (0.004) Loss 1.4567 (1.6003) Prec@1 60.938 (63.071) Prec@5 89.844 (83.136)
Epoch: [86][3200/10010] Time 0.396 (0.399) Data 0.000 (0.004) Loss 1.4535 (1.6001) Prec@1 62.500 (63.056) Prec@5 84.375 (83.123)
Epoch: [86][3400/10010] Time 0.396 (0.399) Data 0.000 (0.004) Loss 1.5194 (1.5993) Prec@1 65.625 (63.083) Prec@5 86.719 (83.141)
Epoch: [86][3600/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.4627 (1.5993) Prec@1 65.625 (63.069) Prec@5 85.938 (83.140)
Epoch: [86][3800/10010] Time 0.404 (0.399) Data 0.000 (0.004) Loss 1.5466 (1.5986) Prec@1 65.625 (63.077) Prec@5 82.031 (83.148)
Epoch: [86][4000/10010] Time 0.397 (0.399) Data 0.000 (0.004) Loss 1.2347 (1.6004) Prec@1 71.875 (63.050) Prec@5 89.062 (83.129)
Epoch: [86][4200/10010] Time 0.396 (0.399) Data 0.000 (0.004) Loss 1.7307 (1.6003) Prec@1 62.500 (63.045) Prec@5 80.469 (83.129)
Epoch: [86][4400/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.5589 (1.6006) Prec@1 64.844 (63.049) Prec@5 82.812 (83.133)
Epoch: [86][4600/10010] Time 0.400 (0.399) Data 0.000 (0.004) Loss 1.5220 (1.6007) Prec@1 65.625 (63.036) Prec@5 82.031 (83.125)
Epoch: [86][4800/10010] Time 0.398 (0.399) Data 0.000 (0.004) Loss 1.4993 (1.6002) Prec@1 60.938 (63.055) Prec@5 86.719 (83.121)
Epoch: [86][5000/10010] Time 0.396 (0.399) Data 0.000 (0.004) Loss 1.5266 (1.6015) Prec@1 68.750 (63.030) Prec@5 87.500 (83.105)
Epoch: [86][5200/10010] Time 0.398 (0.399) Data 0.000 (0.004) Loss 1.4016 (1.6017) Prec@1 67.188 (63.019) Prec@5 84.375 (83.107)
Epoch: [86][5400/10010] Time 0.396 (0.399) Data 0.000 (0.004) Loss 1.7396 (1.6020) Prec@1 60.938 (63.015) Prec@5 82.031 (83.105)
Epoch: [86][5600/10010] Time 0.400 (0.399) Data 0.000 (0.004) Loss 1.3337 (1.6012) Prec@1 68.750 (63.033) Prec@5 86.719 (83.124)
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 2555904 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
Epoch: [86][5800/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.8474 (1.6011) Prec@1 61.719 (63.029) Prec@5 81.250 (83.120)
Epoch: [86][6000/10010] Time 0.397 (0.399) Data 0.000 (0.004) Loss 1.6060 (1.6006) Prec@1 60.156 (63.038) Prec@5 85.156 (83.126)
Epoch: [86][6200/10010] Time 0.397 (0.399) Data 0.000 (0.004) Loss 1.5556 (1.6007) Prec@1 66.406 (63.030) Prec@5 85.156 (83.120)
Epoch: [86][6400/10010] Time 0.398 (0.399) Data 0.000 (0.004) Loss 2.0969 (1.6002) Prec@1 58.594 (63.036) Prec@5 75.781 (83.131)
Epoch: [86][6600/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.5191 (1.6003) Prec@1 69.531 (63.045) Prec@5 85.938 (83.137)
Epoch: [86][6800/10010] Time 0.400 (0.399) Data 0.001 (0.004) Loss 1.3767 (1.6005) Prec@1 64.844 (63.031) Prec@5 84.375 (83.126)
Epoch: [86][7000/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.3647 (1.6005) Prec@1 63.281 (63.045) Prec@5 85.156 (83.124)
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 2555904 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 2555904 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
Epoch: [86][7200/10010] Time 0.400 (0.399) Data 0.000 (0.004) Loss 1.7624 (1.6007) Prec@1 60.938 (63.038) Prec@5 79.688 (83.120)
Epoch: [86][7400/10010] Time 0.398 (0.399) Data 0.000 (0.004) Loss 1.4949 (1.6004) Prec@1 65.625 (63.049) Prec@5 83.594 (83.125)
Epoch: [86][7600/10010] Time 0.396 (0.399) Data 0.000 (0.004) Loss 1.3772 (1.6004) Prec@1 67.188 (63.047) Prec@5 85.156 (83.131)
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 19660800 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 18481152 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 37093376 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 39976960 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 34865152 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:742: UserWarning: Corrupt EXIF data. Expecting to read 12 bytes but only got 10.
warnings.warn(str(msg))
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:725: UserWarning: Possibly corrupt EXIF data. Expecting to read 1835008 bytes but only got 0. Skipping tag 0
" Skipping tag %s" % (size, len(data), tag))
Epoch: [86][7800/10010] Time 0.398 (0.399) Data 0.000 (0.004) Loss 1.3626 (1.6007) Prec@1 67.188 (63.041) Prec@5 87.500 (83.124)
Epoch: [86][8000/10010] Time 0.397 (0.399) Data 0.001 (0.004) Loss 1.9515 (1.6005) Prec@1 59.375 (63.044) Prec@5 78.125 (83.125)
Epoch: [86][8200/10010] Time 0.397 (0.399) Data 0.000 (0.004) Loss 1.5302 (1.6005) Prec@1 65.625 (63.046) Prec@5 80.469 (83.126)
Epoch: [86][8400/10010] Time 0.402 (0.399) Data 0.000 (0.004) Loss 1.5629 (1.6012) Prec@1 61.719 (63.028) Prec@5 84.375 (83.113)
Epoch: [86][8600/10010] Time 0.398 (0.399) Data 0.000 (0.004) Loss 1.4765 (1.6010) Prec@1 67.188 (63.023) Prec@5 87.500 (83.117)
/home/me/anaconda3/lib/python3.6/site-packages/PIL/TiffImagePlugin.py:742: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0.
warnings.warn(str(msg))
Epoch: [86][8800/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.7422 (1.6007) Prec@1 60.938 (63.023) Prec@5 79.688 (83.122)
Epoch: [86][9000/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.7085 (1.6009) Prec@1 63.281 (63.021) Prec@5 82.031 (83.119)
Epoch: [86][9200/10010] Time 0.394 (0.399) Data 0.000 (0.004) Loss 1.7975 (1.6009) Prec@1 61.719 (63.017) Prec@5 78.125 (83.119)
Epoch: [86][9400/10010] Time 0.396 (0.399) Data 0.000 (0.004) Loss 1.8066 (1.6009) Prec@1 57.812 (63.016) Prec@5 85.156 (83.120)
Epoch: [86][9600/10010] Time 0.399 (0.399) Data 0.000 (0.004) Loss 1.8106 (1.6010) Prec@1 58.594 (63.023) Prec@5 79.688 (83.121)
Epoch: [86][9800/10010] Time 0.397 (0.399) Data 0.000 (0.004) Loss 1.2793 (1.6008) Prec@1 64.844 (63.027) Prec@5 89.844 (83.122)
Epoch: [86][10000/10010] Time 0.400 (0.399) Data 0.000 (0.004) Loss 1.5781 (1.6010) Prec@1 63.281 (63.019) Prec@5 85.156 (83.124)
imagenet_train.py:354: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
input_var = torch.autograd.Variable(input, volatile=True)
imagenet_train.py:355: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
target_var = torch.autograd.Variable(target, volatile=True)
imagenet_train.py:363: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
losses.update(loss.data[0], input.size(0))
Test: [0/391] Time 3.970 (3.970) Loss 0.7654 (0.7654) Prec@1 85.938 (85.938) Prec@5 92.188 (92.188)
THCudaCheck FAIL file=/pytorch/aten/src/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
File "imagenet_train.py", line 523, in <module>
main()
File "imagenet_train.py", line 208, in main
prec1,prec5, val_loss = validate(val_loader, model, criterion, log)
File "imagenet_train.py", line 358, in validate
output = model(input_var)
File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 112, in forward
return self.module(*inputs[0], **kwargs[0])
File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/media/me/tmpstore/SimpNet_PyTorch/models/simplenet_v1_p3_imgnet.py", line 53, in forward
out = self.features(x)
File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/me/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 301, in forward
self.padding, self.dilation, self.groups)
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/aten/src/THC/generic/THCStorage.cu:58
More information :
GPU: GTX1080,
RAM: 18G
OS information :
x86_64
Kernel version: 4.13.0-45-generic
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.4 LTS"
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
gcc-version : gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
What is happening here? Does version 0.4 take more memory than the former ones? |
st103905 | Solved by SimonW in post #2
Pay attention to this. volatile now does nothing, and you should wrap your validate loop in with torch.no_grad():.
Your graph in validate won’t be freed until the output and loss variables are overwritten in the next iteration, thus effectively doubling memory usage. |
st103906 | Shisho_Sama:
imagenet_train.py:354: UserWarning: volatile was removed and now has no effect. Use with torch.no_grad(): instead. input_var = torch.autograd.Variable(input, volatile=True)
Pay attention to this. volatile now does nothing, and you should wrap your validate loop in with torch.no_grad():.
Your graph in validate won’t be freed until the output and loss variables are overwritten in the next iteration, thus effectively doubling memory usage. |
st103907 | Thank you very much. Is this OK now?
def validate(val_loader, model, criterion, log):
    batch_time = AverageMeter()
    losses = AverageMeter()
    top1 = AverageMeter()
    top5 = AverageMeter()
    # switch to evaluate mode
    model.eval()
    end = time.time()
    with torch.no_grad():  # for versions >= 0.4
        for i, (input, target) in enumerate(val_loader):
            target = target.cuda(async=True)
            input_var = torch.autograd.Variable(input)  # , volatile=True) for versions <= 0.3.1
            target_var = torch.autograd.Variable(target)  # , volatile=True) for versions <= 0.3.1
            # compute output
            output = model(input_var)
            loss = criterion(output, target_var)
            # measure accuracy and record loss
            prec1, prec5 = accuracy(output.data, target, topk=(1, 5))
            losses.update(loss.data[0], input.size(0))
            top1.update(prec1[0], input.size(0))
            top5.update(prec5[0], input.size(0))
            # measure elapsed time
            batch_time.update(time.time() - end)
            end = time.time()
            if i % args.print_freq == 0:
                print_log('Test: [{0}/{1}]\t'
                          'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
                          'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
                          'Prec@1 {top1.val:.3f} ({top1.avg:.3f})\t'
                          'Prec@5 {top5.val:.3f} ({top5.avg:.3f})'.format(
                              i, len(val_loader), batch_time=batch_time, loss=losses,
                              top1=top1, top5=top5), log)
    print_log(' * Prec@1 {top1.avg:.3f} Prec@5 {top5.avg:.3f} Loss@ {error:.3f}'.format(top1=top1, top5=top5, error=losses.avg), log)
    return top1.avg, top5.avg, losses.avg |
st103908 | Yeah this is fine.
Also with 0.4, you don’t even need the torch.autograd.Variable wrappers anymore. This should be a helpful read to you: https://pytorch.org/2018/04/22/0_4_0-migration-guide.html |
st103909 | I’m trying to resume training and I am using torch.optim.lr_scheduler.MultiStepLR for decreasing the learning rate. I noticed the constructor accepts a last_epoch parameter, so I tried to set it to the last epoch at which my checkpoint was made and then simply resume training from that epoch forward.
When I tried to pass a value for this parameter, I faced the error:
KeyError: “param ‘initial_lr’ is not specified in param_groups[0] when resuming an optimizer”
I have no idea what this means or how to get around it. The documentation is also vague to me and I really can’t understand it. It talks about initial_lr, but there is no parameter named as such. I’m completely lost here!
Any help is greatly appreciated.
By the way, I’m using PyTorch 0.3.1 |
st103910 | Could you provide a small code snippet resulting in this error?
Did the approach from the other thread not work? |
st103911 | That solution works for version 0.4, however, the problem with 0.4 is that I face out of memory error while training, it seems the 0.4 version takes more VRAM compared to 0.3.1 and that’s why I’m forced to stick with 0.3.1. Now I need another way to get the resume to work. If it wasn’t because of that out of memory issue, all was good! Anyway here is a sample snippet of code to show how I did it that causes the error:
criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.SGD(model.parameters(), args.lr,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay,
                            nesterov=True)
# epoch
milestones = [30, 60, 90, 130, 150]
scheduler = lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=36)
# optionally resume from a checkpoint
if args.resume:
    if os.path.isfile(args.resume):
        print_log("=> loading checkpoint '{}'".format(args.resume), log)
        checkpoint = torch.load(args.resume)
        args.start_epoch = checkpoint['epoch']
        best_prec1 = checkpoint['best_prec1']
        if 'best_prec5' in checkpoint:
            best_prec5 = checkpoint['best_prec5']
        else:
            best_prec5 = 0.00
        model.load_state_dict(checkpoint['state_dict'])
        optimizer.load_state_dict(checkpoint['optimizer'])
        model.eval()
        print_log("=> loaded checkpoint '{}' (epoch {})".format(args.resume, checkpoint['epoch']), log)
    else:
        print_log("=> no checkpoint found at '{}'".format(args.resume), log)
cudnn.benchmark = True |
st103912 | I think we should have a look at the memory issue first.
Did you create a separate thread already for it? |
st103913 | No, I haven’t. I am in the middle of training (reverted back to 0.3.1 and resumed the training the old-fashioned way).
When the training is finished I’ll update to version 0.4 and give it a try again |
st103914 | Asked the new question concerning the out of memory issue in 0.4:
Training fails by out of memory error on PyTorch 0.4 but runs fine on 0.3.1
Hi, I recently updated my PyTorch installation to the latest version, which is 0.4. I installed the newer version like this:
me@shishosama:/media/me/tmpstore/SimpNet_PyTorch$ pip install http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl
Collecting torch==0.4.0 from http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl
Downloading http://download.pytorch.org/whl/cu90/torch-0.4.0-cp36-cp36m-linux_x86_64.whl (566.4MB)
100% |█████████████████… |
st103915 | Can’t understand how come I haven’t encountered this question before, but… how to combine two byte tensors used to represent masks / true/false values? Two ways that work are:
res = a * b # method one
res = a & b # method two
However, I dislike the first, since multiplication sounds more expensive than bool ops (I know in practice the difference will be zero, if it’s running on CUDA, by the time the tensors move on-chip and back… but still). I dislike the second, having been bitten in previous code where I assume anything non-zero is True, but & is bitwise and doesn’t really work like this. So I’d rather do something like:
res = a and b # preferred method one, but fails
res = a && b # preferred method two, but fails
What is the most standard way of combining two logical ByteTensors, in practice?
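(For what it’s worth, here is a quick sanity check, just a sketch and not an official recommendation, that the two working methods agree as long as the masks only hold 0/1, which is what comparison ops produce:)
import torch

a = torch.rand(5) > 0.5   # ByteTensor mask
b = torch.rand(5) > 0.5

res_mul = a * b   # elementwise product
res_and = a & b   # bitwise AND -- identical result for 0/1 ByteTensors
print(res_mul.equal(res_and))  # True |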
st103916 | Hello,
I’ve actually been working with the library for a long time. Until now, the applications were very standard and sometimes even already implemented.
In my current case, I would need to change the convolution. Ideally, I would like to:
“Change” the convolved image area. For example, abbreviated, we currently have conv(Tensor[:]).
What I’d like to have is something like: conv(Tensor[1:5, 1:5, :, :]).
I would like to know if there is an already implemented solution? And if not, is there any possibility to implement it myself?
I could implement a convolution myself; my concern is about GPU memory management, which leads me to ask you whether there is any built-in solution.
Thanks in advance.
Regards
Marc |
st103917 | In your example you are applying the convolution for the 4 samples in the current batch and 4 channels.
This should work as you’ve suggested:
batch_size = 10
channels = 10
h, w = 24, 24
x = torch.randn(batch_size, channels, h, w)
conv = nn.Conv2d(4, 1, 3, 1, 1)
output = conv(x[1:5, 1:5]) |
st103918 | Hello ptrblck,
I was actually agreeing with this solution, which seems correct. My actual concern is: can I translate the convolution without using such a tensor reduction?
In my case, to be more precise, I’d like to make 4 convolutions at a time (4 different layers), but each of them should start correspondingly at (0,1), (0,0), (1,0) and (1,1). The idea was to know whether there is a solution for a prior translation of the convolution.
If there is no such solution, I’ll go for a tensor reduction with the square bracket accessor.
Anyhow, it’s just a matter of proper / generic code. If it’s not possible, thanks for your answer.
Regards |
st103919 | How can I use the argmax values to index a tensor?
So, for example, I have two tensors of the same shape x,y and have the argmax = x.min(-1) of one of them. Then I want to get the values at the position in y i.e. y[argmax] ?
How can I do that ? |
st103920 | Your argmax should rather be:
argmax = x.max(0)[1]
since max returns a tuple.
Then you can use it as you did:
y[argmax] |
st103921 | x = torch.rand(3,4)
y = torch.rand(3,4)
argmax = x.max(0)[1]
y[argmax]
Does not work for me. |
st103922 | You probably want to use arange(4) as a second index or use gather or so.
Best regards
Thomas |
st103923 | going with Tom’s idea, and tweaking the earlier code a bit:
import torch
import numpy as np
torch.manual_seed(4)
x = torch.rand(3,4)
y = torch.rand(3,4)
print('y', y)
_, argmax = x.max(-1)
print('argmax', argmax)
y[np.arange(3), argmax] = 3
print('y', y)
Result, as required:
y
0.9263 0.4735 0.5949 0.7956
0.7635 0.2137 0.3066 0.0386
0.5220 0.3207 0.6074 0.5233
[torch.FloatTensor of size 3x4]
argmax
0
2
1
[torch.LongTensor of size 3]
y
3.0000 0.4735 0.5949 0.7956
0.7635 0.2137 3.0000 0.0386
0.5220 3.0000 0.6074 0.5233
[torch.FloatTensor of size 3x4] |
st103924 | Seems this will help you: https://github.com/Zhaoyi-Yan/Shift-Net_pytorch/blob/master/util/MaxCoord.py
This assigns 1 to the channel which has the max value across all channels. |
st103925 | Using torch.gather could work. Yet, it will first generate a tensor with the same size as the original tensor. Does anyone have a better way?
def score_max(x, dim, score):
    _tmp = [1] * len(x.size())
    _tmp[dim] = x.size(dim)
    return torch.gather(x, dim, score.max(dim)[1].unsqueeze(dim).repeat(tuple(_tmp))).select(dim, 0) |
st103926 | Hi everyone,
I am using minibatch to accelerate my rnn network. The size of my minibatch is 50.
Now I get the following code:
hiddens, hidden = self.rnn(vids_embeddings_sorted_packed, hidden)
hidden goes well. It owns all 50 results.
result:
hidden
tensor([[[ 0.7852, -0.7907, 0.0129, …, -0.5274, -0.6168, 0.8388],
[-0.1980, 0.5532, -0.4744, …, 0.3106, -0.3343, 0.1624],
[ 0.2504, 0.1677, -0.5666, …, -0.6386, 0.0573, 0.2041],
…,
[ 0.9051, -0.7725, 0.5131, …, 0.8861, 0.7480, -0.1111],
[ 0.7791, -0.8842, -0.6890, …, -0.2500, -0.7887, 0.4737],
[-0.1205, 0.2851, -0.5888, …, 0.2452, -0.1945, -0.8261]]])
hiddens should contain all the intermediate results, but it only contains those of the first sequence, without the remaining 49.
result:
hiddens
PackedSequence(data=tensor([[ 0.8669, -0.2405, -0.2589, …, -0.2067, 0.3336, 0.6909],
[-0.3318, 0.8954, -0.5930, …, -0.3042, 0.3690, -0.4933],
[ 0.7121, 0.0110, 0.2848, …, 0.5082, 0.3286, -0.2838],
…,
[ 0.1060, -0.9560, 0.0281, …, 0.7583, 0.7072, 0.8017],
[ 0.8482, 0.2802, -0.2152, …, 0.8695, 0.1836, -0.9567],
[ 0.7852, -0.7907, 0.0129, …, -0.5274, -0.6168, 0.8388]]), batch_sizes=tensor([ 293, 293, 111, 46, 21, 12, 7, 4, 3, 3,
3, 2, 2, 1, 1, 1, 1, 1, 1, 1,
1]))
You may find that the last row of hiddens should end in -0.8261, but it only contains the result of the first row of hidden and neglects the results of the remaining 49 rows of hidden.
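(For reference, a minimal sketch of unpacking such a PackedSequence back into a padded tensor, which makes all per-sequence outputs visible; the names follow the code above:)
import torch
from torch.nn.utils.rnn import pad_packed_sequence

# hiddens is the PackedSequence returned by self.rnn above
hiddens_padded, lengths = pad_packed_sequence(hiddens)  # (max_seq_len, batch, hidden_size)
# hiddens_padded[:, b] now holds every intermediate output of sequence b,
# zero-padded beyond lengths[b]; hidden still holds only the final state per sequence |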
st103927 | Hi, everyone, I’m trying to create a wrapper module around an existing module that has parameters and I’m a bit worried that I may be registering the same parameter several times and modifying it multiple times during optim.step().
For instance, I’d like to create an embedding layer that uses the special dropout featured in the AWD-LSTM paper. This involves creating an embedding layer and then wrapping it in a custom module:
embed = nn.Embedding(a, b)

class EmbedDrop(Module):
    def __init__(self, embedlayer, p):
        self.embed = embedlayer
        self.weight = embedlayer.weight
        self.dropout = p
    def forward(self, input):
        if not self.training:
            return F.linear(input, self.weight)
        ...
so the embedding layer’s weight matrix has been ‘mentioned’ three times during different init calls, and twice during the init call of the wrapping module. If I’m making a mistake and registering it twice, but for several reasons need the weight matrix to be an attribute of the wrapping layer, could I get around this by making it a property instead?
class EmbedDrop(Module):
    def __init__(self, embedlayer, p):
        self.embed = embedlayer
        self.dropout = p
    def forward(self, input):
        ...
    @property
    def weight(self): return self.embed.weight
Just to emphasize: the embedding dropout wrapper is just an example for illustration, and this is a general question about avoiding registering the same parameter multiple times.
Does anyone know if registering the weight as a property also registers it as a parameter? Is it possible to register a parameter twice, or is it just an imaginary problem on my part that can’t actually happen? Is there a way of telling the module at the init stage not to bother registering the parameter (because it’s already been registered)?
Generalizing this and taking it slightly further, I’ve often found myself wanting to make all the attributes of the wrapped layer accessible as attributes of the wrapping layer, so I’ve added this to the init of the wrapper module:
class MyWrapper(Module):
    def __init__(self, wrapped_mod):
        self.wrapped = wrapped_mod
        non_hidden_attrs = toolz.keysfilter(lambda x: not x.startswith('_'), wrapped_mod.__dict__)
        self.__dict__.update(non_hidden_attrs)
    def forward(self, input):
        ...
Can anyone tell me if there are any special PyTorch/autograd specific reasons I should avoid doing this?
Thanks a lot for any help! |
st103928 | I know how to define a conv parameter or an fc parameter. Now I want to define a parameter matrix of size W×H that can be learned like a conv parameter. How can I do this?
Thank you for your attention and answer! |
st103929 | Solved by ptrblck in post #2
You can register parameters in your model with nn.Parameter.
Alternatively, if you would like to train your parameter outside of a model, you could just use torch.randn(..., requires_grad=True). You can find an example here. |
st103930 | You can register parameters in your model with nn.Parameter.
Alternatively, if you would like to train your parameter outside of a model, you could just use torch.randn(..., requires_grad=True). You can find an example here. |
st103931 | This paper explains a new activation function that has a trainable parameter and is used for quantization of activations (Eq. 1, 2, and 3).
The quantization is as follows:
y = 0.5(|x| - |x - alpha| + alpha)
y_q = round(y * (2 ^ k - 1) / alpha) * alpha / (2 ^ k - 1),
where alpha is the trainable parameter and k is the number of bits.
The partial derivative of y_q with respect to alpha is mentioned in Eq. 3 of the paper.
What is the easiest method of integrating this activation function in PyTorch?
I was thinking of defining an nn.Module that includes alpha as a Parameter. The problem is that there are two sets of gradients here: one for updating alpha and another for defining gradients with respect to the inputs. I assume the latter should be handled in the backward() function, but I’m not sure how to update alpha.
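(One possible sketch of that idea, which is my own assumption and not the paper authors’ code: keep alpha as an nn.Parameter, write the clipping term so that autograd reproduces the alpha-gradient of Eq. 3, and pass the gradient of the non-differentiable round() straight through with a detach trick:)
import torch
import torch.nn as nn

class QuantAct(nn.Module):
    # hypothetical learnable-clipping quantized activation (a sketch)
    def __init__(self, k=4, alpha_init=10.0):
        super(QuantAct, self).__init__()
        self.k = k
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x):
        # y = 0.5 * (|x| - |x - alpha| + alpha) == clamp(x, 0, alpha), written so that
        # d y / d alpha is 1 where x >= alpha and 0 elsewhere, matching Eq. 3
        y = 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)
        # quantize; round() has zero gradient almost everywhere, so pass it straight through
        scale = (2 ** self.k - 1) / self.alpha
        y_q = torch.round(y * scale) / scale
        return y + (y_q - y).detach() |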
st103932 | Hi there,
Note the quotes in the title: that’s not PyTorch which is to blame but the careless user that I am.
Here’s the guilty code:
total_loss = 0.0  # total loss of the epoch
for ...
    loss = criterion(...)
    loss.backward()
    # ...
    total_loss += loss
For reference, here is what’s happening:
total_loss is initially a Python float variable; because we use the += operator on it, it is transformed into a Tensor. Because loss has requires_grad=True, the new total_loss tensor also does. The subsequent computations involving total_loss are then recorded during the entire epoch, and eventually clutter the GPU memory.
The solution is simply to write
total_loss += loss.item()
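(To see the promotion concretely, here is a toy check; the names are made up for illustration:)
import torch

loss = torch.ones(1, requires_grad=True).sum()
total_loss = 0.0
total_loss += loss               # float + Tensor -> Tensor, via Tensor.__radd__
print(type(total_loss))          # <class 'torch.Tensor'>
print(total_loss.requires_grad)  # True -- the whole graph stays alive
total = 0.0
total += loss.item()             # plain Python float, no graph retained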
The type casting of the total_loss variable may have been harmless before the Variable and Tensor classes were merged, but not anymore.
As there is no means of adding a tensor in place to a float, Python falls back to using the __radd__ method of the Tensor class. I think this should at least warn the user about the dangerous path he’s taking.
What’s your point of view on this matter ? |
st103933 | This is known behavior and is explained in the section Accumulating losses in the Migration Guide.
You would want this kind of behavior for example if you have different loss functions and would like to accumulate them:
total_loss = loss1 + loss2 + loss3
total_loss.backward()
Also you could accumulate the losses of several mini batches and backward them together:
for ...
    loss = criterion(...)
    total_loss += loss
total_loss.backward()
Therefore I’m not sure it would be a good idea to add a warning, since it’s used for the purpose of storing the computation graphs. |
st103934 | Hi,
Thanks for the link, which I should have read when migrating…
I think you misread me, I’m not complaining about the fact that we can aggregate losses when they are all tensors (I suppose that would be the case in your examples). What I’m pointing at is the implicit “type-cast” happening on Python numbers. Particularly I find the combination of these two things very error prone:
float += ScalarTensor is valid and changes the type of the left value
formatting a zero-dimensional tensor only shows the scalar value, without any indication that the variable is indeed a tensor
The second point is easy to fix. As for the first, __radd__ (and its friends) could print a warning which would be easily silenced by an explicit cast using torch.tensor. |
st103935 | Hello,
I have been creating different recurrent models (RNN, LSTM, GRU), which gave me a good OA of 86%. I read in a deep learning forum that it’s a good idea to feed your initial data into a standard feed-forward network and then pass its output directly into your recurrent model. I used a data structure of shape [batch_size, seq_dim, features] for my recurrent model, and now I have to re-configure this data to be a proper input for my FFN model; as far as I know, that input should be [batch_size, features]. My question is: how do I re-configure my initial data to go through the FFN and get an output in the form of my RNN data structure?
I hope you understood my issue.
And thank you. |
st103936 | Typically, the feedforward part treats time-steps as either as unrelated or as a dimension.
Thus you would do something like
seq_len, batch_size, feature_size = input.shape
preprocessed_input = feed_forward_new(input.view(seq_len * batch_size, -1)).view(seq_len * batch_size, -1)
(adapt if your input or output is multidimensional).
An alternative can be to include convolutions over timesteps. Then you need to .permute the batch to the front
input_reordered = input.permute(1, 0, 2).unsqueeze(1) # Batch x channel(=1) x H (= time) x W (=features)
preprocesses_input_raw = my_conv_2d_net(input_reordered) # batch x channel (=features) x time x features
batch, c, time_out, f = preprocesses_input.shape # c and f will give one large feature vector
preprocesses_input = preprocesses_input_raw.permute(2, 0, 1, 3).reshape(time_out, batch, c * f)
now you can pass preprocessed input to an RNN.
Best regards
Thomas |
st103937 | Hello @tom ,
Thank you for your quick reply, well I thought about " spreading " my data (seq_len * batch_size) but it will even complicated things making it [25*100] , I was thinking of maybe creating something like this: a feedforward net for each time step so my initial input will be split into seq_dim * [batch_size , features] pass each of this new tensors through the same FFN , we’ll call this model_in then combines the outputs of the FFNs to create our input for the RNN [,batch_size, seq_len , features (maybe I will change this depending on the output from FFN ) ] , these are my thoughts on the matter , can you give me your feedback ( plausibility , complexity … )
And Thank you
Cheers |
st103938 | You could achieve this behavior using the nn.Linear layer.
Just permute your input, so that your dimensions are [batch_size, seq, in_features] and the linear layer will be applied for all seq using in_features.
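(For illustration, a minimal sketch; the sizes are made up:)
import torch
import torch.nn as nn

batch_size, seq_len, in_features, hidden = 100, 25, 10, 32
ffn = nn.Linear(in_features, hidden)

x = torch.randn(batch_size, seq_len, in_features)
out = ffn(x)                 # the same Linear is applied independently at every time step
print(out.shape)             # torch.Size([100, 25, 32])
rnn = nn.GRU(hidden, 64, batch_first=True)   # ready to feed a batch_first RNN
rnn_out, h_n = rnn(out)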
Have a look at the doc for the shape information. |
st103939 | CUDA_VISIBLE_DEVICES is not really a solution for changing the default device because it relies on setting the environment variable before the script runs. Perhaps the torch.load docs should mention that torch.cuda.set_device is ignored by it and tensors are loaded onto the same device they were saved from.
I had a distributed training setup (4 nodes with 8 GPUs each) and for the life of me could not figure out why I could run DistributedDataParallel with 4 processes (each using 8 GPUs) but not with 32 processes each using torch.cuda.set_device(gpus[0]), until I realized that torch.load (of a previous snapshot) ignores torch.cuda.set_device!
Thanks to the map_location solution from here and here, I now load to CPU first, and it works just fine!
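(Roughly what that looks like, as a sketch; the checkpoint path and the 'state_dict' key are assumptions:)
import torch

# load everything onto the CPU regardless of which GPU the tensors were saved from,
# then move the model to the locally chosen device
checkpoint = torch.load('snapshot.pth.tar', map_location=lambda storage, loc: storage)
model.load_state_dict(checkpoint['state_dict'])
model.to(torch.device('cuda', gpus[0]))

# alternatively, remap devices directly, e.g. tensors saved on cuda:0 -> local cuda:1
checkpoint = torch.load('snapshot.pth.tar', map_location={'cuda:0': 'cuda:1'}) |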
st103940 | Assume that there is a reference two-dimensional array ref and a given vector x. I would like to return the closest vector to x from ref, such that the operation is differentiable.
The solution I currently have, which is not differentiable, is like this:
distances = torch.sqrt(torch.sum((reference - x) ** 2, dim=1)) # I could have used something like nn.PairwiseDistance to calculate distances
_, min_index = torch.min(distances)
return reference[min_index]
This solution is probably not differentiable because it is using the argmin function. Is there a differentiable way of finding the closest vector? |
st103941 | If I understand this NIPS 2017 paper (Multiscale Quantization for Fast Similarity Search) correctly, it has some sort of nearest neighbor search (Eq. 2) and claims to be training the model using SGD. |
st103942 | Hi,
I have a problem with loss.backward(), as requires_grad=False after the computation.
output.shape = [20,1] as well as target and criterion = nn.BCELoss()
Any idea?
Thanks |
st103943 | Could you post your forward code?
Maybe you’ve used .data somewhere or detached your tensor otherwise. |
st103944 | Thank you for the quick response. Here is my code:
class SimpleNet(nn.Module):
    def __init__(self, num_classes=2):
        super(SimpleNet, self).__init__()
        self.conv1 = nn.Conv3d(in_channels=1, out_channels=8, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv3d(in_channels=8, out_channels=16, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU()
        self.pool = nn.MaxPool3d(kernel_size=2)
        self.conv3 = nn.Conv3d(in_channels=16, out_channels=32, kernel_size=3, stride=1, padding=1)
        self.relu3 = nn.ReLU()
        self.conv4 = nn.Conv3d(in_channels=32, out_channels=32, kernel_size=3, stride=1, padding=1)
        self.relu4 = nn.ReLU()
        self.fc = nn.Linear(in_features=16 * 16 * 16 * 32, out_features=num_classes)
        self.softmax = nn.LogSoftmax()

    def forward(self, input):
        output = self.conv1(input)
        output = self.relu1(output)
        output = self.conv2(output)
        output = self.relu2(output)
        output = self.pool(output)
        output = self.conv3(output)
        output = self.relu3(output)
        output = self.conv4(output)
        output = self.relu4(output)
        output = output.view(-1, 16 * 16 * 16 * 32)
        output = self.fc(output)
        output = self.softmax(output)
        _, output = output.max(1)
        return output

net = SimpleNet()
opt = optim.Adam(net.parameters(), lr=0.001, betas=(0.9, 0.999))
criterion = nn.BCELoss()

def train_epoch(model, opt, criterion, batch_size=20):
    model.train()
    losses = []
    for i in range(0, X.size(0), batch_size):
        x_batch = X[i:i + batch_size, :]
        y_batch = Y[i:i + batch_size, :]
        x_batch = Variable(x_batch)
        y_batch = Variable(y_batch)
        opt.zero_grad()
        y_hat = net(x_batch).type(torch.FloatTensor)
        y_hat = torch.unsqueeze(y_hat, 1)
        loss = criterion(y_hat, y_batch)
        loss.backward()
        opt.step()
        losses.append(loss.data.numpy())
    return losses |
st103945 | Hi,
I’m working with AffectNet, which is about 450K images, totalling about 56 GB.
I’ve got a Titan Xp, and I’ve successfully created a custom Dataset loader.
However, my training appears to be very slow because for each mini-batch the input/output tensors are re-allocated from host to GPU.
I’d like to try to upload some of the dataset images to the GPU, since it has about 12 GB of video RAM. The problem is that it is working extremely slowly. I’ve tested with CUDA 9.0 and now with CUDA 9.1.
The loop is something very simple such as:
for idx in range(len(self.labels.rows)):
    if torch.cuda.memory_allocated() < MAX_GPU_MEM:
        pair = self.__getitem__(idx)
        in_tensor = pair[0].cuda(non_blocking=True).half()
        out_tensor = pair[1].cuda(non_blocking=True).half()
        self.data.append([in_tensor, out_tensor])
    else:
        print("GPU nearly maxed out")
        break
print("in GPU RAM: ", len(self.data))
I can’t see whether there is a method to allocate more GPU memory beforehand. I admit that some of the time is spent on pre-processing and transforming those images, but the rate at which the GPU RAM increases is phenomenally slow.
Is there a way to upload them all together in a batch?
EDIT: maybe my issue is the pre-processing after all… I’ll try and profile the code |
st103946 | It is for CPU tensors to be quickly transported to CUDA, which should be exactly what you need.
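(If this refers to pinned, page-locked host memory via pin_memory=True in the DataLoader, which is an assumption on my part since the referenced suggestion isn’t quoted here, a minimal sketch would be:)
from torch.utils.data import DataLoader

# `dataset` stands for the custom Dataset mentioned above; pin_memory keeps the CPU
# batches in page-locked memory so .cuda(non_blocking=True) can overlap the copy
loader = DataLoader(dataset, batch_size=64, num_workers=4, pin_memory=True)

for inputs, targets in loader:
    inputs = inputs.cuda(non_blocking=True)
    targets = targets.cuda(non_blocking=True)
    # ... forward / backward ... |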
st103947 | Thanks Simon, I gave it a try, seems to be working, still somewhat slow even though I multi-threaded the pre-processing, but now I know it’s not the GPU pipeline. |
st103948 | I’m hitting what appears to be a deadlock when trying to make use of multiprocessing with pytorch. The equivalent numpy code works like I expect it to.
I’ve made a simplified version of my code: a pool of 4 workers executing an array-wide broadcast operation 1000 times (so ~250 each worker). The array in question is 100,000 x 3 and the broadcast operation is subtraction of all rows by a single 1 x 3 row array. The large array is a shared/global array, and the row array is different at each iteration.
The code works exactly as expected using numpy, with the pooled workers showing a 4x speedup over the equivalent for loop.
The code in pytorch, however, hits a deadlock (I assume): none of the workers complete the array broadcast operation even once.
The numpy code below prints the following:
Finished for loop over my_subtractor: took 8.1504 seconds.
Finished pool over my_subtractor: took 2.2247 seconds.
The pytorch code, on the other hand, prints this then stalls:
Finished for loop over my_subtractor: took 3.1082 seconds.
BLA
BLA
BLA
BLA
“BLA” print statements are just to show that each worker is stuck in – apparently – a deadlock state. There are exactly 4 of these: one per worker entering – and getting stuck in – an iteration.
If you feel ambitious enough to reproduce, note that it doesn’t work on Windows because it’s not wrapped around if __name__ == '__main__': (I read somewhere that you need this because of the way Windows handles launching processes). Also you will need to create an empty file called my_globals.py.
Here is the numpy code
from time import time
import numpy as np
import my_globals
from multiprocessing import Pool as ThreadPool
# shared memory by virtue of being global
my_globals.minuend = np.random.rand(100000,3)
# array to be iterated over in for loop / pool of workers
subtrahends = np.random.rand(10000,3)
# function called at each iteration (broadcast operation)
def my_subtractor(subtrahend):
    my_globals.minuend - subtrahend
    return 0
# launch for loop
ts = time()
for idx, subtrahend in enumerate(subtrahends):
    my_subtractor(subtrahend)
te = time()
print('Finished for loop over my_subtractor: took %2.4f seconds.' % (te - ts))
# launch equivalent pool of workers
ts = time()
pool = ThreadPool(4)
pool.map(my_subtractor, subtrahends)
pool.close()
pool.join()
te = time()
print('Finished pool over my_subtractor: took %2.4f seconds.' % (te - ts))
Here is the equivalent pytorch code:
from time import time
import torch
import my_globals
from torch.multiprocessing import Pool as ThreadPool
# necessary on my system because it has low limits for number of file descriptors; not recommended for most systems,
# see: https://pytorch.org/docs/stable/multiprocessing.html#file-descriptor-file-descriptor
torch.multiprocessing.set_sharing_strategy('file_system')
# shared memory by virtue of being global
my_globals.minuend = torch.rand(100000,3)
# array to be iterated over in for loop / pool of workers
subtrahends = torch.rand(10000,3)
# function called at each iteration (broadcast operation)
def my_subtractor(subtrahend, verbose=True):
    if verbose:
        print("BLA")  # -- prints for every worker in the pool (so 4 times total)
    my_globals.minuend - subtrahend
    if verbose:
        print("ALB")  # -- doesn't print for any worker
    return 0
# launch for loop
ts = time()
for idx, subtrahend in enumerate(subtrahends):
    my_subtractor(subtrahend, verbose=False)
te = time()
print('Finished for loop over my_subtractor: took %2.4f seconds.' % (te - ts))
# launch equivalent pool of workers
ts = time()
pool = ThreadPool(4)
pool.map(my_subtractor, subtrahends)
pool.close()
pool.join()
te = time()
print('Finished pool over my_subtractor: took %2.4f seconds.' % (te - ts)) |
st103949 | I built Caffe2 with Anaconda following the page.
The server has a single Titan X with cuDNN 7 and CUDA 9 but no NCCL, so I downloaded NCCL 2 from NVIDIA, extracted it to path/to/local/nccl2, and then edited line 42 of ./pytorch/conda/integrated/build.sh to be: “export NCCL_ROOT_DIR=path/to/local/nccl2”.
Then I need to use Caffe2 with Python 2, so I added “conda_args+=(" --python 2.7")” in ./pytorch/scripts/build_anaconda.sh to use Python 2.7.
The build succeeded, but when I run python2 test.py
from caffe2.python import core
It tells me:
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
WARNING:root:Debug message: No module named caffe2_pybind11_state_hip
Segmentation fault (core dumped)
My questions are:
a. Why does the Conda build not support GPU?
b. If I am using a single GPU, is NCCL necessary for building?
c. How do I fix “No module named caffe2_pybind11_state_hip”?
PyTorch or Caffe2: caffe2
How you installed PyTorch (conda, pip, source): conda
Build command you used (if compiling from source):./scripts/build_anaconda.sh --install-locally --cuda 9.0 --cudnn 7
OS:ubuntu16
PyTorch version:
Python version:2.7
CUDA/cuDNN version:9.1/7
GPU models and configuration:??
GCC version (if compiling from source):5.4.0
CMake version:not install
Versions of any other relevant libraries:
Thank you very much! |
st103950 | Hi, everyone,
I am currently using image data to predict some values; it is basically a regression problem. Now I want to evaluate the reliability of the predictions.
I saw some code to evaluate the model, but is it possible to do the evaluation for every single prediction the model makes?
Thanks a lot. |
st103951 | I don’t really have the labels for the classification, but for every image I have a corresponding array.
Like:
img1, [0.1,0.6,0.8,1.5]
img2, [0.1,0.6,0.68,1.55]
img3, [0.1,0.75,0.8,10.5]
…
…
So, finally, the neural network that I trained can give me an array like:
imgTest, [0.3,0.6,0.5,0.9]
I want to evaluate the reliability of the prediction. |
st103952 | Divide your data into 80% training data and 20% evaluation data (some people use 90%/10%). To evaluate your model, compare the model’s prediction with the given label for every image in the evaluation data; if they are equal, add 1 to a correct_counter. After running over all the examples in the evaluation data, just compute correct_counter / size(evaluation).
This is probably the most basic method to evaluate a model.
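(A minimal sketch of that loop; it assumes a model that outputs class scores and an eval_loader yielding (image, label) batches:)
import torch

model.eval()
correct = 0
total = 0
with torch.no_grad():
    for images, labels in eval_loader:
        outputs = model(images)
        preds = outputs.max(1)[1]             # predicted class per image
        correct += (preds == labels).sum().item()
        total += labels.size(0)
print('accuracy: {:.4f}'.format(correct / total)) |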
st103953 | Nice, for evaluating the model I totally agree with you.
And is it possible to evaluate the prediction that the model has made? |
st103954 | What do you mean evaluate the prediction? Images that you don’t have the labels to? |
st103955 | A model evaluation will give you probability of the model being right or wrong, which is probably the best you can get. Another approach is human evaluation, in which people are asked to evaluate how well the model did (it is common when using generative models). |
st103956 | Is this possible to do? Here’s what I have tried so far, to no avail. I am on device 1 trying to access a variable from device 0.
torch.cuda.current_device() #prints 1
outputs.get_device() #prints 0
torch.cuda.comm.broadcast(outputs, (0, 1))
outputs.get_device() #prints 0
outputs.cuda(1)
outputs.get_device() #prints 0
outputs.to(1)
outputs.get_device() #prints 0
outputs.cuda(device=1)
outputs.get_device() #prints 0
If this is relevant, the outputs variable is an output from an LSTM hosted on GPU 0. I’m trying to move it to GPU 1 for additional computation because I think I’m running out of memory on GPU 0. |
st103957 | outputs = outputs.to(1)
should work, since it’s not an inplace operation.
The same goes for the cuda() call.
PS: you can use it inplace on Modules, but I would recommend to always assign it. |
st103958 | I’m trying to implement an inception block to train on, but I seem to be getting an error when I concatenate all the outputs from the convolutional layers.
The code is here:
github.com
maxmatical/pytorch-projects/blob/master/inception block
# inception style module
class convnet(nn.Module):
    def __init__(self):
        super(convnet, self).__init__()
        self.conv_3 = nn.Sequential(
            nn.Conv2d(3, 8, 1, padding = 0),
            nn.ReLU(),
            nn.Conv2d(8, 8, 3, padding = 1),
            nn.ReLU()
        )
        self.conv_5 = nn.Sequential(
            nn.Conv2d(3, 8, 1, padding = 0),
            nn.ReLU(),
            nn.Conv2d(8, 8, 5, padding = 2),
            nn.ReLU()
        )
        self.max_pool_conv_1 = nn.Sequential(
            nn.MaxPool2d(3, stride = 3),  # window size 3x3
            nn.Conv2d(3, 8, 1, padding = 0),
            nn.ReLU()
This file has been truncated.
When I run my training, I get this error
RuntimeError Traceback (most recent call last)
in ()
12
13 # forward + backward + optimize
---> 14 outputs = net(inputs)
15 loss = criterion(outputs, labels)
16 loss.backward()
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs)
489 result = self._slow_forward(*input, **kwargs)
490 else:
--> 491 result = self.forward(*input, **kwargs)
492 for hook in self._forward_hooks.values():
493 hook_result = hook(self, input, result)
in forward(self, x)
33 x3 = self.conv_5(x)
34 x4 = self.max_pool_conv_1(x)
---> 35 x = torch.cat([x1,x2,x3,x4], 1) # concatenate all layers
36 x = x.view(x.size(0), -1) # flatten
37 x = F.relu(self.fc1(x))
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 32 and 10 in dimension 2 at /Users/soumith/code/builder/wheel/pytorch-src/aten/src/TH/generic/THTensorMath.c:3586
It looks like it has something to do with the torch.cat line. Could anyone offer any help? |
st103959 | Print the sizes of x1, …, x4. Likely, x4 is smaller than the others in the W,H dimensions; that wouldn’t work.
It’s a good idea to implement models from scratch to learn, but you could peek e.g. at the inception model in torchvision if you are stuck.
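(To illustrate the size mismatch from the error above, a quick check with a hypothetical 32x32 input:)
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)
print(nn.Conv2d(3, 8, 3, padding=1)(x).shape)   # torch.Size([1, 8, 32, 32]) -- spatial size kept
print(nn.MaxPool2d(3, stride=3)(x).shape)       # torch.Size([1, 3, 10, 10]) -- spatial size shrinks
# torch.cat along dim=1 needs all other dims to match, hence the "32 and 10" error;
# e.g. nn.MaxPool2d(3, stride=1, padding=1) would keep the 32x32 size.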
Best regards
Thomas |
st103960 | I have a fully connected layer, and I want to multiply the first half of its weights by a number K that itself has to be learnt.
That is, suppose the weight of the fully connected network is W [m×n] and I have another parameter K that is initialized to, say, 0.5.
I want to perform W[:m//2, :] = W[:m//2, :] * K and I need K to change as well.
Is there some way to do this? Will backpropagation work if I multiply two parameters? |
st103961 | Solved by tom in post #2
st103962 | Modifying parameters this way doesn’t work.
In my opinion the best way to do this is to create your own layer using W and K as parameters and using torch.nn.functional.linear in the forward.
for the multiplication, (provided m is even), I’d probably suggest
mul = torch.ones(2, 1, 1, device=K.device)
mul[0, 0, 0] = K
weight = (W.view(2, m//2, -1) * mul).view(m, -1)
and then using weight. Backpropagation should work this way.
There is a somewhat elaborate way to replace parameters with calculated quantities, see e.g. the spectral norm implementation for inspiration. Unfortunately, there doesn't seem to be general interest in a more generally applicable interface (I've tried to pitch my ideas at https://github.com/pytorch/pytorch/issues/7313).
Best regards
Thomas |
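A minimal sketch of such a custom layer built around the snippet above (the class name and the initialization of W, b and K are my own assumptions, not part of the original suggestion):
import torch
import torch.nn as nn
import torch.nn.functional as F

class HalfScaledLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super(HalfScaledLinear, self).__init__()
        assert out_features % 2 == 0
        self.W = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.b = nn.Parameter(torch.zeros(out_features))
        self.K = nn.Parameter(torch.tensor(0.5))   # learnable scale for the first half

    def forward(self, x):
        m = self.W.size(0)
        mul = torch.ones(2, 1, 1, device=self.K.device)
        mul[0, 0, 0] = self.K                      # gradients flow into K through this copy
        weight = (self.W.view(2, m // 2, -1) * mul).view(m, -1)
        return F.linear(x, weight, self.b)

layer = HalfScaledLinear(4, 6)
layer(torch.randn(8, 4)).sum().backward()
print(layer.K.grad)                                # K receives a gradient
Passing layer.parameters() to an optimizer then updates W, b and K together.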
st103963 | Oh thanks!
The functional approach seems cool I guess; a bit less neat, but good enough.
Thanks |
st103964 | Hi, in your example, is the use of mul meant to convert the parameter into a tensor before the multiplication?
EDIT: it seems it doesn't make a difference with or without it.
st103965 | Oops, the code example was missing the top line because I left it on the triple backtick line :/.
The mul is there to target the first half only.
st103966 | Issue description
An exception is always raised when specific batch sizes are used in the inference phase, such as [9, 11, 13, 15, 19, 22], while a single device operates normally.
Downgrading to the torch 0.4.0 stable version still hits the same issue.
Code example
import sys
import traceback
import torch
class Net(torch.nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = torch.nn.Conv2d(3, 10, kernel_size=3, padding=1, stride=2)
self.conv2 = torch.nn.Conv2d(10, 20, kernel_size=1, padding=0, stride=2)
self.fc = torch.nn.Linear(20, 2)
def forward(self, x):
x = self.conv1(x)
x = self.conv2(x)
x = torch.nn.functional.adaptive_avg_pool2d(x, 1).squeeze()
x = self.fc(x)
return x
problem_pair = []
exception = None
for i in range(8):
model = Net()
if i != 0:
model = torch.nn.DataParallel(model, device_ids=range(i)).cuda()
else:
model.cuda()
for j in range(1, 60):
try:
data = torch.rand(j, 3, 8, 8).cuda()
_ = model(data)
except Exception as e:
exception = sys.exc_info()
problem_pair.append([i, j])
print('problem pair {} raise error'.format(problem_pair))
traceback.print_exception(*exception)
output
problem pair [[2, 3], [3, 5], [3, 7], [4, 5], [4, 7], [4, 10], [4, 13], [5, 7], [5, 9], [5, 13], [5, 17], [5, 21], [6, 7], [6, 9], [6, 11], [6, 13], [6, 16], [6, 21], [6, 26], [6, 31], [7, 9], [7, 11], [7, 13], [7, 16], [7, 19], [7, 25], [7, 31], [7, 37], [7, 43]] raise error
Traceback (most recent call last):
  File "demo.py", line 33, in <module>
    _ = model(data)
  File "/home/wangyulong/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/wangyulong/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 115, in forward
    return self.gather(outputs, self.output_device)
  File "/home/wangyulong/.local/lib/python3.5/site-packages/torch/nn/parallel/data_parallel.py", line 127, in gather
    return gather(outputs, output_device, dim=self.dim)
  File "/home/wangyulong/.local/lib/python3.5/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
    return gather_map(outputs)
  File "/home/wangyulong/.local/lib/python3.5/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map
    return Gather.apply(target_device, dim, *outputs)
  File "/home/wangyulong/.local/lib/python3.5/site-packages/torch/nn/parallel/_functions.py", line 55, in forward
    return comm.gather(inputs, ctx.dim, ctx.target_device)
  File "/home/wangyulong/.local/lib/python3.5/site-packages/torch/cuda/comm.py", line 186, in gather
    "but expected {}".format(got, expected))
ValueError: gather got an input of invalid size: got 2, but expected 2x2
System Info
PyTorch version: 0.5.0a0+6eec411
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.4 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
CMake version: version 3.11.0
Python version: 3.5
Is CUDA available: Yes
CUDA runtime version: 9.0.176
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
GPU 4: GeForce GTX 1080 Ti
GPU 5: GeForce GTX 1080 Ti
GPU 6: GeForce GTX 1080 Ti
GPU 7: GeForce GTX 1080 Ti
Nvidia driver version: 390.30
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy (1.14.3)
[pip3] torch (0.5.0a0+6eec411)
[pip3] torchvision (0.2.1)
[conda] Could not collect |
st103967 | Ok, my mistake: when the batch split leaves one GPU with a single sample, the argument-less squeeze() turns its 2D output into a 1D tensor, which is what breaks the gather.
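For reference, a minimal sketch of one possible fix: the forward pass from the snippet above with only the pooling/squeeze line changed, so the batch dimension survives even for a chunk of size 1:
def forward(self, x):
    x = self.conv1(x)
    x = self.conv2(x)
    x = torch.nn.functional.adaptive_avg_pool2d(x, 1)
    x = x.view(x.size(0), -1)   # (N, 20) even when N == 1; or x.squeeze(-1).squeeze(-1)
    x = self.fc(x)
    return x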
st103968 | Hello
I have been able to train a system using Q-Learning, and now I want to move it to the Actor-Critic (A2C) method. Please don't ask why I am making this move; I have to.
I am currently borrowing the implementation from https://github.com/higgsfield/RL-Adventure-2/blob/master/1.actor-critic.ipynb 19
The thing is, I keep getting a success rate of approximately 50%, which is basically random behavior. My game has long episodes (50 steps). I am wondering how I should debug this. Should I print out the reward, the value, or something else?
Here are some logs:
simulation episode 2: Success, turn_count =20
loss = tensor(1763.7875)
simulation episode 3: Fail, turn_count= 42
loss = tensor(44.6923)
simulation episode 4: Fail, turn_count= 42
loss = tensor(173.5872)
simulation episode 5: Fail, turn_count= 42
loss = tensor(4034.0889)
simulation episode 6: Fail, turn_count= 42
loss = tensor(132.7567)
loss = simulation episode 7: Success, turn_count =22
loss = tensor(2099.5344)
As a general trend, I have observed that for Success episodes the loss tends to be huge, whereas for Fail episodes the loss tends to be small. Any suggestions?
st103969 | Hello. I am using nn.DataParallel to train my model on multiple GPUs. However, I ran into problems with the DataLoader: it always gets stuck after a few iterations. When I switch back to a single GPU, everything runs smoothly. What could be wrong?
st103970 | Below is a sample code of something I am trying to implement :
import torch
import torch.nn as nn
class tmpNw(nn.Module):
def __init__(self):
super(tmpNw, self).__init__()
self.linear = []
self.linear.append(nn.Linear(2, 3))
self.linear.append(nn.Linear(3, 3))
def forward(self, x):
x = self.linear[0](x)
x = self.linear[1](x)
return x
nw = tmpNw()
print(list(nw.parameters()))
optimizer = torch.optim.Adam(nw.parameters(), lr=1e-3)
It returns an empty parameter list, so I am unable to create the optimizer object.
st103971 | For attributes to be registered as parameters, they need to be of type nn.Parameter or nn.Module. Either do
self.linear1 = nn.Linear(...)
self.linear2 = ...
or use nn.ModuleList 11
self.linear = nn.ModuleList([nn.Linear(...), ...]) |
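A minimal sketch of the ModuleList version of the network from the question (layer sizes taken from the snippet above):
import torch
import torch.nn as nn

class tmpNw(nn.Module):
    def __init__(self):
        super(tmpNw, self).__init__()
        # nn.ModuleList registers each layer, so their parameters are visible
        self.linear = nn.ModuleList([nn.Linear(2, 3), nn.Linear(3, 3)])

    def forward(self, x):
        for layer in self.linear:
            x = layer(x)
        return x

nw = tmpNw()
print(list(nw.parameters()))                          # no longer empty
optimizer = torch.optim.Adam(nw.parameters(), lr=1e-3)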
st103972 | Hello,
I have recently been training a model using different RNNs (simple RNN, LSTM, GRU); so far the best one was a GRU with one layer and 256 hidden dimensions, giving me 87.5% OA and 76.5% mIoU. So here are my questions:
1- I have been using a learning rate of 0.1 that changes at a certain epoch (I trained for 200 epochs and, at the 140th epoch only, set learning_rate_new = learning_rate_old * decay_rate, i.e. 0.1 * 0.7; I did this once, not repeatedly). Is there a better method for updating the learning rate, e.g. a built-in function or another approach?
2- For the class weights, I used a tensor of the inverse square root of the class frequencies in my training dataset (I read in another forum that it performs well). Is there a better way of doing this?
And thank you; I just want to push the envelope one last bit and get that sweet 90% OA without overfitting.
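For reference, a minimal sketch of how I build such weights (the class counts below are made-up numbers, and passing them to the loss function is my assumption about where they belong):
import torch
import torch.nn as nn

class_counts = torch.tensor([500., 120., 60., 20.])  # samples per class (example values)
weights = 1.0 / torch.sqrt(class_counts)              # inverse square root of the frequencies
weights = weights / weights.sum()                     # optional normalization
criterion = nn.CrossEntropyLoss(weight=weights)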
st103973 | Does OA mean overall accuracy?
If so, do you have an imbalanced dataset, and what are the class frequencies?
You could also try just the inverse of the class frequencies and see how well your model performs.
For the learning rate schedule, you could try ReduceLROnPlateau.
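A minimal usage sketch of that scheduler (the dummy model, the mode='max' choice for an accuracy metric, and the factor/patience values are just assumptions for illustration):
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                              # dummy model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='max', factor=0.7, patience=5)

for epoch in range(200):
    # ... run the training loop for one epoch here ...
    val_oa = 0.875                                    # placeholder for the validation accuracy
    scheduler.step(val_oa)                            # lowers the lr when val_oa stops improving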
st103974 | hello @ptrblck ,
Yes, OA means overall accuracy, and yes, I have an imbalanced dataset; some classes have fewer examples than others. I tried just the inverse of the class frequencies and it performed poorly compared to the inverse square root.
I'll have a look at the scheduler; it has been on my radar lately.
thanks |
st103975 | Thanks for the info!
Interesting idea to use the square root.
Just to dig a bit deeper: are you calculating the OA by summing the right predictions (diag of confusion matrix) and dividing it by the sum of all samples?
This might be problematic in an imbalanced setup. |
st103976 | Yes Sir, OA = correct predictions (a.k.a. the trace of my confusion matrix) / all samples, and mIoU = mean( cm[i,i] / (sum(cm[i,:]) + sum(cm[:,i]) - cm[i,i]) ). So is my OA wrong?
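Concretely, a minimal sketch of both metrics computed from a confusion matrix cm (the toy matrix and the rows-as-ground-truth layout are assumptions):
import torch

cm = torch.tensor([[50., 2., 3.],
                   [ 4., 30., 1.],
                   [ 2., 1., 7.]])     # toy 3-class confusion matrix
oa = cm.diag().sum() / cm.sum()
iou = cm.diag() / (cm.sum(dim=1) + cm.sum(dim=0) - cm.diag())
miou = iou.mean()
print(oa.item(), miou.item())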
st103977 | No, not at all. Your OA calculation is right. It might give you high accuracy values if some of your classes have very little support, and it might lead to the accuracy paradox.
st103978 | I'll be trying the scheduler; let's hope for the best. As for the weighting, should I leave it as it is with the inverse square root? I'll be getting a larger dataset soon-ish (the one I used has 9,230 examples; the new one will have about 250,000, but with 28 classes), which is why I'm trying to tune and squeeze out a very good set of model hyper-parameters now, so I won't have to repeat the whole process on the large dataset.
Again, thank you for your help; I wish you all the best.
cheers |
st103979 | How can I implement Conv2d with weight sharing along one dimension of the input?
I.e., given an input image I(x,y), I want to perform a convolution such that I[:,y] uses the same convolution filter weights and I[x,:] uses different weights.
st103980 | Would a kernel size of [kh, 1] work? Or do you want a two-dimensional kernel with shared weights along the width?
x = torch.randn(1, 3, 24, 24)
conv = nn.Conv2d(in_channels=3, out_channels=6, kernel_size=(3, 1))
output = conv(x) |
st103981 | Hi, I’m trying to get a feel for pytorch as a relatively new programmer using the iris dataset but I have an issue. I was running a few tests on training speed regarding CPU/GPU and num_workers and I get some interesting results.
CUDA, num_workers=0: 2.430s
CUDA, num_workers=1: 18.741s
CPU, num_workers=0: 1.619s
CPU, num_workers=1: 11.038s
As you can see, it seems faster to run on the CPU with no worker subprocesses, which shouldn't be the case. Can someone explain why, or whether my implementation is wrong?
Here’s my code/model:
github.com
cpiscos/pytorch_projects/blob/master/iris.py 6
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
import numpy as np
import matplotlib.pyplot as plt
import time
device = torch.device('cuda')
class IrisData(Dataset):
def __init__(self):
xy = pd.read_csv('data/iris.data', header=None)
xy[4] = xy[4].astype('category')
self.x = xy.iloc[:, :4].values
self.x_mu = np.mean(self.x, axis=0)
self.x_std = np.std(self.x, axis=0)
self.x = torch.from_numpy((self.x - self.x_mu) / self.x_std)
self.y = torch.from_numpy(np.array(xy[4].cat.codes, dtype='int64'))
This file has been truncated. show original |
st103982 | Hello forum,
I’m trying to use pytorch to store a variable length array inside the state_dict so the array is persistent.
First I register the buffer like in the batch normalization layer to add the array to the state_dict:
self.register_buffer('train_loss_data', torch.tensor([], requires_grad=False))
During training I add an item every checkpoint to the tensor:
model.train_loss_data = torch.cat([model.train_loss_data, torch.tensor([train_loss]).to(device)])
torch.save(model.state_dict(), './models/' + model.name + '.ckpt')
Then when I go to load the best checkpoint I get a dimensions not equal error.
model.load_state_dict(torch.load('./models/' + model.name + '.ckpt'))
RuntimeError: Error(s) in loading state_dict for mlp:
While copying the parameter named "train_loss_data", whose dimensions in the model are torch.Size([21]) and whose dimensions in the checkpoint are torch.Size([8]).
I could achieve this functionality with tensorflow using the following validate_shape flag:
graph.train_acc = tf.Variable([], trainable=False, name='train_acc', validate_shape=False)
Is there any way with the pytorch loader/saver to achieve what I want to do? |
st103983 | I was able to work around it by overloading load_state_dict() with the following:
def load_state_dict(self, state_dict, strict=True):
self.train_loss_data.resize_(state_dict['train_loss_data'].shape)
super().load_state_dict(state_dict, strict) |
st103984 | Hi, I have been working on a tutorial as a fast introduction to deep learning NLP with Pytorch. I feel that the current tutorials focus mostly on CV. There are some NLP examples out there, but I didn’t find anything for beginners (which I am looking for, since we are using Pytorch for an NLP class I am TA’ing). So I wrote a tutorial. It assumes NLP knowledge and familiarity with neural nets, but not with deep learning programming.
I wanted to post the tutorial here to get feedback and also because I figure it may be helpful to some people. There are some fast explanations and a lot of code, with a few working examples (nothing state of the art, just things to get an idea). I still need to add a BiLSTM-CRF tagger for NER example, which will be the most complicated one. Here’s the link.
GitHub
rguthrie3/DeepLearningForNLPInPytorch 1.1k
An IPython Notebook tutorial on deep learning for natural language processing, including structure prediction. - rguthrie3/DeepLearningForNLPInPytorch
If you look at it, I’m happy to get any feedback. I want it to be useful to the students in my class. |
st103985 | Also checkout https://github.com/spro/practical-pytorch 777 which has some NLP tutorials. |
st103986 | Hey, nice work and thanks for sharing!
I have some minor suggestions:
make_bow_vector (cell evaluated as 91) - create vec using torch.zeros(len(word_to_idx))
I'd mention that NLLLoss expects log-probabilities, but you could also use CrossEntropyLoss if you removed the log_softmax (see the sketch at the end of this post).
I’d split the log_probs line in cell 101 into a few more. It’s not very readable with that indentation.
Also, I’d also recommend using tensor indexing to create BoW vectors, as that will likely be faster than iterating over a list in tensor constructor. |
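For illustration, a minimal sketch of the NLLLoss vs. CrossEntropyLoss point mentioned above (the random logits and targets are placeholders):
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 3)
target = torch.tensor([0, 2, 1, 0])
loss_a = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
loss_b = nn.CrossEntropyLoss()(logits, target)
print(torch.allclose(loss_a, loss_b))   # True: CrossEntropyLoss = log_softmax + NLLLoss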
st103987 | @rguthrie3, thanks for the amazing tutorial!
By any chance, did you write down a solution for the pretrained embeddings exercise?
Best,
D |
st103988 | I got an implementation of CBOW here. Please try to finish it yourself before checking others' solutions!
github.com
towerjoo/pytorch-playground/blob/master/official_tut/cbow.py 398
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
word_to_ix = {}
for i, word in enumerate(raw_text):
if word not in word_to_ix:
word_to_ix[word] = len(word_to_ix)
data = []
This file has been truncated. show original |
st103989 | Hi zhutaoi, it is very nice of you to post your implementation, but I have some questions about it.
In your code, you defined your CBOW model the same as the author's NGramLanguageModeler and changed the context size during training. I believe this can work, but I don't think it matches the definition of the CBOW model, which is A*sum(q) + b. I think you can drop the context size and sum your context embeddings together before feeding them into the linear layer.
I am not in the area of NLP, so if I misunderstood the model or made a mistake, please point it out; I am happy to discuss it with you.
Thanks! |
st103990 | I think it should be along this line:
class CBOW(nn.Module):
def __init__(self, vocab_size, embedding_dim):
super(CBOW, self).__init__()
self.embedding = nn.Embedding(num_embeddings=vocab_size,
embedding_dim=embedding_dim)
self.linear = nn.Linear(in_features=embedding_dim,
out_features=vocab_size)
def forward(self, x):
# embeds 4 context words into say, 10 dim,
# then take their sum along the rows (dim=0) to get 1 by 10 vector
embedding = self.embedding(x).sum(dim=0)
out = self.linear(embedding)
out = F.log_softmax(out)
return out
But I’m getting RuntimeError: index out of range at /py/conda-bld/pytorch_1493674854206/work/torch/lib/TH/generic/THTensorMath.c:273 and I don’t know why? |
st103991 | This is because of the way you generate the word_to_ix dict. In the code the author provided, the dict was generated as:
word_to_ix = {word: i for i, word in enumerate(raw_text)}
Note that he enumerates over raw_text, not vocab. I guess that is why you get an index out of range error. You can either change raw_text to vocab, or set vocab_size to the length of raw_text. Hope this addresses your issue.
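For illustration, a minimal sketch of the first suggested fix, enumerating over the deduplicated vocabulary instead of the raw text:
vocab = set(raw_text)
vocab_size = len(vocab)                                 # matches nn.Embedding(vocab_size, ...)
word_to_ix = {word: i for i, word in enumerate(vocab)}  # indices now fall in [0, vocab_size)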
st103992 | Ah, right! of course, it makes more sense to enumerate vocab for later embedding. |
st103993 | Thank you very much, this is really good for starters. However, as I am new to PyTorch, I am looking for any tutorial that covers sparse operations, since I am dealing with one-hot vectors. Please let me know if you know of any such tutorials.
Sincerely, |
st103994 | I am new to pytorch and learning NLP/deep learning.
I was going through the CBOW model mentioned here and the explanation on the tutorial page/exercise (here). In the former, two matrices are learned, while in the latter we only learn the word embeddings and the A and B parameters. I think both are saying the same thing, but I couldn't understand how.
I implemented the exercise of CBOW (my code is below). Please let me know if it looks okay.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
CONTEXT_SIZE = 2
EMBED_SIZE = 10
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
vocab = set(raw_text)
word_to_idx = {word: i for i, word in enumerate(vocab)}
idx_to_word = {i: word for i, word in enumerate(vocab)}
print(word_to_idx)
context_target = [ ([raw_text[i-2], raw_text[i-1], raw_text[i+1] , raw_text[i+2]], raw_text[i]) for i in range(2, len(raw_text)-2)]
class CBOWClassifier(nn.Module):
def __init__ (self, vocab_size, embed_size, context_size):
super(CBOWClassifier,self).__init__()
self.embeddings = nn.Embedding(vocab_size, embed_size)
self.linear1 = nn.Linear(embed_size, 128)
self.linear2 = nn.Linear(128, vocab_size)
def forward(self, inputs):
embed = self.embeddings(inputs)
embed = torch.sum(embed, dim=0)
out = self.linear1(embed)
out = F.relu(out)
out = self.linear2(out)
log_probs = F.log_softmax(out)
return log_probs
VOCAB_SIZE = len(word_to_idx)
model = CBOWClassifier(VOCAB_SIZE, EMBED_SIZE, 2*CONTEXT_SIZE)
losses = []
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)
for epochs in range(100):
total_loss = torch.Tensor([0])
for context, target in context_target:
context_idx = [word_to_idx[w] for w in context]
context_var = autograd.Variable(torch.LongTensor(context_idx))
model.zero_grad()
log_probs = model(context_var)
target_idx = word_to_idx[target]
loss = loss_function(log_probs, autograd.Variable(torch.LongTensor([target_idx])))
loss.backward()
optimizer.step()
total_loss = total_loss + loss.data
losses.append(total_loss)
print(losses) |
st103995 | Hello,
I also implemented the CBOW model as follows:
gist.github.com
https://gist.github.com/emirceyani/2ca7d8c3c9a2704d0f1e7f72cfbdac72 184
CBOW.py
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
CONTEXT_SIZE = 2 # 2 words to the left, 2 to the right
EMBEDDING_DIM=10
This file has been truncated. show original
The loss is decreasing, but how many epochs are needed to get the output for the CBOW exercise?
st103996 | I’ve been reading this tutorial and would like to ask why use this line:
hello_embed = embeds(autograd.Variable(lookup_tensor))
instead of
hello_embed = embeds(Variable(lookup_tensor))
In other words, why qualify Variable with autograd?
I ask because type(hello_embed) produces the same result for both lines: <class 'torch.autograd.variable.Variable'>.
Thanks! |
st103997 | @rguthrie3 Hi, I saw you don't have a language model example… I am working on a clean implementation of a word-level language model. I guess you might know a bit about this question: Why is Hidden Variable out of Network Class in Pytorch examples Language Model? Please take a look! Thanks for the tutorial, btw; they are usually very helpful.
st103998 | Anyone see issues with my implementation?
EMBEDDING_DIM = 10
raw_text = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells.""".split()
# By deriving a set from `raw_text`, we deduplicate the array
vocab = set(raw_text)
vocab_size = len(vocab)
word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(2, len(raw_text) - 2):
context = [raw_text[i - 2], raw_text[i - 1],
raw_text[i + 1], raw_text[i + 2]]
target = raw_text[i]
data.append((context, target))
print(data[:5])
class CBOW(nn.Module):
def __init__(self, vocab_size, embedding_dim):
super(CBOW, self).__init__()
self.embeddings = nn.Embedding(vocab_size, embedding_dim)
self.linear1 = nn.Linear(embedding_dim, 128)
self.linear2 = nn.Linear(128, vocab_size)
def forward(self, inputs):
embeds = self.embeddings(inputs).sum(0).view((1,-1))
out = self.linear1(F.relu(embeds))
out = self.linear2(out)
log_probs = F.log_softmax(out, dim=1)
return log_probs
losses = []
loss_function = nn.NLLLoss()
model = CBOW(len(vocab), EMBEDDING_DIM)
optimizer = optim.SGD(model.parameters(), lr=0.001)
# create your model and train. here are some functions to help you make
# the data ready for use by your module
def make_context_vector(context, word_to_ix):
idxs = [word_to_ix[w] for w in context]
return torch.tensor(idxs, dtype=torch.long)
for epoch in range(10): # train a bit
total_loss = torch.Tensor([0])
for context, target in data:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in variables)
context_idxs = torch.tensor(make_context_vector(context, word_to_ix), dtype=torch.long)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a variable)
loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
# keep training
epoch_count = 10
print("Training until loss is less than 1..")
while losses[-1] >= 1: # go until seriously overfitting :)
total_loss = torch.Tensor([0])
for context, target in data:
# Step 1. Prepare the inputs to be passed to the model (i.e, turn the words
# into integer indices and wrap them in variables)
context_idxs = torch.tensor(make_context_vector(context, word_to_ix), dtype=torch.long)
# Step 2. Recall that torch *accumulates* gradients. Before passing in a
# new instance, you need to zero out the gradients from the old
# instance
model.zero_grad()
# Step 3. Run the forward pass, getting log probabilities over next
# words
log_probs = model(context_idxs)
# Step 4. Compute your loss function. (Again, Torch wants the target
# word wrapped in a variable)
loss = loss_function(log_probs, torch.tensor([word_to_ix[target]], dtype=torch.long))
# Step 5. Do the backward pass and update the gradient
loss.backward()
optimizer.step()
# Get the Python number from a 1-element Tensor by calling tensor.item()
total_loss += loss.item()
losses.append(total_loss)
epoch_count += 1
print("Final loss of %0.4f in %d epochs" % (float(losses[-1]), epoch_count))
# Test
correct = 0
for context, target in data:
context_idxs = torch.tensor(make_context_vector(context, word_to_ix), dtype=torch.long)
log_probs = model(context_idxs)
_, ix = torch.max(log_probs, 1)
prediction = next(key for key, value in word_to_ix.items() if value == int(ix))
correct += target == prediction
accuracy = correct / len(data)
print("Average accuracy:", accuracy)
Training util loss is less than 1..
Final loss of 0.9996 in 1436 epochs
Average accuracy: 1.0
What concerned me was that the template defines CONTEXT_SIZE rather than EMBEDDING_DIM, but since the equation sums over the 2*CONTEXT_SIZE context words, this parameter should not actually be required, right? Maybe just a typo?
st103999 | I'm porting matrix-addition code from TensorFlow to PyTorch. The matrix shapes are (1,25,256) and (1,1,256). In TensorFlow it works, but in PyTorch I get the error:
RuntimeError: inconsistent tensor size
I know that the shapes don't match, but this code works in TensorFlow and I wonder if it can work in PyTorch as well.
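For reference, a minimal sketch of how the addition can be written with an explicit expand, which also works on older PyTorch versions that don't broadcast automatically (the tensors here are random placeholders):
import torch

a = torch.randn(1, 25, 256)
b = torch.randn(1, 1, 256)
c = a + b.expand_as(a)   # expand b along dim 1 to (1, 25, 256), then add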