st97868
|
You could use the functional API to define your parameters in the forward method.
Here is a small example using a random number of kernels for the conv layer:
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv_weight = None
        self.conv_bias = None

    def forward(self, x):
        if self.conv_weight is None:
            # lazily create the parameters on the first forward pass
            nb_kernels = torch.randint(1, 10, (1,)).item()
            self.conv_weight = nn.Parameter(torch.randn(nb_kernels, x.size(1), 3, 3))
            self.conv_bias = nn.Parameter(torch.randn(nb_kernels))
        x = F.conv2d(x, self.conv_weight, self.conv_bias, stride=1, padding=1)
        return x
model = MyModel()
x = torch.randn(1, 3, 24, 24)
output = model(x)
output.mean().backward()
print(model.conv_weight.grad)
print(list(model.parameters()))
|
st97869
|
I was working on a CNN model and I got this error without much information from the call stack. The only thing I know is that it happened during backprop. Is there any common reason for this?
Call stack:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-89> in <module>()
29 optimizer.zero_grad()
30 pred = model(data)
---> 31 criterion(pred, label).backward()
32 optimizer.step()
33
~/anaconda3/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
94 products. Defaults to ``False``.
95 """
---> 96 torch.autograd.backward(self, gradient, retain_graph, create_graph)
97
98 def register_hook(self, hook):
~/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
88 Variable._execution_engine.run_backward(
89 tensors, grad_tensors, retain_graph, create_graph,
---> 90 allow_unreachable=True) # allow_unreachable flag
91
92
|
st97870
|
It seems that it was a ulimit issue. I searched online and found previous discussions on this. I tried the cleanup method from those posts (with su) and that solved the issue. Not sure why it happened, though.
|
st97871
|
I got the same problem during backprop. Would you mind sharing how you solved it and where you found the solution?
|
st97872
|
Hello community,
I would like to share a package built with PyTorch to train recommendation system models: https://github.com/amoussawi/recoder
It contains implementations of two factorization models: (Deep) Autoencoders and Matrix Factorization. You can also build your own factorization model and train it.
It was built to be fast for large-scale training with negative sampling; for instance, you can have an Autoencoder model fully trained on the MovieLens-20M dataset in less than a minute on a Tesla K80 GPU.
|
st97873
|
Hi there,
I was looking through the PyTorch source code for LSTM and GRU, and I don’t see where their equations are in the code. Can anyone point me in the right direction?
Thanks!
|
st97874
|
Solved by tom in post #2
|
st97875
|
They map to C++ code in ATen/native/RNN.cpp, the same in the cudnn subdirectory, or native cuda kernels in the cuda subdirectory’s RNN.cu.
You can see this by following the same method as described in my Selective excursion into PyTorch internals blog post, with a little detour at the beginning from torch.nn.LSTM.forward via torch.nn.modules.rnn._rnn_impls.
Best regards
Thomas
|
st97876
|
I am currently making this dataset which looks something like this:
[ [x, y], # ground truth for a coordinate
[0], # 1-hot encodings for class identification, there are 4 of these
[1],
[0],
[0] ]
For example, I am going to input some images and I want a vector output like the above. I consider predicting the coordinate a regression problem and the one-hot encoding a classification problem. Can I use MSELoss or L1Loss on the first value and cross-entropy for the bottom 4 values? I am self-taught, so there is a lot that I don’t know; sorry if this is a stupid question. Code examples welcome.
Edited:
I found this while doing some research,
b = nn.MSELoss()
a = nn.CrossEntropyLoss()
loss_a = a(output_x, x_labels)
loss_b = b(output_y, y_labels)
loss = loss_a + loss_b
loss.backward()
So in theory, can I make a prediction with my network, e.g. y_hat, slice off the coordinate prediction and call it output_x, and do the same for output_y for classification? Will this work for my problem?
|
st97877
|
For sure you can use multiple loss functions. For example, PSPNet uses this extensively (called an auxiliary output/loss). Below is such an example:
out_seg, out_cls = model(x)
loss_seg = seg_criterion(out_seg, y_seg)   # segmentation loss
loss_cls = cls_criterion(out_cls, y_cls)   # classification loss
loss = loss_seg + alpha * loss_cls
loss.backward()
optimizer.step()
So the model has two outputs (typically from an earlier layer and the final layer). In this case, one is trained on classification and the other on segmentation. The idea is that having a loss function directly on an earlier layer helps train those earlier layers better.
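Applied to the question above, a minimal sketch of the same idea (the layer sizes, feature extractor, and the 4-class head are assumptions for illustration) splits one network into a regression head and a classification head:

import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self):
        super(TwoHeadNet, self).__init__()
        self.backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # assumed feature extractor
        self.coord_head = nn.Linear(64, 2)   # predicts [x, y]
        self.cls_head = nn.Linear(64, 4)     # logits for the 4 classes

    def forward(self, x):
        feat = self.backbone(x)
        return self.coord_head(feat), self.cls_head(feat)

model = TwoHeadNet()
mse = nn.MSELoss()
ce = nn.CrossEntropyLoss()

inp = torch.randn(8, 128)
coord_target = torch.randn(8, 2)
cls_target = torch.randint(0, 4, (8,))

coord_pred, cls_logits = model(inp)
loss = mse(coord_pred, coord_target) + ce(cls_logits, cls_target)
loss.backward()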
|
st97878
|
Suppose I have a coefficients tensor with shape (A, B) and another tensor of values with shape (A, B, C, D, …). How can I do a scalar element-wise multiplication such that each subtensor of shape (C, D, …) is multiplied by the corresponding element of the coefficients tensor?
To be clear, with A=3, B=3, C=2 and D=2,
the coefficients tensor can be something like
C = [[0.1 0.2 0.3]
     [0.4 0.5 0.6]
     [0.7 0.8 0.9]]
and the values tensor (V) will be a 3x3x2x2 tensor with each of the 3x3 “cells” being a 2x2 matrix.
The result of this operation should be each 2x2 matrix multiplied by the respective element of C, so that
V[0][0] = V[0][0] * C[0][0]
V[0][1] = V[0][1] * C[0][1]
V[0][2] = V[0][2] * C[0][2]
V[1][0] = V[1][0] * C[1][0]
and so on…
where each * is a scalar multiplication between an element of the matrix C and a matrix of the tensor V.
>>> C = torch.randn(3,3)
>>> V = torch.randn(3,3,2,2)
>>> C*V
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 3
>>> torch.dot(C,V)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: dot: Expected 1-D argument self, but got 2-D
>>> torch.mul(C,V)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
RuntimeError: The size of tensor a (3) must match the size of tensor b (2) at non-singleton dimension 3
Maybe the answer to my question is somewhere in the docs, but I couldn’t find it. Thanks in advance!
|
st97879
|
I think you are on the right track.
Automatic broadcasting doesn’t work in your example, as the dimensions are missing.
Try to unsqueeze C and it should work:
A, B, C, D = 3, 3, 2, 2
c = torch.ones(A, B) * 2
v = torch.randn(A, B, C, D)
d = c[:, :, None, None] * v
print((d[0, 0] == v[0, 0] * 2).all())
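For reference, two equivalent spellings of the same broadcast (my addition, using the c and v defined above):

d2 = c.unsqueeze(-1).unsqueeze(-1) * v      # same as c[:, :, None, None] * v
d3 = torch.einsum('ab,abcd->abcd', c, v)    # explicit index notation
print(torch.equal(d2, d3))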
|
st97880
|
Hi. In the ResNet paper, the output for a 224x224 image is 7x7 before the average pooling layer. However, PyTorch seems to give 8x8. Is there a different approach taken? Attached is an image of the demo.
[image: demo.png]
|
st97881
|
Solved by ptrblck in post #2
|
st97882
|
It seems your image shape is 254x254, which is probably a typo.
Could you try it again with 224x224?
|
st97883
|
Oops, sorry for my silly mistake. I most probably read the paper too quickly.
Thanks
|
st97884
|
I am now using an RTX 2080 Ti, so I need to install CUDA 10. After I built PyTorch from source, it popped up a warning when I trained a simple RNN model:
UserWarning: PyTorch was compiled without cuDNN support. To use cuDNN, rebuild PyTorch making sure the library is visible to the build system.
"PyTorch was compiled without cuDNN support. To use cuDNN, rebuild "
So I tried to rebuild PyTorch, but I got the message below in the build log.
I guess setup.py found the cuDNN path, but didn’t build with it.
…
-- Found CUDA: /usr/local/cuda (found suitable version "10.0", minimum required is "7.0")
-- Caffe2: CUDA detected: 10.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 10.0
-- Found cuDNN: v7.3.1 (include: /usr/local/cuda/include, library: /usr/local/cuda/lib64/libcudnn.so.7)
-- Autodetected CUDA architecture(s): 7.5
-- Added CUDA NVCC flags for: -gencode;arch=compute_75,code=sm_75
-- Could NOT find NCCL (missing: NCCL_INCLUDE_DIRS NCCL_LIBRARIES)
-- Could NOT find CUB (missing: CUB_INCLUDE_DIR)
-- CUDA detected: 10.0
…
-- TORCH_VERSION : 1.0.0
-- CAFFE2_VERSION : 1.0.0
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_ATEN_ONLY : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 2.7.15
-- Python executable : /home/k123/env/python2.7.15/bin/python
-- Pythonlibs version : 2.7.15
-- Python library : /home/k123/.local/lib/libpython2.7.a
-- Python includes : /home/k123/.local/include/python2.7
-- Python site-packages: lib/python2.7/site-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ASAN : OFF
-- USE_CUDA : 1
-- CUDA static link : 0
-- USE_CUDNN : OFF
-- CUDA version : 10.0
-- CUDA root directory : /usr/local/cuda
-- CUDA library : /usr/lib/x86_64-linux-gnu/libcuda.so
-- cudart library : /usr/local/cuda/lib64/libcudart_static.a;-pthread;dl;/usr/lib/x86_64-linux-gnu/librt.so
-- cublas library : /usr/local/cuda/lib64/libcublas.so
-- cufft library : /usr/local/cuda/lib64/libcufft.so
-- curand library : /usr/local/cuda/lib64/libcurand.so
-- nvrtc : /usr/local/cuda/lib64/libnvrtc.so
-- CUDA include path : /usr/local/cuda/include
-- NVCC executable : /usr/local/cuda/bin/nvcc
-- CUDA host compiler : /usr/bin/cc
-- USE_TENSORRT : OFF
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS :
-- USE_FFMPEG : OFF
…
any suggestion?
|
st97885
|
I am trying to implement a 2-layer neural network using different methods (TensorFlow, PyTorch, and from scratch) and then compare their performance on the MNIST dataset.
I am not sure what mistakes I have made, but the accuracy in PyTorch is only about 10%, which is basically random guessing. I think the weights are probably not getting updated at all.
Note that I intentionally use the dataset provided by TensorFlow to keep the data consistent across the 3 methods for an accurate comparison.
from tensorflow.examples.tutorials.mnist import input_data
import torch

mnist_m = input_data.read_data_sets('MNIST_data')  # assumed: the original post omits this setup line

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = torch.nn.Linear(784, 100)
        self.fc2 = torch.nn.Linear(100, 10)

    def forward(self, x):
        # x -> (batch_size, 784)
        x = torch.relu(x)
        # x -> (batch_size, 10)
        x = torch.softmax(x, dim=1)
        return x

net = Net()
net.zero_grad()
Loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

for epoch in range(1000):  # loop over the dataset multiple times
    batch_xs, batch_ys = mnist_m.train.next_batch(100)
    # convert to appropriate settings
    # note the input to the linear layer should be (n_sample, n_features)
    batch_xs = torch.tensor(batch_xs, requires_grad=True)
    # batch_ys -> (batch_size,)
    batch_ys = torch.tensor(batch_ys, dtype=torch.int64)
    # forward
    # output -> (batch_size, 10)
    output = net(batch_xs)
    # result -> (batch_size,)
    result = torch.argmax(output, dim=1)
    loss = Loss(output, batch_ys)
    # backward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
|
st97886
|
Guanqun_Yang:
    def forward(self, x):
        # x -> (batch_size, 784)
        x = torch.relu(x)
        # x -> (batch_size, 10)
        x = torch.softmax(x, dim=1)
        return x
You are not using fc1 and fc2 at all.
Your input only passes through relu and softmax, hence no learning.
Use fc1 and fc2 in the forward function as follows:
def forward(self, x):
    # x -> (batch_size, 784)
    x = self.fc1(x)
    x = torch.relu(x)
    x = self.fc2(x)
    # x -> (batch_size, 10)
    x = torch.softmax(x, dim=1)
    return x
Also, you need to flatten the tensor before feeding it to Net (I will leave that as an exercise).
|
st97887
|
In addition to what @bhushans23 said, you shouldn’t use softmax as the last layer, since nn.CrossEntropyLoss expects logits and applies nn.LogSoftmax internally.
Just remove the softmax from your model and make sure the other layers are used.
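Putting both replies together, a minimal sketch of the corrected forward (the flattening line is the exercise mentioned above):

def forward(self, x):
    x = x.view(x.size(0), -1)   # flatten to (batch_size, 784); a no-op if the input is already flat
    x = torch.relu(self.fc1(x))
    x = self.fc2(x)             # return raw logits; CrossEntropyLoss applies log-softmax internally
    return x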
|
st97888
|
Edit: I have created an issue on the GitHub repo.
In a previous thread I was confused because I thought my GPU was bad. It turns out it’s something else.
On a GTX 1080 on Ubuntu 18.04, ResNet50 takes 12ms on average for a forward pass. I used this script:
import numpy as np
import torch
from timeit import default_timer as timer
from torchvision.models import resnet50

def main():
    # Define model and input data
    resnet = resnet50().cuda()
    x = torch.from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32)).cuda()  # Entire network
    # x = torch.from_numpy(np.random.rand(1, 64, 32, 32).astype(np.float32)).cuda()  # Stub alone
    # The first pass is always slower, so run it once
    resnet.forward(x)
    # Measure elapsed time
    passes = 20
    total_time = 0
    for _ in range(passes):
        start = timer()
        resnet.forward(x)
        delta = timer() - start
        print('Forward pass: %.3fs' % delta)
        total_time += delta
    print('Average forward pass: %.3fs' % (total_time / passes))

if __name__ == '__main__':
    main()
On my installation of Windows 7, it runs in 58ms. I made sure that:
The same version of python was used (a fresh 3.6.6 install in both cases)
The same version of pytorch was used with the same cuda version (0.4.1 and 9.2 respectively)
Python was installed on the same SSD in both cases (I doubt that matters but you never know)
I have installed the most recent nvidia drivers on each OS (390 on Ubuntu vs 4.16 on Windows)
Can anyone enlighten me or reproduce this? Otherwise I’ll open an issue on GitHub.
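One general caveat on this kind of benchmark (a side note, not from the thread): CUDA calls are asynchronous, so for accurate per-pass numbers the timer is usually bracketed with torch.cuda.synchronize(), e.g.:

torch.cuda.synchronize()  # make sure pending work is done before starting the clock
start = timer()
resnet.forward(x)
torch.cuda.synchronize()  # wait until the forward pass has actually finished on the GPU
delta = timer() - start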
|
st97889
|
Can I somehow figure out from a Module what the input layer and output layer are?
i.e. if I have some Layer in hand, can I infer what the input tensors are? I would like to have this information to compute the jacobian of the layer outputs w.r.t layer inputs.
autograd builds some kind of graph during the forward pass, so can I use this somehow?
|
st97890
|
I just want to iterate through a model and extract all Conv2d layers’ parameters.
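A minimal sketch of one way to do this (my addition; assuming model is your nn.Module):

import torch.nn as nn

for name, module in model.named_modules():
    if isinstance(module, nn.Conv2d):
        print(name, module.weight.shape)
        if module.bias is not None:
            print(name, module.bias.shape)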
|
st97891
|
Does anyone know how to translate a vectorized version of ||x - w||^2 to PyTorch? I have a working version in NumPy, but it seems there are issues with summing over an axis in PyTorch, so I’m not sure how to translate my code:
WW = np.sum(np.multiply(W,W), axis=0, dtype=None, keepdims=True)
XX = np.sum(np.multiply(x,x), axis=1, dtype=None, keepdims=True)
Delta_tilde = 2.0*np.dot(x,W) - (WW + XX)
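For what it’s worth, a direct sketch of the same computation in PyTorch (my addition; torch.sum supports dim and keepdim, and np.dot becomes torch.mm for 2-D inputs; the shapes are assumptions):

import torch

x = torch.randn(128, 10)   # assumed: batch of vectors, features along dim 1
W = torch.randn(10, 50)    # assumed: one center per column

WW = torch.sum(W * W, dim=0, keepdim=True)        # (1, 50)
XX = torch.sum(x * x, dim=1, keepdim=True)        # (128, 1)
Delta_tilde = 2.0 * torch.mm(x, W) - (WW + XX)    # (128, 50), via broadcasting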
|
st97892
|
I was assuming that w and x are vectors in mini-batches with the first dimension (dim 0) being the batch dimension.
|
st97893
|
w holds the centers (of potentially an RBF), so W might not be the same size as the batch size. It’s just like the number of filters in a fully connected network.
|
st97894
|
Hi @Brando_Miranda, did you solve this question? I have the same doubts… I’d like to see a more detailed example than those offered here.
|
st97895
|
I’m just using the original code I posted in my question, but translated to PyTorch.
|
st97896
|
I’ve trained a model like this:
github.com
iarroyof/pytorch/blob/master/gauss_kernel.py

# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable
from pdb import set_trace as st

def kernel_product(w, x, mode="gaussian", s=0.1):
    w_i = torch.t(w).unsqueeze(1)
    x_j = x.unsqueeze(0)
    xmy = ((w_i - x_j)**2).sum(2)
    #st()
    if mode == "gaussian": K = torch.exp(- (torch.t(xmy) ** 2) / (s**2))
    elif mode == "laplace": K = torch.exp(- torch.sqrt(torch.t(xmy) + (s**2)))
    elif mode == "energy": K = torch.pow(torch.t(xmy) + (s**2), -.25)
    return K

class MyReLU(torch.autograd.Function):
    """
    We can implement our own custom autograd Functions by subclassing
[file truncated]
|
st97897
|
I’m pretty new to this and I’m not sure if/how this would be possible. The architecture I’m planning to use feeds a set of inputs to a hidden layer, then has another input join the outputs of that hidden layer, and both feed into a second hidden layer.
Is there a way for me to do this or do I need to have a workaround where I use two networks?
Here’s a picture of the architecture:
[image: network_architecture.PNG]
|
st97898
|
You could slice the input tensor, use the first part for the linear layer, and concatenate the result with the second part. Here is a small example:
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(8, 8)
        self.fc2 = nn.Linear(12, 12)
        self.fc3 = nn.Linear(12, 20)

    def forward(self, x):
        # Use first part of x
        x1 = F.relu(self.fc1(x[:, :8]))
        # Concatenate the result of the first part with the second part of x
        x = torch.cat((x1, x[:, 8:]), dim=1)
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = MyModel()
x = torch.randn(1, 12)
output = model(x)
|
st97899
|
Getting the vector representation of each document, where each document is written in one row of a csv file (example.csv here), using https://github.com/inejc/paragraph-vectors
So I followed all the steps above, and after all the steps I get the following:
[jalal@goku data]$ ls -ltra
total 308
-rw-r--r--. 1 jalal cs-grad 863 Nov 9 00:59 example.csv
drwxr-xr-x. 10 jalal cs-grad 4096 Nov 9 00:59 ..
-rw-r--r--. 1 jalal cs-grad 136981 Nov 9 06:53 example_model.dbow_numnoisewords.2_vecdim.100_batchsize.32_lr.0.001000_epoch.8_loss.1.108974.pth.tar
-rw-r--r--. 1 jalal cs-grad 136981 Nov 9 07:02 example_model.dbow_numnoisewords.2_vecdim.100_batchsize.32_lr.0.001000_epoch.9_loss.1.145683.pth.tar
-rw-r--r--. 1 jalal cs-grad 17395 Nov 9 07:02 example_model.dbow_numnoisewords.2_vecdim.100_batchsize.32_lr.0.001000.png
-rw-r--r--. 1 jalal cs-grad 9 Nov 9 07:05 example_model.dbow_numnoisewords.2_vecdim.100_batchsize.32_lr.0.001000.csv
drwxr-xr-x. 2 jalal cs-grad 4096 Nov 9 07:05 .
How can I actually get the vector representation of each document?
In case you are interested, here’s what example.csv looks like:
[jalal@goku data]$ cat example.csv
text
"In the week before their departure to Arrakis, when all the final scurrying about had reached a nearly unbearable frenzy, an old crone came to visit the mother of the boy, Paul."
"It was a warm night at Castle Caladan, and the ancient pile of stone that had served the Atreides family as home for twenty-six generations bore that cooled-sweat feeling it acquired before a change in the weather."
"The old woman was let in by the side door down the vaulted passage by Paul's room and she was allowed a moment to peer in at him where he lay in his bed."
"By the half-light of a suspensor lamp, dimmed and hanging near the floor, the awakened boy could see a bulky female shape at his door, standing one step ahead of his mother. The old woman was a witch shadow - hair like matted spiderwebs, hooded 'round darkness of features, eyes like glittering jewels."
Commands I ran:
[jalal@goku paragraphvec]$ python train.py start --data_file_name 'example.csv' --num_epochs 100 --batch_size 32 --num_noise_words 2 --vec_dim 100 --lr 1e-3
Dataset comprised of 4 documents.
Vocabulary size is 109.
Training started.
and
[jalal@goku paragraphvec]$ python export_vectors.py start --data_file_name 'example.csv' --model_file_name /scratch2/NAACL2018/text_experiment/paragraph-vectors/models/example_model.dbow_numnoisewords.2_vecdim.100_batchsize.32_lr.0.001000_epoch.86_loss.0.827747.pth.tar
|
st97900
|
So I don’t have access to a GPU, but I have access to a cluster of 25 Xeon CPUs. Here are my questions:
If I set the number of nodes in qpub to 24, will PyTorch automatically use all the nodes? Someone earlier said that it relies on Intel MKL, and MKL should be able to detect all the available CPUs.
Also, does setting num_workers take care of automatically distributing the workload?
|
st97901
|
PyTorch will use all the cores on a single machine.
If you want to use 25 Xeon machines, then you will have to write some special logic using our torch.distributed functions: http://pytorch.org/docs/master/distributed.html
|
st97902
|
Given that the torch.distributed module is still in beta, does torch.multiprocessing have any performance disadvantage, other than having to run each script separately?
|
st97903
|
Hi, thank you for asking this question, because I have a similar issue. Were you able to distribute your training over many machines? Could you tell me how you did it, please? I don’t have knowledge of parallel or distributed computing.
|
st97904
|
I’ve got a DataLoader loading a custom dataset.
loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size, sampler=train_sampler,
pin_memory=(torch.cuda.is_available()), num_workers=0)
I want to get rid of Variable and use torch.tensor properly, without extra copying. An example of my training epoch looks like:
for batch_idx, (input, target) in enumerate(loader):
    # Create variables
    if torch.cuda.is_available():
        input_var = torch.autograd.Variable(input.cuda(async=True))
        target_var = torch.autograd.Variable(target.cuda(async=True))
    else:
        input_var = torch.autograd.Variable(input)
        target_var = torch.autograd.Variable(target)
    # compute output
    output = model(input_var)
    loss = torch.nn.functional.mse_loss(output, target_var)
With data from a DataLoader, do I change torch.autograd.Variable(input.cuda(async=True)) to torch.from_numpy(input).cuda(async=True) to prevent copying?
Also, when I test an epoch I have
input_var = torch.autograd.Variable(input.cuda(async=True), volatile=True)
target_var = torch.autograd.Variable(target.cuda(async=True), volatile=True)
What do I replace volatile with?
Cheers
|
st97905
|
Using the new methods you would write:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
...
for batch_idx, (input, target) in enumerate(loader):
    input = input.to(device)
    target = target.to(device)
    ...
The volatile argument was replaced with the with torch.no_grad(): statement.
You should just wrap your validation code in this statement to disable gradients:
with torch.no_grad():
    for batch_idx, (data, target) in enumerate(val_loader):
        ...
|
st97906
|
Hi guys, Thank you so much for the support on here. I don’t know parallel or distributed computing so please excuse me if my question is naive.
I will use an HPC (high-performance computer) for my research, and I don’t know about parallel or distributed computing. I really don’t understand DistributedDataParallel() in PyTorch, especially init_process_group(). What is the meaning of initializing a process group? And what is
init_method : URL specifying how to initialize the package.
for example (I found those in the documentation):
'tcp://10.1.1.20:23456' or 'file:///mnt/nfs/sharedfile'
What are those URLs?
What is the Rank of the current process?
Is world_size the number of GPUs?
It would be really appreciated if someone explained what DistributedDataParallel() and init_process_group() are and how to use them in a simple way, because I don’t know parallel or distributed computing.
I will use things like Slurm(sbatch) in the HPC.
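For orientation, a brief sketch of how these pieces usually fit together (not from the thread; the address, rank, and world_size values are assumptions for illustration): each of the world_size processes calls init_process_group with its own rank (0 .. world_size-1). The init_method URL is just the rendezvous point they all agree on, either the TCP address of the rank-0 machine or a file on a shared filesystem. world_size is the total number of processes, often one per GPU.

import torch
import torch.distributed as dist
import torch.nn as nn

# Run one copy of this per process; rank and world_size usually come
# from the launcher (e.g. Slurm environment variables).
dist.init_process_group(
    backend='nccl',                       # 'gloo' for CPU-only setups
    init_method='tcp://10.1.1.20:23456',  # rendezvous point: address of the rank-0 machine (example values)
    rank=0,                               # this process's index, 0 .. world_size-1
    world_size=4,                         # total number of participating processes
)

model = nn.Linear(10, 10).cuda()          # assumed placeholder model
model = nn.parallel.DistributedDataParallel(model)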
|
st97907
|
I am running my code on a server that does not allow upgrading to PyTorch 0.4.1; it is still using 0.3.1. It has multiple GPUs (4):
| 0 GeForce GTX TIT... Off | 0000:05:00.0 Off | N/A |
| 72% 86C P2 209W / 250W | 12065MiB / 12204MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX TIT... Off | 0000:06:00.0 Off | N/A |
| 80% 87C P2 229W / 250W | 11714MiB / 12206MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX TIT... Off | 0000:09:00.0 Off | N/A |
| 48% 82C P2 193W / 250W | 11714MiB / 12206MiB | 90% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX TIT... Off | 0000:0A:00.0 Off | N/A |
| 22% 26C P8 16W / 250W | 2MiB / 12206MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
I want to use GPU 3 only, because it is free now. I have set it up in my Python code as:
os.environ['CUDA_VISIBLE_DEVICES']=3
torch.cuda.set_device(3)
model.cuda()
I have printed the current GPU device successfully as:
torch.cuda.device_count() =', 4L
torch.cuda.current_device() =', 3L
However, it still uses GPU 0 as the default. How should I correct the above code to use GPU 3?
|
st97908
|
Solved by ptrblck in post #2
|
st97909
|
Make sure to set the visible devices before any other imports.
The easiest way would be to run your whole script using:
CUDA_VISIBLE_DEVICES=3 python script.py args
Also note that setting the visible devices to a particular device masks all other devices.
That means you should only see one device, with id 0, in your python script.
torch.cuda.set_device(3) therefore shouldn’t work.
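If you’d rather set it from Python, a small sketch (my addition; the key points being that the environment variable must be a string and must be set before CUDA is initialized):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'  # must be a string, set before importing torch

import torch
print(torch.cuda.device_count())    # 1 -- only the masked-in device is visible
print(torch.cuda.current_device())  # 0 -- physical GPU 3 now appears as cuda:0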
|
st97910
|
ptrblck:
…that setting the visible devices to a particular device masks all other devices. That means you should only see the one device now with id 0 in your python script.
Thanks. My problem has been solved; it was due to a PID difference.
|
st97911
|
This is probably a duplicate, but I haven’t found an answer. It’s not particularly important, as torch works just fine, but it bites my curiosity and bothers my OCD: what is it about the python torch module that pylint fails to traverse?
For some reason pylint redlines torch on some calls, claiming that its members aren’t valid.
for example
import torch
torch.__version__ #ok
torch.cuda.is_available() #ok
a = torch.randn(3,32,32) #redlines torch in this call
b = torch.unsqueeze(a,0) #redlines torch in this call
The alleged error reads: [pylint] E1101: Module 'torch' has no 'unsqueeze' member
I’ve seen this error since 0.3.0, and I still see it in a build of torch 1.0 from a few days ago.
I have the correct interpreter set up and pylint properly installed, both torch and pylint via conda.
I only rarely recall having seen this with other projects; none come to mind. Torchvision is fine.
Fix it or not, if anyone knows the answer I’d be curious to know. Thanks.
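For what it’s worth (my note, not from the thread): this is a known pylint limitation with C-extension modules, whose members are defined in compiled code and thus invisible to static inspection. A common workaround, assuming your pylint version supports the option, is to whitelist the package:

pylint --extension-pkg-whitelist=torch my_script.py

# or in .pylintrc
[MASTER]
extension-pkg-whitelist=torch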
|
st97912
|
I am trying to do 10-fold CV with a DNN. However, the loss curve changes drastically depending on how I do it. When I run the 10-fold CV inside the fit method of the Module class, the loss curve has large fluctuations. But when I run the for loop over the KFold object and fit the training data separately, the loss curve is smooth and converges quickly.
Below is my code:
from sklearn.model_selection import KFold

# tune model in 10-fold CV:
# n-fold CV on hc data:
nfold = 10
seed = 111
kf = KFold(n_splits=nfold, shuffle=True, random_state=seed)

############################################# DNN (pytorch)
import torch
import torch.nn.functional as F
import torch.nn as nn
from torch.autograd import Variable
from torch.optim.lr_scheduler import ReduceLROnPlateau
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedShuffleSplit
import matplotlib.pyplot as plt

torch.manual_seed(999)  # reproducible

def smooth(y, box_pts):
    box = np.ones(box_pts) / box_pts
    y_smooth = np.convolve(y, box, mode='same')
    return y_smooth
def weights_init(m):
    if isinstance(m, nn.Linear):
        print('initialize weight...')
        nn.init.xavier_uniform_(m.weight.data)

class Model(nn.Module):

    def __init__(self, n_hidden1=400, n_hidden2=150, n_hidden3=50,
                 min_loss=.5, l_rate=.001, max_epochs=10000):
        super(Model, self).__init__()
        self.n_hidden1 = n_hidden1
        self.n_hidden2 = n_hidden2
        self.n_hidden3 = n_hidden3
        self.min_loss = min_loss
        self.l_rate = l_rate
        self.max_epochs = max_epochs

    def build_layer(self, input_dim):
        if input_dim < 100:
            print('4 layers .................')
            # MD and FA:
            n_hidden1 = 50
            n_hidden2 = 20
            n_hidden3 = 5
            self.layer1 = nn.Linear(input_dim, n_hidden1)
            self.layer2 = nn.Linear(n_hidden1, n_hidden2)
            self.layer3 = nn.Linear(n_hidden2, n_hidden3)
            self.layer4 = nn.Linear(n_hidden3, 1)
            #self.layer5 = nn.Linear(5, 1)
        elif input_dim < 150:
            print('5 layers .................')
            # gray matter volume
            n_hidden1 = 200
            n_hidden2 = 100
            n_hidden3 = 50
            self.layer1 = nn.Linear(input_dim, n_hidden1)
            self.layer2 = nn.Linear(n_hidden1, n_hidden2)
            self.layer3 = nn.Linear(n_hidden2, n_hidden3)
            self.layer4 = nn.Linear(n_hidden3, 20)
            self.layer5 = nn.Linear(20, 1)
            #self.layer6 = nn.Linear(5, 1)
        elif input_dim < 300:
            print('5 layers .................')
            # ALFF and ReHo
            n_hidden1 = 200
            n_hidden2 = 150
            n_hidden3 = 50
            self.layer1 = nn.Linear(input_dim, n_hidden1)
            self.layer2 = nn.Linear(n_hidden1, n_hidden2)
            self.layer3 = nn.Linear(n_hidden2, 1)
            #self.layer4 = nn.Linear(n_hidden3, 20)
            #self.layer5 = nn.Linear(20, 1)
            #self.layer6 = nn.Linear(5, 1)
        else:
            print('6 layers .................')
            self.layer1 = nn.Linear(input_dim, self.n_hidden1)
            self.layer2 = nn.Linear(self.n_hidden1, self.n_hidden2)
            self.layer3 = nn.Linear(self.n_hidden2, self.n_hidden3)
            self.layer4 = nn.Linear(self.n_hidden3, 30)
            self.layer5 = nn.Linear(30, 5)
            self.layer6 = nn.Linear(5, 1)

        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x, **kwargs):
        out = F.relu(self.layer1(x))  # activation function for hidden layer
        #out1 = self.sigmoid(self.layer1(x))
        #out1 = self.dropout(out1)

        #out2 = F.relu(self.layer2(out1))
        out = self.sigmoid(self.layer2(out))
        #out2 = self.dropout(out2)

        if x.shape[1] < 100:
            out = F.relu(self.layer3(out))
            #out = self.dropout(out)
            y_pred = self.layer4(out)
        elif x.shape[1] < 300:
            out = self.sigmoid(self.layer3(out))
            out = F.relu(self.layer4(out))
            y_pred = self.layer5(out)  # linear output
        else:
            out = self.sigmoid(self.layer3(out))
            out = self.sigmoid(self.layer4(out))
            out = F.relu(self.layer5(out))
            y_pred = self.layer6(out)  # linear output

        return y_pred

    def update_plot(loss_list, corr_list, loss_info):
        corr_list_s = smooth(corr_list, box_pts=100)
        ax.clear()
        ax.plot(range(epoch), loss_list, 'r--', range(epoch), corr_list*10, 'g--',
                range(epoch), corr_list_s*10, 'y--')
        plt.ylim(0, 30)
        plt.text(10, 30, loss_info, fontsize=10)
        fig.canvas.draw()

    def fit(self, X, y, max_loss=10, tune_loss=False):
        """
        if tune_loss:
        1. use 80% of data as training and 20% as test set to get the optimized number of epochs
        2. reset the model and use all the data to train the model.

        Note: the tuned loss may not work better for independent validation data. As in K-fold CV, the training
        and validation set are not independent (e.g. a larger mean value of the training set indicates a smaller mean
        of the validation set). It is recommended to first use tune_loss to explore the best min_loss, and run
        the model again with tune_loss=False.
        """
        min_loss = self.min_loss
        max_epochs = self.max_epochs

        #torch.manual_seed(999) # reproducible
        self.build_layer(input_dim=X.shape[1])

        criterion = nn.MSELoss()  # Mean Squared Loss

        optimizer = torch.optim.SGD(self.parameters(),
                                    lr=self.l_rate,
                                    weight_decay=1e-3,
                                    momentum=0.9,
                                    dampening=0,
                                    nesterov=True)  # Stochastic Gradient Descent

        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=.5)

        if tune_loss == True:
            print('optimizing loss value ...')
            opt_loss_list = []

            for train_index, test_index in kf.split(X, y):
                #print("TRAIN:", train_index, "TEST:", test_index)
                X_train, X_test = X[train_index], X[test_index]
                y_train, y_test = y[train_index], y[test_index]

                X_train_torch = Variable(torch.from_numpy(X_train))
                y_train_torch = Variable(torch.from_numpy(y_train))
                X_train_torch = X_train_torch.float()
                y_train_torch = y_train_torch.float()
                labels = y_train_torch.view(y_train.shape[0], 1)

                X_test_torch = Variable(torch.from_numpy(X_test))
                X_test_torch = X_test_torch.float()

                # Collect errors to evaluate performance
                loss_list = []
                corr_list = []

                torch.manual_seed(999)
                self.apply(weights_init)

                epoch = 0
                fig = plt.figure(figsize=(5, 5))
                ax = fig.add_subplot(111)
                plt.ion()
                fig.show()
                fig.canvas.draw()

                while True:
                    # increase the number of epochs by 1 every time
                    epoch += 1
                    #scheduler.step()
                    # clear grads
                    optimizer.zero_grad()
                    # forward to get predicted values
                    outputs = self(X_train_torch)  # model predict, outputs = net.forward(inputs)

                    loss = criterion(outputs, labels)
                    loss.backward()  # back props
                    optimizer.step()  # update the parameters

                    y_prediction = self(X_test_torch)
                    y_prediction = y_prediction.detach().numpy().flatten()

                    corr = np.corrcoef(y_prediction, y_test)[0, 1]
                    corr_list = np.append(corr_list, corr)
                    loss_list = np.append(loss_list, loss.item())

                    max_corr = np.amax(corr_list)

                    if epoch % 100 == 0:
                        loss_info = 'epoch %d, loss %.4f, test cor %.4f' % (epoch, loss.item(), corr)
                        update_plot(loss_list, corr_list, loss_info)

                    if epoch > max_epochs or loss.item() < min_loss:
                        break

                opt_epochs = corr_list_s.argmax()
                opt_loss = loss_list[opt_epochs]

                if opt_loss > max_loss:
                    opt_loss = np.nan

                opt_loss_list = np.append(opt_loss_list, opt_loss)

                print('stop at epochs: %d, loss %.4f, with test cor: %.4f' % \
                      (opt_epochs, opt_loss, corr_list[opt_epochs]))

                loss_info = 'epoch %d, loss %.4f, test cor %.4f' % (epoch, loss.item(), corr)
                opt_loss_info = 'opt_epoch %d, opt_loss %.4f, test cor %.4f' % (opt_epochs, opt_loss, corr_list[opt_epochs])
                ax.clear()
                ax.plot(range(epoch), loss_list, 'r--', range(epoch), corr_list*10, 'g--',
                        range(epoch), corr_list_s*10, 'y--')
                plt.ylim(0, 30)
                plt.text(10, 30, loss_info, fontsize=10)
                plt.text(10, 20, opt_loss_info, fontsize=10)
                ax.axvline(opt_epochs)
                fig.canvas.draw()

            opt_loss_mean = np.nanmean(opt_loss_list)
            print('boots finished with optimized loss: %.4f' % opt_loss_mean)

        else:
            torch.manual_seed(999)
            self.apply(weights_init)
            print('training with all training data:')
            opt_loss_mean = -9999

            X_train_torch = Variable(torch.from_numpy(X))
            y_train_torch = Variable(torch.from_numpy(y))
            X_train_torch = X_train_torch.float()
            y_train_torch = y_train_torch.float()

            labels = y_train_torch.view(y.shape[0], 1)
            loss_list = []
            epoch = 0

            fig = plt.figure(figsize=(8, 8))
            ax = fig.add_subplot(111)
            plt.ion()

            fig.show()
            fig.canvas.draw()

            while True:
                # increase the number of epochs by 1 every time
                epoch += 1

                # clear grads
                #scheduler.step()
                optimizer.zero_grad()
                # forward to get predicted values
                outputs = self(X_train_torch)  # model predict, outputs = net.forward(inputs)

                loss = criterion(outputs, labels)
                loss.backward()  # back props
                optimizer.step()  # update the parameters

                loss_list = np.append(loss_list, loss.item())

                if epoch % 100 == 0:
                    loss_info = 'epoch %d, loss %.4f' % (epoch, loss.item())
                    ax.clear()
                    ax.plot(range(epoch), loss_list, 'r--')
                    plt.ylim(0, 30)
                    plt.text(10, 30, loss_info, fontsize=20)
                    fig.canvas.draw()

                # define the mean loss to prevent overfitting with bad data.
                if loss.item() < opt_loss_mean or loss.item() < min_loss or epoch > max_epochs:
                    print('stop with loss', loss.item())
                    break

    def predict(self, X_test):
        X_test_torch = Variable(torch.from_numpy(X_test))
        X_test_torch = X_test_torch.float()

        y_prediction = self(X_test_torch)
        y_prediction = y_prediction.detach().numpy().flatten()

        return y_prediction
If I run with tune_loss=True (doing the CV in the fit method, with weights_init applied before each loop):
net = Model(min_loss = .9, l_rate = 1e-3, max_epochs = 10000)
%matplotlib notebook
net.fit(X_hc, y_hc, tune_loss = True)
I got the curve like this:
When I run fit in the for loop of the 10-fold CV:
for train_index, test_index in kf.split(X):
    print('run_model on CV: %d' % i)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    net.fit(X_train, y_train, tune_loss=False)
I got the curve like this:
Could anyone explain why this happens?
|
st97913
|
I’ve got torch 0.4.1 on python3.5. A paper I’m trying to reproduce claims they have a 13ms execution time for a model based on ResNet50 on GTX 1080 Ti using Caffe.
I’ve been able to translate the exact network to PyTorch using MMdnn; however, the execution of the ResNet part alone takes between 29 and 39ms on my end, using a GTX 1080, and the entire network takes between 35 and 54ms (I’m surprised it varies so much between subsequent executions; is that normal in PyTorch?). I’ve also looked at torchvision’s resnet50 model for comparison, but the execution time is even worse: 53ms for the stub I have in common with my network and 58ms for all of ResNet50.
I understand that a GTX 1080 Ti is better than a 1080, but the difference is still too large. Unfortunately, I haven’t been able to run the network in Caffe on my machine for comparison, as Caffe is hell to compile (I need a custom layer).
Here is my code for reproduction:
import numpy as np
import torch
from timeit import default_timer as timer
from torchvision.models import resnet50

def main():
    # Define model and input data
    resnet = resnet50().cuda()
    x = torch.from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32)).cuda()  # Entire network
    # x = torch.from_numpy(np.random.rand(1, 64, 32, 32).astype(np.float32)).cuda()  # Stub alone
    # The first pass is always slower, so run it once
    resnet.forward(x)
    # Measure elapsed time
    passes = 20
    total_time = 0
    for _ in range(passes):
        start = timer()
        resnet.forward(x)
        delta = timer() - start
        print('Forward pass: %.3fs' % delta)
        total_time += delta
    print('Average forward pass: %.3fs' % (total_time / passes))

if __name__ == '__main__':
    main()
When I refer to the stub, it means I commented out the following lines in torchvision/models/resnet.py:
def forward(self, x):
    # x = self.conv1(x)
    # x = self.bn1(x)
    # x = self.relu(x)
    # x = self.maxpool(x)
    x = self.layer1(x)
    x = self.layer2(x)
    x = self.layer3(x)
    x = self.layer4(x)
    # x = self.avgpool(x)
    # x = x.view(x.size(0), -1)
    # x = self.fc(x)
    return x
|
st97914
|
Well, my problem just sort of got solved… I managed to temporarily get my hands on a GTX 1080 Ti, and the difference in performance actually is that big: 10ms on average for all of ResNet50. I can’t say I expected that.
edit: Actually it seems to be a matter of configuration… if I run Linux on my own machine, I get 12ms, which is much better (vs 58ms on Windows). Guess I gotta keep exploring…
|
st97915
|
I have a dataset which has images and corresponding texts.
I want to cut parts of the dataset to get input data in the shape of 4-image sequences with corresponding texts,
like data = [ [image1, image2, image3, image4], [text1, ... text4] ]
So, if I have 6 images, I can make 3 input samples.
My dataset loader is like below:
import h5py as h5  # assumed: the post does not show its imports
from torch.utils.data import Dataset

class dataset(Dataset):
    def __init__(self, path, start, end):
        self.path = path
        self.data = self._getsubsequence(start, end)

    def _getMegabatch(self, start, end):
        file = h5.File(self.path)[start:end]
        return file

    def _getsubsequence(self, start, end):
        megabatch = self._getMegabatch(start, end)
        """
        returns set of 4-length image sequences
        """

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        data = self.data
        return data[index]
I’m wondering whether this is a common way of generating mini-batches from mega-batch data.
Could you give me some advice?
|
st97916
|
I’m not sure I understand the implementation of your Dataset fully.
Are you creating a new Dataset for each sequence, i.e. when are you passing new start and end values?
It looks like you are returning one sample from the sequence. As far as I’ve understood your use case, I thought you would like to return 4 samples holding the images and text data?
Could you post the shapes of your whole dataset and each sample?
|
st97917
|
@ptrblck Okay, I think the above code is too simplified.
Overall, from my whole dataset, _getMegabatch gets the mega-batch dataset[start:end], and from the mega-batch I want to return mini-batches.
Specifically, my whole dataset is a list [images, texts], with images.shape = [20000, 224, 224, 3] and texts.shape = [20000, 30],
and _getMegabatch() slices the whole data into mega-batch data [images, texts] with images.shape = [6, 224, 224, 3] and texts.shape = [6, 30] (the mega-batch size 6 is just an example).
For easy understanding, let’s represent the mega-batch images (shape [6, 224, 224, 3]) and texts as [a, b, c, d, e, f] and [1, 2, 3, 4, 5, 6], where each letter/number is an image/text.
From this mega-batch data I am going to get all 4-length sub-sequences with _getsubsequence, like self.data = [ [ [a, b, c, d], [b, c, d, e], [c, d, e, f] ], [ [1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6] ] ]
Finally, __getitem__ will return self.data[0][index], self.data[1][index], e.g. [a, b, c, d], [1, 2, 3, 4]
I’m not sure whether this mega-batch-to-mini-batch dataloader structure is efficient or not.
|
st97918
|
A really basic question here: I’ve read in a few places that PyTorch seems to be geared towards “development” and not production. Is there some reason why it is not suitable for production?
The claims on the site ("The memory usage in PyTorch is extremely efficient", "PyTorch is quite fast – whether you run small or large neural networks", etc.) seem like great benefits for running in production (and in development too).
|
st97919
|
If you don’t mind running Python in production, then PyTorch is ready for production.
However, industrial-grade production deployments prefer C/C++/Java (for example, mobile).
Hence our tag of "PyTorch is not suited for production", tied to our deep entanglement with Python.
We also tend to always prioritize research flexibility over freezing specs / a more structured static-model approach that would also suit production.
|
st97920
|
Is PyTorch claiming performance improvements over libraries like TensorFlow even without C or Cython?
|
st97921
|
We don’t claim performance improvements over any other library; we leave that exercise to the community.
|
st97922
|
Forgive me for bringing back this discussion again, but has there been anything new in PyTorch that enables production?
Especially since the new pytorch.org headline says:
"PyTorch: FROM RESEARCH TO PRODUCTION"
??
It is still Pythonic, so what does PRODUCTION refer to in the headline?
|
st97923
|
Coming with the 1.0 preview, you can now use the just-in-time compiler (torch.jit) to load your model in C++. Have a look at this example.
Also, using libtorch you now have a C++ API. Here is an end-to-end example.
Soumith might have more information about production usage.
|
st97924
|
I’ve got a simple feedforward net that is not learning; I get the same loss with every epoch.
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

Book = pd.ExcelFile("path")
Sheet = Book.parse("R_MWL_N")
Input_Data = Sheet.R
Target_Data = Sheet.MWL
Input_Data = torch.FloatTensor(Input_Data)
Input_Data = Input_Data.view(365, 1)
Target_Data = torch.FloatTensor(Target_Data)
Target_Data = Target_Data.view(365, 1)

batch_size = 1
learning_rate = 0.01
epochs = 20
days_elapsed = 7

class LoadData(Dataset):
    def __init__(self, inputs, targets, days_elapsed, transform=None):
        assert len(inputs) == len(targets)
        self.inputs = inputs
        self.targets = targets
        self.days_elapsed = days_elapsed
        self.transform = transform

    def __len__(self):
        samples = len(self.targets) - self.days_elapsed + 1
        return samples

    def __getitem__(self, idx):
        inputs = self.inputs[idx:idx+self.days_elapsed]
        inputs = np.array(inputs)
        inputs = inputs.reshape((1, len(inputs)))
        target = np.array(self.targets[idx + self.days_elapsed - 1])
        target = target.reshape((1, len(target)))
        if self.transform is not None:
            inputs = self.transform(inputs)
        return (inputs, target)

train_dataset = LoadData(Input_Data, Target_Data, days_elapsed, transform=None)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False)

input_size = 7
hidden_size = 20
output_size = 1

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, output_size)
        self.ReLU = nn.ReLU()
        self.Sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.fc1(x)
        x = self.ReLU(x)
        x = self.fc2(x)
        x = self.Sigmoid(x)
        return x

FFNN = Net(input_size, hidden_size, output_size)
print(FFNN)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(FFNN.parameters(), lr=learning_rate)

for epoch in range(epochs):
    for batch_idx, (inputtensor, targettensor) in enumerate(train_loader):
        inputtensor = inputtensor.requires_grad_()
        targettensor = targettensor.requires_grad_()
        optimizer.zero_grad()
        FFNN_output = FFNN(inputtensor)
        loss = criterion(FFNN_output, targettensor)
        loss.backward()
        optimizer.step()
    for i in range(epoch):
        print("#" + str(i) + " Loss: " + format(loss.item()))
|
st97925
|
I want to understand how ReLU does its backward pass. I find it is implemented by torch._C._nn.threshold, not an autograd.Function object.
I think all objects from torch._C._nn are defined by aten/src/ATen/nn.yaml, and a backward implementation of torch._C._nn.threshold is THNN_(Threshold_updateGradInput). However, I find it accepts THTensor *input, and I do not know where it gets this tensor, since it is not an autograd.Function which can save the input as ctx.
|
st97926
|
Solved by smth in post #2
|
st97927
|
an autograd.Function (sort of, but in C++) is generated from nn.yaml and other metadata, which saves the input and passes it to the Threshold_updateGradInput function.
If you have a local source build of PyTorch, looking at the file build/aten/src/ATen/CPUFloatType.cpp would help
|
st97928
|
The "other metadata" is what I might’ve missed: anything in Declarations.cwrap, generic/THNN.h (generic/THNN.h is written in a certain way, with structured comments in the code about which arguments are buffers, which are inputs, etc.)…
|
st97929
|
I saw many PyTorch examples using flatten_parameters in the forward function of the RNN:
self.rnn.flatten_parameters()
I saw this in RNNBase, where it is written that it
Resets parameter data pointer so that they can use faster code paths
What does that mean?
|
st97930
|
Hi all,
I have a certain NN architecture and I want, given the activations of a certain layer, to run back through the network and recover the original input. I should be able to do this with some precision simply by creating a new network with inverted layers (deconv instead of conv, linear with transposed weight tensor, etc.), but I am wondering if there is an easier way, programmatically speaking.
It would be nice to get a general solution but in my specific case my architecture only contains conv, deconv and leaky_relu layers.
Thanks,
|
st97931
|
These two networks have the same structure; the last layer of resnet34 has been modified to output 3 values. For Network1, I split the network into 3 groups in order to do other operations later, but the architecture itself is identical.
class Network1(nn.Module):
    def __init__(self):
        super(Network1, self).__init__()
        pretrained_model = resnet34(pretrained=True)
        self.group1 = nn.Sequential(*list(pretrained_model.children())[0:6])
        self.group2 = nn.Sequential(*list(pretrained_model.children())[6:8])
        self.group3 = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            Flatten(),
            nn.Linear(512, 3)
        )

    def forward(self, image):
        out = self.group3(self.group2(self.group1(image)))
        return out

class Network2(nn.Module):
    def __init__(self):
        super(Network2, self).__init__()
        pretrained_model = resnet34(pretrained=True)
        self.group3 = nn.Sequential(
            *list(pretrained_model.children())[0:8],
            nn.AdaptiveAvgPool2d(1),
            Flatten(),
            nn.Linear(512, 3)
        )

    def forward(self, image):
        out = self.group3(image)
        return out
Therefore, I expected these two networks to train at a similar pace, but oddly Network2 trains much faster than Network1.
The networks output the same values when I feed in random noise, so I think there is a problem with the training process. Any ideas?
|
st97932
|
The models should be identical and you’ve already tested for the same outputs, so I think the code is fine.
How often did you repeat your experiments? Could it be that the difference in training was just random, favoring one model?
|
st97933
|
Hello all, I have a 5D tensor such as BxCxHxWxD. I want to use the nn.Linear function to classify from C to C//2 channels. How should I do it?
This is my code
class Linear5D(nn.Module):  # renamed from "5D_Linear": Python identifiers cannot start with a digit
    def __init__(self, channel):
        super(Linear5D, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool3d(1)
        self.linear_5d = nn.Sequential(nn.Linear(channel, channel // 2),
                                       nn.ReLU(inplace=True),
                                       nn.Linear(channel // 2, channel))

    def forward(self, x):
        x = self.avg_pool(x)
        B, C, D, H, W = x.size()
        x_4d = x.view(B, C, D, -1)
        x_4d = self.linear_5d(x_4d)
        x_5d = x_4d.view(B, C, D, H, W)
        print(x_5d.size())
        return x_5d
|
st97934
|
linear_5d seems to be mapping channel//2 back to channel; is that your intention? What doesn’t seem to be working in this code?
|
st97935
|
Sorry, this is my mistake. The 5D tensor must be converted to 1D to perform classification; it is meant to compute a probability for each channel.
|
st97936
|
I have a tensor X with size (N, R) and a tensor Y with size (M, T).
I need to combine these tensors such that I get a new tensor of size (N x M, R+T), where dim=1 entries are concatenated and dim=0 is the cross product of the rows.
So for example
X = tensor([ [1,2,3], [4,5,6] ])
Y = tensor([ [7, 8], [9, 10] ] )
# where result would be
# tensor([ [1,2,3,7, 8], [1,2,3,9,10], [4,5,6,7,8], [4,5,6,9,10] ])
# also, this is a special case where N == M, but I need to be able to do this with arbitrary N and M.
I can do this with a loop but I am looking for an efficient operation to do this.
Also, X and Y are results from a neural network (nn.Module). I don’t want to break the computation graph, because I need to compute the loss with the result. Would creating a new tensor, looping through X and Y, and simply appending the concatenations break the computation graph?
|
st97937
|
This seems to work for me:
X = torch.tensor([ [1,2,3], [4,5,6] ])
Y = torch.tensor([ [7, 8], [9, 10] ])
X1 = X.unsqueeze(0)
Y1 = Y.unsqueeze(1)
print(X1.shape, Y1.shape)
X2 = X1.repeat(Y.shape[0], 1, 1)
Y2 = Y1.repeat(1, X.shape[0], 1)
print(X2.shape, Y2.shape)
Z = torch.cat([X2, Y2], -1)
Z = Z.view(-1, Z.shape[-1])
print(Z.shape)
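A small caveat (my observation, not from the thread): with the unsqueeze dims as above, the rows come out grouped by Y rather than by X, i.e. [x0y0, x1y0, x0y1, x1y1]. Swapping which tensor gets which unsqueeze reproduces the exact order asked for in the question:

X1 = X.unsqueeze(1)                 # (N, 1, R)
Y1 = Y.unsqueeze(0)                 # (1, M, T)
X2 = X1.repeat(1, Y.shape[0], 1)    # (N, M, R)
Y2 = Y1.repeat(X.shape[0], 1, 1)    # (N, M, T)
Z = torch.cat([X2, Y2], -1).view(-1, X.shape[1] + Y.shape[1])
# tensor([[ 1,  2,  3,  7,  8],
#         [ 1,  2,  3,  9, 10],
#         [ 4,  5,  6,  7,  8],
#         [ 4,  5,  6,  9, 10]])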
|
st97938
|
I am working on a problem where multiple workers send CUDA tensors to a shared queue that is read by the main process. While the code works great with CPU tensors (i.e. the tensors sent by the workers are retrieved correctly by the main process), I am finding that when the workers send CUDA tensors through the shared queue, the tensor values read by the main process are often garbage.
Related discussion: Invalid device pointer using multiprocessing with CUDA.
e.g. the following minimal code reproduces this issue. It works fine for CPU tensors, but for CUDA it repeatedly reads only the last few tensors that were sent.
import torch
import torch.multiprocessing as mp

DEVICE = "cuda"
N = 10
done = mp.Event()

def proc(queue):
    t = torch.tensor(0., device=DEVICE)
    n = 0
    while n < N:
        t = t + 1
        print("sent:", t)
        queue.put(t)
        n += 1
    done.wait()

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    p = ctx.Process(target=proc, args=(queue,))
    p.daemon = True
    p.start()
    for _ in range(N):
        print("recv:", queue.get())
    done.set()
which prints out:
$ python examples/ex2.py
sent: tensor(1., device='cuda:0')
sent: tensor(2., device='cuda:0')
sent: tensor(3., device='cuda:0')
sent: tensor(4., device='cuda:0')
sent: tensor(5., device='cuda:0')
sent: tensor(6., device='cuda:0')
sent: tensor(7., device='cuda:0')
sent: tensor(8., device='cuda:0')
sent: tensor(9., device='cuda:0')
sent: tensor(10., device='cuda:0')
recv: tensor(9., device='cuda:0')
recv: tensor(10., device='cuda:0')
recv: tensor(9., device='cuda:0')
recv: tensor(10., device='cuda:0')
recv: tensor(9., device='cuda:0')
recv: tensor(10., device='cuda:0')
recv: tensor(9., device='cuda:0')
recv: tensor(10., device='cuda:0')
recv: tensor(9., device='cuda:0')
recv: tensor(10., device='cuda:0')
Based on the discussion in Invalid device pointer using multiprocessing with CUDA and @colesbury’s suggestion in Using torch.Tensor over multiprocessing.Queue + Process fails, I suspected that the issue is that we need to hold a reference to the CUDA tensor in the worker process until it is read by the main process. To test this, I appended the tensors to a temporary list in the worker, and then it worked fine!
This is, however, quite wasteful, because the workers produce a large number of tensors and it is not feasible to hold them all in memory. I was wondering what the best practice is for such a use case. I can try to store the tensors in a limited-size queue and hope that the last one has had sufficient time to be read by the main process, but that seems too fragile. cc. @smth, @colesbury
|
st97939
|
Unfortunately, I don’t know of a good way to do this. Trying to manage the lifetimes of CUDA tensors across processes is complicated. I try to only share CUDA tensors that have the same lifetime as the program (like model weights for example).
|
st97940
|
colesbury:
Unfortunately, I don’t know of a good way to do this.
Thanks @colesbury. This would be nice to have for parallel MCMC sampling. For the time being, I’m thinking of using events to periodically clear tensors from shared memory; let’s see how far that gets us.
|
st97941
|
Hi, I am using the same model with the two following data-loading strategies:
train_loader = DataLoader(TensorDataset(train_0, train_t), batch_size=batch_size,
                          shuffle=True, drop_last=False)
train_loader = DataLoader(torch.cat((train_0, train_t), dim=1), batch_size=batch_size,
                          shuffle=True, drop_last=False)
I trained both cases for 2 epochs; case 1 took 30 seconds while case 2 took only 13s. Does anyone know why there would be such a significant difference between the two?
Thanks!
|
st97942
|
I found that sometimes index_select is much faster than advanced indexing; for a toy example it is 10x faster. Checking with the profiler reveals that many more ops are used for advanced indexing, whereas index_select is a single op.
Is this behavior intentional? Could advanced indexing easily check whether it can use index_select instead? I imagine quite a few projects could benefit from using index_select.
Here’s the behavior reproduced on CPU:
gist.github.com
https://gist.github.com/mrdrozdov/37123eed34eeaa7d1c6640d7ad2c5278

profile.py:
import torch
import argparse
from tqdm import tqdm

parser = argparse.ArgumentParser()
parser.add_argument('--flip', action='store_true')
options = parser.parse_args()
[file truncated]

spoiler-alert.txt:
Profiler Output after 1 call
----------------------------
# Method A
----------------- --------------- --------------- --------------- --------------- ---------------
Name              CPU time        CUDA time       Calls           CPU total       CUDA total
----------------- --------------- --------------- --------------- --------------- ---------------
select            15.768us        0.000us         1               15.768us        0.000us
as_strided        10.561us        0.000us         1               10.561us        0.000us
_cast_int64_t     0.559us         0.000us         1               0.559us         0.000us
[file truncated]
Note: I somewhat recall that index_select required more memory than advanced indexing, which is what leads me to believe this isn’t really a bug, and that there are cases where advanced indexing makes sense (maybe when you are indexing many dimensions simultaneously).
|
st97943
|
I am trying to implement a custom loss function for training. It is similar to MSE, but I am trying to calculate the loss only for a specific area of the predicted image, not the whole image. I use a filter ('wt_matrix' in the code) of the same dimensions as the image, filled with '1' at locations that should be considered in the loss calculation and '0' at locations that should not. My implementation is as follows:
def build_loss(self, predicted_img, gt_data, wt_matrix):
    wt = np.sum(wt_matrix)
    wt_matrix = Variable(torch.from_numpy(wt_matrix), requires_grad=False)
    wt_matrix = wt_matrix.cuda()
    gt_data = gt_data.cuda()
    pred_img = predicted_img * wt_matrix
    gdata = wt_matrix * gt_data
    diff_1 = (pred_img - gdata) * (pred_img - gdata)
    wt = np.array([wt])
    wt = Variable(torch.from_numpy(wt), requires_grad=False)
    loss = torch.sum(diff_1) / wt.cuda()
    return loss
I have imported Variable from torch.autograd and I am calling backward on the loss. Is everything correct in my implementation? It does not seem to work properly and takes too long to run.
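For reference, a minimal vectorized sketch of the same masked-MSE idea, assuming pred, target and mask are float tensors already on the same device (the names are illustrative):
import torch

def masked_mse(pred, target, mask):
    # mask holds 1.0 at pixels that should contribute to the loss, 0.0 elsewhere
    diff = (pred - target) * mask
    return diff.pow(2).sum() / mask.sum().clamp(min=1)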
|
st97944
|
Hi, I’m having some trouble which results in the following error:
pred_log_probs = estimator.forward(x_train[:, :])
File "D:\Google_Drive\Projects\FeedForward_detection\NN_detector_module_seperated_output.py", line 84, in forward
x = F.relu((self.layer_1(x)))
File "C:\Users\sholev\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\functional.py", line 643, in relu
return torch.relu(input)
RuntimeError: CUDA error: out of memory
The strange thing is that this error arises after 7 epochs, so it seems like some GPU memory allocation is not being released.
The NN architecture is the following:
class symbols_detector(nn.Module):
    def __init__(self, num_of_symbols, N1, N2):
        super(symbols_detector, self).__init__()
        self.num_of_symbols = num_of_symbols
        self.N1 = N1
        self.N2 = N2
        self.Lin = (N1*2 + 2)*N2
        self.layer_1 = nn.Linear(self.Lin, L1)
        self.layer_2 = nn.Linear(L1, L2)
        self.layer_3 = nn.Linear(L2, L3)
        self.layer_4 = nn.Linear(L3, L4)
        self.layer_5 = nn.Linear(L4, self.num_of_symbols * N2)
        self.log_p = nn.LogSoftmax(dim=2)

    def forward(self, x):
        x = F.relu(self.layer_1(x))
        x = F.relu(self.layer_2(x))
        x = F.relu(self.layer_3(x))
        x = F.relu(self.layer_4(x))
        x = self.layer_5(x)
        x = x.view(-1, self.N2, self.num_of_symbols)
        x = self.log_p(x)
        return x
The training code is the following:
rand_idx = torch.randperm(x_train.shape[0])
x_train.pin_memory()
x_train = x_train[rand_idx, :].cuda()
y_train = y_train[:, rand_idx]
y_train = y_train.cuda()
pred_log_probs = estimator.forward(x_train[:, :])
train_loss = torch.zeros([ep_num+1])
SER_train = np.zeros([ep_num+1])
BER_train = np.zeros([ep_num + 1])
train_loss[0] = cost_func(pred_log_probs.permute([0, 2, 1]), y_train[0, :])
print(train_loss[0].cpu().data.numpy())
for i in range(0, ep_num):
    estimator.train()
    for j in range(0, (int(N_train_samples/batch_Size)-2)):
        #print(j)
        pred_log_probs = estimator.forward(x_train[j * batch_Size:(j + 1) * batch_Size, :])
        model_optimizer.zero_grad()
        loss1 = cost_func(pred_log_probs.permute([0, 2, 1]), y_train[0, j * batch_Size:(j + 1) * batch_Size])
        loss1.backward()
        model_optimizer.step()
    estimator.eval()
    pred_log_probs = estimator.forward(x_train[:, :])
    train_loss[i+1] = cost_func(pred_log_probs.permute([0, 2, 1]), y_train[0, :])
    rand_idx = torch.randperm(x_train.shape[0])
    x_train = x_train[rand_idx, :].cuda()
    y_train = y_train[:, rand_idx].cuda()
print('model trained data set')
return train_loss
Is there something I'm missing that causes the memory to overflow?
Thanks
|
st97945
|
It looks like you are directly appending the training loss to train_loss[i+1], which might hold a reference to the computation graph. If that’s the case, you are storing the computation graph in each epoch, which will grow your memory. You need to detach the loss from the computation, so that the graph can be cleared.
Change this line of code to:
train_loss[i+1] = cost_func(pred_log_probs.permute([0, 2, 1]), y_train[0, :]).detach().item()
and try it again.
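More generally, storing loss.item() (a plain Python float) instead of the loss tensor avoids keeping computation graphs alive. A minimal sketch of the pattern (the model and data here are placeholders):
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, target = torch.randn(32, 10), torch.randn(32, 1)

losses = []
for epoch in range(5):
    loss = criterion(model(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    losses.append(loss.item())  # .item() returns a Python float, so no graph is kept alive
print(losses)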
|
st97946
|
I have some problems running the examples provided in the fastai lib, so I posted on their forum. But after searching here for a solution, I found torch.cuda.empty_cache() but still get the memory error… so that is why I'm coming here.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-8-cc9249814530> in <module>()
1 torch.cuda.is_available()
----> 2 torch.cuda.empty_cache()
~/anaconda3/lib/python3.7/site-packages/torch/cuda/__init__.py in empty_cache()
372 """
373 if _initialized:
--> 374 torch._C._cuda_emptyCache()
375
376
RuntimeError: CUDA error: an illegal memory access was encountered
I'm cross-posting from https://forums.fast.ai/t/cyfar-ipynb-cuda-runtime-error-77-an-illegal-memory-access/29649 3
Hi there, I just reinstalled my home PC to start all over again.
Here is the output of fastai.show_install():
=== Software ===
python version : 3.7.0
fastai version : 1.0.20.dev0
torch version : 1.0.0.dev20181105
nvidia driver : 410.73
torch cuda ver : 9.2.148
torch cuda is : available
torch cudnn ver : 7104
torch cudnn is : enabled
=== Hardware ===
nvidia gpus : 1
torch available : 1
- gpu0 : 7949MB | GeForce RTX 2080
=== Environment ===
platform : Linux-4.18.0-10-generic-x86_64-with-debian-buster-sid
distro : Ubuntu 18.10 Cosmic Cuttlefish
conda env : base
python : /home/tyoc213/anaconda3/bin/python
sys.path :
/home/tyoc213/fastai/examples
/home/tyoc213/anaconda3/lib/python37.zip
/home/tyoc213/anaconda3/lib/python3.7
/home/tyoc213/anaconda3/lib/python3.7/lib-dynload
/home/tyoc213/anaconda3/lib/python3.7/site-packages
/home/tyoc213/fastai
/home/tyoc213/anaconda3/lib/python3.7/site-packages/IPython/extensions
/home/tyoc213/.ipython
collab.ipynb works OK, but stepping through cyfar in fastai/examples I get an error executing these lines:
learn = Learner(data, wrn_22(), metrics=accuracy).to_fp16()
learn.fit_one_cycle(30, 3e-3, wd=0.4, div_factor=10, pct_start=0.5)
I get this output
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-14-72f1e2b0093b> in <module>()
----> 1 learn = Learner(data, wrn_22(), metrics=accuracy).to_fp16()
2 learn.fit_one_cycle(30, 3e-3, wd=0.4, div_factor=10, pct_start=0.5)
<string> in __init__(self, data, model, opt_func, loss_func, metrics, true_wd, bn_wd, wd, train_bn, path, model_dir, callback_fns, callbacks, layer_groups)
~/fastai/fastai/basic_train.py in __post_init__(self)
136 self.path = Path(ifnone(self.path, self.data.path))
137 (self.path/self.model_dir).mkdir(parents=True, exist_ok=True)
--> 138 self.model = self.model.to(self.data.device)
139 self.loss_func = ifnone(self.loss_func, self.data.loss_func)
140 self.metrics=listify(self.metrics)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in to(self, *args, **kwargs)
377 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
378
--> 379 return self._apply(convert)
380
381 def register_backward_hook(self, hook):
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _apply(self, fn)
183 def _apply(self, fn):
184 for module in self.children():
--> 185 module._apply(fn)
186
187 for param in self._parameters.values():
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _apply(self, fn)
183 def _apply(self, fn):
184 for module in self.children():
--> 185 module._apply(fn)
186
187 for param in self._parameters.values():
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _apply(self, fn)
189 # Tensors stored in modules are graph leaves, and we don't
190 # want to create copy nodes, so we have to unpack the data.
--> 191 param.data = fn(param.data)
192 if param._grad is not None:
193 param._grad.data = fn(param._grad.data)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in convert(t)
375
376 def convert(t):
--> 377 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
378
379 return self._apply(convert)
RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /opt/conda/conda-bld/pytorch-nightly_1541411195070/work/aten/src/THC/generic/THCTensorCopy.cpp:20
Running torch.cuda.is_available() returns True.
Update: extra tests
I'm also running out of memory in dogs_cats.ipynb.
learn = create_cnn(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(1)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-6ec085df1eed> in <module>()
----> 1 learn = create_cnn(data, models.resnet34, metrics=accuracy)
2 learn.fit_one_cycle(1)
~/fastai/fastai/vision/learner.py in create_cnn(data, arch, cut, pretrained, lin_ftrs, ps, custom_head, split_on, classification, **kwargs)
67 learn.split(ifnone(split_on,meta['split']))
68 if pretrained: learn.freeze()
---> 69 apply_init(model[1], nn.init.kaiming_normal_)
70 return learn
71
~/fastai/fastai/torch_core.py in apply_init(m, init_func)
193 def apply_init(m, init_func:LayerFunc):
194 "Initialize all non-batchnorm layers of `m` with `init_func`."
--> 195 apply_leaf(m, partial(cond_init, init_func=init_func))
196
197 def in_channels(m:nn.Module) -> List[int]:
~/fastai/fastai/torch_core.py in apply_leaf(m, f)
189 c = children(m)
190 if isinstance(m, nn.Module): f(m)
--> 191 for l in c: apply_leaf(l,f)
192
193 def apply_init(m, init_func:LayerFunc):
~/fastai/fastai/torch_core.py in apply_leaf(m, f)
188 "Apply `f` to children of `m`."
189 c = children(m)
--> 190 if isinstance(m, nn.Module): f(m)
191 for l in c: apply_leaf(l,f)
192
~/fastai/fastai/torch_core.py in cond_init(m, init_func)
183 if (not isinstance(m, bn_types)) and requires_grad(m):
184 if hasattr(m, 'weight'): init_func(m.weight)
--> 185 if hasattr(m, 'bias') and hasattr(m.bias, 'data'): m.bias.data.fill_(0.)
186
187 def apply_leaf(m:nn.Module, f:LayerFunc):
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch-nightly_1541411195070/work/aten/src/THC/generic/THCTensorMath.cu:14
I also get the CUDA memory error in tabular:
learn = get_tabular_learner(data, layers=[200,100], metrics=accuracy)
learn.fit(1, 1e-2)
output
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-5-480eb9caae1a> in <module>()
----> 1 learn = get_tabular_learner(data, layers=[200,100], metrics=accuracy)
2 learn.fit(1, 1e-2)
~/fastai/fastai/tabular/data.py in get_tabular_learner(data, layers, emb_szs, metrics, ps, emb_drop, y_range, use_bn, **kwargs)
93 model = TabularModel(emb_szs, len(data.cont_names), out_sz=data.c, layers=layers, ps=ps, emb_drop=emb_drop,
94 y_range=y_range, use_bn=use_bn)
---> 95 return Learner(data, model, metrics=metrics, **kwargs)
96
<string> in __init__(self, data, model, opt_func, loss_func, metrics, true_wd, bn_wd, wd, train_bn, path, model_dir, callback_fns, callbacks, layer_groups)
~/fastai/fastai/basic_train.py in __post_init__(self)
136 self.path = Path(ifnone(self.path, self.data.path))
137 (self.path/self.model_dir).mkdir(parents=True, exist_ok=True)
--> 138 self.model = self.model.to(self.data.device)
139 self.loss_func = ifnone(self.loss_func, self.data.loss_func)
140 self.metrics=listify(self.metrics)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in to(self, *args, **kwargs)
377 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
378
--> 379 return self._apply(convert)
380
381 def register_backward_hook(self, hook):
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _apply(self, fn)
183 def _apply(self, fn):
184 for module in self.children():
--> 185 module._apply(fn)
186
187 for param in self._parameters.values():
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _apply(self, fn)
183 def _apply(self, fn):
184 for module in self.children():
--> 185 module._apply(fn)
186
187 for param in self._parameters.values():
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _apply(self, fn)
189 # Tensors stored in modules are graph leaves, and we don't
190 # want to create copy nodes, so we have to unpack the data.
--> 191 param.data = fn(param.data)
192 if param._grad is not None:
193 param._grad.data = fn(param._grad.data)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in convert(t)
375
376 def convert(t):
--> 377 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
378
379 return self._apply(convert)
RuntimeError: CUDA error: out of memory
|
st97947
|
From https://pytorch.org/docs/stable/notes/cuda.html#asynchronous-execution 9, it's stated that GPU operations are asynchronous.
By default, GPU operations are asynchronous. When you call a function that uses the GPU, the operations are enqueued to the particular device, but not necessarily executed until later. This allows us to execute more computations in parallel, including operations on CPU or other GPUs.
However, when I run the following simple code, the elapsed time was the same.
So I guess only some GPU operations can benefit from asynchronous execution, not all.
Is that the right assumption? Or should I make GPU operations asynchronous myself via threading or multiprocessing (just like nn.DataParallel)?
import time
import torch

def foo(idx):
    with torch.cuda.device(idx):
        a = torch.randn(1000, 1000)
        b = torch.randn(1000, 1000)
        a *= b

def main():
    st = time.time()
    for _ in range(10):
        foo(0)
    print(time.time() - st)

    time.sleep(10)

    st = time.time()
    for i in range(10):
        foo(i % 2)
    print(time.time() - st)

if __name__ == '__main__':
    main()
|
st97948
|
Since you are trying to time CUDA operations, you should synchronize before starting and stopping your timer using torch.cuda.synchronize().
torch.cuda.synchronize()
st = time.time()
...
torch.cuda.synchronize()
print(time.time() - st)
Currently you are just seeing the time to launch the kernels.
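As an illustration, a minimal sketch of timing an actual GPU kernel, assuming a CUDA device is available (note the tensors here are created on the GPU; the snippet above allocates them on the CPU, so that loop mostly measures CPU work):
import time
import torch

a = torch.randn(1000, 1000, device='cuda')
b = torch.randn(1000, 1000, device='cuda')

torch.cuda.synchronize()  # finish pending work before starting the timer
start = time.time()
for _ in range(100):
    c = a @ b
torch.cuda.synchronize()  # wait for the asynchronously queued kernels to complete
print(time.time() - start)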
|
st97949
|
Why does MultiLabelMarginLoss take torch.long arguments?
From my understanding, MultiLabelMarginLoss takes not plain labels but one-hot encoded labels, so we can do multi-class multi-label prediction.
Why should MultiLabelMarginLoss arguments be long? They're composed of only 0s and 1s.
I get the following error if I try with uint8:
RuntimeError: Expected object of scalar type Long but got scalar type Byte for argument #2 'target'
Is there something I missed?
|
st97950
|
Solved by albanD in post #2
Hi,
The target tensor is not expected to contain one hot encoding. from the doc:
"
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes
"
It should contain label values and…
|
st97951
|
Hi,
The target tensor is not expected to contain a one-hot encoding. From the doc 32:
"
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes
"
It should contain label values, padded with -1s. For example, if there are 3 classes and sample 0 has labels 0 and 1, sample 1 has label 2, and sample 2 has labels 0, 1, 2:
label = [[0, 1, -1],
[2, -1, -1],
[0, 1, 2]]
This is an old design choice (the CPU code is >7 years old and is part of the first commit on GitHub, so it's most certainly older than that); I'm not sure why it was made, though.
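As a quick sanity check, a minimal sketch using that target layout:
import torch
import torch.nn as nn

loss_fn = nn.MultiLabelMarginLoss()
output = torch.randn(3, 3)             # 3 samples, 3 classes
target = torch.tensor([[0, 1, -1],     # sample 0 has labels 0 and 1
                       [2, -1, -1],    # sample 1 has label 2
                       [0, 1, 2]])     # sample 2 has labels 0, 1 and 2
print(loss_fn(output, target).item())  # integer tensor defaults to torch.long, as required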
|
st97952
|
Wow! I didn't understand that at all, thank you so much!
Do you have any recommendation for multi-class multi-label classification? I don't know if there is a better loss function or anything else that could help.
|
st97953
|
ptrblck:
nn.BCELoss
Thank you for your example. Is there any benefit to using nn.BCELoss directly over nn.BCEWithLogitsLoss?
|
st97954
|
You can specify a pos_weight using nn.BCEWithLogitsLoss, but besides that there is no difference, apart from nn.BCEWithLogitsLoss applying the sigmoid internally (in a numerically more stable way than a separate sigmoid followed by nn.BCELoss).
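As an illustration, a small sketch showing the two losses agree (up to floating-point error) when you apply the sigmoid yourself before nn.BCELoss:
import torch
import torch.nn as nn

logits = torch.randn(4, 3)
targets = torch.empty(4, 3).random_(2)  # random 0./1. float targets

loss_a = nn.BCELoss()(torch.sigmoid(logits), targets)
loss_b = nn.BCEWithLogitsLoss()(logits, targets)
print(loss_a.item(), loss_b.item())  # approximately equal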
|
st97955
|
From my understanding these functions take one-hot encoded labels, i.e. only 0s and 1s, so why should they take float and not uint8?
I'm doing multi-class multi-label classification, so maybe it's a particular case?
|
st97956
|
I am new to neural networks and, for sure, to PyTorch. I am working on a simple feed-forward NN to predict groundwater level from daily precipitation and temperature data.
I’m facing some problems and seeking help:
First problem: data loader
So I should be using a data loader to feed the input data in batches (for example, I have 300 temperature values and I want a batch size of 4). My understanding is that the dataloader will take the first four values, feed them forward and then move to the next four. My question: is there a way for the dataloader to take the first four and then move only 1 reading ahead (reusing three of the temperature values from the previous batch, e.g. first batch = temperatures 1, 2, 3 and 4, second batch = temperatures 2, 3, 4 and 5, and so on until the last reading)?
Second problem: data loader and output(target) data
Will there be a need to use a dataloader for the output (target) data if there is no batch size, as I want it to take only one reading, just one output?
If I have many output nodes with a different batch size from the input data, should I construct a separate dataloader for them, or is it possible to combine the input and output within the same dataloader?
Third problem: forward function
In defining the forward function within the class (nn.Module), I am struggling with the input data: if I am using a dataloader and batches, should I use the dataloader as input, and if so, how? Or should I use the entire data frame?
This is my code, and I'm asking about xin (xinput):
def forward(self, xin):
    xinhi = self.fc1(xin)
    xhi = self.Sigmoid(xinhi)
    xhiout = self.fc2(xhi)
    xout = self.Sigmoid(xhiout)
    return xout
|
st97957
|
I am not sure about your problem setting. In general, batching is used for both input and target data.
If you want to use 1 day’s data to predict another day’s data, then batch_size can be 4 (or any number that fits your memory). In the situation, the number of target data samples will be 4 in a batch too.
If you want to use several days’ data (say 4 days) to predict another day’s data, then 4 is not your batch size. The input 4 days’ data constitutes one training sample.
Best,
|
st97958
|
kaixin:
I am not sure about your problem setting. In general, batching is used for both input and target data.
If you want to use 1 day’s data to predict another day’s data, then batch_size can be 4 (or any number that fits your memory). In the situation, the number of target data samples will be 4 in a batch too.
If you want to use several days’ data (say 4 days) to predict another day’s data, then 4 is not your batch size. The input 4 days’ data constitutes one training sample.
Thank you, but I got a bit confused.
I have 300 days of temperature measurements and the corresponding 300 groundwater level measurements. I will be using the previous 3 days' temperature measurements plus today's to predict (say) today's water level. So my understanding is that I will have one input node that takes 4 temperature measurements and one output node with one measurement. I understood that the batch size would then be four, so it takes every consecutive 4 values and moves ahead, but I want it to produce one output (the water level), not four.
|
st97959
|
The training samples in a batch all serve the same purpose, whereas in your setting day 1's temperature will influence the model in a way that is different from day 2's.
In your settings, a training sample should be:
input: [day1's temp, day2's temp, day3's temp, day 4's temp]
target: [day4's water level]
then a batch is (say batch_size is 4):
input: [[day1's temp, day2's temp, day3's temp, day 4's temp],
[day2's temp, day3's temp, day4's temp, day 5's temp],
[day3's temp, day4's temp, day5's temp, day 6's temp],
[day4's temp, day5's temp, day6's temp, day 7's temp]]
target: [[day4's water level],
[day5's water level],
[day6's water level],
[day7's water level]]
You could write a custom Dataset to slide the window over your data; a small and incomplete example is
from torch.utils.data import Dataset

class WeatherData(Dataset):
    def __getitem__(self, idx):
        return {'inputs': [self.temp[i] for i in range(idx, idx + 4)],
                'target': self.water[idx + 3]}
|
st97960
|
Thank you so much, but could I please ask you to have a look at my code? For some reason it's not working:
book = pd.ExcelFile("path.xlsx")
sheet = book.parse("sheetname")
Input = sheet['R']
Target = sheet['MWL']

batch_size = 365
learning_rate = 0.01
epochs = 5
input_size = 1
hidden_size = 10
output_size = 1

class LoadData(Dataset):
    def __init__(self, input, target):
        self.input = input
        self.target = target

    def __getitem__(self, idx):
        return {'inputs': [self.input[i] for i in range(idx, idx + 4)], 'target': self.target[idx + 3]}

    def __len__(self):
        return 365

train_dataset = LoadData(Input, Target)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False)
|
st97961
|
I assume the variables Input and Target are sequences of the same length (365, right?); then 365 is not your batch_size (btw, 365 is not a typical number for batch_size).
batch_size = 8
learning_rate = 0.01
epochs = 5
days_elapsed = 4

class LoadData(Dataset):
    def __init__(self, inputs, targets, days_elapsed):
        assert len(inputs) == len(targets)
        self.inputs = inputs
        self.targets = targets
        self.days_elapsed = days_elapsed

    def __len__(self):
        return len(self.targets) - self.days_elapsed + 1

    def __getitem__(self, idx):
        return {'inputs': [self.inputs[i] for i in range(idx, idx + self.days_elapsed)],
                'target': self.targets[idx + self.days_elapsed - 1]}

train_dataset = LoadData(Input, Target, days_elapsed)
# usually set shuffle=True
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
This is a minimal example. In practice, you might want to feed the NN an array (or tensor) rather than a list.
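For example, a variant of the LoadData sketch above that returns tensors directly; this assumes inputs and targets are plain sequences of numbers:
import torch
from torch.utils.data import Dataset

class LoadData(Dataset):
    def __init__(self, inputs, targets, days_elapsed):
        self.inputs = list(inputs)
        self.targets = list(targets)
        self.days_elapsed = days_elapsed

    def __len__(self):
        return len(self.targets) - self.days_elapsed + 1

    def __getitem__(self, idx):
        window = self.inputs[idx : idx + self.days_elapsed]
        return {'inputs': torch.tensor(window, dtype=torch.float32),
                'target': torch.tensor([self.targets[idx + self.days_elapsed - 1]],
                                       dtype=torch.float32)}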
|
st97962
|
kaixin:
days_elapsed = 4
I have 365 temperature measurements with the corresponding water levels. I am taking a block of four consecutive readings (the sliding window), which leaves me with 362 = 365 - 3 training samples = batch size, right? Just like what you mentioned before:
input: [[day1's temp, day2's temp, day3's temp, day4's temp],
        [day2's temp, day3's temp, day4's temp, day5's temp],
        [day3's temp, day4's temp, day5's temp, day6's temp],
        [day4's temp, day5's temp, day6's temp, day7's temp],
        ...
        [day362's temp, day363's temp, day364's temp, day365's temp]]
target: [[day4's water level],
         [day5's water level],
         [day6's water level],
         [day7's water level],
         ...
         [day365's water level]]
And true, the data is a list, but I convert it like this. Here is my code (the problem I get now is "RuntimeError: size mismatch, m1: [1 x 365], m2: [1 x 10]"):
book = pd.ExcelFile("path.xlsx")
sheet = book.parse("sheetname")
print(sheet)
Input = sheet['R']
Target = sheet['MWL']
Input = torch.tensor(Input)
Target = torch.tensor(Target)

batch_size = 362
days_elapsed = 4
learning_rate = 0.01
epochs = 5
input_size = 1
hidden_size = 10
output_size = 1

class LoadData(Dataset):
    def __init__(self, inputs, targets, days_elapsed):
        assert len(inputs) == len(targets)
        self.inputs = inputs
        self.targets = targets
        self.days_elapsed = days_elapsed

    def __len__(self):
        return len(self.targets) - self.days_elapsed + 1

    def __getitem__(self, idx):
        return {'inputs': [self.inputs[i] for i in range(idx, idx + self.days_elapsed)],
                'target': self.targets[idx + self.days_elapsed - 1]}

train_dataset = LoadData(Input, Target, days_elapsed)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
print(train_loader)
len(train_loader)

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, output_size)
        self.Sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.fc1(x)
        x = self.Sigmoid(x)
        x = self.fc2(x)
        x = self.Sigmoid(x)
        return x

FFNN = Net(input_size, hidden_size, output_size)
print(FFNN)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(FFNN.parameters(), lr=learning_rate)

# Training the FNN model
for epoch in range(epochs):
    for batch_idx, (inputtensor, targettensor) in enumerate(train_loader):
        inputtensor = Variable(Input)
        targettensor = Variable(Target)
        optimizer.zero_grad()
        FFNN_output = FFNN(inputtensor)
        loss = criterion(FFNN_output, targettensor)
        loss.backward()
        optimizer.step()
        # print out some results every time a certain number of iterations is reached
        if (i+1) % 100 == 0:
            print('Epoch [%d/%d], Step [%d/%d], Loss: %.4f'
                  % (epoch+1, num_epochs, i+1, len(train_dataset)//batch_size, loss.data[0]))
|
st97963
|
In your case, the input shape should be 362 x 4, and the input_size should be 4, because every time you feed the NN 4 datapoints and expect 1 output.
|
st97964
|
kaixin:
from torch.utils.data import Dataset; class WeatherData(Dataset): def __getitem__(self, idx): return {'inputs': [self.temp[i] for i in range(idx, idx + 4)], 'target': self.water[idx + 3]}
Thank you.
If possible, could you clarify this issue please: if I want the network to update the weights based on the entire year's data (365 days), like a regression problem, shouldn't I feed the whole training set at once, like this:
Input nodes = 4 (feeding the network 4 days' measurements)
output nodes = 1 (expecting 1 output)
batch size = 365 - 3 = 362 (I have 362 training examples to cover the whole year)
input shape = 362 x 4 (as you mentioned)
output shape = 362 x 1
Thank you in advance
|
st97965
|
I have 4GB of RAM and a 2GB GPU, and when I try LeNet-5 on the Kaggle facial keypoints dataset I get RuntimeError: CUDA error: out of memory. What should I do, and what is causing this?
import torch
import torch.nn as nn
import torch.nn.functional as F
class LeNet(nn.Module):
def __init__(self):
super(LeNet, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution
# kernel
self.conv1 = nn.Conv2d(1, 32, 3)
self.conv2 = nn.Conv2d(6, 64, 2)
self.conv3 = nn.Conv2d(64, 128, 2)
self.fc1 = nn.Linear(128 * 5 * 5, 500)
self.fc2 = nn.Linear(500, 500)
self.fc3 = nn.Linear(500, 30)
def forward(self, x):
# Max pooling over a (2, 2) window
x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
# If the size is a square you can only specify a single number
x = F.max_pool2d(F.relu(self.conv2(x)), (2, 2))
x = F.max_pool2d(F.relu(self.conv3(x)), (2, 2))
x = x.view(-1, self.num_flat_features(x))
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc3(x)
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
dtype = torch.float
device = torch.device("cuda:0")
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 2140, 9216, 100, 30
def train(model,x,y,criterion,optimizer):
model.train()
y_pred = model(x)
loss = criterion(y_pred, y)
print('train-loss',t, loss.item(),end=' ')
optimizer.zero_grad()
loss.backward()
optimizer.step()
return loss.item()
def valid(model,x_valid,y_valid,criterion):
model.eval()
y_pred = model(x_valid)
loss = criterion(y_pred, y_valid)
print('test-loss',t, loss.item(),end=' ')
return loss.item()
# Create random Tensors to hold inputs and outputs
X_train=X_train.reshape(-1, 1, 96, 96)
X_valid=X_valid.reshape(-1, 1, 96, 96)
x_train = torch.tensor(torch.from_numpy(X_train),device=device,dtype=dtype)
y_train = torch.tensor(torch.from_numpy(Y_train),device=device,dtype=dtype)
x_valid = torch.tensor(torch.from_numpy(X_valid),device=device,dtype=dtype)
y_valid = torch.tensor(torch.from_numpy(Y_valid),device=device,dtype=dtype)
model = LeNet().to(device)
loss_train=[]
loss_valid=[]
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(400):
loss_train.append(train(model,x_train,y_train,criterion,optimizer))
loss_valid.append(valid(model,x_valid,y_valid,criterion))
print()
|
st97966
|
Your code might have some typos.
self.conv2 should accept 32 in_channels based on your forward method.
Also, are you resizing the images?
If not, I think your fc1 layer should accept 128 * 11 * 11 features. At least that’s working with input images of 96x96.
Using a batch size of 128 the training takes approx. 1.6GB, so your GPU should have enough memory.
Are other processes filling up your GPU maybe?
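Putting those fixes together, a sketch of the corrected layers (a hypothetical variant, assuming the 96x96 inputs from the dataset):
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3)          # 96 -> 94, pooled to 47
        self.conv2 = nn.Conv2d(32, 64, 2)         # in_channels matches conv1's 32 out_channels
        self.conv3 = nn.Conv2d(64, 128, 2)
        self.fc1 = nn.Linear(128 * 11 * 11, 500)  # 47 -> 46 -> 23 -> 22 -> 11 after the convs/pools
        self.fc2 = nn.Linear(500, 500)
        self.fc3 = nn.Linear(500, 30)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = F.max_pool2d(F.relu(self.conv3(x)), 2)
        x = x.view(x.size(0), -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)                        # no extra ReLU on the regression output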
|
st97967
|
In this tutorial, http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/ 2, he used the following NN architecture:
net2 = NeuralNet(
    layers=[
        ('input', layers.InputLayer),
        ('conv1', layers.Conv2DLayer),
        ('pool1', layers.MaxPool2DLayer),
        ('conv2', layers.Conv2DLayer),
        ('pool2', layers.MaxPool2DLayer),
        ('conv3', layers.Conv2DLayer),
        ('pool3', layers.MaxPool2DLayer),
        ('hidden4', layers.DenseLayer),
        ('hidden5', layers.DenseLayer),
        ('output', layers.DenseLayer),
    ],
    input_shape=(None, 1, 96, 96),
    conv1_num_filters=32, conv1_filter_size=(3, 3), pool1_pool_size=(2, 2),
    conv2_num_filters=64, conv2_filter_size=(2, 2), pool2_pool_size=(2, 2),
    conv3_num_filters=128, conv3_filter_size=(2, 2), pool3_pool_size=(2, 2),
    hidden4_num_units=500, hidden5_num_units=500,
    output_num_units=30, output_nonlinearity=None,
and for reshaping he used
def load2d(test=False, cols=None):
    X, y = load(test=test)
    X = X.reshape(-1, 1, 96, 96)
    return X, y
where he is not resizing the images to 32 channels.
Thanks for the reply!
|