st118068
|
You can do something like this to target individual modules:
import torchvision.models as models

resnet = models.resnet101(pretrained=True)
for (name, layer) in resnet._modules.items():
    # iteration over outer layers
    print((name, layer))
resnet._modules['layer1'][0]._modules['bn1'].weight.data.zero_()
The ._modules attribute exposes layers as an ordered dict (it's also private, so maybe this is not future-proof? but the PT devs have referenced it on other threads), so you can index by key normally. Hence, printing them in the loop above helps to navigate complex models like resnet. It's a little confusing because the nn.Sequential (keyed as 'layer1') is indexed like a list, with integers.
You can use nested versions of that loop to get at things if you are changing a lot of stuff. If you are just changing a few, might actually be less confusing to do them in a one-liner individually. You can also use .requires_grad = False for individual layers if you just don’t want them to be updated during training.
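For example, a minimal sketch of freezing that same BN layer so the optimizer skips it (using the resnet from above):
# freeze one layer's parameters so they are not updated during training
for p in resnet._modules['layer1'][0]._modules['bn1'].parameters():
    p.requires_grad = False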
And, of course, you could also just delete the module:
del(resnet._modules['layer1'][0]._modules['bn1'])
(edit)
You could also just duplicate the resnet.py file from pytorch/vision and make your own version of ResNet. I have had to do this for using larger/smaller images. It is sometimes easier for experimentation, as you can save the model file with
your_resnet.__class__.__name__
in the filename and name it something like ‘ResNet_noBN5’. But whatever works for you.
Good luck!
|
st118069
|
Hi
I’m trying to jointly train a convolutional network as an AE and GAN but I’m not sure that I have the training routine set up correctly. Would greatly appreciate any help.
for epoch in range(n_iter):
    for i, (batch, _) in enumerate(dataloader):
        current_batch_size = batch.size(0)

        # ---------------
        # Train as AE
        # ---------------
        optim_encoder.zero_grad()
        optim_decoder.zero_grad()
        input = Variable(batch).cuda()
        encoded = _Encoder(input)
        encoded = encoded.unsqueeze(0)
        encoded = encoded.view(input.size(0), -1, 1, 1)
        reconstructed = _Decoder(encoded)
        reconstruction_loss = criterion(reconstructed, input)
        reconstruction_loss.backward()
        optim_encoder.step()
        optim_decoder.step()  # here it's SGD

        # ---------------
        # Train as GAN
        # ---------------
        # Train Discriminator on real
        _Discriminator.zero_grad()
        real_samples = input.clone()
        inference_real = _Discriminator(real_samples)
        labels = torch.FloatTensor(current_batch_size).fill_(real_label)
        labels = Variable(labels).cuda()
        real_loss = criterion(inference_real, labels)
        real_loss.backward()

        # Generate fake samples
        noise.data.resize_(current_batch_size, z_d, 1, 1)
        noise.data.uniform_(-10, 10)
        fake_samples = _Decoder(noise)

        # Train Discriminator on fake
        labels.data.fill_(fake_label)
        inference_fake = _Discriminator(fake_samples.detach())
        fake_loss = criterion(inference_fake, labels)
        fake_loss.backward()
        discriminator_total_loss = real_loss + fake_loss
        optim_discriminator.step()

        # Update Decoder/Generator with how well it fooled Discriminator
        _Decoder.zero_grad()
        labels.data.fill_(real_label)
        inference_fake_Decoder = _Discriminator(fake_samples)
        fake_samples_loss = criterion(inference_fake_Decoder, labels)
        fake_samples_loss.backward()
        optim_decoderGAN.step()  # here it's Adam
My goal is to have the Decoder/Generator map the real samples to specific locations in the latent space, and then generate potential candidates between/around the points that are mapped to real samples. Also, I know that the “stabilizing GANs” post says to use normal rather than uniform distributions and that my range is quite large. I started doing that because evaluating the trained DeeperDCGANs that I’ve been using seems to show that the usual Z.normal_(0, 1) is too small of a range, and the points are continually overwritten at each iteration. I also think that the feature space of my dataset is likely following a uniform rather than normal distribution.
Thanks!
|
st118070
|
Hello,
As a part of a computation, I need to access a member of an output variable after I do a backward pass on it. The weird thing is, if I do this on the GPU, this access takes a lot of time compared to doing it on the CPU. In the code snippet below, I is an image of size 3x224x224 and net is a resnet50 from the torchvision models:
x = Variable(I.unsqueeze_(0).cuda(),requires_grad=True)
net.cuda()
output = net(x)
ff = output.data.squeeze()
output[:,10].backward()
alp = ff[100]
Now, normally I expect the last line to take less than a millisecond, and this is the case if I do not use the GPU (if I remove the .cuda() calls). However, in this case, it takes around 100 ms. Is there a problem? Is alp being copied from GPU memory, and is that what's causing the issue? Or am I doing something wrong?
|
st118071
|
Hi,
The CUDA API is asynchronous, so the backward call will return before everything is actually computed.
Your last line actually accesses some data from the GPU, which forces a synchronization and so waits for all computation to finish.
If you add a torch.cuda.synchronize() before your last line, the last line itself will become instant again.
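A minimal sketch of the timing fix, reusing the names from the snippet above (and assuming time is imported):
torch.cuda.synchronize()     # wait for the queued backward to actually finish
start = time.time()
alp = ff[100]
torch.cuda.synchronize()     # make sure the access itself is done
print(time.time() - start)   # now this measures only the indexing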
|
st118072
|
When A is a module with multiple sub-modules, where only a subset of the sub-modules is used depending on the input — for example, A is defined as follows:
class A(nn.Module):
    def __init__(self):
        super(A, self).__init__()  # needed so the sub-modules get registered
        self.common = nn.Linear(100, 50)
        self.module1 = nn.Linear(50, 30)
        self.module2 = nn.Linear(50, 2)

    def forward(self, input, idx):
        commonOut = self.common(input)
        if idx == 0:
            return self.module1(commonOut)
        else:
            return self.module2(commonOut)

net = A()
opt = torch.optim.Adam(net.parameters())  # parameters() must be called on an instance, not the class
In this case, if A is forwarded with idx=0, then the parameters belonging to module2 do not need to be updated.
When using opt for the parameter update, I wonder if PyTorch automatically updates only the modules that are actually used in the forward pass, or whether the unused modules are also updated by adding a zero gradient, which would be a waste of resources.
If it does, is there any way I can get the list of parameters that are actually updated?
Thanks
|
st118073
|
No, we don't do that. There's no way for the optimizer to know which parameters were or weren't used, and no practical way to find that out. It would require us to impose some strict requirements that don't really make sense in most use cases, for only minor improvements in others. Why is it such a problem that these parameters are getting updated? Is your script slow? It shouldn't add a lot of overhead.
|
st118074
|
Thanks for the clarification. Since my model uses only a small subset of the sub-modules dependent on the input, it could be more efficient if only the used parameters are updated.
However, since there are only parameter updates (no forward/backward passes) for the unused modules, it might not be the bottleneck of the entire process, as you mentioned. (If it is a bottleneck, then I'll use separate modules with their own parameter updates.)
Thanks!
|
st118075
|
If it turns out to be a bottleneck, you could create one optimizer per module, and call .step() only on the optimizers of the modules that were used. That should be quite simple to implement.
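A minimal sketch of that idea, assuming the module A from the question above (criterion, input, target and idx are placeholders):
net = A()
opt_common = torch.optim.Adam(net.common.parameters())
opt1 = torch.optim.Adam(net.module1.parameters())
opt2 = torch.optim.Adam(net.module2.parameters())

for opt in (opt_common, opt1, opt2):
    opt.zero_grad()
out = net(input, idx)
loss = criterion(out, target)
loss.backward()
opt_common.step()
(opt1 if idx == 0 else opt2).step()  # step only the branch that was used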
|
st118076
|
Aside from performance considerations, if the error is only being evaluated in terms of the output of a single module, aren’t there other (possibly negative) implications of updating the weights for components unused in the forward pass?
|
st118077
|
I don’t have a GPU computer. Is there a tutorial or best practices for using PyTorch on a cloud-based virtual machine with GPU capabilities?
|
st118078
|
PyTorch does not NEED GPUs to function. It works great on CPUs as well.
That said, if you want to use a cloud-based VM with GPUs, check out Amazon EC2, Nimbix or Azure, which all provide decent GPU instances.
|
st118079
|
I just installed PyTorch on AWS g2.2xlarge machine with the default Ubuntu AWS image (ami-e13739f6). Here’s what I did:
sudo apt-get update
sudo apt-get install -y gcc make python3-pip linux-image-extra-`uname -r` linux-headers-`uname -r` linux-image-`uname -r`
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/375.39/NVIDIA-Linux-x86_64-375.39.run
chmod 755 NVIDIA-Linux-x86_64-375.39.run
sudo ./NVIDIA-Linux-x86_64-375.39.run -a
sudo nvidia-smi -pm 1 # enable persistence mode for faster CUDA start-up
And then install NumPy and PyTorch
pip3 install numpy ipython
pip3 install https://download.pytorch.org/whl/cu75/torch-0.1.10.post2-cp35-cp35m-linux_x86_64.whl
pip3 install torchvision
Now PyTorch works with CUDA
ipython3
>>> import torch
>>> torch.randn(5, 5).cuda()
0.8154 0.9884 -0.7032 0.8225 0.5738
-1.0872 1.0991 0.5105 -1.2160 0.3384
-0.0405 0.2946 0.3753 -1.9461 0.0952
1.6247 -0.8727 -0.6441 -0.8109 1.7622
1.2141 1.3939 -1.2827 -0.3837 -0.0731
[torch.cuda.FloatTensor of size 5x5 (GPU 0)]
EDIT: updated instructions
|
st118080
|
Thank you! I especially like your warning about the error message! Very helpful!
|
st118081
|
Here is a brief tutorial on installing and running pytorch on an AWS GPU enabled compute instance: https://medium.com/@waya.ai/quick-start-pyt-rch-on-an-aws-ec2-gpu-enabled-compute-instance-5eed12fbd168
|
st118082
|
mjdietzx:
https://medium.com/@waya.ai/quick-start-pyt-rch-on-an-aws-ec2-gpu-enabled-compute-instance-5eed12fbd168
Thanks, it is a great help!
|
st118083
|
How do I use my own dataset of JPG images for training and validation? Can someone share their code or walk me through the process? Thank you
|
st118084
|
Hi,
I won't share any code, as it's worth learning yourself, but this page has everything you need.
For reference, you’ll need to:
Read the data --> store it in a tensor
Create a target tensor
Create a TensorDataset()
Create a DataLoader()
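A minimal sketch of those four steps, assuming images and labels are NumPy arrays you have already read from your JPG files:
import torch
import torch.utils.data as data

inputs = torch.from_numpy(images).float()   # step 1: data tensor, e.g. (N, C, H, W)
targets = torch.from_numpy(labels).long()   # step 2: target tensor, (N,)
dataset = data.TensorDataset(inputs, targets)                   # step 3
loader = data.DataLoader(dataset, batch_size=32, shuffle=True)  # step 4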
|
st118085
|
I need to sample from a normal distribution in the training process. Is there any way to generate the random numbers on the device, rather than using torch.normal(mean, variance).cuda()? When the dimension of the normal is large, that seems to be very costly. Thanks!
|
st118086
|
Hi,
is
t = torch.Tensor(10,10).cuda()
t.normal_()
what you are looking for?
Best regards
Thomas
|
st118087
|
I use PyTorch to train a LeNet model; the network structure is identical to Caffe's default LeNet. Training 12 epochs takes about 28s in Caffe, but training 10 epochs takes over 100s in PyTorch.
My experiments are all run on a Titan X, CUDA 8.0 and CUDNN v5. The training batch_size is 64.
The final test accuracy is similar, but it seems that PyTorch is much slower than Caffe on LeNet training.
Is this the real performance of PyTorch? Or may there be some details ignored by me, like CUDNN?
I am new to PyTorch, welcome anyone to discuss on this post.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, kernel_size=5)
        self.conv2 = nn.Conv2d(20, 50, kernel_size=5)
        self.fc1 = nn.Linear(800, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = F.max_pool2d(self.conv1(x), kernel_size=2, stride=2)
        x = F.max_pool2d(self.conv2(x), kernel_size=2, stride=2)
        x = x.view(-1, 800)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x)

model = Net()
if args.cuda:
    model.cuda()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=0.0005)
# ignore train code
|
st118088
|
You can't "ignore train code", because it's what decides how fast your script will run. In our benchmarks we don't see large discrepancies between PyTorch and Caffe. One thing that might make it faster would be to set torch.backends.cudnn.benchmark = True
|
st118089
|
Thanks for your reply. I think I have found the reason: it's because the data loader is a little slow, especially the transform functions. The training time is the same whether torch.backends.cudnn.benchmark is True or False; PyTorch may call cudnn automatically.
Regarding your reply that PyTorch and Caffe have no large discrepancies, are there any more details or links?
|
st118090
|
Hello, all.
I've met a problem: the speed of the tensor.sum() function is strange when calculating on the same kind of Tensor.
I want to calculate the confusion matrix of a foreground/background segmentation model, and below is my testing code:
import time
import torch

def func(pr, gt):
    dump = 0.0
    for gt_i in range(2):
        for pr_i in range(2):
            num = (gt == gt_i) * (pr == pr_i)
            start = time.time()
            dump += num.sum()
            print("Finding Time: {} {} {t:.4f}(s)".format(gt_i, pr_i, t=(time.time() - start)))

if __name__ == '__main__':
    gt = torch.rand(1, 400, 400) > 0.5
    gt = gt.int().cuda()
    print(">>>>>>>>>>>>>>> Test One >>>>>>>>>>>>>>>")
    prob1 = torch.rand(1, 2, 400, 400)
    _, pr1 = prob1.topk(1, 1, True, True)
    pr1 = torch.squeeze(pr1, 1)
    pr1 = pr1.int().cuda()
    print(type(pr1), pr1.size())
    func(gt, pr1)
    print(">>>>>>>>>>>>>>> Test Two >>>>>>>>>>>>>>>")
    prob2 = torch.rand(1, 2, 400, 400)
    prob2 = prob2.cuda()
    _, pr2 = prob2.topk(1, 1, True, True)
    pr2 = torch.squeeze(pr2, 1)
    pr2 = pr2.int()
    print(type(pr2), pr2.size())
    func(gt, pr2)
And the result is that the speed of tensor.sum() for (gt == 0) * (pr == 0) in Test Two is very strange, even though the input tensor types in Test One and Test Two are the same.
I can not find the reason… Is there some hidden property of Tensor that I missed? Can anyone give some help?
Thanks
|
st118091
|
@apaszke I am seeking help again… I would appreciate it if you can give some advice. THANKS!!!
|
st118092
|
You need to insert proper synchronization, because the GPU runs asynchronously (unless you e.g. launch a CPU <-> GPU copy). There should be a few posts on this forum that show how to do it (search for torch.cuda.synchronize()).
|
st118093
|
Namely for your test:
import time
import torch

def func(pr, gt):
    dump = 0.0
    for gt_i in range(2):
        for pr_i in range(2):
            num = (gt == gt_i) * (pr == pr_i)
            # Make sure you don't have anything still running
            torch.cuda.synchronize()
            start = time.time()
            dump += num.sum()
            # Make sure everything has been done
            torch.cuda.synchronize()
            print("Finding Time: {} {} {t:.4f}(s)".format(gt_i, pr_i, t=(time.time() - start)))

if __name__ == '__main__':
    gt = torch.rand(1, 400, 400) > 0.5
    gt = gt.int().cuda()
    print(">>>>>>>>>>>>>>> Test One >>>>>>>>>>>>>>>")
    prob1 = torch.rand(1, 2, 400, 400)
    _, pr1 = prob1.topk(1, 1, True, True)
    pr1 = torch.squeeze(pr1, 1)
    pr1 = pr1.int().cuda()
    print(type(pr1), pr1.size())
    func(gt, pr1)
    print(">>>>>>>>>>>>>>> Test Two >>>>>>>>>>>>>>>")
    prob2 = torch.rand(1, 2, 400, 400)
    prob2 = prob2.cuda()
    _, pr2 = prob2.topk(1, 1, True, True)
    pr2 = torch.squeeze(pr2, 1)
    pr2 = pr2.int()
    print(type(pr2), pr2.size())
    func(gt, pr2)
|
st118094
|
When I use nn.ReLU(inplace=True), there is an error that is:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
But this error only appears in some places. Why does it occur?
|
st118095
|
Because the inplace op overwrites some data that’s needed by some Function to compute the gradient. It has no way of doing that after you overwrite it. Just remove inplace=True in these places.
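A minimal sketch of the failure mode (assuming the usual torch, F and Variable imports): sigmoid's backward reuses its own output, which an in-place ReLU then overwrites.
x = Variable(torch.randn(4), requires_grad=True)
y = x.sigmoid()               # backward of sigmoid needs y itself
z = F.relu(y, inplace=True)   # overwrites y in place
z.sum().backward()            # RuntimeError: variable modified by an inplace operation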
|
st118096
|
I had the same issue, and noticed this post a bit late.
Could you automatically make inplace=False in train mode of models?
|
st118097
|
When I install the Pytorch package from source, it reports the following problem.
My platform is: Ubuntu 16.04 + CUDA 7.5
[ 1%] Linking CXX shared library libTHCUNN.so
[100%] Built target THCUNN
Install the project…
– Install configuration: “Release”
– Installing: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/lib/libTHCUNN.so.1
– Installing: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/lib/libTHCUNN.so
– Set runtime path of “/home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/lib/libTHCUNN.so.1” to “”
– Up-to-date: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/include/THCUNN/THCUNN.h
– Up-to-date: /home/yuhang/pytorch/pytorch-master/torch/lib/tmp_install/include/THCUNN/generic/THCUNN.h
– Configuring done
– Generating done
– Build files have been written to: /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl
[100%] Generating lib/libnccl.so
Compiling src/libwrap.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/libwrap.o
Compiling src/core.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/core.o
Compiling src/all_gather.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/all_gather.o
Compiling src/all_reduce.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/all_reduce.o
Compiling src/broadcast.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/broadcast.o
Compiling src/reduce_scatter.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/reduce_scatter.o
Compiling src/reduce.cu > /home/yuhang/pytorch/pytorch-master/torch/lib/build/nccl/obj/reduce.o
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
ptxas warning : Too big maxrregcount value specified 96, will be ignored
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
   return (char *) memcpy (__dest, __src, __n) + __n;
                                          ^
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: error: ‘memcpy’ was not declared in this scope
   return (char *) memcpy (__dest, __src, __n) + __n;
Note that in Caffe installation, similar problem can be solved in here:
https://groups.google.com/forum/#!msg/caffe-users/Tm3OsZBwN9Q/XKGRKNdmBAAJ
By changing the CMakeLists.txt, I wonder whether we have some similar solutions in Pytorch.
Thanks,
Yuhang
|
st118098
|
Digging further into this problem, I found that it is caused by the gcc version being too new:
Caffe:
https://github.com/BVLC/caffe/issues/4046
Torch:
https://github.com/szagoruyko/imagine-nn/issues/42
A usual way to solve it is to add the flag -D_FORCE_INLINES before compiling. Is there any place I could insert this flag when installing PyTorch?
|
st118099
|
CCFLAGS="-D_FORCE_INLINES" CXXFLAGS="-D_FORCE_INLINES" python setup.py install should do it.
|
st118100
|
Thanks for the reply, but the error is still there. I'm trying 'CFLAGS' instead of 'CCFLAGS'.
|
st118101
|
I tried this:
CCFLAGS="-D_FORCE_INLINES" CFLAGS="-D_FORCE_INLINES" CXXFLAGS="-D_FORCE_INLINES" python setup.py install
But still not working. Might be because of some other problems…
|
st118102
|
Maybe you can follow this discussion:
github.com/torch/distro — "Ubuntu 16.04 - torch installation failure" (opened Aug 25, 2016 by shriphani):
"Hi, I am following these instructions: http://torch.ch/docs/getting-started.html and I get this on the final command:
/usr/include/string.h: In function ‘void* __mempcpy_inline(void*, const void*, size_t)’:
/usr/include/string.h:652:42: …"
In my case, I am using Ubuntu 16.04 and CUDA 7.5, and I add -D_FORCE_INLINES to CXXFLAGS in the file torch/lib/nccl/Makefile.
@shiningsurya @ywu36
|
st118103
|
Does it use the same dropout mask for every timestep? If not, how do I make it work that way while still maintaining the same performance as using nn.RNN?
This type of dropout is better according to the following paper, and it is also the dropout used in Keras:
https://arxiv.org/abs/1512.05287
Thanks.
|
st118104
|
During testing, dropout is turned off, so the performance would not be affected by the dropout.
|
st118105
|
No, it doesn’t, because PyTorch’s RNNs are thin wrappers around cuDNN, which doesn’t support time-locked dropout masks. Users can implement it themselves, though at the cost of reduced speed due to inability to use the optimized cuDNN kernel.
|
st118106
|
Can I modify weights and gradients when training the network?
If yes, how do I do it before and after updating parameters?
|
st118107
|
To modify gradients, use Variable hooks:
http://pytorch.org/docs/autograd.html#torch.autograd.Variable.register_hook
You can modify network weights directly. For example,
net.conv1.weight.data.clamp_(min=-1e-3, max=1e-3)
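And a minimal sketch of the hook approach (the clamp range here is a hypothetical value):
h = net.conv1.weight.register_hook(lambda grad: grad.clamp(min=-1.0, max=1.0))
# ... training ...
h.remove()  # detach the hook when it is no longer needed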
|
st118108
|
Thank you~ it's helpful!
It seems that the hook will be activated during the backward process, and it will automatically affect the gradient of the next layer in the backward pass, right?
I found a way to modify the weights and gradients too, but it's not as awesome as yours, haha.
In torch, I can modify weights and gradients directly by assigning a tensor to them, like this:
model.conv1.weight.grad.data = torch.ones(model.conv1.weight.grad.data.size()).cuda()
This has a slight difference from the hook method if you use optim.step(). But if you write your own step() method and modify the gradients inside the scope of your step(), this method and the hook method will do the same thing. Am I right?
And BTW, the weights are Tensor but the gradients are Variable, very strange, haha. I know there must be some consideration, like grad of grad? Anyway,
Thank you for your help!
pytorch is awesome!
|
st118109
|
Hi everyone, since the indices are all constant and do not require gradient computation, why do we have to pass a Variable type to index the entries in nn.Embedding instead of simply using a torch.LongTensor? Thanks.
|
st118110
|
I have a bunch of Tensors, say of size torch.Size([3, 224, 224]) each, totaling 99 tensors. If I put all these tensors in a python list, say t_list, and I want to put this list into a Tensor object such as t_storage = torch.LongTensor(99, 3, 224, 224),
what is the most efficient way to accomplish this?
|
st118111
|
torch.stack should do the job for this.
torch.stack(sequence, dim=0)
Concatenates sequence of tensors along a new dimension.
All tensors need to be of the same size.
Parameters:
sequence (Sequence) – sequence of tensors to concatenate.
dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive).
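A minimal sketch of the call for the sizes in the question:
t_list = [torch.randn(3, 224, 224) for _ in range(99)]
t_storage = torch.stack(t_list, dim=0)
print(t_storage.size())  # torch.Size([99, 3, 224, 224])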
|
st118112
|
I just noticed that in version 0.1.12 the Variable class doesn’t have attributes creator or previous_functions any more. Is there any alternative method to visualize the network structure? Thanks!
|
st118113
|
Version 0.1.12 is stuck between a couple different changes so you can’t build a network visualizer in it; if you’re using master or you can wait until the next release you’ll be able to visualize again using .next_functions.
|
st118114
|
I have a very large net. I put one part of the net on cuda:0 and another part on cuda:1. But there is no copy_() operation for Variables, so I cannot directly propagate the gradient from one GPU device to another. Will a future release support this feature? Thanks.
|
st118115
|
No. If you want to copy a CUDA variable from one device to another, do:
var2 = var1.cuda(new_device)
|
st118116
|
Can someone please explain how the transforms work and why we normalize the data in the MNIST dataset?
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))
                   ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
|
st118117
|
Hi,
You call the transforms in the same way you would for a loss function:
a = np.random.rand(10,10,1)
b = transforms.ToTensor()(a)
b = transforms.Normalize((mean, ), (std, ))(b)
Note the extra parentheses.
|
st118118
|
Normalizing the data helps the network converge.
If it's not normalized, each image will not be on the same scale; some images will induce bigger errors, others smaller ones.
Every error will be added to the gradient with the same weight and backpropagated, so weight corrections will be overestimated for some images and underestimated for others.
At worst your optimizer will not find a way to minimize your loss due to oscillating too much; more probably it will converge but more slowly; at best there won't be any difference.
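As a side note, a minimal sketch of how statistics like the 0.1307/0.3081 above could be computed for MNIST (loading every image as one big batch):
dataset = datasets.MNIST('../data', train=True, download=True,
                         transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(dataset, batch_size=len(dataset))
images, _ = next(iter(loader))          # a single batch containing every image
print(images.mean(), images.std())      # should come out near 0.1307 and 0.3081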
|
st118119
|
Can PyTorch support manually assigning grad_output like torch?
In torch, each nn.Module has a backward(self, grad_output) method, so we can provide our own manually designed grad_output.
To be more specific:
z = g(f(x)), so from the chain rule: dz/dx = dg(f(x))/df(x) * df(x)/dx.
In pytorch autograd, we just need to call z.backward() to do the back-propagation. If I want to manually provide k = dg(f(x))/df(x) and perform dz/dx = k * df(x)/dx, how should I do it?
Thanks in advance!
Thanks in advanced!
|
st118120
|
You can also do z.backward(grad_output) instead of z.backward() on the last variable.
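A minimal sketch of both options (f, g, x and the manually designed k are assumed to be defined as in the question):
fx = f(x)
z = g(fx)
z.backward(grad_output)  # grad_output = dLoss/dz, same shape as z
# or, to inject k = dz/df(x) directly at f's output:
fx = f(x)
fx.backward(k)           # k must have the same shape as fx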
|
st118121
|
[ 24%] Building NVCC (Device) object CMakeFiles/THC.dir/generated/THC_generated_THCTensorSortInt.cu.o
In file included from /mydir/pytorch/torch/lib/THC/generated/../THCTensorMath.h:17:0,
                 from /mydir/pytorch/torch/lib/THC/generated/../THCTensorMathPointwise.cuh:4,
                 from /mydir/pytorch/torch/lib/THC/generated/THCTensorMathPointwiseShort.cu:1:
/mydir/pytorch/torch/lib/THC/generated/../THCGenerateAllTypes.h:19:34: error: /mydir/pytorch/torch/lib/THC/generated/../THCGenerateShortType.h: Cannot allocate memory
 #include "THCGenerateShortType.h"
In file included from /mydir/pytorch/torch/lib/THC/generated/../THCTensorMathPointwise.cuh:4:0,
                 from /mydir/pytorch/torch/lib/THC/generated/THCTensorMathPointwiseChar.cu:1:
/mydir/pytorch/torch/lib/THC/generated/../THCTensorMath.h:23:33: error: /mydir/pytorch/torch/lib/THC/generated/../THCGenerateAllTypes.h: Cannot allocate memory
 #include "THCGenerateAllTypes.h"
nvcc error : 'cudafe' died due to signal 9 (Kill signal)
Can anyone help with this error? I followed the steps given in the README.
Thanks.
|
st118122
|
Can I load two images from my dataset and pass them through the network together, at the same time? So instead of using conv3d with 3 input channels (R, G, B), I could use it with 9 input channels for the two images?
If yes, can you please give me some hints about how to do that?
Thank you
|
st118123
|
If I understand your question correctly, you want to concatenate two 2D RGB images together and process them at the same time.
You are talking about using conv3d; unless you have a 3d volume as input, I don't see how that's going to work.
Merging 2 RGB images, you would end up with one tensor of 6 channels.
Your question does not make sense unless it can be interpreted in real life. For example, in medical imaging it is possible to use multiple images coming from ultrasound, MRI, CT, etc. if they come from the same patient/view; there is a real meaning in doing this (contrast/resolution/etc.).
To conclude: unless the two images represent a sequence (video, audio), I don't think it's interesting to do this. Think of how you are going to predict something during inference.
If you want to do this anyway, I would suggest using torch.cat to concatenate tensors along a given dimension:
http://pytorch.org/docs/torch.html#torch.cat
|
st118124
|
Thank you for your prompt reply.
Actually, I want to do this for a video sequence, because when I pass one single frame through the network, after the third MaxPool3d layer one of the dimensions becomes null (equal to zero), so I get this kind of error:
"output size is too small"
So I thought that if I add more input channels, the dimension will not reach 0.
|
st118125
|
Ok, if you are working with video sequences, then it makes sense to work with 3d convolutions. However, I have no experience with this kind of problem.
About your error: it is usually a good start to write down your network on paper, start from the input volume, and work out the output size of each operation, so you can better understand where you are going; this is usually how I do it.
If you want to understand how an nn.Module transforms your data, wrap your input inside a Variable and forward it through the Module; then you will see the dimensions of the output with size().
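A minimal sketch of that probing trick (the input sizes here are placeholders; some_module is whatever nn.Module you want to inspect):
x = Variable(torch.randn(1, 3, 16, 112, 112))  # (N, C, depth, H, W)
out = some_module(x)
print(out.size())  # the shape this stage actually produces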
|
st118126
|
I have a minimal example to reproduce the issue:
github.com/pytorch/pytorch — "Conv1d only accepts FloatTensor" (opened and closed May 6, 2017, by geyang)
It looks like Conv1d only accepts FloatTensor, and when it is fed DoubleTensor it errors out.
Here is a short example
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
x_stub = Variable(torch.DoubleTensor(100, 15, 12).normal_(0, 1))
conv_1 = nn.Conv1d(15, 15, 3)
y = conv_1(x_stub)
so to show what's going on, I added the following line to the source:
def conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1,
           groups=1):
    f = ConvNd(_single(stride), _single(padding), _single(dilation), False,
               _single(0), groups, torch.backends.cudnn.benchmark, torch.backends.cudnn.enabled)
    print(input, weight, bias)  # <= this is the added line
    return f(input, weight, bias)
When running the code, it gives me the following error message:
Variable containing:
( 0 ,.,.) =
1 1 1 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
... ⋱ ...
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 1
...
(127,.,.) =
1 1 1 ... 0 0 0
0 0 0 ... 0 0 0
0 0 0 ... 0 0 0
... ⋱ ...
0 0 0 ... 0 0 0
0 0 0 ... 0 1 0
0 0 0 ... 0 0 1
[torch.DoubleTensor of size 128x12x15]
Parameter containing:
(0 ,.,.) =
0.1286 -0.1301
-0.0871 0.0397
0.0317 0.0072
-0.0406 0.0803
-0.1885 0.1544
0.1090 -0.1772
-0.1818 0.0865
-0.1696 -0.0973
-0.1179 -0.0781
-0.0745 -0.1268
0.1303 -0.0950
0.0804 -0.1008
...
(11,.,.) =
-0.1984 -0.1655
-0.0531 -0.0365
-0.1009 0.2038
-0.0382 0.1492
-0.1048 -0.1378
0.0774 0.0515
-0.0548 -0.1791
-0.1805 0.0558
-0.1805 -0.0603
-0.1938 0.0465
-0.1470 -0.0298
-0.1597 -0.1718
[torch.FloatTensor of size 12x12x2]
Parameter containing:
0.2023
0.0939
-0.2037
0.1501
-0.0270
0.0494
0.0637
-0.1420
0.1512
-0.1538
-0.1828
0.0366
[torch.FloatTensor of size 12]
Traceback (most recent call last):
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/grammar_vae.py", line 69, in <module>
losses += sess.train(train_loader, epoch)
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/grammar_vae.py", line 26, in train
recon_batch, mu, log_var = self.model(data)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/model.py", line 90, in forward
mu, log_var = self.encoder(x)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/Users/usr/projects/deep_learning_notes/pytorch_playground/grammar_variational_autoencoder/model.py", line 47, in forward
h = self.conv_1(x)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/modules/conv.py", line 143, in forward
self.padding, self.dilation, self.groups)
File "/Users/usr/anaconda/envs/deep-learning/lib/python3.6/site-packausrs/torch/nn/functional.py", line 69, in conv1d
return f(input, weight, bias)
RuntimeError: expected Double tensor (got Float tensor)
|
st118127
|
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
x_stub = Variable(torch.DoubleTensor(100, 15, 12).normal_(0, 1))
conv_1 = nn.Conv1d(15, 15, 3).double()  # .double() casts the layer's parameters to DoubleTensor to match the input
y = conv_1(x_stub)
|
st118128
|
Hey there,
I'm trying to implement a pytorch module that wraps any module and applies the wrapped module's operation to every time step of the input. This is pretty much the same as Keras' TimeDistributed wrapper.
To do so, I want to simply reshape the input to two dimensions, apply the operation, and then reshape it back.
However, I'm having trouble doing x_reshape = x.view(-1, x.size(-1)) when the nn.LSTM cell has batch_first=True, which gets me the error:
RuntimeError: input is not contiguous at /Users/soumith/code/pytorch-builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:231
Hopefully someone here can help me solve this problem…
Cheers
|
st118129
|
view only works on tensors which are contiguous in memory, so in general you have to write .contiguous().view(sizes)
Also, the Bottle mixin in the SNLI model script (in the examples repo) may do more or less what you’re looking for, at least for some layers (nn.Bottle was the name for this in Torch7)
|
st118130
|
@jekbradbury
So indeed, in order to avoid the extra .contiguous() call, it's strongly recommended to go with batch_first=False?
Because I think this function call would invoke a memory copy to make a contiguous array under the hood.
rnn = th.nn.LSTM(5, 10, 1, batch_first=True) # batch_first
x = th.autograd.Variable(th.randn(1, 2, 5)) # batch size 1
out, state = rnn(x)
print(out.is_contiguous()) # TRUE
x = th.autograd.Variable(th.randn(2, 2, 5)) # batch size 2
out, state = rnn(x)
print(out.is_contiguous()) # FALSE
rnn = th.nn.LSTM(5, 10, 1) # seq_len first
x = th.autograd.Variable(th.randn(2, 1, 5)) # batch size 1
out, state = rnn(x)
print(out.is_contiguous()) # TRUE
x = th.autograd.Variable(th.randn(2, 2, 5)) # batch size 2
out, state = rnn(x)
print(out.is_contiguous()) # TRUE
|
st118131
|
Yes, that’s right, although the cost of .contiguous() is usually not that much compared to the LSTM itself.
|
st118132
|
Are there any differences between nn.activations and nn.functional.activations?
For example, the class torch.nn.ReLU and torch.nn.functional.relu — the two activation functions seem to be the same. Can I use one of them to replace the other?
|
st118133
|
PyTorch activations use the functional API under the hood. So you can safely use one or the other. However, the nn module version additionally implements __repr__, which lets you easily pretty-print a summary of your network.
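A minimal sketch showing the two forms compute the same thing:
x = torch.randn(3)
print(nn.ReLU()(x))  # module form
print(F.relu(x))     # functional form -- identical output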
|
st118134
|
The second tensor is not a batch; it only has 2 dimensions. I want to multiply the second tensor with each item of the first batch.
|
st118135
|
Is it ok to put some nn.Module in the forward() function of my class?
Will they be correctly registered and the gradients be computed correctly?
For example, is my code below correct?
(My dropout, pooling and upsampling are in the forward() method and not in __init__(). Or should I use the functional form?)
def make_conv_bn_prelu(in_channels, out_channels, kernel_size=3, stride=1, padding=1):
    return [
        nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding, bias=False),
        nn.BatchNorm2d(out_channels),
        nn.PReLU(out_channels),
    ]

def make_flat(out):
    flat = nn.AdaptiveAvgPool2d(1)(out)
    flat = flat.view(flat.size(0), -1)
    return flat

class MyNet(nn.Module):
    def __init__(self, in_shape, num_classes):
        super(MyNet, self).__init__()
        in_channels, height, width = in_shape
        stride = 1
        self.preprocess = nn.Sequential(
            *make_conv_bn_prelu(in_channels, 8, kernel_size=1, stride=1, padding=0),
            *make_conv_bn_prelu(8, 8, kernel_size=1, stride=1, padding=0),
            *make_conv_bn_prelu(8, 8, kernel_size=1, stride=1, padding=0),
            *make_conv_bn_prelu(8, 8, kernel_size=1, stride=1, padding=0),
        )
        self.down0 = nn.Sequential(
            *make_conv_bn_prelu(8, 32),
            *make_conv_bn_prelu(32, 32, kernel_size=1, stride=1, padding=0),
        )
        self.down1 = nn.Sequential(
            *make_conv_bn_prelu(32, 32),
            *make_conv_bn_prelu(32, 32, kernel_size=1, stride=1, padding=0),
        )
        self.down2 = nn.Sequential(
            *make_conv_bn_prelu(32, 32),
            *make_conv_bn_prelu(32, 32, kernel_size=1, stride=1, padding=0),
        )
        self.down3 = nn.Sequential(
            *make_conv_bn_prelu(32, 32),
            *make_conv_bn_prelu(32, 32, kernel_size=1, stride=1, padding=0),
        )
        self.up2 = nn.Sequential(
            *make_conv_bn_prelu(32+32, 32, kernel_size=1, stride=1, padding=0),
            *make_conv_bn_prelu(32, 32),
        )
        self.up1 = nn.Sequential(
            *make_conv_bn_prelu(32+32, 32, kernel_size=1, stride=1, padding=0),
            *make_conv_bn_prelu(32, 32),
        )
        self.block = nn.Sequential(
            *make_linear_bn_prelu(32+32+32, 512),  # make_linear_bn_prelu is defined elsewhere in my code
            *make_linear_bn_prelu(512, 512),
        )
        self.logit = nn.Linear(512, num_classes)

    def forward(self, x):
        out = self.preprocess(x)
        down0 = self.down0(out)
        out = nn.MaxPool2d(kernel_size=2, stride=2)(down0)
        down1 = self.down1(out)
        out = nn.MaxPool2d(kernel_size=2, stride=2)(down1)
        down2 = self.down2(out)
        out = nn.MaxPool2d(kernel_size=2, stride=2)(down2)
        out = self.down3(out)
        flat3 = make_flat(out)

        up2 = nn.UpsamplingNearest2d(scale_factor=2)(out)
        up2 = torch.cat([down2, up2], 1)
        out = self.up2(up2)
        flat2 = make_flat(out)

        up1 = nn.UpsamplingNearest2d(scale_factor=2)(out)
        up1 = torch.cat([down1, up1], 1)
        out = self.up1(up1)
        flat1 = make_flat(out)

        out = torch.cat([flat1, flat2, flat3], 1)
        out = nn.Dropout(p=0.10)(out)
        out = self.block(out)
        out = self.logit(out)
        return out
|
st118136
|
@mratsim thank you for the reply. I understand nn.functional can be used in the forward part. Will nn.Module work too? My code seems to run, but I am not very sure if it is correct.
|
st118137
|
Yes your code is correct. Nevertheless, this is what the functional interface is for, and it’d be a better fit here.
|
st118138
|
Hi all,
This is a beginner question. I’m doing the 60 minute Blitz tutorial, but when running the cifar10 classifier, I get on line
optimizer = optim.SGD(Network.parameters(), lr=0.001, momentum=0.9)
this error:
TypeError: parameters() missing 1 required positional argument: ‘self’
I copied the tutorial line by line, so I have no clue what is going wrong.
Any help will be much appreciated!
|
st118139
|
I installed it from the main pytorch webpage and am using pytorch version
0.1.11 for python 3.5
|
st118140
|
it doesn’t look like you copied the code as-is:
optimizer = optim.SGD(Network.parameters(), lr=0.001, momentum=0.9)
This line is not in the tutorial, the line in the tutorial is:
optimizer = optim.SGD(net.parameters(), lr=0.01)
Here net is an object of the class Net, and maybe in your case Network is a class and not an object.
|
st118141
|
Thank you! Yeah that would’ve been a problem, but I changed some other variable names too, thereby maintaining the same structure. Eventually I reinstalled pytorch and it worked for me!
Cheers
|
st118142
|
I want to use conv1d with kernel_size=4 and stride=1, and I want the output of the conv1d to have the same width as the original input. To achieve this I need to pad 3 zeros in total, which seems impossible with conv1d's symmetric padding argument alone.
Any suggestions about how to solve this kind of problem? Thanks.
|
st118143
|
You can use F.pad to add a different number of zeros on the left and on the right of your input (because your kernel has an even size).
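A minimal sketch of that, assuming an input of shape (N, C, W); with kernel_size=4 and stride=1 the output width is W + 3 - 4 + 1 = W:
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv1d(8, 8, kernel_size=4, stride=1)
x = torch.randn(1, 8, 100)
x = F.pad(x, (1, 2))   # 1 zero on the left, 2 on the right: 3 in total
print(conv(x).size())  # (1, 8, 100) -- same width as the original input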
|
st118144
|
Hi,
Can somebody please help me understand why my PyTorch cost is not converging while my Numpy one does (using the same logic)? I am trying to create a fizbuz program similar to Joel's tensorflow implementation.
I have a 2-layer dense network with sigmoid activation in both layers. I am using MSE for the cost and the same hyperparameters for both the numpy and pytorch scripts. My numpy script converges to the global minimum in less than 1k epochs, while the PyTorch one is still jumping around even after 5k.
Pytorch implementation:
import numpy as np
import torch as th
from torch.autograd import Variable

input_size = 10
epochs = 1000
batches = 64
lr = 0.01

def binary_enc(num):
    ret = [int(i) for i in '{0:b}'.format(num)]
    return [0] * (input_size - len(ret)) + ret

def binary_dec(array):
    ret = 0
    for i in array:
        ret = ret * 2 + int(i)
    return ret

def training_test_gen(x, y):
    assert len(x) == len(y)
    indices = np.random.permutation(range(len(x)))
    split_size = int(0.9 * len(indices))
    trX = x[indices[:split_size]]
    trY = y[indices[:split_size]]
    teX = x[indices[split_size:]]
    teY = y[indices[split_size:]]
    return trX, trY, teX, teY

def x_y_gen():
    x = []
    y = []
    for i in range(1000):
        x.append(binary_enc(i))
        if i % 15 == 0:
            y.append([1, 0, 0, 0])
        elif i % 5 == 0:
            y.append([0, 1, 0, 0])
        elif i % 3 == 0:
            y.append([0, 0, 1, 0])
        else:
            y.append([0, 0, 0, 1])
    return training_test_gen(np.array(x), np.array(y))

def check_fizbuz(i):
    if i % 15 == 0:
        return 'fizbuz'
    elif i % 5 == 0:
        return 'buz'
    elif i % 3 == 0:
        return 'fiz'
    else:
        return 'number'

trX, trY, teX, teY = x_y_gen()
if th.cuda.is_available():
    dtype = th.cuda.FloatTensor
else:
    dtype = th.FloatTensor
x = Variable(th.from_numpy(trX).type(dtype), requires_grad=False)
y = Variable(th.from_numpy(trY).type(dtype), requires_grad=False)
w1 = Variable(th.randn(10, 100).type(dtype), requires_grad=True)
w2 = Variable(th.randn(100, 4).type(dtype), requires_grad=True)
b1 = Variable(th.zeros(1, 100).type(dtype), requires_grad=True)
b2 = Variable(th.zeros(1, 4).type(dtype), requires_grad=True)
no_of_batches = int(len(trX) / batches)
for epoch in range(epochs):
    for batch in range(no_of_batches):
        start = batch * batches
        end = start + batches
        x_ = x[start:end]
        y_ = y[start:end]
        a2 = x_.mm(w1)
        a2 = a2.add(b1.expand_as(a2))
        h2 = a2.sigmoid()
        a3 = h2.mm(w2)
        a3 = a3.add(b2.expand_as(a3))
        hyp = a3.sigmoid()
        error = hyp - y_
        loss = error.pow(2).sum()
        loss.backward()
        w1.data -= lr * w1.grad.data
        w2.data -= lr * w2.grad.data
        b1.data -= lr * b1.grad.data
        b2.data -= lr * b2.grad.data
        w1.grad.data.zero_()
        w2.grad.data.zero_()
    print(epoch, error.mean().data[0])
Numpy Implementation:
import numpy as np

input_size = 10
epochs = 1000
batches = 64
lr = 0.01

def sig(val):
    return 1 / (1 + np.exp(-val))

def sig_d(val):
    sig_val = sig(val)
    return sig_val * (1 - sig_val)

def binary_enc(num):
    ret = [int(i) for i in '{0:b}'.format(num)]
    return [0] * (input_size - len(ret)) + ret

def binary_dec(array):
    ret = 0
    for i in array:
        ret = ret * 2 + int(i)
    return ret

def training_test_gen(x, y):
    assert len(x) == len(y)
    indices = np.random.permutation(range(len(x)))
    split_size = int(0.9 * len(indices))
    trX = x[indices[:split_size]]
    trY = y[indices[:split_size]]
    teX = x[indices[split_size:]]
    teY = y[indices[split_size:]]
    return trX, trY, teX, teY

def x_y_gen():
    x = []
    y = []
    for i in range(1000):
        x.append(binary_enc(i))
        if i % 15 == 0:
            y.append([1, 0, 0, 0])
        elif i % 5 == 0:
            y.append([0, 1, 0, 0])
        elif i % 3 == 0:
            y.append([0, 0, 1, 0])
        else:
            y.append([0, 0, 0, 1])
    return training_test_gen(np.array(x), np.array(y))

def check_fizbuz(i):
    if i % 15 == 0:
        return 'fizbuz'
    elif i % 5 == 0:
        return 'buz'
    elif i % 3 == 0:
        return 'fiz'
    else:
        return 'number'

trX, trY, teX, teY = x_y_gen()
w1 = np.random.randn(10, 100)
w2 = np.random.randn(100, 4)
b1 = np.zeros((1, 100))
b2 = np.zeros((1, 4))
no_of_batches = int(len(trX) / batches)
for epoch in range(epochs):
    for batch in range(no_of_batches):
        # forward
        start = batch * batches
        end = start + batches
        x = trX[start:end]
        y = trY[start:end]
        a2 = x.dot(w1) + b1
        h2 = sig(a2)
        a3 = h2.dot(w2) + b2
        hyp = sig(a3)
        error = hyp - y
        loss = (error ** 2).mean()
        # backward
        outerror = error
        outgrad = outerror * sig_d(a3)
        outdelta = h2.T.dot(outgrad)
        outbiasdelta = np.ones([1, batches]).dot(outgrad)
        hiddenerror = outerror.dot(w2.T)
        hiddengrad = hiddenerror * sig_d(a2)
        hiddendelta = x.T.dot(hiddengrad)
        hiddenbiasdelta = np.ones([1, batches]).dot(hiddengrad)
        w1 -= hiddendelta * lr
        b1 -= hiddenbiasdelta * lr
        w2 -= outdelta * lr
        b2 -= outbiasdelta * lr
    print(epoch, loss)

# test
a2 = teX.dot(w1) + b1
h2 = sig(a2)
a3 = h2.dot(w2) + b2
hyp = sig(a3)
outli = ['fizbuz', 'buz', 'fiz', 'number']
for i in range(len(teX)):
    num = binary_dec(teX[i])
    print(
        'Number: {} -- Actual: {} -- Prediction: {}'.format(
            num, check_fizbuz(num), outli[hyp[i].argmax()]))
print('Test loss: ', np.mean(teY - hyp))
|
st118145
|
in torch:
loss = error.pow(2).sum()
in numpy:
loss = (error ** 2).mean()
That's the point — the two losses differ by a scale factor, so the effective learning rates don't match. Maybe change the learning rate to lr*(end-start), or using
loss = error.pow(2).mean()
fixes it on my computer.
|
st118146
|
Note that pytorch also implements the ** operator, so you can effectively have the same code for both torch and numpy.
|
st118147
|
@chenyuntc I don't see any improvement from changing sum to mean.
And about changing the learning rate: changing the hyperparameter will definitely improve the performance, because my NN was still learning. But my problem is not with my network's isolated performance. Numpy gives me a loss of 0.009 and an accuracy of more than 0.98 with the same hyperparameters I used for pytorch, but pytorch is nowhere near that.
|
st118148
|
I thought pow(2) and ** 2 are the same and would not affect the output. Will try that anyway.
|
st118149
|
Sorry about the mistake:
I found it. You missed:
b1.grad.data.zero_()
b2.grad.data.zero_()
Also print
print(epoch, (error**2).mean().data[0])
and you should get similar results as numpy.
|
st118150
|
@chenyuntc
I see some improvements, but still far away from Numpy's. I set the random seed to 10000 and tried to print the weights in each epoch. I could not understand two specific behaviors:
Numpy weights are printed with 8 decimal places (0.65897509), while pytorch rounds to 4 digits (0.6590).
The change in weight values is much faster in numpy.
Change of the first weight matrix's first value in Numpy:
0.65897509416461408
0.63824423351700321
0.70742434009324673
0.74590493637361743
0.76070011091500567
0.76704695584086358
0.77166554222431838
0.77654595399345228
0.78221473000011987
0.7887668125254953
Change of the first weight matrix's first value in Pytorch:
0.6590
0.6585
0.6580
0.6575
0.6570
0.6565
0.6561
0.6556
0.6551
0.6547
0.6542
Since both scripts use seed(10000), we got the same initial value (pytorch rounds it when printing, though), but the change is super slow in pytorch.
|
st118151
|
It's not rounded in torch; it's just printed in this format.
In pytorch, you should use loss = error.pow(2).sum() / 2.0 to get the same grad as numpy — sorry about the mistake.
If it still doesn't work, you can print w1.grad.data and compare it to the grad you compute; that may be helpful.
|
st118152
|
So when I print it, pytorch rounds the value for printing, but for computation it uses the actual value?
I'll do 2 & 3 and verify. Thanks @chenyuntc
|
st118153
|
PyTorch computes using the full precision of the data type; it is just the display that truncates the numbers.
You can change that by modifying set_printoptions:
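For example:
torch.set_printoptions(precision=8)  # print 8 decimal places instead of the default 4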
|
st118154
|
I could see some differences again (not an improvement exactly), but still far from Numpy's accuracy. I'm trying different approaches and asking around. Will update here if I find a solution. Please let me know if you have any other intuitions.
|
st118155
|
The difference actually comes from your numpy code:
hiddenerror = outerror.dot(w2.T)
which should be:
hiddenerror = outgrad.dot(w2.T)
Even without modifying this, both the pytorch and numpy code should converge to similar results (0.038/0.014), so maybe something else is wrong in your running code.
I used the code below to test and got nearly the same results:
pytorch: (999, 0.03878042474389076)
numpy: (999, 0.038780463080550241)
pytorch code:
import numpy as np
import torch as th
from torch.autograd import Variable

input_size = 10
epochs = 1000
batches = 64
lr = 0.01
np.random.seed(10000)

def binary_enc(num):
    ret = [int(i) for i in '{0:b}'.format(num)]
    return [0] * (input_size - len(ret)) + ret

def binary_dec(array):
    ret = 0
    for i in array:
        ret = ret * 2 + int(i)
    return ret

def training_test_gen(x, y):
    assert len(x) == len(y)
    indices = np.random.permutation(range(len(x)))
    split_size = int(0.9 * len(indices))
    trX = x[indices[:split_size]]
    trY = y[indices[:split_size]]
    teX = x[indices[split_size:]]
    teY = y[indices[split_size:]]
    return trX, trY, teX, teY

def x_y_gen():
    x = []
    y = []
    for i in range(1000):
        x.append(binary_enc(i))
        if i % 15 == 0:
            y.append([1, 0, 0, 0])
        elif i % 5 == 0:
            y.append([0, 1, 0, 0])
        elif i % 3 == 0:
            y.append([0, 0, 1, 0])
        else:
            y.append([0, 0, 0, 1])
    return training_test_gen(np.array(x), np.array(y))

def check_fizbuz(i):
    if i % 15 == 0:
        return 'fizbuz'
    elif i % 5 == 0:
        return 'buz'
    elif i % 3 == 0:
        return 'fiz'
    else:
        return 'number'

trX, trY, teX, teY = x_y_gen()
if th.cuda.is_available():
    dtype = th.cuda.FloatTensor
else:
    dtype = th.FloatTensor
x = Variable(th.from_numpy(trX).type(dtype), requires_grad=False)
y = Variable(th.from_numpy(trY).type(dtype), requires_grad=False)
w1 = np.random.randn(10, 100)
w2 = np.random.randn(100, 4)
w1 = Variable(th.from_numpy(w1).type(dtype), requires_grad=True)
w2 = Variable(th.from_numpy(w2).type(dtype), requires_grad=True)
b1 = Variable(th.zeros(1, 100).type(dtype), requires_grad=True)
b2 = Variable(th.zeros(1, 4).type(dtype), requires_grad=True)
no_of_batches = int(len(trX) / batches)
for epoch in range(epochs):
    for batch in range(no_of_batches):
        start = batch * batches
        end = start + batches
        x_ = x[start:end]
        y_ = y[start:end]
        a2 = x_.mm(w1)
        a2 = a2.add(b1.expand_as(a2))
        h2 = a2.sigmoid()
        a3 = h2.mm(w2)
        a3 = a3.add(b2.expand_as(a3))
        hyp = a3.sigmoid()
        error = hyp - y_
        loss = error.pow(2).sum() / 2.0
        loss.backward()
        w1.data -= lr * w1.grad.data
        w2.data -= lr * w2.grad.data
        b1.data -= lr * b1.grad.data
        b2.data -= lr * b2.grad.data
        w1.grad.data.zero_()
        w2.grad.data.zero_()
        b1.grad.data.zero_()
        b2.grad.data.zero_()
    print(epoch, (error**2).mean().data[0])
numpy code:
import numpy as np

input_size = 10
epochs = 1000
batches = 64
lr = 0.01
np.random.seed(10000)

def sig(val):
    return 1 / (1 + np.exp(-val))

def sig_d(val):
    sig_val = sig(val)
    return sig_val * (1 - sig_val)

def binary_enc(num):
    ret = [int(i) for i in '{0:b}'.format(num)]
    return [0] * (input_size - len(ret)) + ret

def binary_dec(array):
    ret = 0
    for i in array:
        ret = ret * 2 + int(i)
    return ret

def training_test_gen(x, y):
    assert len(x) == len(y)
    indices = np.random.permutation(range(len(x)))
    split_size = int(0.9 * len(indices))
    trX = x[indices[:split_size]]
    trY = y[indices[:split_size]]
    teX = x[indices[split_size:]]
    teY = y[indices[split_size:]]
    return trX, trY, teX, teY

def x_y_gen():
    x = []
    y = []
    for i in range(1000):
        x.append(binary_enc(i))
        if i % 15 == 0:
            y.append([1, 0, 0, 0])
        elif i % 5 == 0:
            y.append([0, 1, 0, 0])
        elif i % 3 == 0:
            y.append([0, 0, 1, 0])
        else:
            y.append([0, 0, 0, 1])
    return training_test_gen(np.array(x), np.array(y))

def check_fizbuz(i):
    if i % 15 == 0:
        return 'fizbuz'
    elif i % 5 == 0:
        return 'buz'
    elif i % 3 == 0:
        return 'fiz'
    else:
        return 'number'

trX, trY, teX, teY = x_y_gen()
w1 = np.random.randn(10, 100)
w2 = np.random.randn(100, 4)
b1 = np.zeros((1, 100))
b2 = np.zeros((1, 4))
no_of_batches = int(len(trX) / batches)
for epoch in range(epochs):
    for batch in range(no_of_batches):
        # forward
        start = batch * batches
        end = start + batches
        x = trX[start:end]
        y = trY[start:end]
        a2 = x.dot(w1) + b1
        h2 = sig(a2)
        a3 = h2.dot(w2) + b2
        hyp = sig(a3)
        error = hyp - y
        loss = (error ** 2).mean()
        # backward
        outerror = error
        outgrad = outerror * sig_d(a3)
        outdelta = h2.T.dot(outgrad)
        outbiasdelta = np.ones([1, batches]).dot(outgrad)
        hiddenerror = outgrad.dot(w2.T)
        hiddengrad = hiddenerror * sig_d(a2)
        hiddendelta = x.T.dot(hiddengrad)
        hiddenbiasdelta = np.ones([1, batches]).dot(hiddengrad)
        w1 -= hiddendelta * lr
        b1 -= hiddenbiasdelta * lr
        w2 -= outdelta * lr
        b2 -= outbiasdelta * lr
    print(epoch, loss)

# test
a2 = teX.dot(w1) + b1
h2 = sig(a2)
a3 = h2.dot(w2) + b2
hyp = sig(a3)
outli = ['fizbuz', 'buz', 'fiz', 'number']
for i in range(len(teX)):
    num = binary_dec(teX[i])
    print(
        'Number: {} -- Actual: {} -- Prediction: {}'.format(
            num, check_fizbuz(num), outli[hyp[i].argmax()]))
print('Test loss: ', np.mean(teY - hyp))
|
st118156
|
This is awesome!! I got the same accuracy and loss. Thank you so much @chenyuntc. Could you please explain the intuition behind using
loss = error.pow(2).sum() / 2.0
over
loss = error.pow(2).mean()
especially when you print the error with
(error**2).mean().data[0]?
And could you describe how it is similar to Numpy's implementation,
loss = (error ** 2).mean()
|
st118157
|
If loss = error.pow(2).sum()/2.0, then dloss/derror = error.
If loss = error.pow(2).mean(), then dloss/derror = 2*error/(number of elements) — with your batch_size of 64 (and 4 outputs per sample) that is a much smaller gradient.
Because in the numpy implementation outerror = error, we should use the first form of the loss.
I print (error**2).mean().data[0] because you are doing this in numpy:
loss = (error ** 2).mean()
...
...
print(epoch, loss)
They are the same, but the pytorch code can backward and calculate the grad automatically.
|
st118158
|
According to the spec of torch.utils.data.Dataset:
An abstract class representing a Dataset.
All other datasets should subclass it. All subclasses should override
__len__, that provides the size of the dataset, and __getitem__,
supporting integer indexing in range from 0 to len(self) exclusive.
My problem is that, what if the data comes in an online streaming fashion, and I'm not able to determine __len__ at all? Or I just have a very large dataset and intend to iterate over it only once, so I don't care about its __len__.
In both cases, could I safely ignore the __len__ function when subclassing Dataset?
|
st118159
|
You should override __len__ even for a streaming dataset, since it’s called by some other classes. You can set it to the number of examples you want per-“epoch” or just a very large integer.
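A minimal sketch of that, assuming the samples come from some iterator you manage yourself:
import torch.utils.data as data

class StreamingDataset(data.Dataset):
    def __init__(self, sample_iter, samples_per_epoch=100000):
        self.sample_iter = sample_iter          # hypothetical source of (input, target) pairs
        self.samples_per_epoch = samples_per_epoch

    def __getitem__(self, index):
        return next(self.sample_iter)           # index is ignored for a pure stream

    def __len__(self):
        return self.samples_per_epoch           # nominal "epoch" length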
|
st118160
|
Hi there,
I read through the Overfeat Lua implementation, @Soumith_Chintala's fork, as well as Sermanet's C implementation. I have a few questions and I hope you will be kind enough to help me out and make me understand localization and detection with convnets.
In the overfeat paper, it is mentioned that a regression network was stacked on the previous layers (i.e. excluding the classifier layer) so that localization + detection is done in this regression layer. I have read through the C code and the Lua code and see nowhere that this is explicitly implemented (maybe I am missing something). Could you clarify this for me, please?
Secondly, it is mentioned in the paper that bounding boxes are predicted and used to localize and detect objects in the image. This also I did not find in any of the code I have seen. They mention that the bounding box is provided alongside the images as input to the convnet. How is the ground truth bounding box generated? Was it manually drawn over the image before being forwarded through the net, or was it specified in code? If it was specified in code, how are the aspect ratios generated? I have been looking around but have found few pointers on how this is actually implemented. I would appreciate a clarification on this.
If I have just one object I am trying to identify, localize and detect in a scene, I am assuming that, using the proposed detection and localization as in overfeat, R-CNN, or single-shot detector, I would not have to crop my training images to contain just the ROI I am trying to find, since this is the whole idea of multiple bounding box predictions and detection. Am I correct? How does one avoid multiple predictions and just get the bounding box for the one object one is trying to find in an image?
How easy would it be to stack a regression net, such as in overfeat, on top of the resnet18 layers, excluding the last layer, in pytorch, for example?
Would appreciate help.
Sincerely,
Lekan
|
st118161
|
they only released overfeat code for classification. Detection code is not released.
|
st118162
|
I want to implement YOLO (You Only Look Once) in Pytorch. I wrote its code and set the batch size to 64, but when I ran the algorithm, the cost always increased. When I set the batch size to 32, the cost decreased over the long term. Could you please tell me whether this is logical or not?
|
st118163
|
A smaller batch size gives the gradients sufficient noise to jump out of valleys. That being said, 64 is not that big a size. Are you looking at the loss per image or the total loss (which would be higher for a higher batch size)?
|
st118164
|
Thanks for your response @Mika_S! I found that one of my code lines had a problem: it produced a NaN value. I would like to know, have you ever read the YOLO paper?
|
st118165
|
I read the YOLO paper a year back. But shoot me questions and I can try to answer as best as I can :).
|
st118166
|
I have a question about its cost function implementation, because the source code is written in C and I am a newbie in C. I would like to know, do you have any implementation of the cost in python? I have implemented it, but I think the YOLO authors use some unknown tricks to obtain the best answer. Is it possible to collaborate with each other to provide a python implementation of it?
I started a post in the google group of darknet (YOLO's basic framework) but did not get any answers.
|
st118167
|
Sorry for the late reply. Unfortunately I do not have an implementation of the cost in python.
|