st118868
|
No, it’s not possible to do it with the groups option.
The only way I see of doing it would be to hackily wrap SpatialConvolutionMap (from Lua torch) in pytorch, but it doesn’t have GPU kernels implemented, so you’d need to run it on the CPU.
Furthermore, if I remember properly, it is unlikely that we will have optimized GPU kernels for SpatialConvolutionMap.
|
st118869
|
Hi!
I need something like CAdd in torch:
github.com
torch/nn/blob/master/doc/simple.md#nn.CAdd 12
<a name="nn.simplelayers.dok"></a>
# Simple layers #
Simple Modules are used for various tasks like adapting Tensor methods and providing affine transformations :
* Parameterized Modules :
* [Linear](#nn.Linear) : a linear transformation ;
* [LinearWeightNorm](#nn.LinearWeightNorm) : a weight normalized linear transformation ;
* [SparseLinear](#nn.SparseLinear) : a linear transformation with sparse inputs ;
* [IndexLinear](#nn.IndexLinear) : an alternative linear transformation with for sparse inputs and max normalization ;
* [Bilinear](#nn.Bilinear) : a bilinear transformation with sparse inputs ;
* [PartialLinear](#nn.PartialLinear) : a linear transformation with sparse inputs with the option of only computing a subset ;
* [Add](#nn.Add) : adds a bias term to the incoming data ;
* [CAdd](#nn.CAdd) : a component-wise addition to the incoming data ;
* [Mul](#nn.Mul) : multiply a single scalar factor to the incoming data ;
* [CMul](#nn.CMul) : a component-wise multiplication to the incoming data ;
* [Euclidean](#nn.Euclidean) : the euclidean distance of the input to `k` mean centers ;
* [WeightedEuclidean](#nn.WeightedEuclidean) : similar to [Euclidean](#nn.Euclidean), but additionally learns a diagonal covariance matrix ;
* [Cosine](#nn.Cosine) : the cosine similarity of the input to `k` mean centers ;
* [Kmeans](#nn.Kmeans) : [Kmeans](https://en.wikipedia.org/wiki/K-means_clustering) clustering layer;
* Modules that adapt basic Tensor methods :
So it would be like Linear (y = Ax + b) but without a learnable A.
How can I do this in PyTorch?
|
st118870
|
Hi,
Yes, you just need to do usual math operations and it will work just fine. For example
weight = nn.Parameter(torch.rand(4))
input = Variable(torch.rand(4))
output = input * weight # weight is a learnable parameter
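If you specifically want something like CAdd (i.e. Linear without a learnable A), a minimal sketch could look like this; the module name is made up, and it assumes a PyTorch version with broadcasting (on older versions you would expand the bias to the input size first):
import torch
import torch.nn as nn

class CAdd(nn.Module):
    """Component-wise learnable bias, roughly like nn.CAdd in Lua torch."""
    def __init__(self, size):
        super(CAdd, self).__init__()
        self.bias = nn.Parameter(torch.zeros(size))   # learnable, like the b in y = Ax + b

    def forward(self, x):
        return x + self.bias                          # broadcast over the batch dimension
For a full y = Ax + b with a fixed A, you could keep A in a buffer (self.register_buffer('A', A)) so it is not returned by parameters() and therefore never updated by the optimizer, while the bias stays an nn.Parameter.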
|
st118871
|
Hi,
I would like to implement a sharing weight ‘alexnet’ with two input batches of images, and the fully connected layers are concatenated together like this:
class ALEXNET_two_scale(nn.Module):
    def __init__(self, num_classes,
                 original_model=models.__dict__['alexnet'](pretrained=True)):
        super(ALEXNET_two_scale, self).__init__()
        self.features = original_model.features
        self.drop_out = nn.Dropout(p=0.75)
        self.fc6_scl1 = nn.Linear(256 * 6 * 6, 4096)
        self.fc6_scl2 = nn.Linear(256 * 6 * 6, 4096)
        self.relu = nn.ReLU(inplace=True)
        self.fc7_scl1 = nn.Linear(4096, 2048)
        self.fc7_scl2 = nn.Linear(4096, 2048)
        self.fc8 = nn.Linear(4096, num_classes)

    def forward(self, imgs_scl1, imgs_scl2):
        x_1 = self.features(imgs_scl1)
        x_2 = self.features(imgs_scl2)
        x_1 = self.drop_out(x_1)
        x_2 = self.drop_out(x_2)
        x_1 = self.fc6_scl1(x_1)
        x_2 = self.fc6_scl2(x_2)
        x_1 = self.relu(x_1)
        x_2 = self.relu(x_2)
        x_1 = self.drop_out(x_1)
        x_2 = self.drop_out(x_2)
        x_1 = self.fc7_scl1(x_1)
        x_2 = self.fc7_scl2(x_2)
        x = torch.cat([x_1, x_2], 1)
        y = self.fc8(x)
        return y
However, when I run it, I come across the following error:
/home/ga85nej/Documents/pytorch_WH_multiscale/pytorch_multiscale_VGG16 in <module>()
396
397 if __name__ == '__main__':
--> 398 main()
399
400
/home/ga85nej/Documents/pytorch_WH_multiscale/pytorch_multiscale_VGG16 in main()
369
370 # train for one epoch
--> 371 train_losses_per_epoch = train(train_img_lab_lists, model, train_transform, criterion, optimizer, epoch)
372
373 training_metadata['epoch_{}_train_loss'.format(epoch)] = train_losses_per_epoch
/home/ga85nej/Documents/pytorch_WH_multiscale/pytorch_multiscale_VGG16 in train(train_img_lab_lists, model, train_transform, criterion, optimizer, epoch)
143
144 # compute output
--> 145 output = model(batch_imgs_scale1_var, batch_imgs_scale2_var)
146 loss = criterion(output, batch_labs_var)
147
/home/ga85nej/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
200
201 def __call__(self, *input, **kwargs):
--> 202 result = self.forward(*input, **kwargs)
203 for hook in self._forward_hooks.values():
204 hook_result = hook(self, input, result)
/home/ga85nej/Documents/pytorch_WH_multiscale/pytorch_multiscale_VGG16 in forward(self, imgs_scl1, imgs_scl2)
70 x_2 = self.drop_out(x_2)
71
---> 72 x_1 = self.fc6_scl1(x_1)
73 x_2 = self.fc6_scl2(x_2)
74
/home/ga85nej/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
200
201 def __call__(self, *input, **kwargs):
--> 202 result = self.forward(*input, **kwargs)
203 for hook in self._forward_hooks.values():
204 hook_result = hook(self, input, result)
/home/ga85nej/anaconda3/lib/python3.6/site-packages/torch/nn/modules/linear.py in forward(self, input)
52 return self._backend.Linear()(input, self.weight)
53 else:
---> 54 return self._backend.Linear()(input, self.weight, self.bias)
55
56 def __repr__(self):
/home/ga85nej/anaconda3/lib/python3.6/site-packages/torch/nn/_functions/linear.py in forward(self, input, weight, bias)
8 self.save_for_backward(input, weight, bias)
9 output = input.new(input.size(0), weight.size(0))
---> 10 output.addmm_(0, 1, input, weight.t())
11 if bias is not None:
12 # cuBLAS doesn't support 0 strides in sger, so we can't use expand
RuntimeError: matrix and matrix expected at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.10_1488758793045/work/torch/lib/THC/generic/THCTensorMathBlas.cu:235
Does anyone know about it?
Thank you very much.
|
st118872
|
You need to add a .view before the fully-connected layers to convert from 4D tensors to 2D tensors
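For example (a minimal sketch with made-up sizes, matching the 256 x 6 x 6 AlexNet feature maps above):
import torch

feats = torch.randn(8, 256, 6, 6)        # stand-in for self.features(imgs_scl1) on a batch of 8
flat = feats.view(feats.size(0), -1)     # 4D -> 2D: (8, 256, 6, 6) becomes (8, 9216)
print(flat.size())                       # matches the nn.Linear(256 * 6 * 6, 4096) input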
|
st118873
|
I am trying to accelerate my PyTorch model training on a g2.2xlarge instance with CUDA 7.5 on Ubuntu 14.04. After some hassle getting it provisioned, I have hit a wall with this issue. My script hangs indefinitely just after I invoke a call to torch.cuda.is_available(). I have searched around and can’t seem to find others with the same issue. Here is a screen grab of an attempt to isolate the call in a repl on the instance.
(screenshot of the REPL session omitted)
It seems like everything is fine after the call to torch.cuda.is_available(), but the process hangs when I attempt to leave the shell. Any help would be much appreciated.
|
st118874
|
I know PyTorch optimizers have a ‘weight_decay’ parameter, but how do I choose a suitable value? I always end up with a value that is too large or too small.
|
st118875
|
How can you quantize, as well as normalize, the output of a network?
For example, say I have a net whose output I pass through a sigmoid. This squashes all the values to between [0, 1]; what I would like are normalized histograms, where the values of the bins are quantized to, say, 255 levels.
Here’s some code, to simulate the nets output
batch_size = 2
num_classes = 10
levels = 256
out = torch.randn( batch_size, num_classes )
out = torch.sigmoid(out) # squash to between 0 and 1
Now normalization
row_sums = torch.sum(out, 1) # normalization
row_sums = row_sums.repeat(1, num_classes) # expand to same size as out
out = torch.div( out , row_sums ) # these should be histograms
torch.sum(out,1) # yay :) they sum to one
1.0000
1.0000
[torch.FloatTensor of size 2x1]
Now try to quantize each bin of the histogram, to values [0,1/256, ..., 1]
out = torch.mul( out , levels )
out = torch.round( out ) # use round not floor as floor will lose probability mass
out = torch.div( out , levels )
torch.mul(out,255) # yay :) they're quantized
24 7 24 19 41 25 38 33 9 34
12 17 31 39 16 35 25 29 14 36
[torch.FloatTensor of size 2x10]
torch.sum(out,1) # oh dear - no longer normalized
0.9961
0.9961
[torch.FloatTensor of size 2x1]
|
st118876
|
TensorFlow has tf.expand_dims(x, -1). It’s not a big deal, but being able to write x.unsqueeze(-1) is a lot nicer than x.unsqueeze(len(x.size())). Agree? Disagree?
|
st118877
|
This is coming soon to PyTorch with https://github.com/pytorch/pytorch/pull/1108
|
st118878
|
Thank you all for the help in this fabulous forum.
I was a Torch user and am new to PyTorch. Right now I want to do something like a “skip connection”.
Below is the code I used in Torch. How can I make a “skip connection” in PyTorch?
main = nn.Sequential()
...
local conc = nn.ConcatTable()
local conv = nn.Sequential()
conv:add(SpatialConvolution(...))
conc:add(nn.Identity())
conc:add(conv)
main:add(conc)
main:add(nn.CAddTable())
|
st118879
|
PyTorch is very similar to nngraph in LuaTorch, except that you don’t have CAdd, CMul, or any of the table layers; it’s the normal +, * operators.
Assuming proper padding for compatible sizes:
input = Variable(torch.Tensor(...))
conv_out = self.conv(input)
out = conv_out + input
|
st118880
|
A skip connection just requires passing the return value from one layer to one further down:
github.com
mattmacy/vnet.pytorch/blob/master/vnet.py#L169 134
# # to what is in the actual prototxt, not the intent
# self.down_tr32 = DownTransition(16, 2)
# self.down_tr64 = DownTransition(32, 3)
# self.down_tr128 = DownTransition(64, 3)
# self.down_tr256 = DownTransition(128, 3)
# self.up_tr256 = UpTransition(256, 3)
# self.up_tr128 = UpTransition(128, 3)
# self.up_tr64 = UpTransition(64, 2)
# self.up_tr32 = UpTransition(32, 1)
# self.out_tr = OutputTransition(16)
def forward(self, x):
out16 = self.in_tr(x)
out32 = self.down_tr32(out16)
out64 = self.down_tr64(out32)
out128 = self.down_tr128(out64)
out256 = self.down_tr256(out128)
out = self.up_tr256(out256, out128)
out = self.up_tr128(out, out64)
out = self.up_tr64(out, out32)
out = self.up_tr32(out, out16)
out = self.out_tr(out)
Models will typically add or concatenate the skip connection:
github.com
mattmacy/vnet.pytorch/blob/master/vnet.py#L101 118
self.up_conv = nn.ConvTranspose3d(inChans, outChans // 2, kernel_size=2, stride=2)
self.bn1 = ContBatchNorm3d(outChans // 2)
self.do1 = passthrough
self.do2 = nn.Dropout3d()
self.relu1 = ELUCons(elu, outChans // 2)
self.relu2 = ELUCons(elu, outChans)
if dropout:
self.do1 = nn.Dropout3d()
self.ops = _make_nConv(outChans, nConvs, elu)
def forward(self, x, skipx):
out = self.do1(x)
skipxdo = self.do2(skipx)
out = self.relu1(self.bn1(self.up_conv(out)))
xcat = torch.cat((out, skipxdo), 1)
out = self.ops(xcat)
out = self.relu2(torch.add(out, xcat))
return out
class OutputTransition(nn.Module):
|
st118881
|
When I run torch.svd(torch.rand(3,3).cuda()) I encountered an error:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-6-9c860559505b> in <module>()
----> 1 torch.svd(torch.rand(3,3).cuda())
RuntimeError: No CUDA implementation of 'gesvd'. Install MAGMA and rebuild cutorch (http://icl.cs.utk.edu/magma/) at /data/wanggu/software/pytorch/torch/lib/THC/generic/THCTensorMathMagma.cu:280
However, I have installed magma-cuda80 through conda install -c soumith magma-cuda80 and rebuilt pytorch.
The error still exists. Could anyone help me?
|
st118882
|
Solved by smth in post #4
cmake prefix path is not which conda, it is "$(dirname $(which conda))/../"
export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
|
st118883
|
What is the exact command you are using to rebuild pytorch? did you set CMAKE_PREFIX_PATH as specified here?
github.com
pytorch/pytorch/blob/master/README.md#from-source 71
<p align="center"><img width="40%" src="docs/source/_static/img/pytorch-logo-dark.png" /></p>
--------------------------------------------------------------------------------
PyTorch is a Python package that provides two high-level features:
- Tensor computation (like NumPy) with strong GPU acceleration
- Deep neural networks built on a tape-based autograd system
You can reuse your favorite Python packages such as NumPy, SciPy and Cython to extend PyTorch when needed.
We are in an early-release beta. Expect some adventures and rough edges.
- [More about PyTorch](#more-about-pytorch)
- [Installation](#installation)
- [Binaries](#binaries)
- [From Source](#from-source)
- [Docker Image](#docker-image)
- [Previous Versions](#previous-versions)
- [Getting Started](#getting-started)
- [Communication](#communication)
|
st118884
|
I set the CMake prefix to `which conda`; do I need to set it again when I rebuild PyTorch?
|
st118885
|
cmake prefix path is not which conda, it is "$(dirname $(which conda))/../"
export CMAKE_PREFIX_PATH="$(dirname $(which conda))/../"
|
st118886
|
Thanks @smth, it’s fixed now. I misunderstood the “anaconda root directory”; I thought it was where conda is located.
|
st118887
|
Here is the Variable before softmax:
(screenshots of the values before softmax omitted)
After softmax, the largest entry became 1 and the others became 0:
(screenshot of the values after softmax omitted)
My forward function:
def forward(self, x):
    x = F.relu(self.conv1(x))
    x = F.relu(self.conv2(x))
    x = F.relu(self.conv3(x))
    x = F.relu(self.conv4(x))
    x = F.relu(self.conv5(x))
    x = F.relu(self.conv6(x))
    x = F.relu(self.conv7(x))
    x = x.view(x.size(0), -1)
    x = F.softmax(x)
    return x
Is there something wrong with my usage of softmax?
|
st118888
|
This is to be expected. exp(-20) is about 2e-9, so if your largest input to softmax is 20 larger (in absolute numbers) than the others, the softmax will be extremely spiked. (And the softmax is saturated, i.e. you have vanishing gradients, see e.g. http://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/ 163 for lots of details.)
The canonical way to overcome this is to use a temperature parameter to divide the inputs (see e.g. https://en.wikipedia.org/wiki/Softmax_function#Reinforcement_learning 148 ), for your example, the maximal difference seems to be ~150, so you could try an initial temperature of ~200 to get several non-zero probabilities.
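For example, temperature scaling is just a division before the softmax (a small sketch, written against a current PyTorch API; the values are made up to mimic the ~150 gap mentioned above):
import torch
import torch.nn.functional as F

logits = torch.tensor([150.0, 10.0, 0.0])
print(F.softmax(logits, dim=0))        # essentially one-hot: the largest entry gets probability ~1
T = 200.0                              # temperature
print(F.softmax(logits / T, dim=0))    # several non-zero probabilities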
Best regards
Thomas
|
st118889
|
The numbers are too large, so their exp overflows. I think you should add a batchnorm layer before the convolution layer.
|
st118890
|
Yesterday I upgraded my PyTorch to v0.1.11 and ran the same training script. I noticed that the time for each batch increased from 0.8 seconds to 2.9 seconds. Nothing changed except PyTorch; my old version is 0.1.9+67f9455. Now I am trying to upgrade to cuDNN v6 to see whether it solves my issue. I wonder whether anyone has had the same problem as me.
Yaozong
|
st118891
|
Problem solved! After I used the pre-built binary install, everything works as fast as usual. Before that, I was building the pytorch manually pulled from the master branch.
|
st118892
|
If I had to guess, you’re running on CPU and when compiled from source it wasn’t being linked properly with MKL.
|
st118893
|
So I’m using a script to turn a directory of images in 5 subdirectories into a single tensor of size (730, 3, 256, 256) and a label tensor of size (730, 5) for 5 classes and then torch.utils.data to turn that into a TensorDataset and make/shuffle batches. The batches are then moved to the GPU individually at each iteration through the dataset during training.
However, this isn’t a tenable practice for a very large dataset. Is there a better way to do this that I’m not seeing in the docs? It seems like there should be a simpler way to read images from disk into shuffled batches rather than having to put the whole thing into two tensors in system memory.
|
st118894
|
You can use the ImageFolder dataset from torchvision 103 for that. It follows the structure of imagenet.
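A minimal sketch (the path and transform are placeholders):
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize(256),       # called Scale in the torchvision version of this thread
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# expects root/class_a/xxx.png, root/class_b/yyy.png, ... like ImageNet
train_set = dset.ImageFolder(root='/path/to/train', transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

for images, labels in train_loader:   # images: (32, 3, 224, 224), labels: class indices
    break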
|
st118895
|
I am using a GRUCell at each time-step to pass input and hidden state and get back the new hidden state in the following way:
hidden_states[i, j, :] = GRUCell(input, hidden_states[i, j, :])
where
hidden_states = Variable(torch.zeros(N, N, 1, hidden_state_size)).cuda()
As you can notice, instead of sending a batch of inputs, I am sending one input at a time (hence the 1 in the tensor size). When I run this, I get a runtime error as follows:
RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it
Can anyone explain how I have created another variable with the same storage as hidden_states?
|
st118896
|
Resolved. I was using hidden_states elsewhere in the code and creating a new variable that shared the storage.
|
st118897
|
Hi!
What is the best way to concatenate the final hidden states of the two directions in the bidirectional case?
For example, if we have 2 layers, batch_size = 5 and hidden_size = 10, a BiRNN outputs tensors with shape (4, 5, 10) for c and h, but in my case I need shape (2, 5, 20) because I will feed this to a decoder.
Thanks!
|
st118898
|
You can use torch.stack([out1, out2], 2), see the doc here 181 for more details.
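If the goal is specifically to go from the (num_layers * num_directions, batch, hidden) tensors that nn.LSTM/nn.GRU return to (num_layers, batch, 2 * hidden), one possible sketch (written against a current PyTorch API) is:
import torch

num_layers, batch, hidden = 2, 5, 10
h = torch.randn(num_layers * 2, batch, hidden)   # (num_layers * num_directions, batch, hidden)

h = h.view(num_layers, 2, batch, hidden)         # separate the layer and direction axes
h = torch.cat([h[:, 0], h[:, 1]], dim=2)         # concatenate forward and backward states
print(h.size())                                  # (2, 5, 20)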
|
st118899
|
When I was writing a CNN model, I found that the transpose op is too slow on the GPU, even slower than on the CPU.
Here is some of my test code:
test1:
import torch
import time
from torch.autograd import Variable

x = Variable(torch.randn(100, 500))
cputimes = []
for sampl in (1000, 10000, 100000, 1000000):
    start = time.time()
    for i in range(sampl):
        y = torch.transpose(x, 0, 1)
    end = time.time()
    cputimes.append(end - start)
print(cputimes)

x = x.cuda(device_id=2)
gputimes = []
for sampl in (1000, 10000, 100000, 1000000):
    start = time.time()
    for i in range(sampl):
        y = torch.transpose(x, 0, 1)
    end = time.time()
    gputimes.append(end - start)
print(gputimes)
output:
[0.00479888916015625, 0.047800540924072266, 0.5636386871337891, 4.8213441371917725]
[0.0057294368743896484, 0.056331634521484375, 0.5558302402496338, 5.78531289100647]
test2:
In [16]: torch.cuda.set_device(2)
In [17]: %timeit torch.transpose(torch.FloatTensor(20,100),1,0)
The slowest run took 26.26 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 1.72 µs per loop
In [18]: %timeit torch.transpose(torch.cuda.FloatTensor(20,100),1,0)
The slowest run took 21.21 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 3: 3.05 µs per loop
|
st118900
|
Hi,
In your example, you could replace the transpose function with any function in torch and you would get the same behavior.
The transpose operation does not actually touch the tensor data and just works on the metadata. The code to do that on CPU and GPU is exactly the same and never touches the GPU. The runtimes that you see in your test are just the overhead of the Python loop plus calling into C code (in your case the C code does almost nothing).
The GPU version is slightly slower because the CUDA library has to get its state before calling the functions, which slows it down slightly compared to the pure CPU version.
This code sample is slow only because of the Python loop which calls C functions. To make it faster, you need to find a way to remove this loop. If, in your case, you want to transpose a bunch of matrices, you could for example stack them in a single tensor and then call transpose on that tensor.
|
st118901
|
To add to what @albanD wrote, here’s also a quick demo of Autograd overhead.
class NoOp(torch.autograd.Function):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x

def print_times(x, func, msg):
    start = time.time()
    for i in range(1000000):
        _ = func(x)
    t = time.time() - start
    print("{}: {:.5f}".format(msg, t))

tensor = torch.randn(100, 500)
ndarray = tensor.numpy()
variable = Variable(tensor)

print_times(tensor, lambda x: x, "Python noop")
print_times(ndarray, lambda x: x.transpose(), "Numpy transpose")
print_times(tensor, lambda x: x.t(), "Torch transpose")
print_times(variable, lambda x: NoOp()(x), "Autograd noop")
print_times(variable, lambda x: x.t(), "Autograd transpose")
# output:
#
# Python noop: 0.07554
# Numpy transpose: 0.23783
# Torch transpose: 0.49813
# Autograd noop: 1.95098
# Autograd transpose: 3.72835
|
st118902
|
Actually, I changed my code and removed the transpose op, but the model still runs slower on the GPU than on the CPU. Here is some of my model code:
x = x.unsqueeze(1)
x0 = F.relu(self.conv0_0(x)).squeeze(3)
x1 = F.relu(self.conv0_1(x)).squeeze(3)
x = torch.cat((x0,x1),1)
x = x.unsqueeze(1)
x0 = F.relu(self.conv1_0(x)).squeeze(2)
x1 = F.relu(self.conv1_1(x)).squeeze(2)
x0 = F.max_pool1d(x0, x0.size(2)).squeeze(2)
x1 = F.max_pool1d(x1, x1.size(2)).squeeze(2)
x = torch.cat((x0,x1),1)
The self.conv* layers are torch.nn.Conv2d.
Why does it perform badly on the GPU?
|
st118903
|
The main reason for that is usually that you are working with such small conv layers and so little data that the overhead of launching the job on the GPU is higher than the computation itself.
If you have a very small net with small inputs, you will see no speedup from using GPUs.
|
st118904
|
Kaggle competitions rely on Log Loss (which is torch.nn.BCELoss):
LogLoss = -(1/n) * sum_{i=1..n} [ yi * log(ŷi) + (1 - yi) * log(1 - ŷi) ],
where:
n : is the number of patients in the test set
ŷi : is the predicted probability of the image belonging to a patient with cancer
yi : is 1 if the diagnosis is cancer, 0 otherwise
Can someone please explain how this loss differs from torch.nn.MultiLabelSoftMarginLoss?
torch.nn MLSM Loss:
loss(x, y) = - sum_i (y[i] log( exp(x[i]) / (1 + exp(x[i])))
+ (1-y[i]) log(1/(1+exp(x[i])))) / x:nElement()
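(For reference, a quick numerical sanity check, written against a current PyTorch API; note that BCELoss expects probabilities, i.e. sigmoid outputs, while MultiLabelSoftMarginLoss takes raw logits and applies the sigmoid internally.)
import torch
import torch.nn as nn

logits = torch.randn(4, 3)               # raw network outputs
targets = torch.empty(4, 3).random_(2)   # 0/1 labels

bce = nn.BCELoss()(torch.sigmoid(logits), targets)
mlsm = nn.MultiLabelSoftMarginLoss()(logits, targets)
print(bce.item(), mlsm.item())           # the two values should agree up to numerical precision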
Thanks!
|
st118905
|
Hi Torchies,
Bumping an old question… if someone has a paper / presentation comparing different loss functions and their effects, it will really help me out.
Thanks!
|
st118906
|
I was trying to train the OpenNMT example 5 on a Mac with CPU, with the following steps:
Env: Python 3.5, PyTorch 0.1.10.1
Preprocess the data and shrink src and tgt to keep only the first 100 sentences, by inserting the following lines after line 133 in preprocess.py:
shrink = True
if shrink:
    src = src[0:100]
    tgt = tgt[0:100]
Then I ran
python preprocess.py -train_src data/src-train.txt -train_tgt data/tgt-train.txt -valid_src data/src-val.txt -valid_tgt data/tgt-val.txt -save_data data/demo
then I trained using python train.py -data data/demo.train.pt -save_model demo_model
It ran OK for a while before an error appeared:
(dlnd-tf-lab) ->python train.py -data data/demo.train.pt -save_model demo_model
Namespace(batch_size=64, brnn=False, brnn_merge='concat', curriculum=False, data='data/demo.train.pt', dropout=0.3, epochs=13, extra_shuffle=False, gpus=[], input_feed=1, layers=2, learning_rate=1.0, learning_rate_decay=0.5, log_interval=50, max_generator_batches=32, max_grad_norm=5, optim='sgd', param_init=0.1, pre_word_vecs_dec=None, pre_word_vecs_enc=None, rnn_size=500, save_model='demo_model', start_decay_at=8, start_epoch=1, train_from='', train_from_state_dict='', word_vec_size=500)
Loading data from 'data/demo.train.pt'
* vocabulary size. source = 24999; target = 35820
* number of training sentences. 100
* maximum batch size. 64
Building model...
* number of parameters: 58121320
NMTModel (
(encoder): Encoder (
(word_lut): Embedding(24999, 500, padding_idx=0)
(rnn): LSTM(500, 500, num_layers=2, dropout=0.3)
)
(decoder): Decoder (
(word_lut): Embedding(35820, 500, padding_idx=0)
(rnn): StackedLSTM (
(dropout): Dropout (p = 0.3)
(layers): ModuleList (
(0): LSTMCell(1000, 500)
(1): LSTMCell(500, 500)
)
)
(attn): GlobalAttention (
(linear_in): Linear (500 -> 500)
(sm): Softmax ()
(linear_out): Linear (1000 -> 500)
(tanh): Tanh ()
)
(dropout): Dropout (p = 0.3)
)
(generator): Sequential (
(0): Linear (500 -> 35820)
(1): LogSoftmax ()
)
)
Train perplexity: 29508.9
Train accuracy: 0.0216306
Validation perplexity: 4.50917e+08
Validation accuracy: 3.57853
Train perplexity: 1.07012e+07
Train accuracy: 0.06198
Validation perplexity: 103639
Validation accuracy: 0.944334
Train perplexity: 458795
Train accuracy: 0.031198
Validation perplexity: 43578.2
Validation accuracy: 3.42942
Train perplexity: 144931
Train accuracy: 0.0432612
Validation perplexity: 78366.8
Validation accuracy: 2.33598
Decaying learning rate to 0.5
Train perplexity: 58696.8
Train accuracy: 0.0278702
Validation perplexity: 14045.8
Validation accuracy: 3.67793
Decaying learning rate to 0.25
Train perplexity: 10045.1
Train accuracy: 0.0457571
Validation perplexity: 26435.6
Validation accuracy: 4.87078
Decaying learning rate to 0.125
Train perplexity: 10301.5
Train accuracy: 0.0490849
Validation perplexity: 24243.5
Validation accuracy: 3.62823
Decaying learning rate to 0.0625
Train perplexity: 7927.77
Train accuracy: 0.062812
Validation perplexity: 7180.49
Validation accuracy: 5.31809
Decaying learning rate to 0.03125
Train perplexity: 4573.5
Train accuracy: 0.047421
Validation perplexity: 6545.51
Validation accuracy: 5.6163
Decaying learning rate to 0.015625
Train perplexity: 3995.7
Train accuracy: 0.0549085
Validation perplexity: 6316.25
Validation accuracy: 5.4175
Decaying learning rate to 0.0078125
Train perplexity: 3715.81
Train accuracy: 0.0540765
Validation perplexity: 6197.91
Validation accuracy: 5.86481
Decaying learning rate to 0.00390625
Train perplexity: 3672.46
Train accuracy: 0.0540765
Validation perplexity: 6144.18
Validation accuracy: 6.01392
Decaying learning rate to 0.00195312
Train perplexity: 3689.7
Train accuracy: 0.0528286
Validation perplexity: 6113.55
Validation accuracy: 6.31213
Decaying learning rate to 0.000976562
Exception ignored in: <function WeakValueDictionary.__init__.<locals>.remove at 0x118b19b70>
Traceback (most recent call last):
File "/Users/Natsume/miniconda2/envs/dlnd-tf-lab/lib/python3.5/weakref.py", line 117, in remove
TypeError: 'NoneType' object is not callable
Could you tell me how to fix it? Thanks!
|
st118907
|
Hello,
dl4daniel:
TypeError: ‘NoneType’ object is not callable
I think you might be seeing a bug in Python 3.5 weakref (http://bugs.python.org/issue29519) that occurs during shutdown.
On my machine I was able to resolve it by applying this patch (though I seem to recall that there was some fuzz in the line numbers):
https://github.com/python/cpython/commit/9cd7e17640a49635d1c1f8c2989578a8fc2c1de6.patch 154
Best regards
Thomas
|
st118908
|
tom:
bug in python 3.5 weakref
Thanks a lot, Tom!
Following your suggestion, in order to get the code running I switched to Python 2.7, and it trains without error! It works for Python 3.6 too. I am testing on Python 3.5 after conda update python; now they all seem to work without error when training as above.
Thanks again!
|
st118909
|
Has any work been done on supporting dynamic unrolling of inputs as in TF’s {bidirectional_}dynamic_rnn?
|
st118910
|
In PyTorch, a dynamic RNN over a custom cell is a for loop. That is, the following two code snippets do the same thing (the first one is a simplified version of the implementation of tf.dynamic_rnn)
#TensorFlow (should be run once, during `__init__`)
cond = lambda i, h: i < tf.shape(words)[0]
cell = lambda i, h: rnn_unit(words[i], h)
i = 0
_, h = tf.while_loop(cond, cell, (i, h0))
#PyTorch (should be run for every batch, during `forward`)
h = h0
for word in words:
h = rnn_unit(word, h)
|
st118911
|
Thanks. The python/ops/rnn.py code is so involved / convoluted that I thought there must be something more going on than that.
|
st118912
|
@jekbradbury wouldn’t it be possible (and faster) to run the loop entirely on the GPU?
|
st118913
|
In pytorch, running unidirectional one-layer arbitrary cell is easy (as @jekbradbury showed in his snippet), it becomes more involved if you need bidirectional/stacked recurrent cells - you either have to replicate bidirectional/stacked mechanics from nn/_functions/rnn.py, or add your cell all over the place in nn/_functions/rnn.py. @csarofeen had a proposal to make that easier https://github.com/pytorch/pytorch/issues/711 48, but so far it went nowhere.
I don’t quite understand what you mean by running loop entirely on the GPU. All the data operations will be performed on the GPU if your input, hidden and weights are on the GPU, the overhead of python loop is negligible. The biggest performance problem with custom rnn cell is not loop overhead, it’s that pointwise operations are not fused, for a typical recurrent cell such as LSTM there will be 6-10 of them, and performance will be limited by launch latency of those 6-10 kernels.
|
st118914
|
Also, note that:
- you can use pack_padded_sequence 91 to allow sequences of different lengths inside a minibatch
- it’s perfectly fine to use minibatches with different sequence lengths in every call to LSTM
If your application allows it, using nn.LSTM instead of a manually unrolled nn.LSTMCell can easily give you a 10x speedup.
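A minimal sketch of pack_padded_sequence with nn.LSTM (sizes are made up; the sequences are sorted by decreasing length, which older versions require):
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

batch = torch.randn(3, 5, 8)    # 3 padded sequences, max length 5, feature size 8
lengths = [5, 3, 2]             # true lengths, in decreasing order

packed = pack_padded_sequence(batch, lengths, batch_first=True)
packed_out, (h_n, c_n) = lstm(packed)
out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
print(out.size())               # (3, 5, 16); positions beyond each true length are zero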
|
st118915
|
@elanmart I’m familiar with the call from the API docs, but I don’t see it used anywhere in the examples. The section of code I’m looking at is: https://github.com/allenai/bi-att-flow/blob/master/basic/model.py#L135 43
Can I just pack/pad the sentences in the minibatch and feed that to a BiLSTM?
|
st118916
|
No, because it looks like you have to apply attention. Look at https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/Models.py 131 for attention example.
|
st118917
|
Recently, when I started to construct network modules, I got confused that some people inherit from nn.Container while others choose nn.Module.
I just wonder: what is the difference between them?
I looked through the source code and only found that nn.Container is a subclass of nn.Module.
Could someone clear this up?
|
st118918
|
nn.Container is something old, and is irrelevant. It does not exist anymore and has been merged into nn.Module.
|
st118919
|
I’m trying to implement a policy gradient method in RL, and the output of my model needs some more calculations before computing the loss. What should I do with my output and the loss function in such a case?
|
st118920
|
If the calculations are “trainable”, that is, there is some learning involved, perhaps you could use a simple multi-layer perceptron (MLP) which takes the output of your model as input.
The output of the MLP could then feed into the loss criterion.
Maybe posting some code of the calculations you want to do would help us understand what you want to do?
|
st118921
|
Thank you for your answer!
The calculations are not trainable, because they need to deal with a ‘discounted reward’ which is not available until an episode ends.
As I read other topics, I found out that if I pass the Variable containing the output of my model through my calculation, the model is then trainable, but the outcome of the optimization is not normal.
Here is part of my code:
definition of the network
class PolicyNetwork(nn.Module):
    def __init__(self):
        super(PolicyNetwork, self).__init__()
        self.conv1 = nn.Conv2d(7, 128, kernel_size=5, stride=1, padding=2)
        self.conv2 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        self.conv3 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        self.conv6 = nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1)
        self.conv7 = nn.Conv2d(128, 1, kernel_size=5, stride=1, padding=2)
        self.steps_done = 0
        self.matches_done = 0
        self.win_count = 0

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.conv3(x))
        x = F.relu(self.conv4(x))
        x = F.relu(self.conv5(x))
        x = F.relu(self.conv6(x))
        x = F.relu(self.conv7(x))
        x = x.view(x.size(0), -1)
        x = F.softmax(x)
        return x
main codes in the optimizer
output = model(Variable(epstate.type(dtype)))
discounted_epr = discount_rewards(epreward)
discounted_epr -= torch.mean(discounted_epr)
discounted_epr /= torch.std(discounted_epr)
discounted_epr.resize_(discounted_epr.size()[0], 1)
discounted_epr = discounted_epr.expand(discounted_epr.size()[0], 81)
epy = Variable(epy, requires_grad=False)
discounted_epr = Variable(discounted_epr, requires_grad=False)
loss = (epy - output).mul(discounted_epr).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
|
st118922
|
@AjayTalati I’m not familiar with that community so the layout is not so good. sorry:cold_sweat:
|
st118923
|
No problem @pointW - if you want to understand A3C, there’s already a very good implementation in PyTorch:
github.com
ikostrikov/pytorch-a3c/blob/master/train.py 14
import torch
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from envs import create_atari_env
from model import ActorCritic
def ensure_shared_grads(model, shared_model):
    for param, shared_param in zip(model.parameters(),
                                   shared_model.parameters()):
        if shared_param.grad is not None:
            return
        shared_param._grad = param.grad

def train(rank, args, shared_model, counter, lock, optimizer=None):
    torch.manual_seed(args.seed + rank)
This might be more complicated than you need if you only want plain policy gradient - but it works very well
|
st118924
|
I’m trying to start PyTorch’s tutorials from https://github.com/yunjey/pytorch-tutorial. It says
$ git clone https://github.com/yunjey/pytorch-tutorial.git
$ cd pytorch-tutorial/tutorials/project_path
$ python main.py # cpu version
$ python main-gpu.py # gpu version
After successfully cloning, I’m unable to execute the 3rd line (python main.py, the CPU version). It shows this error: No such file or directory.
Is there something wrong with my installation of Anaconda? I’m using Anaconda 3.5. Please help me.
|
st118925
|
I think there was some problem in cloning. Now I’m able to run the above command. Thank you!
|
st118926
|
Does torch.cuda.set_device(args.gpu) set a GPU for execution, or does it set the number of GPUs that should be used for execution?
If it sets the GPU for execution, how can I set multiple GPUs to run my experiment? For example, I want to tell PyTorch that it should use two GPUs (if available) to run my experiment. How can I achieve that?
|
st118927
|
torch.cuda.set_device sets the default GPU. In order to use multiple GPUs, you may use nn.DataParallel.
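A minimal sketch (the device ids are just an example and assume two visible GPUs):
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
model = nn.DataParallel(model, device_ids=[0, 1]).cuda()   # replicate on GPUs 0 and 1

x = torch.randn(64, 10).cuda()
y = model(x)   # the batch is split across the two GPUs and the outputs are gathered back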
|
st118928
|
The following code is from the SNLI example in PyTorch.
def forward(self, inputs):
    batch_size = inputs.size()[1]
    state_shape = self.config.n_cells, batch_size, self.config.d_hidden
    h0 = c0 = Variable(inputs.data.new(*state_shape).zero_())
    outputs, (ht, ct) = self.rnn(inputs, (h0, c0))
In the above code, why are h0 and c0 created through inputs.data.new()? What does inputs.data.new() actually mean? Can anyone explain this piece of code? I don’t understand why we can’t create the h0 and c0 Variables of the desired shape in the normal way.
|
st118929
|
input.data.new creates a tensor whose type is the same as that of input.data. You can also create the Variable in the normal way, such as Variable(torch.zeros(*state_shape)), but in order to keep the tensor type consistent with inputs you would need to cast it to the type of inputs manually, with type_as or type.
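A small illustration (sizes are arbitrary):
import torch

x = torch.randn(3, 4).double()   # stand-in for inputs.data
h0 = x.new(2, 5).zero_()         # same tensor type as x, shape (2, 5), filled with zeros
print(h0.type())                 # torch.DoubleTensor, i.e. the type followed x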
|
st118930
|
OK, so only the type information of the inputs is used to create the Variable, nothing else? Not the inputs.data itself?
|
st118931
|
I think only the type information of inputs.data, not the type of the Variable inputs.
|
st118932
|
class LCNPModel(nn.Module):
    """Container module with an encoder, a recurrent module, and a decoder."""

    def __init__(self, inputs):
        super(LCNPModel, self).__init__()
        ....
        self.encoder_nt = nn.Embedding(self.nnt, self.dnt)
        self.word2vec_plus = nn.Embedding(self.nt, self.dt)
        self.word2vec = nn.Embedding(self.nt, self.dt)
        self.LSTM = nn.LSTM(self.dt, self.dhid, self.nlayers, batch_first=True, bias=True)
        # the initial states for h0 and c0 of LSTM
        self.h0 = (Variable(torch.zeros(self.nlayers, self.bsz, self.dhid)),
                   Variable(torch.zeros(self.nlayers, self.bsz, self.dhid)))
        .....
        self.init_weights(initrange)
        self.l2 = itertools.ifilter(lambda p: p.requires_grad == True, self.parameters())

    def init_weights(self, initrange=1.0):
        self.word2vec_plus.weight.data.fill_(0)
        self.word2vec.weight.data = self.term_emb
        self.encoder_nt.weight.data = self.nonterm_emb
        self.word2vec.weight.requires_grad = False
        self.encoder_nt.weight.requires_grad = False
        ....
Hi, above is part of my code. In my code, I have word2vec and word2vec_plus embeddings. I would like an embedding that is initialized with the pretrained word2vec vectors but keeps being trained further. When I use the optimizer, I take the L2 norm between the current embedding and the original word2vec embedding as a penalty, which makes sense since I don’t want the newly trained embedding to be too far away from the pretrained one.
My problem is: when I set word2vec.weight.requires_grad to False and optimize only the parameters that require gradients, everything is fine but training is too slow after the first round. However, if I comment out everything involving word2vec and only use word2vec_plus, everything is very fast. Since I am using word2vec just as a constant in my code, it is not supposed to slow down the model training.
So my question is: is there any way to speed up this process, or is there anything I am doing wrong?
Thanks a lot!
|
st118933
|
It’s probably the L2 norm distance between the two embedding matrices that’s taking forever to calculate, and there isn’t really a way around that for now (you may update only parts of the word2vec_plus embedding matrix at each iteration but you have to recompute the L2 norm over the whole matrix).
|
st118934
|
Hi,
The problem is that even when I comment out the word2vec embedding (which is a constant, since I don’t require a gradient for it) and optimize over everything else, which I expect to behave the same as not commenting it out (because the optimizer has nothing to do with it), it still gives me a speedup. So I think that even when requires_grad is False, the optimizer, for some reason, still looks at it?
|
st118935
|
I read two lines of code as follows:
for param in resnet.parameters():
    param.requires_grad = False
It is for finetuning only the top layer of the model, but I can’t understand why.
Can anyone explain it to me? Thank you very much!
|
st118936
|
Those 2 lines of code freeze the whole model. If you want to finetune a few layers, you need to get the parameters of those layers and set ‘requires_grad’ to True, so that you can finetune those layers.
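For example, to finetune only the fc layer of a torchvision ResNet (a sketch; the pretrained flag and layer sizes follow the older torchvision API used in this thread):
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)

for param in model.parameters():       # freeze the whole backbone
    param.requires_grad = False

model.fc = nn.Linear(512, 10)          # new head; its parameters have requires_grad=True by default

params_to_train = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(params_to_train, lr=0.01, momentum=0.9)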
|
st118937
|
Does PyTorch have an elegant way to initialize the parameters of some layers when necessary? Do some modules, such as Conv2d and Linear, have direct interfaces for users to do initialization?
Thank you for your answers!
|
st118938
|
Have a look at this 538
m = t.nn.Conv2d(16, 33, 3, stride=2)
m.weight.data.normal_(0, 0.01)
m.bias.data.fill_(0)
Also, nn.init 467 implements many initialization methods.
m = t.nn.Conv2d(16, 33, 3, stride=2)
xavier_uniform(m.weight.data)
|
st118939
|
Has anyone already implemented Monte Carlo dropout as described by Gal & Ghahramani '15 and in Gal’s blog: What my model doesn’t know 80 - using pytorch for estimating a model’s confidence in its predictions? I know it would be fairly trivial to implement, but if someone has already done it that’s even better.
One could parallelize MCD inference by having multiple instances of a given item in a mini-batch. However, in order for that to work the dropout masks have to be independent for all the members of a given mini-batch. Normally it would be faster to re-use a mask across members so I’m curious how it’s done in torch.
Thanks.
|
st118940
|
In pytorch dropout masks are independent for all the samples in the minibatch (a single mask the size of the whole input is generated).
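A possible sketch of MC dropout inference building on that (this is an assumed implementation, not code from the paper): keep the model in train() mode so dropout stays active, and repeat the input along the batch dimension so every copy gets its own independent mask.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 50), nn.ReLU(), nn.Dropout(0.5), nn.Linear(50, 2))
model.train()                          # keep dropout active even though we are "testing"

x = torch.randn(1, 10)
T = 20                                 # number of stochastic forward passes
xs = x.repeat(T, 1)                    # T copies in one batch, each gets its own dropout mask
with torch.no_grad():
    preds = model(xs)
mean, std = preds.mean(0), preds.std(0)   # predictive mean and a crude uncertainty estimate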
|
st118941
|
Hi, I am trying to finetune a ResNet model with my own data. I followed the ImageNet example’s main.py to modify the fc layer in this way (I only finetune ResNet, not AlexNet):
def main():
    global args, best_prec1
    args = parser.parse_args()
    # create model
    if args.pretrained:
        print("=> using pre-trained model '{}'".format(args.arch))
        model = models.__dict__[args.arch](pretrained=True)
        # modify the fc layer
        model.fc = nn.Linear(512, 100)
    else:
        print("=> creating model '{}'".format(args.arch))
        model = models.__dict__[args.arch]()
    if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
        model.features = torch.nn.DataParallel(model.features)
        model.cuda()
    else:
        model = torch.nn.DataParallel(model).cuda()
    # optionally resume from a checkpoint
    if args.resume:
        if os.path.isfile(args.resume):
            print("=> loading checkpoint '{}'".format(args.resume))
            checkpoint = torch.load(args.resume)
            args.start_epoch = checkpoint['epoch']
            best_prec1 = checkpoint['best_prec1']
            model.load_state_dict(checkpoint['state_dict'])
            print("=> loaded checkpoint '{}' (epoch {})"
                  .format(args.resume, checkpoint['epoch']))
        else:
            print("=> no checkpoint found at '{}'".format(args.resume))
    cudnn.benchmark = True
The other code remains the same as the ImageNet main.py:
https://github.com/pytorch/examples/blob/master/imagenet/main.py
When testing the model I trained, I found the fc layer still has 1000 classes. I have struggled to figure it out for a long time, but it is still the same and I don’t know why.
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(relu): ReLU (inplace)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(downsample): Sequential (
(0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
(1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
)
)
(1): BasicBlock (
(conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
(relu): ReLU (inplace)
(conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True)
)
)
(avgpool): AvgPool2d (
)
(fc): Linear (512 -> 1000)
)
)
here is my testing code:
import torch
import torch.nn as nn
#from __future__ import print_function
import argparse
from PIL import Image
import torchvision.models as models
import skimage.io
from torch.autograd import Variable as V
from torch.nn import functional as f
from torchvision import transforms as trn
# define image transformation
centre_crop = trn.Compose([
    trn.ToPILImage(),
    trn.Scale(256),
    trn.CenterCrop(224),
    trn.ToTensor(),
    trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
filename=r'2780-0-20161221_0001.jpg'
img = skimage.io.imread(filename)
x = V(centre_crop(img).unsqueeze(0), volatile=True)
model = models.__dict__['resnet18']()
model = torch.nn.DataParallel(model).cuda()
checkpoint = torch.load('model_best1.pth.tar')
model.load_state_dict(checkpoint['state_dict'])
best_prec1 = checkpoint['best_prec1']
logit = model(x)
print(logit)
print(len(logit))
h_x = f.softmax(logit).data.squeeze()
Can anyone tell me where I went wrong, and how to extract the last average pooling layer features? Thank you so much!
|
st118942
|
I also tried other code for testing the model. Do I have to modify the ResNet model again when testing the saved model?
import torch
import torch.nn as nn
#from torchvision import models
#from future import print_function
import argparse
#import torch
#from torch.autograd import Variable
from PIL import Image
#from torchvision.transforms import ToTensor
import torchvision.models as models
import skimage.io
from torch.autograd import Variable as V
from torch.nn import functional as f
from torchvision import transforms as trn
# define image transformation
centre_crop = trn.Compose([
    trn.ToPILImage(),
    trn.Scale(256),
    trn.CenterCrop(224),
    trn.ToTensor(),
    trn.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
filename=r'2780-0-20161221_0001.jpg'
img = skimage.io.imread(filename)
x = V(centre_crop(img).unsqueeze(0), volatile=True)
model = models.__dict__['resnet18']()
model.fc=nn.Linear(512,100)
checkpoint = torch.load('model_best1.pth.tar')
best_prec1 = checkpoint['best_prec1']
model.load_state_dict(checkpoint['state_dict'])
model = torch.nn.DataParallel(model).cuda()
logit = model(x)
print(logit)
print(len(logit))
h_x = f.softmax(logit).data.squeeze()
An error occurred and I have no idea why:
(screenshot of the error message omitted)
|
st118943
|
Help wanted, many thanks. Do I have to add freeze code like this in training:
for param in model.parameters():
    param.requires_grad = False
and update the SGD optimizer?
|
st118944
|
This was a bug in PyTorch that was fixed in commit https://github.com/pytorch/pytorch/pull/982 68 that was merged into master 16 days ago. Did you try updating your PyTorch installation?
|
st118945
|
My version is torch-0.1.10.post2-cp27-none-linux_x86_64.whl, which I downloaded 14 days ago. Is there any problem with this version? Can I update the wheel with an update command? I believe there is something wrong with my code, but I can’t figure it out.
|
st118946
|
Can you try this minimal example in your interpreter and see if it changes the layer? In my PyTorch installation it works without problems.
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.m = nn.Linear(2, 2)

    def forward(self, x):
        return self.m(x)

m = M()
print(m)
# should be
# M (
#   (m): Linear (2 -> 2)
# )
m.m = nn.Linear(3, 3)
print(m)
# should be
# M (
#   (m): Linear (3 -> 3)
# )
|
st118947
|
Ok, now try doing the same thing but with resnet
from torchvision import models
import torch.nn as nn
m = models.resnet18()
m.fc = nn.Linear(512, 10)
print(m) # see if the last layer was modified
If the last layer is correctly modified, then there is an inconsistency with what you have written in the first message, and we might be missing information to help you further debug your problem
|
st118948
|
I tried retraining with the freeze method, following the example, but I didn’t succeed. My code is this way:
def main():
    global args, best_prec1
    args = parser.parse_args()
    # create model
    if args.pretrained:
        print("=> using pre-trained model '{}'".format(args.arch))
        model = models.__dict__[args.arch](pretrained=True)
        # xxxxxxxxxxxxxx modify the resnet18 fc layer xxxxxxxxxxxxxx
        model.fc = nn.Linear(512, 100)
    else:
        print("=> creating model '{}'".format(args.arch))
        model = models.__dict__[args.arch]()
    # for param in model.parameters():
    #     param.requires_grad = False
    if args.arch.startswith('alexnet') or args.arch.startswith('vgg'):
        model.features = torch.nn.DataParallel(model.features)
        model.cuda()
    else:
        model = torch.nn.DataParallel(model).cuda()
    # optionally resume from a checkpoint
    if args.resume:
        if os.path.isfile(args.resume):
            print("=> loading checkpoint '{}'".format(args.resume))
            checkpoint = torch.load(args.resume)
            args.start_epoch = checkpoint['epoch']
            best_prec1 = checkpoint['best_prec1']
            model.load_state_dict(checkpoint['state_dict'])
            print("=> loaded checkpoint '{}' (epoch {})"
                  .format(args.resume, checkpoint['epoch']))
        else:
            print("=> no checkpoint found at '{}'".format(args.resume))
    # xxxxxxxxxxxxxx freeze update xxxxxxxxxxxxxx
    for param in model.parameters():
        param.requires_grad = False
    # Replace the last fully-connected layer
    # Parameters of newly constructed modules have requires_grad=True by default
    # model.fc = torch.nn.Linear(512, 3)
    print(model)
    cudnn.benchmark = True
    # Data loading code
    traindir = os.path.join(args.data, 'train')
    valdir = os.path.join(args.data, 'val')
    normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                     std=[0.229, 0.224, 0.225])
    train_loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(traindir, transforms.Compose([
            transforms.RandomSizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            normalize,
        ])),
        batch_size=args.batch_size, shuffle=True,
        num_workers=args.workers, pin_memory=True)
    val_loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(valdir, transforms.Compose([
            transforms.Scale(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            normalize,
        ])),
        batch_size=args.batch_size, shuffle=False,
        num_workers=args.workers, pin_memory=True)
    # define loss function (criterion) and optimizer
    criterion = nn.CrossEntropyLoss().cuda()
    # xxxxxxxxxxxxxx try to make SGD only change the fc layer xxxxxxxxxxxxxx
    ignored_params = list(map(id, model.module.fc.parameters()))
    base_params = filter(lambda p: id(p) not in ignored_params,
                         model.module.parameters())
    optimizer = torch.optim.SGD([
        {'params': base_params},
        {'params': model.module.fc.parameters()}
    ], args.lr, momentum=args.momentum, weight_decay=args.weight_decay)
    # optimizer = torch.optim.SGD(model.module.fc.parameters(), args.lr,
    #                             momentum=args.momentum, weight_decay=args.weight_decay)
It raised an error:
ValueError: optimizing a parameter that doesn’t require gradients
Where did I go wrong?
|
st118949
|
You are freezing all the parameters of your network, so the optimizer is complaining that you don’t have parameters to optimize.
If you only want to train the newly added fully-connected layer, you should do instead
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True
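And then make sure the optimizer only receives the parameters that still require gradients, e.g. (a sketch reusing the args from the script above):
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),  # only the unfrozen parameters
    args.lr, momentum=args.momentum, weight_decay=args.weight_decay)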
|
st118950
|
Yes, it is modified.
So something is wrong with my testing code when loading the model?
|
st118951
|
In your first testing code, you forgot to modify the fc layer.
In the second one, I’d recommend adding DataParallel before you load the state dict: your models were saved using DataParallel, so they need DataParallel to be properly deserialized.
|
st118952
|
(screenshot of the error omitted)
It seems not right. How do I use load_state_dict after using the DataParallel method?
I also tried
model.module.load_state_dict(checkpoint[‘state_dict’])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/public/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 311, in load_state_dict
.format(name))
KeyError: 'unexpected key "module.conv1.weight" in state_dict'
|
st118953
|
andyhx:
logit = model(x)
print(logit)
print(len(logit))
h_x = f.softmax(logit).data.squeeze()
Thank you sir, I retrained with the freeze method and got a new model. Loading the model is OK now and the classes are right. Thank you!
|
st118954
|
I use a ResNet pretrained model to fine-tune on my own dataset. Everything seems OK, but I found that at the testing phase the result differs depending on the batch_size setting of the test_loader. When I change the pretrained model to AlexNet, everything goes well and the result is consistent. Why?
|
st118955
|
Did you change your network to evaluation mode?
Note that ResNet has BatchNorm, which is affected by the batch size, while AlexNet doesn’t.
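For example (a minimal sketch; model and images are placeholders for your own network and test batch):
model.eval()                 # BatchNorm uses running statistics, Dropout is disabled
with torch.no_grad():        # on the 0.1.x versions in this thread, use volatile=True Variables instead
    output = model(images)
model.train()                # switch back before you continue training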
|
st118956
|
Hi, I have implemented an RNN with customized middle and output layers.
When I call loss.backward(), I get the error message in the title.
Can anyone shed light on the cause?
I have included the model and train.py at the link below.
https://github.com/ShihanSu/sequence-pytorch
|
st118957
|
What is datahp? What does the iterator return?
I’m guessing that your problem is that you’re not casting the input and target data to the Variable type, and therefore autograd cannot compute the gradients… But again, I’m guessing! I didn’t spend too much time looking at your code.
|
st118958
|
Hi, everyone!
I am writing a repository about semantic segmentation with PyTorch. Here I will implement some existing networks and some experimental networks for semantic segmentation.
The github address:
GitHub
ycszen/pytorch-segmentation 3
Pytorch for Segmentation. Contribute to ycszen/pytorch-segmentation development by creating an account on GitHub.
Welcome to communicate and contribute!
|
st118959
|
Dear All,
I am trying to start using PyTorch.
I have a stupid question at the stage of installing PyTorch…
Are there any functional or performance differences in PyTorch across the different Python versions 2.7, 3.5, and 3.6?
Thanks!
|
st118960
|
py3 versions (3.5 and 3.6) also have CUDA multiprocessing available.
Apart from that feature difference, they are within epsilon of performance with 2.7.
|
st118961
|
I changed from 2.7 to 3.6 but am having problems with the model zoo. The Imagenet example will download pre-trained weights using a hickle file. Unfortunately it seems that the hickle module is not compatible with Python V 3.* due to the way strings are handled, which makes it difficult to use the pre-trained examples.
Does anybody know a way around this, I don’t want to have to go back to 2.7 if I can avoid it
John
|
st118962
|
I have three convolution layers:
self.conv3 = nn.Conv1d(self.input_encoding_size, 100, kernel_size=3)
self.conv4 = nn.Conv1d(self.input_encoding_size, 100, kernel_size=4)
self.conv5 = nn.Conv1d(self.input_encoding_size, 100, kernel_size=5)
I want to apply batch normalization after the conv layers. Do I need three separate batch normalization layers or just a single one in this case?
self.bn3 = nn.BatchNorm1d(100)
self.bn4 = nn.BatchNorm1d(100)
self.bn5 = nn.BatchNorm1d(100)
x3 = F.relu(self.bn3(self.conv3(seq_vec)))
x4 = F.relu(self.bn4(self.conv4(seq_vec)))
x5 = F.relu(self.bn5(self.conv5(seq_vec)))
Is this the same as?
self.bn3 = nn.BatchNorm1d(100)
x3 = F.relu(self.bn3(self.conv3(seq_vec)))
x4 = F.relu(self.bn3(self.conv4(seq_vec)))
x5 = F.relu(self.bn3(self.conv5(seq_vec)))
|
st118963
|
You need three separate BatchNorm layers, because BatchNorm also keeps track of running_mean and running_var, batch statistics which are used at test time. These statistics will be quite different at x3, x4 and x5.
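One convenient way to keep the three BatchNorm layers organized is an nn.ModuleList (a sketch, assuming 100 output channels per conv as above; the class name is made up):
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelConv(nn.Module):
    def __init__(self, in_channels):
        super(MultiKernelConv, self).__init__()
        self.convs = nn.ModuleList([nn.Conv1d(in_channels, 100, kernel_size=k) for k in (3, 4, 5)])
        self.bns = nn.ModuleList([nn.BatchNorm1d(100) for _ in range(3)])  # one BN per branch

    def forward(self, x):
        # each branch keeps its own running_mean / running_var
        return [F.relu(bn(conv(x))) for conv, bn in zip(self.convs, self.bns)]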
|
st118964
|
I have a background in machine learning, but I am a beginner in deep learning. A question for which I have not seen a direct answer anywhere is whether the developers want this to be something beginners get into DL with, or whether it is geared towards deep learning researchers.
Also, at the moment there seem to be fewer tutorials available for beginners than for other frameworks such as Keras. Though, for me at least, I am finding PyTorch to be not as high level as Keras, which is why I am really liking it. It helps me get a sense of what is happening under the hood when I am learning a new concept.
|
st118965
|
It’s definitely meant to be accessible to beginners as well as researchers; part of the reason PyTorch is built the way it is so that it doesn’t have as much need for separate abstraction layers aimed at different use cases/user populations like tflearn or Keras. That should also make it easier to figure out how things work (at least until you get to the C backend) or move up/down the abstraction stack.
|
st118966
|
I tried to use Amos’ DenseNet and Soumith’s ImageNet examples and replaced CIFAR10 dataset with ImageFolder dataset:
Original:
trainLoader = DataLoader(
    dset.CIFAR10(root='cifar', train=True, download=True,
                 transform=trainTransform),
    batch_size=args.batchSz, shuffle=True, **kwargs)
testLoader = DataLoader(
    dset.CIFAR10(root='cifar', train=False, download=True,
                 transform=testTransform),
    batch_size=args.batchSz, shuffle=False, **kwargs)
Modified:
trainLoader = DataLoader(
    dset.ImageFolder(root='/home/FC/data/P/train', transform=trainTransform),
    batch_size=args.batchSz, shuffle=True, **kwargs)
testLoader = DataLoader(
    dset.ImageFolder(root='/home/FC/data/P/val', transform=testTransform),
    batch_size=args.batchSz, shuffle=False, **kwargs)
But the loading process hangs forever. A keyboard interrupt shows:
^CTraceback (most recent call last):
File "train.py", line 291, in <module>
main()
File "train.py", line 132, in main
train(train_loader, model, criterion, optimizer, epoch)
File "train.py", line 157, in train
for i, (input, target) in enumerate(train_loader):
File "/conda3/envs/idp/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 168, in __next__
idx, batch = self.data_queue.get()
File "/conda3/envs/idp/lib/python3.5/queue.py", line 164, in get
self.not_empty.wait()
File "/conda3/envs/idp/lib/python3.5/threading.py", line 293, in wait
waiter.acquire()
KeyboardInterrupt
What should I do to resolve this?
The folder structure is as follows, with 2048x2048 PNGs:
/home/FC/Data/P
-> train -> classes -> images.png
-> val -> classes -> images.png
|
st118967
|
Sure thing Kaiser, here is my full train.py in a gist:
gist.github.com
https://gist.github.com/FuriouslyCurious/698e95a4c79fdaa37c32d3b2076a95ac 8
train.py
#!/usr/bin/env python3
import argparse
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
I have tweaked only a couple of lines from Brandon Amos’s original DenseNet implementation, replacing the CIFAR10 loader with the ImageFolder loader:
github.com
bamos/densenet.pytorch/blob/master/train.py 6
#!/usr/bin/env python3
import argparse
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
import torchvision.datasets as dset
import torchvision.transforms as transforms
from torchvision.utils import save_image
from torch.utils.data import DataLoader
import os
import sys
import math
Thanks!
|