st103500
The Embedding layer works a bit differently from e.g. Linear. You specify num_embeddings, i.e. the size of the dictionary, and embedding_dim, i.e. the size of each embedding vector. In your case num_embeddings=27297 means that your input tensor should store indices in the range [0, 27296]. Have a look at this small example:

num_embeddings = 27297
embedding_dim = 300
emb = nn.Embedding(
    num_embeddings=num_embeddings,
    embedding_dim=embedding_dim
)

batch_size = 100
x = torch.empty(batch_size, dtype=torch.long).random_(num_embeddings)
output = emb(x)
output.shape
st103501
I grew up visually learning concepts. Taking your advice, does that mean I'm given a one-dimensional (1 x 300) input that is then embedded to look like (27297 x 300), compared to the diagram I found on the internet (S x B x I)? The example I found looks three-dimensional. Does B = num_embeddings and S = embedding_dim? [embedding.png, 1279×719]
st103502
nn.Embedding basically keeps your input dimensions and adds the embedding_dim to them. So if you provide a two-dimensional input, you will get a three-dimensional output. I'm not familiar with your example, but it looks like you transpose your input to get the dimensions [sequence, batch_size] and pass it into the embedding layer. The I in your diagram would therefore correspond to the embedding_dim.
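For illustration, a minimal sketch (the sizes here are just assumed for the example):

import torch
import torch.nn as nn

emb = nn.Embedding(num_embeddings=27297, embedding_dim=300)
x = torch.empty(5, 2, dtype=torch.long).random_(27297)  # [sequence, batch_size]
out = emb(x)
print(out.shape)  # torch.Size([5, 2, 300]) -- input dims are kept, embedding_dim is appended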
st103503
Okay this is my overview of everything I’ve learned from converting a pytorch model to ONNX. Before I converted the pytorch model I wanted to make sure the dimensions for captions, cap_lens and hidden were correct through the forward function and no errors! output.png501×571 33 KB . However I have a new problem…I get an error from exporting the model using the exact same inputs??? “TypeError: wrapPyFuncWithSymbolic(): incompatible function arguments. The following argument types are supported: (self: torch._C.Graph, arg0: function, arg1: List[torch::jit::Value], arg2: int, arg3: function) -> iterator”. I tried tupling all 3 inputs (captions, cap_lens, hidden) onto the onnx converter yet I get some sort of data type error…Before showing the output terminal from the conversion I want to show how all three inputs look like. I came to a conclusion I need to either convert all three inputs into float or long dtype and idk how to properly convert dtypes. caption is a (48,15) with torch.LongTensor data type cap_lens is (48,) with torch.LongTensor data type and lastly hidden is a tuple of two (2, 48, 128) with torch.FloatTensor datatype # Export the model torch_out = torch.onnx._export(text_encoder, # model being run (captions_fake_input, cap_lens, hidden), # model input (or a tuple for multiple inputs) "kol.onnx", # where to save the model (can be a file or file-like object) export_params=True) # store the trained parameter weights inside the model file [output] TypeError Traceback (most recent call last) in () 3 (captions_fake_input, cap_lens, hidden), # model input (or a tuple for multiple inputs) 4 “kol.onnx”, # where to save the model (can be a file or file-like object) ----> 5 export_params=True) # store the trained parameter weights inside the model file ~/anaconda/lib/python3.6/site-packages/torch/onnx/init.py in _export(*args, **kwargs) 18 def _export(*args, **kwargs): 19 from torch.onnx import utils —> 20 return utils._export(*args, **kwargs) 21 22 ~/anaconda/lib/python3.6/site-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_type) 132 # training mode was.) 
133 with set_training(model, training): –> 134 trace, torch_out = torch.jit.get_trace_graph(model, args) 135 136 if orig_state_dict_keys != _unique_state_dict(model).keys(): ~/anaconda/lib/python3.6/site-packages/torch/jit/init.py in get_trace_graph(f, args, kwargs, nderivs) 253 if not isinstance(args, tuple): 254 args = (args,) –> 255 return LegacyTracedModule(f, nderivs=nderivs)(*args, **kwargs) 256 257 ~/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 489 result = self._slow_forward(*input, **kwargs) 490 else: –> 491 result = self.forward(*input, **kwargs) 492 for hook in self._forward_hooks.values(): 493 hook_result = hook(self, input, result) ~/anaconda/lib/python3.6/site-packages/torch/jit/init.py in forward(self, *args) 286 _tracing = True 287 trace_inputs = _unflatten(all_trace_inputs[:len(in_vars)], in_desc) –> 288 out = self.inner(*trace_inputs) 289 out_vars, _ = _flatten(out) 290 _tracing = False ~/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in call(self, *input, **kwargs) 487 hook(self, input) 488 if torch.jit._tracing: –> 489 result = self._slow_forward(*input, **kwargs) 490 else: 491 result = self.forward(*input, **kwargs) ~/anaconda/lib/python3.6/site-packages/torch/nn/modules/module.py in _slow_forward(self, *input, **kwargs) 477 tracing_state._traced_module_stack.append(self) 478 try: –> 479 result = self.forward(*input, **kwargs) 480 finally: 481 tracing_state.pop_scope() ~/Desktop/text-to-image-transcribed/code/model.py in forward(self, captions, cap_lens, hidden, mask) 153 154 #print("======= Packed emb ====== ") –> 155 emb = pack_padded_sequence(emb, cap_lens, batch_first=True) 156 #print("emb: ", emb) 157 #print("emb shape: ", emb.shape) ~/anaconda/lib/python3.6/site-packages/torch/onnx/init.py in wrapper(*args, **kwargs) 71 72 symbolic_args = function._unflatten(arg_values, args) —> 73 output_vals = symbolic_fn(tstate.graph(), *symbolic_args, **kwargs) 74 75 for var, val in zip( ~/anaconda/lib/python3.6/site-packages/torch/nn/utils/rnn.py in _symbolic_pack_padded_sequence(g, input, lengths, batch_first, padding_value, total_length) 144 outputs = g.wrapPyFuncWithSymbolic( 145 pack_padded_sequence_trace_wrapper, [input, lengths], 2, –> 146 _onnx_symbolic_pack_padded_sequence) 147 return tuple(o for o in outputs) 148 TypeError: wrapPyFuncWithSymbolic(): incompatible function arguments. The following argument types are supported: 1. 
(self: torch._C.Graph, arg0: function, arg1: List[torch::jit::Value], arg2: int, arg3: function) -> iterator Invoked with: graph(%0 : Long(48, 15) %1 : Long(48) %2 : Float(2, 48, 128) %3 : Float(2, 48, 128) %4 : Float(27297, 300) %5 : Float(512, 300) %6 : Float(512, 128) %7 : Float(512) %8 : Float(512) %9 : Float(512, 300) %10 : Float(512, 128) %11 : Float(512) %12 : Float(512)) { %13 : Float(48, 15, 300) = aten::embedding[padding_idx=-1, scale_grad_by_freq=0, sparse=0](%4, %0), scope: RNN_ENCODER/Embedding[encoder] %16 : Float(48, 15, 300), %17 : Handle = ^Dropout(0.5, False, False)(%13), scope: RNN_ENCODER/Dropout[drop] %15 : Float(48, 15, 300) = aten::slicedim=0, start=0, end=9223372036854775807, step=1, scope: RNN_ENCODER/Dropout[drop] %14 : Float(48, 15, 300) = aten::as_stridedsize=[48, 15, 300], stride=[4500, 300, 1], storage_offset=0, scope: RNN_ENCODER/Dropout[drop] %18 : Long(48) = prim::Constantvalue=, scope: RNN_ENCODER %76 : Float(502, 300), %77 : Long(15), %78 : Handle = ^PackPadded(True)(%16, %18), scope: RNN_ENCODER %19 : Float(15!, 48!, 300) = aten::transposedim0=0, dim1=1, scope: RNN_ENCODER %21 : Long() = aten::selectdim=0, index=47, scope: RNN_ENCODER %20 : Long() = aten::as_stridedsize=[], stride=[], storage_offset=47, scope: RNN_ENCODER %22 : Byte() = aten::leother={0}, scope: RNN_ENCODER %24 : Float(7!, 48!, 300) = aten::slicedim=0, start=0, end=7, step=1, scope: RNN_ENCODER %23 : Float(7!, 48!, 300) = aten::as_stridedsize=[7, 48, 300], stride=[300, 4500, 1], storage_offset=0, scope: RNN_ENCODER %25 : Float(7, 48, 300) = aten::clone(%24), scope: RNN_ENCODER %26 : Float(336, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %28 : Float(1!, 48!, 300) = aten::slicedim=0, start=7, end=8, step=1, scope: RNN_ENCODER %27 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], storage_offset=2100, scope: RNN_ENCODER %30 : Float(1!, 46!, 300) = aten::slicedim=1, start=0, end=46, step=1, scope: RNN_ENCODER %29 : Float(1!, 46!, 300) = aten::as_stridedsize=[1, 46, 300], stride=[300, 4500, 1], storage_offset=2100, scope: RNN_ENCODER %31 : Float(1, 46, 300) = aten::clone(%30), scope: RNN_ENCODER %32 : Float(46, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %34 : Float(1!, 48!, 300) = aten::slicedim=0, start=8, end=9, step=1, scope: RNN_ENCODER %33 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], storage_offset=2400, scope: RNN_ENCODER %36 : Float(1!, 43!, 300) = aten::slicedim=1, start=0, end=43, step=1, scope: RNN_ENCODER %35 : Float(1!, 43!, 300) = aten::as_stridedsize=[1, 43, 300], stride=[300, 4500, 1], storage_offset=2400, scope: RNN_ENCODER %37 : Float(1, 43, 300) = aten::clone(%36), scope: RNN_ENCODER %38 : Float(43, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %40 : Float(1!, 48!, 300) = aten::slicedim=0, start=9, end=10, step=1, scope: RNN_ENCODER %39 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], storage_offset=2700, scope: RNN_ENCODER %42 : Float(1!, 29!, 300) = aten::slicedim=1, start=0, end=29, step=1, scope: RNN_ENCODER %41 : Float(1!, 29!, 300) = aten::as_stridedsize=[1, 29, 300], stride=[300, 4500, 1], storage_offset=2700, scope: RNN_ENCODER %43 : Float(1, 29, 300) = aten::clone(%42), scope: RNN_ENCODER %44 : Float(29, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %46 : Float(1!, 48!, 300) = aten::slicedim=0, start=10, end=11, step=1, scope: RNN_ENCODER %45 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], 
storage_offset=3000, scope: RNN_ENCODER %48 : Float(1!, 20!, 300) = aten::slicedim=1, start=0, end=20, step=1, scope: RNN_ENCODER %47 : Float(1!, 20!, 300) = aten::as_stridedsize=[1, 20, 300], stride=[300, 4500, 1], storage_offset=3000, scope: RNN_ENCODER %49 : Float(1, 20, 300) = aten::clone(%48), scope: RNN_ENCODER %50 : Float(20, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %52 : Float(1!, 48!, 300) = aten::slicedim=0, start=11, end=12, step=1, scope: RNN_ENCODER %51 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], storage_offset=3300, scope: RNN_ENCODER %54 : Float(1!, 12!, 300) = aten::slicedim=1, start=0, end=12, step=1, scope: RNN_ENCODER %53 : Float(1!, 12!, 300) = aten::as_stridedsize=[1, 12, 300], stride=[300, 4500, 1], storage_offset=3300, scope: RNN_ENCODER %55 : Float(1, 12, 300) = aten::clone(%54), scope: RNN_ENCODER %56 : Float(12, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %58 : Float(1!, 48!, 300) = aten::slicedim=0, start=12, end=13, step=1, scope: RNN_ENCODER %57 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], storage_offset=3600, scope: RNN_ENCODER %60 : Float(1!, 10!, 300) = aten::slicedim=1, start=0, end=10, step=1, scope: RNN_ENCODER %59 : Float(1!, 10!, 300) = aten::as_stridedsize=[1, 10, 300], stride=[300, 4500, 1], storage_offset=3600, scope: RNN_ENCODER %61 : Float(1, 10, 300) = aten::clone(%60), scope: RNN_ENCODER %62 : Float(10, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %64 : Float(1!, 48!, 300) = aten::slicedim=0, start=13, end=14, step=1, scope: RNN_ENCODER %63 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], storage_offset=3900, scope: RNN_ENCODER %66 : Float(1!, 4!, 300) = aten::slicedim=1, start=0, end=4, step=1, scope: RNN_ENCODER %65 : Float(1!, 4!, 300) = aten::as_stridedsize=[1, 4, 300], stride=[300, 4500, 1], storage_offset=3900, scope: RNN_ENCODER %67 : Float(1, 4, 300) = aten::clone(%66), scope: RNN_ENCODER %68 : Float(4, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %70 : Float(1!, 48!, 300) = aten::slicedim=0, start=14, end=15, step=1, scope: RNN_ENCODER %69 : Float(1!, 48!, 300) = aten::as_stridedsize=[1, 48, 300], stride=[300, 4500, 1], storage_offset=4200, scope: RNN_ENCODER %72 : Float(1!, 2!, 300) = aten::slicedim=1, start=0, end=2, step=1, scope: RNN_ENCODER %71 : Float(1!, 2!, 300) = aten::as_stridedsize=[1, 2, 300], stride=[300, 4500, 1], storage_offset=4200, scope: RNN_ENCODER %73 : Float(1, 2, 300) = aten::clone(%72), scope: RNN_ENCODER %74 : Float(2, 300) = aten::viewsize=[-1, 300], scope: RNN_ENCODER %75 : Float(502, 300) = aten::cat[dim=0](%26, %32, %38, %44, %50, %56, %62, %68, %74), scope: RNN_ENCODER return (); } , <function _symbolic_pack_padded_sequence..pack_padded_sequence_trace_wrapper at 0x1c24e95950>, [16 defined in (%16 : Float(48, 15, 300), %17 : Handle = ^Dropout(0.5, False, False)(%13), scope: RNN_ENCODER/Dropout[drop] ), [15, 15, 14, 14, 13, 13, 13, 13, 13, 13, 12, 12, 11, 11, 11, 11, 11, 11, 11, 11, 10, 10, 10, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 8, 8, 8, 7, 7]], 2, <function _symbolic_pack_padded_sequence.._onnx_symbolic_pack_padded_sequence at 0x1c1be48378>
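(As an aside on the dtype question above, a minimal sketch of explicit casts; the names are taken from the post and this does not address the export error itself:)

captions_fake_input = captions_fake_input.long()
cap_lens = cap_lens.long()
hidden = tuple(h.float() for h in hidden)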
st103504
Could you post the code for text_encoder? In your notebook it's loaded from models.py, which seems to be missing. I'm not that familiar with ONNX, but is there a reason you are using _export instead of .export? Exporting a model with an Embedding layer and multiple outputs works, so I would have to see your whole model to see the reason it's failing.
st103505
I created a separate notebook in the same repo with models.py (imported) and the rest of the AttnGAN project. My notebook is similar to pretrain_DAMSM.py but modified so I can import the model into production. I also switched to .export and got the same results. I'll post the code on my git repo, but you need to download the COCO data, just FYI. I didn't commit the AttnGAN project, by the way: github.com/rchavezj/ConvertML_Models/blob/master/convert_.ipynb
st103506
^My project is the comment above. Below is the repo of another developer contributing to AttnGAN: github.com/taoxugit/AttnGAN
st103507
Sorry, maybe I’m blind, but I still couldn’t find your model definition. I just took the one defined from the other repo. After some minor modifications, this code works for me: EDIT: Sorry, my mistake. The code throws the same error and does not work! import torch import torch.nn as nn import torch.onnx from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence # ############## Text2Image Encoder-Decoder ####### class RNN_ENCODER(nn.Module): def __init__(self, ntoken, ninput=300, drop_prob=0.5, nhidden=128, nlayers=1, bidirectional=False): super(RNN_ENCODER, self).__init__() self.n_steps = 10 self.ntoken = ntoken # size of the dictionary self.ninput = ninput # size of each embedding vector self.drop_prob = drop_prob # probability of an element to be zeroed self.nlayers = nlayers # Number of recurrent layers self.bidirectional = bidirectional self.rnn_type = 'LSTM' if bidirectional: self.num_directions = 2 else: self.num_directions = 1 # number of features in the hidden state self.nhidden = nhidden // self.num_directions self.define_module() self.init_weights() def define_module(self): self.encoder = nn.Embedding(self.ntoken, self.ninput) self.drop = nn.Dropout(self.drop_prob) if self.rnn_type == 'LSTM': # dropout: If non-zero, introduces a dropout layer on # the outputs of each RNN layer except the last layer self.rnn = nn.LSTM(self.ninput, self.nhidden, self.nlayers, batch_first=True, dropout=self.drop_prob, bidirectional=self.bidirectional) elif self.rnn_type == 'GRU': self.rnn = nn.GRU(self.ninput, self.nhidden, self.nlayers, batch_first=True, dropout=self.drop_prob, bidirectional=self.bidirectional) else: raise NotImplementedError def init_weights(self): initrange = 0.1 self.encoder.weight.data.uniform_(-initrange, initrange) # Do not need to initialize RNN parameters, which have been initialized # http://pytorch.org/docs/master/_modules/torch/nn/modules/rnn.html#LSTM # self.decoder.weight.data.uniform_(-initrange, initrange) # self.decoder.bias.data.fill_(0) def init_hidden(self, bsz): weight = next(self.parameters()).data if self.rnn_type == 'LSTM': return (weight.new(self.nlayers * self.num_directions, bsz, self.nhidden).zero_(), weight.new(self.nlayers * self.num_directions, bsz, self.nhidden).zero_()) else: return weight.new(self.nlayers * self.num_directions, bsz, self.nhidden).zero_() def forward(self, captions, cap_lens, hidden, mask=None): # input: torch.LongTensor of size batch x n_steps # --> emb: batch x n_steps x ninput emb = self.drop(self.encoder(captions)) # # Returns: a PackedSequence object cap_lens = cap_lens.data.tolist() emb = pack_padded_sequence(emb, cap_lens, batch_first=True) # #hidden and memory (num_layers * num_directions, batch, hidden_size): # tensor containing the initial hidden state for each element in batch. 
# #output (batch, seq_len, hidden_size * num_directions) # #or a PackedSequence object: # tensor containing output features (h_t) from the last layer of RNN output, hidden = self.rnn(emb, hidden) # PackedSequence object # --> (batch, seq_len, hidden_size * num_directions) output = pad_packed_sequence(output, batch_first=True)[0] # output = self.drop(output) # --> batch x hidden_size*num_directions x seq_len words_emb = output.transpose(1, 2) # --> batch x num_directions*hidden_size if self.rnn_type == 'LSTM': sent_emb = hidden[0].transpose(0, 1).contiguous() else: sent_emb = hidden.transpose(0, 1).contiguous() sent_emb = sent_emb.view(-1, self.nhidden * self.num_directions) return words_emb, sent_emb model = RNN_ENCODER(27297) captions = torch.empty(48, 15, dtype=torch.long).random_(27297) cap_lens = torch.sort(torch.empty(48, dtype=torch.long).random_(1, 15), descending=True)[0] hidden = (torch.randn(1, 48, 128), torch.randn(1, 48, 128)) output = model(captions, cap_lens, hidden) torch.onnx.export(model, (captions, cap_lens, hidden), 'test.proto', verbose=True, export_params=True) Could you compare your code with this one?
st103508
I tested out your code and I'm still getting the same error. Am I missing a software package?
st103509
I'm currently using a PyTorch version compiled from master. Let me check the code with 0.4.0. EDIT: It's also working on 0.4.0. Which PyTorch version do you have? Could you update to the current stable release? You will find the install instructions on the website.
st103510
ptrblck:
"PyTorch version compiled from master"

I'm currently on 0.4.0 as well. How do I check whether I'm on master?
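(An aside, not from the thread: one quick way to check which build is installed.)

import torch
print(torch.__version__)  # a stable release prints e.g. '0.4.0';
                          # a build from master typically carries an alpha/commit suffix, e.g. '0.5.0a0+...'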
st103511
Hi, I have a huge 1-dimensional sparse array (e.g. [0,0,…,0,1,0,0,…,0]) to feed into PyTorch for training. Is there an efficient way to define the input? I have looked at torch.sparse, but that seems to be for 2-D. Besides, the targets are also sparse arrays; is there an efficient way to compare the output of the training and the target?
st103512
Solved by tom in post #2 You can use sparse arrays with 1d as well. Subtracting sparse array should work. Best regards Thomas
st103513
You can use sparse arrays with 1d as well. Subtracting sparse array should work. Best regards Thomas
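(A minimal sketch following the answer above; the length and indices are made up.)

import torch

# two 1-D sparse tensors of length 10, each with a single non-zero entry
a = torch.sparse_coo_tensor(torch.tensor([[3]]), torch.tensor([1.0]), (10,))
b = torch.sparse_coo_tensor(torch.tensor([[7]]), torch.tensor([1.0]), (10,))

diff = a - b            # subtracting sparse tensors, as suggested above
print(diff.to_dense())  # tensor([ 0.,  0.,  0.,  1.,  0.,  0.,  0., -1.,  0.,  0.])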
st103514
I have a multi-dimensional tensor and I want to change the value at only certain indices. I tried the numpy method of indexing using a list of indices, but it's producing some weird and inconsistent behavior. Sometimes it works as I want it to, but at some, apparently arbitrary, indices I get an out-of-bounds error. Please look at the code below to get an idea of what I am talking about:

>>> import torch
>>> a = torch.zeros((3,3))
>>> idxs = [(2,2),(2,1)]
>>> a[idxs] = 2
>>> a
tensor([[ 0., 0., 0.],
        [ 0., 0., 0.],
        [ 0., 2., 2.]])
# This is exactly what I wanted
# Now for the weirdness
>>> a = torch.zeros((6,32))
>>> idxs = [(2,6),(2,1)]
>>> a[idxs]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: invalid argument 2: out of range: 193 out of 192 at /pytorch/aten/src/TH/generic/THTensorMath.c:430
# I tried the same indices in the reverse order and got no error
>>> idxs = [(2,1),(2,6)]
>>> a[idxs]
tensor([ 0., 0.])

Any idea what I am doing wrong? Most probably it's not a bug and I have some misunderstanding about how this format of indexing works in pytorch.
st103515
I'm not sure if you really get the indices you would like to have. Have a look at the following code:

a = torch.arange(6*32).view(6,32)
idxs = [(2,1),(2,6)]
print(a[idxs])
> tensor([66, 38])

You are basically getting a[2, 2] and a[1, 6], which explains the error in the other example. If you want a[2, 1] and a[2, 6] try:

idxs = ([2, 2], [1, 6])
a[idxs]  # gives a[2, 1] and a[2, 6]

Note that the error is hidden in your first example, as the column and row tensors are (2, 2) and (2, 1).
st103516
Oh, so each sub-list should contain the indices of the respective dimensions. Got it. Thanks
st103517
Hey! So to my understanding Caffe2 merged with PyTorch. I'm familiar with PyTorch but not with Caffe or Caffe2. There's a pretrained model that I would like to use right here: https://github.com/berli/MTCNN-2/tree/master/model I'm not even sure if it's a Caffe or Caffe2 model. Anyway, how do I run it? Thanks in advance, Yoni
st103518
I don’t know either but I want to! Just spamming so I can get notified Hope ur post gets more traction!
st103519
I want to train a model for a time series prediction task. I built my own model on PyTorch but I’m getting really bad performance compared to the same model implemented on Keras. Each epoch on PyTorch takes 50ms against 1ms on Keras. I want to show you my simple code because I’d like to know if I made any mistakes or it’s just PyTorch. Thank you in advance. This is my module: class Net(nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, hidden_layers): super(Net, self).__init__() self.input_dim = input_dim self.hidden_dim = hidden_dim self.output_dim = output_dim self.hidden_layers = hidden_layers self.lstm = nn.LSTM(input_dim, hidden_dim, hidden_layers) self.h2o = nn.Linear(hidden_dim, output_dim) def forward(self, x): h_t = Variable(torch.randn(self.hidden_layers, BATCH_SIZE, self.hidden_dim)).cuda() c_t = Variable(torch.randn(self.hidden_layers, BATCH_SIZE, self.hidden_dim)).cuda() h_t, c_t = self.lstm(x, (h_t, c_t)) output = self.h2o(h_t) return output And this is the training execution: model = Net(INPUT_DIM, 40, OUTPUT_DIM, 1).cuda() loss_fcn = MEDLoss() optimizer = optim.RMSprop(model.parameters()) for epoch in range(EPOCHS): loss = 0 start = time.time() for seq in range(11, 20): length = seq_lenghts[seq] x = Variable(X_data[:length, [seq], :]).cuda() y = Variable(Y_data[:length, [seq], :]).cuda() model.zero_grad() output = model(x) loss = loss_fcn(output, y) loss.backward() optimizer.step() print("Epoch", epoch + 1, "Loss:", loss.cpu().data.numpy(), "Time:", time.time() - start)
st103520
Could I see the code with the dataset as well (can I get an example that I can run). And at the same time, can I run your Keras / TF code as well?
st103521
I’m sorry, I cannot publish the dataset. Input dimension is (242 timesteps, 20 sequences, 10 input dim), output is (242, 20, 2). This is my full code with a random generated dataset so you can run it. import torch import torch.nn as nn from torch.autograd import Variable from torch.nn.modules.loss import _Loss, _assert_no_grad import torch.optim as optim import numpy as np import time BATCH_SIZE = 1 INPUT_DIM = 10 OUTPUT_DIM = 2 EPOCHS = 1000 def med_loss(input, target): return torch.mean(torch.sqrt(torch.sum(torch.pow(target - input, 2)))) class MEDLoss(_Loss): def __init__(self): super(MEDLoss, self).__init__() def forward(self, input, target): _assert_no_grad(target) return med_loss(input, target) class Net(nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, hidden_layers): super(Net, self).__init__() self.input_dim = input_dim self.hidden_dim = hidden_dim self.output_dim = output_dim self.hidden_layers = hidden_layers self.lstm = nn.LSTM(input_dim, hidden_dim, hidden_layers) self.h2o = nn.Linear(hidden_dim, output_dim) def forward(self, x): h_t = Variable(torch.randn(self.hidden_layers, BATCH_SIZE, self.hidden_dim)).cuda() c_t = Variable(torch.randn(self.hidden_layers, BATCH_SIZE, self.hidden_dim)).cuda() h_t, c_t = self.lstm(x, (h_t, c_t)) output = self.h2o(h_t) return output print("Loading data") X_data = torch.randn((242, 20, INPUT_DIM)) Y_data = torch.rand((242, 20, OUTPUT_DIM)) * 10 seq_lenghts = np.ones(20) * 242 model = Net(INPUT_DIM, 40, OUTPUT_DIM, 1).cuda() loss_fcn = MEDLoss() optimizer = optim.RMSprop(model.parameters()) for epoch in range(EPOCHS): loss = 0 start = time.time() for seq in range(11, 20): length = seq_lenghts[seq] x = Variable(X_data[:length, [seq], :]).cuda() y = Variable(Y_data[:length, [seq], :]).cuda() model.zero_grad() output = model(x) loss = loss_fcn(output, y) loss.backward() optimizer.step() print("Epoch", epoch + 1, "Loss:", loss.cpu().data.numpy(), "Time:", time.time() - start)
st103522
thanks, i could run it. Can I have the TF/Keras equivalent as well? Sample output that I’m seeing Epoch 218 Loss: [ 20.28092194] Time: 0.12861180305480957 Epoch 219 Loss: [ 21.20872307] Time: 0.12335944175720215 Epoch 220 Loss: [ 20.74290848] Time: 0.11568808555603027 Epoch 221 Loss: [ 20.72460365] Time: 0.11843180656433105 Epoch 222 Loss: [ 19.11690903] Time: 0.11834907531738281 Epoch 223 Loss: [ 22.12939262] Time: 0.11621856689453125 Epoch 224 Loss: [ 19.81811905] Time: 0.11482071876525879 Epoch 225 Loss: [ 19.9168148] Time: 0.137786865234375
st103523
Keras implementation: import numpy as np from keras.models import Sequential from keras.layers import Dense, CuDNNLSTM from keras.utils import print_summary from keras import backend as K def mean_eucl_dist(y_true, y_pred): return K.mean(K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))) X_data = np.random.randn((20, 242, 10)) Y_data = np.random.rand((20, 242, 2)) * 10 model = Sequential() model.add(CuDNNLSTM(40, return_sequences=True, input_shape=(242, 10))) model.add(Dense(2, activation='linear')) model.compile(loss=mean_eucl_dist, optimizer='rmsprop', metrics=[mean_eucl_dist]) model.fit(X_data[11:, :, :], Y_data[11:, :, :], batch_size=242, epochs=10000, shuffle=True)
st103524
Which version of TensorFlow has to be installed to run this? (Also, I don't think X_data = np.randn((20, 242, 10)) is valid; numpy doesn't have randn.)
st103525
TensorFlow 1.4, tensorflow-gpu package on pip. As a side note, I’m not sure that batch_size=242 is correct in the Keras implementation, I’m a bit confused because it seems that PyTorch and Keras have a different semantic for this term. I edited my code, now it should work.
st103526
yes that looks suspect, batch_size should be 1. If you give batch_size=242, it is processing the entire dataset in one go (because dataset size here is 10 samples, it either takes min(dataset_size, batch_size) or entirely drops the training computation. At batch_size=1 for both scripts, pytorch runs on my machine in about 15 seconds for 100 epochs, and the Keras one runs at about 20 seconds. I’ve made some cosmetic changes to the PyTorch one in terms of best-practices for you to maybe learn from, nothing major. My changes get the pytorch script down to 13 seconds / 100 epochs. The changes are: we provide a mse_loss (mean square error), just use that instead of rolling your own move the dataset to GPU fully before-hand, just like how TF/Keras would do it. Here’s the modified PyTorch script: import torch import torch.nn as nn from torch.autograd import Variable from torch.nn.modules.loss import _Loss, _assert_no_grad import torch.nn.functional as F import torch.optim as optim import numpy as np import time BATCH_SIZE = 1 INPUT_DIM = 10 OUTPUT_DIM = 2 EPOCHS = 100 class Net(nn.Module): def __init__(self, input_dim, hidden_dim, output_dim, hidden_layers): super(Net, self).__init__() self.input_dim = input_dim self.hidden_dim = hidden_dim self.output_dim = output_dim self.hidden_layers = hidden_layers self.lstm = nn.LSTM(input_dim, hidden_dim, hidden_layers) self.h2o = nn.Linear(hidden_dim, output_dim) def forward(self, x): h_t = Variable(x.data.new(self.hidden_layers, BATCH_SIZE, self.hidden_dim).normal_()) c_t = Variable(x.data.new(self.hidden_layers, BATCH_SIZE, self.hidden_dim).normal_()) h_t, c_t = self.lstm(x, (h_t, c_t)) output = self.h2o(h_t) return output print("Loading data") X_data = torch.randn((242, 20, INPUT_DIM)) Y_data = torch.rand((242, 20, OUTPUT_DIM)) * 10 X_data = X_data.cuda() Y_data = Y_data.cuda() model = Net(INPUT_DIM, 40, OUTPUT_DIM, 1).cuda() optimizer = optim.RMSprop(model.parameters()) for epoch in range(EPOCHS): loss = 0 start = time.time() for seq in range(11, 20): x = Variable(X_data[:, [seq], :].cuda()) y = Variable(Y_data[:, [seq], :].cuda()) model.zero_grad() output = model(x) loss = F.mse_loss(output, y) loss.backward() optimizer.step() print("Epoch", epoch + 1, "Loss:", loss.data[0], "Time:", time.time() - start) Here’s the runtime logs for pytorch: https://gist.github.com/994c3d50545d26229b9ac9c12c070b7e 83 Here’s the runtime logs for TF/Keras: https://gist.github.com/842a410b004a8fe7b773b1e9904befaa 58 And btw at this size of an LSTM, CuDNN kernels dont give any speedup really. The CuDNN kernels shine when there are multiple layers and large kernel sizes.
st103527
Thank you very much, now I get comparable results (6 seconds both with PyTorch and Keras for 100 epochs). So the problem was just my misunderstanding of batch_size’s meaning. I read some articles found on the web that are not clear about it and some of them are wrong!
st103528
I need a small clarification. Is it always necessary to have a batch size of 1 when training any of the sequence models (RNN,LSTM,GRU) in pytorch to give the maximum speed-up?
st103529
A larger batch is faster to compute but more iterations could be necessary to converge, so… it depends.
st103530
In that case why in your example a batch_size > 1 did not give a greater speedup?
st103531
Hi. I’m new to pytorch. I have a project I’m working on that uses the babi data set. My code is very messy and I want to show as little of it as I can. I have some modules that I use and one of them is a wrapper for the other ones. The wrapper module has several methods in it besides the ‘forward’ method. These methods are called in the wrapper’s forward method. Do I have to worry about this setup? Will my code train properly? In fact I am trying to fix a problem that I have where my model does not train well after reaching the 50% accuracy mark. Could this somehow be related to my wrapper’s forward method? class WrapMemRNN(nn.Module): def __init__(self,vocab_size, embed_dim, hidden_size, n_layers, dropout=0.3, do_babi=True, bad_token_lst=[], freeze_embedding=False, embedding=None): super(WrapMemRNN, self).__init__() self.hidden_size = hidden_size self.n_layers = n_layers self.do_babi = do_babi self.bad_token_lst = bad_token_lst self.embedding = embedding self.freeze_embedding = freeze_embedding self.teacher_forcing_ratio = hparams['teacher_forcing_ratio'] self.model_1_enc = Encoder(vocab_size, embed_dim, hidden_size, n_layers, dropout=dropout,embedding=embedding, bidirectional=False) self.model_2_enc = Encoder(vocab_size, embed_dim, hidden_size, n_layers, dropout=dropout, embedding=embedding, bidirectional=False) self.model_3_mem_a = MemRNN(hidden_size, dropout=dropout) self.model_3_mem_b = MemRNN(hidden_size, dropout=dropout) self.model_4_att = EpisodicAttn(hidden_size, dropout=dropout) self.model_5_ans = AnswerModule(vocab_size, hidden_size,dropout=dropout) self.input_var = None # for input self.q_var = None # for question self.answer_var = None # for answer self.q_q = None # extra question self.inp_c = None # extra input self.inp_c_seq = None self.all_mem = None self.last_mem = None # output of mem unit self.prediction = None # final single word prediction self.memory_hops = hparams['babi_memory_hops'] if self.freeze_embedding or self.embedding is not None: self.new_freeze_embedding() #self.criterion = nn.CrossEntropyLoss() pass def forward(self, input_variable, question_variable, target_variable, criterion=None): self.new_input_module(input_variable, question_variable) self.new_episodic_module() outputs, loss = self.new_answer_module_simple(target_variable, criterion) return outputs, None, loss, None def new_freeze_embedding(self): self.model_1_enc.embed.weight.requires_grad = False self.model_2_enc.embed.weight.requires_grad = False print('freeze embedding') pass def new_input_module(self, input_variable, question_variable): out1, hidden1 = self.model_1_enc(input_variable) self.inp_c_seq = out1 self.inp_c = hidden1 #out1 out2, hidden2 = self.model_2_enc(question_variable) self.q_q = hidden2 return def new_episodic_module(self): if True: m_list = [] g_list = [] e_list = [] f_list = [] m = self.q_q.clone() g = nn.Parameter(torch.zeros(1, 1, self.hidden_size)) e = nn.Parameter(torch.zeros(1, 1, self.hidden_size)) f = nn.Parameter(torch.zeros(1, 1, self.hidden_size)) m_list.append(m) g_list.append(g) e_list.append(e) f_list.append(f) #m_list.append(self.q_q.clone()) for iter in range(self.memory_hops): #g_list.append(g) #e_list.append(e) sequences = self.inp_c_seq.clone().permute(1,0,2).squeeze(0) for i in range(len(sequences)): #if True: x = self.new_attention_step(sequences[i], g_list[-1], m_list[-1], self.q_q) g_list.append(x) for i in range(len(sequences)): #if True: e, f = self.new_episode_small_step(sequences[i], g_list[-1], e_list[-1]) e_list.append(e) f_list.append(f) _, out = 
self.model_3_mem_a( e_list[-1], m_list[-1])#, g_list[-1]) m_list.append(out) self.last_mem = m_list[-1] return m_list[-1] def new_episode_small_step(self, ct, g, prev_h): _ , gru = self.model_3_mem_a(ct, prev_h, None) # g h = g * gru + (1 - g) * prev_h return h, gru def new_attention_step(self, ct, prev_g, mem, q_q): #mem = mem.view(-1, self.hidden_size) concat_list = [ #prev_g.view(-1, self.hidden_size), ct.unsqueeze(0),#.view(self.hidden_size,-1), mem.squeeze(0), q_q.squeeze(0), (ct * q_q).squeeze(0), (ct * mem).squeeze(0), torch.abs(ct - q_q).squeeze(0), torch.abs(ct - mem).squeeze(0) ] #for i in concat_list: print(i.size()) #exit() return self.model_4_att(concat_list) def new_answer_module_simple(self,target_var, criterion): loss = 0 ansx = self.model_5_ans(self.last_mem, self.q_q) #ans = ansx.data.max(dim=1)[1] ans = torch.argmax(ansx,dim=1)[0] if criterion is not None: loss = criterion(ansx, target_var[0]) return [ans], loss pass
st103532
I don’t know why your model doesn’t train well, but calling other methods in your forward method probably isn’t the problem.
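(For illustration only, a minimal toy example, not the poster's model: autograd tracks the tensor operations themselves, so splitting forward across helper methods does not change what gets trained.)

import torch
import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self):
        super(Wrapper, self).__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def encode(self, x):   # helper called from forward
        return torch.relu(self.fc1(x))

    def decode(self, h):   # another helper
        return self.fc2(h)

    def forward(self, x):
        return self.decode(self.encode(x))

model = Wrapper()
out = model(torch.randn(3, 4))
out.sum().backward()  # gradients flow through both helper methods
print(model.fc1.weight.grad is not None)  # True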
st103533
My model still has problems. I have opened a Stack Overflow question. If you are interested in the issue in general, check out this link: https://stackoverflow.com/questions/51154949/dmn-neural-network-with-poor-validation-results-only-50 Thanks.
st103534
Hello, I am working on a new optimization algorithm but I am unable to integrate it into my PyTorch code. Is there any way to implement an optimization algorithm other than those present in the torch.optim package? Thanks a lot in advance.
st103535
Yes, you can just check the implementations of the optimization algorithms of PyTorch (e.g. SGD or Adam) and use them as a model to help you get started. Here is an example of an optimization algorithm that was presented at ICLR: AccSGD
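(A minimal sketch of a custom optimizer following that pattern; the update rule here is just plain SGD, only to show the structure you would fill in with your own algorithm.)

import torch
from torch.optim import Optimizer

class MyOptimizer(Optimizer):
    def __init__(self, params, lr=1e-2):
        defaults = dict(lr=lr)
        super(MyOptimizer, self).__init__(params, defaults)

    def step(self, closure=None):
        loss = None
        if closure is not None:
            loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                # replace this plain SGD update with your own rule
                p.data -= group['lr'] * p.grad.data
        return loss

# usage: optimizer = MyOptimizer(model.parameters(), lr=0.01)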
st103536
As in the title: take Inception v3 for example. How do I check or print the intermediate outputs?
st103537
You could register forward hooks as shown in this example. If you would like to store the activation in a dict, have a look at this example.
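(A minimal sketch of the idea, not copied from the linked examples; the layer name Mixed_5b is just one of the torchvision Inception v3 submodules chosen for illustration.)

import torch
import torchvision.models as models

activation = {}

def get_activation(name):
    def hook(module, input, output):
        activation[name] = output.detach()
    return hook

model = models.inception_v3(pretrained=False)
model.Mixed_5b.register_forward_hook(get_activation('Mixed_5b'))

model.eval()  # single output, no aux classifier
x = torch.randn(1, 3, 299, 299)
out = model(x)
print(activation['Mixed_5b'].shape)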
st103538
Thank you. That is exactly what I need. Another question: how do I display the intermediate results?
st103539
You can just print the activation from your dict. If you would like to visualize them, just call .numpy() on the activation and you can use e.g. matplotlib to visualize the activation maps.
st103540
What is your opinion on this? Do you run into errors or annoying situations without the context manager? In my opinion, the fewer context managers the better, especially if you are using PEP8! While the torch.no_grad() context manager makes total sense, since the gradient calculation was previously a property of the data (Variable(..., volatile=False)) and is now a "property" of the workflow, the eval property belongs to the model in my opinion, since it changes the behavior of some internal Modules. Using this context manager, it might also be more complicated to set some layers to eval while others stay in train. Would love to hear other opinions on this!
st103541
Switching correctly between train()/eval() during training/validation is tricky in the presence of exceptions or other early exits. Once in the wrong state, your training might be totally off. Context managers help in such situations. I'm using the following context manager:

# Following snippet is licensed under MIT license
from contextlib import contextmanager

@contextmanager
def evaluating(net):
    '''Temporarily switch to evaluation mode.'''
    istrain = net.training
    try:
        net.eval()
        yield net
    finally:
        if istrain:
            net.train()

Intended usage:

with evaluating(net):
    ...

Also, in case you are concerned about having too many nested blocks, you can collapse multiple context managers into a single line, with evaluating(net), torch.no_grad():, in py3.x and later. In 2.7 you need to use nested from contextlib.
st103542
Thank you @Christoph_Heindl. Don't suppose… could you explicitly grant a license to use the code, e.g. MIT (or something else that is relatively free, i.e. not GPLv3 or similar)?
st103543
@hughperkins modified above snippet to contain license info. One thing that should be clarified by someone who has internal insights is if net.training is always a primitive boolean and not something complex. Otherwise remembering the training state before switching to eval would require an explicit copy.
st103544
If I do this, do the "Model" and the "Encoder" have the same weights? Or will they become two separate networks?

class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.fc1 = nn.Linear(784, 32)

    def forward(self, x):
        return F.sigmoid(self.fc1(x))

class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.fc1 = nn.Linear(32, 784)

    def forward(self, x):
        return F.sigmoid(self.fc1(x))

class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.fc1 = Encoder()
        self.fc2 = Decoder()

    def forward(self, x):
        return self.fc2(self.fc1(x))

Model = AutoEncoder()
Encoder = Encoder()
Decoder = Decoder()
st103545
Solved by ptrblck in post #2 Both will be initialized with their own weights: print(Model.fc1.fc1.weight) print(Encoder.fc1.weight)
st103546
Both will be initialized with their own weights:

print(Model.fc1.fc1.weight)
print(Encoder.fc1.weight)
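(An illustrative sketch of what this means in practice, reusing the names from the question; the first check should print False since the two modules are initialized independently.)

print(torch.equal(Model.fc1.fc1.weight, Encoder.fc1.weight))  # False

# to actually share weights, reuse the same submodule instead of constructing a new one:
shared_encoder = Model.fc1
print(torch.equal(Model.fc1.fc1.weight, shared_encoder.fc1.weight))  # True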
st103547
Hi, I want to modify the convolutional filters only in the backward pass to use some fixed random ones (Feedback Alignment https://www.nature.com/articles/ncomms13276 4, https://arxiv.org/abs/1609.01596 3), so I wrote a custom module: class Aconv2d(nn.Module): def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True): super(Aconv2d, self).__init__() if in_channels % groups != 0: raise ValueError('in_channels must be divisible by groups') if out_channels % groups != 0: raise ValueError('out_channels must be divisible by groups') kernel_size = _pair(kernel_size) stride = _pair(stride) padding = _pair(padding) dilation = _pair(dilation) self.in_channels = in_channels self.out_channels = out_channels self.kernel_size = kernel_size self.stride = stride self.padding = padding self.dilation = dilation self.transposed = False self.output_padding = _pair(0) self.groups = groups if self.transposed: self.weight = nn.Parameter(torch.Tensor(in_channels, out_channels // groups, *kernel_size)) else: self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *kernel_size)) if bias: self.bias = nn.Parameter(torch.Tensor(out_channels)) else: self.register_parameter('bias', None) self.backward_weight = torch.Tensor(self.weight.size()) self.reset_parameters() self.forward_weight = self.weight.data def reset_parameters(self): n = self.in_channels for k in self.kernel_size: n *= k stdv = 1. / math.sqrt(n) self.weight.data.uniform_(-stdv, stdv) self.backward_weight.uniform_(-stdv, stdv) if self.bias is not None: self.bias.data.uniform_(-stdv, stdv) def forward(self, input): return F.conv2d(input, self.weight, self.bias, self.stride, self.padding, self.dilation, self.groups) def switch_mode(self, mode): if mode == 'backward': self.forward_weight.copy_(self.weight) self.weight.data.copy_(self.backward_weight) elif mode == 'forward': self.weight.data.copy_(self.forward_weight) return Then I iterate over my model layers and call switch_mode for each of these layers before the forward call and the backward call, to use the right weights. With this an iteration takes ~45 seconds on my GPU, while with plain backprop it takes ~15 seconds. I tried writing my custom autograd function using conv2d_input and conv2d_weight but it’s even slower (~240 seconds). I was wondering if there is a way to do this more efficiently. I was thinking about backward hooks, but my understanding is that they’re executed after the backward of the module, while here I’d need them to be performed before the module backward. Any ideas/suggestions?
st103548
Someone from Stack Overflow recommended that I post my question here: I'm getting a runtime error every time I try to debug PyTorch code for the second time. It seems like after restarting the IPython console some of the CUDA programs are not canceled properly, so this error occurs when trying to rerun the code. Has anyone had the same experience / knows how to solve it? Is there a way to manually flush all CUDA activities from my GPU? I'm using pytorch version 0.1.12_1, ipython 4.2.0, spyder 3.2.4 as IDE, cuda 8.0, Titan X (Pascal). Alternatively: can someone tell me a working combination of pytorch / ipython / spyder versions? Maybe mine is simply buggy.
st103549
Hello, I have the same problem and I also need help. Hope someone could tell us how to solve this problem.
st103550
Hello, I got the same problem. Did you solve it by now? Can you tell me how to solve it? Hoping for your reply. Thanks in advance.
st103551
Hi, you're using PyTorch v0.1; please upgrade to 0.3, as many interfaces have changed since then.
st103552
Hi, thanks for replying. I used the command "conda update pytorch torchvision", but I can only update my PyTorch to 0.2. Can you tell me how to update? Is it that PyTorch version 0.3 can only be installed with Python 3?
st103553
You might need to update your conda install to be able to use the latest version of pytorch.
st103554
Hi, I encounter the same problem. It can be reproduced via the following link. Thank you very much in advance.
st103555
I encountered the same problem, but I can run my code through Sublime Text 3 (Sublime Text 3 was started as root (sudo)). Then I executed 'sudo chmod 777 -R ~/anaconda2' and the problem disappeared.
st103556
Hello, I have this classification problem: classifying crops using satellite imagery. I have two kinds of data, optical (20 features) and radar (6 features). There are two types of satellites, one for each of our two data categories; they take several images all year long (not at a regular interval), hence the temporal aspect of our data and the usage of recurrent networks. There's a common issue with optical imagery concerning blockage: if there are clouds, and in most cases there are, we use a method called gap-filling (interpolating the missing data points from previous dates), but it leads to many repetitions in the data. On the other hand, radar data doesn't have this issue, but alone (6 features) or together with optical data (20 + 6 features) it gives poor classification overall accuracy. So my question goes like this: is there a way of using both kinds of data while dynamically modifying weights in the neural net so it becomes more sensitive to these kinds of problems (or will the neural net naturally become more sensitive to the problem with more examples)? Sorry for the long post, I just wanted to explain the problem at hand fully. And thank you.
st103557
I was wondering whether it would be better to store and load images as tensors. In general I load images with OpenCV or PIL and convert them to tensors; if I converted my data into tensors and dumped them, would loading be faster?
st103558
Solved by ptrblck in post #4 If you don’t want to transform your images, you could load them as tensors. However, if you do need a transformation, you would have to transform the tensors back to PIL.Images, apply the augmentation/transformation, and transform them back to tensors. This shouldn’t be too bad regarding the perfo…
st103559
You can try Dataloader, which doesn’t require to actually load all images in memory at any point. You can read more about it here: https://pytorch.org/docs/stable/data.html 342 tutorial: https://github.com/pytorch/tutorials/blob/master/beginner_source/data_loading_tutorial.py 300
st103560
Yeah, I know about it. I was thinking about optimizing every time __getitem__ is called: you have to load images, so what is better to load, the image itself or the image stored as a tensor?
st103561
If you don't want to transform your images, you could load them as tensors. However, if you do need a transformation, you would have to transform the tensors back to PIL.Images, apply the augmentation/transformation, and transform them back to tensors. This shouldn't be too bad regarding the performance, but can be annoying. On the other hand, do you want to load single images/tensors? If so, I think you won't get any performance improvement:

x = torch.empty(3, 224, 224).uniform_(0, 1)
img = transforms.ToPILImage()(x)

# save
torch.save(x, 'tensor_image.pth')
img.save('pil_image.png')

%timeit torch.load('tensor_image.pth')
> 1000 loops, best of 3: 206 µs per loop

%timeit Image.open('pil_image.png')
> 10000 loops, best of 3: 86.6 µs per loop

I think you'll be faster if you load a whole batch of tensors.
st103562
Oh torch.load is slower! I thought it would have been efficient! I guess due to pickle backend perhaps?! I think you’ll be faster if you load a whole batch of tensors. Yeah perhaps loading multiple images consecutively would (SHOULD!) be slower compared to loading tensor at once. Thanks!
st103563
I think the tensor is just too small to get a significant performance boost. Naman-ntc: Yeah perhaps loading multiple images consecutively would (SHOULD!) be slower compared to loading tensor at once. That’s what I think, too.
st103564
Just as a follow-up, I modified the script for loading and comparing multiple images. Even for loading 19 images at once, loading a tensor greatly outperforms the other methods.

from PIL import Image
import torch
import torchvision.transforms as transforms
import timeit
import cv2

K = 19
x = torch.zeros(3, 224, 224).uniform_(0, 1)
img = transforms.ToPILImage()(x)
for i in range(K):
    img.save(str(i) + ".png")
x = x.unsqueeze(0).expand(K, 3, 224, 224)
torch.save(x, 'tensor_images.pth')

def loading1():
    torch.load('tensor_images.pth')

def loading2():
    for i in range(K):
        Image.open(str(i) + ".png")

def loading3():
    for i in range(K):
        cv2.imread(str(i) + ".png")

And the outputs:

>>> timeit.timeit(loading1, number=100000)
14.435413494997192
>>> timeit.timeit(loading2, number=100000)
95.70677890698425
st103565
That’s great. What about the filesize? Is there any significant pytorch-specific overhead?
st103566
I stored only 19 images so I can't say much, but yeah, file sizes are smaller by a factor of about 2.
st103567
I have a tensor of shape (batch_size, num_steps, hidden_size). For each row vector in each matrix (num_steps, hidden_size) I want to compute the element-wise product with every other row vector, yielding a tensor of shape (batch_size, num_steps, num_steps, hidden_size). Is there a built-in way to do this that avoids for loops?
st103568
Solved by tom in post #2 You can use broadcasting. For example: y = x[:, :, None, :] * x[:, None, :, :] There also is torch.einsum for fancy stuff (torch.einsum("iaj,ibj->iabj", [x, x]), works in PyTorch self-compiled master), but in your case the above seems to be what looks most straightforward and transparent to me (th…
st103569
You can use broadcasting. For example:

y = x[:, :, None, :] * x[:, None, :, :]

There also is torch.einsum for fancy stuff (torch.einsum("iaj,ibj->iabj", [x, x]), works in PyTorch self-compiled master), but in your case the above seems to be what looks most straightforward and transparent to me (the trailing ":" indices are for my taste, not technically necessary; alternatives include inserting the dimensions with unsqueeze instead of using indexing). Best regards, Thomas
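(A quick shape check of the broadcasting expression above, with made-up sizes:)

import torch

batch_size, num_steps, hidden_size = 4, 7, 16
x = torch.randn(batch_size, num_steps, hidden_size)
y = x[:, :, None, :] * x[:, None, :, :]
print(y.shape)  # torch.Size([4, 7, 7, 16])
# y[b, i, j] is the element-wise product of row i and row j of x[b]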
st103570
Dear Thomas, That’s brilliant. Quite a saving from my 10 lines of for loop rubbish. It has also expanded my world in terms of what can be achieved with broadcasting. I’ve played with it for a bit now and am thoroughly satisfied with the solution. Thanks very much for your kind help!!! Best regards, Tim.
st103571
I wrap my data with Dataset, then use DataLoader to enumerate it. But because of the copy-on-write mechanism, my memory usage goes much higher than expected. My problem can be simplified as follows:

class DataIter(Dataset):
    def __init__(self):
        self.data = range(90317731)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return torch.Tensor(self.data[idx])

Then I use DataLoader and a for-loop to fetch the data:

train_data = DataIter()
train_loader = DataLoader(train_data, batch_size=64, shuffle=True, num_workers=8)
for i, item in enumerate(train_loader):
    print(i)

While it is running, I watch my memory (RAM, RSS). It costs about 20 GB of RSS due to copy-on-write in the subprocesses. How can I deal with it? self.data = range(90317731) should cost about 2~3 GB as a Python list. I know using NumPy can reduce the symptom: it reduces the train_data size, so the subprocesses copy less. To summarize my problems: (1) How can I reduce the memory cost caused by copy-on-write in the subprocesses? Using Manager or something else? (2) PyTorch has only considered shared memory for Tensors, but not for the Dataset class, am I right? I'm using Python 2.7.
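(Aside, a minimal sketch of one common workaround, not from the original thread: keep the payload in a single NumPy array instead of a Python list, so the worker processes don't touch per-element refcounts and the pages can stay shared.)

import numpy as np
import torch
from torch.utils.data import Dataset

class DataIter(Dataset):
    def __init__(self):
        # one contiguous buffer instead of ~90M separate Python int objects
        self.data = np.arange(90317731, dtype=np.int64)

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return torch.tensor(float(self.data[idx]))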
st103572
I tried to write a piece of code as follows:

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        self._coeffs = Variable(1e-3*torch.randn(3).cuda(), requires_grad=True)
        self.convs = nn.ModuleList([nn.Conv2d(10, 10, 3) for i in range(3)])

    def forward(self, x):
        sum_output = 0
        for i in range(3):
            sum_output += self._coeffs[i] * self.convs[i](x)
        return sum_output

And I put the model on 4 GPUs with the DataParallel class:

model = nn.DataParallel(Test(), [0, 1, 2, 3]).cuda()

During the forward pass, it reports "RuntimeError: tensors are on different GPUs…" and I'm very sure the problem is at self._coeffs. Waiting for answers…
st103573
You shouldn’t create CUDA tensors in your init on a specific device. Currently you are pushing self._coeffs to the default GPU. Try to remove the .cuda() op and run it again. PS: You can add code using three backticks `.
st103574
Hi ptrblck, thanks for your quick answer; it works well now. But I found that nn.DataParallel doesn't replicate the module's member variables. Specifically, nn.DataParallel does not work on self._coeffs, since I checked that all the forward and backward operations with respect to self._coeffs are on cuda:0.
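(Aside, a minimal sketch of the usual way to make such coefficients follow the module across devices, not taken from the reply above: register them as an nn.Parameter, which DataParallel replicates along with the rest of the module's parameters.)

class Test(nn.Module):
    def __init__(self):
        super(Test, self).__init__()
        # registered as a parameter, so it is moved/replicated with the module
        self._coeffs = nn.Parameter(1e-3 * torch.randn(3))
        self.convs = nn.ModuleList([nn.Conv2d(10, 10, 3) for _ in range(3)])

    def forward(self, x):
        out = 0
        for i in range(3):
            out = out + self._coeffs[i] * self.convs[i](x)
        return out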
st103575
I have seen two types of for loops that go through the dataloader.

for batch_idx, (data, target) in enumerate(train_loader):
    # train...

for data, target in train_loader:
    # train...

Is there a difference between those two? If so, what is it? Thanks
st103576
Solved by ptrblck in post #2 In the first one you’ll get the current index of the iteration saved in batch_idx. That’s what the enumerate op is for. It’s often used e.g. if you would like to print some training stats every N batches. Besides that, there is no difference.
st103577
In the first one you’ll get the current index of the iteration saved in batch_idx. That’s what the enumerate op is for. It’s often used e.g. if you would like to print some training stats every N batches. Besides that, there is no difference.
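(For illustration, a minimal sketch of the typical use of that index; the print interval of 100 is just an assumption.)

for batch_idx, (data, target) in enumerate(train_loader):
    # ... forward / backward / optimizer step ...
    if batch_idx % 100 == 0:  # print stats every 100 batches
        print('processed batch {}'.format(batch_idx))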
st103578
Hi, it seems like there is a difference in results when gradients are calculated for a LongTensor:

import torch
from torch.autograd import grad

print(torch.__version__)
x = torch.tensor([1, 2, 2], requires_grad=True)
l = torch.norm(x.float())
g = torch.autograd.grad(l, x)
print(g)

This prints (tensor([ 0, 0, 0]),), which is wrong. However, if I change the code slightly to

x = torch.tensor([1.0, 2, 2], requires_grad=True)  # Note 1.0 instead of 1
l = torch.norm(x.float())
g = torch.autograd.grad(l, x)
print(g)

it prints (tensor([ 0.3333, 0.6667, 0.6667]),), which is correct. Any idea what might be happening?
st103579
Tensors of integral types shouldn't require grad. This has been implemented as a hard constraint on master.
st103580
What is the reason for this restriction? Also, I think it might be better to throw an exception in this case as opposed to failing silently.
st103581
Yes, as I said, it throws a hard error on master now. Assuming you are doing gradient-descent-type optimization: since integral types are discrete, it's natural not to allow this.
st103582
I’ve written some code for text summarization based on the deep reinforced model for abstractive summarization. But when I call backward(), I get an error indicating that an in-place operation was executed on some variable that needs gradient computation. I’m wondering if there is a way to identify in-place operations in PyTorch. Right below is my code snippet from the forward function. I really appreciate it if someone can help me on this. Thanks so much.

# PyTorch device location
device = torch.device('cuda') if cuda and torch.cuda.is_available() else torch.device('cpu')

# For DataParallel with pack_pad and pad_packed
docs = docs[:,:doc_lens[0]].contiguous()
bsz, in_seq = docs.size()

# Convert OOV token to UNK
inputs = docs.clone()
input_mask = inputs.ge( self.vocab_size )
inputs.masked_fill_( input_mask, 3 ) # <UNK> token

# Document Word Embedding
# dembeds: bsz x T_e x emb_size
dembeds = self.embed( inputs )
dembeds = F.relu( self.dropout_embed( dembeds ) )

# Pack the embedding sequence
packed_dembeds = pack_padded_sequence( dembeds, doc_lens, batch_first=True )

# Bidirectional LSTM
# encoder_hiddens: used to initialize decoder
packed_ehiddens, encoder_hiddens = self.encoder( packed_dembeds )

# Unpack ehiddens
# ehiddens: bsz x T_e x (2*ehid_size)
ehiddens = pad_packed_sequence( packed_ehiddens, batch_first=True )[0]

# Decoder
_, target_length = sums.size()
output_mask = sums.ge( self.vocab_size )
sums.masked_fill_( output_mask, 3 )

# Target Summary Word Embedding
# sembeds: bsz x T_d x emb_size
sembeds = self.embed( sums )
sembeds = F.relu( self.dropout_embed( sembeds ) )

# Decoder start token
decoder_input_0 = sembeds[:,0:1,:] # SOS token

# Rewrap Encoder Hidden
decoder_hiddens_0 = [ torch.cat( torch.split( _, 1, dim=0 ), dim=-1 ) for _ in encoder_hiddens ]

# Mask for Encoder Attention
# en_mask: bsz x 1 x T_e
en_mask = docs.eq(0).unsqueeze(1)

# batch and token index for Copy Attention
batch_indices = torch.arange(0, bsz).long()
batch_indices = batch_indices.expand(in_seq, bsz).transpose(0,1).contiguous().view(-1)
idx_repeat = torch.arange(0, in_seq).repeat( bsz ).long()
word_indices = docs.view(-1) # word index in vocab
numbers = docs.view(-1).tolist()
set_numbers = list(set(numbers)) # all unique numbers
if 0 in set_numbers:
    set_numbers.remove(0)
c = Counter(numbers)
dup_list = [k for k in set_numbers if (c[k]>1)]

# Cache probs of all timesteps
p_y = []

# Initialize decoder input and hidden
decoder_input = decoder_input_0
decoder_hiddens = decoder_hiddens_0

# Decoder unidirectional LSTM
for t in range( 1, target_length+1 ):
    # h_dt: bsz x 1 x dhid_size
    # decoder_hiddens: h_t, c_t
    h_dt, decoder_hiddens = self.decoder( decoder_input, decoder_hiddens )

    # Intra-Temporal Attention
    # e_t: bsz x 1 x T_e
    e_t = torch.matmul( h_dt, self.We_attn )
    e_t = torch.bmm( e_t, ehiddens.transpose(1,2) )
    if t == 1:
        ep_t = torch.exp( e_t ) # bsz x 1 x T_e
        e = e_t
    else:
        ep_t = torch.exp( e_t ) / torch.sum( torch.exp( e ), dim=1, keepdim=True ) # bsz x 1 x T_e
        e = torch.cat( [e, e_t], dim=1 ) # bsz x t x T_e

    # Encoder Attention
    ep_t.masked_fill_( en_mask, 0 )
    en_alpha_t = ep_t / torch.sum( ep_t, dim=2, keepdim=True )

    # Encoder Context
    # en_context_t: bsz x 1 x (2*ehid_size)
    en_context_t = torch.bmm( en_alpha_t, ehiddens )

    # Decoder Context Vector
    if t == 1:
        de_context_t = torch.zeros( ( bsz, 1, self.dhid_size ), device=device )
        dhidden = h_dt
    else:
        # Intra-Decoder Attention
        # ed_t: bsz x 1 x t-1
        ed_t = torch.matmul( h_dt, self.Wd_attn )
        ed_t = torch.bmm( ed_t, dhidden.transpose(1,2) )
        de_alpha_t = F.softmax( ed_t, dim=2 )
        de_context_t = torch.bmm( de_alpha_t, dhidden )
        dhidden = torch.cat( [dhidden, h_dt], dim=1 )

    # Merged_context: bsz x 1 x (3*dhid_size)
    merged_context = torch.cat( [ h_dt, en_context_t, de_context_t ], dim=2 )

    # p(y_t|u=0)
    # p_yt_u0: bsz x 1 x vocab_size
    p_yt_u0 = F.softmax( self.out( self.dropout( merged_context ) ), dim=2 )
    oovs = torch.zeros( (bsz, 1, self.max_oov), device=device ) + 1.0/self.vocab_size # small epsilon to avoid zero prob
    p_yt_u0 = torch.cat( [p_yt_u0, oovs], dim=2 )

    # p(u=1)
    # p_u1: bsz x 1 x 1
    p_u1 = F.sigmoid( self.copy( self.dropout( merged_context ) ) )

    # Encoder Attention Distribution
    # p(y_t|u=1)
    attn = en_alpha_t.squeeze(1)
    masked_idx_sum = torch.zeros( (bsz, in_seq), device=device )
    dup_attn_sum = torch.zeros( (bsz, in_seq), device=device )
    for dup in dup_list:
        mask = docs.eq( dup ).float()
        masked_idx_sum += mask
        attn_mask = mask * attn
        attn_sum = attn_mask.sum( 1, keepdim=True )
        dup_attn_sum += mask * attn_sum
    attn = attn * (1-masked_idx_sum) + dup_attn_sum
    p_yt_u1 = torch.zeros( (bsz, self.vocab_size+self.max_oov), device=device )
    p_yt_u1[ batch_indices, word_indices ] += attn[ batch_indices, idx_repeat ]
    p_yt_u1 = p_yt_u1.unsqueeze(1) # bsz x 1 x 1

    # p(y_t): bsz x 1 x (vocab_size+max_oov)
    p_yt = p_u1 * p_yt_u1 + ( 1-p_u1 ) * p_yt_u0

    # Concatenate for Training
    p_y.append( p_yt )

    # Scheduled Sampling
    use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False
    if use_teacher_forcing:
        if t < target_length: # limit to feasible tokens
            decoder_input = sembeds[:, t:t+1, :]
    else:
        topv, topi = p_yt.topk( 1, dim=2 )
        next_token = topi.squeeze(1).detach()
        next_mask = next_token.ge( self.vocab_size )
        next_token.masked_fill_( next_mask, 3 ) # OOV -> UNK
        decoder_input = F.relu( self.dropout_embed( self.embed( next_token ) ) )

# log_p_y: bsz x (T_d-1) x vocab_size
p_y = torch.cat( p_y, dim=1 )
log_p_y = torch.log( p_y )
st103583
If you just want to find in-place operations, you can look for operations whose names end with an underscore _, for instance inputs.masked_fill_( input_mask, 3 ) in your case. More generally, you can consult the official docs: https://pytorch.org/docs/master/tensors.html to check whether a given operator modifies the tensor in place or returns a new one.
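To illustrate, here is a minimal, self-contained sketch (tensor names made up, not taken from your model) of why an in-place op can break the backward pass while the out-of-place variant is fine:

import torch

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)              # the backward of exp needs its own output y
y.masked_fill_(y > 1, 0)      # in-place: overwrites a value autograd saved
# y.sum().backward()          # -> RuntimeError: a variable needed for gradient
                              #    computation has been modified by an inplace operation

z = torch.exp(x)
z = z.masked_fill(z > 1, 0)   # out-of-place: returns a new tensor, graph stays intact
z.sum().backward()            # works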
st103584
Thanks for the pointer. I actually found out that it’s ep_t.masked_fill_( en_mask, 0 ) that causes this error. But I’m just wondering why such an operation would result in the error?
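For reference, the workaround I am considering is simply the out-of-place version, which returns a new tensor instead of modifying ep_t (not fully tested on my model yet):

ep_t = ep_t.masked_fill( en_mask, 0 )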
st103585
Hello, Currently I’m working on a char-rnn network with LSTM. Everything goes well until I try to compute the NLLLoss - this is where I’m a bit lost and confused. As an example, I have a batch of 2 with a sequence length of 5 and an embedding dimension of, say, 10 target characters: so my input shape before passing through the one-hot embedding layer is [5,2], i.e.

tensor([[ 10, 62],
        [ 49, 61],
        [ 34, 69],
        [ 42, 51],
        [  2,  2]])

After passing the mini-batch input into the embedding layer, my shape is now [5,2,10] and stays that way as I pass it on to the LSTM and softmax layers. When I go to compute the loss, it errors and says I have a size mismatch:

loss = F.nll_loss(pred, input)

Obviously, the sizes now are F.nll_loss([5,2,10], [5,2]). I read that NLLLoss does not want one-hot encoding for the target, only the indices of the categories. So this is the part where I don’t know how to structure the prediction and target for the NLLLoss to be calculated correctly. Thanks for the help and guidance, cheers!
st103586
Hi, F.nll_loss expects input and target to be 2-dimensional and 1-dimensional, respectively or (N, C, d_1, d_2, …, d_K) and (N, d_1, d_2, …, d_K) respectively source code 55. So, in your case, reshaping input tensor to (5 * 2, 10) and target tensor to (5 * 2) will make it work.
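For example, a rough sketch with made-up tensors matching the shapes above (assuming the predictions are log-probabilities, e.g. coming from F.log_softmax):

import torch
import torch.nn.functional as F

seq_len, batch_size, num_classes = 5, 2, 10
pred = F.log_softmax(torch.randn(seq_len, batch_size, num_classes), dim=2)
target = torch.randint(0, num_classes, (seq_len, batch_size), dtype=torch.long)

# flatten sequence and batch dims so input is (N, C) and target is (N,)
loss = F.nll_loss(pred.view(-1, num_classes), target.view(-1))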
st103587
Hi all, I compiled the whole PyTorch from source with GPU support and the console output showed a successful build, so I was able to get caffe2_pybind11_state.pyd and caffe2_pybind11_state_gpu.pyd. When I run the following command without GPU support, it succeeds:

python char_rnn.py --train_data shakespeare.txt

However, when I run it with the GPU I get a CUDA error:

python char_rnn.py --train_data shakespeare.txt --gpu

Below is my configuration:

PyTorch or Caffe2: Caffe2
How you installed PyTorch (conda, pip, source): github
OS: Windows 10
PyTorch version: current version
Python version: 2.7
CUDA/cuDNN version: 9.2/7.1
GPU models and configuration: NVIDIA GeForce GTX 1050
CMake version: cmake-3.12.0-rc2-win64-x64.msi
Versions of any other relevant libraries: Visual Studio 2017

Do I need to change from CUDA 9.2 to CUDA 8.0 in order to solve the issue? Do I also need to use Visual Studio 2015 instead? Thanks in advance!

Output from console:

D:\Yeverino\git_projects\pytorch\caffe2\python\examples>python char_rnn.py --train_data shakespeare.txt --gpu
[E D:\Yeverino\git_projects\pytorch\caffe2\core\init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E D:\Yeverino\git_projects\pytorch\caffe2\core\init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
[E D:\Yeverino\git_projects\pytorch\caffe2\core\init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
Input has 62 characters. Total input size: 99993
DEBUG:char_rnn:Start training
DEBUG:char_rnn:Training model
WARNING:caffe2.python.workspace:Original python traceback for operator 0 in network char_rnn_init in exception above (most recent call last):
WARNING:caffe2.python.workspace:  File "char_rnn.py", line 276, in
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\utils.py", line 329, in wrapper
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\utils.py", line 291, in run
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\utils.py", line 328, in func
WARNING:caffe2.python.workspace:  File "char_rnn.py", line 270, in main
WARNING:caffe2.python.workspace:  File "char_rnn.py", line 71, in CreateModel
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\rnn_cell.py", line 1571, in _LSTM
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\rnn_cell.py", line 93, in apply_over_sequence
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\rnn_cell.py", line 491, in prepare_input
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\brew.py", line 107, in scope_wrapper
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\helpers\fc.py", line 58, in fc
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\helpers\fc.py", line 37, in _FC_or_packed_FC
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\model_helper.py", line 214, in create_param
WARNING:caffe2.python.workspace:  File "D:\Yeverino\git_projects\pytorch\build\caffe2\python\modeling\initializers.py", line 30, in create_param
Entering interactive debugger. Type "bt" to print the full stacktrace. Type "help" to see command listing.
[enforce fail at context_gpu.h:171] . Encountered CUDA error: no kernel image is available for execution on the device Error from operator:
output: "LSTM/i2h_w" name: "" type: "XavierFill" arg { name: "shape" ints: 400 ints: 62 } device_option { device_type: 1 cuda_gpu_id: 0 }
d:\yeverino\git_projects\pytorch\build\caffe2\python\workspace.py(178)CallWithExceptionIntercept()
-> return func(*args, **kwargs)
(Pdb)
st103588
Hello, I am currently implementing my own CUDA extension for PyTorch. Now I am wondering whether I should avoid manually allocating memory on the GPU (e.g., with cudaMalloc) and instead allocate tensors through ATen, so that PyTorch’s memory manager can handle the allocations efficiently. I would also appreciate pointers to where I can read up on PyTorch internals such as the memory management! Best regards, Tim
st103589
Hey guys, for educational purposes I’m trying to write a deep learning actor-critic model for the simplest problem I could find: the multi-armed bandit. For simplicity’s sake, I’m starting out with one machine that returns 0 or 1 with equal probability (and I will slowly make the problem more complex once I understand the stuff). So I have a working model where the critic correctly converges to a value of 0.5 for the state of playing the game once. But I’m sure I’m doing several unnecessary steps in the training part:

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = torch.nn.Linear(1, 64)
        self.l2 = torch.nn.Linear(64, 64)
        self.actor = torch.nn.Linear(64, 3)
        self.critic = torch.nn.Linear(64, 1)
        self.memory = []
        self.GAMMA = 0.8

    def forward(self, obs):
        hl1 = self.l1(obs)
        hl1 = F.relu(hl1)
        hl2 = self.l2(hl1)
        hl2 = F.relu(hl2)
        #print('hl2', hl2)
        critic_out = self.critic(hl2)
        actor_out = self.actor(hl2)
        return actor_out, critic_out

    def get_pred_withgrad(self, obs):
        return self.forward(obs)

    def get_pred_nograd(self, obs):
        with torch.no_grad():
            return self.forward(obs)

    def game(self):
        action = 0
        reward = bandit.step(action)
        #memory.append
        self.train(action, reward)

    def train(self, action, reward):
        obs = torch.tensor([0], dtype=torch.float)
        _, bad_critic = self.forward(obs)
        bad_critic_value = bad_critic.detach().numpy()[0]
        print('Value estimation: ', bad_critic_value)
        _, better_critic = self.get_pred_nograd(obs)  # this would be new_obs if we actually have different states
        delta = reward - bad_critic_value  # + self.GAMMA * better_critic.numpy()[0] not included for now since we don't have a next state.
        print('Delta: ', delta)
        better_critic[0] += delta
        loss_fn = torch.nn.MSELoss(size_average=False)
        optimizer = torch.optim.SGD(self.parameters(), lr=0.0001)
        optimizer.zero_grad()
        loss = loss_fn(bad_critic, better_critic)
        loss.backward()
        optimizer.step()

So basically, my question is this: the delta I calculate IS the loss. How can I use it directly to backpropagate? Because what I’m doing works but is probably ridiculous. By the way: I’m aware that the whole state thing is unnecessary for now, but that will make more sense once I expand the problem… Thanks for your help! PS: Not sure why the first block is not shown the same way as the rest…
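Just to make the question concrete, here is roughly what I imagine the “direct” version might look like (untested, only a sketch):

# inside train(), instead of copying delta into a fake target:
_, value = self.forward(obs)                        # keep the graph, no detach
target = torch.tensor([reward], dtype=torch.float)
loss = (target - value).pow(2).mean()               # the squared TD error is the critic loss
optimizer = torch.optim.SGD(self.parameters(), lr=0.0001)
optimizer.zero_grad()
loss.backward()
optimizer.step()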
st103590
I have an RNN model. I call something like self.lstm.flatten_parameters() with DataParallel on multiple GPUs in order to eliminate this user warning:

UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().

However, I now get this error:

RuntimeError: torch/csrc/autograd/variable.cpp:115: get_grad_fn: Assertion `output_nr == 0` failed.

Any ideas on this? Is this a bug in PyTorch?
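For context, this is roughly where I call it (model and sizes simplified/made up):

import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.lstm = nn.LSTM(128, 256, batch_first=True)

    def forward(self, x):
        # called inside forward so it runs on every replica created by DataParallel
        self.lstm.flatten_parameters()
        out, _ = self.lstm(x)
        return out

model = nn.DataParallel(Model()).cuda()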
st103591
I managed to fix it, but got a new error with get_grad_fn (details in the question). Do you have any idea? Thanks!
st103592
Hello, I was checking out question answering using a Dynamic Memory Network (DMN) on the bAbI dataset from this source: 10.Dynamic-Memory-Network-for-Question-Answering.ipynb
I modified it a bit so that I can save the model and later run the prediction/inference separately (src: dmn_babi.ipynb), and I saved my model as ‘dmn_qa’. The inference results are correct when I run everything inside the IPython notebook. Later I split the code into 2 separate files:
model.py - contains the DMN model (src: model.py)
prediction.py - contains the loading of the model and the inference part of the code (src: predict.py)
However, when I run the prediction part now, the output is not correct. Please help me figure out where I am going wrong here. Best Regards
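For reference, this is the save/load pattern I believe I am following (shown here with a tiny stand-in module instead of the real DMN, and an example file name):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                        # stand-in for the trained DMN
torch.save(model.state_dict(), 'dmn_qa.pt')    # after training

# later, in prediction.py: rebuild the model with the same arguments, then load
model2 = nn.Linear(4, 2)
model2.load_state_dict(torch.load('dmn_qa.pt'))
model2.eval()                                  # disable dropout etc. before inference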
st103593
My dataset contains images of various sizes; most are 1024×768, 800×536, etc. I find that the first epoch takes quite a long time, about 10x longer than the other epochs, but from the second epoch on the training time per epoch is stable. If I set cudnn.benchmark=False, training time goes up by about 20% in my case. I wonder whether this speed gain is achieved at the cost of some performance degradation, given that my training images vary in size.
st103594
transform = transforms.Compose([
    transforms.Resize(64),
    transforms.RandomRotation(360, resample=Image.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5))
])

After I call RandomRotation I get an image like the attached one, with black-filled corners. It’s screwing up the GAN I am trying to train. What’s the easiest way to fill the corners white?
st103595
Some implementations fill with the nearest pixels, but that may be hard to implement. Alternatively, you can first increase the image size (e.g., by using the nearest pixels to fill in the blank areas; this is easy because it is just shifting in axis-aligned directions), and then rotate.
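If white corners are acceptable, another option is to do the rotation yourself in a Lambda transform and let PIL fill the corners, assuming your Pillow version supports the fillcolor argument of Image.rotate (added in newer Pillow releases):

import random
from PIL import Image
from torchvision import transforms

def rotate_white(img):
    angle = random.uniform(0, 360)
    return img.rotate(angle, resample=Image.BICUBIC, fillcolor=(255, 255, 255))

transform = transforms.Compose([
    transforms.Resize(64),
    transforms.Lambda(rotate_white),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])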
st103596
So let me get this straight: I increase the white border size but not the content, so when I rotate/crop I still get to keep the corners? If so, how do I increase the size? Is there a direct function I can call in PyTorch?
st103597
Ideally you would want to do that in dataset.__getitem__, where you can operate on np arrays and/or PIL images. There are plenty of useful functions there.
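A rough sketch of what that could look like (dataset details and sizes are made up):

import random
from PIL import Image, ImageOps
from torch.utils.data import Dataset
from torchvision import transforms

class RotatedImages(Dataset):
    def __init__(self, paths):
        self.paths = paths
        self.to_tensor = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert('RGB')
        img = img.resize((64, 64), Image.BICUBIC)
        # pad with white first so the rotation has content to rotate into, then crop back
        img = ImageOps.expand(img, border=32, fill=(255, 255, 255))
        img = img.rotate(random.uniform(0, 360), resample=Image.BICUBIC)
        left = (img.size[0] - 64) // 2
        img = img.crop((left, left, left + 64, left + 64))
        return self.to_tensor(img)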
st103598
I was looking at the tutorial and someone’s code on GitHub (https://github.com/bearpaw/pytorch-classification/blob/master/cifar.py) and the two preprocessing pipelines use different means and stds. Which one is the standard or better choice? I’m confused about the advantages and disadvantages of one vs. the other, and what the difference really is. What do the numbers mean? Inspired by the tutorial:

transform = []

''' converts (HxWxC) in range [0,255] to [0.0,1.0] '''
to_tensor = transforms.ToTensor()
transform.append(to_tensor)

''' Given means (M1,...,Mn) and std: (S1,..,Sn) for n channels,
input[channel] = (input[channel] - mean[channel]) / std[channel] '''
if standardize:
    gaussian_normalize = transforms.Normalize(
        (0.5, 0.5, 0.5), (0.5, 0.5, 0.5)
    )
    transform.append(gaussian_normalize)

''' transform them to Tensors of normalized range [-1, 1]. '''
transform = transforms.Compose(transform)

vs the one from the GitHub link:

def this_guys_preprocessor():
    transform_train = transforms.Compose([
        transforms.RandomCrop(32, padding=4),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ])
    transform_test = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
    ])
    return transform_train, transform_test
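If it helps, those per-channel numbers can be computed from the training set itself. A rough sketch (with the torchvision version I have, the raw CIFAR-10 training images are exposed as trainset.train_data):

import torchvision

trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True)
data = trainset.train_data / 255.0     # numpy array of shape (50000, 32, 32, 3), scaled to [0, 1]
print(data.mean(axis=(0, 1, 2)))       # per-channel means, close to (0.4914, 0.4822, 0.4465)
print(data.std(axis=(0, 1, 2)))        # per-channel stds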
st103599
For the convolution layer (https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py) the implementation is:

def reset_parameters(self):
    n = self.in_channels
    for k in self.kernel_size:
        n *= k
    stdv = 1. / math.sqrt(n)
    self.weight.data.uniform_(-stdv, stdv)
    if self.bias is not None:
        self.bias.data.uniform_(-stdv, stdv)

which I take to mean n = nb_chan * k_1 * k_2. However, why isn’t it n = nb_chan + k_1 + k_2? What is wrong with the sum? My question is based on the fact that for Linear, n seems to be the total number of “in units”:

def __init__(self, in_features, out_features, bias=True):
    super(Linear, self).__init__()
    self.in_features = in_features
    self.out_features = out_features
    self.weight = Parameter(torch.Tensor(out_features, in_features))
    if bias:
        self.bias = Parameter(torch.Tensor(out_features))
    else:
        self.register_parameter('bias', None)
    self.reset_parameters()

but my notion of “total” seems to be captured better by sums than by products…
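To make my question concrete, here is what the product gives for a small conv layer (just my own sanity check):

import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)
print(conv.weight.shape)   # torch.Size([64, 3, 3, 3])
# each output unit is connected to 3 * 3 * 3 = 27 input values (the product),
# whereas the sum 3 + 3 + 3 = 9 doesn't correspond to any count I can identify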