st97968
Sorry for not making it clear. I mean that the in_channels of conv2 is currently set to 6, although conv1 returns 32 channels, so your model is currently not working. Try the following and you’ll get an error for conv2:

model = LeNet()
x = torch.randn(1, 1, 96, 96)
output = model(x)
> RuntimeError: Given groups=1, weight of size [64, 6, 2, 2], expected input[1, 32, 47, 47] to have 6 channels, but got 32 channels instead

Also, after fixing this issue, you’ll get another size mismatch error for your fc1 layer, as the shape of x right before flattening is [1, 128, 11, 11], so fc1 should take in_features=128*11*11. After fixing this, you’ll get yet another error because you are reusing fc3. Removing the x = F.relu(self.fc3(x)) line finally makes your model run. I’m curious why the model seems to be working for you.
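For reference, a minimal sketch of the three fixes described above; the layer names and the fc1 output size are assumptions, since the full model definition isn’t shown here:

# hypothetical fixes, assuming the layer names used in the thread
self.conv2 = nn.Conv2d(32, 64, kernel_size=2)   # in_channels must match the 32 channels conv1 outputs
self.fc1 = nn.Linear(128 * 11 * 11, 500)        # match the flattened activation shape; 500 is a placeholder
# and in forward(): remove the duplicate x = F.relu(self.fc3(x)) so fc3 is only applied once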
st97969
Thanks a lot, now I understand how to calculate the number of parameters for the next conv layer. I also tried to implement a dataset class:

from torch.utils.data.dataset import Dataset

class MyCustomDataset(Dataset):
    def __init__(self, x, y, dtype):
        self.x = torch.tensor(torch.from_numpy(x), dtype=dtype)
        self.y = torch.tensor(torch.from_numpy(y), dtype=dtype)
        self.dtype = dtype
        self.data_len = len(x)

    def __getitem__(self, index):
        # stuff
        img = self.x[index]
        label = self.y[index]
        return (img, label)

    def __len__(self):
        return self.data_len

Can I implement data augmentation inside this dataset class for each batch of 128, like the FlipBatchIterator class in the tutorial that augments 50% of the data? I also have a doubt about how he calculates the training and validation loss: is he computing the loss over each batch and then averaging it over one epoch, or is he just summing up the batch losses?
st97970
Sure, there are several ways to implement data augmentation. One way would be to use torchvision.transforms on images. As these transformations often work only on images, we would need to transform the numpy arrays into images first, augment the data, and finally transform them to tensors. Another way would be to just implement these transformations by ourselves, as we already have the tensor data. Anyway, in both cases we would have to take care of the targets as well, since the keypoints would have to be e.g. flipped accordingly to the image. In the first case, we could use torchvision’s functional API. Here 1 is a small example I’ve written a while ago. You could just reimplement the blog post’s data augmentation as the flip indices etc. is already provided: class MyCustomDataset(Dataset): def __init__(self, x,y,dtype): self.x=torch.from_numpy(x).to(dtype=dtype).clone() self.y=torch.from_numpy(y).to(dtype=dtype).clone() self.dtype=dtype self.data_len=len(x) self.flip_indices = [ (0, 2), (1, 3), (4, 8), (5, 9), (6, 10), (7, 11), (12, 16), (13, 17), (14, 18), (15, 19), (22, 24), (23, 25), ] def __getitem__(self, index): # stuff img=self.x[index].clone() label=self.y[index].clone() # Transform every second sample if random.randint(0, 1) == 1: print('Flipping image') img = img.flip(2) label[::2] = label[::2] * -1 for a, b in self.flip_indices: label[a], label[b] = label[b].clone(), label[a].clone() return (img, label) def __len__(self): return self.data_len You have to add some clone() calls, as otherwise the original data will be modified or the label swap won’t work. Using the Dataset you just have to handle a single sample. The DataLoader will automatically create batches using the Dataset. I’m not sure, how Lasagne calculates the losses, but I assume in both cases the mean loss of all batches is averaged over the epoch.
st97971
Is it right to define optimizer at every epoch for varying learning rate and momentum?

lr_arr = np.linspace(0.03, 0.0001, epochs)
momentum_arr = np.linspace(0.9, 0.999, epochs)

for epoch in range(1, epochs + 1):
    optimizer = torch.optim.SGD(model_two.parameters(), lr=lr_arr[epoch-1],
                                momentum=momentum_arr[epoch-1], nesterov=True)
    train_loss_net2.append(train(model_two, device, train_loader, optimizer, criterion, epoch))
    test_loss_net2.append(test(model_two, device, criterion, test_loader))

end = datetime.datetime.now()
print(end-start)
st97972
For optim.SGD this should work. However, if you use another optimizer with internal states, e.g. running estimates, the recreation will clear out these buffers and you will most likely see a spiking loss curve. In that case I would recommend using optimizer.param_groups to manipulate the internal values. Also, have a look at optim.lr_scheduler. These schedulers allow easy manipulation of the learning rate using different methods.
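As a minimal sketch of the scheduler approach (assuming a plain SGD optimizer; the step size and gamma are just placeholders):

optimizer = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.9, nesterov=True)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(epochs):
    scheduler.step()          # updates the learning rate in optimizer.param_groups
    train(...)
    validate(...)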
st97973
Thanks a lot. Is this implementation correct?

lambda1 = lambda epoch: 0.03 - epoch * (0.03 - 0.0001) / epochs
optimizer = torch.optim.SGD(model_three.parameters(), nesterov=True)
scheduler = LambdaLR(optimizer, lr_lambda=lambda1)
for epoch in range(100):
    scheduler.step()
    train(...)
    validate(...)

How can I vary momentum in this scheduler? I have to vary both the learning rate and the momentum.
st97974
The implementation won’t work, as LambdaLR uses a multiplicative factor to manipulate the learning rate. Your optimizer is also missing the lr argument. In your use case, I think the easiest approach would be to use your initial lr_arr and momentum_arr and to manipulate optimizer.param_groups directly.
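For illustration, a minimal sketch of that approach (reusing the lr_arr and momentum_arr from the earlier post):

optimizer = torch.optim.SGD(model_three.parameters(), lr=lr_arr[0],
                            momentum=momentum_arr[0], nesterov=True)

for epoch in range(1, epochs + 1):
    for param_group in optimizer.param_groups:
        param_group['lr'] = lr_arr[epoch - 1]
        param_group['momentum'] = momentum_arr[epoch - 1]
    train(...)
    validate(...)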
st97975
Here is the code.

class EncoderBiLSTM(nn.Module):
    def __init__(self, hidden_size, pretrained_embeddings):
        super().__init__()
        self.hidden_size = hidden_size
        self.embedding_dim = pretrained_embeddings.shape[1]
        self.vocab_size = pretrained_embeddings.shape[0]
        self.num_layers = 2
        self.dropout = 0.1 if self.num_layers > 1 else 0
        self.bidirectional = True
        self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)
        self.embedding.weight.data.copy_(torch.from_numpy(pretrained_embeddings))
        self.embedding.weight.requires_grad = False  # freeze, no training needed
        self.lstm = nn.LSTM(self.embedding_dim, self.hidden_size, batch_first=True,
                            dropout=self.dropout, bidirectional=self.bidirectional)
        identity_init = torch.eye(self.hidden_size)
        '''
        The next lines raise:
        AttributeError: 'LSTM' object has no attribute 'weight_hh_l1'
        '''
        self.lstm.weight_hh_l0.data.copy_(torch.cat([identity_init]*4, dim=0))
        self.lstm.weight_hh_l1.data.copy_(torch.cat([identity_init]*4, dim=0))
        self.lstm.weight_hh_l0_reverse.data.copy_(torch.cat([identity_init]*4, dim=0))
        self.lstm.weight_hh_l1_reverse.data.copy_(torch.cat([identity_init]*4, dim=0))
        ...

lstm = nn.LSTM(...)
print(lstm.state_dict())

I have printed the LSTM object’s state_dict(), and ‘weight_hh_l0’ is definitely there. Could anyone help me?
st97976
Hi everyone, I have a basic LSTM encoder, which encode some texts. With the hidden representations, I’m doing several stuffs in another module. Until there, nothing special. Later, I need negative samples with their hidden states that will be needed in the another module to compute the HingeLoss (https://en.wikipedia.org/wiki/Hinge_loss). However, I don’t want the encoder updates its weights twice (or should I ?). In pseudo code, I have something like: batch_texts = ... batch_texts_hidden = myEncoder(batch_texts) batch_neg = ... with torch.no_grad(): batch_neg_hidden = myEncoder(batch_neg) loss = myHinge(batch_texts_hidden, batch_neg_hidden) In this manner, 1) is the gradient computed for anotherModule and once for myEncoder ? (only for batch_texts). 2) Do you think I should compute the gradient twice in the Encoder ? Thank you for your answers
st97977
class myModel(nn.Module):
    def __init__(self):
        super(myModel, self).__init__()
        # self.keep_prob = 0.9
        # self.block_size = 7
        self.block1 = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1, bias=False),
            nn.GroupNorm(4, 32, affine=False),
            # nn.BatchNorm2d(32, affine=False),
            nn.ReLU())
        self.block2 = nn.Sequential(...)
        self.block3 = nn.Sequential(...)
        self.block4 = nn.Sequential(...)
        self.block5 = nn.Sequential(...)
        self.block6 = nn.Sequential(...)
        self.block7 = nn.Sequential(...)
        self.block1.apply(weights_init)
        self.block2.apply(weights_init)
        self.block3.apply(weights_init)
        self.block4.apply(weights_init)
        self.block5.apply(weights_init)
        self.block6.apply(weights_init)
        self.block7.apply(weights_init)
        return

    def forward(self, input, keep_prob=1.0):
        x = self.block1(self.input_norm(input))
        # x = DropBlock2D(keep_prob)(x)
        x = self.block2(x)
        # x = DropBlock2D(keep_prob)(x)
        x = self.block3(x)
        # x = DropBlock2D(keep_prob)(x)
        x = self.block4(x)
        x = self.block5(x)
        # x = DropBlock2D(keep_prob)(x)
        x = self.block6(x)
        x = DropBlock2D(keep_prob=keep_prob)(x)
        x = self.block7(x)
        # x_features = self.features(input)
        x = x.view(x.size(0), -1)
        return L2Norm()(x)

writer = SummaryWriter(log_dir=args.tensorboardx_log, comment='Hardnet')
dummy_input = torch.zeros((1, 1, 32, 32))
temp_input = Variable(dummy_input)
dummy_keep_prob = Variable(torch.tensor([1]))
writer.add_graph(model, (temp_input, dummy_keep_prob))

When I use tensorboardX to save the model graph, the submodule DropBlock2D does not appear in the graph. Where am I going wrong? Thanks in advance.
st97978
My environment details are: Windows 10, CUDA 10, GPU compute capability 3 (Nvidia Quadro K1000M), Anaconda 3. For running the FASTAI tutorials, I built PyTorch from source, and while running lesson 1 I am getting the following error:

cuda runtime error (48) : no kernel image is available for execution on the device at c:\anaconda2\conda-bld\pytorch_1519501749874\work\torch\lib\thc\generic/THCTensorMath.cu:15

The error refers to a path which is not even there on my system. What should I do?
st97979
GPU CC 3 is not supported anymore. There are three options:
1. Use legacy PyTorch packages (0.3.1) – not recommended.
2. Compile it yourself.
3. Change the GPU.
st97980
Please specify the CUDA CC and try again. See https://github.com/pytorch/pytorch/issues/9242#issuecomment-403440134 21 for details.
st97981
set TORCH_CUDA_ARCH_LIST=3.0 worked for me. Thanks for pointing it out. When I ran the FASTAI lesson 1, it gave a CUDA out of memory error. After that I specified the bs param and restarted the kernel. Now I am able to run the code.
st97982
PyTorch has code generation tools like tools/autograd/gen_autograd.py and aten/src/ATen.py. To look around the generated code, it seems I have to fully compile it, but that takes time. Is there any command to only generate the code? (It does not need to compile.) I just tried to execute the following command on the master branch, but it failed:

python3 setup.py build_ext
st97983
I’m now trying to understand how nn.DataParallel uses multiple GPUs. As far as I understand, whenever I call the forward function of a module wrapped by DataParallel, it will:
1. Split the inputs to multiple GPUs with the scatter function
2. Replicate the original module to multiple GPUs
3. Call forward of each replica with the corresponding (split) inputs
4. Gather the outputs from each replica and return them

However, I couldn’t find how DataParallel guarantees that the sub-modules of each replica lie on a certain CUDA device. In the case of the parameters and buffers of the original module, it’s guaranteed that they are copied to every usable GPU (https://github.com/pytorch/pytorch/blob/0988bbad2de5e0ce403c5e6f781437b24a484fc2/torch/nn/parallel/replicate.py#L12, https://github.com/pytorch/pytorch/blob/0988bbad2de5e0ce403c5e6f781437b24a484fc2/torch/nn/parallel/replicate.py#L19), but I can’t find such code for the sub-modules of the original module. Even though the parallel_apply function runs the forward function of each replica within a torch.cuda.device(device of the given inputs) context, I think that only affects where new tensors are located, not the replica. Could you tell me how each replica is broadcast to each GPU?
st97984
Solved by wwiiiii in post #2 Moving a module to device x is the same as moving all of its parameters and buffers to that device. So the two lines mentioned in the question are sufficient to guarantee that each module is replicated to a different GPU.
st97985
Moving a module to device x is the same as moving all of its parameters and buffers to that device. So the two lines mentioned in the question are sufficient to guarantee that each module is replicated to a different GPU.
st97986
wwiiiii: Even though the parallel_apply function runs the forward function of each replica within a torch.cuda.device(device of the given inputs) context, I think that only affects where new tensors are located, not the replica. That’s not true. When we don’t specify devices when constructing the nn.DataParallel instance (which is the common case), devices are set as a list of None and then become a list of the current device for all replicated modules. But it doesn’t matter, since each i-th (module, inputs) pair is already located on GPU #i. So the result of the calculation is also going to be located on the i-th GPU, which makes with torch.cuda.device(device) meaningless. And that’s why we need the gather after the parallel_apply. Please let me know if there’s any wrong point.
st97987
I surprisingly found that I have both PyTorch 0.4.1 and the PyTorch 1.0 preview in my Anaconda installation. I thought PyTorch 1.0 had overwritten the 0.4.1 version, but it didn’t. Inspired by toggling between Python 2.x and Python 3.x in PyCharm, I’m wondering if I could toggle between different versions of PyTorch just like switching Python. If that works, I can enjoy the new features and at the same time learn from others’ code written in previous versions. Can anyone give me an answer? Thanks!
st97988
Hi, I believe that if PyTorch for py2 is in the stable or pre-release channels, you will be able to install it in your py2 environment. If not, you can try to find it on another channel to see if somebody has built it already, or you can just learn how to build your own.
st97989
I got a runtime error like:

RuntimeError: the derivative for pow is not implemented

I am wondering why the derivative for pow is not implemented; it seems as easy as sin().
st97990
Which PyTorch version are you using? You can check it with print(torch.__version__). It’s working on my machine, so if you are using the latest stable release of the 1.0 preview, could you please post a code snippet reproducing this error?
st97991
The PyTorch version is 0.4.1. The pow function is used as follows (para is a Parameter object with size [out, int, h] from my custom net layer):

band = para[:,:,2].reshape(num,1,1).repeat(1,h,w)
ratio = torch.tensor(np.sqrt(np.log(2)/2)/np.pi) * (torch.pow(2,band)+1)/(torch.pow(2,band)-1)

When loss.backward() is executed it raises:

RuntimeError: the derivative for pow is not implemented

Besides, I’m testing the code on Windows. Could that be the reason?
st97992
pow with a scalar basis was fixed in master about a month ago, so it isn’t in 0.4.1, but it is available in more recent builds on Windows, too. Best regards Thomas
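If upgrading isn’t an option, a possible workaround on 0.4.1 is to avoid pow with a scalar base via the identity 2**band = exp(band * ln 2) (an untested sketch based on the snippet above):

import math
two_pow_band = torch.exp(band * math.log(2.0))   # same value as 2 ** band, but differentiable on 0.4.1
ratio = torch.tensor(np.sqrt(np.log(2) / 2) / np.pi) * (two_pow_band + 1) / (two_pow_band - 1)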
st97993
I have a training set with 43 variables and 7471 observations. The target has 6 outputs for each. The input, denoted by X, has as shape of (7471, 43), and the output, denoted by y , has a shape of (7471, 6). I want to implement a supervised regression model. Todo so a build a neural network based on the tutorial here 7. All I did is change the input shape, denoted by D_in, the shape of the hidden layer, denoted by H, and the output shape, denoted by D_out. In the end my code looks like this class DynamicNet(torch.nn.Module): def __init__(self, D_in, H, D_out): """ In the constructor we construct three nn.Linear instances that we will use in the forward pass. """ super(DynamicNet, self).__init__() self.input_linear = torch.nn.Linear(D_in[1], H) self.middle_linear = torch.nn.Linear(H, H) self.output_linear = torch.nn.Linear(H, D_out[1]) def forward(self, x): """ For the forward pass of the model, we randomly choose either 0, 1, 2, or 3 and reuse the middle_linear Module that many times to compute hidden layer representations. Since each forward pass builds a dynamic computation graph, we can use normal Python control-flow operators like loops or conditional statements when defining the forward pass of the model. Here we also see that it is perfectly safe to reuse the same Module many times when defining a computational graph. This is a big improvement from Lua Torch, where each Module could be used only once. """ h_relu = self.input_linear(x).clamp(min=0) for _ in range(random.randint(0, 3)): h_relu = self.middle_linear(h_relu).clamp(min=0) y_pred = self.output_linear(h_relu) return y_pred # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. D_in = input_x_train.shape # (7471, 43) H, D_out = 512, input_y_train.shape # (7471, 6) # Construct our model by instantiating the class defined above model = DynamicNet(D_in, H, D_out) # Construct our loss function and an Optimizer. Training this strange model with # vanilla stochastic gradient descent is tough, so we use momentum criterion = torch.nn.MSELoss(reduction='sum') optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9) losses = [] record_loss = losses.append x = torch.from_numpy(input_x_train).float() y = torch.from_numpy(input_y_train).float() for t in range(500): # Forward pass: Compute predicted y by passing x to the model y_pred = model(x) # Compute and print loss loss = criterion(y_pred, y) record_loss(loss.item()) print(t, loss.item()) # Zero gradients, perform a backward pass, and update the weights. optimizer.zero_grad() loss.backward() optimizer.step() However, the result i get while training was something like this 0 7943298220032.0 1 nan 2 nan 3 nan 4 nan 5 nan 6 nan 7 nan 8 nan 9 nan 10 nan 11 nan 12 nan 13 nan 14 nan 15 nan 16 nan I’m not really sure if there was something wrong with my implementation. Does anybody knows what I am missing? Thanks in advance.
st97994
I think your loss might grow pretty quickly, as you are summing the sample losses over a big batch of 7471 samples. Try to use reduction='elementwise_mean' or maybe a smaller batch size and see if you still get nan values.
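As an illustration (a small sketch; reduction='elementwise_mean' is the spelling used in the 0.4.x/1.0 releases, later renamed to reduction='mean'):

criterion = torch.nn.MSELoss(reduction='elementwise_mean')  # average instead of summing over all 7471 samples
loss = criterion(y_pred, y)  # roughly 7471x smaller, so the SGD step is much less likely to blow up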
st97995
Hi everyone, I am a little bit confused on how I could use torch.gather() or torch.index_select() to retrieve some specific values from a 5D tensor using a 2D tensor of indices I obtained via torch.nonzero(). Say I have two 5d tensors A and B, B being extremely sparse (95% of zeros): >> A.size() torch.Size([20, 20, 64, 64, 64]) >>B.size() torch.Size([20, 20, 64, 64, 64]) If I call nonzero() on B, I get a 2D tensor with for each row the 5 indices of a nonzero value in B: >> ind = B.nonzero() >> print(ind) tensor([[ 0, 1, 28, 32, 34], [ 0, 1, 32, 35, 39], [ 0, 2, 37, 26, 45], ..., [19, 17, 20, 39, 37], [19, 18, 27, 38, 31], [19, 19, 23, 38, 36]] ) How can I get the corresponding values in A using ind ?
st97996
OK, I found a way to circumvent this problem using torch.masked_select. Thanks!
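For reference, two ways to pull the corresponding values out of A (a small sketch based on the shapes above):

ind = B.nonzero()                                            # [num_nonzero, 5]
vals = A[ind[:, 0], ind[:, 1], ind[:, 2], ind[:, 3], ind[:, 4]]
# or, using the mask directly, which is what masked_select does:
vals = torch.masked_select(A, B != 0)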
st97997
Attempting to install dependencies for fastai using conda. Both the pytorch and torchvision nightly-cpu builds appear unavailable. Who is the best team to contact to ask for a re-publish? Edit: Python 3.7 environment.
st97998
these nightlies have never been available for Windows. We are making progress on them for Windows, see here: https://github.com/pytorch/pytorch/issues/13227#issuecomment-436520690 23
st97999
Error message: Traceback (most recent call last): File "reproduce.py", line 7, in <module> @torch.jit.script File "/home/sidney/anaconda3/envs/py27/lib/python2.7/site-packages/torch/jit/__init__.py", line 616, in script graph = _jit_script_compile(ast, _rcb) File "/home/sidney/anaconda3/envs/py27/lib/python2.7/site-packages/torch/jit/__init__.py", line 597, in _try_compile_weak_script entry = _compiled_weak_fns.get(fn) File "/home/sidney/anaconda3/envs/py27/lib/python2.7/weakref.py", line 391, in get return self.data.get(ref(key),default) TypeError: cannot create weak reference to 'builtin_function_or_method' object Files: // preprocess.cc #include <pybind11/pybind11.h> #include <string> #include <iostream> int convert_str_to_tensor(std::string str) { std::cout << str << std::endl; return 0; } PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { m.def("convert_str_to_tensor", &convert_str_to_tensor, "LSTM preprocess"); } // lstm.py import torch from torch.utils.cpp_extension import load preprocess = load(name="preprocess", sources=["preprocess.cc"]) @torch.jit.script def lstm(cells): # type: (List[torch.Tensor]) -> torch.Tensor preprocess.convert_str_to_tensor("sidney") hidden = torch.ones([10, 10]) for i in range(len(cells)): hidden = hidden.mm(cells[i]) return hidden print lstm([ torch.ones([10, 10]), torch.ones([10, 10]), torch.ones([10, 10]) ])
st98000
You need custom operators for that. A hint of how they work can be had from the tests: GitHub pytorch/pytorch 12 Tensors and Dynamic neural networks in Python with strong GPU acceleration - pytorch/pytorch Best regards Thomas
st98001
Hi all, I am hoping to confirm that what I did for data processing and visualizing images makes sense. I am working on a binary classification problem where the images are (480,640,3)-sized depth images of blankets on a table-like surface. I have the following two gists, which one should be able to run in the same directory if I have set things up correctly:

build_data.py: https://gist.github.com/DanielTakeshi/c2a5ddad85dc3c938c9c61441e769db4
train.py: https://gist.github.com/DanielTakeshi/bbaf432347aafa2e9878e93fd6982fd7

The first one has the data there (see bottom post) and loads it for ImageFolder. It also computes the mean and standard deviation, by putting all the numbers.extend( d_img[:,:,0].flatten() ) stuff into a numbers list and then taking the mean and standard deviation. The mean turns out to be 93 and the standard deviation is 84. It’s high because I have lots of 0s and lots of brighter values.

First question: is this a correct way of computing the per-channel mean? The depth images are replicated across 3 channels, so the values would be the same across all channels. I imagine there is a more efficient way to do this, though, perhaps dynamically computing the standard deviation somehow? And also, I see in the ImageNet examples that the mean values are within [0,1], so I am not sure if the values here should be scaled as well …

Next, I went ahead to train the model (see second gist). I put this at the top:

MEAN = [93.8304761096, 93.8304761096, 93.8304761096]
STD = [84.9985507432, 84.9985507432, 84.9985507432]

because, again, the data is replicated across three channels. Here are the data transforms:

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(MEAN, STD)
    ]),
    'valid': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(MEAN, STD)
    ]),
}

I used a pre-trained ResNet-18 model. I went into the training loop and took the first minibatch. Then I saved all the images. It took me a long time to figure out the correct way to get the images back to what I wanted: it’s in _save_images in the second gist. This will save into a directory and I see depth images that make sense, and which have been cropped correctly as you can see later in the second gist. I wanted to visualize the transforms. What is confusing is that I needed to do this snippet (see the gist for details):

img = img.transpose((1,2,0))
img = img*STD + MEAN
img = img*255.0
img = img.astype(int)

i.e. transpose to get it into (224,224,3), then undo STD and MEAN, and, which is really weird, then multiply by 255. I assume this undoes the scaling that the ToTensor() transform does?
Second set of question(s): does the data transformation that I used above make sense (MEAN and STD computed on the domain data of interest), and can the ToTensor() method be undone by multiplying the image by 255? Then, I assume MEAN and STD should be correctly “adjusted” so that they reflect the rescaled image where pixels are in [0,1], rather than [0,255] as previously? Sorry for the long message! I just wanted to make sure I was understanding PyTorch correctly. I’m happy to clarify anything.
st98002
Solved by ptrblck in post #2 The calculation of the mean and std on your images looks good. There is a small issue in your transformation. As you said, the mean and std for the ImageNet data is smaller than yours, because it was calculated on the normalized tensors. ToTensor will transform your PIL.Images to normalized tenso…
st98003
The calculation of the mean and std on your images looks good. There is a small issue in your transformation. As you said, the mean and std for the ImageNet data is smaller than yours, because it was calculated on the normalized tensors. ToTensor will transform your PIL.Images to normalized tensors in the range [0, 1]. If you are using Normalize afterwards, you should make sure to use the mean and std calculated on these tensor images in the range [0, 1]. However, since you’ve already computed these values, you could just scale them with 1./255. The same applies to undo the normalization using the mean and std. In your current code snippet you are assuming mean and std were calculated on the normalized tensors.
st98004
Thanks @ptrblck, I fixed the code a bit. The issue is that, while the saved images should look good, this is not the correct way to normalize the data. The way I had it earlier, if you take the mean and std from pixels in the range [0,255], then the data gets transformed like this:

ToTensor transforms the images and scales them into the range [0,1].
Then Normalize will do this: ([0,1] - rawmean) / rawstd

What we really want is the scaled mean and scaled std, as you pointed out (where by scaled data, I mean values in the range [0,1], not [0,255]). Of course, for undoing, it’s correct either way:

(([0,1] - rawmean) / rawstd) * rawstd + rawmean = [0,1], and then we multiply by 255.
or
(([0,1] - scaledmean) / scaledstd) * scaledstd + scaledmean = [0,1], and then we multiply by 255.

I will simply use the scaled mean and scaled std on my data from now on.
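For concreteness, a small sketch of the fix discussed above, simply rescaling the already-computed statistics (the numbers are from the earlier post):

MEAN = [93.8304761096 / 255.0] * 3   # statistics computed on [0,255] pixels, scaled to [0,1]
STD  = [84.9985507432 / 255.0] * 3
normalize = transforms.Normalize(MEAN, STD)   # applied after ToTensor(), which outputs [0,1] tensors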
st98005
When creating a variable-length PackedSequence with batch_first=True, accessing the .data attribute returns the sequences out of order, as if batch_first=False. I don’t really understand the reasoning behind having the sequence be the first dimension by default (it seems less intuitive given how pytorch otherwise deals with batches), but I’m assuming it is for performance reasons. Even then, given that the .data attribute is public-facing, I feel like it should be returned in the same order as it was given. Then, for those of us writing modules that use padded and packed sequences, we can more naturally deal with this input without hacking together even more re-ordering code; requiring pack_padded_sequence() and pack_sequence() to receive sequences in decreasing length is enough of a hassle, but that’s another topic. I wasn’t sure if this behavior was intended or not, so I’m posting here rather than making a bug report. But is this behavior correct? If so, why, and how does pytorch recommend dealing with this issue? Code to Reproduce Behavior: import torch from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence batch_first_seqs = [ \ torch.rand((3, 2)), torch.rand((2, 2)), torch.rand((1, 2))] lengths = torch.LongTensor([3, 2, 1]) padded_seqs = pad_sequence(batch_first_seqs, batch_first=True) packed_seqs = pack_padded_sequence(padded_seqs, lengths=lengths, batch_first=True) print(batch_first_seqs) print(padded_seqs) print(packed_seqs) print(torch.cat(batch_first_seqs) == packed_seqs.data) Output: [tensor([[0.7967, 0.5329], [0.6376, 0.3543], [0.6514, 0.8007]]), tensor([[0.1709, 0.1577], [0.5007, 0.8083]]), tensor([[0.3345, 0.7590]])] tensor([[[0.7967, 0.5329], [0.6376, 0.3543], [0.6514, 0.8007]], [[0.1709, 0.1577], [0.5007, 0.8083], [0.0000, 0.0000]], [[0.3345, 0.7590], [0.0000, 0.0000], [0.0000, 0.0000]]]) PackedSequence(data=tensor([[0.7967, 0.5329], [0.1709, 0.1577], [0.3345, 0.7590], [0.6376, 0.3543], [0.5007, 0.8083], [0.6514, 0.8007]]), batch_sizes=tensor([3, 2, 1])) tensor([[1, 1], [0, 0], [0, 0], [0, 0], [1, 1], [0, 0]], dtype=torch.uint8)
st98006
Hi, The .data field is kept for backward compatibility but should not be used at all. Why do you need it? You should replace all use of it with either .detach() to break the graph or with torch.no_grad() to perform ops that are not tracked by the autograd engine.
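For illustration, a minimal sketch of the two replacements mentioned above:

y = x.detach()            # same data as x, but cut out of the autograd graph
with torch.no_grad():     # nothing inside this block is tracked by autograd
    out = model(x)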
st98007
Thanks for responding. I was using this as a bit of a hack to remove the padding from a padded sequence. Basically, I wanted to to let a function parameter specify whether an input was padded or not, then break up that padded sequence into a list of variable-length sequences for use in a stateful layer (like an LSTM, or a GRU). And of course, I was wanting to keep the batch dimension as the first dimension. Knowing that the .data field shouldn’t be used at all, it makes sense that this isn’t the pytorch-intended way of doing this. In fact, it seems likely that the library would rather this not be done at all. It seems unavoidable when writing custom state-based layers, so I suppose I’ll just write my own utility functions to help clean things up. Here’s hoping 1.0 cleans up this part of the library!
st98008
Hi. I’m learning PyTorch for my research. I want to implement the specific type of model shown in the attached image. Does anyone know how to implement it?

[Description]
Size of the input data: [channel=192, w=32, height=32]. (Note that 192 = 64 * 3.)
The next layers consist of 3 parallel layers with no connections between them. I will call the three layers A, B and C.
3-0) The number of input & output nodes of each of the three layers is 64*32*32.
3-1) Layer A is fed the first 64 channels of the input data.
3-2) Layer B is fed the next 64 channels of the input data.
3-3) Layer C is fed the last 64 channels of the input data.
3-4) There are no connections among A, B and C.
Finally, merge the outputs of the A, B and C layers, which has size [channel=192, w=32, height=32]. If you know of examples or a solution, please help me.
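For illustration, one way to express the described structure (a rough sketch under my own assumptions; each branch here is a single 3x3 conv, while the real branches can be any 64-in/64-out sub-network):

class ThreeBranchBlock(nn.Module):
    def __init__(self):
        super(ThreeBranchBlock, self).__init__()
        self.branch_a = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.branch_b = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.branch_c = nn.Conv2d(64, 64, kernel_size=3, padding=1)

    def forward(self, x):                      # x: [N, 192, 32, 32]
        a, b, c = torch.chunk(x, 3, dim=1)     # three [N, 64, 32, 32] slices
        return torch.cat([self.branch_a(a),
                          self.branch_b(b),
                          self.branch_c(c)], dim=1)   # back to [N, 192, 32, 32]

An alternative is a single grouped convolution, e.g. nn.Conv2d(192, 192, kernel_size=3, padding=1, groups=3), which also keeps the three 64-channel groups independent.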
st98009
I ran training and saved the loss value and the weights of the network. Then I ran training again and saved the loss value and weights again. It turned out that I got the same loss value on the first batch iteration, but the weights are already different after the first batch iteration (i.e. after one optimization step)… How could that happen? I set the same seed, but I suppose that doesn’t matter here.
st98010
Besides setting the random seed, you should also disable non-deterministic cuDNN operations if you are using the GPU. Have a look at the docs regarding reproducibility. Note that you might lose some performance when enabling deterministic behavior.
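The usual settings look roughly like this (a sketch based on the reproducibility notes in the docs):

torch.manual_seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.deterministic = True   # use deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False      # disable the auto-tuner, which can pick different kernels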
st98011
Yes, I do that, but it didn’t help me. Moreover, it shouldn’t matter, because the loss tensors are the same across the different runs… But the weights are different after the first optimization step…
st98012
That sounds quite strange, as I would assume the same loss is generated using the same data and model parameters. So the gradients / weight updates seem to differ somehow? Could you post a code snippet reproducing this behavior?
st98013
I’ve got a sequence-to-sequence model that includes a convolutional stack in front of some recurrent layers. In my experimental code, I’m carrying around a lengths tensor in addition to the data, downsampling it after each layer before instantiating/applying the mask, and packing/unpacking the sequences whenever they are handed to a recurrent layer. That is awfully inconvenient, and while it is still a bit faster than processing single samples, it is not as fast as I was hoping. Are there more elegant ways to deal with variable-sized inputs in CNNs?
st98014
Hi all, I’m currently writing some files containing tensors via the C++ OutputArchive api. Reading the tensor back via InputArchive::read works fine when you know the tensor names. However I would also like to be able to list all the tensor names currently available in a given archive. Is there currently a way to achieve this ? more details: Tensor writing is done via torch::serialize::OutputArchive archive; archive.load_from(filename); archive.write(tensor_name, tensor); I used to be able to list tensors in a hacky way with the following code but this seems to segfault. shared_ptr<torch::jit::script::Module> module = torch::jit::load(filename); for (const auto &p : module->get_parameters()) { ... } update: Updating to g++ from 4.8.5 to 5.2 made the segfault disappear which mostly solves my issue. Happy to know if there is a better way to do this though.
st98015
Larger batchSize means less time to complete an epoch. Is this right? But in practice, when I set batchSize = 1, the time for an epoch is 108 seconds. When I set batchSize = 32, it’s 116s, and 142s for batchSize = 64. I am very confused by the results…
st98016
Dear all, I found it difficult to implement this function written in TensorFlow, could anyone help me?

def upscale2d(x, n):
    """Box upscaling (also called nearest neighbors).

    Args:
      x: 4D tensor in NHWC format.
      n: integer scale (must be a power of 2).

    Returns:
      4D tensor up scaled by a factor n.
    """
    if n == 1:
        return x
    return tf.batch_to_space(tf.tile(x, [n**2, 1, 1, 1]), [[0, 0], [0, 0]], n)
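Since the docstring describes plain nearest-neighbor (box) upscaling, one possible PyTorch equivalent is simply F.interpolate (a hedged sketch; note that PyTorch uses NCHW rather than NHWC layout):

import torch.nn.functional as F

def upscale2d(x, n):
    # x: 4D tensor in NCHW format, n: integer scale factor
    if n == 1:
        return x
    return F.interpolate(x, scale_factor=n, mode='nearest')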
st98017
Hello! In lstm(x)[0] we use only part of the data, and it affects the number of epochs trained. Does a better solution exist? I mean, we have to do something so that we don’t lose valuable data from our LSTM. Edit: while doing some TF work, I just realized that it’s not a ready-made layer and some things have to be done with the states. Is someone ready to share a ready-to-use LSTM layer for PyTorch? I am making a function that I can call to build an LSTM for TF — my struggle in TF. Maybe it’ll inspire someone. If TF code is against the rules of the forum, I will delete it. Can I post TF code here?
st98018
Well, if you call lstm(x)[0] you are using the output of the lstm layer, not its states. nn.LSTM is a fully working layer, while nn.LSTMCell is only a single cell. What are you missing? If you have some TF code you would like to port to PyTorch, feel free to post it, and we can have a look at it.
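For reference, a minimal sketch of nn.LSTM used as a self-contained layer (no manual loop over time steps is needed):

lstm = nn.LSTM(input_size=3, hidden_size=3, batch_first=True)
x = torch.randn(8, 5, 3)            # (batch, seq_len, features)
output, (h_n, c_n) = lstm(x)        # hidden/cell states default to zeros
# output: (8, 5, 3) – hidden state for every time step
# h_n, c_n: (1, 8, 3) – final hidden and cell state of the layer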
st98019
The official PyTorch tutorial on LSTMs does this:

lstm = nn.LSTM(3, 3)  # Input dim is 3, output dim is 3
inputs = [torch.randn(1, 3) for _ in range(5)]  # make a sequence of length 5

# initialize the hidden state.
hidden = (torch.randn(1, 1, 3), torch.randn(1, 1, 3))
for i in inputs:
    # Step through the sequence one element at a time.
    # after each step, hidden contains the hidden state.
    out, hidden = lstm(i.view(1, 1, -1), hidden)

And I read somewhere on the PyTorch forum: “Don’t use lstm in Sequential”. The question is: is it a fully functional layer, or do I have to use a for loop and feed the state into the model to get the result? In TF there is no LSTM layer to drop into a model, and I assumed the same situation in PyTorch. Just slicing the data from lstm(input) caused me to pay attention to this, but I don’t know the right answer. I finished my TF implementation of an LSTM, but I don’t know how to append a row to a tensor in TF; I got shape [?, 87], which does not feed forward because of the ?. The problem is simple, but I don’t know the answer. The TF code works and gets results, but you have to change batch_size manually. I guess similar code would be needed in PyTorch to run a fully functional LSTM:

def RNN(x):
    output = tf.Variable((0, 0), trainable=False, validate_shape=False, dtype=tf.float32)
    lstm_cell = tf.nn.rnn_cell.LSTMCell(3, activation="tanh")
    state = lstm_cell.zero_state(batch_size=27, dtype=tf.float32)
    for number in range(timesteps):
        output, state = lstm_cell(x[number], state)
    print('State: ', state)
    return output
st98020
I now use the PyTorch 1.0 preview. However, when I run code written for PyTorch 0.4.0, it shows an error (the error screenshot is omitted here). After googling I know it’s a problem caused by the version difference, but I can’t find a way to fix it. I’m new to PyTorch, let alone the latest PyTorch 1.0. Can anyone help me? Or should I just fall back to the stable PyTorch 0.4.x version?
st98021
Hi, the torch.utils.ffi module, which was a temporary thing, has been removed and replaced by a proper solution, namely cpp extensions (see the docs and tutorial). This is used by the pytorch-lighthead library that you are using, so the library will need to be updated to support PyTorch 1.0. In the meantime I’m afraid you will have to keep using the 0.4 version.
st98022
Hi all, is there any functionality equivalent to tf.gather (https://www.tensorflow.org/api_docs/python/tf/gather) in PyTorch? The PyTorch function with the same name seems to do something else. I tried to search the documentation but was unable to find anything similar. Thanks a lot!
st98023
If you want to index a single dimension, you can use index_select(). For more dimensions, you actually want torch.gather(), but it is trickier to use.
st98024
Yes, torch.gather() is different, but I think you can just use PyTorch’s direct indexing.
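For illustration, a small sketch of both options for the single-dimension case (the typical tf.gather use):

x = torch.arange(12).view(3, 4)
idx = torch.tensor([2, 0])
torch.index_select(x, 0, idx)   # rows 2 and 0, like tf.gather(x, idx, axis=0)
x[idx]                          # plain advanced indexing gives the same result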
st98025
Are these two NNs identical from a practical point of view?

Sequential(
  (0): Linear(in_features=9328, out_features=100, bias=True)
  (1): Sigmoid()
  (2): Linear(in_features=100, out_features=16, bias=True)
  (3): Softmax()
)
Sequential(
  (0): Sequential(
    (0): Linear(in_features=9328, out_features=100, bias=True)
    (1): Sigmoid()
    (2): Linear(in_features=100, out_features=16, bias=True)
  )
  (1): Softmax()
)
st98026
Solved by ptrblck in post #2 Yes, they are identical. The only difference between them are of course the values of the parameters due to random initialization. If you want to use the same parameters, you could try the following: modelA = nn.Sequential( nn.Linear(9328, 100), nn.Sigmoid(), nn.Linear(100, 16), n…
st98027
Yes, they are identical. The only difference between them are of course the values of the parameters due to random initialization. If you want to use the same parameters, you could try the following: modelA = nn.Sequential( nn.Linear(9328, 100), nn.Sigmoid(), nn.Linear(100, 16), nn.Softmax(dim=1) ) modelB = nn.Sequential( nn.Sequential( nn.Linear(9328, 100), nn.Sigmoid(), nn.Linear(100, 16), ), nn.Softmax(dim=1) ) with torch.no_grad(): modelB[0][0].weight = modelA[0].weight modelB[0][0].bias = modelA[0].bias modelB[0][2].weight = modelA[2].weight modelB[0][2].bias = modelA[2].bias x = torch.randn(1, 9328) outputA = modelA(x) outputB = modelB(x) print((outputA == outputB).all()) > tensor(1, dtype=torch.uint8)
st98028
import torch import torch.nn.functional as F a = torch.arange(1, 5).view(1, 1, 2, 2).float() a = F.interpolate(a, size=[4, 4], mode='bilinear') print(a) output is tensor([[[[1.0000, 1.2500, 1.7500, 2.0000], [1.5000, 1.7500, 2.2500, 2.5000], [2.5000, 2.7500, 3.2500, 3.5000], [3.0000, 3.2500, 3.7500, 4.0000]]]]) This part seems to be correct. a = F.interpolate(a, size=[1, 4], mode='bilinear') print(a) The output is tensor([[[[1.0000, 1.2500, 1.7500, 2.0000]]]]) How to understand the result? As a comparison, cv2 gives the following result: a = a[0].transpose(0, 1).transpose(1, 2) a = cv2.resize(a.numpy(), (1, 4), interpolation=cv2.INTER_LINEAR) print(a) output: [[1.5] [2. ] [3. ] [3.5]]
st98029
OS: MAC OS 10.13 python: anaconda 3.7 Compiler: gcc-8 and g+±8 NO_CUDA=1 CC=gcc-8 CXX=g+±8 python setup.py install this also does not work for NO_CUDA=1 NO_DISTRIBUTED=1 NO_QNNPACK=1 DEBUG=1 NO_CAFFE2_OPS=1 CC=gcc-8 CXX=g+±8 python setup.py install – ******** Summary ******** – CMake version : 3.12.3 – CMake command : /usr/local/Cellar/cmake/3.12.3/bin/cmake – System : Darwin – C++ compiler : /usr/local/bin/g+±8 – C++ compiler version : 8.2.0 – CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations – Build type : Debug – Compile definitions : – CMAKE_PREFIX_PATH : /anaconda3/lib/python3.7/site-packages – CMAKE_INSTALL_PREFIX : /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/torch/lib/tmp_install – CMAKE_MODULE_PATH : /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/cmake/Modules – ONNX version : 1.3.0 – ONNX NAMESPACE : onnx_torch – ONNX_BUILD_TESTS : OFF – ONNX_BUILD_BENCHMARKS : OFF – ONNX_USE_LITE_PROTO : OFF – ONNXIFI_DUMMY_BACKEND : OFF – Protobuf compiler : – Protobuf includes : – Protobuf libraries : – BUILD_ONNX_PYTHON : OFF – Found gcc >=5 and CUDA <= 7.5, adding workaround C++ flags – Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor – Removing -DNDEBUG from compile flags – MAC OS Darwin Version: 17 – Compiling with OpenMP support – MAGMA not found. Compiling without MAGMA support – Could not find hardware support for NEON on this machine. – No OMAP3 processor on this machine. – No OMAP4 processor on this machine. – AVX compiler support found – AVX2 compiler support found – Atomics: using C11 intrinsics – Found a library with LAPACK API. (accelerate) – CuDNN not found. Compiling without CuDNN support disabling CUDA because NOT USE_CUDA is set – MIOpen not found. Compiling without MIOpen support disabling ROCM because NOT USE_ROCM is set disabling MKLDNN because USE_MKLDNN is not set – GCC 8.2.0: Adding gcc and gcc_s libs to link line – Using python found in /anaconda3/bin/python disabling CUDA because USE_CUDA is set false – Found OpenMP_C: -fopenmp – Found OpenMP_CXX: -fopenmp – Found OpenMP: TRUE – Configuring build for SLEEF-v3.2 Target system: Darwin-17.7.0 Target processor: x86_64 Host system: Darwin-17.7.0 Host processor: x86_64 Detected C compiler: GNU @ /usr/local/bin/gcc-8 – Using option -Wall -Wno-unused -Wno-attributes -Wno-unused-result -Wno-psabi -ffp-contract=off -fno-math-errno -fno-trapping-math to compile libsleef – Building shared libs : OFF – MPFR : /usr/local/lib/libmpfr.dylib – MPFR header file in /usr/local/include – GMP : /usr/local/lib/libgmp.dylib – RUNNING_ON_TRAVIS : 0 – COMPILER_SUPPORTS_OPENMP : 1 – Using python found in /anaconda3/bin/python – /usr/local/bin/g+±8 /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/torch/abi-check.cpp -o /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/build/abi-check – Determined _GLIBCXX_USE_CXX11_ABI=1 – NCCL operators skipped due to no CUDA support – Excluding ideep operators as we are not using ideep – Excluding image processing operators due to no opencv – Excluding video processing operators due to no opencv – MPI operators skipped due to no MPI support – Include Observer library – Using lib/python3.7/site-packages as python relative installation path – Automatically generating missing init.py files. – A previous caffe2 cmake run already created the init.py files. 
CMake Warning at CMakeLists.txt:387 (message): Generated cmake files are only fully tested if one builds with system glog, gflags, and protobuf. Other settings may generate files that are not well tested. – – ******** Summary ******** – General: – CMake version : 3.12.3 – CMake command : /usr/local/Cellar/cmake/3.12.3/bin/cmake – System : Darwin – C++ compiler : /usr/local/bin/g+±8 – C++ compiler version : 8.2.0 – BLAS : MKL – CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -D_FORCE_INLINES -D_MWAITXINTRIN_H_INCLUDED -D__STRICT_ANSI__ -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-stringop-overflow – Build type : Debug – Compile definitions : ONNX_NAMESPACE=onnx_torch;USE_C11_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1 – CMAKE_PREFIX_PATH : /anaconda3/lib/python3.7/site-packages – CMAKE_INSTALL_PREFIX : /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/torch/lib/tmp_install – TORCH_VERSION : 1.0.0 – CAFFE2_VERSION : 1.0.0 – BUILD_ATEN_MOBILE : OFF – BUILD_ATEN_ONLY : OFF – BUILD_BINARY : OFF – BUILD_CUSTOM_PROTOBUF : ON – Link local protobuf : ON – BUILD_DOCS : OFF – BUILD_PYTHON : ON – Python version : 3.7 – Python executable : /anaconda3/bin/python – Pythonlibs version : 3.7.0 – Python library : /anaconda3/lib/libpython3.7m.dylib – Python includes : /anaconda3/include/python3.7m – Python site-packages: lib/python3.7/site-packages – BUILD_CAFFE2_OPS : OFF – BUILD_SHARED_LIBS : ON – BUILD_TEST : ON – USE_ASAN : OFF – USE_CUDA : 0 – USE_ROCM : OFF – USE_EIGEN_FOR_BLAS : ON – USE_FFMPEG : OFF – USE_GFLAGS : OFF – USE_GLOG : OFF – USE_LEVELDB : OFF – USE_LITE_PROTO : OFF – USE_LMDB : OFF – USE_METAL : OFF – USE_MKL : OFF – USE_MKLDNN : OFF – USE_MOBILE_OPENGL : OFF – USE_NCCL : OFF – USE_NNPACK : 1 – USE_NUMPY : ON – USE_OBSERVERS : ON – USE_OPENCL : OFF – USE_OPENCV : OFF – USE_OPENMP : OFF – USE_PROF : OFF – USE_QNNPACK : 0 – USE_REDIS : OFF – USE_ROCKSDB : OFF – USE_ZMQ : OFF – USE_DISTRIBUTED : OFF – Public Dependencies : Threads::Threads – Private Dependencies : nnpack;cpuinfo;fp16;onnxifi_loader;gcc_s;gcc – Configuring done – Generating done – Build files have been written to: /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/build ninja install -j4 Error FAILED: lib/libcaffe2.dylib ld: library not found for -lgcc_s collect2: error: ld returned 1 exit status [1279/1481] Building CXX object modules/observers/CMakeFiles/caffe2_observers.dir/observer_config.cc.o [1280/1481] Building CXX object modules/observers/CMakeFiles/caffe2_observers.dir/net_observer_reporter_print.cc.o [1281/1481] Building CXX object modules/module_test/CMakeFiles/caffe2_module_test_dynamic.dir/module_test_dynamic.cc.o ninja: build stopped: subcommand failed. 
setup.py::build_deps::run() Failed to run ‘bash …/tools/build_pytorch_libs.sh --use-nnpack caffe2’ Case 2 OS: Mac OSX 10.13 python: anacnonda 3.7 compiler: clang and clang++ make command: NO_CUDA=1 NO_DISTRIBUTED=1 NO_QNNPACK=1 DEBUG=1 NO_CAFFE2_OPS=1 CC=clang CXX=clang++ python setup.py install – ******** Summary ******** – CMake version : 3.12.3 – CMake command : /usr/local/Cellar/cmake/3.12.3/bin/cmake – System : Darwin – C++ compiler : /Library/Developer/CommandLineTools/usr/bin/clang++ – C++ compiler version : 10.0.0.10001044 – CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations – Build type : Debug – Compile definitions : – CMAKE_PREFIX_PATH : /anaconda3/lib/python3.7/site-packages – CMAKE_INSTALL_PREFIX : /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/torch/lib/tmp_install – CMAKE_MODULE_PATH : /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/cmake/Modules – ONNX version : 1.3.0 – ONNX NAMESPACE : onnx_torch – ONNX_BUILD_TESTS : OFF – ONNX_BUILD_BENCHMARKS : OFF – ONNX_USE_LITE_PROTO : OFF – ONNXIFI_DUMMY_BACKEND : OFF – Protobuf compiler : – Protobuf includes : – Protobuf libraries : – BUILD_ONNX_PYTHON : OFF – Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor – Removing -DNDEBUG from compile flags – Found OpenMP_C: -Xclang -fopenmp (found version “3.1”) – Found OpenMP_CXX: -Xclang -fopenmp (found version “3.1”) – Found OpenMP: TRUE (found version “3.1”) – Compiling with OpenMP support – MAGMA not found. Compiling without MAGMA support – Could not find hardware support for NEON on this machine. – No OMAP3 processor on this machine. – No OMAP4 processor on this machine. – Looking for cpuid.h – Looking for cpuid.h - found – Performing Test HAVE_GCC_GET_CPUID – Performing Test HAVE_GCC_GET_CPUID - Success – Performing Test NO_GCC_EBX_FPIC_BUG – Performing Test NO_GCC_EBX_FPIC_BUG - Success – Performing Test C_HAS_AVX_1 – Performing Test C_HAS_AVX_1 - Failed – Performing Test C_HAS_AVX_2 – Performing Test C_HAS_AVX_2 - Success – Performing Test C_HAS_AVX2_1 – Performing Test C_HAS_AVX2_1 - Failed – Performing Test C_HAS_AVX2_2 – Performing Test C_HAS_AVX2_2 - Success – Performing Test CXX_HAS_AVX_1 – Performing Test CXX_HAS_AVX_1 - Failed – Performing Test CXX_HAS_AVX_2 – Performing Test CXX_HAS_AVX_2 - Success – Performing Test CXX_HAS_AVX2_1 – Performing Test CXX_HAS_AVX2_1 - Failed – Performing Test CXX_HAS_AVX2_2 – Performing Test CXX_HAS_AVX2_2 - Success – AVX compiler support found – AVX2 compiler support found – Performing Test HAS_C11_ATOMICS – Performing Test HAS_C11_ATOMICS - Failed – Performing Test HAS_MSC_ATOMICS – Performing Test HAS_MSC_ATOMICS - Failed – Performing Test HAS_GCC_ATOMICS – Performing Test HAS_GCC_ATOMICS - Success – Atomics: using GCC intrinsics – Looking for cheev_ – Looking for cheev_ - found – Found a library with LAPACK API. (accelerate) – CuDNN not found. Compiling without CuDNN support – MIOpen not found. 
Compiling without MIOpen support – Looking for mmap – Looking for mmap - found – Looking for shm_open – Looking for shm_open - found – Looking for shm_unlink – Looking for shm_unlink - found – Looking for malloc_usable_size – Looking for malloc_usable_size - not found – Performing Test C_HAS_THREAD – Performing Test C_HAS_THREAD - Success – Using python found in /anaconda3/bin/python – Check size of long double – Check size of long double - done – Performing Test COMPILER_SUPPORTS_LONG_DOUBLE – Performing Test COMPILER_SUPPORTS_LONG_DOUBLE - Success – Performing Test COMPILER_SUPPORTS_FLOAT128 – Performing Test COMPILER_SUPPORTS_FLOAT128 - Failed – Performing Test COMPILER_SUPPORTS_SSE2 – Performing Test COMPILER_SUPPORTS_SSE2 - Success – Performing Test COMPILER_SUPPORTS_SSE4 – Performing Test COMPILER_SUPPORTS_SSE4 - Success – Performing Test COMPILER_SUPPORTS_AVX – Performing Test COMPILER_SUPPORTS_AVX - Success – Performing Test COMPILER_SUPPORTS_FMA4 – Performing Test COMPILER_SUPPORTS_FMA4 - Success – Performing Test COMPILER_SUPPORTS_AVX2 – Performing Test COMPILER_SUPPORTS_AVX2 - Success – Performing Test COMPILER_SUPPORTS_SVE – Performing Test COMPILER_SUPPORTS_SVE - Failed – Performing Test COMPILER_SUPPORTS_AVX512F – Performing Test COMPILER_SUPPORTS_AVX512F - Success – Found OpenMP_C: -Xclang -fopenmp (found version “3.1”) – Found OpenMP_CXX: -Xclang -fopenmp (found version “3.1”) – Performing Test COMPILER_SUPPORTS_OPENMP – Performing Test COMPILER_SUPPORTS_OPENMP - Failed – Performing Test COMPILER_SUPPORTS_WEAK_ALIASES – Performing Test COMPILER_SUPPORTS_WEAK_ALIASES - Failed – Performing Test COMPILER_SUPPORTS_BUILTIN_MATH – Performing Test COMPILER_SUPPORTS_BUILTIN_MATH - Success – Configuring build for SLEEF-v3.2 – Using option -Wall -Wno-unused -Wno-attributes -Wno-unused-result -ffp-contract=off -fno-math-errno -fno-trapping-math to compile libsleef – Building shared libs : OFF – MPFR : /usr/local/lib/libmpfr.dylib – MPFR header file in /usr/local/include – GMP : /usr/local/lib/libgmp.dylib – RUNNING_ON_TRAVIS : 0 – COMPILER_SUPPORTS_OPENMP : – Using python found in /anaconda3/bin/python – NCCL operators skipped due to no CUDA support – Excluding ideep operators as we are not using ideep – Excluding image processing operators due to no opencv – Excluding video processing operators due to no opencv – MPI operators skipped due to no MPI support – Include Observer library – Using lib/python3.7/site-packages as python relative installation path – Automatically generating missing init.py files. 
– ******** Summary ******** – General: – CMake version : 3.12.3 – CMake command : /usr/local/Cellar/cmake/3.12.3/bin/cmake – System : Darwin – C++ compiler : /Library/Developer/CommandLineTools/usr/bin/clang++ – C++ compiler version : 10.0.0.10001044 – BLAS : MKL – CXX flags : -Wno-deprecated -fvisibility-inlines-hidden -Wno-deprecated-declarations -Xclang -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-invalid-partial-specialization -Wno-typedef-redefinition -Wno-unknown-warning-option -Wno-unused-private-field -Wno-inconsistent-missing-override -Wno-aligned-allocation-unavailable -Wno-c++14-extensions -Wno-constexpr-not-const -Wno-missing-braces -Qunused-arguments -faligned-new -Wno-unused-private-field -Wno-missing-braces -Wno-c++14-extensions -Wno-constexpr-not-const – Build type : Debug – Compile definitions : ONNX_NAMESPACE=onnx_torch;USE_GCC_ATOMICS=1;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1 – CMAKE_PREFIX_PATH : /anaconda3/lib/python3.7/site-packages – CMAKE_INSTALL_PREFIX : /Users/krishna.singh/Desktop/Object_Detection/Models/maskrcnn-benchmark/github/pytorch/torch/lib/tmp_install – TORCH_VERSION : 1.0.0 – CAFFE2_VERSION : 1.0.0 – BUILD_ATEN_MOBILE : OFF – BUILD_ATEN_ONLY : OFF – BUILD_BINARY : OFF – BUILD_CUSTOM_PROTOBUF : ON – Link local protobuf : ON – BUILD_DOCS : OFF – BUILD_PYTHON : ON – Python version : 3.7 – Python executable : /anaconda3/bin/python – Pythonlibs version : 3.7.0 – Python library : /anaconda3/lib/libpython3.7m.dylib – Python includes : /anaconda3/include/python3.7m – Python site-packages: lib/python3.7/site-packages – BUILD_CAFFE2_OPS : OFF – BUILD_SHARED_LIBS : ON – BUILD_TEST : ON – USE_ASAN : OFF – USE_CUDA : 0 – USE_ROCM : OFF – USE_EIGEN_FOR_BLAS : ON – USE_FFMPEG : OFF – USE_GFLAGS : OFF – USE_GLOG : OFF – USE_LEVELDB : OFF – USE_LITE_PROTO : OFF – USE_LMDB : OFF – USE_METAL : OFF – USE_MKL : OFF – USE_MKLDNN : OFF – USE_MOBILE_OPENGL : OFF – USE_NCCL : OFF – USE_NNPACK : 1 – USE_NUMPY : ON – USE_OBSERVERS : ON – USE_OPENCL : OFF – USE_OPENCV : OFF – USE_OPENMP : OFF – USE_PROF : OFF – USE_QNNPACK : 0 – USE_REDIS : OFF – USE_ROCKSDB : OFF – USE_ZMQ : OFF – USE_DISTRIBUTED : OFF – Public Dependencies : Threads::Threads – Private Dependencies : nnpack;cpuinfo;fp16;onnxifi_loader – Configuring done – Generating done Error “_omp_set_num_threads”, referenced from: _THSetNumThreads in THGeneral.cpp.o caffe2::Caffe2SetOpenMPThreads(int*, char***) in init_omp.cc.o ld: symbol(s) not found for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) [1363/1562] Building CXX object modules/observers/CMakeFiles/caffe2_observers.dir/perf_observer.cc.o ninja: build stopped: subcommand failed. setup.py::build_deps::run() Failed to run ‘bash …/tools/build_pytorch_libs.sh --use-nnpack caffe2’
st98030
I have about 400 images, all labeled with correct anchor boxes from Supervisely, and I want to apply object detection to them. I am trying to understand the exact steps I need to get everything working. My current thought process is to first find out where I can grab Darknet for PyTorch, like VGG, and just apply transfer learning with my dataset. I can probably just change the input shape and the output vector, because they are the only things that are going to be different. Can someone please point me in the right direction?
st98031
I would search for good implementations of object detection models on GitHub. This YOLOv3 version seems to provide a training script. Also, @chenyuntc has provided a nice repo for Faster R-CNN here.
st98032
I am having trouble trying to use the transforms.Scale(size=(100,200)) method as I keep getting the error message below. I have checked that using only the ToTensor() does not give me any error so I don’t think its an issue with the installation ? Also, I am aware that I have an older version of torchvision but I still can’t figure out why it doesn’t work. Below is the error message followed by the __getitem__ method Traceback (most recent call last): File “train.py”, line 603, in main(args) File “train.py”, line 243, in main for batch in train_loader: File “/home/haziq/anaconda3/envs/pytorch-cuda9.0/lib/python3.5/site-packages/torch/utils/data/dataloader.py”, line 336, in next return self._process_next_batch(batch) File “/home/haziq/anaconda3/envs/pytorch-cuda9.0/lib/python3.5/site-packages/torch/utils/data/dataloader.py”, line 357, in _process_next_batch raise batch.exc_type(batch.exc_msg) TypeError: Traceback (most recent call last): File “/home/haziq/anaconda3/envs/pytorch-cuda9.0/lib/python3.5/site-packages/torch/utils/data/dataloader.py”, line 106, in _worker_loop samples = collate_fn([dataset[i] for i in batch_indices]) File “/home/haziq/anaconda3/envs/pytorch-cuda9.0/lib/python3.5/site-packages/torch/utils/data/dataloader.py”, line 106, in samples = collate_fn([dataset[i] for i in batch_indices]) File “/home/haziq/Desktop/Trajectory Prediction/sgan-original/sgan/data/trajectories.py”, line 149, in getitem pedestrian_images.append(self.transform(cv2.imread(os.path.join(folder, ‘crops’, str(frame).zfill(10), str(int(idx)).zfill(10)+".png")))) File “/home/haziq/anaconda3/envs/pytorch-cuda9.0/lib/python3.5/site-packages/torchvision-0.1.9-py3.5.egg/torchvision/transforms.py”, line 34, in call img = t(img) File “/home/haziq/anaconda3/envs/pytorch-cuda9.0/lib/python3.5/site-packages/torchvision-0.1.9-py3.5.egg/torchvision/transforms.py”, line 199, in call return img.resize(self.size, self.interpolation) TypeError: ‘tuple’ object cannot be interpreted as an integer def __getitem__(self, index): # get data for the given index df = self.df[self.df['global_id'] == index] # read images pedestrian_images = [] for folder,frame,idx in zip(df['folder'],df['frame'],df['id']): pedestrian_images.append(self.transform(cv2.imread(os.path.join(folder, 'crops', str(frame).zfill(10), str(int(idx)).zfill(10)+".png")))) return pedestrian_images
st98034
The error message is a bit strange, however the error might be thrown, since you are passing a numpy array instead of a PIL.Image. The image transformations of torchvision.transforms usually work on PIL.Images, so try to load it as such. Also, transforms.Scale is deprecated, you should use transforms.Resize instead.
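For example, a minimal sketch with a recent torchvision (the file path and the target size are just placeholders):

import cv2
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((100, 200)),   # replaces the deprecated transforms.Scale
    transforms.ToTensor(),
])

# load with OpenCV as before, then convert to a PIL.Image
img_bgr = cv2.imread("example.png")                  # hypothetical path
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)   # OpenCV loads images as BGR
pil_img = Image.fromarray(img_rgb)

tensor_img = transform(pil_img)   # tensor of shape [3, 100, 200]

Alternatively, you could skip OpenCV entirely and load the file with Image.open(path), which already returns a PIL.Image.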
st98035
Howdy. I’d like to convert an existing caffe2 model to float16 compute (existing in the sense that I have the .pb files in hand) … it seems like the kind of thing that should be simple to do by setting a switch in the workspace or when loading the file (analogous to .set_floatx in Keras), but I’m not getting very far trying to set float16_compute in places like NetDef.ParseFromString or workspace.CreateNet… is there an easy way to do this, or do I need to explicitly step through the layers of the old model and set float16_compute on each one? thanks, nate
st98036
Hello! I'm currently having some trouble with quite a simple task, so here I am. I have a tensor A of size [N, T, K] and a tensor B of size [K, C]. I want to compute a tensor Y of size [N, T, C] according to the following algorithm:

for i in range(N):
    for j in range(T):
        for l in range(C):
            Y[i, j, l] = 0
            for k in range(K):
                Y[i, j, l] += A[i, j, k] * B[k, l]

I've tried many strange things (always ending with a "RuntimeError: the size of … must match the size of …"), but cannot find an efficient way to vectorize this. I feel like it's a perfect job for einsum, but didn't find the correct notation. Thanks for reading!
st98037
Oh good, that was so easy, thank you! By the way, for future readers, the einsum notation is:

torch.einsum('ijk,kl->ijl', (A, B))
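As a small sketch for future readers (the shapes below are just illustrative): this contraction is also a plain matrix product over the last dimension, so torch.matmul gives the same result, since B is broadcast over the leading dimensions of A.

import torch

N, T, K, C = 2, 3, 4, 5
A = torch.randn(N, T, K)
B = torch.randn(K, C)

Y_einsum = torch.einsum('ijk,kl->ijl', (A, B))  # [N, T, C]
Y_matmul = A.matmul(B)                          # broadcasted matmul, also [N, T, C]

print(torch.allclose(Y_einsum, Y_matmul))  # True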
st98038
I am experimenting with the memory-efficient DenseNet implementation (https://github.com/gpleiss/efficient_densenet_pytorch/blob/master/demo.py). When calculating the accuracy and error they use:

# measure accuracy and record loss
batch_size = target.size(0)
_, pred = output.data.cpu().topk(1, dim=1)
error.update(torch.ne(pred.squeeze(), target.cpu()).float().sum() / batch_size, batch_size)
losses.update(loss.item(), batch_size)

error is a custom class defined earlier which keeps track of errors:

def update(self, val, n=1):
    self.val = val
    self.sum += val * n
    self.count += n
    self.avg = self.sum / self.count

In particular, what is the error.update line doing? I cannot find torch.ne in the documentation and I'm getting errors when I use this. (I am using my own data and my datatypes do not match up: RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'other')
st98039
error.update updates the AverageMeter for the prediction errors, i.e. it keeps track of the average error, the sum of all falsely predicted samples, etc. torch.ne performs the element-wise "not equal" comparison. You can find the docs here 2. The error probably points to the dtypes of the arguments to torch.ne. Most likely pred is a torch.LongTensor, while you cast target to float. Try to remove the .float() cast or cast pred to float as well.
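For illustration, a tiny sketch of the dtype issue with made-up tensors:

import torch

pred = torch.tensor([1, 0, 2])        # LongTensor (e.g. from topk indices)
target = torch.tensor([1., 1., 2.])   # FloatTensor

# torch.ne(pred, target) can raise the dtype mismatch error you are seeing,
# so make both operands the same dtype first:
errors = torch.ne(pred, target.long())   # tensor of 0s and 1s
error_rate = errors.float().mean()       # here: 0.3333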
st98040
Thanks @ptrblck. I have a separate question for you. For an EEG time series input I'm using a 1D CNN to look for N seizure events. Is there a built-in PyTorch function to output regression events (to indicate seizure intensity on a continuous spectrum [0, inf])? This seems a bit niche. My first hunch is to look into making a decoder which takes the CNN feature map and then decodes the N events until it reaches a stopping condition (like a seq2seq model).
st98041
Do you have a dataset containing the seizure intensities in [0, inf]? If so, it seems like a regression use case for me, i.e. you could use a linear layer (maybe with a relu at the end) and something like nn.MSELoss as your criterion. Could you explain the stopping condition a bit?
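As a rough sketch of that idea (the layer sizes, the pooling, and the single-intensity output below are assumptions, not something from this thread):

import torch
import torch.nn as nn

class SeizureIntensityNet(nn.Module):
    def __init__(self, in_channels=1, hidden=32):
        super(SeizureIntensityNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time dimension
        )
        self.head = nn.Sequential(
            nn.Linear(hidden, 1),
            nn.ReLU(),                 # keeps the predicted intensity in [0, inf)
        )

    def forward(self, x):              # x: [batch, channels, time]
        feats = self.features(x).squeeze(-1)
        return self.head(feats)

model = SeizureIntensityNet()
criterion = nn.MSELoss()
x = torch.randn(8, 1, 1000)            # dummy EEG windows
target = torch.rand(8, 1) * 5          # dummy intensities
loss = criterion(model(x), target)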
st98042
The intensities are continuous and have no upper bound. There are also no 0-intensity seizures (I guess that would mean no seizure, so it's not included). I imagine the stopping condition like a seq2seq decoder. You feed the time series into the CNN, which produces the context/feature vector. A decoder (LSTM) then treats this like an NLP problem, e.g. Section 2.2 in the Pointer-generator network 1. You have a start token which doesn't have any significance except to init the decoder. You then decode timestep by timestep, "event by event", using the feature/context vector and the previous decoder input until you reach a stopping condition. I'm not sure how the stopping condition is usually handled, but I assume it's similar in most seq2seq models. Instead of the decoder running a softmax over the probability distribution of a vocabulary, it would instead use an MSELoss criterion. This way it could produce N seizure events until it believes that it has output all the seizure events from the EEG data. Again, not sure what, if any, of this is available out of the box in PyTorch.
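To make the idea concrete, here is a purely hypothetical sketch of such a decoder; the learned "stop" logit below is just one common way to end variable-length decoding and is not something PyTorch provides out of the box:

import torch
import torch.nn as nn

class EventDecoder(nn.Module):
    """Decodes a context vector into a variable-length list of intensities."""
    def __init__(self, context_dim, hidden_dim=64, max_events=10):
        super(EventDecoder, self).__init__()
        self.max_events = max_events
        self.cell = nn.LSTMCell(1, hidden_dim)          # input: previous intensity
        self.init_h = nn.Linear(context_dim, hidden_dim)
        self.init_c = nn.Linear(context_dim, hidden_dim)
        self.intensity = nn.Sequential(nn.Linear(hidden_dim, 1), nn.ReLU())
        self.stop = nn.Linear(hidden_dim, 1)            # logit for "no more events"

    def forward(self, context):                         # context: [batch, context_dim]
        h, c = self.init_h(context), self.init_c(context)
        prev = context.new_zeros(context.size(0), 1)    # acts as the start "token"
        events, stops = [], []
        for _ in range(self.max_events):
            h, c = self.cell(prev, (h, c))
            prev = self.intensity(h)                    # predicted intensity for this step
            events.append(prev)
            stops.append(self.stop(h))
        # events: [batch, max_events]; the stop logits decide where the sequence ends
        return torch.cat(events, dim=1), torch.cat(stops, dim=1)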
st98043
What is PyTorch's roadmap for supporting deep learning on edge platforms like the Intel Movidius Neural Compute Stick?
st98044
Hi, Is there any documentation on how to use these systems ourselves? From what I remember, Intel themselves were providing a full backend for Caffe using them, but you can't build one yourself. Basically, we would need Intel to open this up before we could support it.
st98045
True. I could only find the mvNCCompile script, which converts a Caffe / TensorFlow graph to one that can be used on the platform, but I could not find any documentation regarding the internals. I was hoping to add a backend to PyTorch, but was disappointed to see that it is a black box. Are there any other edge platforms (less than 100 USD price range) which offer PyTorch support?
st98046
I am not sure what you call "edge platforms"? I know that the Google guys said that they will add TPU support for PyTorch, but this is work in progress and not public yet.
st98047
By edge platforms, I mean GPU-like SoCs which can be added to embedded devices like cameras. Such embedded devices can be made "intelligent" by offloading deep learning inference to a chip like the Myriad VPU from Intel.
st98048
I trained a model on GPU and saved it, following the official instructions https://pytorch.org/docs/stable/notes/serialization.html 1:

torch.save(the_model.state_dict(), PATH)

Then when I try to load it with

the_model = TheModelClass(*args, **kwargs)
the_model.load_state_dict(torch.load(PATH, map_location='cpu'))

I see the RAM usage explode. Is there something I need to be careful of during training? The model is not big: 2 RNNs, a few fully connected layers, and that's it. Or is this normal behaviour? Thank you
st98049
Hi, I would like to implement multiplicative LSTM (https://arxiv.org/pdf/1609.07959.pdf 10) and found an implementation that seems to work with normal inputs (i.e. not packed sequences) here: https://github.com/FlorianWilhelm/mlstm4reco/blob/master/src/mlstm4reco/layers.py 18 The code is below:

class mLSTM(RNNBase):
    def __init__(self, input_size, hidden_size, bias=True):
        super(mLSTM, self).__init__(
            mode='LSTM', input_size=input_size, hidden_size=hidden_size,
            num_layers=1, bias=bias, batch_first=True,
            dropout=0, bidirectional=False)

        w_im = torch.Tensor(hidden_size, input_size)
        w_hm = torch.Tensor(hidden_size, hidden_size)
        b_im = torch.Tensor(hidden_size)
        b_hm = torch.Tensor(hidden_size)
        self.w_im = Parameter(w_im)
        self.b_im = Parameter(b_im)
        self.w_hm = Parameter(w_hm)
        self.b_hm = Parameter(b_hm)

        self.lstm_cell = LSTMCell(input_size, hidden_size, bias)
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1.0 / math.sqrt(self.hidden_size)
        for weight in self.parameters():
            weight.data.uniform_(-stdv, stdv)

    def forward(self, input, hx):
        n_batch, n_seq, n_feat = input.size()

        hx, cx = hx
        steps = [cx.unsqueeze(1)]
        for seq in range(n_seq):
            mx = F.linear(input[:, seq, :], self.w_im, self.b_im) * F.linear(hx, self.w_hm, self.b_hm)
            hx = (mx, cx)
            hx, cx = self.lstm_cell(input[:, seq, :], hx)
            steps.append(cx.unsqueeze(1))

        return torch.cat(steps, dim=1)

I checked the code used in the current LSTM/GRU/RNNBase in https://pytorch.org/docs/master/_modules/torch/nn/modules/rnn.html 28, but I don't know how I could easily replace _impl = _rnn_impls[self.mode], or whether there is a way to handle PackedSequence directly in the code of mLSTM. Thank you for your help.
st98050
Hello all, I have a network architecture as follows:

input --> conv1 (3,1,1) --> bn --> relu --> conv2 (3,1,1)
              |                                  ^
              |----------------------------------|

where conv1(3,1,1) means kernel size 3, stride 1 and padding 1. The output of conv1 will be concatenated with the output of conv2. It is easy to use the torch.cat function to concatenate them. However, I am using self.add_module to write the network. How could I use the concatenation in this case? This is my sample code:

self.add_module('conv1', nn.Conv3d(32, 64, kernel_size=3, stride=1))
self.add_module('conv1_norm', nn.BatchNorm3d(64))
self.add_module('conv1_relu', nn.ReLU(inplace=True))
self.add_module('conv2', nn.Conv3d(64, 128, kernel_size=3, stride=1))
# Concatenate ???

Thanks
st98052
Hi, I guess you do this inside a Sequential module? The Sequential module can only perform sequential operations (it cannot branch into two paths and merge the results). You will need to implement the forward function for this yourself, I'm afraid. For example, you can wrap these 4 ops in a custom nn.Module with a forward that concatenates the outputs, and then add this new module to your Sequential one.
st98053
albanD: "…these 4 ops in a custom nn.Module with…"

Yes. Could you give me some example code? I am writing it in the __init__ function of the class.
st98054
I guess it's like a resnet module:

class ConcatMod(nn.Module):
    def __init__(self, args):
        super(ConcatMod, self).__init__()
        # Use args properly to set conv params here depending on your application
        self.conv1 = nn.Conv3d(32, 64, kernel_size=3, stride=1)
        self.conv1_norm = nn.BatchNorm3d(64)
        self.conv1_relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv3d(64, 128, kernel_size=3, stride=1)

    def forward(self, input):
        out1 = self.conv1(input)
        out1_renormed = self.conv1_relu(self.conv1_norm(out1))
        out2 = self.conv2(out1_renormed)
        output = torch.cat([out1, out2], 1)
        return output

# In your Sequential, you can do
self.add_module("concat1", ConcatMod(args))
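As a usage sketch (the padding and the surrounding Sequential are additions here, chosen so that both outputs share the same spatial size and the concatenation works, matching the (3,1,1) convs from your diagram):

import torch
import torch.nn as nn

class ConcatBlock(nn.Module):
    """Concatenates the outputs of two stacked convs along the channel dim."""
    def __init__(self):
        super(ConcatBlock, self).__init__()
        self.conv1 = nn.Conv3d(32, 64, kernel_size=3, stride=1, padding=1)
        self.bn1 = nn.BatchNorm3d(64)
        self.relu1 = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv3d(64, 128, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        out1 = self.conv1(x)
        out2 = self.conv2(self.relu1(self.bn1(out1)))
        return torch.cat([out1, out2], dim=1)   # 64 + 128 = 192 channels

model = nn.Sequential()
model.add_module('concat1', ConcatBlock())
x = torch.randn(1, 32, 8, 16, 16)
print(model(x).shape)   # torch.Size([1, 192, 8, 16, 16])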
st98055
I am using PyTorch on Windows 10, with Python 3.6.6 and CUDA 9.0. I would like to do batched inversion of matrices. Since this is not supported by torch.inverse, I use the following:

def b_inv(b_mat):
    eye = b_mat.new_ones(b_mat.size(-1)).diag().expand_as(b_mat)
    b_inv, _ = torch.gesv(eye, b_mat)
    return b_inv

cf. https://stackoverflow.com/questions/46595157/how-to-apply-the-torch-inverse-function-of-pytorch-to-every-sample-in-the-batc 17

But I get the following error:

torch.gesv: MAGMA library not found in compilation. Please rebuild with MAGMA.

What does that mean? Does it mean PyTorch has to be built from source to use MAGMA?
st98057
The above issue was happening using Pytorch installed using pip. I switched to a Conda installation, using the Python 3.7/Cuda 9.0 version. And this solved the problem for me (I don’t know why exactly…).
st98058
Dear all: The algorithm in MAML (https://arxiv.org/abs/1703.03400 1) includes outer- and inner-loop gradient computations. Current implementations write the network as:

import torch
from torch import nn
from torch import optim
from torch.nn import functional as F

class Model:
    def __init__(self):
        self.vars = [nn.Parameter(torch.Tensor(3, 3)), nn.Parameter(torch.Tensor(3))]

    def forward(self, x, vars):
        if vars is None:
            vars = self.vars
        w, b = vars
        x = F.linear(x, w, b)
        return x

Notably, we need to create every tensor by hand and then build the network with F.linear, F.conv2d, F.relu, F.max_pool2d. This is time-consuming, and any modification of the current network requires rewriting the hand-written code. Does anyone have an elegant way to achieve a MAML-style network?
st98059
There is a discussion 36 about MAML pointing to some implementations using the functional API. Would they work for you?
st98060
@ptrblck, yes. What I mean by troublesome coding is writing the forward pass with the functional API line by line. I am looking for some way to achieve this while avoiding writing all of that by hand.
st98061
Model specs (RGB images): [image: filter.jpg (709×621)]

I built:

class Model(nn.Module):
    def __init__(self, num_classes=5):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2)
        self.lu1 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv3 = nn.Conv2d(96, 192, kernel_size=5, padding=2, stride=1)
        self.lu3 = nn.ReLU()
        self.pool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv5 = nn.Conv2d(192, 288, kernel_size=3, stride=1, padding=1)
        self.lu5 = nn.ReLU()
        self.conv6 = nn.Conv2d(288, 288, kernel_size=3, stride=1, padding=1)
        self.lu6 = nn.ReLU()
        self.conv7 = nn.Conv2d(288, 192, kernel_size=3, stride=1, padding=1)
        self.lu7 = nn.ReLU()
        self.pool8 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.fc9 = nn.Linear(in_features=192, out_features=4096)
        self.fc10 = nn.Linear(in_features=4096, out_features=4096)
        self.fc11 = nn.Linear(in_features=4096, out_features=5)
        self.fc = nn.LogSoftmax()

    def forward(self, input):
        output = self.conv1(input)
        output = self.lu1(output)
        output = self.pool2(output)
        output = self.conv2d(output)
        output = self.lu3(output)
        output = self.pool4(output)
        output = self.conv5(output)
        output = self.lu5(output)
        output = self.conv6(output)
        output = self.lu6(output)
        output = self.conv7(output)
        output = self.lu7(output)
        output = self.pool8(output)
        output = self.fc9(output)
        output = self.fc10(output)
        output = self.fc11(output)
        output = self.fc(output)
        return output

I'm a newbie and I don't know whether this is correct. Please help me. Thank you!
st98063
Hi, Yes, they look like the same thing. Be careful with the loss you use: if you use CrossEntropyLoss, it already contains the LogSoftmax, so you will need to remove it from your model.
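For illustration, a quick sketch of the two equivalent setups (the logits and targets are made up):

import torch
import torch.nn as nn

logits = torch.randn(4, 5)            # raw scores for 5 classes
targets = torch.tensor([0, 3, 1, 4])

# Option 1: no LogSoftmax in the model, use CrossEntropyLoss on raw logits
loss1 = nn.CrossEntropyLoss()(logits, targets)

# Option 2: keep LogSoftmax in the model and use NLLLoss instead
log_probs = nn.LogSoftmax(dim=1)(logits)
loss2 = nn.NLLLoss()(log_probs, targets)

print(torch.allclose(loss1, loss2))  # True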
st98064
Thanks @albanD. I have a bug going from layer 8 to layer 9. Layer 9 is a fully connected layer with 4096 neurons.

[image: error.jpg (1612×472)]

My code:

class Model(nn.Module):
    def __init__(self, num_classes=5):
        super(Model, self).__init__()
        # [256,256,3] (conv1)--> [63,63,96] (pool2)--> [31,31,96]
        self.conv1 = nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2)
        self.lu1 = nn.ReLU()
        self.pool2 = nn.MaxPool2d(kernel_size=3, stride=2)
        # [31,31,96] (conv3)--> [31,31,192] (pool4)--> [15,15,192]
        self.conv3 = nn.Conv2d(96, 192, kernel_size=5, padding=2, stride=1)
        self.lu3 = nn.ReLU()
        self.pool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        # [15,15,192] (conv5)--> [15,15,288]
        self.conv5 = nn.Conv2d(192, 288, kernel_size=3, stride=1, padding=1)
        self.lu5 = nn.ReLU()
        # [15,15,288] (conv6)--> [15,15,288]
        self.conv6 = nn.Conv2d(288, 288, kernel_size=3, stride=1, padding=1)
        self.lu6 = nn.ReLU()
        # [15,15,288] (conv7)--> [15,15,192] (pool8)--> [7,7,192]
        self.conv7 = nn.Conv2d(288, 192, kernel_size=3, stride=1, padding=1)
        self.lu7 = nn.ReLU()
        self.pool8 = nn.MaxPool2d(kernel_size=3, stride=2)
        # [7,7,192]
        self.fc9 = nn.Linear(in_features=192, out_features=4096)
        self.fc10 = nn.Linear(in_features=4096, out_features=4096)
        self.fc11 = nn.Linear(in_features=4096, out_features=5)

    def forward(self, input):
        output = self.conv1(input)
        output = self.lu1(output)
        output = self.pool2(output)
        output = self.conv3(output)
        output = self.lu3(output)
        output = self.pool4(output)
        output = self.conv5(output)
        output = self.lu5(output)
        output = self.conv6(output)
        output = self.lu6(output)
        output = self.conv7(output)
        output = self.lu7(output)
        output = self.pool8(output)
        output = self.fc9(output)
        output = self.fc10(output)
        output = self.fc11(output)
        return output

epoch_nums = 60
model = Model()
optimizer = Adam(model.parameters(), lr=5e-8, weight_decay=5e-1)
loss_fn = nn.CrossEntropyLoss()
total_step = len(dataset_loader)

for epoch in range(epoch_nums):
    for i, (images, labels) in enumerate(dataset_loader):
        optimizer.zero_grad = 0
        outputs = model(images)
        loss = loss_fn(outputs, labels)
        loss.backward()
        optimizer.step()
st98065
The image sizes in your comments do not correspond to how they are stored in PyTorch: the channel dimension comes before the spatial dimensions, i.e. batch_size x nb_channels x height x width. Also, I am not sure I understand what you want to do in the convolutional layers; I think you want the in_features for fc9 to be 192*7*7=9408. You can do that by adding output = output.view(output.size(0), -1) in the forward pass and changing the in_features when you declare the layer.
st98066
# The last convolutional layer returns a 4D output
output = self.pool8(output)
# Reshape to a 2D tensor to work with Linear layers
output = output.view(output.size(0), 7*7*192)
# Do the Linear operations now
output = self.fc9(output)
st98067
Hi, I am new to PyTorch and I am currently working on an image classification project in which I need to use the ResNeXt-50 provided by FB 1. At first I had some problems loading the GPU-trained model on my CPU-only machine, but I used the built-in PyTorch ResNet-50 implementation and it works fine. However, I need to extract channel activations from layers 8, 18, 31 and 43 for further processing. I have read many topics on StackOverflow and here, and read the documentation, but I did not find a similar problem about how to extract intermediate values from a pre-trained model. Of course there are some, but they usually pertain to 'simpler' nets. For example: 1 2, 2 3. I tried the suggestions from many posts, but all to no avail. Could you give me some tips on how to approach this? Thanks in advance. I am using PyTorch 0.4.1.post2.
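One common approach (offered here only as a sketch with placeholder layers, not as an answer from this thread) is to register forward hooks on the modules whose activations you need:

import torch
import torchvision.models as models

model = models.resnet50(pretrained=True)
model.eval()

activations = {}

def save_activation(name):
    def hook(module, input, output):
        activations[name] = output.detach()
    return hook

# pick the modules you care about; these two are just examples
model.layer1.register_forward_hook(save_activation('layer1'))
model.layer3.register_forward_hook(save_activation('layer3'))

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    _ = model(x)

print({k: v.shape for k, v in activations.items()})
# e.g. layer1 -> [1, 256, 56, 56], layer3 -> [1, 1024, 14, 14]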