st98668
|
We don’t have a cross_entropy_loss per se, but as @ptrblck mentioned, nn.CrossEntropyLoss()(input, target) is the same as cross_entropy(input, target), which per https://github.com/pytorch/pytorch/blob/master/torch/nn/functional.py#L1671 is the same as nll_loss(log_softmax(input, 1), target), so your code in C++ would be something like
auto input = torch::randn({3, 5}, torch::requires_grad(true));
auto target = torch::empty(3, torch::kLong).random_(5);
auto output = torch::nll_loss(torch::log_softmax(input, /*dim=*/1), target);
output.backward();
Can you try that?
|
st98669
|
Hi, recently I read a paper named “Large Scale GAN Training for High Fidelity Natural Image Synthesis”, where the scale and bias value are not learned by training, but come from Generation Conditions (Class label embedding and Noise z).
However, I didn’t find any cues to implement this trick/architecture after reading Pytorch documentation about Batchnorm. Do I need to implement it from scratch?
Thanks in advance :>
|
st98670
|
Solved by tom in post #2
There are better experts to answer questions about this paper on the forums, but for the batch norm:
You can reuse the existing parts and add your own, for inspirations you might look at the conditional batch norm comments, and this one in particular.
Best regards
Thomas
|
st98671
|
There are better experts to answer questions about this paper on the forums, but for the batch norm:
You can reuse the existing parts and add your own; for inspiration you might look at the conditional batch norm discussions, and this one in particular.
Best regards
Thomas
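For reference, a minimal sketch of what such a conditional batch norm could look like (the class name ConditionalBatchNorm2d is just illustrative; in the BigGAN paper the conditioning vector also includes the noise z, which you could project and combine with the class embedding before predicting gamma and beta):

import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features, num_classes):
        super(ConditionalBatchNorm2d, self).__init__()
        # reuse the built-in normalization, but without its own affine parameters
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        # predict one gamma and one beta per feature map from the class label
        self.embed = nn.Embedding(num_classes, num_features * 2)

    def forward(self, x, y):
        out = self.bn(x)
        gamma, beta = self.embed(y).chunk(2, dim=1)
        gamma = gamma.view(-1, out.size(1), 1, 1)
        beta = beta.view(-1, out.size(1), 1, 1)
        return gamma * out + beta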
|
st98672
|
Did you provide this argument?
Could you post your code snippet so that we could have a look?
|
st98673
|
import numpy as np
import random
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable
# Creating the architecture of the Neural Network
class Network(nn.Module):
    def __init__(self, input_size, nb_action):
        super(Network, self).__init__()
        self.input_size = input_size
        self.nb_action = nb_action
        self.fc1 = nn.Linear(input_size, 30)
        self.fc2 = nn.Linear(30, nb_action)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        q_values = self.fc2(x)
        return q_values
# Implementing Experience Replay
class ReplayMemory(object):
    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []

    def push(self, event):
        self.memory.append(event)
        if len(self.memory) > self.capacity:
            del self.memory[0]

    def sample(self, batch_size):
        # (state1, state2), (action1, action2), (reward1, reward2)...
        samples = zip(*random.sample(self.memory, batch_size))
        # torch.cat aligns everything as (state, action, reward)
        return map(lambda x: Variable(torch.cat(x, 0)), samples)
# Implementing Deep Q Learning
class Dqn():
    def __init__(self, input_size, nb_action, gamma):
        self.gamma = gamma
        self.reward_window = []
        self.model = Network(input_size, nb_action)
        self.memory = ReplayMemory(100000)
        self.optimizer = optim.Adam(self.model.parameters(), lr=0.001)
        # Vector of dimension 6. The 3 signals of the 3 sensors and orientation / -orientation
        # (batch_size, left, straight, right, orientation, -orientation)
        # batch_size dimension is added with unsqueeze
        self.last_state = torch.Tensor(input_size).unsqueeze(0)
        # 0, 1 or 2 (indexed), see the var action2rotation in map.py
        self.last_action = 0
        self.last_reward = 0

    def select_action(self, state):
        """
        Take the input state (the 3 Q-values) and return the best possible action
        :param state: The input state of the neural net (left, straight, right, orientation, -orientation)
        :return: The best possible action probabilities
        """
        # volatile = True : We don't want the gradient in the graph of all the computation of the nn module
        # 100 is the temperature parameter or the certainty about the next action to play
        # The closer it is to 0 the less sure the nn will be
        # to take the action. Far from 0, the more sure it will be about the action to play
        # ex with T = 3: softmax([0.04, 0.11, 0.85]) => softmax([1,2,3] * 3) = [0, 0.02, 0.98]
        probs = F.softmax(self.model(Variable(state, volatile=True)) * 100)  # T=100
        # random draw from the probabilities
        action = probs.multinomial()
        # Retrieve the action at index [0, 0]
        return action.data[0, 0]

    def learn(self, batch_state, batch_next_state, batch_reward, batch_action):
        """
        Train the nn
        :param batch_state:
        :param batch_next_state:
        :param batch_reward:
        :param batch_action:
        :return:
        """
        outputs = self.model(batch_state).gather(1, batch_action.unsqueeze(1)).squeeze(1)
        next_outputs = self.model(batch_next_state).detach().max(1)[0]
        target = self.gamma * next_outputs + batch_reward
        # td = temporal difference
        td_loss = F.smooth_l1_loss(outputs, target)
        # Reinitialize the Adam optimizer from the constructor
        self.optimizer.zero_grad()
        # Backprop, retain_variables=True to free the memory
        td_loss.backward(retain_variables=True)
        self.optimizer.step()

    def update(self, reward, new_signal):
        # The new states are the current signals
        new_state = torch.Tensor(new_signal).float().unsqueeze(0)
        self.memory.push(
            (self.last_state, new_state,
             torch.LongTensor([int(self.last_action)]), torch.Tensor([self.last_reward])))
        action = self.select_action(new_state)
        if len(self.memory.memory) > 100:
            batch_state, batch_next_state, batch_action, batch_reward = self.memory.sample(100)
            self.learn(batch_state, batch_next_state, batch_reward, batch_action)
        self.last_action = action
        self.last_state = new_state
        self.last_reward = reward
        self.reward_window.append(reward)
        if len(self.reward_window) > 1000:
            del self.reward_window[0]
        return action

    def score(self):
        # +1 to avoid dividing by 0
        return sum(self.reward_window) / (len(self.reward_window) + 1.)

    def save(self):
        torch.save({'state_dict': self.model.state_dict(),
                    'optimizer': self.optimizer.state_dict(),
                    }, 'last_brain.pth')

    def load(self):
        if os.path.isfile('last_brain.pth'):
            print("=> loading checkpoint... ")
            checkpoint = torch.load('last_brain.pth')
            self.model.load_state_dict(checkpoint['state_dict'])
            self.optimizer.load_state_dict(checkpoint['optimizer'])
            print("done !")
        else:
            print("no checkpoint found...")
|
st98674
|
Is this your code? It looks different from the official reinforcement learning tutorial.
I’m wondering how the error message is created, as obviously the arguments are missing, but it also seems you are trying to call multinomial on a tensor (probs in select_action), which is not a member of the tensor class.
|
st98675
|
Hi all,
I have a pretrained embedding matrix and used self.embedding.from_pretrained(emb_vecs) to copy it; its shape is (172803, 300).
However, when I try to get the embedding for the following shaped tensor
print(q.size()) # output is (32, 9)
q_embedding = self.embedding(q)
I get such errors
RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352
This seems strange because the official documentation about Embedding tells me that the input tensor can be of arbitrary shape.
I also ran the following simple experiment:
Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> embedding = torch.nn.Embedding(10, 3)
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> embedding(input)
tensor([[[-0.9792, -1.5882, -0.0207],
[ 1.9464, -0.6515, -1.1061],
[ 1.2522, -0.2758, 0.3255],
[ 0.2748, -1.6323, 0.0761]],
[[ 1.2522, -0.2758, 0.3255],
[ 1.3587, -0.9372, 0.9779],
[ 1.9464, -0.6515, -1.1061],
[-0.3707, -0.4403, -0.4675]]], grad_fn=<EmbeddingBackward>)
>>> input1 = torch.LongTensor([[1,2,4,5],[4,3,2,9],[13,14,15,16]])
>>> embedding(input1)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/fma/tensorflow/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/fma/tensorflow/pytorch/local/lib/python2.7/site-packages/torch/nn/modules/sparse.py", line 110, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/fma/tensorflow/pytorch/local/lib/python2.7/site-packages/torch/nn/functional.py", line 1110, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352
>>> embedding = torch.nn.Embedding(17, 3) # Must be 17, smaller value will cause error
>>> embedding(input1)
tensor([[[-0.1267, 0.5442, -1.5968],
[-0.2980, -0.8039, 0.7393],
[ 0.8526, -1.3021, -0.9185],
[-0.8957, 1.2497, -1.1549]],
[[ 0.8526, -1.3021, -0.9185],
[ 0.4550, -0.5091, -0.5557],
[-0.2980, -0.8039, 0.7393],
[-0.2894, -0.2622, -0.9497]],
[[-0.2754, -0.8513, -0.7684],
[-0.9200, -1.2583, 2.5170],
[ 0.7666, 0.4166, 0.8420],
[-0.3305, 0.9930, 0.1318]]], grad_fn=<EmbeddingBackward>)
That looks confusing; what is the exact rule for using nn.Embedding?
Thanks
|
st98676
|
Solved by biggerfish in post #2
Never mind, I forgot to delete an old vocab_size variable, which is much smaller than the actual size of vocabulary. After I delete it, the error is gone.
|
st98677
|
Never mind, I forgot to delete an old vocab_size variable, which is much smaller than the actual size of vocabulary. After I delete it, the error is gone.
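For anyone else landing here, the rule illustrated by the experiment above is simply that every index has to be strictly smaller than num_embeddings, regardless of the input shape; a minimal sketch:

import torch
import torch.nn as nn

ids = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9], [13, 14, 15, 16]])

# num_embeddings must be at least max index + 1, whatever the shape of the input
vocab_size = int(ids.max().item()) + 1   # 17 for this input
embedding = nn.Embedding(vocab_size, 3)
out = embedding(ids)                      # shape: (3, 4, 3)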
|
st98678
|
Hi community,
I am considering whether I need to update torch from 0.4.1 to 0.4.1.post2. I did not find release notes for this post version; does anyone have an idea what’s new in it? Thanks!
|
st98679
|
Building from source, and I can’t get the Jetson TX2, an ARM64 device, to compile properly.
I followed the recommendations from NVIDIA’s forums here:
gist.github.com
https://gist.github.com/dusty-nv/ef2b372301c00c0a9d3203e42fd83426
pytorch_jetson_install.sh
#!/bin/bash
#
# pyTorch install script for NVIDIA Jetson TX1/TX2,
# from a fresh flashing of JetPack 2.3.1 / JetPack 3.0 / JetPack 3.1
#
# for the full source, see jetson-reinforcement repo:
# https://github.com/dusty-nv/jetson-reinforcement/blob/master/CMakePreBuild.sh
#
# note: pyTorch documentation calls for use of Anaconda,
# however Anaconda isn't available for aarch64.
This file has been truncated.
I can get it to compile on Python2.7, but not Python3. I had to disable NCCL support, which is for desktop CUDA only, and that got me to about 27% of the build complete. You can see the NVIDIA conversation here: https://devtalk.nvidia.com/default/topic/1042821/pytorch-install-broken/?offset=7#5291321
For now, PyTorch keeps dying at the same spot when compiling ONNX and onnx-tensorrt, obviously both of which would be required to deploy for production inferencing on the TX2. The latest errors are:
third_party/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/build.make:134: recipe for target ‘third_party/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/onnx2trt_utils.cpp.o’ failed
make[2]: *** [third_party/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/onnx2trt_utils.cpp.o] Error 1
CMakeFiles/Makefile2:1488: recipe for target ‘third_party/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/all’ failed
make[1]: *** [third_party/onnx-tensorrt/CMakeFiles/nvonnxparser.dir/all] Error 2
[ 27%] Built target python_copy_files
Makefile:160: recipe for target ‘all’ failed
make: *** [all] Error 2
Failed to run ‘bash …/tools/build_pytorch_libs.sh --use-cuda --use-nnpack caffe2 libshm gloo c10d THD’
I’d be open to not compiling it at all and simply using an ONNX-exported model, but I have no idea how that would fit into my Python production code. After all, the model is only part of the PyTorch code; I still use torch for translating image formats, evaluations, etc.
Not sure where to go from here, but I’d really like to stick with this library and figure out the inferencing for edge devices. Combined with fast.AI’s recent v1.0 release, there’s some pretty amazing work that can be done.
|
st98680
|
Solved by ptrblck in post #4
It looks like ONNX has some problems with protobuf.
Did you pull from master before trying to rebuild PyTorch?
If so, could you call git submodule update --init --recursive and try to build again?
I ran into similar issues when some submodules weren’t properly updated.
|
st98681
|
Could you post or upload the complete build log? The error seems to be missing in the current snippet.
|
st98682
|
It’s a LOT of code, so the log is too long to post here. You can view the read-only text file here:
https://www.dropbox.com/s/5qklhphl4sjdg7a/log.txt?dl=0
Here’s the setup command I ran.
sudo python3 setup.py install
|
st98683
|
It looks like ONNX has some problems with protobuf.
Did you pull from master before trying to rebuild PyTorch?
If so, could you call git submodule update --init --recursive and try to build again?
I ran into similar issues when some submodules weren’t properly updated.
|
st98684
|
OK, so I completely wiped it and followed the standard PyTorch install directions instead of the NVIDIA-recommended pytorch_jetson_install.sh from Dusty.
The only thing is that I added these changes, as recommended by NVIDIA, since NCCL is only for desktop CUDA GPUs:
diff --git a/CMakeLists.txt b/CMakeLists.txt
index f7b24b728..f75f610ed 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -93,7 +93,7 @@ option(USE_LMDB "Use LMDB" ON)
option(USE_METAL "Use Metal for iOS build" ON)
option(USE_MOBILE_OPENGL "Use OpenGL for mobile code" ON)
option(USE_NATIVE_ARCH "Use -march=native" OFF)
-option(USE_NCCL "Use NCCL" ON)
+option(USE_NCCL "Use NCCL" OFF)
option(USE_SYSTEM_NCCL "Use system-wide NCCL" OFF)
option(USE_NNAPI "Use NNAPI" OFF)
option(USE_NNPACK "Use NNPACK" ON)
diff --git a/setup.py b/setup.py
index 99817f346..e39042b83 100644
--- a/setup.py
+++ b/setup.py
@@ -195,6 +195,7 @@ IS_LINUX = (platform.system() == 'Linux')
BUILD_PYTORCH = check_env_flag('BUILD_PYTORCH')
USE_CUDA_STATIC_LINK = check_env_flag('USE_CUDA_STATIC_LINK')
RERUN_CMAKE = True
+USE_NCCL = False
NUM_JOBS = multiprocessing.cpu_count()
max_jobs = os.getenv("MAX_JOBS")
When I do that, don’t enable TensorRT support, and run the install with Python3 now, it does compile on the fresh install! I do believe I will need the TensorRT support eventually on the TX2, I’ll keep plugging away on that.
For now, however, when trying to import it into a Python shell, I get this:
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nvidia/pytorch/torch/__init__.py", line 84, in <module>
from torch._C import *
ImportError: No module named 'torch._C'
>>> exit()
|
st98685
|
Good to hear you could compile it!
Are you creating the Python shell in your build directory?
If so, could you just switch the directory and try it again?
|
st98686
|
Duh! Thank you!!
So, it compiles without TensorRT support for now.
I think it’s enough to start exploring the transfer of PyTorch models via ONNX on desktop GPUs through TensorRT 3.0 to the Jetson TX2.
Thank you so much for the support here and on Twitter @ptrblck !
|
st98687
|
You are welcome!
I’m glad it’s working for now and I’m sure we can manage to build it with TensorRT in the next iteration.
Let me know how your experiments went deploying PyTorch through ONNX!
|
st98688
|
I got it working on AGX Xavier. I’ll take a crack at TX2 and fix any structural problems (and commit patches to master)
|
st98689
|
I’ve been scrolling through the PyTorch CIFAR-10 convolutional neural network tutorial and I find it strange that the softmax activation function wasn’t used in the output layer of the forward(self, x) function:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
Can anyone please explain why the softmax activation function wasn’t used here, unlike sigmoid, which is used in pretty much every NN dealing with binary classification?
Additionally, I would like to know why the _ underscore is used at the beginning of the following line:
_, predicted = torch.max(outputs, 1)
thanks in advance.
|
st98690
|
Solved by ptrblck in post #2
In a classification use case you can pass the raw logits (i.e. the model output without a non-linearity) to nn.CrossEntropyLoss. Internally F.log_softmax and nn.NLLLoss is called.
Alternatively you could add nn.LogSoftmax to out output layer and use nn.NLLLoss as the criterion.
The underscore is u…
|
st98691
|
In a classification use case you can pass the raw logits (i.e. the model output without a non-linearity) to nn.CrossEntropyLoss. Internally, F.log_softmax and nn.NLLLoss are called.
Alternatively, you could add nn.LogSoftmax to your output layer and use nn.NLLLoss as the criterion.
The underscore is used to throw away the returned value. torch.max would return the max value and the corresponding index. As we only need the index, the value is discarded. Alternatively you could also use torch.argmax.
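A quick check of that equivalence (a sketch with arbitrary shapes):

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 10)             # raw model output, no non-linearity applied
target = torch.randint(0, 10, (4,))     # class indices

loss_ce = nn.CrossEntropyLoss()(logits, target)
loss_nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), target)
print(torch.allclose(loss_ce, loss_nll))  # True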
|
st98692
|
Getting an out-of-memory error while calculating accuracy. The following code is used to calculate it:
out = model(x)
batch_acc = 0.0
pred = out.max(1)[1].unsqueeze(1)
batch_acc = ((y == pred).sum().item() * 1.0) / len(y)
|
st98693
|
If your GPU memory is nearly filled you might run OOM in the last compare operation.
You could push the tensors first to the CPU and calculate the accuracy later, as you are apparently storing them on the CPU anyway.
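A minimal sketch of that suggestion, with dummy tensors standing in for out and y from the snippet above:

import torch

out = torch.randn(8, 10)              # dummy model output (batch, num_classes)
y = torch.randint(0, 10, (8, 1))      # dummy targets, same layout as above

pred = out.max(1)[1].unsqueeze(1)

# move predictions and targets to the CPU before comparing, so the
# comparison result does not allocate additional GPU memory
batch_acc = (y.cpu() == pred.cpu()).sum().item() / len(y)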
|
st98694
|
Perhaps another workaround that I found to be working is this. It turns out this could be a more memory-efficient implementation in my case, and it also avoids the I/O time spent transferring tensors to the CPU.
val = out.max(1)[1]
batch_acc = val.eq_(y).sum().item()
|
st98695
|
Hi,
I’m trying to understand the process of semantic segmentation and I’m having trouble with the loss function. For simple classification networks the input to the loss function is usually a 1-dimensional tensor with size equal to the number of classes, but for semantic segmentation the target is also an image.
I have an input image of shape Inputs: torch.Size([1, 3, 224, 224]), which produces an output of shape Output: torch.Size([1, 32, 224, 224]). The target, on the other hand, has shape Targets: torch.Size([1, 1, 360, 480]).
There is a size mismatch; does that matter?
I tried the loss function criterion = torch.nn.BCEWithLogitsLoss(), but with this PyCharm just stalls and crashes.
The example I am following used CrossEntropyLoss2D() as shown here: https://github.com/ycszen/pytorch-segmentation/blob/master/loss.py
but when I use that I get an error with a warning that NLLLoss2d has been deprecated. Furthermore, there is no 2D loss function listed in the documentation: https://pytorch.org/docs/stable/nn.html
I tried resizing the target to make it the same size as the output, but that didn’t work either.
I would like to read more about loss functions for semantic segmentation but couldn’t find much help. Why am I having trouble with the loss functions? Is it because of the size mismatch? Since they are both images, do I need to write my own class to handle this?
Many thanks for any help/guidance.
|
st98696
|
Solved by ptrblck in post #2
Based on the output shape it looks like you have 32 different classes.
Your target shape, i.e. the segmentation mask, should have the shape [batch_size, 224, 224], and should contain the class indices as its values.
The spatial size mismatch between the target mask and the model output does matter…
|
st98697
|
Based on the output shape it looks like you have 32 different classes.
Your target shape, i.e. the segmentation mask, should have the shape [batch_size, 224, 224], and should contain the class indices as its values.
The spatial size mismatch between the target mask and the model output does matter, as you are trying to calculate the pixel-wise loss, i.e. each pixel prediction corresponds to the pixel target class.
You don’t have to use the *2d loss functions, as the vanilla loss functions now can take multi-dimensional tensors.
Here is a small dummy example for a segmentation use case:
import torch
import torch.nn as nn

batch_size = 1
c, h, w = 3, 10, 10
nb_classes = 5

x = torch.randn(batch_size, c, h, w)
target = torch.empty(batch_size, h, w, dtype=torch.long).random_(nb_classes)

model = nn.Sequential(
    nn.Conv2d(c, 6, 3, 1, 1),
    nn.ReLU(),
    nn.Conv2d(6, nb_classes, 3, 1, 1)
)

criterion = nn.CrossEntropyLoss()

output = model(x)
loss = criterion(output, target)
loss.backward()
|
st98698
|
I am probably misunderstanding something but:
In the docs of functional.normalize (https://pytorch.org/docs/stable/nn.html#torch.nn.functional.normalize) we can read:
Performs Lp normalization of inputs over specified dimension.
Does v = v / max(‖v‖_p, ε)
for each subtensor v over dimension dim of input. Each subtensor is flattened into a vector.
So if I do the following
import torch
import torch.nn.functional as F
x = torch.randn((4, 3, 32, 32))
x = F.normalize(x, dim=0, p=2)
I would expect that each subtensor along dim 0 (for instance x[0]) will have an L2 norm equal to 1.
However, this isn’t the case
torch.sqrt(torch.sum(x[0]**2)) # != 1
(I use pytorch 0.4.1 with CUDA 9.2)
|
st98699
|
Solved by ptrblck in post #2
The tensor is normalized over dimension dim, such that:
(x[:, 0, 0, 0])**2.sum() == 1
(x[:, 0, 0, 1])**2.sum() == 1
...
In your use case you could do the following:
x_ = F.normalize(x.view(x.size(0), -1), dim=1, p=2).view(x.size())
(x_[0]**2).sum() == 1
|
st98700
|
The tensor is normalized over dimension dim, such that:
(x[:, 0, 0, 0]**2).sum() == 1
(x[:, 0, 0, 1]**2).sum() == 1
...
In your use case you could do the following:
x_ = F.normalize(x.view(x.size(0), -1), dim=1, p=2).view(x.size())
(x_[0]**2).sum() == 1
|
st98701
|
I see thanks. In that case, isn’t the relevant part of the documentation a little bit misleading?
|
st98702
|
I’m not sure, as I’m not that familiar with the English mathematical description of such operations.
@tom is a great mathematician. Maybe he could put his 2 cents in.
|
st98703
|
Haha. Thanks.
I agree that the description is not as clear as it could be, but maybe it’s more the shaping that isn’t clear rather than the mathematical bits.
for each subtensor v over dimension dim of input.
Maybe it becomes clearer when you add the shape information: for a tensor of sizes (n_0, …, n_dim, …, n_k), each n_dim-element vector v along dimension dim is transformed as … and the equation.
Each subtensor is flattened into a vector, i.e. ‖v‖_p is not a matrix norm.
This sentence seems to be particularly misleading, and I would suggest striking it - given that the things that are normed are one-dimensional, how could it be a matrix norm.
Best regards
Thomas
Edit: P.S.: I made this a quick PR. Thank you, @alpapado, for your feedback on the documentation! It helps us improve.
|
st98704
|
In the class torch.nn.Dropout(p=0.5, inplace=False), why are the outputs scaled by a factor of 1/(1−p) during training? In the papers “Dropout: A Simple Way to Prevent Neural Networks from Overfitting” and “Improving neural networks by preventing co-adaptation of feature detectors”, the outputs of the dropout layer are not scaled by a factor of 1/(1−p).
|
st98705
|
Dropout is scaled by 1/p to keep the expected inputs equal during training and testing.
Have a look at this post for more information.
Where did you notice the 1/(1-p) scaling?
|
st98706
|
image.png (942×554)
It is here: https://pytorch.org/docs/0.3.0/nn.html#dropout
|
st98707
|
Sorry, that was my mistake!
From the original paper:
If a unit is retained with probability p during training, the outgoing weights of that unit are multiplied by p at test time as shown in Figure 2. This ensures that for any hidden unit the expected output (under the distribution used to drop units at training time) is the same as the actual output at test time.
Since PyTorch uses p as the drop probability (as opposed to the keep probability) you are scaling it with 1/(1-p).
|
st98708
|
In the paper, p is the retention probability. In PyTorch, p is the drop probability, so the corresponding retention probability is 1-p, meaning a unit is retained with probability 1-p during training. The retained units should be unchanged, so why are they scaled by a factor of 1/(1-p)? During testing, each unit should then also be multiplied by (1-p).
|
st98709
|
That’s correct! You would have two options.
The option you mention is to use the keep probability (1-p) during training and just multiply with it during testing.
This would however add an additional operation during test time.
We could avoid this operation by scaling during training.
So the second option is to scale the units by the reciprocal of the keep probability during training, which yields the same expected result and is, as far as I know, the standard implementation of dropout.
As training is expensive anyway, we can add this scaling there and keep the inference as fast as possible.
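A small illustration of the two conventions (purely a sketch, not the actual nn.Dropout implementation):

import torch

p = 0.5                                    # drop probability, as in nn.Dropout(p)
x = torch.ones(4)
mask = (torch.rand_like(x) > p).float()    # 1 = keep, 0 = drop

# PyTorch-style ("inverted") dropout: scale by 1/(1 - p) during training
train_out = x * mask / (1 - p)   # expected value equals x
test_out = x                     # nothing to do at test time

# paper-style dropout: no scaling during training, scale by the keep prob at test time
train_out_paper = x * mask
test_out_paper = x * (1 - p)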
|
st98710
|
Hi all:
Suppose I have a 3D tensor x shaped 2 x 247 x 4 and a 1D index tensor of length 2, y = [a, b], whose length corresponds to the size of the first dimension of the 3D tensor. What I want to do is index out x[0, a, :] and x[1, b, :] and have my output be of shape 2 x 4. How would I do that? And of course, in general, I want this to apply to a 3D tensor of shape batch_size x 247 x 4 with a 1D index of length batch_size. Please share your thoughts! Thank you!
Example:
Screen Shot 2018-10-22 at 9.23.21 AM.png (2034×772)
|
st98711
|
I don’t quite understand your use case.
Let’s assume you have a tensor x = torch.randn(2, 247, 4).
Now you would like to create an index tensor y containing two values? What do you mean by “whole length corresponds to the size of the 1st dim”?
Do you have two values in y or is it of shape [2, 247]?
If you have two values and index x with y, you would get a tensor of shape [2, 2, 4].
Could you share some pseudo code so that I could have a look and understand the use case a bit better?
|
st98712
|
Thanks for the reply! Sorry I don’t know if I can format and include code directly into my post, but I added an example using screenshot.
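For reference, a sketch of the kind of batched indexing described in the question (assuming y holds one row index per batch element):

import torch

batch_size = 2
x = torch.randn(batch_size, 247, 4)
y = torch.tensor([5, 100])                                 # one row index per batch element

# advanced indexing: picks x[0, y[0], :] and x[1, y[1], :]
out = x[torch.arange(batch_size, dtype=torch.long), y]    # shape: (batch_size, 4)

# equivalent formulation with gather
out_gather = x.gather(1, y.view(-1, 1, 1).expand(-1, 1, x.size(2))).squeeze(1)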
|
st98713
|
I’m interested in examining the implementation of some PyTorch functions (in my case especially torch.histc()), but I feel overwhelmed by the code base. I tracked their first occurrence to these lines in torch.__init__.py:
for name in dir(_C._VariableFunctions):
    globals()[name] = getattr(_C._VariableFunctions, name)
So it seems the functions are defined within the C extension. I then looked through every file in the torch package (by algorithm of course) to find the usage of the string histc. The following files turned up:
lib/include/TH/generic/THTensorMath.h
lib/include/ATen/Functions.h
lib/include/ATen/TensorMethods.h
lib/include/ATen/Tensor.h
lib/include/ATen/Type.h
lib/include/ATen/CPUFloatType.h
lib/include/ATen/CPUDoubleType.h
I’m not that proficient in C, but as far as I can tell only the prototypes are defined in these files, which makes sense for header files.
Hence I’m still on the hunt for the actual implementation. Can someone point me in the right direction?
|
st98714
|
Hi,
The lib folder is something created when compiling and in particular you are only getting the headers here.
The histc CPU implementation is here. The THTensorApply macros are defined here, but they are quite involved to read.
|
st98715
|
Hi,
thanks for linking the implementation. I only looked through my local copy of torch, which does not include the aten folder.
|
st98716
|
Hi,
If you installed the binary, the C implementations are not shipped, only their compiled versions. You should use the GitHub repo to look for them.
|
st98717
|
Hi,
I have the same code running on two platforms:
platform1: Quadro P4000 (8119 MB RAM)
platform2: Titan V (12033 MB RAM)
I’m using mini batches of size 400 on both platforms.
While on platform1 everything works fine (consumes 7295/8119 MB, 99% volatile GPU utilization),
on platform2 it runs into a CUDA out-of-memory error while only consuming 1129/12033 MB (0% volatile GPU utilization) and stops.
Traceback (most recent call last): │
File "run_meenet1.py", line 151, in <module> │
criterion=criterion) │
File "/home/ubuntu/projectSSML/meenet/modules/helpers.py", line 90, in train_│
batchwise │
loss.backward() │
File "/home/ubuntu/.local/lib/python3.7/site-packages/torch/tensor.py", line │
93, in backward │
torch.autograd.backward(self, gradient, retain_graph, create_graph) │
File "/home/ubuntu/.local/lib/python3.7/site-packages/torch/autograd/__init__│
.py", line 90, in backward │
allow_unreachable=True) # allow_unreachable flag │
RuntimeError: CUDA error: out of memory
The scripts are exactly the same.
What might be going wrong?
Thanks
|
st98718
|
I have a question related to Pytorch 1.0. If I want to convert to ONNX, previous versions only supported a method which essentially exported a trace of the actual flow. This was fine for certain topologies (e.g. CNNs), but ones with loops (e.g. NMT) would end up exporting unrolled loops.
I read something about a “script mode” and that there is some support now for simple loops (among other new features). Does this mean it will be able to capture Python branch/loop constructs or do we have to recode these constructs into “PyTorch loops” (if there is such a thing) in order to properly capture the loop construct (according to ONNX standard) without unrolling loops?
|
st98719
|
I want to compile PyTorch with custom CMake options. More specifically, I want to set paths for the Python libraries. It looks like setup.py does not use any environment variables for setting the Python libraries for compilation. I also do not want to change CMakeLists.txt manually. I wonder if there is an easier way of doing this?
|
st98720
|
Hi,
My code works fine with PyTorch 0.3.1. I used PyTorch to make a differentiable kinematics model in my recent conference paper: https://arxiv.org/pdf/1804.07873.pdf
However, when I upgrade to Pytorch 0.4.1, I get the following error:
File "trainer_convnet.py", line 697, in train_convnet
loss.backward()
File "/usr/local/lib/python2.7/dist-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/usr/local/lib/python2.7/dist-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
This only happens in my differentiable kinematics model. It’s annoying that my code would be compatible with an earlier version of PyTorch but not a later version. Also, from this error, I’m not sure how to debug my code. I plan on just using 0.3.1 going forward, but perhaps this problem would be of help to the PyTorch developers.
Further, I’ve got a problem with using PyTorch 0.3.1 conflicting with Keras. When I have PyTorch 0.4.1 installed, Keras works fine. When I have PyTorch 0.3.1 installed, I can’t import anything from Keras. I’d like to use Keras to import a VGG16 image extractor before my PyTorch kinematics network, but because of this issue I can’t. Keras gives the following error:
Traceback (most recent call last):
File "trainer_convnet.py", line 61, in <module>
from keras.applications.vgg16 import preprocess_input
File "/usr/local/lib/python2.7/dist-packages/keras/__init__.py", line 3, in <module>
from . import utils
File "/usr/local/lib/python2.7/dist-packages/keras/utils/__init__.py", line 6, in <module>
from . import conv_utils
File "/usr/local/lib/python2.7/dist-packages/keras/utils/conv_utils.py", line 9, in <module>
from .. import backend as K
File "/usr/local/lib/python2.7/dist-packages/keras/backend/__init__.py", line 89, in <module>
from .tensorflow_backend import *
File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 5, in <module>
import tensorflow as tf
File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 22, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: dlopen: cannot load any more object with static TLS
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/install_sources#common_installation_problems
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
If anyone has suggestions to these issues, I’d appreciate it.
Thanks,
Henry M. Clever
|
st98721
|
Could you post a (small) executable code snippet throwing the RuntimeError?
I’m not sure, what’s the cause of the ImportError, but apparently your setup is working with 0.4.1, so it shouldn’t be an issue anymore once we solve the first problem.
|
st98722
|
I’m interested in performing the code generation for running a pytorch model in C/C++. Specifically I would like this code to run on a bare metal device (no OS) and without any dependencies on things that an OS would typically provide such as threads or file IO. Further, it would be great to have this compile with as reduced a dynamic memory footprint as possible (as much static RAM usage as possible). Is this currently possible or a future goal of the Pytorch C++ frontend? Also are there any metrics on memory usage for loading pytorch header/libraries/dependencies in C++?
|
st98723
|
I just came across Glow which seems to do what I’m after?
github.com
pytorch/glow/blob/master/docs/AOT.md
# Creating standalone executable bundles
This document provides a short description about producing ahead-of-time
compiled executable bundles. The motivation for this work is to remove the cost
of compile time by allowing the users of Glow to compile the package ahead of
time.
## Overview
A bundle is a self-contained compiled network model that can be used to execute
the model in a standalone mode. After following the instructions in this
document and the [Makefile](../examples/bundles/resnet50) in the example
directory you will be able to compile convolutional neural networks into small
executables. Example:
```
$make
...
$cd build
This file has been truncated.
|
st98724
|
I was about to post the same, but I wasn’t sure how “low level” you need to go.
Glow builds and runs on macOS and Linux. The software depends on a modern C++ compiler that supports C++11, on CMake, LLVM, protocol buffers, and libpng.
|
st98725
|
Everywhere I checked, I saw the note:
To use multi-threading with numpy random in the DataLoader, use the worker_init_fn with torch.initial_seed()
I’m trying to understand exactly what’s happening with this code snippet:
worker_init_fn=lambda _: np.random.seed(int(torch.initial_seed()) % (2**32 - 1))
I know that np.random.seed() requires an integer. So converting the long from torch.initial_seed() and taking it modulo 2^32 - 1 gives a seed between 0 and 2^32 - 1.
Does this mean that each worker is initialized with this number as seed?
Or does it mean that each worker is initialized with this number + worker_id as seed?
And does the worker_id change between epochs? (I’m thinking it should, as it seems to be a new thread called by the main python thread…?)
|
st98726
|
SreenivasVRao:
each worker is initialized with this number + worker_id as seed?
it means this
SreenivasVRao:
And does the worker_id change between epochs?
no. but each worker is seeded using base_seed + worker_id, and base_seed is different every epoch.
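A sketch of what such a worker_init_fn typically looks like (illustrative; the DataLoader passes worker_id into the function):

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def worker_init_fn(worker_id):
    # inside a worker, torch.initial_seed() already equals base_seed + worker_id,
    # and base_seed changes every epoch, so every worker/epoch gets a fresh numpy seed
    np.random.seed(int(torch.initial_seed()) % (2 ** 32 - 1))

dataset = TensorDataset(torch.randn(100, 3))
loader = DataLoader(dataset, batch_size=10, num_workers=2, worker_init_fn=worker_init_fn)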
|
st98727
|
I am trying to add a new function via ATen's native functions.
However, the compilation fails.
The only changes I have made are:
- func: test_fn(Tensor input) -> Tensor
in aten/src/ATen/native/native_functions.yaml
Tensor test_fn(const Tensor & input) {
  return at::zeros_like(input);
}
in aten/src/ATen/native/Embedding.cpp
Link to the git diff of the changes.
I am adding these on top of this master.
Am I missing something in the process?
This is the error log,
[ 91%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_0.cpp.o
[ 91%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_1.cpp.o
[ 91%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_2.cpp.o
[ 91%] Building CXX object caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_3.cpp.o
/home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/generated/VariableType_0.cpp: In member function ‘virtual at::Tensor torch::autograd::VariableType::test_fn(const at::Tensor&) const’:
/home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/generated/VariableType_0.cpp:6104:40: error: ‘test_fn’ is not a member of ‘torch::jit::aten’
node = tracer_state->graph->create(jit::aten::test_fn, /*num_outputs=*/0);
^~~
/home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/generated/VariableType_0.cpp:6104:40: note: suggested alternatives:
In file included from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/aten/src/ATen/ATen.h:13:0,
from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/generated/VariableType.h:5,
from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/VariableTypeUtils.h:1,
from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/generated/VariableType_0.cpp:1:
/home/user/Desktop/Repositories/Pytorch/pytorch_db_back/build/aten/src/ATen/Functions.h:3494:22: note: ‘at::test_fn’
static inline Tensor test_fn(const Tensor & input) {
^~~~~~~
/home/user/Desktop/Repositories/Pytorch/pytorch_db_back/build/aten/src/ATen/Functions.h:3494:22: note: ‘at::test_fn’
In file included from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/build/aten/src/ATen/Functions.h:12:0,
from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/aten/src/ATen/ATen.h:13,
from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/generated/VariableType.h:5,
from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/VariableTypeUtils.h:1,
from /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/csrc/autograd/generated/VariableType_0.cpp:1:
/home/user/Desktop/Repositories/Pytorch/pytorch_db_back/build/aten/src/ATen/NativeFunctions.h:270:19: note: ‘at::native::test_fn’
CAFFE2_API Tensor test_fn(const Tensor & input);
^~~~~~~
[ 91%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/roi_pool_f_op.cc.o
caffe2/torch/CMakeFiles/torch.dir/build.make:280: recipe for target 'caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_0.cpp.o' failed
make[2]: *** [caffe2/torch/CMakeFiles/torch.dir/csrc/autograd/generated/VariableType_0.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/sample_as_op.cc.o
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/select_smooth_l1_loss_op.cc.o
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/sigmoid_cross_entropy_loss_op.cc.o
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/sigmoid_focal_loss_op.cc.o
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/smooth_l1_loss_op.cc.o
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/softmax_focal_loss_op.cc.o
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/spatial_narrow_as_op.cc.o
[ 92%] Building CXX object modules/detectron/CMakeFiles/caffe2_detectron_ops.dir/upsample_nearest_op.cc.o
[ 92%] Linking CXX shared module python/caffe2_pybind11_state.cpython-37m-x86_64-linux-gnu.so
[ 92%] Built target caffe2_pybind11_state
CMakeFiles/Makefile2:6524: recipe for target 'caffe2/torch/CMakeFiles/torch.dir/all' failed
make[1]: *** [caffe2/torch/CMakeFiles/torch.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 92%] Linking CXX shared library ../../lib/libcaffe2_detectron_ops.so
[ 92%] Built target caffe2_detectron_ops
Makefile:140: recipe for target 'all' failed
make: *** [all] Error 2
Failed to run 'bash ../tools/build_pytorch_libs.sh --use-nnpack --use-mkldnn caffe2 libshm gloo c10d THD'
Following is the build config
--
-- ******** Summary ********
-- General:
-- CMake version : 3.12.2
-- CMake command : /home/user/Desktop/Repositories/Pytorch/Pytorch_EXP_DB_BACK_ENV/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 6.4.0
-- BLAS : MKL
-- CXX flags : --std=c++11 -Wno-deprecated -fvisibility-inlines-hidden -D_FORCE_INLINES -D_MWAITXINTRIN_H_INCLUDED -D__STRICT_ANSI__ -fopenmp -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -Wno-unused-but-set-variable -Wno-maybe-uninitialized
-- Build type : Release
-- Compile definitions : ONNX_NAMESPACE=onnx_torch;USE_GCC_ATOMICS=1;TH_BLAS_MKL;HAVE_MMAP=1;_FILE_OFFSET_BITS=64;HAVE_SHM_OPEN=1;HAVE_SHM_UNLINK=1;HAVE_MALLOC_USABLE_SIZE=1
-- CMAKE_PREFIX_PATH : /home/user/Desktop/Repositories/Pytorch/Pytorch_EXP_DB_BACK_ENV/lib/python3.7/site-packages
-- CMAKE_INSTALL_PREFIX : /home/user/Desktop/Repositories/Pytorch/pytorch_db_back/torch/lib/tmp_install
--
-- TORCH_VERSION : 1.0.0
-- CAFFE2_VERSION : 1.0.0
-- BUILD_ATEN_MOBILE : OFF
-- BUILD_ATEN_ONLY : OFF
-- BUILD_BINARY : OFF
-- BUILD_CUSTOM_PROTOBUF : ON
-- Link local protobuf : ON
-- BUILD_DOCS : OFF
-- BUILD_PYTHON : ON
-- Python version : 3.7
-- Python executable : /home/user/Desktop/Repositories/Pytorch/Pytorch_EXP_DB_BACK_ENV/bin/python
-- Pythonlibs version : 3.7.0
-- Python library : /home/user/Desktop/Repositories/Pytorch/Pytorch_EXP_DB_BACK_ENV/lib/libpython3.7m.so.1.0
-- Python includes : /home/user/Desktop/Repositories/Pytorch/Pytorch_EXP_DB_BACK_ENV/include/python3.7m
-- Python site-packages: lib/python3.7/site-packages
-- BUILD_CAFFE2_OPS : ON
-- BUILD_SHARED_LIBS : ON
-- BUILD_TEST : ON
-- USE_ASAN : OFF
-- USE_CUDA : 0
-- USE_ROCM : OFF
-- USE_EIGEN_FOR_BLAS :
-- USE_FFMPEG : OFF
-- USE_GFLAGS : OFF
-- USE_GLOG : OFF
-- USE_LEVELDB : OFF
-- USE_LITE_PROTO : OFF
-- USE_LMDB : OFF
-- USE_METAL : OFF
-- USE_MKL :
-- USE_MOBILE_OPENGL : OFF
-- USE_NCCL : OFF
-- USE_NNPACK : 1
-- USE_NUMPY : ON
-- USE_OBSERVERS : ON
-- USE_OPENCL : OFF
-- USE_OPENCV : OFF
-- USE_OPENMP : OFF
-- USE_PROF : OFF
-- USE_REDIS : OFF
-- USE_ROCKSDB : OFF
-- USE_ZMQ : OFF
-- USE_DISTRIBUTED : ON
-- USE_MPI : ON
-- USE_GLOO : ON
-- USE_GLOO_IBVERBS : OFF
-- Public Dependencies : Threads::Threads;caffe2::mkl
-- Private Dependencies : nnpack;cpuinfo;/usr/lib/x86_64-linux-gnu/libnuma.so;fp16;/home/user/Desktop/Repositories/Pytorch/Pytorch_EXP_DB_BACK_ENV/lib/libmpi.so;gloo;aten_op_header_gen;onnxifi_loader;rt;gcc_s;gcc;dl
-- Configuring done
-- Generating done
CMake Warning:
Manually-specified variables were not used by the project:
CUDNN_INCLUDE_DIR
CUDNN_LIBRARY
CUDNN_LIB_DIR
|
st98728
|
I am getting the following error in a custom layer.
RuntimeError: in-place operations can be only used on variables that don't share storage with any other variables, but detected that there are 2 objects sharing it
Here is the forward function of the nn layer.
def forward(self, batch_sentence1, batch_sentence2):
    """Defines the forward computation of the matching layer."""
    sequence_length = batch_sentence1.size(1)
    output_variable = Variable(
        torch.zeros(self.config.batch_size, sequence_length, self.num_directions, self.length))
    for word_idx in range(sequence_length):
        for batch_idx in range(self.config.batch_size):
            v1 = batch_sentence1[batch_idx][word_idx]
            v2 = batch_sentence2[batch_idx][-1]
            for matching_idx in range(self.length):
                weighted_v1 = torch.mul(self.weight_forward[matching_idx], v1)
                weighted_v2 = torch.mul(self.weight_forward[matching_idx], v2)
                cosine = weighted_v1.dot(weighted_v2)
                cosine = cosine / (torch.norm(weighted_v1, 2) * torch.norm(weighted_v2, 2))
                output_variable[batch_idx][word_idx][0][matching_idx] = cosine
Getting the error in the last line. I have checked whether output_variable shares storage with any other object but couldn’t find any.
Can anyone point me to the problem in my code?
|
st98729
|
I tried the following after noticing this post - What's the difference between a[0][1] and a[0 , 1] and it worked for me.
output_variable[batch_idx, word_idx, 0, matching_idx] = cosine
So, I am just curious to know why pytorch interprets x[i, j] and x[i][j] differently? As soumith mentioned in that post,
When you index with x[i][j], then an intermediate Tensor x[i] is created first, and the operation [j] is applied on it. If you index with x[i, j] then there’s no intermediate operation.
I want to know when we should use x[i][j] and when x[i, j]?
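A tiny illustration of the difference (a sketch):

import torch

x = torch.zeros(3, 4)

x[0, 1] = 1.0    # indexes x directly and writes into it, no intermediate tensor

tmp = x[0]       # x[0][2] first creates this intermediate result, then indexes it;
tmp[2] = 2.0     # for plain tensors the intermediate is a view, so the write still
                 # reaches x, but autograd sees an extra node (hence the error above
                 # with Variables), and every lookup pays for the extra slicing step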
|
st98730
|
Wow dude, your finding gave me a vital solution to my own problem, much thanks!
|
st98731
|
hello smth,
I just wonder why there is still a lot of code written in the a[i][j] form rather than a[i, j], even though the authors might know it wastes a bunch of memory on intermediate results.
|
st98732
|
@Zichun_Zhang i think that’s just legacy. if you find such instances, please help highlight or fix them
|
st98733
|
Yes, this should work. You would have to pass these values as tuples to the arguments.
|
st98734
|
For publications that utilize/reference pytorch, should we just cite the project website? Or is there a white paper/preprint/article somewhere to preferably cite?
|
st98735
|
For others who end up at this thread, the answer can be found here: https://github.com/pytorch/pytorch/issues/4126
|
st98736
|
From the link above, the citation is the following:
@inproceedings{paszke2017automatic,
  title={Automatic differentiation in PyTorch},
  author={Paszke, Adam and Gross, Sam and Chintala, Soumith and Chanan, Gregory and Yang, Edward and DeVito, Zachary and Lin, Zeming and Desmaison, Alban and Antiga, Luca and Lerer, Adam},
  booktitle={NIPS-W},
  year={2017}
}
|
st98737
|
Hi. I’m really new to pytorch. I was experimenting with code I found here:
http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#sphx-glr-intermediate-seq2seq-translation-tutorial-py
I’m trying to replace the EncoderRNN with a bidirectional version. Here’s my code.
class EncoderBiRNN(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(EncoderBiRNN, self).__init__()
        self.hidden_size = hidden_size
        self.embedding = nn.Embedding(input_size, hidden_size)
        self.bi_gru = nn.GRU(hidden_size, hidden_size, num_layers=1, batch_first=False, bidirectional=True)
        self.reverse_gru = nn.GRU(hidden_size, hidden_size, num_layers=1, batch_first=False, bidirectional=False)
        self.reverse_gru.weight_ih_l0 = self.bi_gru.weight_ih_l0_reverse
        self.reverse_gru.weight_hh_l0 = self.bi_gru.weight_hh_l0_reverse
        self.reverse_gru.bias_ih_l0 = self.bi_gru.bias_ih_l0_reverse
        self.reverse_gru.bias_hh_l0 = self.bi_gru.bias_hh_l0_reverse

    def forward(self, input, hidden):
        embedded = self.embedding(input).view(1, 1, -1)
        output = embedded
        #output, hidden = self.gru(output, hidden)
        bi_output, bi_hidden = self.bi_gru(output, hidden)
        reverse_output, reverse_hidden = self.reverse_gru(output, hidden)
        #return output, hidden
        return torch.cat((bi_output, reverse_output)), torch.cat((bi_hidden, reverse_hidden))

    def initHidden(self):
        result = Variable(torch.zeros(1, 1, self.hidden_size))
        if use_cuda:
            return result.cuda()
        else:
            return result
Here’s the error.
Traceback (most recent call last):
File “pytorch.py”, line 744, in
n.trainIters(None, None, 75000, print_every=n.print_every)
File “pytorch.py”, line 646, in trainIters
decoder, encoder_optimizer, decoder_optimizer, criterion)
File “pytorch.py”, line 574, in train
input_variable[ei], encoder_hidden)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 357, in call
result = self.forward(*input, **kwargs)
File “pytorch.py”, line 85, in forward
bi_output, bi_hidden = self.bi_gru(output,hidden)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 357, in call
result = self.forward(*input, **kwargs)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 190, in forward
self.check_forward_args(input, hx, batch_sizes)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 162, in check_forward_args
check_hidden_size(hidden, expected_hidden_size)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 154, in check_hidden_size
raise RuntimeError(msg.format(expected_hidden_size, tuple(hx.size())))
RuntimeError: Expected hidden size (2, 1, 256), got (1, 1, 256)
this is the link to where I read that bidirectional RNNs needed to be put together in such a way:
Towards Data Science – 13 Nov 17: Understanding Bidirectional RNN in PyTorch
What I’m looking for is advice on my code, how to write it so that it works.
|
st98738
|
Solved by austin in post #5
If you’re going to pass an encoder_hidden to your decoder you don’t even need the initHidden method. Your gru will automatically set the initial hidden state to zero, process the whole sequence and pop out an output and hidden_state.
There are a few ways you can pass these to a decoder. The easiest…
|
st98739
|
if you specify bidirectional=True, pytorch will do the rest. The output will be (seq length, batch, hidden_size * 2) where the hidden_size * 2 features are the forward features concatenated with the backward features.
tldr, set bidirectional=True in the first rnn, remove the second rnn, bi_output is your new output. Also, not sure why you are setting gru weights as model params?
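A quick shape check of that behaviour (illustrative sizes):

import torch
import torch.nn as nn

gru = nn.GRU(input_size=256, hidden_size=256, num_layers=1, bidirectional=True)
x = torch.randn(7, 1, 256)   # (seq_len, batch, input_size)

output, hidden = gru(x)
print(output.shape)   # torch.Size([7, 1, 512]) -> forward and backward features concatenated
print(hidden.shape)   # torch.Size([2, 1, 256]) -> (num_layers * num_directions, batch, hidden_size)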
|
st98740
|
Thanks. I was hoping it was something simpler than what I was attempting. Thanks again.
|
st98741
|
D_Liebman:
result = Variable(torch.zeros(1, 1, self.hidden_size))
does this line change to result = Variable(torch.zeros(2, 1, self.hidden_size)) or not?
edit –
also, how do you pass the hidden state from a bidirectional encoder to a decoder? Let’s say that the hidden dimension for the encoder is 256. Then you’d get an output of 512, but wouldn’t the hidden state still be 256? You might pass the output (dim of 512) to a decoder that has a hidden dim of 512, but then what do you do about using the hidden state? What is recommended? Is this not an issue? Can you pass half the output to a smaller decoder, or do you pass twice the hidden state to a larger decoder? Am I seeing this wrong?
|
st98742
|
If you’re going to pass an encoder_hidden to your decoder you don’t even need the initHidden method. Your gru will automatically set the initial hidden state to zero, process the whole sequence and pop out an output and hidden_state.
There are a few ways you can pass these to a decoder. The easiest is to merge the forward and backward features with addition rather than concatenation, that way the dimensions stay the same.
encoder_out = (encoder_out[:, :, :self.hidden_dim] +
encoder_out[:, :, self.hidden_dim:])
You should also be able to pass the encoder hidden state to the decoder by passing the last layers encoder state to the first layer of the decoder. Optionally, if your encoder has the same or more layers than the decoder you could take the last n layers with n being the number of layers in the decoder.
decoder_hidden = encoder_hidden[-decoder.n_layers:] # take what we need from encoder
I’ll shamelessly link you to my own code for details:
github.com
A-Jacobson/minimal-nmt/blob/master/nmt_tutorial.ipynb
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"import torch\n",
"import random\n",
"from torch import nn\n",
"from torch.autograd import Variable\n",
"from torch.optim import Adam\n",
"import torch.nn.functional as F\n",
"import torchtext\n",
"from torchtext.datasets import Multi30k\n",
"from torchtext.data import Field, BucketIterator\n",
"from torch.nn.utils import clip_grad_norm\n",
"import spacy"
This file has been truncated.
|
st98743
|
OK. Thanks. I’ll try adding the two halves of encoder_out
together. Thanks for your reply.
|
st98744
|
@D_Liebman I was also having trouble understanding the dimensions of the hidden state when I moved my encoder from one direction to bi-directional - the exact problem you had with initHidden. I was uncertain whether I should have result = Variable(torch.zeros(1, 1, self.hidden_size)) or result = Variable(torch.zeros(2, 1, self.hidden_size)). When I tried torch.zeros(2, 1, self.hidden_size), which was what I thought was correct, I got an error that it can’t convert more than a single value to a Python scalar, so I went back. Not quite sure, sorry, but I’m here with you in being confused lol. Also, I was using it for self.hidden = self.initHidden() and storing the hidden state in the encoder class, so I think I do need that function.
I have another question about bidirectional seq2seq. Can you replace the GRU with an LSTM? LSTM works fine with attention, right? It seems the LSTM’s get very nice results in papers, and I don’t see a reason to not use an LSTM (but I’m having trouble implementing it).
I’ll keep up with this thread and help out if I figure stuff out.
|
st98745
|
once more I’m not sure I’m in the right place.
hi again. I’m trying to use your Decoder class with attention. below is some code and my most recent error message. can you look at it and give me some feedback? If you want i’ll file it as an issue on github. Thanks.
def train(self, input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):
    encoder_hidden = Variable(torch.zeros(2, 1, self.hidden_size))
    encoder_optimizer.zero_grad()
    decoder_optimizer.zero_grad()
    input_length = input_variable.size()[0]
    target_length = target_variable.size()[0]
    encoder_outputs = Variable(torch.zeros(max_length, encoder.hidden_size))
    encoder_outputs = encoder_outputs.cuda() if use_cuda else encoder_outputs
    loss = 0
    for ei in range(input_length):
        encoder_output, encoder_hidden = encoder(
            input_variable[ei], encoder_hidden)
        encoder_outputs[ei] = encoder_output[0][0]
    decoder_input = Variable(torch.LongTensor([[SOS_token]]))
    decoder_input = decoder_input.cuda() if use_cuda else decoder_input
    decoder_output = decoder_input
    decoder_hidden = encoder_hidden
    encoder_outputs = encoder_outputs.view(1, max_length, self.hidden_size)
    if True:
        # Teacher forcing: Feed the target as the next input
        for di in range(target_length):
            decoder_output, decoder_hidden, decoder_attention = decoder(decoder_output, encoder_output, decoder_hidden)
            loss += criterion(decoder_output, target_variable[di])
            decoder_input = target_variable[di]  # Teacher forcing
    loss.backward()
    encoder_optimizer.step()
    decoder_optimizer.step()
    return loss.data[0] / target_length


if __name__ == '__main__':
    encoder = EncoderBiRNN(input_lang.n_words, hidden_size)
    decoder = Decoder(output_lang.n_words, hidden_size, hidden_size, 1, dropout=0.1)
    train(input_variable, target_variable, encoder, decoder, encoder_optimizer, decoder_optimizer, criterion)
Traceback (most recent call last):
File “pytorch.py”, line 837, in
n.trainIters(None, None, 75000, print_every=n.print_every, learning_rate=lr)
File “pytorch.py”, line 728, in trainIters
decoder, encoder_optimizer, decoder_optimizer, criterion)
File “pytorch.py”, line 663, in train
decoder_output, decoder_hidden, decoder_attention = decoder(decoder_output, encoder_output, decoder_hidden)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 357, in call
result = self.forward(*input, **kwargs)
File “pytorch.py”, line 222, in forward
decoder_hidden)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 357, in call
result = self.forward(*input, **kwargs)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 190, in forward
self.check_forward_args(input, hx, batch_sizes)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 162, in check_forward_args
check_hidden_size(hidden, expected_hidden_size)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 154, in check_hidden_size
raise RuntimeError(msg.format(expected_hidden_size, tuple(hx.size())))
RuntimeError: Expected hidden size (1, 1, 512), got (2, 1, 512)
I’m not sure what I’m doing wrong but the error is with the hidden size and I believe it has to do with the initialized state again.
|
st98746
|
It’s telling you the problem right here
RuntimeError: Expected hidden size (1, 1, 512), got (2, 1, 512)
these hidden states are (num_layers * num_directions, batch_size, hidden_size), so when you turn on the bi-directional flag it doubles the first dim. You can either just take the last layer (or num decoder layers) on the first dim like I did in my helper classes
decoder_hidden = encoder_hidden[-decoder.n_layers:]
source: https://github.com/A-Jacobson/minimal-nmt/blob/master/decoding_helpers.py
or you could reshape it to (1, 1, 1024) and sum across the last dimension like we did with the encoder output.
The second option is probably strictly more correct as you’d get the hidden state for both directions but the first option works fine.
Your model may work without this change, but you shouldn't have to initialize the encoder hidden state yourself, and you should be able to feed batches of full sequences to the encoder (this will be much, much faster than feeding one item at a time like your code does). Also, the encoder and decoder can share the same optimizer.
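A tiny sketch of both options (the sizes here are illustrative, not taken from your code):
import torch

num_layers, num_directions, batch_size, hidden_size = 1, 2, 1, 512
decoder_n_layers = 1

# encoder_hidden: (num_layers * num_directions, batch, hidden)
encoder_hidden = torch.randn(num_layers * num_directions, batch_size, hidden_size)

# Option 1: keep only the last decoder_n_layers states on the first dim
decoder_hidden = encoder_hidden[-decoder_n_layers:]            # (1, batch, hidden)

# Option 2: make the direction axis explicit and sum both directions
h = encoder_hidden.view(num_layers, num_directions, batch_size, hidden_size)
decoder_hidden_summed = h.sum(dim=1)                           # (num_layers, batch, hidden)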
|
st98747
|
Hi, I’m trying to follow your repository very closely. Still, I get the following error all the time. btw, the nmt repository of yours is great. If I were to use it in a paper, how would you want me to give you attribution?
Traceback (most recent call last):
File “pytorch.py”, line 930, in
n.trainIters(None, None, 75000, print_every=n.print_every, learning_rate=lr)
File “pytorch.py”, line 781, in trainIters
decoder, encoder_optimizer, decoder_optimizer, criterion)
File “pytorch.py”, line 713, in train
output, decoder_hidden, mask = decoder(output, encoder_output, decoder_hidden)
File “/home/dave/.local/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 357, in call
result = self.forward(*input, **kwargs)
File “pytorch.py”, line 240, in forward
context, mask = self.attention(decoder_hidden[:-1], encoder_out) # 1, 1, 50 (seq, batch, hidden_dim)
File “/home/dave/.local/lib/python3.6/site-packages/torch/autograd/variable.py”, line 78, in getitem
return Index.apply(self, key)
File “/home/dave/.local/lib/python3.6/site-packages/torch/autograd/_functions/tensor.py”, line 89, in forward
result = i.index(ctx.index)
ValueError: result of slicing is an empty tensor
I got the code to run by changing decoder_hidden[:-1] to just decoder_hidden but something seems wrong.
|
st98748
|
It’s hard for me to tell what’s wrong without knowing what decoder_hidden is at that point or the exact architecture you are using but I can guess.
Is it possible that you're only using one layer in your decoder? If that's the case you don't need to grab the last state for attention; you can use the hidden state as is. In fact, if you try to index like that you will get an empty list… like this:
>>> x = [1] # list of length 1
>>> x[:-1]
[] # <-- empty list
I'm using n_layers = 2 in that repo, so I do have to grab the last state like that if I want to use the n_layers argument instead of explicitly splitting my decoder RNNs.
As for attribution I'm not entirely sure as I don't have an academic background. Would it help if I added a DOI? https://guides.github.com/activities/citable-code/#intro
|
st98749
|
Using two layers in the decoder fixes it. Thanks. Yep, a DOI would be good, but some open-source license is what I was thinking about. Thanks for all your time; finding someone doing this is wonderful.
|
st98750
|
Hi again. I'm trying out leaving the encoder output concatenated. I commented out the lines that added the two halves of the encoder output together. I find I have a problem with passing the hidden state from the encoder to the decoder. What is a good thing to do here? I am currently taking the hidden state and concatenating it with itself, making an object that is twice the size. The question is, does that pass any meaningful info to the decoder? You've been very helpful and everything. I thought I'd ask you.
|
st98751
|
hey! @D_Liebman it's been a while, I hope you discovered the answer elsewhere, but if you're going to change the size of the encoder hidden state you have to change the number of channels the attention and decoder RNN layers expect as well. Additionally, you will have to reshape the encoder hidden state so that the size is doubled only on the channel dimension. I haven't seen much literature on addition vs concat, but intuitively, since the gradients can flow through both operations, the info is getting passed either way. Also, empirically I haven't noticed much difference in my small-scale experiments, so I tend to stick with addition, as it makes the dimensions much cleaner. As a counterpoint, of course, Harvard NLP, https://arxiv.org/pdf/1703.03906.pdf, and most of the Google papers seem to use concat.
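For concreteness, a rough sketch of the reshape I mean (sizes made up; remember the attention and decoder layers then need to expect 2 * hidden channels):
import torch

num_layers, batch_size, hidden_size = 2, 1, 512

# encoder_hidden from a bidirectional RNN: (num_layers * 2, batch, hidden)
encoder_hidden = torch.randn(num_layers * 2, batch_size, hidden_size)

# Make the direction axis explicit, then concatenate the two directions on the
# channel (last) dimension rather than stacking them on the layer dimension.
h = encoder_hidden.view(num_layers, 2, batch_size, hidden_size)
decoder_hidden = torch.cat([h[:, 0], h[:, 1]], dim=-1)        # (num_layers, batch, 2 * hidden)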
|
st98752
|
Question: let's say I had a 4-layer bidirectional LSTM; what if I wish to implement an FC in between the RNN layers to perform skip connections ("identity mapping")? How would we code out the solution?
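Something like the following is what I picture, but it's only a sketch with made-up sizes (splitting the 4-layer LSTM into four single-layer bidirectional LSTMs, with an FC and an identity add between them). I'd be happy to hear if there is a cleaner way:
import torch
import torch.nn as nn

class SkipBiLSTM(nn.Module):
    # Sketch: stacked single-layer bidirectional LSTMs with a Linear projection
    # and a residual ("identity mapping") add between consecutive layers.
    def __init__(self, width=128, hidden_size=64, n_layers=4):
        super().__init__()
        self.lstms = nn.ModuleList(
            [nn.LSTM(width, hidden_size, bidirectional=True) for _ in range(n_layers)])
        # map the 2 * hidden bidirectional output back to `width` so x + fc(out) lines up
        self.projs = nn.ModuleList(
            [nn.Linear(2 * hidden_size, width) for _ in range(n_layers)])

    def forward(self, x):                      # x: (seq, batch, width)
        for lstm, fc in zip(self.lstms, self.projs):
            out, _ = lstm(x)                   # (seq, batch, 2 * hidden)
            x = x + fc(out)                    # skip connection
        return x

print(SkipBiLSTM()(torch.randn(10, 4, 128)).shape)   # torch.Size([10, 4, 128])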
|
st98753
|
Hi,
I'm trying to implement a fully connected network for a classification task, but my training data contains some missing values and the positions of the missingness are not fixed. For example:
[x1, x2, NaN, x4, x5] [y]
[x1, x2, x3, x4, NaN] [y]
[NaN, x2, x3, NaN, x5] [y]
I have tried some methods to fill in the missing values, but they introduce bias after all.
To avoid that bias, I'm planning to keep the missingness but drop the input-layer nodes where the missing values appear, so that the NaNs are never fed into the network and cause errors, while the following layers of the network stay the same. I wonder if there are specific methods to achieve this in PyTorch, either by:
dropping the node at a specified position of the input layer, or
enabling a variable input size for the neural network.
However, these are just two possible solutions I can come up with; I would appreciate it if you could share other useful approaches.
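For what it's worth, the simplest variant of the first option I can think of is to zero out the NaNs (so those input nodes contribute nothing to the first layer) and feed the missingness mask as extra inputs. Just a sketch with toy numbers, not a recommendation:
import torch
import torch.nn as nn

x = torch.tensor([[1.0, 2.0, float('nan'), 4.0, 5.0]])
mask = (~torch.isnan(x)).float()                                # 1 = observed, 0 = missing
x_filled = torch.where(torch.isnan(x), torch.zeros_like(x), x)  # drop the NaNs

# The first Linear takes 2 * n_features so the net can tell "missing" apart from "zero".
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
out = model(torch.cat([x_filled, mask], dim=1))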
|
st98754
|
Hi guys,
I have a doubt, can I do this kind of operation without losing the gradient?
w = torch.zeros(1, M)
w[0][idx] = 1 # danger operation?
result = torch.mm(w, t2)
I want backprop on tensor t2, not on w
|
st98755
|
While the assignment (I’d write w[0, idx], personally) will be bad for backpropagation through w in general, there are two things to note here
For backpropagation through mm to t2, you just need the value of w, no differentiability of w.
In general the pattern w = torch.zeros(...) followed by w[...] = x will allow autograd to record the operations well enough that you get a gradient in x for the dependency of w on x.
However, if you assign to the same bits of w twice, or otherwise overwrite something that already required gradients, you'll run into the usual trouble with in-place operations.
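A tiny example of the first point (sizes made up):
import torch

M, N, idx = 5, 3, 2

t2 = torch.randn(M, N, requires_grad=True)
w = torch.zeros(1, M)
w[0, idx] = 1                    # fine: w itself doesn't need gradients

result = torch.mm(w, t2)         # (1, N) -- effectively selects row idx of t2
result.sum().backward()
print(t2.grad)                   # nonzero only in row idx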
Best regards
Thomas
|
st98756
|
I was reading A Guide to Convolutional Arithmetic to understand Transpose Convolution.
From section 4.1
Using this representation, the backward pass is easily obtained by transposing C; in other words, the error is backpropagated by multiplying the loss with C.T. This operation takes a 4-dimensional vector as input and produces a 16-dimensional vector as output, and its connectivity pattern is compatible with C by construction.
When I try this out in pytorch, the error is certainly not equal to multiplying with C.T.
import torch
import torch.nn.functional as F
x = torch.arange(1, 17, dtype=torch.float).resize_(4, 4)
w = torch.rand(3, 3)
Convolve w and x
# Convert x into an "image" tensor with a single channel and part of a mini-batch of size 1
x1 = x.view(1, 1, 4, 4)
x1.requires_grad = True
# Convert w into a conv filter with a single input channel and a single output channel
w1 = w.view(1, 1, 3, 3)
w1.requires_grad = True
y1 = F.conv2d(x1, w1)
Backpropagate
y1.backward(torch.ones_like(y1))
x1.grad
Now create the C matrix as mentioned in the paper.
C = torch.zeros(4, 16, dtype=torch.float)
C[0, :3] = w[0]
C[0, 4:7] = w[1]
C[0, 8:11] = w[2]
C[1, 1:4] = w[0]
C[1, 5:8] = w[1]
C[1, 9:12] = w[2]
C[2, 4:7] = w[0]
C[2, 8:11] = w[1]
C[2, 12:15] = w[2]
C[3, 5:8] = w[0]
C[3, 9:12] = w[1]
C[3, 13:] = w[2]
Multiplying the unrolled y1 by C.T will not equal x1.grad.
torch.mm(C.transpose(0, 1), y1.view(-1, 1)).view(4, 4)
What am I doing wrong?
|
st98757
|
Avilay_Parekh:
torch.mm(C.transpose(0, 1), y1.view(-1, 1)).view(4, 4)
This needs to be
torch.mm(C.transpose(0, 1), torch.ones_like(y1).view(-1, 1)).view(4, 4)
When you have a product that you want to backpropagate through, you replace the factor w.r.t. which you want to differentiate by the (appropriately expanded and summed up) output gradient, not by the output itself.
My favourite way of looking at derivatives of (multi)linear functions is in terms of Einstein summation notation; let's take torch.nn.functional.bilinear as an example: it can be written for (inefficient) einsum as out = torch.einsum('bi,kij,bj->bk', left, weight, right). Now, as torch.expand and torch.sum are dual to each other for taking derivatives, and by the product differentiation rule, you have that
weight.grad = torch.einsum('bi,bk,bj->kij', left, out.grad, right) etc., so you swap in out.grad for one of the arguments, make the exchange between the right-hand side and that factor in the equation, and you get the right gradient formula.
But now I got carried away, …
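To check the first point against your own tensors (reusing the C you built and the backward pass with a gradient of ones from your snippet):
# x1.grad came from y1.backward(torch.ones_like(y1)), so the matching matrix
# product uses that gradient of ones, not y1 itself.
manual = torch.mm(C.t(), torch.ones(4, 1)).view(4, 4)
print(torch.allclose(x1.grad.view(4, 4), manual))   # should print True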
Best regards
Thomas
|
st98758
|
Hi, I’m trying to load pretrained weights of 3d resnet34, the model is from here:
https://github.com/kenshohara/3D-ResNets-PyTorch/blob/master/models/resnet.py
the weights are from here:
https://drive.google.com/drive/folders/1zvl89AgFAApbH0At-gMuZSeQB_LpNP-M
My code is :
def resnet34(feature_size, frame_size, frames_sequence):
    """Constructs a ResNet-34 model."""
    path = "/mypath/resnet-34-kinetics.pth"
    pretrain = torch.load(path)
    model = ResNet(BasicBlock, [3, 4, 6, 3], frame_size, frames_sequence, feature_size)
    model.load_state_dict(pretrain['state_dict'])
And I'm getting that all the weights are missing. The file size is as expected, so the file is probably not corrupted. Am I doing something wrong in the loading code?
Thanks!
|
st98760
|
Could you post the error message? Maybe there is just a minor naming mismatch between the state_dict and your model definition?
|
st98761
|
sure!
Namespace(annotation_path=’/mypath/3D-ResNets-PyTorch/annotation_dir_path/hmdb51_1.json’, arch=‘mobilenet-1’, batch_size=16, begin_epoch=1, checkpoint=10, cnn_dim=‘3D’, crop_position_in_test=‘c’, dampening=0.9, dataset=‘hmdb51’, feature_size=400, feature_size_ds=256, frame_size=224, frames_sequence=16, ft_begin_index=0, initial_scale=1.0, learning_rate=0.11, lr_patience=10, manual_seed=1, mean=[114.7748, 107.7354, 99.475], mean_dataset=‘activitynet’, model=‘mobilenet’, model_depth=1, momentum=0.9, n_classes=51, n_epochs=300, n_finetune_classes=51, n_scales=5, n_threads=4, n_val_samples=3, n_warmup_steps=4000, nesterov=False, no_cuda=False, no_hflip=False, no_mean_norm=False, no_softmax_in_test=False, no_train=False, no_val=False, norm_value=1, number_gpu=2, optimizer=‘sgd’, pretrain_path=’’, result_path=’/mypath/3D-ResNets-PyTorch/results’, resume_path=’’, root_path=’/mypath/3D-ResNets-PyTorch’, scale_in_test=1.0, scale_step=0.84089641525, scales=[1.0, 0.84089641525, 0.7071067811803005, 0.5946035574934808, 0.4999999999911653], std=[38.7568578, 37.88248729, 40.02898126], std_norm=False, test=False, test_subset=‘val’, train_crop=‘corner’, video_path=’/mypath/3D-ResNets-PyTorch/hmdb51_jpg’, weight_decay=0.001)
3D CNN have been selected
/mypath/3D-ResNets-PyTorch/models/resnet.py:145: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
m.weight = nn.init.kaiming_normal(m.weight, mode=‘fan_out’)
Traceback (most recent call last):
File “main_w3d.py”, line 49, in
model, parameters = model(opt)
File “/mypath/3D-ResNets-PyTorch/model_w3d.py”, line 77, in model
model = BNet(opt)
File “/mypath/3D-ResNets-PyTorch/model_w3d.py”, line 42, in init
self.feature_size = resnet3d.resnet34(feature_size=opt.feature_size, frame_size=opt.frame_size,frames_sequence=opt.frames_sequence)
File “/mypath/3D-ResNets-PyTorch/models/resnet.py”, line 235, in resnet34
model.load_state_dict(pretrain[‘state_dict’])
File “/mypath/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 719, in load_state_dict
self.class.name, “\n\t”.join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ResNet:
Missing key(s) in state_dict: “conv1.weight”, “bn1.weight”, “bn1.bias”, “bn1.running_mean”, “bn1.running_var”, “layer1.0.conv1.weight”, “layer1.0.bn1.weight”, “layer1.0.bn1.bias”, “layer1.0.bn1.running_mean”, “layer1.0.bn1.running_var”, “layer1.0.conv2.weight”, “layer1.0.bn2.weight”, “layer1.0.bn2.bias”, “layer1.0.bn2.running_mean”, “layer1.0.bn2.running_var”, “layer1.1.conv1.weight”, “layer1.1.bn1.weight”, “layer1.1.bn1.bias”, “layer1.1.bn1.running_mean”, “layer1.1.bn1.running_var”, “layer1.1.conv2.weight”, “layer1.1.bn2.weight”, “layer1.1.bn2.bias”, “layer1.1.bn2.running_mean”, “layer1.1.bn2.running_var”, “layer1.2.conv1.weight”, “layer1.2.bn1.weight”, “layer1.2.bn1.bias”, “layer1.2.bn1.running_mean”, “layer1.2.bn1.running_var”, “layer1.2.conv2.weight”, “layer1.2.bn2.weight”, “layer1.2.bn2.bias”, “layer1.2.bn2.running_mean”, “layer1.2.bn2.running_var”, “layer2.0.conv1.weight”, “layer2.0.bn1.weight”, “layer2.0.bn1.bias”, “layer2.0.bn1.running_mean”, “layer2.0.bn1.running_var”, “layer2.0.conv2.weight”, “layer2.0.bn2.weight”, “layer2.0.bn2.bias”, “layer2.0.bn2.running_mean”, “layer2.0.bn2.running_var”, “layer2.0.downsample.0.weight”, “layer2.0.downsample.1.weight”, “layer2.0.downsample.1.bias”, “layer2.0.downsample.1.running_mean”, “layer2.0.downsample.1.running_var”, “layer2.1.conv1.weight”, “layer2.1.bn1.weight”, “layer2.1.bn1.bias”, “layer2.1.bn1.running_mean”, “layer2.1.bn1.running_var”, “layer2.1.conv2.weight”, “layer2.1.bn2.weight”, “layer2.1.bn2.bias”, “layer2.1.bn2.running_mean”, “layer2.1.bn2.running_var”, “layer2.2.conv1.weight”, “layer2.2.bn1.weight”, “layer2.2.bn1.bias”, “layer2.2.bn1.running_mean”, “layer2.2.bn1.running_var”, “layer2.2.conv2.weight”, “layer2.2.bn2.weight”, “layer2.2.bn2.bias”, “layer2.2.bn2.running_mean”, “layer2.2.bn2.running_var”, “layer2.3.conv1.weight”, “layer2.3.bn1.weight”, “layer2.3.bn1.bias”, “layer2.3.bn1.running_mean”, “layer2.3.bn1.running_var”, “layer2.3.conv2.weight”, “layer2.3.bn2.weight”, “layer2.3.bn2.bias”, “layer2.3.bn2.running_mean”, “layer2.3.bn2.running_var”, “layer3.0.conv1.weight”, “layer3.0.bn1.weight”, “layer3.0.bn1.bias”, “layer3.0.bn1.running_mean”, “layer3.0.bn1.running_var”, “layer3.0.conv2.weight”, “layer3.0.bn2.weight”, “layer3.0.bn2.bias”, “layer3.0.bn2.running_mean”, “layer3.0.bn2.running_var”, “layer3.0.downsample.0.weight”, “layer3.0.downsample.1.weight”, “layer3.0.downsample.1.bias”, “layer3.0.downsample.1.running_mean”, “layer3.0.downsample.1.running_var”, “layer3.1.conv1.weight”, “layer3.1.bn1.weight”, “layer3.1.bn1.bias”, “layer3.1.bn1.running_mean”, “layer3.1.bn1.running_var”, “layer3.1.conv2.weight”, “layer3.1.bn2.weight”, “layer3.1.bn2.bias”, “layer3.1.bn2.running_mean”, “layer3.1.bn2.running_var”, “layer3.2.conv1.weight”, “layer3.2.bn1.weight”, “layer3.2.bn1.bias”, “layer3.2.bn1.running_mean”, “layer3.2.bn1.running_var”, “layer3.2.conv2.weight”, “layer3.2.bn2.weight”, “layer3.2.bn2.bias”, “layer3.2.bn2.running_mean”, “layer3.2.bn2.running_var”, “layer3.3.conv1.weight”, “layer3.3.bn1.weight”, “layer3.3.bn1.bias”, “layer3.3.bn1.running_mean”, “layer3.3.bn1.running_var”, “layer3.3.conv2.weight”, “layer3.3.bn2.weight”, “layer3.3.bn2.bias”, “layer3.3.bn2.running_mean”, “layer3.3.bn2.running_var”, “layer3.4.conv1.weight”, “layer3.4.bn1.weight”, “layer3.4.bn1.bias”, “layer3.4.bn1.running_mean”, “layer3.4.bn1.running_var”, “layer3.4.conv2.weight”, “layer3.4.bn2.weight”, “layer3.4.bn2.bias”, “layer3.4.bn2.running_mean”, “layer3.4.bn2.running_var”, “layer3.5.conv1.weight”, “layer3.5.bn1.weight”, “layer3.5.bn1.bias”, 
“layer3.5.bn1.running_mean”, “layer3.5.bn1.running_var”, “layer3.5.conv2.weight”, “layer3.5.bn2.weight”, “layer3.5.bn2.bias”, “layer3.5.bn2.running_mean”, “layer3.5.bn2.running_var”, “layer4.0.conv1.weight”, “layer4.0.bn1.weight”, “layer4.0.bn1.bias”, “layer4.0.bn1.running_mean”, “layer4.0.bn1.running_var”, “layer4.0.conv2.weight”, “layer4.0.bn2.weight”, “layer4.0.bn2.bias”, “layer4.0.bn2.running_mean”, “layer4.0.bn2.running_var”, “layer4.0.downsample.0.weight”, “layer4.0.downsample.1.weight”, “layer4.0.downsample.1.bias”, “layer4.0.downsample.1.running_mean”, “layer4.0.downsample.1.running_var”, “layer4.1.conv1.weight”, “layer4.1.bn1.weight”, “layer4.1.bn1.bias”, “layer4.1.bn1.running_mean”, “layer4.1.bn1.running_var”, “layer4.1.conv2.weight”, “layer4.1.bn2.weight”, “layer4.1.bn2.bias”, “layer4.1.bn2.running_mean”, “layer4.1.bn2.running_var”, “layer4.2.conv1.weight”, “layer4.2.bn1.weight”, “layer4.2.bn1.bias”, “layer4.2.bn1.running_mean”, “layer4.2.bn1.running_var”, “layer4.2.conv2.weight”, “layer4.2.bn2.weight”, “layer4.2.bn2.bias”, “layer4.2.bn2.running_mean”, “layer4.2.bn2.running_var”, “fc.weight”, “fc.bias”.
Unexpected key(s) in state_dict: “module.conv1.weight”, “module.bn1.weight”, “module.bn1.bias”, “module.bn1.running_mean”, “module.bn1.running_var”, “module.layer1.0.conv1.weight”, “module.layer1.0.bn1.weight”, “module.layer1.0.bn1.bias”, “module.layer1.0.bn1.running_mean”, “module.layer1.0.bn1.running_var”, “module.layer1.0.conv2.weight”, “module.layer1.0.bn2.weight”, “module.layer1.0.bn2.bias”, “module.layer1.0.bn2.running_mean”, “module.layer1.0.bn2.running_var”, “module.layer1.1.conv1.weight”, “module.layer1.1.bn1.weight”, “module.layer1.1.bn1.bias”, “module.layer1.1.bn1.running_mean”, “module.layer1.1.bn1.running_var”, “module.layer1.1.conv2.weight”, “module.layer1.1.bn2.weight”, “module.layer1.1.bn2.bias”, “module.layer1.1.bn2.running_mean”, “module.layer1.1.bn2.running_var”, “module.layer1.2.conv1.weight”, “module.layer1.2.bn1.weight”, “module.layer1.2.bn1.bias”, “module.layer1.2.bn1.running_mean”, “module.layer1.2.bn1.running_var”, “module.layer1.2.conv2.weight”, “module.layer1.2.bn2.weight”, “module.layer1.2.bn2.bias”, “module.layer1.2.bn2.running_mean”, “module.layer1.2.bn2.running_var”, “module.layer2.0.conv1.weight”, “module.layer2.0.bn1.weight”, “module.layer2.0.bn1.bias”, “module.layer2.0.bn1.running_mean”, “module.layer2.0.bn1.running_var”, “module.layer2.0.conv2.weight”, “module.layer2.0.bn2.weight”, “module.layer2.0.bn2.bias”, “module.layer2.0.bn2.running_mean”, “module.layer2.0.bn2.running_var”, “module.layer2.1.conv1.weight”, “module.layer2.1.bn1.weight”, “module.layer2.1.bn1.bias”, “module.layer2.1.bn1.running_mean”, “module.layer2.1.bn1.running_var”, “module.layer2.1.conv2.weight”, “module.layer2.1.bn2.weight”, “module.layer2.1.bn2.bias”, “module.layer2.1.bn2.running_mean”, “module.layer2.1.bn2.running_var”, “module.layer2.2.conv1.weight”, “module.layer2.2.bn1.weight”, “module.layer2.2.bn1.bias”, “module.layer2.2.bn1.running_mean”, “module.layer2.2.bn1.running_var”, “module.layer2.2.conv2.weight”, “module.layer2.2.bn2.weight”, “module.layer2.2.bn2.bias”, “module.layer2.2.bn2.running_mean”, “module.layer2.2.bn2.running_var”, “module.layer2.3.conv1.weight”, “module.layer2.3.bn1.weight”, “module.layer2.3.bn1.bias”, “module.layer2.3.bn1.running_mean”, “module.layer2.3.bn1.running_var”, “module.layer2.3.conv2.weight”, “module.layer2.3.bn2.weight”, “module.layer2.3.bn2.bias”, “module.layer2.3.bn2.running_mean”, “module.layer2.3.bn2.running_var”, “module.layer3.0.conv1.weight”, “module.layer3.0.bn1.weight”, “module.layer3.0.bn1.bias”, “module.layer3.0.bn1.running_mean”, “module.layer3.0.bn1.running_var”, “module.layer3.0.conv2.weight”, “module.layer3.0.bn2.weight”, “module.layer3.0.bn2.bias”, “module.layer3.0.bn2.running_mean”, “module.layer3.0.bn2.running_var”, “module.layer3.1.conv1.weight”, “module.layer3.1.bn1.weight”, “module.layer3.1.bn1.bias”, “module.layer3.1.bn1.running_mean”, “module.layer3.1.bn1.running_var”, “module.layer3.1.conv2.weight”, “module.layer3.1.bn2.weight”, “module.layer3.1.bn2.bias”, “module.layer3.1.bn2.running_mean”, “module.layer3.1.bn2.running_var”, “module.layer3.2.conv1.weight”, “module.layer3.2.bn1.weight”, “module.layer3.2.bn1.bias”, “module.layer3.2.bn1.running_mean”, “module.layer3.2.bn1.running_var”, “module.layer3.2.conv2.weight”, “module.layer3.2.bn2.weight”, “module.layer3.2.bn2.bias”, “module.layer3.2.bn2.running_mean”, “module.layer3.2.bn2.running_var”, “module.layer3.3.conv1.weight”, “module.layer3.3.bn1.weight”, “module.layer3.3.bn1.bias”, “module.layer3.3.bn1.running_mean”, “module.layer3.3.bn1.running_var”, 
“module.layer3.3.conv2.weight”, “module.layer3.3.bn2.weight”, “module.layer3.3.bn2.bias”, “module.layer3.3.bn2.running_mean”, “module.layer3.3.bn2.running_var”, “module.layer3.4.conv1.weight”, “module.layer3.4.bn1.weight”, “module.layer3.4.bn1.bias”, “module.layer3.4.bn1.running_mean”, “module.layer3.4.bn1.running_var”, “module.layer3.4.conv2.weight”, “module.layer3.4.bn2.weight”, “module.layer3.4.bn2.bias”, “module.layer3.4.bn2.running_mean”, “module.layer3.4.bn2.running_var”, “module.layer3.5.conv1.weight”, “module.layer3.5.bn1.weight”, “module.layer3.5.bn1.bias”, “module.layer3.5.bn1.running_mean”, “module.layer3.5.bn1.running_var”, “module.layer3.5.conv2.weight”, “module.layer3.5.bn2.weight”, “module.layer3.5.bn2.bias”, “module.layer3.5.bn2.running_mean”, “module.layer3.5.bn2.running_var”, “module.layer4.0.conv1.weight”, “module.layer4.0.bn1.weight”, “module.layer4.0.bn1.bias”, “module.layer4.0.bn1.running_mean”, “module.layer4.0.bn1.running_var”, “module.layer4.0.conv2.weight”, “module.layer4.0.bn2.weight”, “module.layer4.0.bn2.bias”, “module.layer4.0.bn2.running_mean”, “module.layer4.0.bn2.running_var”, “module.layer4.1.conv1.weight”, “module.layer4.1.bn1.weight”, “module.layer4.1.bn1.bias”, “module.layer4.1.bn1.running_mean”, “module.layer4.1.bn1.running_var”, “module.layer4.1.conv2.weight”, “module.layer4.1.bn2.weight”, “module.layer4.1.bn2.bias”, “module.layer4.1.bn2.running_mean”, “module.layer4.1.bn2.running_var”, “module.layer4.2.conv1.weight”, “module.layer4.2.bn1.weight”, “module.layer4.2.bn1.bias”, “module.layer4.2.bn1.running_mean”, “module.layer4.2.bn1.running_var”, “module.layer4.2.conv2.weight”, “module.layer4.2.bn2.weight”, “module.layer4.2.bn2.bias”, “module.layer4.2.bn2.running_mean”, “module.layer4.2.bn2.running_var”, “module.fc.weight”, “module.fc.bias”.
Thanks a lot!
Edit: now I'm seeing that the parameters inside the pretrained weights have a "module" hierarchy before the layer hierarchy; is there any way to solve that?
|
st98762
|
You probably saved the model as a nn.DataParallel module. Have a look at this thread.
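Alternatively, you can just strip the prefix while building a new dict, e.g. (a sketch, reusing the path and model from your snippet):
state_dict = torch.load(path)['state_dict']

# remove the "module." prefix that nn.DataParallel adds to every key
new_state_dict = {k[len('module.'):] if k.startswith('module.') else k: v
                  for k, v in state_dict.items()}
model.load_state_dict(new_state_dict)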
|
st98763
|
Thanks a lot. I have a hard time understanding what "nn.DataParallel temporarily" means.
I'm using it after calling my "model", but how can I add it temporarily just for loading purposes?
Given that this is my data loader (if I understood you right):
train_loader = torch.utils.data.DataLoader(
    training_data,
    batch_size=opt.batch_size,
    shuffle=True,
    num_workers=opt.n_threads,
    pin_memory=True,
    drop_last=True)
I managed to solve the problem using a new dict, but it still interests me…
|
st98764
|
python setup.py install
How to solve the following error?
/pytorch/aten/src/ATen/core/Half-inl.h(21): error:identifier “__half_as_short” is undefined
1 error detected in the compilation of “/tmp/tmpxft_000087bd_00000000-7_THCBlas.cpp1.ii”.
CMake Error at caffe2_gpu_generated_THCBlas.cu.o.Release.cmake:279(message):
|
st98765
|
Hi,
When I saw some demo codes:
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)

# backward + optimize only if in training phase
if phase == 'train':
    loss.backward()
    optimizer.step()

# statistics
running_loss += loss.data[0]
If we would like to extract the loss tensor from the loss variable, why not just use loss.data?
What does loss.data[0] mean here?
|
st98766
|
Hi,
There is:
loss the Variable,
loss.data the (presumably size 1) Tensor,
loss.data[0] the (python) float at position 0 in the tensor.
As such, by just using loss.data you would not run into the "keeping track of everything" problem (which would happen if you use loss and something is not volatile), but you would be adding torch tensors instead of plain Python numbers.
Best regards
Thomas
|
st98767
|
As the above still gets likes:
Note that the above post is outdated.
Nowadays, with PyTorch >= 0.4 you have
loss the Tensor (which previously was the variable),
loss.data (shouldn't be needed much anymore), which is roughly equivalent to loss.detach(): a Tensor that no longer traces derivatives (you can use this for values you want to keep around but e.g. don't want to move off the GPU yet),
loss.item() the Python number contained in a 1-element tensor.
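So a training loop nowadays would roughly accumulate the loss like this (a sketch, using the names from the snippet above):
loss = criterion(outputs, labels)
if phase == 'train':
    loss.backward()
    optimizer.step()
running_loss += loss.item()   # plain Python float, no graph kept around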
Best regards
Thomas
|