st98468
|
Hello,
I am trying to convert the InferSent pre-trained model into ONNX (https://github.com/facebookresearch/InferSent),
which can be downloaded using this,
curl -Lo encoder/infersent.allnli.pickle https://s3.amazonaws.com/senteval/infersent/infersent.allnli.pickle
and loaded into PyTorch by this
model = torch.load('infersent.allnli.pickle', map_location=lambda storage, loc: storage)
However, while exporting it to ONNX using the code below, I get:
"ValueError: NestedIOFunction doesn't know how to process an input object of type models.BLSTMEncoder"
import torch.onnx
import torchvision
torch.onnx.export(model, _, "infersent.proto", verbose=True)
Any help will be appreciated.
|
st98469
|
Hello,
Did you find a way to solve this? I am in a similar situation now; any advice is appreciated.
|
st98470
|
Can you try this: https://github.com/facebookresearch/SentEval/issues/3#issuecomment-313414393
|
st98471
|
I don't have a problem loading the pre-trained model; I am getting issues while converting it to ONNX.
|
st98472
|
My code includes the following:
from torch.multiprocessing import set_start_method
set_start_method('spawn')
…
from torch.multiprocessing import Pool
input_length = len(input_hashes)
with Pool(2) as worker_pool:
    embedding_loss = sum(worker_pool.starmap(
        self._forward_i,
        zip(input_hashes, input_embeddings, [ngram_embeddings] * input_length,
            [latest_words] * input_length)))
Before the end of the first epoch, I get the following error:
Exception in thread Thread-19521:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.6/multiprocessing/pool.py", line 463, in _handle_results
task = get()
File "/usr/lib/python3.6/multiprocessing/connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
File "/home/plvaudry/virtualenvs/aml/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 96, in rebuild_storage_cuda
storage = cls._new_shared_cuda(device, handle, size, offset, view_size)
RuntimeError: cuda runtime error (30) : unknown error at /pytorch/torch/csrc/generic/StorageSharing.cpp:304
|
st98473
|
For example, I have a dataset full of different names, for which I use one-hot vectors.
Then "Bob" would be a (26, 4) tensor, "Mike" would be a (26, 4) tensor, etc.
So what should I do to concatenate these tensors into batches?
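For what it's worth, a minimal sketch (my addition, not from the thread) of one way to batch such tensors, assuming all names are padded to the same length:
import torch

bob = torch.zeros(26, 4)          # stand-ins for the one-hot name tensors
mike = torch.zeros(26, 4)
batch = torch.stack([bob, mike])  # stacks same-shaped tensors into shape (2, 26, 4)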
|
st98474
|
I use nn.DataParallel to wrap my model for training on multiple GPUs, and optimization ran successfully. Once the model was trained, I saved it with the following code:
torch.save(cust_model.state_dict(), path)
Now when I want to make predictions, I use the following:
cust_model.load_state_dict(torch.load(model_dir, map_location=map_location_device))
where map_location_device is "cpu" and model_dir is the path to the saved state_dict.
I get the following error when trying to load this model.
RuntimeError: Error(s) in loading state_dict for DenseSegmModel:
Missing key(s) in state_dict: "layer1.0.weight", "layer1.1.weight", "layer1.1.bias", "layer2.denseblock1.denselayer1.norm1.weight", "layer2.denseblock1.denselayer1.norm1.bias", and it goes on
Any ideas?
|
st98475
|
Solved by Miles_Cranmer in post #2
Are you using nn.DataParallel for both the CPU and GPU models? I had a similar issue and that fixed it for me.
Edit: It doesn't look like you are. Try wrapping your model in nn.DataParallel before loading the state dict.
|
st98476
|
Are you using nn.DataParallel for both the CPU and GPU models? I had a similar issue and that fixed it for me.
Edit: It doesn't look like you are. Try wrapping your model in nn.DataParallel before loading the state dict.
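A minimal sketch of that suggestion (my code, assuming the DenseSegmModel class and the model_dir path from the question):
import torch
import torch.nn as nn

cust_model = DenseSegmModel()             # model class from the question
cust_model = nn.DataParallel(cust_model)  # re-adds the "module." prefix the saved keys have
state = torch.load(model_dir, map_location="cpu")
cust_model.load_state_dict(state)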
|
st98477
|
Hey Guys,
I am new to PyTorch. I get the error AttributeError: module 'torch.functional' has no attribute 'relu'.
My PyTorch version is pytorch-0.4.1 py36_cuda0.0_cudnn0.0_1.
Can someone tell me more about this error?
Thanks!
|
st98478
|
Solved by tumble-weed in post #2
torch.nn.functional.relu
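For instance (my sketch): relu lives in torch.nn.functional, not torch.functional.
import torch
import torch.nn.functional as F  # note the module path

x = torch.randn(3)
y = F.relu(x)  # works; torch.functional has no relu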
|
st98479
|
So I'm using torch.transpose, but it seems I can't quite understand how it works.
lets say i have:
M = torch.Tensor([[[1, 2, 3], [4, 5, 6]],[[10, 20, 30], [40, 50, 60]]])
print(M)
(0 ,.,.) =
1 2 3
10 20 30
(1 ,.,.) =
4 5 6
40 50 60
[torch.FloatTensor of size 2x2x3]
Now I do:
M2 = torch.transpose(M, 0, 1)
which swaps dimension 0 with 1 and gives me:
(0 ,.,.) =
1 2 3
10 20 30
(1 ,.,.) =
4 5 6
40 50 60
[torch.FloatTensor of size 2x2x3]
So far so good. But why, when I do M2 = torch.transpose(M, 1, 0), does it still give me the same result and NOT:
(0 ,.,.) =
10 20 30
1 2 3
(1 ,.,.) =
40 50 60
4 5 6
[torch.FloatTensor of size 2x2x3]
???
After all, can anyone please tell me how I can get the latter representation?
|
st98480
|
Hmm, did you by chance swap M and M2 in the paste here? When I do M[0,:,:] and M[1,:,:] I get:
M[0,:,:] = tensor([[1., 2., 3.],
[4., 5., 6.]])
M[1,:,:] = tensor([[10., 20., 30.],
[40., 50., 60.]])
and the transpose is:
tensor([[[ 1., 2., 3.],
[10., 20., 30.]],
[[ 4., 5., 6.],
[40., 50., 60.]]])
transposing just swaps axes, so transpose(M, 0, 1) is the same as transpose(M, 1, 0)
You need M3 = torch.transpose(M.flip(0),0,1)
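A quick check of that suggestion (my sketch, using the values from the thread):
import torch

M = torch.Tensor([[[1, 2, 3], [4, 5, 6]], [[10, 20, 30], [40, 50, 60]]])
M3 = torch.transpose(M.flip(0), 0, 1)  # flip the first axis, then swap axes 0 and 1
# M3[0] is [[10, 20, 30], [1, 2, 3]] and M3[1] is [[40, 50, 60], [4, 5, 6]],
# which is the representation asked for above.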
|
st98481
|
Sometimes we need to create a module with learnable parameters. For example, when we construct an A-Softmax module, we need the module to contain a weight W that is learnt and updated during training.
By looking at the docs, it seems that I should use it like this:
class mod(nn.Module):
    def __init__(self):
        self.W = torch.tensor(torch.random(3,4,5), requires_grad=True)
    def forward(self, x):
        w_norm = torch.norm(self.W, 2, 1, True).expand_as(self.W)
        x_norm = torch.norm(x, 2, 1, True).expand_as(x)
        theta = torch.mm(x, self.W) / (x_norm * w_norm)
        return theta
...
loss = mod()
params = loss.parameters()
optim = Optimizer(params, lr=1e-3, ...)
...
I am just trying to confirm: is this the recommended way to do this?
|
st98482
|
1. You need to run __init__ for mod's base class.
2. The parameters should be wrapped in torch.nn.Parameter.
3. torch.random is a module; say you are using torch.randn for a normal distribution. Putting these together:
def __init__(self):
    super(mod, self).__init__()
    self.W = torch.nn.Parameter(torch.randn(3, 4, 5))
    self.W.requires_grad = True
Then when you do loss.parameters() you can see W there. You can verify this by doing:
hex(id(next(loss.parameters()))), hex(id(loss.W))
|
st98483
|
Hi,
I tried as you said, but when I add the parameters to an optimizer:
params = [net.parameters(), loss.parameters()]
opt = optim.Adam(params, lr = 1e-3, weight_decay = 5e-4)
It invokes the error message:
File "/home/zhangzy/.local/lib/python3.5/site-packages/torch/optim/optimizer.py", line 192, in add_param_group
"but one of the params is " + torch.typename(param))
TypeError: optimizer can only optimize Tensors, but one of the params is Module.parameters
|
st98484
|
Have a look here. *.parameters() creates a generator; normally we would do torch.optim.Adam(loss.parameters(), lr=...) when dealing with just one set of parameters, but since you have two sets you will need to make a list out of one generator and extend it:
params = list(net.parameters())
params.extend(list(loss.parameters()))
opt = torch.optim.Adam(params,lr=1e-3,weight_decay=5e-4)
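An equivalent alternative (my addition, not from the thread) is to chain the two generators instead of materializing lists:
import itertools
# Adam accepts any iterable of parameters, so a chained generator works too.
opt = torch.optim.Adam(itertools.chain(net.parameters(), loss.parameters()),
                       lr=1e-3, weight_decay=5e-4)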
|
st98485
|
I have some problems with how to use 'Variable'.
Which parameters should be 'Variable' when I implement a network?
|
st98486
|
Solved by ptrblck in post #2
Since PyTorch 0.4.0 you don’t have to use Variables anymore.
You can use tensors now and set requires_grad=True, if you need the gradient for this tensor.
If you don’t need gradients at all, e.g. for validation, wrap your code in with torch.no_grad():. This was previously achieved by setting volat…
|
st98487
|
Since PyTorch 0.4.0 you don’t have to use Variables anymore.
You can use tensors now and set requires_grad=True, if you need the gradient for this tensor.
If you don’t need gradients at all, e.g. for validation, wrap your code in with torch.no_grad():. This was previously achieved by setting volatile=True using Variables.
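A minimal sketch of the two points above (my code):
import torch

x = torch.randn(3, requires_grad=True)  # gradients will be tracked for x
y = (x * 2).sum()
y.backward()                            # x.grad is now populated
with torch.no_grad():                   # no graph is recorded inside this block
    z = x * 2                           # z.requires_grad is False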
|
st98488
|
I installed CUDA 9.2, cudnn 7.3 for CUDA 9.2, and nvcc for CUDA 9.2.
I'm using Python 3.6, so I installed PyTorch with "pip3 install http://download.pytorch.org/whl/cu92/torch-0.4.1-cp36-cp36m-linux_x86_64.whl" as noted on the homepage.
In my terminal, when I call nvcc -V, I get "9.2".
But when I check torch.version.cuda after import torch in Python 3.6, I get "9.0.176".
|
st98489
|
OS: Mac OSX 10.13.6 High Sierra
Pytorch Version: 0.4.1
I am trying to feed a long tensor with shape (64, 32, 24) into an nn.Linear layer, which has been initialized to take a size-24 tensor and output a size-18 tensor.
Using pdb, I have confirmed that the tensor I give to this linear layer is indeed a LongTensor (output by tensor.type()).
Yet, when I feed this tensor into the layer, it errors out and gives me the error message I've written in the title:
RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'mat2'
Why would that be happening? Thanks in advance!
My model class is posted below.
In the forward function I pass a tuple of data, which includes a batch of either note-pitch or note-duration information, and a batch of chords.
I separate that tuple into individual variables and feed them into separate networks. I first feed the chord batch into a linear layer, which is what throws the error.
The linear layer's arguments are const.CHORD_DIM, which is 24, and const.CHORD_EMBED_DIM, which is 18.
I have verified using pdb that the chord tensor I am passing to it is indeed a LongTensor, and its size is (64, 32, 24). That is, it is a batch of 64 length-32 sequences of chord representations, each of which is length 24.
class ChordCondLSTM(nn.Module):
    def __init__(self, vocab_size=None, embed_dim=None, hidden_dim=None, output_dim=None, seq_len=None,
                 batch_size=None, dropout=0.5, batch_norm=True, no_cuda=False, **kwargs):
        super().__init__(**kwargs)
        self.hidden_dim = hidden_dim
        self.num_layers = const.NUM_RNN_LAYERS
        self.batch_norm = batch_norm
        self.no_cuda = no_cuda
        self.chord_fc1 = nn.Linear(const.CHORD_DIM, const.CHORD_EMBED_DIM)
        self.chord_bn = nn.BatchNorm1d(seq_len)
        self.chord_fc2 = nn.Linear(const.CHORD_EMBED_DIM, const.CHORD_EMBED_DIM)
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.Linear(embed_dim + const.CHORD_EMBED_DIM, hidden_dim)
        self.encode_bn = nn.BatchNorm1d(seq_len)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, num_layers=self.num_layers,
                            batch_first=True, dropout=dropout)
        mid_dim = (hidden_dim + output_dim) // 2
        self.decode1 = nn.Linear(hidden_dim, mid_dim)
        self.decode_bn = nn.BatchNorm1d(seq_len)
        self.decode2 = nn.Linear(mid_dim, output_dim)
        self.softmax = nn.LogSoftmax(dim=2)
        self.hidden_and_cell = None
        if batch_size is not None:
            self.init_hidden_and_cell(batch_size)
        if torch.cuda.is_available() and (not self.no_cuda):
            self.cuda()
        return

    def init_hidden_and_cell(self, batch_size):
        hidden = Variable(torch.zeros(self.num_layers, batch_size, self.hidden_dim))
        cell = Variable(torch.zeros(self.num_layers, batch_size, self.hidden_dim))
        if torch.cuda.is_available() and (not self.no_cuda):
            hidden = hidden.cuda()
            cell = cell.cuda()
        self.hidden_and_cell = (hidden, cell)

    def repackage_hidden_and_cell(self):
        new_hidden = Variable(self.hidden_and_cell[0].data)
        new_cell = Variable(self.hidden_and_cell[1].data)
        if torch.cuda.is_available() and (not self.no_cuda):
            new_hidden = new_hidden.cuda()
            new_cell = new_cell.cuda()
        self.hidden_and_cell = (new_hidden, new_cell)

    def forward(self, data):
        x, chords = data
        import pdb
        pdb.set_trace()
        chord_embeds = self.chord_fc1(chords)
        if self.batch_norm:
            chord_embeds = self.chord_bn(chord_embeds)
        chord_embeds = self.chord_fc2(F.relu(chord_embeds))
        x_embeds = self.embedding(x)
        encoding = self.encoder(torch.cat([chord_embeds, x_embeds], 2))  # concatenate along 3rd dimension
        if self.batch_norm:
            encoding = self.encode_bn(encoding)
        lstm_out, self.hidden_and_cell = self.lstm(encoding, self.hidden_and_cell)
        decoding = self.decode1(lstm_out)
        if self.batch_norm:
            decoding = self.decode_bn(decoding)
        decoding = self.decode2(decoding)
        output = self.softmax(decoding)
        return output
|
st98490
|
Solved by bgenchel in post #4
The problem is that nn.Linear in general doesn't take LongTensor, it only takes FloatTensor, because its internal weight matrix, 'mat2' here, is a FloatTensor.
It would be helpful if this type of thing were caught at a higher level so that the error message was clear and helpful instead of allow…
|
st98491
|
nn.Linear only takes FloatTensors
Re: my other post, which can be found here - RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'mat2'
I traced the issue into the nn.functional module, and it seems the error is thrown because the weights variable in the layer class is a FloatTensor.
Idk if this is the right place to post this, but it appears to me that the devs could note in the docs somewhere that this is a requirement of the layer, and/or make this clearer by…
The problem is that nn.Linear in general doesn't take LongTensor, it only takes FloatTensor, because its internal weight matrix, 'mat2' here, is a FloatTensor.
It would be helpful if this type of thing were caught at a higher level so that the error message was clear and helpful instead of allowing an error from a lower level function to propagate up.
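A minimal illustration (my sketch, not from the thread): convert the input to float before the linear layer.
import torch
import torch.nn as nn

layer = nn.Linear(24, 18)
chords = torch.ones(64, 32, 24, dtype=torch.long)  # a LongTensor like the one in the post
out = layer(chords.float())  # works; layer(chords) raises the type error above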
|
st98492
|
[image: 1533895901.jpg, 625×833]
My target image is like this.
I want to calculate L1 loss, but do not want to include black space.
What should I do?
|
st98493
|
You could create a mask using all black pixels to zero out the losses corresponding to these pixels.
After you’ve created the mask, you could calculate the unreduced loss, multiply it with the mask, average and finally call backward.
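A hedged sketch of that approach (my code; output and target stand in for the prediction and target images from the post):
mask = (target != 0).float()    # 1 for non-black pixels, 0 for black ones
loss = (output - target).abs()  # unreduced, elementwise L1 loss
loss = (loss * mask).sum() / mask.sum().clamp(min=1)  # average over unmasked pixels
loss.backward()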
|
st98494
|
While implementing batched matrix multiplication, I noticed that it is not efficient; see the code below:
import torch
# Input tensor
## Batch size=8192, dim=512
x = torch.FloatTensor(8192, 512).requires_grad_().cuda()
if True:
    # Batch strategy 1
    x1 = x.view(8192, 8, 1, 64)  # 512 = 8 * 64
    W1 = torch.FloatTensor(8, 64, 64).cuda()
    out1 = torch.matmul(x1, W1)  # out: [8192, 8, 1, 64]
    print(torch.cuda.memory_allocated())  # 1107427328
if False:
    # Batch strategy 2
    x2 = x.view(8192, 1, 512)  # add one dimension for batch matmul
    W2 = torch.FloatTensor(512, 512).cuda()  # larger than W1
    # out: [8192, 1, 512], the same number of elements as out1
    out2 = torch.matmul(x2, W2)
    print(torch.cuda.memory_allocated())  # 34603008
However, it turns out that Batch strategy 2 has a lower memory cost, even though W2 is larger than W1, and everything else is the same (x1 and x2 have the same number of elements, as do out1 and out2).
I also found that after removing requires_grad_() the memory costs are similar.
What's the possible reason for that?
|
st98495
|
If you look at what matmul does (it's in C++ but you can directly translate it into Python), you see that there are a number of reshaping/broadcasting ops involved. As matmul does not have a custom derivative (you can see this in tools/autograd/derivatives.yaml), the backward is done by keeping track of the operations performed and the inputs required for the backward. Quite likely, some of the intermediate results are cached for the backward.
If it’s excessive, you could file a bug to implement an explicit backward for matmul.
An alternative could be to try einsum and see if it is better about it (but I’m not sure it is).
Best regards
Thomas
|
st98496
|
Hi Thomas, thanks very much for your suggestions!
I can imagine that matmul involves many operations for this special case; after trying einsum, the memory cost is now comparable to Batch strategy 2. It is really surprising!
For people who encounter a similar problem, here is what I did
# Batch strategy 1' ===> optimizes Batch strategy 1
x1 = x.view(8192, 8, 64) # 512 = 8 * 64
W1 = torch.FloatTensor(8, 64, 64).normal_().cuda()
out1 = torch.einsum('bij,abj->abi', (W1, x1))
|
st98497
|
Hey there!
I'm trying out the PyTorch 1.0 C++ API, and I can't find how to do a simple assignment, like in the following Python snippet:
my_tensor[0, 0] = 1
I tried the following:
my_tensor[0, 0] = 1.f;
I was surprised to see that it compiles; but unfortunately it’s just equivalent to calling:
my_tensor[0] = 1.f;
(Only the first index is used.)
So, how can I perform multidimensional indexing from C++?
Thanks!
|
st98498
|
Solved by ptrblck in post #2
Not sure, if that’s the recommended way, but my_tensor[0][0] = 1.f; works for me.
|
st98499
|
Not sure, if that’s the recommended way, but my_tensor[0][0] = 1.f; works for me.
|
st98500
|
As the cppdocs are still being updated, I usually have a look at some core implementations to see how others are using ATen.
For me personally, @tom's LossCTC is a good cheat sheet, as there is some tensor indexing using accessors etc.
|
st98501
|
I'm trying to use the value from layer.register_forward_hook to train my network.
During training, if I use the value returned by the network itself, training goes well
(I've checked the gradients; the loss decreases and accuracy increases).
But if I use the value from layer.register_forward_hook, all gradients are zero and the loss and accuracy do not change.
I really don't know the correct way to use the values from layer.register_forward_hook.
I want to know how to train a network with values from the register_forward_hook function.
Please help me guys.
My entire code is below:
'''Train CIFAR10 with PyTorch.'''
from __future__ import print_function
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
import torchvision
import torchvision.transforms as transforms
import os
import argparse
from models import *
from utils import progress_bar
import sys

# ====================Register hooking functions====================
global glb_feature
def Get_features(self, input, output):
    global glb_feature
    glb_feature = output.data
    return None

global glb_grad
def Get_grad(self, ingrad, outgrad):
    global glb_grad
    glb_grad = outgrad
    return None
#============================================================

parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate')
parser.add_argument('--resume', '-r', action='store_true', help='resume from checkpoint')
args = parser.parse_args()
batch_size = 256
device = 'cuda' if torch.cuda.is_available() else 'cpu'
best_acc = 0  # best test accuracy
start_epoch = 0  # start from epoch 0 or last checkpoint epoch

# Dataset preparing
print('==> Preparing data..')
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
trainset = torchvision.datasets.CIFAR10(root='../data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size, shuffle=True, num_workers=4)
testset = torchvision.datasets.CIFAR10(root='../data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=100, shuffle=False, num_workers=4)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# Model
print('==> Building model..')
net = StudentNet()
net = net.to(device)
final_layer = net.classifier2

#====================define glb_feature and glb_grad====================
glb_feature = torch.tensor(torch.zeros(batch_size, len(classes)),
                           requires_grad=True, device=torch.device(device))
glb_grad = torch.tensor(torch.zeros(batch_size, len(classes)),
                        requires_grad=True, device=torch.device(device))
#============================================================

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)

# Training
def train(epoch):
    global glb_feature
    global glb_grad
    print('\nEpoch: %d' % epoch)
    net.train()
    train_loss = 0
    correct = 0
    total = 0
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        # ====================REGISTER_FORWARD_HOOK====================
        final_layer.register_forward_hook(Get_features)
        final_tensor = torch.tensor(glb_feature, requires_grad=True,
                                    device=torch.device(device))
        #============================================================
        loss = criterion(final_tensor, targets)  # the way I want to implement
        #loss = criterion(outputs, targets)  # original code
        loss.backward()
        optimizer.step()
        #====================REGISTER_BACKWARD_HOOK====================
        final_layer.register_backward_hook(Get_grad)
        print(glb_grad)
        #============================================================
        train_loss += loss.item()
        _, predicted = outputs.max(1)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()
        progress_bar(batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
                     % (train_loss/(batch_idx+1), 100.*correct/total, correct, total))

# main
for epoch in range(start_epoch, start_epoch+50):
    train(epoch)
|
st98502
|
Solved by tumble-weed in post #4
In the forward hook, you are returning output.data, i usually return output. I did a run to check the type of a layer and layer.data for AlexNet, and both of them are tensor which is why you can return either, but when i checked the .requires_grad property, it seems for layer.data.requires_grad is F…
|
st98503
|
At least one thing I'll say is that you seem to be registering the hook in a loop; this is unnecessary. You can register it once and keep seeing the updated glb_feature every time you do a forward pass, and glb_grad on backward.
After the line final_layer = net.classifier2, add the following, and don't keep the register lines in the loop:
final_layer.register_forward_hook(Get_features)
final_layer.register_backward_hook(Get_grad)
|
st98504
|
In the forward hook you are returning output.data; I usually return output. I did a run to check the types of output and output.data for AlexNet, and both of them are tensors, which is why you can return either; but when I checked the .requires_grad property, it seems output.data.requires_grad is False while output.requires_grad is True. Try changing output.data to output in the forward hook.
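A minimal sketch of the fix (my code, matching the hook from the question):
def Get_features(self, input, output):
    global glb_feature
    glb_feature = output  # keep the autograd graph; output.data would detach it
    return None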
|
st98505
|
@tumble-weed Thank you! Finally solved this problem.
I had really been trying to find the problem, but never thought that output.data was it.
I wasted too much time finding this answer.
I really appreciate it, @tumble-weed.
Best regards,
|
st98506
|
Re: my other post, which can be found here - RuntimeError: Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #2 'mat2'
I traced the issue into the nn.functional module, and it seems the error is thrown because the weights variable in the layer class is a FloatTensor.
Idk if this is the right place to post this, but it appears to me that the devs could note in the docs somewhere that this is a requirement of the layer, and/or make this clearer by catching the error before something more general and less helpful is made to propagate up from some low-level component?
|
st98507
|
Hi, can you update your other post with your findings? It'll help others in the future.
|
st98508
|
Hello guys.
I just want to get the middle output of my network and calculate the gradient.
So I've found the layer.register_forward_hook function.
My code is below:
global glb_feature_teacher
glb_feature_teacher = torch.tensor(torch.zeros(train_batch, num_emb), requires_grad=True, device=torch.device(device))
def Get_features4teacher(self, input, output):
    global glb_feature_teacher
    glb_feature_teacher = output.data
    return None
t_emb_layer = teacher_net.module.linear1
In the training phase,
output = net(inputs)
t_emb_layer.register_forward_hook(Get_features4teacher)
emb_teacher = torch.tensor(glb_feature_teacher, requires_grad=True, device=torch.device(device))
mse_loss = nn.MSELoss()
loss = mse_loss(emb_teacher, some_vector)
loss.backward()
The code ran well, and I checked that all the middle outputs are extracted correctly (the value of emb_teacher).
But the problem is, the gradient is not calculated.
When I print out the gradient of the extracted layer, it prints None.
That means gradients are not calculated.
The code is below:
grad_of_params_teacher = {}
for name, parameter in teacher_net.named_parameters():
    grad_of_params_teacher[name] = parameter.grad
print('teacher: ', grad_of_params_teacher['module.linear1.weight'])
output: teacher: [None]
Can you tell me what the problem is?
I really don't know how to solve this.
Please help.
|
st98509
|
Please indent your code for better readability.
PyTorch does not expose the gradients w.r.t. intermediate layers by default; you have to use register_backward_hook to access them.
|
st98510
|
Here is my full code.
# Register hooking function
global glb_feature_teacher
global glb_feature_student
def Get_features4teacher(self, input, output):
    global glb_feature_teacher
    glb_feature_teacher = output.data
    return None
# end
def Get_features4student(self, input, output):
    global glb_feature_student
    glb_feature_student = output.data
    return None
# end

# Parsers
parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate')
parser.add_argument('--resume', '-r', action='store_true', help='resume from checkpoint')
args = parser.parse_args()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
best_acc = 0  # best test accuracy
start_epoch = 0  # start from epoch 0 or last checkpoint epoch

# Data
print('==> Preparing data..')
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
train_batch = 64
num_emb = 128
trainset = torchvision.datasets.CIFAR10(root='../../data', train=True, download=True, transform=transform_train)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=train_batch, shuffle=True, num_workers=0)
testset = torchvision.datasets.CIFAR10(root='../../data', train=False, download=True, transform=transform_test)
testloader = torch.utils.data.DataLoader(testset, batch_size=train_batch, shuffle=False, num_workers=0)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# Model
print('==> Building model..')
teacher_net = ResNet50()
student_net = StudentNet()
teacher_net = teacher_net.to(device)
student_net = student_net.to(device)
if device == 'cuda':
    teacher_net = torch.nn.DataParallel(teacher_net)
    student_net = torch.nn.DataParallel(student_net)
    cudnn.benchmark = True
print('Loading teacher, student network weight file')
try:
    checkpoint_teacher = torch.load('./resnet50.t7')
    teacher_net.load_state_dict(checkpoint_teacher['net'])
except FileNotFoundError:
    print('ERROR::No pretrained teacher network file found!')
    sys.exit(1)
t_emb_layer = teacher_net.module.linear1
s_emb_layer = student_net.module.classifier1

'''=============================parameter settings=========================='''
for param in student_net.parameters():
    param.requires_grad = True
for param in teacher_net.parameters():
    param.requires_grad = False

'''==============================LOSS FUNCTION LOCATION=============================='''
mse_loss = nn.MSELoss()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(student_net.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)
glb_feature_teacher = torch.tensor(torch.zeros(train_batch, num_emb), requires_grad=False, device=torch.device(device))
glb_feature_student = torch.tensor(torch.zeros(train_batch, num_emb), requires_grad=True, device=torch.device(device))

def train(epoch):
    global glb_feature_teacher
    global glb_feature_student
    print('\nEpoch: %d' % epoch)
    student_net.train()
    teacher_net.eval()
    train_loss = 0
    correct = 0
    total = 0
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        outputs_teacher = teacher_net(inputs)
        outputs_student = student_net(inputs)
        '''============================================================================'''
        t_emb_layer.register_forward_hook(Get_features4teacher)
        s_emb_layer.register_forward_hook(Get_features4student)
        emb_teacher = torch.tensor(glb_feature_teacher, requires_grad=False, device=torch.device(device))
        emb_student = torch.tensor(glb_feature_student, requires_grad=True, device=torch.device(device))
        loss_c = criterion(outputs_student, targets)
        loss_v = mse_loss(emb_student, emb_teacher)
        loss = loss_c + 0.1*loss_v
        loss.backward()
        optimizer.step()
        torch.cuda.synchronize()
        '''==========================GRADIENT CHECKING================================='''
        grad_of_params_student = {}
        for name, parameter in student_net.named_parameters():
            grad_of_params_student[name] = parameter.grad
            #print(name, parameter.grad)
            #print('checking student: ', parameter.size())
        grad_of_params_teacher = {}
        for name, parameter in teacher_net.named_parameters():
            grad_of_params_teacher[name] = parameter.grad
            #print('checking teacher: ', parameter.size())
        print('student: ', grad_of_params_student['module.classifier1.weight'])  # for student net
        print('teacher: ', grad_of_params_teacher['module.linear1.weight'])  # for teacher net
        '''============================================================================'''
        train_loss += loss.item()
        _, predicted = outputs_student.max(1)  # max(1): second value returns argmax
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()
and the main function is below:
for epoch in range(start_epoch, start_epoch+100):
    train(epoch)
The code does not calculate the gradient when only loss_v is applied.
How can I fix this bug?
|
st98511
|
Like the forward hooks you have created, you should create a backward hook:
global glb_grad_teacher
def Get_grad4teacher(self, ingrad, outgrad):
    global glb_grad_teacher
    glb_grad_teacher = outgrad
    return None
t_emb_layer.register_backward_hook(Get_grad4teacher)
Then, after you have called backward(), you can look at glb_grad_teacher to see the gradient of t_emb_layer for that loss function.
|
st98512
|
Thank you @tumble-weed.
Is my usage of layer.register_forward_hook correct?
I want to calculate a loss value from values hooked with register_forward_hook from the middle of the network.
And I’ve used your register_backward_hook function, but all the values are zero.
Implemented below:
global glb_grad_student
def Get_grad4student(self, ingrad, outgrad):
    global glb_grad_student
    glb_grad_student = outgrad
    return None
glb_grad_student = torch.tensor(torch.zeros(train_batch, num_emb), requires_grad=True, device=torch.device(device))
in the train function,
loss.backward()
s_emb_layer.register_backward_hook(Get_grad4student)
print(glb_grad_student)
and the output is
tensor([[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]], device='cuda:0', requires_grad=True)
What’s wrong in my code?
|
st98513
|
I think you used the code directly. Looking at your previous post, it seems that emb_teacher has requires_grad=False; maybe you want to use the backward hook for the student instead. I hope you are getting the technique behind register_backward_hook, so you can adapt it to your use cases.
|
st98514
|
emb_teacher is not needed to calculate the gradient, so I set its requires_grad to False.
Only emb_student is needed to calculate the gradient.
for param in student_net.parameters():
    param.requires_grad = True
for param in teacher_net.parameters():
    param.requires_grad = False
|
st98515
|
OK, that's in line with the student grads being all zero; if it wasn't, I'd be confused. Now all we have to see is why the student grad is 0.
Try making a fake loss like loss_fake = emb_student.sum(). Doing backward with this simple loss, you should expect non-zero gradients to propagate to emb_student. If it doesn't, then there is a snag someplace.
Also, I found out from here that you don't need register_backward_hook; you can just call emb_student.retain_grad() to see if it is getting a gradient. Try this method as well to cross-check.
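A minimal sketch of those two checks (my code, assuming emb_student comes straight from the hooked forward pass):
emb_student.retain_grad()      # keep the grad of this non-leaf tensor
loss_fake = emb_student.sum()  # trivial fake loss
loss_fake.backward()
print(emb_student.grad)        # should be non-zero (all ones) if the graph is intact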
|
st98516
|
The same thing happened.
All gradients are zero. If I use the output of the network rather than the value from register_forward_hook, the gradients are not zero (checked).
Can you see my new question here?
It has the same content, but maybe it is easier to understand what I want to say.
Thank you, @tumble-weed
|
st98517
|
PyCharm is a perfect IDE! With code completion and hints, coding can be easier. But I found that for PyTorch 0.4.1 this feature is lost, and I don't know why. Just like this:
The torch.empty() method is highlighted in yellow, and when I put the mouse on the word "empty", it shows "Cannot find reference 'empty' in __init__.py". Besides, I want to see the corresponding source code by pressing the Ctrl key and clicking the left mouse button, but that does not work either.
Can anyone help me? Thanks!
|
st98518
|
Hi Micheal,
This is a known issue, tracked in https://github.com/pytorch/pytorch/issues/7318
I think it's on track to be fixed in a not-too-distant future release.
|
st98519
|
Oops, so it's not my fault! I thought I had done something wrong to cause this! Thanks for your answer!
|
st98520
|
I’m conducting simulated maximum likelihood estimation using autograd for gradient information. I noticed that while my program runs on my GPU, the kernel in Jupyter and Spyder dies every time I try to run it on my CPU. I tried running the program from the command line, but the program still stops in the same place without returning any error information.
My CPU has 5.2 GB of available memory, and the program only consumes 1.4 GB so I don’t think memory is the issue. I’ve also updated pytorch (stable version) and anaconda recently, so I don’t think any of my packages should be out of date. When I run the program without autograd or if I change .rsample() to .sample(), however, the program doesn’t experience this problem. I need autograd and .rsample() though to get the relevant gradient information. Does anyone know why this might be happening?
Edit: I’ve also learned that if I restrict the number of simulations to be sufficiently small, the program will run on the CPU but it fails to calculate the gradient and returns nan’s instead. The GPU conversely correctly calculates the gradient at each time step. The CPU and GPU also return very different values for the objective function.
|
st98521
|
>>> input = torch.ones(1, 2 , 5)
>>> m = nn.Conv1d(2, 3, 3)
>>> m(input).shape
torch.Size([1, 3, 3])
C_in = 2, C_out = 3
Why do we need that argument? I thought that 1D conv works on every vector (channel) separately and applies a 1D filter to it, producing the same number of channels. Does this function use a separate 2D filter to change the channel size?
|
st98522
|
Solved by lugiavn in post #2
No. I think what you thought is wrong.
What Conv1D does is the same as Conv2D.
So Conv2D takes input of [batch_size x depth x width x height] and output [batch_size x new_depth x (maybe-new)width x (maybe-new)height]. Here c_in = depth and c_out = new_depth. The parameters have the size of depth …
|
st98523
|
No. I think what you thought is wrong.
What Conv1D does is the same as Conv2D.
So Conv2D takes input of [batch_size x depth x width x height] and output [batch_size x new_depth x (maybe-new)width x (maybe-new)height]. Here c_in = depth and c_out = new_depth. The parameters have the size of depth x new_depth x kernel_size, so you have to specify all these numbers to create the layer.
Conv1D is the same, except that the height dimension size is now always 1, you can think of it that way.
Each filter works on all channels (the depth dimension), but only locally in the spatial dimensions (width & height) according to the kernel size, so it has size [depth x kernel_size]. There are new_depth of those filters, so the parameters are [new_depth x depth x kernel_size] (or more if there's a bias).
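A small check of the parameter shapes described above (my sketch):
import torch.nn as nn

m = nn.Conv1d(2, 3, 3)  # c_in=2 (depth), c_out=3 (new_depth), kernel_size=3
print(m.weight.shape)   # torch.Size([3, 2, 3]) = [new_depth, depth, kernel_size]
print(m.bias.shape)     # torch.Size([3])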
|
st98524
|
Hello,
I was wondering about loss function defined with a simple python definition, say for example
def loss(arg):
return arg.abs().mean()
What is the right way to make it run on multi gpu ?
My current way would be to wrap it inside a Module, that can then be wrapped inside a DataParallel
class module_wrapper(nn.Module):
    def __init__(self):
        super(module_wrapper, self).__init__()  # base-class init is required here
    def forward(self, arg):
        return loss(arg)
my_parallel_loss = nn.DataParallel(module_wrapper())
I don't find this piece of code very compact or readable. Would a DataParallel for functions like this one (which can be much longer) be a good idea? At first I thought a decorator could work, but that would require creating a new wrapper module for each call, which is probably not what we want.
Thanks !
|
st98525
|
Is it allowed to copy model state_dict keys from a checkpoint one at a time, instead of using load_state_dict, which copies all keys at once?
Using PyTorch 0.4.
|
st98526
|
According to this example, it does not seem possible:
d2 = dict((k,inception_v3.state_dict()[k]) for k in ['conv1.weight'])
inception_v3.load_state_dict(d2)
throws the error:
RuntimeError: Error(s) in loading state_dict for ResNet:
Missing key(s) in state_dict: “bn1.weight”, “bn1.bias”, “bn1.running_mean”, “bn1.running_var”, “layer1.0.conv1.weight”, “layer1.0.bn1.weight”, “layer1.0.bn1.bias”, “layer1.0.bn1.running_mean”, “layer1.0.bn1.running_var”, “layer1.0.conv2.weight”, “layer1.0.bn2.weight”, “layer1.0.bn2.bias”, “layer1.0.bn2.running_mean”, “layer1.0.bn2.running_var”, “layer1.1.conv1.weight”, “layer1.1.bn1.weight”, “layer1.1.bn1.bias”, “layer1.1.bn1.running_mean”, “layer1.1.bn1.running_var”, “layer1.1.conv2.weight”, “layer1.1.bn2.weight”, “layer1.1.bn2.bias”, “layer1.1.bn2.running_mean”, “layer1.1.bn2.running_var”, “layer2.0.conv1.weight”, “layer2.0.bn1.weight”, “layer2.0.bn1.bias”, “layer2.0.bn1.running_mean”, “layer2.0.bn1.running_var”, “layer2.0.conv2.weight”, “layer2.0.bn2.weight”, “layer2.0.bn2.bias”, “layer2.0.bn2.running_mean”, “layer2.0.bn2.running_var”, “layer2.0.downsample.0.weight”, “layer2.0.downsample.1.weight”, “layer2.0.downsample.1.bias”, “layer2.0.downsample.1.running_mean”, “layer2.0.downsample.1.running_var”, “layer2.1.conv1.weight”, “layer2.1.bn1.weight”, “layer2.1.bn1.bias”, “layer2.1.bn1.running_mean”, “layer2.1.bn1.running_var”, “layer2.1.conv2.weight”, “layer2.1.bn2.weight”, “layer2.1.bn2.bias”, “layer2.1.bn2.running_mean”, “layer2.1.bn2.running_var”, “layer3.0.conv1.weight”, “layer3.0.bn1.weight”, “layer3.0.bn1.bias”, “layer3.0.bn1.running_mean”, “layer3.0.bn1.running_var”, “layer3.0.conv2.weight”, “layer3.0.bn2.weight”, “layer3.0.bn2.bias”, “layer3.0.bn2.running_mean”, …
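A hedged note (my addition, not from the thread): since PyTorch 0.4, load_state_dict accepts strict=False, which copies the keys that are present and skips the missing ones:
# loads only 'conv1.weight' and leaves the other parameters untouched
inception_v3.load_state_dict(d2, strict=False)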
|
st98527
|
I tried to catch the exception so that the model can change to the next GPU automatically, but it does not work.
|
st98528
|
Have a look at the FairSeq example on how to recover from OOM errors.
They just skip the batch and try to continue the training. You could try to adapt this example to move your model.
|
st98529
|
Hello.
I've tested normal 2D convolution and depthwise 2D convolution; the latter is faster.
However, moving to 3D, depthwise 3D convolution is about 10 times slower than normal 3D convolution. When will depthwise 3D convolution be optimized?
Or did I use a wrong configuration?
I am using PyTorch 0.4.1 and CUDA 8.0.
Thanks.
|
st98530
|
Hi, is there any smooth implementation that could adjust the training/testing batch size automatically to fit the GPU RAM without a CUDA segmentation fault? The GPUs are on remote servers shared with others, so it's difficult to foresee the exact usage.
|
st98531
|
I don't know a smooth plug-and-play implementation, but if you had to do this yourself, you could put the particular code in a try/except block, and if the error is of the particular type that CUDA gives when crashing, you could reduce the batch size. Although this way you will not know when RAM has been freed up so you can increase the batch size again; maybe you ought to try increasing the batch size every few iterations, and if there is a crash, revert it back again.
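A hedged sketch of that idea (my code; train_one_epoch is a hypothetical stand-in for your training loop, not a real API):
import torch

batch_size = 256
while batch_size > 0:
    try:
        train_one_epoch(model, batch_size)  # hypothetical helper
        break
    except RuntimeError as e:
        if 'out of memory' in str(e):
            torch.cuda.empty_cache()  # release cached blocks before retrying
            batch_size //= 2          # back off and retry with a smaller batch
        else:
            raise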
|
st98532
|
Thanks for the reply. I tried to find something useful in the PyTorch CUDA docs but haven't found a solution yet. Try/except is worth a try, but it would be expensive to find the best parameters.
|
st98533
|
Here is a multilabel task, such as text classification. One text can be labeled with several labels, as in the following example:
text_a = […001000100001110111…]; there are N labels, so the dimension of text_a's label vector is N.
My idea is to feed the text into an RNN, then map the output to a vector with the same dimension as the number of labels, i.e. N.
There are three candidate loss functions I am thinking of:
KLDivLoss
The true labels and the predictions can be seen as two distributions. The model tries to make them more similar.
MultiLabelSoftMarginLoss
Is this selection right? How do I use this loss function if it is suitable for the aforementioned task?
Bayesian Personal Ranking
Is there a ready-made function for BPR in PyTorch?
What is the difference between using these loss functions for such a task? Are there any other options?
Thanks~~~
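For reference, a minimal usage sketch for the second candidate (my addition, not from the thread): nn.MultiLabelSoftMarginLoss takes logits of shape (batch, N) and a multi-hot target of the same shape.
import torch
import torch.nn as nn

criterion = nn.MultiLabelSoftMarginLoss()
logits = torch.randn(4, 10)                    # e.g. RNN output mapped to N=10 labels
target = torch.randint(0, 2, (4, 10)).float()  # multi-hot label vectors
loss = criterion(logits, target)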
|
st98534
|
The types of train_labels and train_data in CIFAR10 are <type 'list'> and <type 'numpy.ndarray'> respectively, while they are <class 'torch.LongTensor'> and <class 'torch.ByteTensor'> in MNIST. Why did you choose different types for different datasets? Both are fine, of course; I just wanted to know if there is any specific reason for these choices.
Thanks.
|
st98535
|
I think it is because MNIST is provided as a CSV in which the first column is the label and the other 784 columns are the pixels, while for CIFAR10 (https://www.kaggle.com/c/cifar-10) images and class labels are provided separately. Of course, you can write your own Dataset loader.
|
st98536
|
Is it normal that I get the following when working with MNIST because of that tensor type?
RuntimeError: Expected object of type torch.FloatTensor but found type torch.ByteTensor for argument #4 'mat1'
It is just a simple Linear layer.
|
st98537
|
It seems you loaded the images as uint8 and just transformed them into tensors.
If you are using the torchvision.datasets.MNIST data, you can just pass a transformation to it:
dataset = datasets.MNIST(
    root=YOUR_PATH,
    download=False,
    transform=transforms.ToTensor())
Otherwise you should call x = x.float() on the image tensor.
|
st98538
|
Hi,
Based on what I see in https://pytorch.org/cppdocs/, autograd in C++ currently only works when we call backward() on a leaf node, right? Do we have something like torch.autograd.grad(f, x), which returns the grad of a function f w.r.t. x?
The reason I need this is that when computing second-order derivatives, the grad is accumulated on the node if I call backward() twice on the original tensor and the grad tensor. I would need to store the first-order derivative as a temporary variable and subtract it from the result.
Any help/suggestion would be appreciated. Thanks!
|
st98539
|
Hi,
I'm training an autoencoder using this set of scripts (specifically the attention-based parts) with CUDA.
I’ve enabled cuda on every tensor I can find, and also set the usual suspects:
model.cuda()
torch.backends.cudnn.benchmark = True
torch.backends.cudnn.fastest = True
It’s definitely enabled and I can see a bit of GPU memory in use, but there is no performance speed-up at all. In fact (probably due to all the additional initialisation) I’m seeing a marginal decrease in performance.
Am I missing something obvious, or are the tensor operations simply not particularly suited to GPU?
Note: functionally the scripts work exactly as expected and produce a working trained model.
Thanks!
|
st98540
|
Hi…
Let's start by understanding your machinery.
What base OS do you have?
Is it a local machine, or are you running on cloud computing?
What versions of CUDA, cuDNN, PyTorch, and other libraries do you have?
How many CPU cores and what model of GPU do you have?
How many GPUs?
How did you install each piece of software?
Pre-compiled, or did you compile it yourself?
If so, what tutorials or scripts did you use to do so?
Let's start the debate from there.
|
st98541
|
Hi,
Thanks, I should maybe have clarified, I’m already running other scripts successfully using cuda on this instance. For example there is a set of RNNs for which I’m seeing a very clear 15X speed-up when enabling cuda. Although the attention-based autoencoder works a little differently, I was hopeful of seeing at least a measurable increase when enabling cuda.
What I’m trying to understand is why there isn’t any improvement in performance. For example, are the type of tensor operations in the attention-based autoencoder not optimised for GPU? Or am I missing something more obvious?
I don’t want to clutter the thread with my version of the scripts, but basically I perform an initial check with:
torch.cuda.is_available()
This succeeds as True, and subsequent enablings of cuda on tensors proceed without error.
Regarding your question, I’ve tried the attention-based scripts (with some modifications for cuda) on two different Cloud platforms. One of them is an AWS Sagemaker instance, info below.
Instance type: ml.p2.xlarge (4 X vCPU, 1 X K80 GPU, 61GiB Memory)
Notebook kernel: conda_pytorch_p27
Kernel: 4.14.72-68.55.amzn1.x86_64 (Amazon modified)
OS: Red Hat 7.2.1-2
Pytorch: 0.4.1
Cuda: 9.2.148
Cudnn: 7104
These are all provided by Sagemaker by default.
Thanks!
|
st98542
|
Hi again,
Thanks for helping me understand your environment a little better.
There are some occasions where the CPU and GPU may have the same performance.
It depends on the architecture of your network, and may depend on the data volume you are using.
Maybe the network is too shallow.
Maybe by increasing the batch size of your GPU training you can see the difference compared to your CPU.
Maybe the code has some latency between each batch sent to the GPU.
Without seeing your code/model and the way your data is trained, it is hard to have a clue.
Note: the GPU accelerates matrix computations, but if your computations are not big or difficult to compute, almost the same performance can be achieved by the CPU (a modern GPU has 1024+ threads, while the top CPU today has only about 200 threads).
Your instance has 4 cores that may give 8 threads, but to me this means your data is so easy to process that the CPU has no problem doing it.
|
st98543
|
Hi!
Thanks for the explanation, I see what you’re getting at.
As per your suggestion I’m going to play around a bit more with the input and hyperparameters. I’ll post back with any new findings.
|
st98544
|
I have the following bottleneck in my model:
I need to normalize groups of elements in my input tensors. The groups of elements that need to be normalized together are given by lists. Those lists are specific to a given layer in the model and do not change when another input tensor is treated.
In the example below, the input tensor ln_input is of shape [16,14], and there are 2 groups of elements to normalize using softmax:
Elements:[0,0],[0,1],[1,0],[1,1]
Elements:[10,10],[10,11]
Those 2 groups are each saved as a list of tuples in the dictionary coord_mapping.
coord_mapping is fixed, while there are many different ln_input for which I need to repeat the operation. Therefore any expensive modification of coord_mapping data structure is acceptable.
Here is a minimal working example:
import torch
import torch.nn as nn
from collections import defaultdict
## normalization function
softmax0 = nn.Softmax(dim=0)
## input tensor
h_in = 16 # input height
w_in = 14 # input width
ln_input = torch.zeros([h_in,w_in])
output = torch.zeros_like(ln_input)
## groups of elements to normalize together
coord_mapping = defaultdict(list)
coord_mapping[1].append((0,0))
coord_mapping[1].append((0,1))
coord_mapping[1].append((1,0))
coord_mapping[1].append((1,1))
coord_mapping[2].append((10,10))
coord_mapping[2].append((10,11))
## inefficient normalization
for _, value in coord_mapping.items():
    h_list, w_list = zip(*value)
    output[h_list, w_list] = softmax0(ln_input[h_list, w_list])
## those groups of elements each sum to 1
print(output[0,0],output[0,1],output[1,0],output[1,1])
print(output[10,10],output[10,11])
How could I make this more efficient?
|
st98545
|
After reading the install tutorial for libtorch in the PyTorch cppdocs, I can use libtorch in a .cpp file. However, I get an error when I include <torch/torch.h> in a .cu file.
Following the steps in the PyTorch cppdocs, I wrote a demo as follows:
CMakeLists.txt
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(torchcpp)
find_package(CUDA REQUIRED)
find_package(Torch REQUIRED)
enable_language(CUDA)
add_executable(torchcpp tmp.cu)
target_link_libraries(torchcpp "${TORCH_LIBRARIES}")
# set_property(TARGET torchcpp PROPERTY CXX_STANDARD 11)
set_property(TARGET torchcpp PROPERTY CUDA_STANDARD 11)
tmp.cu (the demo code is silly; however, I just want to test):
#include <torch/torch.h>
#include <iostream>
using namespace std;
int main() {
    at::Tensor tensor = torch::randn({5}).cuda();
    cout << tensor << endl;
}
And follow the instruction to make:
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=my_path_to_libtorch/libtorch ..
This runs without error.
make
there is an error:
xxx/libtorch/include/torch/csrc/jit/argument_spec.h(56): error: static assertion failed with "ArgumentInfo is to be a POD struct"
I hope to get your help. Thank you!
Addition: if I change the header includes to
#include <ATen/ATen.h>
#include <ATen/cuda/CUDAContext.h>
#include <iostream>
using namespace std;
using namespace at;
int main() {
    auto tensor = at::randn({5}).cuda();
    cout << tensor << endl;
}
then it builds correctly.
|
st98546
|
I have this TensorFlow model that I'd like to convert to PyTorch. I tried reading the documentation, but it's still a little fuzzy. Could someone help me?
upsampled = tf.image.resize_images(white_lr, tf.shape(label)[1:3])
C0 = 25
D = 5
h = tf.concat([one_hot_label, upsampled], axis=-1)
hs = []
for i in range(D):
    hs.append(h)
    h = tf.contrib.layers.conv2d(h, int(C0*1.5**i), (3,3), stride=2, scope='conv%d'%(i+1))
h = tf.concat([h, tf.image.resize_images(white_lr, tf.shape(h)[1:3])], axis=-1)
for i in range(D)[::-1]:
    h = tf.contrib.layers.conv2d_transpose(h, int(C0*1.5**i), (3,3), stride=2, scope='upconv%d'%(i+1))
    h = tf.concat([h, hs[i]], axis=-1)
h = tf.contrib.layers.conv2d(h, C0, (1,1), scope='fc1')
h = tf.contrib.layers.conv2d(h, 3, (1,1), scope='cls', activation_fn=None)
h = h + upsampled
|
st98547
|
Could you explain the code a bit more? Currently I'm not sure what shapes some values have, so re-creating the model is a bit tricky.
It seems upsampled is an image tensor. I'm not sure what h should represent, as you are concatenating a one-hot encoded tensor with this image.
Could you clarify?
I’m sure we can reimplement the model, if you could add some comments to the code (in the best case with shape information!).
|
st98548
|
I'm trying to replace another NN framework with PyTorch. I can compile the simple demos with no problem, but I run into trouble when I try to compile a library for inclusion in another binary.
As a typical configuration (which worked with the previous architecture) I’m building the library as:
file(GLOB SOURCES src/*.cpp src/*.cu)
cuda_add_library(${ModuleName} SHARED ${SOURCES})
target_include_directories(${ModuleName} PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>)
target_include_directories(${ModuleName} PRIVATE ${TORCH_INCLUDE_DIRS})
target_link_libraries(${ModuleName} image engine cuda ${TORCH_LIBRARIES})
I then export this using something like:
install(TARGETS ${ModuleName}
EXPORT ${ModuleName}Targets
LIBRARY DESTINATION lib/${ProjectName}
INCLUDES DESTINATION include/${ProjectName})
install(DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/include/${ModuleName} DESTINATION include/${ProjectName})
install(EXPORT ${ModuleName}Targets DESTINATION share/cmake/${ProjectName})
In this case, cmake gives errors about not being able to find caffe2 and caffe2_gpu. If I try to add them to any of the install commands or to their own export, I get errors about them not having been built within this module.
I’ve also tried making an interface library and exporting that instead. Still no success. Any pointers?
|
st98549
|
Is there a native way in PyTorch to shuffle the elements of a tensor?
I tried generating a random permutation of indices with torch.randperm() and applying it using torch.index_select(), but I was only able to shuffle rows/columns with this technique.
|
st98550
|
Solved by tumble-weed in post #2
you can do tensor.view(-1) to view it as a flat tensor, apply the shuffling and then view it in the original shape again
|
st98551
|
you can do tensor.view(-1) to view it as a flat tensor, apply the shuffling and then view it in the original shape again
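A minimal sketch of that suggestion (my code, not from the thread):
import torch

t = torch.arange(12).view(3, 4)
idx = torch.randperm(t.numel())
t_shuffled = t.view(-1)[idx].view(t.size())  # flatten, shuffle, restore the shape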
|
st98552
|
tumble-weed:
tensor.view(-1)
Thanks, that works! Although it feels a little bit like a workaround.
|
st98553
|
My loss has a "NaN" problem during training. It is caused by a tensor in which some elements are zeros, but they shouldn't be zeros. When I printed the tensor, I found a very strange phenomenon. See below:
print(alpha)
result:
tensor([[8.4926e-06, 2.9750e-04, 2.0732e-05, 6.5416e-05, 1.2785e-04, 1.3783e-04,
8.0852e-05, 2.9128e-04, 8.3976e-06, 3.4069e-05],
[4.3417e-06, 4.6478e-06, 4.7960e-07, 1.1161e-05, 2.4620e-06, 3.6034e-03,
2.8802e-06, 3.2489e-07, 7.1982e-06, 7.9321e-06],
[3.5168e-04, 1.3959e-04, 6.6393e-03, 9.1526e-04, 1.3291e-04, 3.9203e-05,
2.0094e-05, 8.3743e-05, 1.3102e-04, 1.3114e-04],
[2.2118e-05, 3.0005e-05, 4.5028e-05, 2.7926e-02, 4.7457e-04, 1.3916e-04,
3.8518e-06, 1.7940e-05, 6.8158e-05, 2.4488e-05],
[1.1020e-04, 9.7328e-06, 9.2700e-05, 2.2556e-02, 1.9584e-04, 2.0269e-04,
6.9351e-06, 1.5777e-05, 2.5748e-04, 4.8471e-05],
[5.4854e-03, 6.5477e-06, 1.3129e-04, 2.8175e-05, 1.2210e-05, 8.0755e-06,
9.8790e-05, 3.7378e-06, 3.2873e-04, 3.7017e-05],
[3.2331e-04, 2.5080e-06, 7.0140e-06, 1.2707e-05, 5.7030e-05, 6.1795e-04,
1.0593e-02, 1.4990e-06, 7.0081e-05, 1.4437e-05],
[4.5361e-05, 1.7738e-04, 5.7259e-06, 1.1173e-04, 1.4167e-04, 2.3912e-02,
7.2580e-05, 5.9300e-06, 2.1757e-05, 4.4645e-05],
[8.0816e-05, 9.9628e-06, 4.9268e-05, 2.9979e-04, 4.9817e-06, 2.5931e-05,
3.2751e-05, 1.4161e-05, 2.5827e-02, 3.5003e-04],
[1.7151e-05, 4.0785e-06, 8.5164e-05, 3.1453e-04, 9.0293e-06, 1.0992e-05,
2.4123e-06, 1.2716e-05, 4.0623e-04, 2.6726e-05],
[4.6351e-04, 3.3088e-05, 4.1036e-04, 2.5175e-04, 2.6937e-05, 5.9167e-05,
1.1110e-04, 2.1697e-05, 7.3982e-03, 1.1573e-04],
[1.6273e-02, 1.0944e-05, 3.8956e-04, 9.3451e-06, 1.0117e-05, 1.1785e-06,
3.2926e-05, 3.4684e-06, 1.2162e-05, 1.8236e-05]],
dtype=torch.float64)
for i in range(len(alpha)):
    print(alpha[i])
result:
tensor([8.4926e-06, 2.9750e-04, 2.0732e-05, 6.5416e-05, 1.2785e-04, 1.3783e-04,
8.0852e-05, 2.9128e-04, 8.3976e-06, 3.4069e-05], dtype=torch.float64)
tensor([4.3417e-06, 4.6478e-06, 4.7960e-07, 1.1161e-05, 2.4620e-06, 3.6034e-03,
2.8802e-06, 3.2489e-07, 7.1982e-06, 7.9321e-06], dtype=torch.float64)
tensor([0.0004, 0.0001, 0.0066, 0.0009, 0.0001, 0.0000, 0.0000, 0.0001, 0.0001,
0.0001], dtype=torch.float64)
tensor([2.2118e-05, 3.0005e-05, 4.5028e-05, 2.7926e-02, 4.7457e-04, 1.3916e-04,
3.8518e-06, 1.7940e-05, 6.8158e-05, 2.4488e-05], dtype=torch.float64)
tensor([1.1020e-04, 9.7328e-06, 9.2700e-05, 2.2556e-02, 1.9584e-04, 2.0269e-04,
6.9351e-06, 1.5777e-05, 2.5748e-04, 4.8471e-05], dtype=torch.float64)
tensor([5.4854e-03, 6.5477e-06, 1.3129e-04, 2.8175e-05, 1.2210e-05, 8.0755e-06,
9.8790e-05, 3.7378e-06, 3.2873e-04, 3.7017e-05], dtype=torch.float64)
tensor([3.2331e-04, 2.5080e-06, 7.0140e-06, 1.2707e-05, 5.7030e-05, 6.1795e-04,
1.0593e-02, 1.4990e-06, 7.0081e-05, 1.4437e-05], dtype=torch.float64)
tensor([4.5361e-05, 1.7738e-04, 5.7259e-06, 1.1173e-04, 1.4167e-04, 2.3912e-02,
7.2580e-05, 5.9300e-06, 2.1757e-05, 4.4645e-05], dtype=torch.float64)
tensor([8.0816e-05, 9.9628e-06, 4.9268e-05, 2.9979e-04, 4.9817e-06, 2.5931e-05,
3.2751e-05, 1.4161e-05, 2.5827e-02, 3.5003e-04], dtype=torch.float64)
tensor([1.7151e-05, 4.0785e-06, 8.5164e-05, 3.1453e-04, 9.0293e-06, 1.0992e-05,
2.4123e-06, 1.2716e-05, 4.0623e-04, 2.6726e-05], dtype=torch.float64)
tensor([0.0005, 0.0000, 0.0004, 0.0003, 0.0000, 0.0001, 0.0001, 0.0000, 0.0074,
0.0001], dtype=torch.float64)
tensor([1.6273e-02, 1.0944e-05, 3.8956e-04, 9.3451e-06, 1.0117e-05, 1.1785e-06,
3.2926e-05, 3.4684e-06, 1.2162e-05, 1.8236e-05], dtype=torch.float64)
See, there are some differences! When I print a specific row (like alpha[2]), the values appear truncated to 0.0001, and some values seem to become zero. I am confused about that. Can anyone help me?
|
st98554
|
Solved by tumble-weed in post #2
Are you sure it isn't the print statement that is just displaying the values in this form? Maybe you should print alpha == 0 and see if it places a 1 anywhere; that would mean the element actually is zero there.
|
st98555
|
Are you sure it isn't the print statement that is just displaying the values in this form? Maybe you should print alpha == 0 and see if it places a 1 anywhere; that would mean the element actually is zero there.
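A minimal sketch of both checks (alpha being the tensor from the question; set_printoptions only changes how the values are displayed, not what is stored):
print((alpha == 0).any())             # no 1 here means no element is exactly zero
torch.set_printoptions(precision=10)  # show more digits instead of e.g. 0.0001
print(alpha[2])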
|
st98556
|
You are right, I can't find any 1 when printing alpha == 0. Thank you! Well, now I need to find out what is happening to my loss again hhhhhh
|
st98557
|
Hello,
I recently moved from a machine with Python 3.6, torch 0.4.1, and CUDA 9.0 to a machine where I installed Python 3.6 and torch 0.4.0 in a virtualenv; the CUDA version on that machine is 8.0.
My code was working perfectly on the first machine, but it gives an "inplace operation error" on the second machine.
I have tried to inspect the code to find where the inplace operation happens, but I can't figure it out; the problem is not even in the set of statements where I expected it to be.
My code looks like:
char_rep_lst = []
for i in range(sequence_length):
    self.char_hidden = autograd.Variable(torch.zeros(2, self.batch_size, self.char_hidden_dim).type(self.dtype))
    char_embeds = self.embed_dropout( self.char_embeddings( char_sequence[i,:,:] ) )
    self.charRNN.flatten_parameters()
    char_rep_seq, self.char_hidden = self.charRNN(char_embeds, self.char_hidden)
    char_rep_lst.append( self.char_mlp( char_rep_seq.sum(dim=0) ) )
char_rep = torch.stack( char_rep_lst )
word_embeds = self.embed_dropout( self.word_embeddings(sentence) )
self.lexRNN.flatten_parameters()
lexical_input = torch.cat( [word_embeds, self.hidden_dropout( char_rep )], 2)
lex_rep, self.lex_hidden = self.lexRNN(lexical_input, self.lex_hidden)
lex_rnn_out = torch.cat( [lex_rep, char_rep], 2)
scores = F.log_softmax(self.hidden2tag(lex_rnn_out), dim = 2)
return [scores, scores, scores]
In the last statement I return 3 scores because this is actually a simplification of the whole code, which I made to try to narrow down where the inplace operation is. In the whole code I return 3 different scores and do 2 optimization steps with 2 different optimizers, keeping the graphs with retain_graph=True. It is indeed the first backward(retain_graph=True) that triggers the "inplace operation error".
The self.char_mlp is an MLP class, defined as:
class ReLU_MLP(nn.Module):
    def __init__(self, params):
        super(ReLU_MLP, self).__init__()
        self.n_layers = params[0]
        self.input_size = params[1]
        self.output_size = params[2]
        layers = []
        for i in range(self.n_layers):
            if i == 0:
                layers.append( ('linear'+str(i+1), nn.Linear(self.input_size, self.output_size)) )
                layers.append( ('relu'+str(i+1), nn.ReLU()) )
            else:
                layers.append( ('linear'+str(i+1), nn.Linear(self.output_size, self.output_size)) )
                if i < self.n_layers-1:
                    layers.append( ('relu'+str(i+1), nn.ReLU()) )
        self.MLP = nn.Sequential( OrderedDict(layers) )

    def forward(self, input):
        return self.MLP(input)
Does anyone understand where the inplace operation is?
Is there a way to see which particular line is doing the inplace operation? I mean, instead of just seeing the trace ending up with the backward call…
Thank you in advance for any help
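One thing I plan to try (a sketch, assuming torch >= 0.4.1, where torch.autograd.detect_anomaly was introduced) is running the backward pass under anomaly detection, which should point the traceback at the forward operation that produced the tensor later modified in place:
with torch.autograd.detect_anomaly():
    scores, _, _ = model(char_sequence, sentence)  # hypothetical call into the module above
    loss = criterion(scores, targets)              # criterion and targets are placeholders
    loss.backward(retain_graph=True)               # the error now names the forward op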
|
st98558
|
When I create a tensor of a specific size with torch, the values usually seem very small, almost equal to zero. However, today I found a tensor initialized with a value of 2. Is that normal? The code I ran is in the attached screenshot.
|
st98559
|
torch.Tensor (i.e., torch.empty) allocates uninitialized memory to create the tensor. Whatever happens to be stored in this memory will be interpreted as a value. If you want to create a properly initialized random tensor, you could use torch.randn etc.
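A minimal sketch of the difference (the values of the uninitialized tensor are arbitrary and will vary from run to run):
x = torch.empty(2, 3)   # uninitialized: interprets whatever bytes happen to be in memory
y = torch.randn(2, 3)   # initialized: samples from a standard normal distribution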
|
st98560
|
I want to access TensorBoard remotely and was wondering how to SSH into Paperspace. A lot of posts on Stack Overflow (https://stackoverflow.com/questions/37987839/how-can-i-run-tensorboard-on-a-remote-server 10) instruct me to find the IP address with ifconfig and then SSH into the server. Also, how do I find my username and hostname? I've also posted the same question on the fast.ai forum (https://forums.fast.ai/t/ssh-into-tensorboard/28022 4)
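The most common recipe I have found so far (a sketch; the username, host, and log directory are placeholders for your own machine's details) is SSH port forwarding:
ssh -L 6006:localhost:6006 user@<machine-ip>    # run locally: forward local port 6006
tensorboard --logdir /path/to/logs --port 6006  # run on the remote machine
# then open http://localhost:6006 in the local browser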
|
st98561
|
Try to change the order using CUDA_VISIBLE_DEVICES=1,0 python script.py args.
This should change both devices, so that data will be accumulated on GPU1.
|
st98562
|
Thank you very much for the reply. Is this equivalent to the following?
model = nn.DataParallel(model, device_ids=[1, 0])
|
st98563
|
I think you are right! The first device_id will be used as the output_device.
Also, I think you could just set output_device to the id you want to accumulate your updates on.
Here 14 are the important lines of code.
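A minimal sketch of that idea (model, loader, and criterion are placeholders; note that the targets have to live on the output_device, since that is where the gathered output ends up):
model = nn.DataParallel(model, device_ids=[0, 1], output_device=1).cuda()
for data, target in loader:
    output = model(data.cuda(0))               # inputs are scattered from device_ids[0]
    loss = criterion(output, target.cuda(1))   # gathered output lives on GPU 1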
|
st98564
|
I did the following for PyTorch 0.4.1:
model = nn.DataParallel(model, device_ids=[3, 0, 1, 2])
And got:
torch/cuda/comm.py", line 40, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: all tensors must be on devices[0]
|
st98565
|
Does it work if you pass device_ids=[0, 3, 1, 2]? Or output_device=3?
I don’t have multiple GPUs currently, otherwise I would test it quickly.
|
st98566
|
On a 4 x 1080Ti cluster:
I passed device_ids=[3, 0, 1, 2] without setting output_device=3, and it didn't work. I got the error:
torch/cuda/comm.py", line 40, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: all tensors must be on devices[0]
However, on my own 2 x Titan Xp Linux box:
When I set CUDA_VISIBLE_DEVICES=1,0, with or without passing device_ids=[0, 1], it worked; the second Titan Xp was using more memory.
When I only passed device_ids=[1, 0] without setting CUDA_VISIBLE_DEVICES, I got this error:
torch/cuda/comm.py", line 40, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: all tensors must be on devices[0]
I passed device_ids=[0, 1] and output_device=1, and I got the following error:
torch/nn/functional.py", line 1407, in nll_loss
return torch._C._nn.nll_loss(input, target, weight, Reduction.get_enum(reduction), ignore_index)
RuntimeError: Assertion `THCTensor(checkGPU)(state, 4, input, target, output, total_weight)’ failed. Some of weight/gradient/input tensors are located on different GPUs. Please move them to a single one. at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:29
|
st98567
|
Hi, I have a question about how to collect the correct result from a bi-LSTM module's output.
Suppose I have a 10-length sequence feeding into a single-layer LSTM module with 100 hidden states:
lstm = nn.LSTM(5, 100, 1, bidirectional=True)
The output will have dimensions:
[10 (seq_length), 1 (batch), 200 (num_directions * hidden_size)]
# or according to the doc, can be viewed as
[10 (seq_length), 1 (batch), 2 (num_directions), 100 (hidden_size)]
If I want to get the 3rd (1-indexed) input's output in both directions (two 100-dim vectors), how can I do it correctly?
I know output[2, 0] will give me a 200-dim vector. Does this 200-dim vector represent the output of the 3rd input in both directions?
A thing bothering me is that when feeding in reverse, the 3rd (1-indexed) output vector is calculated from the 8th (1-indexed) input, right?
Will PyTorch automatically take care of this and group the output by direction?
Thanks!
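A minimal sketch of how to separate the two directions (assuming the layout described in the docs, where output[t] concatenates the forward and reverse hidden states for input t):
lstm = nn.LSTM(5, 100, 1, bidirectional=True)
x = torch.randn(10, 1, 5)            # (seq_len, batch, input_size)
output, (h_n, c_n) = lstm(x)         # output: (10, 1, 200)
out = output.view(10, 1, 2, 100)     # (seq_len, batch, num_directions, hidden_size)
fwd_3 = out[2, 0, 0]                 # forward state for the 3rd input (has seen inputs 1..3)
bwd_3 = out[2, 0, 1]                 # backward state for the 3rd input (has seen inputs 10..3)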
|