st101500 | In NLLLoss for multiple dimensions, I see that the log-probs tensor has to be arranged as (N x C x d1 x d2 …)
and the target as (N x d1 x d2 …). Why is this required?
Why can't (N x d1 x d2 … dk x C) and (N x d1 x d2 … dk x 1) [after unsqueezing the last dimension] work, with the loss figuring out the layout itself, since only one dimension differs? |
st101501 | Solved by richard in post #3 |
st101502 | It’s standard for pytorch tensors to be organized in a (N, C, <other_sizes>) fashion. For example, images would be represented as (N, C, H, W).
You could permute the dimensions to use NLLLoss:
tensor # Nxd1xd2…dkxC
tensor.permute(0, 2, 3, ..., 1) |
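A concrete, runnable sketch of this permute approach for the two-extra-dimensions case (the shapes are illustrative, not from the post):
import torch
import torch.nn as nn
import torch.nn.functional as F

N, C, d1, d2 = 4, 3, 5, 5
log_probs = F.log_softmax(torch.randn(N, d1, d2, C), dim=-1)  # channel-last layout
target = torch.randint(0, C, (N, d1, d2), dtype=torch.long)

# NLLLoss expects (N, C, d1, d2), so move C back to dim 1 before calling it
loss = nn.NLLLoss()(log_probs.permute(0, 3, 1, 2), target)
print(loss)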
st101503 | Hi.
I have a very simple question. Given a tensor T = torch.Tensor([1,2,3,4]), how do I get a temporary copy with the first s elements set to 0?
For now, I have the following:
T_tmp = T.clone()
T_tmp[:s] = 0
distribution(T_tmp…)
I am wondering if there are better alternatives because I only need to pass the copy through a function (distribution), and preceding my call with two lines is annoying.
Thanks! |
st101504 | Not the best solution, but at least just one line:
T.clone().index_put_([torch.arange(0, s, out=torch.LongTensor())], torch.tensor([0], dtype=torch.float))
It seems no fix has been made yet for the output type of torch.arange? (https://github.com/pytorch/pytorch/issues/2812) |
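For reference, another one-liner (my own sketch, assuming T is a 1-D float tensor):
import torch

T = torch.tensor([1., 2., 3., 4.])
s = 2
# Zeros for the first s elements, then the untouched tail
T_tmp = torch.cat([torch.zeros(s, dtype=T.dtype), T[s:]])
print(T_tmp)  # tensor([0., 0., 3., 4.])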
st101505 | I was following the excellent tutorials on PyTorch's website. I modified the code from "An LSTM for Part-of-Speech Tagging" to implement the exercise, which requires adding another LSTM to get a char-level representation of words, concatenating it with the word embedding, and training to learn the tags.
My network code is as follows:
class LSTMTaggerWithChar(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, embedding_dim_char, hidden_dim_char, vocab_size, vocab_size_char, target_size):
        super(LSTMTaggerWithChar, self).__init__()
        self.hidden_dim = hidden_dim
        self.hidden_dim_char = hidden_dim_char
        self.embedding_char = nn.Embedding(vocab_size_char, embedding_dim_char)
        self.lstm_char = nn.LSTM(embedding_dim_char, hidden_dim_char)
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, target_size)
        self.hidden = self.init_hidden()
        self.hidden_char = self.init_hidden_char()

    def init_hidden(self):
        return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim)), autograd.Variable(torch.zeros(1, 1, self.hidden_dim)))

    def init_hidden_char(self):
        return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim_char)), autograd.Variable(torch.zeros(1, 1, self.hidden_dim_char)))

    def forward(self, sentence, words):
        for ix, word in enumerate(sentence):
            chars = words[ix]
            # self.hidden_char = self.init_hidden_char()  # Should I re-initialize the hidden_char tensor here?
            char_embeds = self.embedding_char(chars).view(len(chars), 1, -1)
            lstm_char_out, self.hidden_char = self.lstm_char(char_embeds, self.hidden_char)
            char_rep = lstm_char_out[-1]
            embeds = self.embedding(word).view(1, 1, -1)
            lstm_out, self.hidden = self.lstm(embeds, self.hidden)
            tag_score = F.log_softmax(self.hidden2tag(lstm_out.view(1, -1)))
            if ix == 0:
                tag_scores = tag_score
            else:
                tag_scores = torch.cat((tag_scores, tag_score), 0)
        return tag_scores
Here, even if I uncomment the line self.hidden_char = self.init_hidden_char() in the forward function, I get the same results. I don't understand why this happens.
Also, I think the character LSTM's hidden state should be reset after it spits out the representation for a word, assuming the representation of the next word is unrelated to the previous word. But if I do that, I am not clear how back-propagation will happen for the character LSTM when loss.backward() is called during training.
Is it OK to have for loops in the forward function? How can they be avoided? |
st101506 | I am learning PyTorch by going through the tutorial as well. I tried to implement the character-level LSTM for POS tagging, and the code is below. Please correct me if you find something unreasonable.
# -*- coding: utf-8 -*-
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

######################################################################
# Example: An LSTM for Part-of-Speech Tagging
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Prepare data:

def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    tensor = torch.LongTensor(idxs)
    return autograd.Variable(tensor)

training_data = [
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
char_to_ix = {}
MAX_WORD_LEN = 0
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
        for ch in word:
            if ch not in char_to_ix:
                char_to_ix[ch] = len(char_to_ix)
        if len(word) > MAX_WORD_LEN:
            MAX_WORD_LEN = len(word)
char_to_ix[' '] = len(char_to_ix)
print(word_to_ix)
print(char_to_ix)
print(MAX_WORD_LEN)

tag_to_ix = {"DET": 0, "NN": 1, "V": 2}

# These will usually be more like 32 or 64 dimensional.
# We will keep them small, so we can see how the weights change as we train.
EMBEDDING_DIM = 16
HIDDEN_DIM = 16
CHAR_EMBEDDING_DIM = 3
CHAR_HIDDEN_DIM = 3

######################################################################
# Create the model:

class LSTMTagger(nn.Module):
    def __init__(self, embedding_dim, hidden_dim, vocab_size, char_hidden_dim, char_embedding_dim, alphabet_size, max_word_len, tagset_size):
        super(LSTMTagger, self).__init__()
        # word embedding
        self.hidden_dim = hidden_dim
        self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim)
        # char embedding
        self.char_hidden_dim = char_hidden_dim
        self.char_embeddings = nn.Embedding(alphabet_size, char_embedding_dim)
        self.char_lstm = nn.LSTM(char_embedding_dim, char_hidden_dim)
        self.overall_hidden_dim = hidden_dim + max_word_len * char_hidden_dim
        # The linear layer that maps from hidden state space to tag space
        self.hidden2tag = nn.Linear(self.overall_hidden_dim, tagset_size)
        self.hidden = self.init_hidden()
        self.char_hidden = self.init_hidden(isChar=True)

    def init_hidden(self, isChar=False):
        # Before we've done anything, we dont have any hidden state.
        # Refer to the Pytorch documentation to see exactly
        # why they have this dimensionality.
        # The axes semantics are (num_layers, minibatch_size, hidden_dim)
        if isChar:
            return (autograd.Variable(torch.zeros(1, 1, self.char_hidden_dim)),
                    autograd.Variable(torch.zeros(1, 1, self.char_hidden_dim)))
        else:
            return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim)),
                    autograd.Variable(torch.zeros(1, 1, self.hidden_dim)))

    def forward(self, sentence, chars):
        embeds = self.word_embeddings(sentence)
        # print 'LEN SENTENCE', len(sentence)
        # print 'HIDDEN', self.hidden
        lstm_out, self.hidden = self.lstm(
            embeds.view(len(sentence), 1, -1), self.hidden)
        embedc = self.char_embeddings(chars)
        char_lstm_out, self.char_hidden = self.char_lstm(embedc.view(len(chars), 1, -1), self.char_hidden)
        merge_out = torch.cat((lstm_out.view(len(sentence), -1), char_lstm_out.view(len(sentence), -1)), 1)
        tag_space = self.hidden2tag(merge_out)
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores

######################################################################
# Train the model:

model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), CHAR_EMBEDDING_DIM, CHAR_HIDDEN_DIM, len(char_to_ix), MAX_WORD_LEN, len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the scores are before training
# Note that element i,j of the output is the score for tag j for word i.
inputs = prepare_sequence(training_data[0][0], word_to_ix)
sent_chars = []
for w in training_data[0][0]:
    sps = ' ' * (MAX_WORD_LEN - len(w))
    sent_chars.extend(list(sps + w))
inputc = prepare_sequence(sent_chars, char_to_ix)
tag_scores = model(inputs, inputc)
print(tag_scores)

for epoch in range(300):  # again, normally you would NOT do 300 epochs, it is toy data
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        # We need to clear them out before each instance
        model.zero_grad()
        # Also, we need to clear out the hidden state of the LSTM,
        # detaching it from its history on the last instance.
        model.hidden = model.init_hidden()
        model.char_hidden = model.init_hidden(isChar=True)
        # Step 2. Get our inputs ready for the network, that is, turn them into
        # Variables of word indices.
        sentence_in = prepare_sequence(sentence, word_to_ix)
        sent_chars = []
        for w in sentence:
            sps = ' ' * (MAX_WORD_LEN - len(w))
            sent_chars.extend(list(sps + w))
        char_in = prepare_sequence(sent_chars, char_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)
        # Step 3. Run our forward pass.
        tag_scores = model(sentence_in, char_in)
        # Step 4. Compute the loss, gradients, and update the parameters by
        # calling optimizer.step()
        loss = loss_function(tag_scores, targets)
        loss.backward()
        optimizer.step()

# See what the scores are after training
inputs = prepare_sequence(training_data[0][0], word_to_ix)
sent_chars = []
for w in training_data[0][0]:
    sps = ' ' * (MAX_WORD_LEN - len(w))
    sent_chars.extend(list(sps + w))
inputc = prepare_sequence(sent_chars, char_to_ix)
tag_scores = model(inputs, inputc)
# The sentence is "the dog ate the apple". i,j corresponds to score for tag j
# for word i. The predicted tag is the maximum scoring tag.
# Here, we can see the predicted sequence below is 0 1 2 0 1
# since 0 is index of the maximum value of row 1,
# 1 is the index of maximum value of row 2, etc.
# Which is DET NOUN VERB DET NOUN, the correct sequence!
print(tag_scores)
The results look like this:
Variable containing:
-0.0829 -2.7836 -4.0300
-6.9329 -0.0083 -4.9270
-3.9040 -3.5350 -0.0506
-0.0214 -4.8225 -4.3353
-4.4914 -0.0152 -5.5591
[torch.FloatTensor of size 5x3]
It looks correct:
The(0:DET) dog(1:NN) ate(2:V) the(0:DET) apple(1:NN)
The key is to concatenate the two hidden tensors before they are fed into the hidden2tag layer. |
st101507 | I did not look at your code thoroughly, but it seems like your code doesn't work the way the tutorial requires. It says "to get the character level representation, do an LSTM over the characters of a word, and let the character-level representation of the word be the final hidden state of this LSTM", but I could not find a line for this.
Also, these two lines don't make sense to me, since you reshaped char_lstm_out with len(sentence). The sequence lengths of a sentence in words and in characters are different.
mutux:
char_lstm_out, self.char_hidden = self.char_lstm(embedc.view(len(chars), 1, -1), self.char_hidden)
merge_out = torch.cat((lstm_out.view(len(sentence), -1), char_lstm_out.view(len(sentence), -1)), 1) |
st101508 | Did you ever figure out the answer to your questions? I converged to almost the same code and have the same doubts.
Also, I noticed that my implementation became incredibly slow after adding the char lstm. Did that happen to you as well? |
st101509 | @silpara it seems you forgot to concatenate the character representation of words with the word embeddings as input to the 2nd LSTM in your code? In __init__, you should have:
self.lstm = nn.LSTM(embedding_dim + hidden_dim_char, hidden_dim)
And then in forward:
lstm_out, self.hidden = self.lstm(torch.cat([embeds, char_rep], dim=2), self.hidden)
# To effectively have a tensor of shape (1, 1, embedding_dim + hidden_dim_char) as input here
Now to answer your questions, I would say:
Maybe because char_rep was not used? Not so sure why otherwise…
I agree with the reset, but resetting the hidden states does not change how gradients are accumulated at each .backward() call, so you still get training here if, for instance, you update your gradients between each word.
For a very large model running on GPU, I would say that a for loop in the forward method is problematic and will slow down computation… though I am not sure about that. But for safety, this can be avoided by looping on words outside the call to model(sentence, words) (and in that case you would modify your code so that forward takes one word at a time?)
@rgalhama I think it makes sense that it is slower; before, we had one LSTM for a sequence of words, and now, for each of these words, another LSTM computes each sequence of characters… I observed it being 5 to 10x slower…
Here is my code for this. I tried to get two versions, one following the same training schedule as in the tutorial example, i.e. inputting the entire sentence at once, and another one working word by word.
The advantage of processing a whole sentence is putting the sequence through at once, which is faster than repeating the same operation len(sequence) times. However, I wanted to avoid looping on words within the forward method, so I had to pack the list of character sequences somehow, which also means I had to give up on resetting char_lstm's hidden states between words (this is not the case for the word-by-word version, however).
The problem is that the list of chars has variable lengths (e.g. for "Everybody read that book" >> (9,4,4,4)), and one way I found to handle this in PyTorch is to use torch.nn.utils.rnn.pack_sequence (from version 0.4, current master on GitHub).
The two versions are embedded in the same class LSTMCharTagger: the classic forward method implements the version working on whole sentences, with PackedSequence and no resetting of char_lstm's hidden states between words; then forward_one_word implements the second version, where I train and accumulate gradients at each word (note: in both cases I zero out the gradients and update the parameters at the sentence level).
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import numpy as np
from time import time

torch.manual_seed(1)

def prepare_sequence(seq, to_ix):
    idxs = [to_ix[w] for w in seq]
    tensor = torch.LongTensor(idxs)
    return autograd.Variable(tensor)

training_data = [
    ("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
    ("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
for sent, tags in training_data:
    for word in sent:
        if word not in word_to_ix:
            word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}

char_to_ix = {}
for sent, _ in training_data:
    for w in sent:
        for char in w:
            if char not in char_to_ix:
                char_to_ix[char] = len(char_to_ix)

EMBEDDING_DIM = 6
HIDDEN_DIM = 6
CHAR_EMBEDDING = 3
CHAR_LEVEL_REPRESENTATION_DIM = 3

def prepare_both_sequences(sentence, word_to_ix, char_to_ix):
    chars = [prepare_sequence(w, char_to_ix) for w in sentence]
    return prepare_sequence(sentence, word_to_ix), chars

class LSTMCharTagger(nn.Module):
    '''
    Augmented model, takes both sequences of words and chars to predict tags.
    Characters are embedded and then get their own representation for each WORD.
    It is this representation that is merged with word embeddings and then fed to the sequence
    LSTM which decodes the tags.
    '''
    def __init__(self, word_embedding_dim, char_embedding_dim, hidden_dim,
                 hidden_char_dim, vocab_size, charset_size, tagset_size):
        super(LSTMCharTagger, self).__init__()
        self.hidden_dim = hidden_dim
        self.hidden_char_dim = hidden_char_dim
        # Word embedding:
        self.word_embedding = nn.Embedding(vocab_size, word_embedding_dim)
        # Char embedding and encoding into char-lvl representation of words (c_w):
        self.char_embedding = nn.Embedding(charset_size, char_embedding_dim)
        self.char_lstm = nn.LSTM(char_embedding_dim, hidden_char_dim)
        # Sequence model:
        self.lstm = nn.LSTM(word_embedding_dim + hidden_char_dim, hidden_dim)
        self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
        # Init hidden state for lstms
        self.hidden = self.init_hidden(self.hidden_dim)
        self.hidden_char = self.init_hidden(self.hidden_char_dim)

    def init_hidden(self, size, batch_size=1):
        "Batch size argument used when PackedSequence are used"
        return (autograd.Variable(torch.zeros(1, batch_size, size)),
                autograd.Variable(torch.zeros(1, batch_size, size)))

    def forward_one_word(self, word_sequence, char_sequence):
        ''' For word by word processing.
        '''
        # Word Embedding
        word_embeds = self.word_embedding(word_sequence)
        # Char lvl representation of each word with 1st LSTM
        char_embeds = self.char_embedding(char_sequence)
        char_lvl, self.hidden_char = self.char_lstm(char_embeds.view(len(char_sequence), 1, -1), self.hidden_char)
        # Merge
        merged = torch.cat([word_embeds.view(1, 1, -1), char_lvl[-1].view(1, 1, -1)], dim=2)
        # Predict tag with 2nd LSTM:
        lstm_out, self.hidden = self.lstm(merged, self.hidden)
        tag_space = self.hidden2tag(lstm_out.view(1, -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores

    def forward(self, word_sequence, char_sequence):
        ''' Importantly, char_sequence is a list of tensors, one per word, and one tensor
        must represent a whole sequence of characters for a given word.
        E.g.: if word_sequence has length 4, char_sequence must be of length 4, thus char_lstm
        will output 4 char-level word representations (c_w).
        Here we deal with variable lengths of character tensor sequences using nn.utils.rnn.pack_sequence
        '''
        # Word Embedding
        word_embeds = self.word_embedding(word_sequence)
        # Char lvl representation of each word with 1st LSTM
        # We will pack variable length embeddings in PackedSequence. Must sort by decreasing length first.
        sorted_length = np.argsort([char_sequence[k].size()[0] for k in range(len(char_sequence))])
        sorted_length = sorted_length[::-1]  # decreasing order
        char_embeds = [self.char_embedding(char_sequence[k]) for k in sorted_length]
        packed = nn.utils.rnn.pack_sequence(char_embeds)  # pack variable length sequence
        out, self.hidden_char = self.char_lstm(packed, self.hidden_char)
        encodings_unpacked, seqlengths = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)  # unpack and pad
        # We need to take only the last element in the sequence of lstm char output for each word:
        unsort_list = np.argsort(sorted_length)  # indices to put list of encodings in original word order
        char_lvl = torch.stack([encodings_unpacked[k][seqlengths[k] - 1] for k in unsort_list])
        # Merge
        merged = torch.cat([word_embeds, char_lvl], dim=1)  # gives tensor of size (#words, #concatenated features)
        # Predict tag with 2nd LSTM:
        lstm_out, self.hidden = self.lstm(merged.view(len(word_sequence), 1, -1), self.hidden)
        tag_space = self.hidden2tag(lstm_out.view(len(word_sequence), -1))
        tag_scores = F.log_softmax(tag_space, dim=1)
        return tag_scores

def get_batch_size(seq2pack):
    "Need this to correctly initialize batch lstm hidden states when packing variable length sequences..."
    sorted_length = np.argsort([seq2pack[k].size()[0] for k in range(len(seq2pack))])
    sorted_length = sorted_length[::-1]  # decreasing order
    packed = nn.utils.rnn.pack_sequence([seq2pack[k] for k in sorted_length])
    return max(packed.batch_sizes)

model = LSTMCharTagger(EMBEDDING_DIM, CHAR_EMBEDDING, HIDDEN_DIM, CHAR_LEVEL_REPRESENTATION_DIM,
                       len(word_to_ix), len(char_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# See what the scores are before training
words_in, chars_in = prepare_both_sequences(training_data[0][0], word_to_ix, char_to_ix)
model.hidden_char = model.init_hidden(model.hidden_char_dim, batch_size=get_batch_size(chars_in))
tag_score = model(words_in, chars_in)
print(tag_score)

t0 = time()
for epoch in range(300):
    for sentence, tags in training_data:
        # Step 1. Remember that Pytorch accumulates gradients.
        model.zero_grad()
        # Step 2. Get our inputs ready
        sentence_in, chars_in = prepare_both_sequences(sentence, word_to_ix, char_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)
        model.hidden = model.init_hidden(model.hidden_dim)
        model.hidden_char = model.init_hidden(model.hidden_char_dim, batch_size=get_batch_size(chars_in))
        # Step 3. Run our forward pass.
        tag_score = model(sentence_in, chars_in)
        # Step 4. Compute the loss, gradients, and update the parameters
        loss = loss_function(tag_score, targets)
        loss.backward()
        optimizer.step()
print("300 epochs in %.2f sec for model with packed sequences" % (time() - t0))

model = LSTMCharTagger(EMBEDDING_DIM, CHAR_EMBEDDING, HIDDEN_DIM, CHAR_LEVEL_REPRESENTATION_DIM,
                       len(word_to_ix), len(char_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

t0 = time()
for epoch in range(300):
    for sentence, tags in training_data:
        sentence_score = []
        # Step 1. Remember that Pytorch accumulates gradients.
        model.zero_grad()
        # Step 2. Get our inputs ready
        sentence_in, chars_in = prepare_both_sequences(sentence, word_to_ix, char_to_ix)
        targets = prepare_sequence(tags, tag_to_ix)
        model.hidden = model.init_hidden(model.hidden_dim)
        # model.hidden_char = model.init_hidden(model.hidden_char_dim)
        # Step 3. Run our forward pass on each word
        for k in range(len(sentence)):
            # Clear hidden state between EACH word (char level representation must be independent of previous word)
            model.hidden_char = model.init_hidden(model.hidden_char_dim)
            tag_score = model.forward_one_word(sentence_in[k], chars_in[k])
            sentence_score.append(tag_score)
            loss = loss_function(tag_score, targets[k].view(1,))
            loss.backward(retain_graph=True)  # accumulate gradients now
        # tag_score = autograd.Variable(torch.cat(sentence_score), requires_grad=True)
        # Step 4. Update parameters at the end of sentence
        optimizer.step()
print("300 epochs in %.2f sec for model at word level" % (time() - t0))

# See what the scores are after training
words_in, chars_in = prepare_both_sequences(training_data[0][0], word_to_ix, char_to_ix)
model.hidden_char = model.init_hidden(model.hidden_char_dim, batch_size=get_batch_size(chars_in))
tag_score = model(words_in, chars_in)
print(tag_score) |
st101510 | @Hugo-W
I have looked at all of your code but I can’t understand this code
merged = torch.cat([word_embeds.view(1,1,-1), char_lvl[-1].view(1,1,-1)], dim=2) in function forward_one_word
Here are my questions:
Why use char_lvl[-1]? Is it the last char in char_lvl?
Should we train the model with words and the affix?
Thx and looking forward to your answer |
st101511 | It's a shame I did not reuse my own code since that post… So right now I don't have it in mind anymore; I had to read it, probably as you did, to understand what I might have done (plus I am not testing it as I am writing). I would advise you to run the code line by line, and also run the lines within that function forward_one_word and see what is inside char_lvl[-1].view(1,1,-1) in comparison to char_lvl…
My guess (again, sorry I cannot verify the code right now) is that I do take the last word representation at the character level, since I think the input is the list of chars in the list of words of a given sentence. So basically it's just a way to merge the representations of one word (word embedding + its char-level representation) together…
Do you mean the part-of-speech tag (the labels in this task) by "affix"? But yes, you train the model with words as input and the target POS tag as output; the training procedure is also in the code I posted. I wrap inputs and outputs in corresponding sequences with prepare_sequence:
sentence_in, chars_in = prepare_both_sequences(sentence, word_to_ix, char_to_ix)
targets = prepare_sequence(tags, tag_to_ix) |
st101512 | Hi, I'm trying to install PyTorch 0.4.1 on Ubuntu, and the CUDA version is 9.1.85.
But I can't find a matching PyTorch build.
If I install PyTorch for CUDA 9.0 or 9.2, both fail:
9.0
RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:74
9.2
RuntimeError: cuda runtime error (35) : CUDA driver version is insufficient for CUDA runtime version at /pytorch/aten/src/THC/THCGeneral.cpp:74
But I need to use a new function in 0.4.1 (pos_weight in BCEWithLogitsLoss).
How can I solve it? Thank you!!
[solved]: Sorry, CUDA 9.0 can be used after all. |
st101513 | Try using Docker… that's the easiest way to get around it, I think (suggestion only). |
st101514 | Go to https://pytorch.org/previous-versions/, select the version you need (like cu91/torch-0.4.0-cp27-cp27mu-linux_x86_64.whl), and do pip install. |
st101515 | I want to insert a trained pytorch model into the middle of a multi-process pipeline. The input/output data for the model should never move off the GPU. Device pointers to the data need to be passed back and forth between processes using CUDA IPC memory handles.
Basically, I need a way to access/create the IPC handles and to convert to/from torch.cuda.*Tensor objects.
What is the best way to implement this? I know pycuda gives access to CUDA IPC handles (e.g. pycuda.driver.mem_get_ipc_handle), but from my experience pycuda does not play nicely with pytorch. Are there any other simple solutions in the python realm? |
st101516 | You can share CUDA tensors across processes using multiprocessing queues. (e.g. multiprocessing.SimpleQueue) The PyTorch code will create an IPC handle when the tensor is added to the queue and open that handle when the tensor is retrieved from the queue.
Beware that you need to keep the original CUDA tensor alive for at least as long as any view of it is accessible in another process. |
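A minimal sketch of the queue-based sharing described above (the consumer function and tensor shape are made up for illustration):
import torch
import torch.multiprocessing as mp

def consumer(queue):
    t = queue.get()  # opens the IPC handle inside this process
    print(t.device, t.sum().item())

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required for CUDA tensors in child processes
    q = mp.SimpleQueue()
    x = torch.randn(100, device="cuda")
    p = mp.Process(target=consumer, args=(q,))
    p.start()
    q.put(x)   # the IPC handle is created here
    p.join()   # keep x alive until the consumer is done with it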
st101517 | Thanks for the quick response @colesbury.
Just to clarify, the other processes in the pipeline are not python processes (they are C/C++/CUDA). So it’s important that I can access/create IPC handles with device pointers to the raw underlying tensor data. My confusion is how to work with these handles within the python/pytorch process. Correct me if I’m wrong, but it seems that multiprocessing.SimpleQueue will only work the way you describe if both processes are using Pytorch.
So, just to be absolutely clear, the full plan is to use shared memory to pass IPC handles between processes. For example, the shared memory file will include a 64byte cudaIpcMemHandle_t (containing a pointer to the raw data in GPU memory), plus additional bytes to specify the number of rows and columns in the tensor. |
st101518 | It will be a bit tricky to do correctly because small PyTorch storages are packed into the same CUDA allocation block. You will have to rely on implementation details of PyTorch that may change in the future:
x = torch.randn(100, device='cuda')
storage = x.storage()
device, handle, size, offset, view_size = storage._share_cuda_()
device is the index of the GPU (i.e. 0 for the first GPU)
handle is the cudaIpcMemHandle_t as a Python byte string
size is the size of the allocation (not the Storage!, in elements, not bytes!)
offset is the offset in bytes of the storage data pointer from the CUDA allocation
view_size is the size of the storage (in elements, not bytes!) |
st101519 | Thanks again @colesbury.
So _share_cuda_() gives me access to the cudaIpcMemHandle_t of an existing torch.cuda.Tensor. It’s unfortunate that the handle is not exposed through a regular function call, but it’s a good start.
Now, what about when I need to convert the other way around, from handle to tensor? If I have a cudaIpcMemHandle_t, read in from shared memory and converted to a Python byte string, can I insert that into a torch.cuda.Storage and thereby produce a torch.cuda.Tensor which points to the appropriate data from GPU memory?
Also, can you explain the offset a bit more? It sounds like multiple different torch.cuda.Storage objects share the same cudaIpcMemHandle_t, but with different offsets in memory. Is that correct? I don’t see that as a major problem. I’ll just have to write the offset to shared memory as well.
Another idea altogether: What about using PyTorch's extension-ffi to access the cudaIpcMemHandle_t and store the data into a THCudaTensor? I've never played with the extension-ffi before, so I don't really understand its capabilities. I'll need to make calls to functions like cudaIpcOpenMemHandle, which are part of CUDA's runtime API. Is this possible? |
st101520 | If you want to go back and forth between C/C++ and Python you probably want to use an extension. You should prefer https://github.com/pytorch/extension-cpp over extension-ffi, as TH/THC is being slowly deprecated and moved into ATen.
ATen provides a Type::storageFromBlob function which you can use after you open the IPC handle.
I don’t think there’s an equivalent function in Python. It would probably be good for us to add something like that. |
st101521 | @colesbury Thanks so much for all the help on this. I think I’m almost there.
I’ve been playing around with extension-cpp and I’m running into a couple of issues.
As a reference point, I am mostly following the extension-cpp tutorial here:
https://pytorch.org/tutorials/advanced/cpp_extension.html#writing-a-mixed-c-cuda-extension
So I have three files, a .py, a .cpp, and a .cu. I am using the the JIT method for compiling my extension.
In the .cu file, I am using the CUDA runtime API to extract a float* device pointer from a cudaIpcMemHandle. I am then using tensorFromBlob to fill an at::Tensor object. Here is how I am using tensorFromBlob:
at::Tensor cuda_tensor_from_shm = at::CUDA(at::kFloat).tensorFromBlob(d_img, {rows,cols});
My first problem is that the above line of code takes about three seconds to execute. Does it only take so long the first time I call the extension, or is it going to be slow every time? Obviously the whole point of using shared memory and CUDA IPC handles was to make the cost of transferring data negligibly small; I was hoping for sub-millisecond times.
The second problem is that I get a segmentation fault happening at some point between the .cpp code and the .py code. I haven't precisely pinpointed it yet. However, my guess is that after calling tensorFromBlob, I need to copy the data to a new at::Tensor before I can use it in PyTorch. Is that correct? If so, is there a super-fast ATen device-to-device copy I can use? |
st101522 | Everything works after modifying my tensorFromBlob code from:
at::Tensor cuda_tensor_from_shm = at::CUDA(at::kFloat).tensorFromBlob(d_img, {rows,cols});
to:
at::Tensor cuda_tensor_from_shm = torch::CUDA(at::kFloat).tensorFromBlob(d_img, {rows,cols});
I’ll need to dig into the code to understand why torch::CUDA is the correct scoping, but anyway it works. |
st101523 | The question may sound strange. I have always used clip_grad_norm for recurrent units in order to prevent gradient explosion.
Is it possible that training a CNN with gradient clipping (0.5) can help convergence, or is it rather a limitation?
Best,
Nico |
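Not an answer on whether it helps convergence, but for reference, a runnable sketch of where a 0.5 clip would go in a CNN training step (the toy model and loss are my own; on versions before 0.4 the function is clip_grad_norm without the trailing underscore):
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Conv2d(3, 8, 3)
optimizer = optim.SGD(model.parameters(), lr=0.1)

out = model(torch.randn(2, 3, 16, 16))
loss = out.pow(2).mean()

loss.backward()
# Clip the global gradient norm to 0.5 before the parameter update
nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)
optimizer.step()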
st101524 | I have multiple GPUs on my computer and want to use only the second one (GPU 1). I notice that when I transfer my model to that device I still use up some memory on the first GPU (GPU 0). I have a minimal working example where this happens.
import torch
import torch.nn as nn
device = torch.device("cuda:1")
f = nn.Linear(100, 100).to(device)
When I do this and check the memory usage on the GPUs, I can clearly see that my process has allocated memory on both GPUs (although the amount on GPU 0 is smaller). Can anyone explain why this is happening? |
st101525 | change
device = torch.device(“cuda:0”)
and try executing this way?
CUDA_VISIBLE_DEVICES=1 python your_program.py |
st101526 | Yes that works (I forgot to mention that this was the workaround that I had been using) but I’m curious as to why this happens. |
st101527 | Oh ok. I understand now.
version 0.4.1: Maybe, pytorch uses cuda:0 always by default?
pytorch master 0.5.0a0+ab6afc2: I have verified that this issue is fixed.
Sorry for many edits of the answer |
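For completeness, a sketch of the masking workaround done inside the script rather than on the command line (the env var must be set before CUDA is initialized):
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # hide GPU 0; must run before any CUDA call

import torch
import torch.nn as nn

device = torch.device("cuda:0")  # index 0 now maps to the physical second GPU
f = nn.Linear(100, 100).to(device)
print(torch.cuda.device_count())  # 1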
st101528 | You can install the latest version of PyTorch on Windows from http://pytorch.org/.
Preferably install it via anaconda (Python 3.6). |
st101529 | PyTorch was the only framework (and I tried Theano and TensorFlow as well) that I was able to use right away with the GPU. Here's a tutorial that worked for me (Win10): https://github.com/peterjc123/pytorch-scripts |
st101530 | Does the pip installer install CUDA Toolkit in Windows or should the user download CUDA from nVidia and install them manually? |
st101531 | pip install http://download.pytorch.org/whl/cu90/torch-0.4.1-cp36-cp36m-win_amd64.whl
pip install torchvision
This will do the job |
st101532 | It seems it will do the job only if 2 things happen:
The user installed the latest NVIDIA driver.
The user installed the system-wide CUDA Toolkit from NVIDIA.
I wish it weren't like that and the pip package were self-sustained, with all the needed CUDA libraries supplied in it. |
st101533 | I have a tensor sized (1, 160). I need to resize it to (4, 20, 20), divide each of the 4 corresponding (20, 20) tensors by a different number, and then resize the tensor and return it as (1, 160).
How do I do this? |
st101534 | I feel your tensor size should be (1, 1600).
Assuming it is:
import torch
import random
x = torch.randint(0, 200, (1, 1600))
x = x.reshape(4, 20, 20)
for i in range(4):
    x[i] /= random.randint(1, 200)
x = x.reshape(1, 1600)
Hope this is what you were looking for. |
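A loop-free variant of the same idea using broadcasting (a sketch; the divisors here are made up):
import torch

x = torch.rand(1, 1600)
divisors = torch.tensor([2.0, 4.0, 8.0, 16.0])  # one divisor per (20, 20) block
x = (x.reshape(4, 20, 20) / divisors.view(4, 1, 1)).reshape(1, 1600)
print(x.shape)  # torch.Size([1, 1600])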
st101535 | Hi,
I am training a network for the video classification task. I am using cross-entropy loss for learning. The issue is that after many epochs my network's accuracy remains the same (1%) and the loss is not coming down. I inspected the issue and noticed that the gradients are non-zero only for the layer before the loss calculation. I also made sure that the requires_grad flag is True for all network parameters.
here is my code:
optimizer = optim.Adam(net.parameters(), lr=args.lr)
criterion = nn.CrossEntropyLoss()
for epoch in range(args.start_epoch, args.epochs):
    for i, data in enumerate(train_loader):
        frames, labels = data
        frames, labels = frames.cuda(), labels.cuda()
        inputs = frames
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
I am pretty sure the problem is not with the optimizer, since I am having the issue right after initiating the training session, before even taking a step.
After doing the first back-propagation, list(net.parameters())[-1] has non-zero gradients, which corresponds to the bias of the last fully connected layer, but for the rest of the parameters they are all zeros.
I appreciate any suggestions about why I am having this issue.
Thanks in advance. |
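As an aside, a quick self-contained way to diagnose this kind of problem is to print every parameter's gradient magnitude after backward() (toy model of my own):
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 3))
out = net(torch.randn(4, 10))
loss = nn.CrossEntropyLoss()(out, torch.tensor([0, 1, 2, 0]))
loss.backward()

# Any layer printing 0.0 (or None) here is not receiving gradients
for name, p in net.named_parameters():
    print(name, None if p.grad is None else p.grad.abs().sum().item())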
st101536 | Hi,
First, this may be true or false, but I did face the same issue, and here is the reason I found. Even though I set requires_grad=True for all layers except one (last or middle), the rest of my gradient values were zeros, just like yours. Because the gradient is calculated with respect to all the network parameters, the layer with requires_grad=False acts like a wall and nothing passes through it, so theoretically all the values on the input side of that layer will be zero, while the output side will behave normally. |
st101537 | Hi,
Thanks for your response. In my case I set requires_grad=True for all the layers. I just noticed that I had confused the dropout probability with the keep probability: it was set to 1 for the layer before the last one, and that was why I got zeros for the rest of the gradients.
Thanks |
st101538 | Hi,
I was trying to train GoogLeNet on a server with multiple GPUs using Python 3 + PyTorch 0.3.1.post2. However, I keep getting this error: 'RuntimeError: CUDNN_STATUS_INTERNAL_ERROR'. It seems like this error happens when conv3d() is called. The complete error message is below. Could anyone help me out with it?
Thanks
File "main.py", line 220, in class_reg_eval
    outputs = combined_classifier.net(event_data)
File "/home/junzel2/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
File "/home/junzel2/Triforce_CaloML/Architectures/GoogLeNet.py", line 142, in forward
    x = self.pre_layers(x)
File "/home/junzel2/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
File "/home/junzel2/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/container.py", line 67, in forward
    input = module(input)
File "/home/junzel2/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
File "/home/junzel2/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 388, in forward
    self.padding, self.dilation, self.groups)
File "/home/junzel2/anaconda2/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py", line 126, in conv3d
    return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_INTERNAL_ERROR
Updates:
I have tried several solutions I found with Google. I've tried 'rm -rf ~/.nv', 'torch.backends.cudnn.benchmark = False', etc. But they all didn't work. |
st101539 | Solved by ptrblck in post #2 |
st101540 | Is your code running on CPU?
Also, could you update to the latest release and check it again or do you need this PyTorch version for some reason? |
st101541 | Thanks for your reply.
The code is running on GPUs. I have just updated PyTorch to 0.4.1, and 'CUDNN_STATUS_INTERNAL_ERROR' is gone. My GoogLeNet was working fine on 0.3.1 previously and just went wrong recently. It is a weird error, though.
Thanks for your suggestion; it is runnable now. |
st101542 | I got a connection error when using the DataLoader when num_workers is nonzero, but it works fine when it is 0. Why? |
st101543 | I tried to put a tensor with a grad_fn into a Queue, but the queue received the tensor without the grad_fn. Can I put the grad_fn into the queue with the tensor? If not, can I run tensor.backward() in the process and run optimizer.step() later in the main process? |
st101544 | I have an Enc-Dec LSTM model. I know one way of connecting the two models, i.e. feeding the output of the last state of the encoder as the hidden state of the decoder. But the issue is a dimension mismatch.
Encoder output size: [batch_size, seq_len, input_size], e.g. [10, 30, 256]
Decoder hidden size: [num_layers*num_directions, batch_size, hidden_size], e.g. [2, 10, 128]
Can the dimension issue be resolved? Or is there any other way of connecting Enc-Dec together? |
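One common fix (a sketch of my own, not from this thread): add a linear "bridge" that maps the encoder's last hidden state into the decoder's hidden shape, using the sizes from the question:
import torch
import torch.nn as nn

batch_size, enc_hidden = 10, 256
dec_layers, dec_hidden = 2, 128  # num_layers * num_directions = 2

enc_last = torch.randn(batch_size, enc_hidden)  # hypothetical final encoder hidden state

bridge = nn.Linear(enc_hidden, dec_layers * dec_hidden)
h0 = torch.tanh(bridge(enc_last)).view(batch_size, dec_layers, dec_hidden)
h0 = h0.permute(1, 0, 2).contiguous()  # -> (num_layers*num_directions, batch, hidden_size)
c0 = torch.zeros_like(h0)

decoder = nn.LSTM(input_size=dec_hidden, hidden_size=dec_hidden, num_layers=dec_layers, batch_first=True)
out, _ = decoder(torch.randn(batch_size, 5, dec_hidden), (h0, c0))
print(out.shape)  # torch.Size([10, 5, 128])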
st101545 | I have been working with Caffe2 for 6 weeks now. I have been stuck on an issue for the past 25 days; I have searched the internet far and wide and have tried several things.
The issue in a single line: Unable to use MPI rendezvous in Caffe2
Environment: Cray XC40/XC50 supercomputer, uses SLURM!
Details:
For reproducibility, I am using a container made using the following Dockerfile:
FROM nvidia/cuda:8.0-cudnn7-devel-ubuntu16.04
LABEL maintainer="[email protected]"
# caffe2 install with gpu support
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    cmake \
    git \
    libgflags-dev \
    libgoogle-glog-dev \
    libgtest-dev \
    libiomp-dev \
    libleveldb-dev \
    liblmdb-dev \
    libopencv-dev \
    libprotobuf-dev \
    libsnappy-dev \
    protobuf-compiler \
    python-dev \
    python-numpy \
    python-pip \
    python-pydot \
    python-setuptools \
    python-scipy \
    wget \
    && rm -rf /var/lib/apt/lists/*
RUN wget -q http://www.mpich.org/static/downloads/3.1.4/mpich-3.1.4.tar.gz \
    && tar xf mpich-3.1.4.tar.gz \
    && cd mpich-3.1.4 \
    && ./configure --disable-fortran --enable-fast=all,O3 --prefix=/usr \
    && make -j$(nproc) \
    && make install \
    && ldconfig \
    && cd .. \
    && rm -rf mpich-3.1.4 \
    && rm mpich-3.1.4.tar.gz
RUN pip install --no-cache-dir --upgrade pip==9.0.3 setuptools wheel
RUN pip install --no-cache-dir \
    flask \
    future \
    graphviz \
    hypothesis \
    jupyter \
    matplotlib \
    numpy \
    protobuf \
    pydot \
    python-nvd3 \
    pyyaml \
    requests \
    scikit-image \
    scipy \
    setuptools \
    six \
    tornado
########## INSTALLATION STEPS ###################
RUN git clone --branch master --recursive https://github.com/pytorch/pytorch.git
RUN cd pytorch && mkdir build && cd build \
    && cmake .. \
    -DCUDA_ARCH_NAME=Manual \
    -DCUDA_ARCH_BIN="35 52 60 61" \
    -DCUDA_ARCH_PTX="61" \
    -DUSE_NNPACK=OFF \
    -DUSE_ROCKSDB=OFF \
    && make -j"$(nproc)" install \
    && ldconfig \
    && make clean \
    && cd .. \
    && rm -rf build
ENV PYTHONPATH /usr/local
The command:
srun -N 4 -n 4 -C gpu \
    shifter run --mpi load/library/caffe2_container_diff \
    python resnet50_trainer.py \
    --train_data=$SCRATCH/caffe2_notebooks/tutorial_data/resnet_trainer/imagenet_cars_boats_train \
    --test_data=$SCRATCH/caffe2_notebooks/tutorial_data/resnet_trainer/imagenet_cars_boats_val \
    --db_type=lmdb \
    --num_shards=4 \
    --num_gpu=1 \
    --num_labels=2 \
    --batch_size=2 \
    --epoch_size=150 \
    --num_epochs=2 \
    --distributed_transport ibverbs \
    --distributed_interface mlx5_0
The output/error:
srun: job 9059937 queued and waiting for resources
srun: job 9059937 has been allocated resources
E0816 14:14:20.081552 7042 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.081637 7042 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.081642 7042 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.083420 6442 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.083504 6442 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.083509 6442 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
INFO:resnet50_trainer:Running on GPUs: [0]
INFO:resnet50_trainer:Using epoch size: 144
INFO:resnet50_trainer:Running on GPUs: [0]
INFO:resnet50_trainer:Using epoch size: 144
E0816 14:14:20.087043 5987 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.087126 5987 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.087131 5987 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
INFO:resnet50_trainer:Running on GPUs: [0]
INFO:resnet50_trainer:Using epoch size: 144
INFO:data_parallel_model:Parallelizing model for devices: [0]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:data_parallel_model:Parallelizing model for devices: [0]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
E0816 14:14:20.102372 11086 init_intrinsics_check.cc:43] CPU feature avx is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.102452 11086 init_intrinsics_check.cc:43] CPU feature avx2 is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
E0816 14:14:20.102457 11086 init_intrinsics_check.cc:43] CPU feature fma is present on your machine, but the Caffe2 binary is not compiled with it. It means you may not get the full speed of your CPU.
INFO:data_parallel_model:Parallelizing model for devices: [0]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:resnet50_trainer:Running on GPUs: [0]
INFO:resnet50_trainer:Using epoch size: 144
INFO:data_parallel_model:Parallelizing model for devices: [0]
INFO:data_parallel_model:Create input and model training operators
INFO:data_parallel_model:Model for GPU : 0
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Adding gradient operators
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
INFO:data_parallel_model:Add gradient all-reduces for SyncSGD
WARNING:data_parallel_model:Distributed broadcast of computed params is not implemented yet
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Post-iteration operators for updating params
INFO:data_parallel_model:Calling optimizer builder function
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Add initial parameter sync
INFO:data_parallel_model:Creating barrier net
INFO:data_parallel_model:Creating barrier net
INFO:data_parallel_model:Creating barrier net
*** Aborted at 1534428860 (unix time) try "date -d @1534428860" if you are using GNU date ***
INFO:data_parallel_model:Creating barrier net
*** Aborted at 1534428860 (unix time) try "date -d @1534428860" if you are using GNU date ***
*** Aborted at 1534428860 (unix time) try "date -d @1534428860" if you are using GNU date ***
PC: @ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
PC: @ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
PC: @ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
*** SIGSEGV (@0x8) received by PID 5987 (TID 0x2aaaaaae5480) from PID 8; stack trace: ***
@ 0x2aaaaace4390 (unknown)
@ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
*** SIGSEGV (@0x8) received by PID 7042 (TID 0x2aaaaaae5480) from PID 8; stack trace: ***
@ 0x2aaaaace4390 (unknown)
@ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
*** Aborted at 1534428860 (unix time) try "date -d @1534428860" if you are using GNU date ***
*** SIGSEGV (@0x8) received by PID 6442 (TID 0x2aaaaaae5480) from PID 8; stack trace: ***
@ 0x2aaaaace4390 (unknown)
@ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
@ 0x2aaab0af78d3 std::_Function_handler<>::_M_invoke()
PC: @ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
@ 0x2aaab0af78d3 std::_Function_handler<>::_M_invoke()
@ 0x2aaab09e8094 caffe2::InferBlobShapesAndTypes()
@ 0x2aaab09e9659 caffe2::InferBlobShapesAndTypesFromMap()
@ 0x2aaab0af78d3 std::_Function_handler<>::_M_invoke()
@ 0x2aaab09e8094 caffe2::InferBlobShapesAndTypes()
@ 0x2aaab032588e _ZZN8pybind1112cpp_function10initializeIZN6caffe26python16addGlobalMethodsERNS_6moduleEEUlRKSt6vectorINS_5bytesESaIS7_EESt3mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_IlSaIlEESt4lessISI_ESaISt4pairIKSI_SK_EEEE36_S7_JSB_SR_EJNS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNES19_
@ 0x2aaab09e9659 caffe2::InferBlobShapesAndTypesFromMap()
@ 0x2aaab035273e pybind11::cpp_function::dispatcher()
@ 0x4bc3fa PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c16e7 PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x2aaab032588e _ZZN8pybind1112cpp_function10initializeIZN6caffe26python16addGlobalMethodsERNS_6moduleEEUlRKSt6vectorINS_5bytesESaIS7_EESt3mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_IlSaIlEESt4lessISI_ESaISt4pairIKSI_SK_EEEE36_S7_JSB_SR_EJNS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNES19_
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x2aaab09e8094 caffe2::InferBlobShapesAndTypes()
@ 0x4eb30f (unknown)
@ 0x4e5422 PyRun_FileExFlags
@ 0x4e3cd6 PyRun_SimpleFileExFlags
@ 0x493ae2 Py_Main
@ 0x2aaaaaf10830 __libc_start_main
@ 0x4933e9 _start
@ 0x2aaab09e9659 caffe2::InferBlobShapesAndTypesFromMap()
*** SIGSEGV (@0x8) received by PID 11086 (TID 0x2aaaaaae5480) from PID 8; stack trace: ***
@ 0x2aaab035273e pybind11::cpp_function::dispatcher()
@ 0x0 (unknown)
@ 0x4bc3fa PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c16e7 PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x2aaab032588e _ZZN8pybind1112cpp_function10initializeIZN6caffe26python16addGlobalMethodsERNS_6moduleEEUlRKSt6vectorINS_5bytesESaIS7_EESt3mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_IlSaIlEESt4lessISI_ESaISt4pairIKSI_SK_EEEE36_S7_JSB_SR_EJNS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNES19_
@ 0x2aaaaace4390 (unknown)
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4eb30f (unknown)
@ 0x4e5422 PyRun_FileExFlags
@ 0x4e3cd6 PyRun_SimpleFileExFlags
@ 0x493ae2 Py_Main
@ 0x2aaaaaf10830 __libc_start_main
@ 0x4933e9 _start
@ 0x2aaab0afb108 caffe2::ConvPoolOpBase<>::TensorInferenceForConv()
@ 0x0 (unknown)
@ 0x2aaab035273e pybind11::cpp_function::dispatcher()
@ 0x4bc3fa PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c16e7 PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4eb30f (unknown)
@ 0x4e5422 PyRun_FileExFlags
@ 0x4e3cd6 PyRun_SimpleFileExFlags
@ 0x493ae2 Py_Main
@ 0x2aaaaaf10830 __libc_start_main
@ 0x4933e9 _start
@ 0x0 (unknown)
@ 0x2aaab0af78d3 std::_Function_handler<>::_M_invoke()
@ 0x2aaab09e8094 caffe2::InferBlobShapesAndTypes()
@ 0x2aaab09e9659 caffe2::InferBlobShapesAndTypesFromMap()
@ 0x2aaab032588e _ZZN8pybind1112cpp_function10initializeIZN6caffe26python16addGlobalMethodsERNS_6moduleEEUlRKSt6vectorINS_5bytesESaIS7_EESt3mapINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEES6_IlSaIlEESt4lessISI_ESaISt4pairIKSI_SK_EEEE36_S7_JSB_SR_EJNS_4nameENS_5scopeENS_7siblingEEEEvOT_PFT0_DpT1_EDpRKT2_ENUlRNS_6detail13function_callEE1_4_FUNES19_
@ 0x2aaab035273e pybind11::cpp_function::dispatcher()
@ 0x4bc3fa PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c16e7 PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4c1e6f PyEval_EvalFrameEx
@ 0x4b9ab6 PyEval_EvalCodeEx
@ 0x4eb30f (unknown)
@ 0x4e5422 PyRun_FileExFlags
@ 0x4e3cd6 PyRun_SimpleFileExFlags
@ 0x493ae2 Py_Main
@ 0x2aaaaaf10830 __libc_start_main
@ 0x4933e9 _start
@ 0x0 (unknown)
srun: error: nid06499: task 2: Segmentation fault
srun: Terminating job step 9059937.0
srun: error: nid06497: task 0: Segmentation fault
srun: error: nid06498: task 1: Segmentation fault
srun: error: nid06500: task 3: Segmentation fault
I understand that this information may not be sufficient for helping me out, so please ask me to perform whatever steps are required to get more information about the situation.
I am grateful for your help. |
st101546 | Hi guys,
I tested a simple example with nn.DataParallel() to use multiple GPUs, but got a hang.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.linear = nn.Linear(2, 1)
    def forward(self, x):
        h = self.linear(x)
        return h

epochs = 2000
lr = 1e-3
momentum = 0
w_decay = 1e-5
train_data = torch.randn(288, 2)
train_label = torch.zeros([288], dtype=torch.long)
num_gpu = list(range(torch.cuda.device_count()))
model = nn.DataParallel(MyNet().cuda(0), device_ids=num_gpu)  # .cuda()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=w_decay)
print "Starting training"
model.train()
for epoch in range(epochs):
    optimizer.zero_grad()
    inputs = Variable(train_data.cuda(0))
    labels = Variable(train_label.cuda(0))
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    print("epoch{}, loss: {}".format(epoch, loss.data.item()))
It hangs when I try to forward the data to the model. nvidia-smi gives
Thu Aug 16 09:56:56 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.67 Driver Version: 390.67 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 TITAN V Off | 00000000:1B:00.0 Off | N/A |
| 28% 39C P8 25W / 250W | 1087MiB / 12066MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 TITAN V Off | 00000000:1C:00.0 Off | N/A |
| 28% 41C P2 39W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 2 TITAN V Off | 00000000:1D:00.0 Off | N/A |
| 31% 45C P2 41W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 3 TITAN V Off | 00000000:1E:00.0 Off | N/A |
| 31% 45C P2 40W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 4 TITAN V Off | 00000000:3D:00.0 Off | N/A |
| 28% 39C P2 38W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 5 TITAN V Off | 00000000:3E:00.0 Off | N/A |
| 28% 41C P2 40W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 6 TITAN V Off | 00000000:3F:00.0 Off | N/A |
| 28% 40C P2 38W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 7 TITAN V Off | 00000000:40:00.0 Off | N/A |
| 31% 45C P2 40W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
| 8 TITAN V Off | 00000000:41:00.0 Off | N/A |
| 29% 43C P2 41W / 250W | 1087MiB / 12066MiB | 100% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 131331 C python 1076MiB |
| 1 131331 C python 1076MiB |
| 2 131331 C python 1076MiB |
| 3 131331 C python 1076MiB |
| 4 131331 C python 1076MiB |
| 5 131331 C python 1076MiB |
| 6 131331 C python 1076MiB |
| 7 131331 C python 1076MiB |
| 8 131331 C python 1076MiB |
+-----------------------------------------------------------------------------+
I have tried the solution in this post, but it didn't work.
I use
CUDA 9.1.85
Pytorch 0.4.1 (installed by pip)
Python 2.7.13
Debian 4.9.110-3+deb9u1 (2018-08-03) x86_64 GNU/Linux
9 TITAN V cards
Any ideas to solve this issue? Or should I let NVIDIA's folks see this issue? |
st101547 | Hi everyone,
I tried to use this function to determine CUDA availability:
def get_torch_device():
    if torch.cuda.is_available():
        return torch.device("cuda:0")
    return torch.device("cpu")
It returns 'cuda:0' on a device without an Nvidia GPU at all.
Has anyone faced the same problem? Does anyone have another solution?
Thanks |
st101548 | Solved by roaffix in post #5 |
st101549 | That’s strange, as the script works even if I run it on a machine with a masked GPU:
CUDA_VISIBLE_DEVICES="" python your_script.py
> device(type='cpu')
EDIT: Wait a moment. I’ve run the script on the wrong machine.
What is torch.cuda.is_available() returning on your machine? |
st101550 | It returns True. So I manually set device='cpu' every time I run code on my laptop without a discrete graphics card |
st101551 | What does torch.cuda.device_count() return?
The script indeed works, i.e. masking the GPU returns ‘cpu’, while the plain call returns ‘cuda:0’ on my machine. |
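For reference, a quick diagnostic sketch (assuming PyTorch 0.4+) of what the local build reports; a CPU-only wheel should show None for the CUDA version and a device count of 0:
import torch

print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # None on a CPU-only build
print(torch.cuda.is_available())   # False without a usable GPU
print(torch.cuda.device_count())   # 0 without a usable GPU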
st101552 | Hello.
I’m sorry for the delayed feedback. It turned out to be an issue with my env setup: I had installed the GPU build of pytorch on a laptop without a GPU. I updated to the CPU build of version 0.4.1 and everything works just fine.
pip3 install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp36-cp36m-linux_x86_64.whl
pip3 install torchvision
Thanks |
st101553 | I have the following code. I don’t know why it’s giving this error.
# Generate dummy data
input_data = torch.rand((100, 30, 40))
output_data = torch.rand((100, 20, 10))
# Data loader
class Loader(Dataset):
    def __len__(self):
        return input_data.size()[0]
    def __getitem__(self, idx):
        input_frame = input_data[idx*batch_size:(idx+1)*batch_size]
        output_words = output_data[idx*batch_size:(idx+1)*batch_size]
        return input_frame, output_words
# dataloader
loader = Loader()
dataloader = DataLoader(loader, batch_size=20, shuffle=True)

for idx, sample in enumerate(dataloader):
    print(idx)
RuntimeError: cannot unsqueeze empty tensor |
st101554 | Solved by fangyh in post #2
It works for me.
import torch
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
# Generate dummy data
input_data = torch.rand((100, 30, 40))
output_data = torch.rand((100, 20, 10))
# Data loader
class Loader(Dataset):
    def __len__(self):
        return input_… |
st101555 | It works for me.
import torch
from torch.utils.data.dataset import Dataset
from torch.utils.data import DataLoader
# Generate dummy data
input_data = torch.rand((100, 30, 40))
output_data = torch.rand((100, 20, 10))
# Data loader
class Loader(Dataset):
    def __len__(self):
        return input_data.size()[0]
    def __getitem__(self, idx):
        input_frame = input_data[idx]
        output_words = output_data[idx]
        return input_frame, output_words

# dataloader
loader = Loader()
dataloader = DataLoader(loader, batch_size=20, shuffle=True)

for idx, sample in enumerate(dataloader):
    print(idx, sample) |
st101556 | There are 2 things wrong with the code here. Firstly, the example you have provided does not run. Secondly, I think that you may have misunderstood how the data loader and dataset work.
The dataset class is just a class that provides access to your data. As such it needs 2 things:
a) how much data you have in the dataset, which it gets from the __len__ method
b) how to access a data point from the dataset, which it gets from the __getitem__ method.
Once the dataloader has the dataset and the batch size that you need, it will assemble the batch for you. I think the point that you misunderstood is that the __getitem__ method does not need to provide the entire batch but just one element. The entire batch is assembled by the data loader automatically.
The example by fangyh is the proper way to do it |
st101557 | Hello, when I use the pack sequence -> recurrent network -> unpack sequence pattern in LSTM training with nn.DataParallel, I encounter a very strange problem.
here is my code:
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_layer, hidden_size, num_classes,
                 rnn_type='lstm', dropout=0.0, bidirect=True, residual=False):
        super(LSTM, self).__init__()
        if bidirect:
            layer = [nn.LSTM(input_size=input_size, hidden_size=hidden_size, batch_first=True, bidirectional=bidirect, dropout=dropout)]
            for i in range(hidden_layer):
                if i == hidden_layer-1:
                    layer.append(nn.LSTM(hidden_size*2, num_classes, batch_first=True, dropout=0.0, bidirectional=bidirect))
                else:
                    layer.append(nn.LSTM(hidden_size*2, hidden_size, batch_first=True, dropout=0.0, bidirectional=bidirect))
            self.lstm = nn.Sequential(*layer)
        else:
            pass

    def forward(self, x, input_len):
        _, idx_sort = th.sort(input_len, dim=0, descending=True)
        _, idx_unsort = th.sort(idx_sort, dim=0)
        x = x.index_select(0, Variable(idx_sort))
        input_len = input_len[idx_sort]
        x = x.cuda()
        max_length = x.shape[1]
        x = rnn_pack.pack_padded_sequence(x, input_len, batch_first=True)
        x = self.lstm(x)
        out, _ = rnn_pack.pad_packed_sequence(x, total_length=max_length, batch_first=True)
        out = out.index_select(0, Variable(idx_unsort))
        return out
def train_one_epoch(nnet, criterion, optimizer, train_loader, num_parallel, is_rnn=True):
    nnet.train()
    for index, (key, feats, labels, len_list) in enumerate(train_loader):
        labels = labels.view(labels.shape[0], labels.shape[1])
        input_len = np.array(len_list)
        optimizer.zero_grad()
        if is_rnn:
            label_mat = labels.view(labels.size(0) * labels.size(1))
            targets = Variable(label_mat.cuda())
            input_len = th.from_numpy(input_len)
            out = nnet(feats, input_len)
        ……………………

nnet = LSTM((args.left_context + args.right_context + 1) * args.feat_dim, args.hidden_layer, args.hidden_size, args.num_classes, rnn_type=net_type, dropout=args.dropout, bidirect=bidirect, residual=residual)
nnet = nn.DataParallel(nnet, device_ids=[0,1,2])
when I run the main script, the error occurs as follows:
Traceback (most recent call last):
File "./train/train_rnn_pack_sort.py", line 195, in <module>
train(args)
File "./train/train_rnn_pack_sort.py", line 150, in train
tr_frame_acc = train_epoch(nnet, criterion, optimizer, train_loader, num_parallel, train_dataset.num_frames, is_rnn=True)
File "./train/train_rnn_pack_sort.py", line 60, in train_epoch
train_frame, pos_frames = common_pack_sort.train_one_epoch(nnet, criterion, optimizer, train_loader, num_parallel, is_rnn)
File "/search/speech/wangqingnan/asr_tools/pytorch/asr_egs/common/common_pack_sort.py", line 107, in train_one_epoch
out = nnet(feats, input_len)
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 114, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 124, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply
raise output
AttributeError: 'tuple' object has no attribute 'size'
and if I use "x = self.lstm(x.data)" instead of "x = self.lstm(x)" in the forward method, the error shows as follows:
Traceback (most recent call last):
File "./train/train_rnn_pack_sort.py", line 195, in <module>
train(args)
File "./train/train_rnn_pack_sort.py", line 150, in train
tr_frame_acc = train_epoch(nnet, criterion, optimizer, train_loader, num_parallel, train_dataset.num_frames, is_rnn=True)
File "./train/train_rnn_pack_sort.py", line 60, in train_epoch
train_frame, pos_frames = common_pack_sort.train_one_epoch(nnet, criterion, optimizer, train_loader, num_parallel, is_rnn)
File "/search/speech/wangqingnan/asr_tools/pytorch/asr_egs/common/common_pack_sort.py", line 107, in train_one_epoch
out = nnet(feats, input_len)
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 114, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 124, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/search/speech/wangqingnan/Anaconda/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 65, in parallel_apply
raise output
RuntimeError: input must have 3 dimensions, got 2
However, I remember that with one GPU, "x" is still a PackedSequence object, and there are no problems in the forward propagation.
Any help would be appreciated… |
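As a side note, the 'tuple' object has no attribute 'size' error also points at a separate problem in the model definition above: nn.LSTM returns a tuple (output, (h_n, c_n)), so nn.Sequential cannot chain LSTM layers directly. A minimal sketch of one possible workaround (an assumption, not a confirmed fix for the DataParallel scattering issue) is to keep the layers in an nn.ModuleList and unpack the tuple manually:
import torch.nn as nn

class StackedLSTM(nn.Module):
    def __init__(self, sizes):
        super(StackedLSTM, self).__init__()
        # e.g. sizes = [40, 128, 128, 10] -> three stacked LSTM layers
        self.layers = nn.ModuleList(
            [nn.LSTM(sizes[i], sizes[i + 1], batch_first=True) for i in range(len(sizes) - 1)])

    def forward(self, x):
        for layer in self.layers:
            x, _ = layer(x)  # keep the output, drop the (h_n, c_n) state tuple
        return x
This works for both padded tensors and PackedSequence inputs, since nn.LSTM accepts either.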
st101558 | I define my own learnable parameters, like this
w1 = torch.randn(D_in, H, device=device, dtype=dtype, requires_grad=True)
Now, after getting the loss, I would like to optimize w1 with torch.optim.SGD. How can I do this? |
st101559 | You can just pass it in a list to the optimizer:
# Your parameter
w = torch.ones(1, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=1.)
# Your forward and backward pass
(w * torch.ones(1)).backward()
# Update parameters
optimizer.step() |
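A slightly fuller loop sketch (the objective here is illustrative only): in a real training loop the gradients should be cleared before each backward pass.
for _ in range(10):
    optimizer.zero_grad()                           # clear gradients from the previous step
    loss = (w * torch.ones(1) - 2.0).pow(2).sum()   # toy objective: drive w towards 2
    loss.backward()
    optimizer.step()                                # SGD update of w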
st101560 | This is an experimental project.
I have saved a trained model (nmt) on my local machine, and I have created an HTML webpage with some javascript. What I want is: the user inputs some text on my website and the website displays its equivalent translation. For this purpose, I want to use my pre-trained model for the translation task. Is this achievable? If yes, how? |
st101561 | Have a look at Flask 6 or Django 4 to create a small web application. In this application you could load your model and process the user input.
I’m by far not experienced in web development, but even I managed to setup a small (demo) web app using Flask. |
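A minimal Flask sketch along those lines; note that the file name 'model.pt' and the translate_sentence helper are placeholders, not parts of any real project here. The model is loaded once at startup and a translation is computed per request:
import torch
from flask import Flask, request, jsonify

app = Flask(__name__)
model = torch.load('model.pt')  # placeholder path to the saved nmt model
model.eval()

@app.route('/translate', methods=['POST'])
def translate():
    text = request.form['text']
    with torch.no_grad():
        output = translate_sentence(model, text)  # hypothetical decoding helper
    return jsonify({'translation': output})

if __name__ == '__main__':
    app.run()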
st101562 | Hello, everyone. I am new to Pytorch and recently I learned to implement CNN training for semantic segmentation. However, I am seeing some strange GPU memory behavior and cannot find the reason.
[1] Different memory consumption on different GPU platform
I have 2 GPU platforms for model training.
Same code, same ResNet50 CNN, same training data, same batchsize = 1
platform 1 [GTX 970 (4G), cuda-8.0, cudnn-5.0, Nvidia Driver-375.26] consumes 3101M of GPU memory
platform 2 [GTX TITAN X (12G), cuda-7.5, cudnn-5.0, Nvidia Driver-352.30] consumes 1792M of GPU memory
There is a big difference in GPU memory consumption, and I can not find the reason. Can anyone give some help?
[2] Strange out of memory error when training
On platform 1, when there is only one image/label pair in the training data list, the model trains normally for 100 epochs (batchsize = 1, thus 1 iteration per epoch). However, if I duplicate the training pair so that there are 2 image/label pairs in the training data list, the training breaks down at the second iteration with the out of memory error shown in the screenshot below (batchsize = 1, thus 2 iterations per epoch). This error does not occur on platform 2. So strange.
[screenshot: snapshot1.png (1080×587, 106 KB), out of memory error traceback]
Anyone meet the same problem ? or Can anyone give some help?
THANKS!!! |
st101563 | i’m wondering if cudnn is choosing different algorithms for convolution on platform1 and platform2.
Try the following on platform 1:
torch.backends.cudnn.enabled = False
Also separately maybe try:
torch.backends.cudnn.benchmark = True |
st101564 | Thanks very much, @smth. That is exactly the reason for the different memory consumption on the 2 platforms (cudnn).
On platform 1, torch.backends.cudnn.enabled did not work, even when it was set to True: the memory consumption is still 3101M. However, on platform 2 it is 1792M if torch.backends.cudnn.enabled=True, and 3100M if torch.backends.cudnn.enabled=False. So there is maybe something wrong with cudnn on platform 1.
And what do you think of the second problem, the strange out of memory behavior? cudnn.enabled and cudnn.benchmark do not seem to be the reason. |
st101565 | I met a similar problem, which was caused by torch.backends.cudnn.benchmark = True. When I changed to torch.backends.cudnn.enabled = True, everything was ok. |
st101566 | Hi, when I do
cudnn.enabled = True
cudnn.benchmark = True
it gives me "RuntimeError: CUDNN_STATUS_INTERNAL_ERROR".
Do you know how to resolve it? |
st101567 | More error info:
Traceback (most recent call last):
  File "main_test.py", line 14, in <module>
    t.test()
  File "code/test.py", line 64, in test
    output = self.model(input)
  File "/home/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "code/model/model.py", line 46, in forward
    x = self.headConv(x)
  File "/home/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/.local/lib/python2.7/site-packages/torch/nn/modules/conv.py", line 254, in forward
    self.padding, self.dilation, self.groups)
  File "/home/.local/lib/python2.7/site-packages/torch/nn/functional.py", line 52, in conv2d
    return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_INTERNAL_ERROR |
st101568 | It seems the error occurs in the Conv module. If it happens only when cudnn is enabled, there may be something wrong with the cudnn library, e.g. a wrong version.
Sorry, I cannot figure out the reason exactly. |
st101569 | My pytorch version is 0.2.0, installed using pip following the official guide. I also suspect it is an environment problem, but I don’t know how to resolve it. |
st101570 | Sorry, I usually build pytorch from source and do not know what’s wrong with the pytorch you installed.
Maybe you can check if your pytorch links all libraries correctly.
cd /usr/local/lib/python2.7/dist-packages/torch
ldd _C.so
Enter the installation path and check the library link. |
st101571 | The lib links look like below:
linux-vdso.so.1 => (0x00007ffd291d5000)
libshm.so => /usr/local/lib/python2.7/dist-packages/torch/./lib/libshm.so (0x00007fe5e1c80000)
libcudart-5d6d23a3.so.8.0.61 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libcudart-5d6d23a3.so.8.0.61 (0x00007fe5e1a18000)
libnvToolsExt-422e3301.so.1.0.0 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libnvToolsExt-422e3301.so.1.0.0 (0x00007fe5e180e000)
libcudnn-3f9a723f.so.6.0.21 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libcudnn-3f9a723f.so.6.0.21 (0x00007fe5d82aa000)
libTH.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libTH.so.1 (0x00007fe5d5c0e000)
libTHS.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libTHS.so.1 (0x00007fe5d59db000)
libTHPP.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libTHPP.so.1 (0x00007fe5d543f000)
libTHNN.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libTHNN.so.1 (0x00007fe5d511b000)
libATen.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libATen.so.1 (0x00007fe5d47d6000)
libTHC.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libTHC.so.1 (0x00007fe5c3632000)
libTHCS.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libTHCS.so.1 (0x00007fe5c31f4000)
libTHCUNN.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libTHCUNN.so.1 (0x00007fe5bedf8000)
libnccl.so.1 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libnccl.so.1 (0x00007fe5bc11d000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe5bbf07000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe5bbcea000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe5bb920000)
/lib64/ld-linux-x86-64.so.2 (0x00007fe5e3740000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe5bb718000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fe5bb40f000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe5bb20b000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fe5bae89000)
libgomp-ae56ecdc.so.1.0.0 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libgomp-ae56ecdc.so.1.0.0 (0x00007fe5bac72000)
libcublas-e78c880d.so.8.0.88 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libcublas-e78c880d.so.8.0.88 (0x00007fe5b7c2a000)
libcurand-3d68c345.so.8.0.61 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libcurand-3d68c345.so.8.0.61 (0x00007fe5b3cb3000)
libcusparse-94011b8d.so.8.0.61 => /usr/local/lib/python2.7/dist-packages/torch/./lib/libcusparse-94011b8d.so.8.0.61 (0x00007fe5b1193000) |
st101572 | It looks like all libraries are linked correctly. Maybe you can create a new topic in the pytorch forums to get help from others. I am so sorry that I cannot figure out the reason for this error. |
st101573 | I am facing a similar issue where the GPU memory consumed by the network is 15 GB on platform 1 and 11 GB on platform 2.
Platform1: NVIDIA Tesla P100 GPU 16GB, Cuda 9.1.85, Pytorch 0.4.0
Platform2: GTX 1080 GPU 12GB, Cuda 8.0.61, Pytorch 0.4.0
Both print the cudnn version as 7102 inside python via torch.backends.cudnn.version()
Strangely, when I set torch.backends.cudnn.enabled=False on platform 1, my GPU consumption becomes 11 GB. But I think GPU consumption should be lower when using cudnn, as suggested in the answers above. Can anyone help with what the problem could be? |
st101574 | I guess cudnn will benchmark different implementations and select the one with the best performance/fastest speed for each GPU platform. |
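One way to observe the effect is a hedged sketch like the following (assuming PyTorch 0.4+, which exposes the CUDA memory statistics): toggle the flags and compare the peak allocation.
import torch

torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = False  # deterministic algorithm selection

# ... run one forward/backward pass of the model here ...

print(torch.cuda.max_memory_allocated())  # peak memory allocated by tensors, in bytes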
st101575 | It eventually dispatches to https://github.com/pytorch/pytorch/blob/2e0dd8690320fb1a7ecd548730824c1610207179/aten/src/ATen/native/LinearAlgebra.cpp#L136-L148 67, which calls blas gemm. |
st101576 | I want to train a NN to play the game snake. I know that there are a lot of implementations out there, but I want to do it on my own to understand every basic step.
I already have the snake game. Now I want to generate data from that game, but I don't know what data I have to generate so that I can train a NN on it to predict the snake's next step.
I have no clue what data I need. Is the position of the snake head and the apple position enough? How do I define the label? What do I minimise? I could minimise the distance between apple and snake head, but then the network does not learn to predict a direction. And it would not learn to avoid crashing into walls and into itself.
Another approach would be to give the snake eyes. That means the data would contain information about the pixels directly around the snake head, for example whether each one is a wall or part of its body. In addition I could include the position of the apple. With that input data I want the NN to predict the next direction. For that I could think about rewards: for a predicted direction in which the snake does not die, e.g. 10 points, and if the NN finds an apple, 100 points. That means I would train the NN to maximise this reward, but in this case where do I get my direction from?
If someone could give me some advice, that would be great. I found a few implementations of this on the internet, but they are all not well explained or documented.
Just so you know: I have solved image classification and segmentation tasks with CNNs. That means I already know a lot about how NNs work etc.
Ty |
st101577 | This is usually done with reinforcement learning. For this you would need an environment which is able to react to your network's predictions. The environment must provide the current state (e.g. an image of the current game situation) which can be fed to the CNN, and must accept actions (the NN's predictions) and calculate a reward for the given action. The optimization goal would be to maximize the reward over the sequence of states without being game over (and maybe a higher reward for an apple).
You could have a look at the environments of openai-gym 15. Even if they don't have such an environment, you could look at other games or the cartpole environment as a starting point. |
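A minimal interaction loop with the cartpole environment, as a hedged sketch of the state/action/reward cycle described above (using the classic gym step API):
import gym

env = gym.make('CartPole-v0')
state = env.reset()                                # initial observation of the environment
done = False
while not done:
    action = env.action_space.sample()             # a trained network would predict this action instead
    state, reward, done, info = env.step(action)   # the environment reacts and returns a reward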
st101578 | I’m currently building sequence models for forecasting, and have tried using RNNs, LSTMs, and GRUs.
Something unusual I noticed was the highly unstable loss curves, where the loss sometimes jumps back to the level of the first few epochs. Interestingly, the severity of this decreases from RNNs to LSTMs to GRUs.
Would anyone have an idea why this occurs?
For reference, here are the loss curves for the following models, across 500 epochs.
[RNN loss curve: rnn.png (1169×341, 82.1 KB)] |
[LSTM loss curve: lstm.png (1169×342, 48.7 KB)] |
[GRU loss curve: gru.png (1169×339, 47.6 KB)] |
st101579 | I have a small model and a large model. The large model is built upon the small one, so I would like to load the pre-trained small model to pre-set some layers of the large model. However, when I try large_model.load_state_dict(small_model.state_dict()), there is always an error like Missing key(s) in state_dict:. Is there a way to ignore the layers that do not exist in the small model and load the ones that do?
Thanks! |
st101580 | Solved by InnovArul in post #2
Use the parameter strict=False with big_model.load_state_dict().
i.e., big_model.load_state_dict(torch.load('file.pth'), strict=False).
import torch
from torch import nn
from torch.autograd import Variable, grad
#define network weights and input
class small_model(nn.Module):
    def __init__(sel… |
st101581 | Use the parameter strict=False with big_model.load_state_dict().
i.e., big_model.load_state_dict(torch.load('file.pth'), strict=False).
import torch
from torch import nn
from torch.autograd import Variable, grad

# define network weights and input
class small_model(nn.Module):
    def __init__(self):
        super(small_model, self).__init__()
        self.linear1 = nn.Linear(3, 4, bias=False)
    def forward(self, x):
        pass

class big_model(nn.Module):
    def __init__(self):
        super(big_model, self).__init__()
        self.linear1 = nn.Linear(3, 4, bias=False)
        self.linear2 = nn.Linear(4, 5, bias=False)
    def forward(self, x):
        pass

def print_params(model):
    for name, param in model.named_parameters():
        print(name, param)

# create small model
small = small_model()
print('small model params')
print_params(small)

# save the small model
torch.save(small.state_dict(), 'small.pth')

# create big model
big = big_model()
print('big model params before copying')
print_params(big)

big.load_state_dict(torch.load('small.pth'), strict=False)
assert torch.equal(big.linear1.weight, small.linear1.weight), 'params do not match after copying'
print('big model params after copying')
print_params(big) |
st101582 | Assume that I save a model using torch.save(model, 'model.pt'). After the save, I add a new function to the model class in the source code, say new_feature(self, batch).
After this, if I do model = torch.load('model.pt'), I get a few warnings about the source code change, but I can surprisingly also use new_feature() on the loaded model (which was serialized before new_feature was added).
In this case, does torch.load() only load the parameters from model.pt? My understanding of serialization was that the entire model object is dumped and not just the parameters. Can someone please shed some light on this topic for me?
Much appreciated. |
st101583 | Solved by justusschock in post #4
Let’s have a look at the underlying code:
torch.save basically only calls torch._save
Inside this function there is a function named persistent_id defined and beside other things the return values of this function are pickled.
For torch.nn.Module this function does the following:
if isinstance(o… |
st101584 | That depends on how you saved your model. If you saved the state_dict with torch.save(model.state_dict(), 'model.pt') only the parameters and buffers will be saved. If you save the model with torch.save(model, 'model.pt') (which is not recommended) your whole model will be pickled and saved. You may want to have a look at this guidelines 50. |
st101585 | @justusschock thank you for the prompt reply.
I did NOT save the state_dict, I saved the model directly like torch.save(model, 'model.pt'). In this case, the whole model object was pickled. So, why am I able to access the new_feature() that was added after the model.pt was serialized and dumped on the disk? |
st101586 | Let’s have a look at the underlying code:
torch.save basically only calls torch._save
Inside this function there is a function named persistent_id defined and beside other things the return values of this function are pickled.
For torch.nn.Module this function does the following:
if isinstance(obj, type) and issubclass(obj, nn.Module):
    if obj in serialized_container_types:
        return None
    serialized_container_types[obj] = True
    source_file = source = None
    try:
        source_file = inspect.getsourcefile(obj)
        source = inspect.getsource(obj)
    except Exception:  # saving the source is optional, so we can ignore any errors
        warnings.warn("Couldn't retrieve source code for container of "
                      "type " + obj.__name__ + ". It won't be checked "
                      "for correctness upon loading.")
    return ('module', obj, source_file, source)
This means the source code and the source file are pickled too. As a result, your source file will be parsed again during loading (and compared to the pickled source code to generate warnings if necessary). This source file is then used for model creation if the changes can be merged automatically, and thus adding new methods is valid as long as you don't change the existing ones in a way that prevents python from merging automatically. |
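As a small hedged illustration of this behavior (the model object, the path and the batch argument are placeholders; the edit between the two runs happens outside the script):
# run 1: save the full module (the class source is recorded for nn.Module subclasses)
torch.save(model, 'model.pt')

# ... edit the source file, e.g. add new_feature(self, batch) to the class ...

# run 2: torch.load re-parses the (now changed) source file, emits a
# SourceChangeWarning, and builds the object from the current class definition,
# so the newly added method is available on the loaded model
model = torch.load('model.pt')
model.new_feature(batch)  # works, even though it was added after saving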
st101587 | We are working on increasing support for sparse tensors. We have summarized the current state of sparse tensors and listed the sparse ops to support. We would like to collect sparse tensor use cases to facilitate the design decisions. It would be very helpful if you can post your use cases and desired sparse ops here or at https://github.com/pytorch/pytorch/issues/10043 115. Thanks!
I find these questions useful when writing use cases:
Where do I need sparse tensors? During deep learning model training?
Do I need autograd support for the sparse ops?
A possible example would be:
I am training a model that has mul(Sparse, Dense) ops. I would like to have its forward and backward. I know there will be a dense gradient in the backward of mul, so here I am asking for a special kind of mul op (called sparse_mul) that returns a sparse grad tensor and only keeps the nnz's gradients. |
st101588 | I previously had a use-case wherein I was training an auto-encoder that learned rank 4 tensors that modeled the weights between a large graph of words. The majority of the words shared no weights and were thus 0. I also needed to normalize the columns of each matrix (at the rank 2 level) of these tensors.
I found that very few of the basic tensor operations for dense vectors were implemented for sparse vectors (mm products, etc), and there was no easy way to normalize. I ended up needing to do a ton of hacky things to reformat my problem with dense vectors that were rank 3 in order to be able to feasibly run all of the computations.
Idk if this is too vague to be helpful… |
st101589 | I have a very sparse dataset that is organized as a scipy sparse csr_matrix, and it is too large to convert to a single dense numpy array. For now, I can only extract part of it, convert that part to a numpy array, then to a tensor, and forward the tensor. But the csr_matrix-to-numpy-array step is still awfully time-consuming.
Right now I have a solution as below, which is quite fast:
def spy_sparse2torch_sparse(data):
    """
    :param data: a scipy sparse csr matrix
    :return: a sparse torch tensor
    """
    samples = data.shape[0]
    features = data.shape[1]
    values = data.data
    coo_data = data.tocoo()
    indices = torch.LongTensor([coo_data.row, coo_data.col])
    t = torch.sparse.FloatTensor(indices, torch.from_numpy(values).float(), [samples, features])
    return t
But it is still not very helpful. I need to sample a mini-batch out of the whole dataset, feed that mini-batch to a classifier, and update the weights of the classifier. If mini-batch sampling were supported, it would be great. |
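A hedged usage sketch of the function above (the random matrix stands in for the real dataset): CSR row slicing is cheap, so one option is to convert only the current mini-batch.
import scipy.sparse as sp

data = sp.random(10000, 500, density=0.01, format='csr')  # illustrative stand-in for the real dataset
batch_size = 32
for start in range(0, data.shape[0], batch_size):
    batch = data[start:start + batch_size]     # a CSR slice, still sparse
    t = spy_sparse2torch_sparse(batch)         # sparse torch tensor for this mini-batch
    dense_batch = t.to_dense()                 # densify only the small batch if needed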
st101590 | This makes sense. norm() is already supported in sparse for computing the global norm among all values with exponent=2. I guess what you need is the standard one as in dense: torch.norm(input, p, dim, keepdim=False, out=None). I will add this to the TODO. btw, do you also need backward for this? |
st101591 | I’m actually not working on it anymore and found workarounds. But in that instance, I did not need backward on the norm. Of course I can see that being useful… |
st101592 | I guess what you need is a sampler that can sort the sparse dataset by the batch dim and efficiently return mini-batches along the batch dim. But there is a harder problem, which is to have batch op support (e.g. bmm). I will take a look at this. |
st101593 | I have a few TB of tiny images and I would like to scan/classify them with a model I have trained. Currently, I load a bunch into memory, create a DataLoader object, run them through the model, and move to the next bunch. By using the DataLoader I get to use the same image transformation I used for the model. However, this is a little painful. Is there a better way of running a model over millions of files? |
st101594 | You could use a Dataset to lazily load all images.
Have a look at this tutorial 48.
Basically you can pass the image paths to your Dataset and just load and transform the samples in __getitem__. |
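A minimal sketch of such a lazy dataset (the transform handling and RGB conversion are assumptions, not from the thread):
from PIL import Image
from torch.utils.data import Dataset

class ImagePathDataset(Dataset):
    def __init__(self, image_paths, transform=None):
        self.image_paths = image_paths   # list of file paths, nothing is loaded yet
        self.transform = transform       # e.g. the same transformation used during training

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = Image.open(self.image_paths[idx]).convert('RGB')  # loaded lazily, per sample
        if self.transform is not None:
            img = self.transform(img)
        return img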
st101595 | That was easy, I got it to work. I keep forgetting how flexible PyTorch is compared to other frameworks. Thanks @ptrblck. |
st101596 | I am getting a ‘bool value of Variable objects containing non-empty torch.ByteTensor is ambiguous’ runtime error when I run the code below.
elif mask[i] ==1 and var[i] - mu[i] > 5:
Not sure what this means. |
st101597 | mask[i], var[i] or mu[i] will most likely return a non-scalar tensor, which cannot be converted to bool.
Have a look at this dummy example:
x = torch.randn(1, 2)
if x[0] == 1:
    print('passed')
As x[0] returns a tensor of shape torch.Size([2]), the comparison will also have the same shape, e.g. tensor([0, 0], dtype=torch.uint8).
If you want to check, if all values are equal to 1, you can use:
if (x[0] == 1).all():
    print('passed')
, but obviously this depends on your use case. |
st101598 | Thanks. But what if I do something like mask[0][1] == 1? I still get the same error when mask is a Variable |
st101599 | Ah, ok. I figured it out. No, it is a 1-dim tensor. But the if statement gives weird results if mask is a Variable. If it's a tensor, it works fine.
m = Variable(torch.rand(3, 4))
if m[0][1] > 0:
    print('passed')
vs
m = torch.rand(3, 4)
if m[0][1] > 0:
    print('passed') |
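For completeness, a hedged note: on 0.4+, where Variable and Tensor are merged, extracting a Python scalar sidesteps the ambiguity entirely:
if m[0][1].item() > 0:  # .item() returns a plain Python number
    print('passed')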