id | text |
---|---|
st101800 | Can anyone please tell me the equivalent of Chainer's StatelessLSTM layer in PyTorch? |
st101801 | Is this necessarily a bug in my code? Test performance on the training data:
hidden size: 100, iterations: 400: perfect performance
hidden size: 100, iterations: >=700: gets one wrong
hidden size: 200, iterations: >=700: perfect performance |
st101802 | How many training samples do you have?
One misclassified sample doesn't sound bad, and your resubstitution error also suggests that your model is perfectly able to overfit the training data. |
st101803 | I have 5 examples. My main concern is why increasing the iterations caused it to wrongly predict a translation. |
st101804 | Maybe the learning rate was too high and so your model parameters were thrown out of a local minimum.
It’s common to see some noisy results, especially using a very small number of samples. |
st101805 | Here's a slightly simplified version of my problem: let's say I have 100 parameters that I pass to an optimizer to optimize. In a loop I then load a batch of 4 new images, pass them through some CNNs, compute the loss, and backpropagate the loss to the 100 parameters (the parameters are used in one of the functions in the CNN). The loss for each image depends on one of these parameters, so after each batch has been processed the optimizer should update 4 of these parameters. This works fine for the first batch (where 4 parameters get updated), but after the second batch has been processed the optimizer updates 8 parameters instead of just 4: it updates the 4 params corresponding to the images from the current batch but also the 4 params corresponding to images from the previous batch. This keeps repeating, and after each new batch of images more and more parameters are being updated (instead of just the current 4 that I want).
My training loop looks something like
parameters = [torch.tensor([1.], requires_grad=True), torch.tensor([2.], requires_grad=True), torch.tensor([3.], requires_grad=True)]
optimizer = torch.optim.Adam(parameters, lr)
for image_batch in image_loader:
    loss = calculate_loss(image_batch, parameters)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
I have checked the code and I’m zeroing the gradients before each optimizer update with optimizer.zero_grad() so the gradients shouldn’t be accumulating. Also, I have checked the gradients of the parameters after calling loss.backward() and the only parameters with non-zero gradients are just those 4 that I want to update, so that seems ok as well. What is confusing is that the parameters that get updated in addition to these 4 have had zero gradients after loss.backward() but their value changed nevertheless after calling optimizer.step(). Does anybody know what’s going on here? Thanks! |
st101806 | Yes I have read that post but still don’t know what’s causing the problem here. Is there any way to make it not update the parameters from the previous batch? I thought that since the previous batch params haven’t been used to calculate the current loss they shouldn’t be updated.
Update: to be more specific, I still want to use Adam or SGD with momentum, I just don’t want the parameters to update if they haven’t been used to calculate the loss for the current batch. |
st101807 | But then what should Adam or SGD do with these steps? Not update the momentum terms? In that case you would get wrong momentum values, since they would be working with a different set of parameters all the time? |
st101808 | Probably a better solution in this case would be to create a different optimizer for each of the parameters since I want to update them separately (i.e. I didn’t want to update all parameters during every execution of the training loop) |
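A minimal sketch of that per-parameter-optimizer idea (hypothetical names; the loader is assumed to also yield the indices of the parameters used in each batch). Only the optimizers whose parameters were actually used are stepped, so Adam's momentum never touches the others:
import torch

params = [torch.randn(1, requires_grad=True) for _ in range(100)]
optimizers = [torch.optim.Adam([p], lr=1e-3) for p in params]

for image_batch, param_indices in image_loader:
    # compute the loss only from the parameters belonging to this batch
    loss = calculate_loss(image_batch, [params[i] for i in param_indices])
    for i in param_indices:
        optimizers[i].zero_grad()
    loss.backward()
    for i in param_indices:
        # step only the optimizers of the parameters used in this batch
        optimizers[i].step()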
st101809 | Selection_017.png603×600 79.9 KB
The microarchitecture of the NVIDIA DGX-1 & Titan X is Pascal only. I thought the difference between training on a single GPU & multiple GPUs (I'm guessing he trained on multiple GPUs of the DGX-1) is BN synchronization. BN sync decreases performance, but convergence does occur. Exp |
st101810 | Hi,
I'm trying to modify the character-level RNN classification code to make it fit my application. The data set I have is pretty huge (4 lakh, i.e. 400,000, training instances). The code snippets are shown below (I've shown only the necessary parts; all helper functions are the same as in the official example).
I initially faced the problem of exploding / vanishing gradients as described in this issue.
I used the solution given there to clip the gradient in the train() function. But now I seem to get negative values for the loss. What is that supposed to mean?
Also, how is it that with the official example (when I apply it to my dataset) I get loss values that are greater than 1?
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.Softmax()

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        output = self.softmax(output)
        output = output.add(1e-8)
        output = output.log()
        return output, hidden

    def initHidden(self):
        return Variable(torch.zeros(1, self.hidden_size).cuda())
criterion and the train() function are written as follows:
criterion = nn.NLLLoss().cuda()
learning_rate = 0.005 # If you set this too high, it might explode. If too low, it might not learn
def train(category_tensor, line_tensor):
    hidden = rnn.initHidden()
    rnn.zero_grad()
    # print(len(line_tensor.size()))
    if(line_tensor.dim() != 0):  # I have random new lines in some cases. This condition is to handle those
        for i in range(line_tensor.size()[0]):
            output, hidden = rnn(line_tensor[i], hidden)
        loss = criterion(output, category_tensor)
        loss.backward()
        # This line is used to prevent the vanishing / exploding gradient problem
        torch.nn.utils.clip_grad_norm(rnn.parameters(), 0.25)
        for p in rnn.parameters():
            p.data.add_(-learning_rate, p.grad.data)
        return output, loss.data[0]
    else:
        return None, -1
Training of the model happens here
n_iters = 40000
print_every = 200
plot_every = 200
# # Keep track of losses for plotting
current_loss = 0
all_losses = []
def timeSince(since):
    now = time.time()
    s = now - since
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)
start = time.time()
tp = 0
tn = 0
fp = 0
fn = 0
precision = 0
recall = 0
fmeasure = 0
for iter in range(1, n_iters + 1):
    category, line, category_tensor, line_tensor = randomTrainingExample()
    output, loss = train(category_tensor, line_tensor)
    if loss != -1:
        current_loss += loss
        guess, guess_i = categoryFromOutput(output)
        if guess == -1 and guess_i == -1:
            continue
        else:
            correct = '1' if guess == category else '0 (%s)' % category
            if guess == 'class1' and category == 'class1':
                tp += 1
            elif guess == 'class2' and category == 'class2':
                fn += 1
            elif guess == 'class1' and category == 'class2':
                fp += 1
            else:
                tn += 1
        if iter % print_every == 0:
            loss = current_loss / print_every
            print('%d %d%% (%s) %.4f %s / %s %s' % (iter, iter / n_iters * 100, timeSince(start), loss, line, guess, correct))
            all_losses.append(current_loss / plot_every)
            current_loss = 0
def evaluate(line_tensor):
    hidden = rnn.initHidden()
    if(line_tensor.dim() == 0):
        return line_tensor
    else:
        for i in range(line_tensor.size()[0]):
            output, hidden = rnn(line_tensor[i], hidden)
        return output
def predict(input_line, category, n_predictions=1):
    output = evaluate(Variable(lineToTensor(input_line)).cuda())
    global total
    global indian
    global nonindian
    total += 1
    if(output.dim() != 0):
        topv, topi = output.data.topk(1, 1, True)
        for i in range(0, n_predictions):
            value = topv[0][i]
            category_index = topi[0][i]
            if category_index <= 1:
                if all_categories[category_index] == 'indian':
                    indian += 1
                else:
                    nonindian += 1
            predictions.append([value, all_categories[category_index], category]) |
st101811 | Haven’t checked the gradient clipping part but the negative loss occurs because you need to use LogSoftmax with the NLLLoss. If the output vector is y^ = [0.99 0.01 0.0] (which is the output of softmax) and the true class y = 0 the NLLLoss is simply defined as -0.99. If you use log softmax for the same output vector defined above then y^ = [0 -4.6 -inf] so NLLLoss will be -0 = 0. |
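A minimal sketch of the fix described above, applied to the model from the question (only the relevant lines; the rest of the model stays the same):
# in __init__: use LogSoftmax instead of Softmax
self.softmax = nn.LogSoftmax(dim=1)
# in forward: the manual output.add(1e-8).log() is then no longer needed
output = self.softmax(self.i2o(combined))
criterion = nn.NLLLoss()

# alternatively, return the raw scores and let CrossEntropyLoss apply log-softmax internally
criterion = nn.CrossEntropyLoss()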
st101812 | Hello guys!
Today I am preparing the birds and cars datasets from Stanford University to finetune some pretrained ImageNet models. I am preparing the dataset in different folders, since keeping the entire dataset in RAM needs more than 20 GB, and I use torch.datasets.ImageFolder.
I want to use several threads to load the dataset, as reading from disk is slow, so I do not want to have a bottleneck on data loading (I am pretty sure I will have one anyway because I will be reading from a hard disk).
I have been reading tutorials and other posts. I want to know two things:
-first: Does each worker load a whole batch, or does each worker load samples of the next batch? I think each worker loads a batch, but I am not very used to the multiprocessing package and after reading the code I cannot be certain about that.
-second: I read this in the FAQ https://pytorch.org/docs/stable/notes/faq.html#dataloader-workers-random-seed but I am not really sure what I should do to not get replicated data.
Thanks. |
st101813 | Currently each worker loads a whole batch of data.
As long as you don't get any random numbers in your Dataset, everything should work as expected. |
st101814 | Thanks.
I intend to use random transformations from the torchvision package. How should I proceed? |
st101815 | You can just create your transformations using torchvision.transforms and pass them to your Dataset as transform. |
st101816 | Yes, I know that. I mean that I would be using random transformations such as RandomHorizontalFlip. In such a case, would I get replicated data?
Your first answer stated: "As long as you don't get any random numbers in your Dataset, everything should work as expected."
Because I would be using random transformations when loading data, and based on your first answer I could have problems with replicated data. How should I fix this? |
st101817 | Sorry for the misleading statement.
Each worker will get its base seed, so it will be alright.
If you try to sample from another library like np.random, you might encounter problems.
Have a look at this code sample:
class MyDataset(Dataset):
    def __init__(self):
        self.data = torch.randn(10, 1)

    def __getitem__(self, index):
        print("numpy random: ", np.random.randint(0, 10, size=1))
        print("pytorch random: ", torch.randint(0, 10, (1,)))
        x = self.data[index]
        return x

    def __len__(self):
        return len(self.data)

dataset = MyDataset()
loader = DataLoader(dataset, num_workers=2)
for batch_idx, data in enumerate(loader):
    print("batch idx {}".format(batch_idx)) |
st101818 | OK, thanks!
One more thing. I have checked the torchvision code on GitHub and see that the transforms use the random module from Python. I understand that when using multiprocessing, which samples are drawn by each worker is based on torch's RNG, and random is only used to (for example) decide whether we rotate a particular image or not.
Moreover, how can one really ensure that we will not be replicating a sample? Because in (potentially) large datasets, different seeds can end up producing the same number after several calls to random. As an example consider:
numpy.random.seed(1)
numpy.random.randint(1, 100, 10)
array([10, 8, 64, 62, 23, 58, 2, 1, 61, 82])
numpy.random.seed(2)
numpy.random.randint(1, 100, 10)
array([41, 16, 73, 23, 44, 83, 76, 8, 35, 50]) |
st101819 | The sampler is responsible for the creation of the indices.
As long as you don’t use a sampler which samples with replacement, the indices won’t be repeated.
You can add print("index ", index) into the __getitem__ method and see that each index is unique for the current example. |
st101820 | So the problems we might encounter are related to random modifications applied to the given index in __getitem__ using a different random library, right?
Thanks for quick replies. |
st101821 | Yes. If you use something like if np.random(...) > 0, I would make sure the random number is not always the same. |
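A minimal sketch of the usual way to give each DataLoader worker its own numpy seed (assuming numpy-based randomness inside __getitem__; torch's own RNG is already seeded per worker):
import numpy as np
import torch
from torch.utils.data import DataLoader

def worker_init_fn(worker_id):
    # torch.initial_seed() already differs per worker, so reuse it for numpy
    np.random.seed(torch.initial_seed() % 2**32)

loader = DataLoader(dataset, num_workers=2, worker_init_fn=worker_init_fn)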
st101822 | Traceback (most recent call last):
File “train.py”, line 124, in
main()
File “train.py”, line 119, in main
model.to(device)
File “/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 393, in to
return self._apply(lambda t: t.to(device))
File “/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 176, in _apply
module._apply(fn)
File “/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 176, in _apply
module._apply(fn)
File “/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 182, in _apply
param.data = fn(param.data)
File “/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 393, in
return self._apply(lambda t: t.to(device))
TypeError: to() received an invalid combination of arguments - got (bool), but expected one of:
(torch.device device, torch.dtype dtype)
(torch.dtype dtype)
didn’t match because some of the arguments have invalid types: (bool)
(Tensor other)
didn’t match because some of the arguments have invalid types: (bool) |
st101823 | Could you print the type of device?
Apparently its type is bool, while you are probably trying to move your model to the GPU?
If so, try device = 'cuda'. |
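For reference, a common pattern that guarantees device is a torch.device and never a bool (model stands for the module from the traceback):
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)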
st101824 | I want to convert my PyTorch model into a Caffe2 model.
The conversion succeeded. However, it cannot run.
E/native: [E operator_schema.cc:83] Argument ‘is_test’ is required for Operator ‘SpatialBN’. |
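One thing worth checking (an assumption, since the export code is not shown): if the model went through ONNX, make sure it is in eval mode before exporting, so batch norm is traced in inference mode and the 'is_test' attribute gets set:
model.eval()  # switches BatchNorm / Dropout to inference behaviour
dummy_input = torch.randn(1, 3, 224, 224)  # the input shape here is just an assumption
torch.onnx.export(model, dummy_input, 'model.onnx')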
st101825 | I am trying to make a density estimation of a sample of data to compute a divergence between probability distributions. I used torch.histc to histogram my data so I could very roughly approximate the pdf. However, upon backpropagating I receive the error:
“the derivative for “histc” is not implemented”
Does anyone know how to work around this so I could bin my data and normalize it to make this calculation differentiable? |
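One possible workaround (just a sketch, not taken from the reply below): replace the hard binning with a differentiable "soft" histogram built from sigmoids around each bin edge, so gradients can flow back to the samples:
import torch

def soft_histogram(x, bins, vmin, vmax, slope=100.0):
    # differentiable approximation of torch.histc: each sample contributes
    # sigmoid-weighted mass to every bin instead of a hard 0/1 count
    centers = vmin + (torch.arange(bins).float() + 0.5) * (vmax - vmin) / bins
    width = (vmax - vmin) / bins
    x = x.reshape(-1, 1)            # (N, 1)
    centers = centers.reshape(1, -1)  # (1, bins)
    weights = torch.sigmoid(slope * (x - (centers - width / 2))) \
            - torch.sigmoid(slope * (x - (centers + width / 2)))
    hist = weights.sum(dim=0)
    return hist / hist.sum()        # normalized, a rough pdf estimate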
st101826 | The histc function does not implement a backward operation because it is a discrete operation (I really don't know how you would define that gradient exactly). But the value can be added to some other loss. I tested with the following code.
Note: the loss will be updated with respect to lossB below, and I am not sure how you would apply that to your problem.
The setup:
class TestModule(nn.Module):
    def __init__(self):
        super(TestModule, self).__init__()
        self.sigmoid = nn.Sigmoid()
        self.conv = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x):
        x = self.conv(x)
        x = self.sigmoid(x)
        return x

model = TestModule()
optimizer = optim.Adam(model.parameters())
isAborted = True

def loop_stack(loss, acc):
    global isAborted
    if isAborted:
        return
    if loss == None:
        print(list(reversed(list(map(lambda x: str(x)[1:-1].split(" ")[0], acc)))))
        return
    new_acc = acc[:] + [loss]
    try:
        losses_child = list(map(lambda x: x[0], loss.next_functions))
        for l in losses_child:
            loop_stack(l, new_acc)
            if isAborted:
                break
    except KeyboardInterrupt:
        isAborted = True
        return
    except:
        print(list(reversed(list(map(lambda x: str(x)[1:-1].split(" ")[0], acc)))))
        return

def print_backprop(loss):
    global isAborted
    tmp = loss.grad_fn
    isAborted = False
    loop_stack(tmp, [])
The execution:
source = torch.rand(1, 1, 5, 5)
target = model(source)
s = source.contiguous().view(-1)
t = target.contiguous().view(-1)
t_min = torch.min(torch.cat((s, t), 0)).item()
t_max = torch.max(torch.cat((s, t), 0)).item()
n_bins = 4
s_his = torch.histc(source, bins=n_bins, min=t_min, max=t_max)
t_his = torch.histc(target, bins=n_bins, min=t_min, max=t_max)
lossA = F.mse_loss(s_his.detach(), t_his.detach())
lossB = F.mse_loss(source, target)
optimizer.zero_grad()
loss = lossB/lossB.detach()*lossA #(lossB/lossB)*lossA
loss.backward(retain_graph=True)
optimizer.step()
print("Loss: {}\tBack prop path".format(loss))
print_backprop(loss)
print()
print("Before:\n{}\n\n{}\n=============".format(source, target))
print("After:\n{}\n\n{}".format(source, model(source)))
The final result showed that the weight is updated (somehow).
There are a few things I still don't know,
like whether the weight is updated with lossB at a magnitude of 1 or at the magnitude of the loss calculated from lossA.
If I missed anything please reply or message me (I'm really curious). |
st101827 | I am trying to build a progressive autoencoder. For the encoder part I would like to prepend more layers on top of a ModuleList; how can I achieve that? I want to avoid copying and remaking my model every time I grow a layer.
eg:
# BEFORE GROWTH
self.layers = nn.ModuleList([conv2d(256, 512), nn.ReLU()])

def forward(self, x):
    for layer in self.layers.children():
        x = layer(x)
    return x

** PREPEND module to the top of the encoder... **

# AFTER GROWTH
self.layers = nn.ModuleList([conv2d(128, 256), nn.ReLU(), conv2d(256, 512), nn.ReLU()])

def forward(self, x):
    for layer in self.layers.children():
        x = layer(x)
    return x
I have a working prototype model already but I was constantly destroying and making new nn.sequential which isn’t efficient at all. |
st101828 | Solved by ptrblck in post #2 |
st101829 | Would that work:
mlist = nn.ModuleList([nn.Conv2d(3, 6, 3, 1, 1)])
mlist = nn.ModuleList([nn.Conv2d(1, 3, 3, 1, 1), *mlist])
Or are you trying to avoid exactly this? |
st101830 | Hi ptrblck, I think there was some mistake in my original code; I didn't do it your way. I actually made another variable to store both the old and new values first and then unpacked everything into an nn.Sequential() like you did. I can see how that is unnecessary and takes up extra memory. Btw, does the code above make a shallow copy of the original list? Sorry if this sounds like a beginner Python question. |
st101831 | It should just pass the nn.Module references around, i.e. no copy should be involved. |
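Depending on the PyTorch version, nn.ModuleList also provides an insert method, so prepending can be done in place (a sketch; check that insert is available in your version):
mlist = nn.ModuleList([nn.Conv2d(3, 6, 3, 1, 1)])
mlist.insert(0, nn.Conv2d(1, 3, 3, 1, 1))  # prepend without rebuilding the list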
st101832 | Follow-up question: do I have to make a new parameter list to feed into my optimizer every time I increase my network's complexity? Does PyTorch keep track of that?
Eg:
# Before growth
Encoder = [ conv2d, relu ]
Decoder = [ contranspose2d, sigmoid]
parameter = list(Encoder.parameters() ) + list(Decoder.parameters() )
optimizer = optim.Adam(parameter, lr)
# After growth
Encoder = [ conv2d, relu, conv2d, relu ]
Decoder = [ constranspose2d, relu, contranspose2d, sigmoid]
parameter = list(Encoder.parameters() ) + list(Decoder.parameters() ) ?
# Should I grab the new parameters again? |
st101833 | I would try to use optimizer.add_param_group, as a complete re-initialization would remove all running estimates, if your optimizer supports these (e.g. Adam). |
st101834 | I did not know that thank you for the tip.
Edited: For add_param_group I assume you need to loop through the entire network and grab all newly added weights and then feed it into add_param_group is this thinking correct? The documentation says param_group is a dict so I should feed the state_dict() correct? |
st101835 | Sorry, missed your edit.
You could add it with:
optimizer.add_param_group({'params': torch.randn(1, requires_grad=True)})
print(optimizer.param_groups) |
st101836 | “Out of Memory” error when restarting the training after validation. And validation is OK. |
st101837 | Did you use with torch.no_grad() for your validation?
If so, could you post a simplified code snippet representing your training and validation routine? |
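For reference, a minimal validation-loop sketch with torch.no_grad() (val_loader and criterion are placeholders for your own objects):
model.eval()
with torch.no_grad():
    for data, target in val_loader:
        output = model(data)
        val_loss = criterion(output, target)
model.train()  # switch back before resuming training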
st101838 | local/anaconda/lib/python2.7/site-packages/torch/utils/ffi/…/…/lib/include/TH/THMath.h:134:3: error: ‘for’ loop initial declarations are only allowed in C99 or C11 mode
for (size_t i = 0; i <= len; i++)
how can I solve it? |
st101839 | I have read a paper about a fancy multitask model, and I am trying to rebuild it in PyTorch. However, I have encountered several questions. In the model (shown below), the multitask model has two parts of input, and they share an encoder. As a freshman, I am a little confused about how to build the model.
I have some thoughts, but I don't know whether they would work.
Should I input all the inputs into one model but separate them into different parts?
But if I do this, how can I solve the problem that the two datasets have different sizes?
Thank you very much!!!
the model:
image.png1270×896 65.3 KB |
st101840 | Assuming the encoder is the same one, simply compute the forward outputs. If the inputs from the different datasets have different sizes, conventionally you need to make them the same size, for example by cropping or resizing them.
I wrote a brief example for your reference.
enc = model()
fc = nn.Linear(...)
DNN = another_model()
input1 = torch.ones(batch_size, 10)
input2 = torch.zeros(batch_size, 10)
optimizer = optim.Adam(enc.parameters(), lr=lr)
... some operation for input1
output1 = fc(enc(input1))
... some operation for input2
output2 = DNN(enc(input2))
loss_1 = cal_loss1(output1, target1)
loss_2 = cal_loss2(output2, target2)
total_loss = loss_1 + loss_2
optimizer.zero_grad()
total_loss.backward()
optimizer.step() |
st101841 | Hi, I've tried to use the code below to determine the number of floating point operations required for a forward pass of CNN models. For a similar model that has been made very sparse (90% zeros) through quantization, I would expect the number of FLOPs to be a lot lower, but I get the same number of FLOPs as for the original model. How do I get the FLOPs for a sparse model, or is there a reason why the value remains the same? Thanks
def count_flops(model, input_image_size):
    # flops count from each layer
    counts = []
    # loop over all model parts
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            def hook(module, input):
                factor = 2*module.in_channels*module.out_channels
                factor *= module.kernel_size[0]*module.kernel_size[1]
                factor //= module.stride[0]*module.stride[1]
                counts.append(
                    factor*input[0].data.shape[2]*input[0].data.shape[3]
                )
            m.register_forward_pre_hook(hook)
        elif isinstance(m, nn.Linear):
            counts += [
                2*m.in_features*m.out_features
            ]
    noise_image = torch.rand(
        1, 3, input_image_size, input_image_size
    )
    # one forward pass
    _ = model(Variable(noise_image.cuda(), volatile=True))
    return sum(counts) |
st101842 | pgadosey:
How do i get the FLOPS for a sparse model or is there a reason why the value remains the same?
Well, if you count the sizes of the parameters and inputs, the zeros count just as well as any other numbers.
Actually, to convert the sparsity into a reduced flop-count, you would either have to identify weight parts to eliminate (e.g. channels which are all zero) or move to a sparse representation of the parameters and inputs. |
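One rough way to fold the sparsity into the count (a sketch, not a real sparse-kernel estimate): scale each convolution's dense FLOPs, computed with the same crude formula as the hook above, by its fraction of non-zero weights:
def count_sparse_conv_flops(module, input_h, input_w):
    # dense FLOPs, same formula as the hook in the question
    dense = 2 * module.in_channels * module.out_channels \
            * module.kernel_size[0] * module.kernel_size[1]
    dense = dense // (module.stride[0] * module.stride[1]) * input_h * input_w
    # fraction of weights that are actually non-zero
    density = module.weight.data.ne(0).float().mean().item()
    return dense * density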
st101843 | Thanks for your reply. Is there a way to convert my already trained model to a sparse representation in PyTorch? |
st101844 | It seems that some of the source files are missing for some functions and classes. For Eg: DropoutBackward and torch.bernoulli are missing. Can anyone suggest where to look for these source files? |
st101845 | Hi,
The files that you cannot find in the python code come from the C backend.
The C implementations are in the Aten library folder. |
st101846 | As titled, I am working on a text autoencoder with CNNs. The parameters of the encoder and decoder seem to be the same, but the outputs have different sizes. I know it has something to do with the padding and some confusion about the stride, but I don't really understand why this is the case and how to fix it. Thanks a lot!
class ConvEncoder(nn.Module):
    def __init__(self, embedDim, maxLength, filterSize, filterShape, latentSize):
        super(ConvEncoder, self).__init__()
        self.embedDim = embedDim
        self.maxLength = maxLength
        self.filterSize = filterSize
        self.filterShape = filterShape
        self.latentSize = latentSize
        t1 = maxLength + 2 * (filterShape - 1)
        t2 = int(math.floor((t1 - filterShape) / 2) + 1)  # "2" means stride size
        t3 = int(math.floor((t2 - filterShape) / 2) + 1) - 2
        #self.embed = embedding
        self.conv1 = nn.Conv2d(1, filterSize, kernel_size=(filterShape, embedDim), stride=2)
        self.batchNorm1 = nn.BatchNorm2d(filterSize)
        self.conv2 = nn.Conv2d(filterSize, filterSize*2, kernel_size=(filterShape, 1), stride=2)
        self.batchNorm2 = nn.BatchNorm2d(filterSize*2)
        self.conv3 = nn.Conv2d(filterSize*2, latentSize, kernel_size=(t3, 1), stride=2)
        # weight initialize for conv layer
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))

    def forward(self, x):
        # x.size() is (L, emb_dim) if batch_size is 1.
        # So interpolate x's dimension if batch_size is 1.
        if len(x.size()) < 3:
            x = x.view(1, *x.size())
        # reshape for convolution layer
        if len(x.size()) < 4:
            x = x.view(x.size()[0], 1, x.size()[1], x.size()[2])
        print("input: " + str(x.size()))
        conv1Output = self.conv1(x)
        h1 = F.relu(self.batchNorm1(conv1Output))
        conv2Output = self.conv2(h1)
        h2 = F.relu(self.batchNorm2(conv2Output))
        h3 = F.relu(self.conv3(h2))
        print("conv1: " + str(conv1Output.size()))
        print("conv2: " + str(conv2Output.size()))
        print("conv3: " + str(h3.size()))
        return h3
class ConvDecoder(nn.Module):
    def __init__(self, tau, embedDim, maxLength, filterSize, filterShape, latentSize):
        super(ConvDecoder, self).__init__()
        self.tau = tau
        self.maxLength = maxLength
        self.embedDim = embedDim
        #self.embed = embedding
        """
        embedWeightSize = embedWeights.size()
        self.vocabSize = embedWeightSize[0]
        self.embeddingDim = embedWeightSize[1]
        print("Vocab size: " + str(self.vocabSize))
        print("embeddingDim: " + str(self.embeddingDim))
        self.emb = nn.Embedding(self.vocabSize, self.embeddingDim)
        self.emb.weight.data.copy_(embedWeights)
        # Freeze embedding weights
        self.emb.weight.requires_grad = False
        """
        t1 = maxLength + 2 * (filterShape - 1)
        t2 = int(math.floor((t1 - filterShape) / 2) + 1)  # "2" means stride size
        t3 = int(math.floor((t2 - filterShape) / 2) + 1) - 2
        self.deconv1 = nn.ConvTranspose2d(latentSize, filterSize * 2, kernel_size=(t3, 1), stride=2)
        self.batchNorm1 = nn.BatchNorm2d(filterSize * 2)
        self.deconv2 = nn.ConvTranspose2d(filterSize * 2, filterSize, kernel_size=(filterShape, 1), stride=2)
        self.batchNorm2 = nn.BatchNorm2d(filterSize)
        self.deconv3 = nn.ConvTranspose2d(filterSize, 1, kernel_size=(filterShape, embedDim), stride=2)
        # output_padding=(1,0)
        # weight initialize for conv_transpose layer
        for m in self.modules():
            if isinstance(m, nn.ConvTranspose2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))

    def forward(self, h3):
        deconvOutput1 = self.deconv1(h3)
        h2 = F.relu(self.batchNorm1(deconvOutput1))
        deconvOutput2 = self.deconv2(h2)
        h1 = F.relu(self.batchNorm2(deconvOutput2))
        deconvOutput3 = self.deconv3(h1)
        xHat = F.relu(deconvOutput3)
        xHat = xHat.squeeze()
        print("Deconv1: " + str(deconvOutput1.size()))
        print("Deconv2: " + str(deconvOutput2.size()))
        print("Deconv3: " + str(deconvOutput3.size()))
        exit()
        # x.size() is (L, emb_dim) if batch_size is 1.
        # So interpolate x's dimension if batch_size is 1.
        if len(xHat.size()) < 3:
            xHat = xHat.view(1, *xHat.size())
        # normalize
        normXHat = torch.norm(xHat, 2, dim=2, keepdim=True)
        recXHat = xHat / normXHat
        return recXHat
Output:
input: torch.Size([12, 1, 20, 100])
conv1: torch.Size([12, 300, 9, 1])
conv2: torch.Size([12, 600, 4, 1])
conv3: torch.Size([12, 500, 1, 1])
Deconv1: torch.Size([12, 600, 3, 1])
Deconv2: torch.Size([12, 300, 7, 1])
Deconv3: torch.Size([12, 1, 15, 100])
I am really new to this. Sorry for my ignorance and thank you for your help |
st101847 | Hi, I would like to resize features obtained from VGG, for example from 512×m×n to 512×a×b, on the GPU, using bilinear interpolation. I am sorry, I cannot find the function to do this. How can I do it? |
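A minimal sketch (assuming a 4D feature tensor on the GPU; the 14×14 and 28×28 sizes are just examples). In newer PyTorch versions the function is F.interpolate, in 0.4 it was F.upsample:
import torch
import torch.nn.functional as F

feat = torch.randn(1, 512, 14, 14, device='cuda')  # batch of 512×m×n features
resized = F.interpolate(feat, size=(28, 28), mode='bilinear', align_corners=False)
# older versions: resized = F.upsample(feat, size=(28, 28), mode='bilinear')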
st101848 | Below is my code:
import torch as pt
from torch.nn import functional as F
a = pt.Tensor([[0, 1], [2, 3]])
b = pt.Tensor([[1, 0], [5, 4]])
print(F.mse_loss(a, b), F.mse_loss(a, b, reduction='elementwise_mean'))
a = pt.nn.Parameter(a)
b = pt.nn.Parameter(b)
print(F.mse_loss(a, b), F.mse_loss(a, b, reduction='elementwise_mean'))
The output was:
tensor(3.) tensor(3.)
tensor(12., grad_fn=<SumBackward0>) tensor(12., grad_fn=<SumBackward0>)
I wonder why they gave two different results?
Environment setting:
python 3.6
pytorch 0.4.1 |
st101849 | This is a bug. Sorry about it. It was previously reported at https://github.com/pytorch/pytorch/issues/10009 and we already issued a fix on master. |
st101850 | Hi!
I'm using save_image after some conv layer to output the image, but since the output shape of the tensor is torch.Size([32, 1, 3, 3]), I end up having very tiny images. How can I resize before calling save_image?
By the way, using scipy works
img = x.detach().cpu().numpy()
img = img[0] # Taking one image to test with
img = np.transpose(img, (2, 1, 0))
print(img.shape)
from scipy.misc import imsave, imresize
img = imresize(img, (224, 224))
imsave("./images/att.png", img)
Thank you |
st101851 | Solved by ptrblck in post #2 |
st101852 | You could use:
x = torch.randn(32, 1, 3, 3)
transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(size=24),
    transforms.ToTensor()
])
x = [transform(x_) for x_ in x]
torchvision.utils.save_image(x, 'test.png') |
st101853 | Using x = torch.randn(32, 1, 3, 3) works, but using my x tensor doesn't work:
File "/home/paul/miniconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 49, in __call__
img = t(img)
File "/home/paul/miniconda3/lib/python3.6/site-packages/torchvision/transforms/transforms.py", line 110, in __call__
return F.to_pil_image(pic, self.mode)
File "/home/paul/miniconda3/lib/python3.6/site-packages/torchvision/transforms/functional.py", line 109, in to_pil_image
npimg = np.transpose(pic.numpy(), (1, 2, 0))
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead.
I have tested this
x = att.view(b, 1, h, w)
print(x.size()) # ==> torch.Size([32, 1, 3, 3])
print(type(x)) # ==> <class 'torch.Tensor'>
x = torch.randn(32, 1, 3, 3)
print(x.size()) # ==> torch.Size([32, 1, 3, 3])
print(type(x)) # ==> <class 'torch.Tensor'>
So why doesn't mine work?
Thanks |
st101854 | Sorry, it still didn’t work
RuntimeError: Can't call numpy() on Variable that requires grad. Use var.detach().numpy() instead. |
st101855 | Using x = att.detach() didn't work:
TypeError: can't convert CUDA tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
But using x = att.detach().cpu() works.
For someone who will face the same problem
Thank you. |
st101856 | import torch
def _process(queue):
input_ = queue.get()
print('get')
queue.put(input_)
print('put')
if __name__ == '__main__':
torch.multiprocessing.set_start_method('spawn')
input_ = torch.ones(1).cuda()
queue = torch.multiprocessing.Queue()
queue.put(input_)
process = torch.multiprocessing.Process(target=_process, args=(queue,))
process.start()
process.join()
result = queue.get()
print('end')
print(result)
I executed this code, and only 'get' and 'put' were printed, no 'end'. When I used Ctrl-C to interrupt it, I found it blocks at result = queue.get(). Any idea? |
st101857 | Is there an older version of PyTorch I can use with libgcc 2.1.2? (I can't use conda.) I need to use pip and I am restricted to using CentOS 6.9.
(with conda and local libgcc I am able to run on CentOS 6.9)
Thanks
Anand |
st101858 | Hi,
I’m getting an CUDNN_STATUS_INTERNAL_ERROR error like below.
python train_v2.py
Traceback (most recent call last):
File "train_v2.py", line 113, in <module>
main()
File "train_v2.py", line 74, in main
model.cuda()
File "/home/ahkim/Desktop/squad_vteam/src/model.py", line 234, in cuda
self.network.cuda()
File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 249, in cuda
return self._apply(lambda t: t.cuda(device))
File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply
module._apply(fn)
File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply
module._apply(fn)
File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/module.py", line 176, in _apply
module._apply(fn)
File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 112, in _apply
self.flatten_parameters()
File "/home/ahkim/anaconda3/envs/san/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 105, in flatten_parameters
self.batch_first, bool(self.bidirectional))
RuntimeError: CUDNN_STATUS_INTERNAL_ERROR
What should I try to resolve this issue?
I tried deleting .nv but no success.
The same code runs without error using Nvidia Driver Version: 396.26 (cuda V9.1.85. torch.backends.cudnn.version(): 7102). I’m getting an error using Driver Version: 390.67 (cuda V9.1.85. torch.backends.cudnn.version(): 7102) |
st101859 | Solved by the steps below.
export LD_LIBRARY_PATH="/usr/local/cuda-9.1/lib64"
Due to an NFS issue, keep the PyTorch cache off NFS. For example:
$ rm ~/.nv -rf
$ mkdir -p /tmp/$USER/.nv
$ ln -s /tmp/$USER/.nv ~/.nv |
st101860 | Is there something in PyTorch to speed up circular/ring buffers?
I think it would be a good addition to torch.utils.data.
In RL it is very common to collect observations and later sample from them. |
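As far as I know there is nothing dedicated in torch.utils.data for this, but a preallocated tensor works well as a replay-style ring buffer. A minimal sketch (shapes and names are assumptions):
import torch

class RingBuffer:
    def __init__(self, capacity, obs_shape):
        # preallocate once; push/sample never reallocate
        self.buf = torch.empty(capacity, *obs_shape)
        self.capacity = capacity
        self.idx = 0
        self.full = False

    def push(self, obs):
        self.buf[self.idx].copy_(obs)
        self.idx = (self.idx + 1) % self.capacity
        self.full = self.full or self.idx == 0

    def sample(self, batch_size):
        n = self.capacity if self.full else self.idx
        indices = torch.randint(0, n, (batch_size,), dtype=torch.long)
        return self.buf[indices]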
st101861 | Can any one help me with this question?
Let's say I have a 4x4 tensor and I want to do subsampling in the following way: for each 2x2 block I put the smallest elements together,
then the second smallest elements, and so on, so the output will be 4 tensors with size half of the input. Stride will be = 2.
Here is an example:
Input:
A =
[1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16 ]
Results:
A1 =
[1 3
9 11]
A2 =
[2 4
9 12]
A3 =
[5 7
13 15]
A4 =
[6 8
14 16]
And that was just a toy example; the real question is how to do it for a tensor with dimensions BxCxMxM. |
st101862 | Solved by ptrblck in post #9 |
st101863 | This code should work:
kh, kw = 2, 2
dh, dw = 2, 2
input = torch.randint(10, (1, 2, 4, 4))
input_windows = input.unfold(2, kh, dh).unfold(3, kw, dw)
input_windows = input_windows.contiguous().view(*input_windows.size()[:-2], -1)
input_windows_sorted = input_windows.sort(descending=True)[0]
input_windows_sorted = input_windows_sorted.view(*input.size())
input_windows_sorted = input_windows_sorted.transpose(2, 3)
input_windows_sorted = input_windows_sorted.unfold(3, kh, kw)
Let me know, if that works for you. |
st101864 | It works great!!
I was dealing with it for 3 days and was so hopeless, just writing loops inside loops lol, and now I'm just staring at your stunning code wondering how it is possible …
Basically I'm googling every line of your code to understand what is going on :))
Thanks a lot |
st101865 | I’m glad it works for you.
I would suggest to set the number of input channels to 1 for easy debugging / understanding.
Maybe there is another way, as I’m not really happy to use unfold twice, so let me know, if this code is not fast enough for your use case. There might be some tweaks I’m not thinking of. |
st101866 | Sure! Thank you!
First I was trying to dig into the max pooling function and see how it is written and change it, but it was compiled and I could not figure it out.
But this should be fine for now, I want to try it on a simple case first.
Just a quick question, when you use the .view, what is the role of * in .view(*input_windows.size()[:-2], -1)?
I haven’t seen * before in view |
st101867 | The * is used to unpack the following tuple or list.
In my code I’m using it to unpack input_window.size()[:-2] into the separate sizes.
Python creates therefore something like this:
tensor.view(*tensor.size(), -1)
# will be unpacked to
tensor.view(1, 1, 2, 2, -1)
You can read more about this operation here 2. |
st101868 | Sorry, I realized that the code is not working exactly as expected; can you please let me know your opinion?
Here I provide an example:
kh, kw = 2, 2
dh, dw = 2, 2
input = torch.rand(1,2,4,4)
input_windows = input.unfold(2, kh, dh).unfold(3, kw, dw)
input_windows = input_windows.contiguous().view(*input_windows.size()[:-2], -1)
input_windows_sorted = input_windows.sort(descending=True)[0]
input_windows_sorted = input_windows_sorted.view(*input.size())
input_windows_sorted = input_windows_sorted.transpose(2, 3)
input_windows_sorted = input_windows_sorted.unfold(3, kh, kw)
print(input_windows_sorted.size())
Output: torch.Size([1, 2, 4, 2, 2])
but when I change the input size I see:
input = torch.rand(1,2,6,6)
input_windows = input.unfold(2, kh, dh).unfold(3, kw, dw)
input_windows = input_windows.contiguous().view(*input_windows.size()[:-2], -1)
input_windows_sorted = input_windows.sort(descending=True)[0]
input_windows_sorted = input_windows_sorted.view(*input.size())
input_windows_sorted = input_windows_sorted.transpose(2, 3)
input_windows_sorted = input_windows_sorted.unfold(3, kh, kw)
print(input_windows_sorted.size())
OutPut: torch.Size([1, 2, 6, 3, 2])
but the output should be
Output: torch.Size([1, 2, 4, 3, 3]) |
st101869 | You are right! Thanks for pointing this out.
Here is a (hopefully) fixed version:
kh, kw = 2, 2
dh, dw = 2, 2
input = torch.randint(10, (1,2,6,6))
input_windows = input.unfold(2, kh, dh).unfold(3, kw, dw)
input_windows = input_windows.contiguous().view(*input_windows.size()[:-2], -1)
input_windows_sorted = input_windows.sort(descending=True)[0]
input_windows_sorted = input_windows_sorted.permute(0, 1, 4, 2, 3)
print(input_windows_sorted.size()) |
st101870 | Yes, it works I think!
Just a last question here:
if I want to concatenate the results along the channel dimension, is there a faster way than:
B = input_windows_sorted[:, 0, :, :, :]
for i in range(1, input_windows_sorted.size(1)):
    B = torch.cat((B, input_windows_sorted[:, i, :, :, :]), 1)
For this example B will have the size [1, 8, 3, 3]. |
st101871 | You could use a view for it:
input_windows_sorted = input_windows_sorted.contiguous().view(
input_windows_sorted.size(0), -1, *input_windows_sorted.size()[-2:])
If the batch_size is known before, you can use it instead of input_windows_sorted.size(0), which makes the code a bit more readable. |
st101872 | I trained the model with MatConvNet code and converted it to PyTorch with the tool https://github.com/albanie/pytorch-mcn, and the converted models can be downloaded from http://www.robots.ox.ac.uk/~albanie/pytorch-models.html#pedestrian-alginment
https://github.com/albanie/pytorch-mcn/issues/1
I hope to load the net structure, modify parts of the layers, and use the parameters of the other layers in the converted model. How can I load the net structure of the model and its parameters with PyTorch from the pretrained pedestrian_alignment.pth?
I wrote the following code, but it gets the errors below. I think the error arises from a wrongly defined model variable? Should I define an empty net object, or how do I correct it?
import torchvision.models as models
import torch
import torch.nn as nn
pretrained=False
model = models.resnet50(pretrained)
modelpath="/home/chengzi/Downloads/netvlad_v103/pre-trained-models/pedestrian_alignment.pth"
checkpoint = torch.load(modelpath)
net=model.load_state_dict(checkpoint)
print(net)
Traceback (most recent call last):
File "/home/chengzi/workspace/demo/model_s.py", line 13, in
net=model.load_state_dict(checkpoint)
File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 490, in load_state_dict
.format(name))
KeyError: 'unexpected key "conv1.bias" in state_dict' |
st101873 | Hi
To load parameters from a pre-trained model to another, two models must have the same structure. And the conv1 layer in officially implemented resnet does not have bias parameters.
github.com/pytorch/vision/blob/master/torchvision/models/resnet.py |
st101874 | @SKYHOWIE25 Your method is not suitable for my question. The net structure is undefined in PyTorch, and I want to directly load it from the converted model http://www.robots.ox.ac.uk/~albanie/pytorch-models.html#pedestrian-alginment
Because my model is converted from a trained MatConvNet model, can I avoid defining the net structure and instead directly load the net structure from the converted model? |
st101875 | The .pth file very probably only contains the trained parameter values. That is the recommended way of saving a model.
So you need to create the network structure in your code (or borrow their code) and then load the weights.
Here is their code for loading their saved model: http://www.robots.ox.ac.uk/~albanie/models/pytorch-mcn/ped_align.py |
st101876 | @jpeg729
After creating the network structure with the above code ped_align.py, the .pth pretrained model can be loaded, but I get the following warning. I tried to fix it according to "fix unbroadcastable UserWarning in inception.py", but it failed.
/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py:482: UserWarning: src is not broadcastable to dst, but they have the same number of elements. Falling back to deprecated pointwise behavior.
own_state[name].copy_(param) |
st101877 | I think you can ignore the warning.
The model was probably saved in a previous version of pytorch and there has probably been of a slight change in behaviour in some part of pytorch.
The warning occurs for res2a_branch2a.weight, which is of shape (64, 64, 1, 1), but got saved shape (64, 64). It looks to me like they are compatible and that a pointwise copy would work equivalently to the suggested fix.
I wondered why only one instance of a Conv2d caused such a warning when the model contains many, and interestingly enough, there is only one Conv2d in the entire model with in_channels==out_channels and kernel_size=[1, 1] and stride=(1, 1). Maybe the shape of the weight array in this specific case has been changed in a recent update to pytorch. |
st101878 | I load the converted PyTorch model with PyTorch 0.3.0.post4 and Python 3.5.4, and the model was also converted with the same version of PyTorch.
$ pip3 show torch
Metadata-Version: 2.0
Name: torch
Version: 0.3.0.post4
Summary: Tensors and Dynamic neural networks in Python with strong GPU acceleration
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
Installer: pip
License: UNKNOWN
Location: /usr/local/lib/python3.5/dist-packages
Requires: numpy, pyyaml
Classifiers: |
st101879 | Is this what you are looking for?
https://pytorch.org/docs/stable/notes/serialization.html#recommend-saving-models 653 |
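For reference, the pattern that link recommends (a sketch; MyModel stands in for whatever architecture was saved):
# save only the parameters
torch.save(model.state_dict(), 'model_params.pth')

# later: rebuild the same architecture, then load the parameters into it
model = MyModel()
model.load_state_dict(torch.load('model_params.pth'))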
st101880 | I am wondering if MaxPool2d in PyTorch has any learnable parameters, and if so, what are they?
I saw people use self.pool1 = nn.MaxPool2d(2, 2), self.pool2 = nn.MaxPool2d(2, 2), etc. in their models, so I assumed there should be some learnable parameters.
The reason I'm asking is that I'm trying to build my own max pooling and want to make sure I'm doing it right. |
st101881 | Solved by ptrblck in post #2 |
st101882 | Max pooling does not have any learnable parameters.
You can check if with:
pool = nn.MaxPool2d(2)
print(list(pool.parameters()))
> []
The initialization of these layers is probably just for convenience, e.g. if you want easily change the pooling operation without changing your forward method. |
st101883 | What type of neural network would be best for trying to predict a 1D sequence in space that varies over time? I’m assuming some kind of RNN/LSTM, but I’m unsure how to structure the input for this task.
Basically, I hope to incorporate 1D space dependency as well as time dependency beyond just encoding the space as a feature in an LSTM. |
st101884 | It used to be possible to get data, size & type from a storage object.
e.g. for Tensor pointer t:
Tensor *t
auto s=t->storage(); n=s->size(); v=s->data(); Type T=s->type();
These have been made private, how are these attributes to be accessed now?
Thanks |
st101885 | The best I could figure out is to use pImpl(), e.g.
Tensor *t = …
auto s=t->storage()->pImpl(); n=s->size(); v=s->data(); |
st101886 | I am training an RNN network.
The network relies on a “hidden” RNN state variable that is saved from cycle to cycle.
I guess when it uses loss.backward(), it will backpropagate through N, where N is the number of cycles since the hidden RNN state has been initiated.
Is it possible to have loss.backward() work only on the last cycle, even if I don't reinitialize the hidden state variable? |
st101887 | Solved by albanD in post #2 |
st101888 | Hi,
If you only need to backprop the last cycle, you can .detach() the hidden state between each cycle. That way, no gradient will flow back to the previous cycles. |
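A minimal sketch of that (rnn, criterion and the data are placeholders):
hidden = torch.zeros(1, hidden_size)  # initial hidden state (placeholder shape)
for x in inputs:
    output, hidden = rnn(x, hidden)
    hidden = hidden.detach()  # cut the graph here: gradients stop at this cycle
loss = criterion(output, target)
loss.backward()               # backpropagates through the last cycle only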
st101889 | When I want to upsample features in the middle of a model, I usually apply deconvolution. However, does it also make sense to adopt (interpolation-based) upsampling inside the network? |
st101890 | Hello,
I’m relatively new to PyTorch but not to deep learning and I have a question concerning implementation of gradient updates:
Can I pass custom gradients to torch.optim.Adam and similar optimizers?
I’m trying to implement DNI 8 which, in some parts, uses approximations of gradients (which are used in an optimization algorithm such as Adam) to update the parameters.
The way I understand how PyTorch works is that each optimizer contains a list of parameters it’s going to change and when it’s called and it uses torch.autograd.Variable’s .grad parameter as the inputs to its optimization procedure. It then assigns the transformed gradients to that same .grad parameter.
Now, my question is: how can I sidestep this procedure, in which the torch.optim class has the "side effect" of using the model's parameters instead of taking an explicit argument?
I’ve thought of two solutions, none which seem to work:
Manually assigning the .grad parameter and then using the optimizer - but documentation 6 says “This attribute is lazily allocated and can’t be reassigned.”
Manually changing the param.data value, but then I’d have to redefine the optimizer I want to use myself and that doesn’t seem like a genuine solution to this problem
What is the preferred way of doing this?
I’ve seen one post that touches upon this question, and actually has several links to projects that are doing something similar.
But those don’t seem to be genuine solutions to this problem (the DFA project is using the 2nd method I described) and the DNI project doesn’t seem to have a clear explanation of what it is actually doing.
I figured I’d start a topic to discuss a principled way to solve problem of usage of custom gradients in torch.optim and provide a clear reference to people who are trying to solve the same problem as me.
I apologize if I’m breaking any rules or missing something obvious
Cheers! |
st101891 | Have you found a solution for that?
I think one solution is to explicitly assign the grad value of the variable as suggested here: How to use modified gradient to update parameters 385 |
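A minimal sketch of that approach (synthetic_grads stands for your approximated gradients, one per parameter; recent PyTorch versions allow writing to .grad directly as long as shape, dtype and device match):
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
for p, g in zip(model.parameters(), synthetic_grads):
    p.grad = g.detach().clone()  # overwrite .grad instead of calling backward()
optimizer.step()                 # Adam consumes the custom gradients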
st101892 | I am trying to create a batched version of the method that I am writing and I wanted to compute the loss over the whole dataset and then optimize for the specific slices. An example of what i want to do can be seen in
import torch as th
from torch.autograd import *
x = Variable(th.arange(4), requires_grad=True)
loss = th.sum(th.max(x ** 3, th.zeros(4)))
print(th.autograd.grad(loss, x))
print(th.autograd.grad(loss, x[:2]))
where I wish the last print would give the derivative for the first two elements. So I need someway of slicing the data while preserving the graph. How can I do this, without changing the way that I compute the loss? |
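x[:2] is not a leaf of the graph (the slicing creates a new tensor), so autograd.grad cannot target it directly. One workaround that keeps the loss computation unchanged is to take the gradient with respect to the full x and slice the result (a sketch in the current tensor style):
import torch

x = torch.arange(4.0, requires_grad=True)
loss = torch.sum(torch.max(x ** 3, torch.zeros(4)))
grad_x, = torch.autograd.grad(loss, x)
print(grad_x[:2])  # gradient for the first two elements only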
st101893 | When I load the trained model (trained on a GPU, Python 3.6) on another computer (CPU only, Python 2.7), I get the error shown below, even though I added '#!/usr/bin/env python' and '# -*- coding: utf-8 -*-' at the top.
So why does the bug appear at line 388 when I load the trained model?
Thanks
Traceback (most recent call last):
File “hahaha.py”, line 388, in
pre_train_model_mask = torch.load(model_dir_mask,map_location=lambda storage, loc: storage)
File “/usr/local/lib/python2.7/dist-packages/torch/serialization.py”, line 303, in load
return _load(f, map_location, pickle_module)
File “/usr/local/lib/python2.7/dist-packages/torch/serialization.py”, line 469, in _load
result = unpickler.load()
File “/home/chxx/predict/Model_Xception1.py”, line 47
SyntaxError: Non-ASCII character ‘\xe7’ in file /home/chxx/predict/Model_Xception1.py on line 47, but no encoding declared; see http://python.org/dev/peps/pep-0263/ 4 for details |
st101894 | As it says, it has to do with encoding, not PyTorch. Try adding the line below (or one of the alternatives mentioned in the given link):
# -*- coding: utf-8 -*- |
st101895 | Good Morning everyone.
I have an issue that should be apparently obvious but for which I didn’t find an optimal nor elegant solution yet.
I have a class My_model(nn.Module): {.....} which returns a feature tensor. All the rest of my code is built around this, so I cannot change what is returned.
Inside the model I create a variable (essentially some attention weights) that I would like to access from outside the model in order to store them in a log file. What’s the most “pytorchy” way to do that? |
st101896 | Do you assign the attention weights to self? If so, you can just call model.attention_weights outside and get the parameter. |
st101897 | When processing 1K batches of data using torch.nn.DataParallel on 8 GPUs, it took 700+ seconds. But when I attempted to do that same job with just 1 GPU it took 400+ seconds.
I can imagine there is bookkeeping and associated processing time that comes with parallelization. Could you share any guidelines or learnings you may have on the factors that could affect parallelization efficiency?
Thanks. |
st101898 | Hi, I’d like to ask how to store cuda tensors without the need for I/O from GPU at the end of every training step.
Clearly below shows a negative example of how things should be done
# We assume that loss_history is an array
# and loss is a cuda tensor with size of [1]
loss_history.append(loss.item())
Does the following implementation avoid the I/O problem?
loss_history += [loss]
Please advise! Sorry for being a PyTorch noob! |
st101899 | Solved by albanD in post #2
This will not send data to the cpu indeed. But you want to add a .detach() to make sure that the computational graph associated with loss is not kept around otherwise your memory usage is going to quickly explode. |
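A minimal sketch of that suggestion (names are placeholders; the values stay on the GPU during training and are transferred once at the end):
loss_history = []
for step in range(num_steps):
    loss = training_step()                  # returns a scalar cuda loss tensor
    loss_history.append(loss.detach())      # no graph kept, no GPU->CPU copy yet
losses = torch.stack(loss_history).cpu()    # single transfer after training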