st98968
|
Thank you very much!
I still have doubts about the problem I had before: in my own defined module no gradient calculation is needed, so why is it wrong to modify the temp tensor in place?
|
st98969
|
Is there any effective way to multiply the values of a Tensor along a particular dimension? Right now I am doing the operation using for loops.
Thank You
|
st98970
|
Are you looking to multiply along a particular dimension?
Can you give an example of what you are looking for?
|
st98971
|
For a 2D tensor
[1, 2, 3;
 4, 5, 6]
multiplying along dimension 1 should give [6; 120] and multiplying along dimension 0 should give [4, 10, 18].
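For what it's worth, torch.prod with a dim argument seems to give exactly this without loops (a minimal sketch):
import torch

t = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
print(torch.prod(t, dim=1))  # tensor([  6, 120]) -- product across each row
print(torch.prod(t, dim=0))  # tensor([ 4, 10, 18]) -- product across each column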
|
st98972
|
How should I compute the loss in the following situation?
In one forward pass i do:
Predicted_Values_1 = Model(input1)
Predicted_Values_2 = Model(input2)
in the above the same Model is used for both inputs.
now i do:
Loss_1 = loss(Predicted_Values_1,Expected_Values_1)
Loss_2 = loss(Predicted_Values_2,Expected_Values_2)
Can I consider my total loss as the sum of Loss_1 and Loss_2 and then do back-propagation on it, or should I do something else?
Also, are there any cases where this way of back-propagating won't work this easily and I have to be careful?
Thanks
|
st98973
|
Summing the losses and then back-propagating will work.
But it depends on how you model your loss function, and your model's performance will depend on that choice. E.g. you can model the loss as follows:
total_loss = alpha * Loss_1 + (1-alpha) * Loss_2
or
total_loss = alpha * Loss_1 + beta * Loss_2
where alpha and beta are hyper-parameters.
When you have more than one loss, we usually combine them using some function (which will determine the performance of your model) and then backprop.
Another example of a loss function is the following:
E.g. suppose input1 and input2 were expected to give the same output. In that case, you might want to penalize the network more if Predicted_Values_1 and Predicted_Values_2 are different. Such a loss function will ensure the network is trained to produce the same output for both inputs. (This loss term sits on top of the normal loss functions, to achieve better results when a similar output is expected and known.)
Summary: the loss function should be modeled depending on what the expected output of both predicted values is and how we can use it to make the network learn faster.
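A minimal sketch of the weighted-sum idea (the stand-in model, data and the values of alpha/beta are all made up; you would tune the weights yourself):
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                      # stand-in for the shared Model
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

alpha, beta = 0.7, 0.3                        # hypothetical weights, tuned by hand
input1, input2 = torch.randn(4, 10), torch.randn(4, 10)
expected_1, expected_2 = torch.randn(4, 1), torch.randn(4, 1)

loss_1 = criterion(model(input1), expected_1)
loss_2 = criterion(model(input2), expected_2)
total_loss = alpha * loss_1 + beta * loss_2   # a single scalar

optimizer.zero_grad()
total_loss.backward()                         # gradients from both losses accumulate in the shared model
optimizer.step()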
|
st98974
|
Thank you for your answer.
I just provided 2 losses for simplicity; in my code I have more than 2 losses.
I think I got the gist of it.
Just one more thing: can I make alpha and beta trainable as well?
|
st98975
|
You need to tune alpha and beta with your cross-validation set.
AutoML is a framework which does this automatically for you, but just like the other hyper-parameters, you will have to do it manually for now.
I am working on a project to update such hyperparameters from the cross-validation set as well. I will update you once I complete it. Till then, happy tuning!!
|
st98976
|
Hello everyone,
I am wondering whether, when we save the parameters of a trained model that contains layers with custom pre-hook operations (such as spectral normalization), the state dictionary also contains the parameters related to those pre-hook operations, and whether we can recover those parameters with the load_state_dict function.
I made a very simple example using spectral normalization (the PyTorch implementation) as the pre-hook operation, available here:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import optim
from torch.autograd import Variable
import numpy as np
from torch.utils.data import Dataset, DataLoader
import os
import gzip
import struct
from tqdm import tqdm
import cv2
from collections import OrderedDict
from spectral_norm import spectral_norm
class MNISTDataset(Dataset):
def __init__(self, data):
self.data = data
def __len__(self):
return len(self.data)
def __getitem__(self, index):
sample = self.data[index]
return sample
class TestModel1 (nn.Module):
def __init__(self):
super(TestModel1, self).__init__()
self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Sequential(
spectral_norm(nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)),
nn.ReLU(),
spectral_norm(nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)),
nn.ReLU(),
spectral_norm(nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)),
nn.ReLU()
)
self.conv3 = spectral_norm(nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1))
self.conv4 = nn.Sequential(
spectral_norm(nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)),
nn.ReLU(),
spectral_norm(nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)),
nn.ReLU(),
spectral_norm(nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1)),
nn.ReLU()
)
self.conv9 = nn.ConvTranspose2d(64, 32, 4, 2, 1)
self.conv10 = nn.Sequential(
spectral_norm(nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)),
nn.ReLU(),
spectral_norm(nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)),
nn.ReLU(),
spectral_norm(nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1)),
nn.ReLU()
)
self.conv11 = nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0)
def forward(self, x):
y_ = x/255
y_ = self.conv1(y_)
y = self.conv2(y_)
y_ = y_+y
y_ = self.conv3(y_)
y = self.conv4(y_)
y_ = y_+y
y_ = self.conv9(y_)
y = self.conv10(y_)
y_ = y_+y
y_ = self.conv11(y_)
y_ = F.sigmoid(y_)
y_ = 255*y_
return y_
def load(self, path):
dic = torch.load(os.path.join(path, "intermediate_state.pth"))["state_dict"]
new_dic = OrderedDict()
for k,v in dic.items():
name = k[7:]
new_dic[name] = v
self.load_state_dict(new_dic)
class TestModel0 (nn.Module):
def __init__(self):
super(TestModel0, self).__init__()
self.conv1 = nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1)
self.conv2 = nn.Sequential(
nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU()
)
self.conv3 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1)
self.conv4 = nn.Sequential(
nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
nn.ReLU()
)
self.conv9 = nn.ConvTranspose2d(64, 32, 4, 2, 1)
self.conv10 = nn.Sequential(
nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU(),
nn.Conv2d(32, 32, kernel_size=3, stride=1, padding=1),
nn.ReLU()
)
self.conv11 = nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0)
def forward(self, x):
y_ = x/255
y_ = self.conv1(y_)
y = self.conv2(y_)
y_ = y_+y
y_ = self.conv3(y_)
y = self.conv4(y_)
y_ = y_+y
y_ = self.conv9(y_)
y = self.conv10(y_)
y_ = y_+y
y_ = self.conv11(y_)
y_ = F.sigmoid(y_)
y_ = 255*y_
return y_
def load(self, path):
dic = torch.load(os.path.join(path, "intermediate_state.pth"))["state_dict"]
new_dic = OrderedDict()
for k,v in dic.items():
name = k[7:]
new_dic[name] = v
self.load_state_dict(new_dic)
def train(model, loader, path, epochs=10, parallelize=True):
if parallelize:
model = nn.DataParallel(model)
else:
model = model
model = model.cuda()
path_epoch = os.path.join(path, "Training")
if not os.path.exists(path_epoch):
os.makedirs(path_epoch)
loss_reco = 0
optimizer = optim.Adam(model.parameters(), lr=0.00001)
updater = lambda epoch: 0.95**(epoch//10)
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda = updater)
criterion_reco = nn.MSELoss()
for epoch in range(epochs):
print("Begining epoch {}/{}.".format(epoch+1, epochs))
count = 0
for num, batch in tqdm(enumerate(loader)):
batch = batch.unsqueeze(1).cuda()
count += 1
rec = model(batch)
loss2 = criterion_reco(rec, batch.detach())
loss_reco += loss2.item()
optimizer.zero_grad()
loss2.backward()
optimizer.step()
if num==0:
for i in range(min(5, rec.size(0))):
im = np.transpose(batch[i].detach().cpu().numpy(), (1, 2, 0))
reco = np.transpose(rec[i].detach().cpu().numpy(), (1, 2, 0))
cv2.imwrite(os.path.join(path_epoch, "epoch_"+str(epoch)+"_"+str(i)+"_image.png"), im)
cv2.imwrite(os.path.join(path_epoch, "epoch_"+str(epoch)+"_"+str(i)+"_reco.png"), reco)
loss_reco = loss_reco/(count)
print("Reconstruction loss: {}".format(loss_reco))
loss_reco = 0
torch.save({"state_dict":model.state_dict()}, os.path.join(path,"intermediate_state.pth"))
scheduler.step()
def eval_model(model, loader, parallelize=True):
if parallelize:
model = nn.DataParallel(model)
else:
model = model
model = model.cuda()
model.eval()
loss_reco = 0
path_epoch = os.path.join(path, "Val")
if not os.path.exists(path_epoch):
os.makedirs(path_epoch)
criterion_reco = nn.L1Loss()
count = 0
for num, batch in tqdm(enumerate(loader)):
count += 1
batch = batch.unsqueeze(1).cuda()
rec = model(batch)
loss2 = criterion_reco(rec, batch.detach())
loss_reco += loss2.item()
if num==0:
for i in range(min(5, rec.size(0))):
im = np.transpose(batch[i].detach().cpu().numpy(), (1, 2, 0))
reco = np.transpose(rec[i].detach().cpu().numpy(), (1, 2, 0))
cv2.imwrite(os.path.join(path_epoch, str(i)+"_image.png"), im)
cv2.imwrite(os.path.join(path_epoch, str(i)+"_reco.png"), reco)
loss_reco = loss_reco/(count)
print("Reconstruction loss: {}".format(loss_reco))
return model
def read_idx(filename):
with gzip.open(filename) as f:
zero, data_type, dims = struct.unpack(">HBB", f.read(4))
shape = tuple(struct.unpack(">I", f.read(4))[0] for d in range(dims))
return np.fromstring(f.read(), dtype=np.uint8).reshape(shape)
path_mnist = "brain_anomaly_detection/MNIST/images_train.gz"
path_labels = "brain_anomaly_detection/MNIST/label_train.gz"
if __name__=="__main__":
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
path = "brain_anomaly_detection/test_save_model"
if not os.path.exists(path):
os.makedirs(path)
mnist = read_idx(path_mnist).astype(np.float32)
data = MNISTDataset(mnist)
loader = DataLoader(dataset=data, batch_size=128, shuffle=True)
test_model = TestModel1()
eval_model(test_model, loader)
test_model.train(True)
train(test_model, loader, path, epochs=20)
print("Test when using Spectral normalization")
print("Evaluation right after training")
eval_model(test_model, loader)
test_model = TestModel1()
test_model.load(path)
print("Evaluation after save/load parameter")
eval_model(test_model, loader)
test_model = TestModel0()
eval_model(test_model, loader)
test_model.train(True)
train(test_model, loader, path, epochs=20)
print("Test when NOT using Spectral normalization")
print("Evaluation right after training")
eval_model(test_model, loader)
test_model = TestModel0()
test_model.load(path)
print("Evaluation after save/load parameter")
eval_model(test_model, loader)
First, a few explanations about this code:
-TestModel1 and TestModel0 are two very simple convolutional autoencoders, except that TestModel1 includes spectral normalization at several layers.
-They are both trained on the MNIST dataset (I downloaded the original version into the folder “brain_anomaly_detection/MNIST”, so you should adapt that part) with the exact same hyperparameters for 20 epochs (the idea here is just to train a little bit to obtain a better score than the random one corresponding to parameter initialization).
-One last important thing is that I slightly modified the PyTorch code of spectral normalization:
first line 29:
weight_mat = weight_mat.reshape(height, -1)
becomes
weight_mat = weight_mat.contiguous().view(height, -1)
because otherwise spectral_norm does not support data parallelization on multiple GPUs, in my experience.
then line 51->57:
else:
r_g = getattr(module, self.name + '_orig').requires_grad
getattr(module, self.name).detach_().requires_grad_(r_g)
becomes
else:
weight, u = self.compute_weight(module)
setattr(module, self.name, weight)
because otherwise the accuracy drops when switching to evaluation mode, and it makes more sense to me this way (I am not an expert though).
To summarize, if you want to execute this script you should:
modify the path to the MNIST training set
Copy-paste the PyTorch script for spectral norm and name it “spectral_norm.py” OR change line 14 of this script to import your version of spectral normalization
Indicate your own number of GPUs (line 265)
Then everything should work fine.
After running this script, I obtain the following message, which summarizes the scores:
Test when using Spectral normalization
Evaluation right after training
Reconstruction loss: 1.65
Evaluation after save/load parameter
Reconstruction loss: 110.02
Test when NOT using Spectral normalization
Evaluation right after training
Reconstruction loss: 1.88
Evaluation after save/load parameter
Reconstruction loss: 1.88
110 (the score obtained by TestModel1 after saving/loading the parameters) actually corresponds to the kind of score obtained when the network is just randomly initialized without any training, which shows that there is a problem with the pre-hook operation when loading the model’s parameters.
Sorry for the long post.
|
st98977
|
Solved by SimonW in post #7
A fix is at https://github.com/pytorch/pytorch/pull/12671
|
st98978
|
I had the same problem. I was using DataParallel(), and loading the model in eval() mode does not work.
@el_samou_samou, could you elaborate more on weight_mat = weight_mat.contiguous().view(height, -1)? I don’t think I had a problem with that.
|
st98979
|
Hey Tae and @el_samou_samou … This is my bad. SN with DP is currently broken; it doesn’t work in training or eval mode. A fix is in the works. But a workaround is:
use this DP:
class DataParallel(nn.parallel.DataParallel):
    def replicate(self, module, device_ids):
        replicas = super(DataParallel, self).replicate(module, device_ids)
        replicas[0] = module
        return replicas
use this SN: https://gist.github.com/SsnL/8e638bcfd49e71d6b1930db0df87d970 5
Note that only line 56 changed.
|
st98980
|
weight_mat = weight_mat.contiguous().view(height, -1)
That was just a way to stop the error messages, but it probably doesn't solve the problem. If you did not encounter problems with that line, it might just be because of your PyTorch version (I am on 0.4.0, so maybe they made some changes in 0.4.1 which solve this).
Anyway, I solved this using another SN implementation found on GitHub, but @SimonW's solution looks nice as well! Thank you for that, Simon.
|
st98981
|
Thank you for your answer. I still have to modify .reshape to .contiguous().view on line 29. According to you, is that because I am on PyTorch 0.4.0, or something else?
|
st98982
|
Hello,
I have been assigned several hours per day of GPU usage, after which I have to interrupt training and resume during the next available slot. I am able to store everything I need for resuming the training except the state of the dataloader.
I am using the LSUN bedroom dataset for training a model. I would like to resume the DataLoader from where it stopped the previous day. Can anyone suggest how to do this?
|
st98983
|
One way is to write your custom DataLoader with a seed value (which will be the same for every run).
Then log the number of times you have iterated over the dataloader before stopping. When you start again, you skip ahead to that position using the log.
This link 126 might be helpful for a random-seed-based data loader.
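A rough sketch of that idea (the dataset, the seed and the logged batch count here are placeholders; only the skip logic matters):
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 3))   # stand-in for the real dataset
loader = DataLoader(dataset, batch_size=32, shuffle=True)

batches_done = 17                               # read this from your own log file

torch.manual_seed(0)                            # same seed right before iterating -> same shuffle order
for i, (batch,) in enumerate(loader):
    if i < batches_done:                        # fast-forward past already-seen batches
        continue
    _ = batch.mean()                            # placeholder for the real training step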
|
st98984
|
Hi,
I see there are several pytorch implementations of MAML around, eg:
https://github.com/katerakelly/pytorch-maml/tree/master/src 33
https://github.com/dragen1860/MAML-Pytorch 34
… however they all seem to work by using torch.nn.functional, rather than using the nn.Module form and then e.g. somehow iterating over model.parameters().
Is this a fundamental limitation of PyTorch? Or is there some way of running MAML without having to rewrite the entire network in functional form?
|
st98985
|
Wow I was asking myself the exact same thing. I am working on an adaptation of MAML-based iteration for multi-agent learning (https://github.com/alexis-jacq/LOLA_DiCE 30), and it worked since I am not using nn.Module.
Now I want to try it on larger networks (CNN with a recurrent layer) but I have to create everything by hand with nn.functional and implement my own gradient descents.
So far, it seems that PyTorch can’t differentiate through Module parameter updates.
|
st98986
|
I have created this feature request: https://github.com/pytorch/pytorch/issues/12659 104
Let’s see what could be done.
|
st98987
|
loaded_model = load_model('/home/king/supervisely-tutorials/anpr_ocr/src/model_num.hdf5')
[screenshot attached: image.jpg]
|
st98988
|
I have been reading about SHAP (Paper) 9, (Code) 13. Is it possible to use this in the case of tabular data, where I have a number of samples, each having a fixed number of features and a single class label? I wish to get a ranking of all features based on their contribution to the classification.
I have seen how to use SHAP with scikit-learn models. I am wondering how to integrate it with Keras/PyTorch, which are used to build deep learning models. If this is not possible, can anyone please suggest another method to do so?
|
st98989
|
Hello, I have a .pb file which works fine against new input images. How can I get the trained model code for the .pb file?
|
st98990
|
Hi! I’m currently trying to adapt meta-learning (https://arxiv.org/pdf/1703.03400.pdf 1) to my custom module. For a defined module, e.g. the widely used vgg16, we would usually use it as:
fc_out = vgg16(x)
loss = F.cross_entropy(fc_out, label)
loss.backward()
Is there some feature of PyTorch with which we could simply do:
fc_out = vgg16(x, params)
I observed that the widely used MAML PyTorch implementation https://github.com/katerakelly/pytorch-maml 4 rewrites the forward function as follows, but for a more complex case, where I have a very complex module which contains more self-defined modules, rewriting every module would be arduous:
class OmniglotNet(nn.Module):
    '''
    The base model for few-shot learning on Omniglot
    '''
    def __init__(self, num_classes, loss_fn, num_in_channels=3):
        super(OmniglotNet, self).__init__()
        # Define the network
        self.features = nn.Sequential(OrderedDict([
            ('conv1', nn.Conv2d(num_in_channels, 64, 3)),
            ('bn1', nn.BatchNorm2d(64, momentum=1, affine=True)),
            ('relu1', nn.ReLU(inplace=True)),
            ('pool1', nn.MaxPool2d(2,2)),
            ('conv2', nn.Conv2d(64,64,3)),
            ('bn2', nn.BatchNorm2d(64, momentum=1, affine=True)),
            ('relu2', nn.ReLU(inplace=True)),
            ('pool2', nn.MaxPool2d(2,2)),
            ('conv3', nn.Conv2d(64,64,3)),
            ('bn3', nn.BatchNorm2d(64, momentum=1, affine=True)),
            ('relu3', nn.ReLU(inplace=True)),
            ('pool3', nn.MaxPool2d(2,2))
        ]))
        self.add_module('fc', nn.Linear(64, num_classes))
        # Define loss function
        self.loss_fn = loss_fn
        # Initialize weights
        self._init_weights()

    def forward(self, x, weights=None):
        ''' Define what happens to data in the net '''
        if weights == None:
            x = self.features(x)
            x = x.view(x.size(0), 64)
            x = self.fc(x)
        else:
            x = conv2d(x, weights['features.conv1.weight'], weights['features.conv1.bias'])
            x = batchnorm(x, weight = weights['features.bn1.weight'], bias = weights['features.bn1.bias'], momentum=1)
            x = relu(x)
            x = maxpool(x, kernel_size=2, stride=2)
            x = conv2d(x, weights['features.conv2.weight'], weights['features.conv2.bias'])
            x = batchnorm(x, weight = weights['features.bn2.weight'], bias = weights['features.bn2.bias'], momentum=1)
            x = relu(x)
            x = maxpool(x, kernel_size=2, stride=2)
            x = conv2d(x, weights['features.conv3.weight'], weights['features.conv3.bias'])
            x = batchnorm(x, weight = weights['features.bn3.weight'], bias = weights['features.bn3.bias'], momentum=1)
            x = relu(x)
            x = maxpool(x, kernel_size=2, stride=2)
            x = x.view(x.size(0), 64)
            x = linear(x, weights['fc.weight'], weights['fc.bias'])
        return x
|
st98991
|
Hi,
If you reload all the weights, you can simply use load_state_dict() either on self or on a deepcopy of self (if you don’t want the original module to be changed), then do the regular forward on this.
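A minimal sketch of that suggestion (toy module and weights; note, as pointed out below, this alone does not give gradients through the weight update):
import copy
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
adapted_weights = net.state_dict()   # stand-in for the weights you want to load

clone = copy.deepcopy(net)           # the original module stays untouched
clone.load_state_dict(adapted_weights)
out = clone(torch.randn(4, 8))       # regular forward with the reloaded weights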
|
st98992
|
Hi, but I need to implement the meta learning training, which involves grad over grad
|
st98993
|
Heyho,
I would like to use my (no longer supported) ‘Tesla K10.G1.8GB’ GPUs to train my model with PyTorch 0.4.0. The GPUs have a compute capability of 3.0. Does it work with PyTorch 0.4.0 when I install the binaries found at https://pytorch.org/previous-versions/ 4? Or do I need to downgrade PyTorch?
Thanks!
Raph
|
st98994
|
I’m not sure if it works with the PyTorch 0.4 binary out of the box. If it doesn’t, then it might be easier to install from source: https://github.com/pytorch/pytorch 15.
|
st98995
|
Gonna try this. When I need help (which I probably will, because I have never installed anything from source) I will come back to you. Thanks!
|
st98996
|
@richard, so as promised: I am trying to install it from source in a conda environment, and I get the following error:
cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by cmake)
cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by cmake)
cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by cmake)
cmake: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.22' not found (required by cmake)
Failed to run 'bash tools/build_pytorch_libs.sh --with-nnpack --with-mkldnn caffe2 nanopb libshm gloo THD'
cmake and gcc are both installed
|
st98997
|
I have GCC 4.9 on Ubuntu 14.04 with CUDA 9.0.
The GCC version depends on the CUDA toolkit version as well. You can mostly go with the above combination. At least I know it works.
|
st98998
|
Guys, I want to extend PyTorch using C++; however, in my Python env I cannot find libATen.so, which is needed for at::Tensor in C++. I installed PyTorch with "conda install -c pytorch pytorch". Please don't tell me to install from the PyTorch source, thanks.
As PyTorch is based on C/C++, there should be a library plus header files; I found the ATen-related headers in “lib/python3.6/site-packages/torch/lib/include/ATen” but NOT the library. Help!
|
st98999
|
Hello there, I think ATen is part of the libtorch.so. This article may help: https://pytorch.org/tutorials/advanced/cpp_extension.html 42
You may for example create a minimal project. In your CMakeLists.txt:
cmake_minimum_required(VERSION 3.11 FATAL_ERROR)
set(CMAKE_CXX_STANDARD 11)
find_package(Torch REQUIRED)
add_executable(main main.cpp)
target_link_libraries(main ${TORCH_LIBRARIES})
In your main.cpp:
// #include <torch/torch.h> // You could of course also include everything.
#include <ATen/ATen.h>
using namespace at;
int main(int argc, char** argv) {
    Tensor a = ones({2, 2}, kInt);
    Tensor b = randn({2, 2});
    auto c = a + b.to(kInt);
    return 0;
}
Hope it helps, although a little late, but maybe for future readers. I have to admit that I don’t know whether this works for the stable binaries right now. I compiled PyTorch from the master branch on GitHub.
|
st99000
|
pytorch version: 1.0.0.dev20181010
saving the model :
traced_script_module = torch.jit.trace(net, im_tensor)
traced_script_module.save('model.pt')
and then:
#include <torch/script.h>
#include <iostream>
#include <memory>
int main(int argc, const char* argv[]) {
    if (argc != 2) {
        std::cerr << "usage: example-app <path-to-exported-script-module>\n";
        return -1;
    }
    // Deserialize the ScriptModule from a file using torch::jit::load().
    std::shared_ptr<torch::jit::script::Module> module = torch::jit::load(argv[1]);
    assert(module != nullptr);
    std::cout << "ok\n";
}
and I got an error in the terminal when loading the model:
terminate called after throwing an instance of 'std::runtime_error'
what(): unexpected string for type kind
Aborted (core dumped)
How to fix it?
|
st99001
|
Solved by XiaXuehai in post #2
updated the libtorch. Solved
|
st99002
|
Hi,
I tried to build a resnet50 network and want to get the buffers inside the network, i.e. the running means and variances. But when using the method named_buffers, which is defined in https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/module.py#L810-L834 11, an AttributeError comes up telling me that ‘ResNet’ object has no attribute ‘named_buffers’. I also tried the method named_parameters, which works fine. I am curious about the reason for this.
Thanks.
|
st99003
|
I’m not sure, but AFAIK named_buffers and buffers are relatively new methods (corresponding PR 23).
Also, the resnet models were available before these 2 methods were merged.
|
st99004
|
Thank you for your reply. I find that it raises the error in 0.4.1 and 0.5.0a0+3cb45ba, but works fine in 0.5.0a0+82aeebb.
|
st99005
|
I’ve had success modifying the RNN Names Classifier example from the PyTorch tutorials for my own purposes with minimal modification of parameters: https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html 1
I’m using it to input a string of something called SMILES (a way of writing a structure of a chemical in text form where each character is important) and output a single value.
Recently I put the model on the GPU using .to(device), but the GPU performance is about half the speed of the CPU, which is very unexpected given it’s an Nvidia 1080 vs a middling i5. I was told this may be because this example doesn’t use a dataloader, so I’ve been trying to implement one without success.
So I must ask you all: is it true that a dataloader with the right batch size will speed up the training of this kind of model on the GPU?
And how do you implement a dataloader that takes characters from a string and turns them into one-hot vectors? Almost every dataloader example I’ve found does images (not useful here) or does text in a way that involves vocabularies and thousands of words. My total unique character count is around 80, so those examples needlessly complicate things.
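Roughly, what I have in mind is something like this (a sketch with made-up names and a tiny alphabet; equal-length strings assumed, otherwise padding would be needed):
import torch
from torch.utils.data import Dataset, DataLoader

class SmilesDataset(Dataset):
    def __init__(self, strings, targets, alphabet):
        self.strings = strings
        self.targets = targets
        self.char_to_idx = {c: i for i, c in enumerate(alphabet)}

    def __len__(self):
        return len(self.strings)

    def __getitem__(self, index):
        s = self.strings[index]
        one_hot = torch.zeros(len(s), len(self.char_to_idx))
        for pos, ch in enumerate(s):
            one_hot[pos, self.char_to_idx[ch]] = 1.0   # one-hot encode each character
        return one_hot, torch.tensor(self.targets[index])

data = SmilesDataset(['CCO', 'C=O'], [0.5, 1.2], alphabet='CO=')
loader = DataLoader(data, batch_size=2, shuffle=True)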
Thanks all.
|
st99006
|
Hi,
I want to build PyTorch from source on an Ubuntu machine. I am using Aliyun. But whether I use pip or conda to install the required packages, I always run into problems.
pip
when I run
pip install mkl
the error is
Looking in indexes: http://mirrors.cloud.aliyuncs.com/pypi/simple/
Collecting mkl
Could not find a version that satisfies the requirement mkl (from versions: )
No matching distribution found for mkl
I also checked https://pypi.org/project/mkl/ 74 and downloaded the file mkl-2018.0.3-py2.py3-none-manylinux1_x86_64.whl. Then I ran
pip install mkl-2018.0.3-py2.py3-none-manylinux1_x86_64.whl
And I got an error
mkl-2018.0.3-py2.py3-none-manylinux1_x86_64.whl is not a supported wheel on this platform.
Strange.
conda
➜ ~ conda install -c mingfeima mkldnn
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- mkldnn
Current channels:
- https://conda.anaconda.org/mingfeima/linux-32
- https://conda.anaconda.org/mingfeima/noarch
- https://repo.anaconda.com/pkgs/main/linux-32
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/free/linux-32
- https://repo.anaconda.com/pkgs/free/noarch
- https://repo.anaconda.com/pkgs/r/linux-32
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/pro/linux-32
- https://repo.anaconda.com/pkgs/pro/noarch
- https://conda.anaconda.org/conda-forge/linux-32
- https://conda.anaconda.org/conda-forge/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
Has anyone encountered this problem?
|
st99007
|
liuzhishan:
https://conda.anaconda.org/mingfeima/linux-32
hi, i didn’t try to build on aliyun before. But the normal build process is listed on
GitHub
pytorch/pytorch 31
pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
I suggest you use conda to install all the dependencies, as CMAKE_PREFIX_PATH refers to /home/user/anaconda3. If you use pip in the first place, the libs will be installed under the system Python instead of the conda Python.
If you still have trouble installing with conda, it might be an Aliyun support issue.
mkldnn is only needed if you want to run CPU instances; if mkldnn is installed in conda, it will be enabled during the build by default.
|
st99008
|
Hi, please check whether you're using a 32-bit OS. If it is, please try to run it on a 64-bit OS. All the binaries that we've hosted on either Anaconda Cloud or PyPI are built for 64-bit OSes. Thanks
|
st99009
|
Hi @mingfeima, thanks for the help. I'm sorry, I was so busy recently that I didn't check your response.
I followed the steps on the GitHub page to use conda. And yes, I only want to run CPU instances. Maybe it's an Aliyun support issue. The machine I use has a 32-bit OS. I will try a 64-bit OS to see if it works.
Thanks.
|
st99010
|
Hi @jason_ye, thanks for the response. I'm sorry, I was so busy recently that I didn't check your response.
The machine I use has a 32-bit OS. Maybe that's the problem. I'll try a 64-bit OS to see whether it works.
Thanks.
|
st99011
|
Hi!
I know that it is possible to import LuaTorch models into PyTorch with load_lua, but the command does not support models built with nngraph.
Any idea about how to import LuaTorch models built with nngraph?
Thanks!
|
st99012
|
I am trying to train a resnet18 model on the CUB birds dataset with a batch size of 16 across 4 GPUs using DataParallel. My resnet code, adapted from here, is as follows:
'''ResNet in PyTorch.
For Pre-activation ResNet, see 'preact_resnet.py'.
Reference:
[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
Deep Residual Learning for Image Recognition. arXiv:1512.03385
'''
import torch
import torch.nn as nn
import torch.nn.functional as F
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.bn2(self.conv2(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class Bottleneck(nn.Module):
expansion = 4
def __init__(self, in_planes, planes, stride=1):
super(Bottleneck, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, self.expansion*planes, kernel_size=1, bias=False)
self.bn3 = nn.BatchNorm2d(self.expansion*planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != self.expansion*planes:
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion*planes)
)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
out = self.bn3(self.conv3(out))
out += self.shortcut(x)
out = F.relu(out)
return out
class ResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=200):
super(ResNet, self).__init__()
self.in_planes = 64
self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(64)
self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
self.linear = nn.Linear(2048, num_classes)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = self.layer4(out)
out = F.avg_pool2d(out, 4)
out = out.view(out.size(0), -1)
# print(out.shape)
out = self.linear(out)
return out
def ResNet18():
return ResNet(BasicBlock, [2,2,2,2])
def ResNet34():
return ResNet(BasicBlock, [3,4,6,3])
def ResNet50():
return ResNet(Bottleneck, [3,4,6,3])
def ResNet101():
return ResNet(Bottleneck, [3,4,23,3])
def ResNet152():
return ResNet(Bottleneck, [3,8,36,3])
My main training and test driver is as follows:
criterion = nn.CrossEntropyLoss()
logger = Logger(logs_path)
model = ResNet34()
model = nn.DataParallel(model)
model = model.cuda()
optimizer = optim.Adam(params=model.parameters(), lr=1e-4)
for epoch in range(1000):
train_loss, train_acc = train(epoch)
test_loss, test_acc = test(epoch)
print("%f Epoch; %f Train Loss, %f Test Loss, %f Train Acc, %f Test Acc" % (epoch, train_loss, test_loss,
train_acc, test_acc))
My train and test loops are as follows:
def train(epoch):
model.train()
train_loss = 0.0
train_acc = 0.0
count = 0.0
for id, sample in enumerate(train_loader):
count += 1
image, label = sample['image'], sample['label']
if torch.cuda.is_available():
image = image.cuda()
label = label.cuda()
optimizer.zero_grad()
predictions = model(image)
predictions = predictions.view(-1, 200)
loss = criterion(predictions, label)
loss.backward()
optimizer.step()
train_loss += loss.item()
_, predictions = predictions.detach().max(1)
correct = predictions.eq(label.detach()).sum().item()
acc = 100.0 * correct / image.size(0)
train_acc += acc
# print(train_loss, count)
train_loss = train_loss / count
train_acc = train_acc / count
info = {'train_loss': train_loss, 'train_acc': train_acc}
for k, v in info.items():
logger.scalar_summary(k, v, epoch+1)
return train_loss, train_acc
def test(epoch):
model.eval()
test_loss = 0.0
test_acc = 0.0
count = 0.0
for id, sample in enumerate(test_loader):
count += 1
image, label = sample['image'], sample['label']
if torch.cuda.is_available():
image = image.cuda()
label = label.cuda()
predictions = model(image)
predictions = predictions.view(-1, 200)
loss = criterion(predictions, label)
test_loss += loss
_, predictions = predictions.detach().max(1)
correct = predictions.eq(label.detach()).sum().item()
acc = 100.0 * correct / image.size(0)
test_acc += acc
test_loss = test_loss / count
test_acc = test_acc / count
info = {'test_loss': test_loss, 'test_acc': test_acc}
for k, v in info.items():
logger.scalar_summary(k, v, epoch + 1)
return test_loss, test_acc
I get the following traceback:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m checks.check
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/.../checks/check.py", line 190, in <module>
test_loss, test_acc = test(epoch)
File "/.../checks/check.py", line 138, in test
predictions = model(image)
File "/.../lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/.../lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 123, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/.../lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 133, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/.../lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 77, in parallel_apply
raise output
File "/.../python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 53, in _worker
output = module(*input, **kwargs)
File "/.../lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/.../networks_supervised/resnet.py", line 89, in forward
out = self.layer2(out)
File "/.../lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/.../lib/python3.6/site-packages/torch/nn/modules/container.py", line 91, in forward
input = module(input)
File "/.../lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/.../networks_supervised/resnet.py", line 33, in forward
out = F.relu(out)
File "/.../lib/python3.6/site-packages/torch/nn/functional.py", line 643, in relu
return torch.relu(input)
RuntimeError: CUDA error: out of memory
Would love to get any advice on what I am doing wrong. Thanks
|
st99013
|
Solved by rudra_saha in post #2
Solved it by using
with torch.no_grad():
around the for loop in the test.
|
st99014
|
I have recently trained a model with NLLLoss that looks like this:
(0): Linear(in_features=22761, out_features=300, bias=True)
(1): ReLU()
(2): Linear(in_features=300, out_features=300, bias=True)
(3): ReLU()
(4): Linear(in_features=300, out_features=300, bias=True)
(5): ReLU()
(6): Linear(in_features=300, out_features=2, bias=True)
(7): Softmax()
I want to get the probabilities from the output of the model after testing it with one sample. I use this code to test:
for batch_idx, (x, y) in enumerate(dataloader):  # comprised of one sample
    x = Variable(x.cuda())  # sample of size 22761
    y = Variable(y.cuda())  # label, which is 1
    # forward pass
    y_model = model(x)
    print(torch.log(y_model))
And get the following output:
Variable containing:
0.0000 -16.9570
[torch.cuda.FloatTensor of size 1x2 (GPU 0)]
I don’t know what these values mean or why they are the same for every input. Could someone please explain what is happening?
Thanks
|
st99015
|
I assume you’ve also used torch.log(y_model) during training, as nn.NLLLoss expects log-probabilities as input. If so, you might encounter some issues regarding the numerical stability. It’s usually better to call F.log_softmax on the output.
That being said, you are printing out the log-probabilities for both classes.
It seems your model gives class0 a very high probability (basically 100%) and a very low one to class1.
To see the probabilities, just remove the torch.log call on y_model.
If you test your model, you should call model.eval() on it to switch the behavior of some layers to evaluation.
In your case it doesn’t seem to be necessary as your model doesn’t have nn.BatchNorm or nn.Dropout layers.
Does your model output the same values for every sample in the test set?
How was your training accuracy? Do you have an imbalanced dataset, i.e. many samples of class0 and very few of class1?
Also, as another side note, Variables are deprecated since 0.4.0. You can now just use tensors directly.
If you want the volatile behavior, use with torch.no_grad(): for your eval loop.
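Putting those points together, a rough sketch (the layer sizes are copied from your model, the data is a random placeholder):
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(22761, 300), nn.ReLU(), nn.Linear(300, 2))
criterion = nn.NLLLoss()
x, y = torch.randn(4, 22761), torch.tensor([0, 1, 0, 1])

# training: feed log-probabilities to NLLLoss
log_probs = F.log_softmax(model(x), dim=1)
loss = criterion(log_probs, y)

# evaluation: probabilities directly, without autograd bookkeeping
model.eval()
with torch.no_grad():
    probs = F.softmax(model(x), dim=1)   # or log_probs.exp()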
|
st99016
|
Thanks for the reply,
When normally outputting without torch.log I get:
Variable containing:
1.0000e+00 4.3219e-08
[torch.cuda.FloatTensor of size 1x2 (GPU 0)]
Something seems wrong here. The output is always the same for every sample. I am using Pytorch 3.0 to get the same results as a paper’s implementation I am following.
I have retrained the model with LogSoftmax and NLLLoss with the same parameters. When I use torch.exp(y_model) I get the following for a single sample:
Variable containing:
0.5180 0.4820
[torch.cuda.FloatTensor of size 1x2 (GPU 0)]
This means that the model is not sure which class the sample belongs to? That output is also the same for a large test set. The training set is balanced, with 19000 samples for each class. I used code from another implementation to train the model, and it is very confusing; I might also have somehow trained this model incorrectly.
|
st99017
|
So after retraining with F.log_softmax your model outputs the new probabilities? How is your accuracy during training? Do you calculate the validation accuracy as well?
|
st99018
|
After retraining the F.log_softmax model with Losswise to see accuracy and loss, I get these 2 graphs:
[screenshot: stats.PNG, accuracy and loss curves]
For some reason the model’s accuracy gets stuck at exactly 50%? Does this mean the model is being trained incorrectly? The model outputs 0.5 for all samples and classes.
|
st99019
|
Yes, your model doesn’t learn anything useful. It just outputs class0 for every sample. That’s also the reason you get the same predictions in your test set, as the training also fails.
Could you try to use a small sample of your training data and overfit your model on it?
If that’s not possible, your architecture, hyperparameters etc. are not suitable for the task, or you might have a bug somewhere in your training procedure.
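Something along these lines, as a rough sketch (stand-in data, model and optimizer; the point is just to loop over the same few samples until the loss collapses):
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset, TensorDataset

# stand-ins for the real data and model (binary vectors of size 22761, two classes)
full_set = TensorDataset(torch.randint(0, 2, (200, 22761)).float(),
                         torch.randint(0, 2, (200,)).long())
tiny_loader = DataLoader(Subset(full_set, range(32)), batch_size=8, shuffle=True)

model = nn.Sequential(nn.Linear(22761, 300), nn.ReLU(), nn.Linear(300, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(200):                 # keep iterating over the same 32 samples
    for x, y in tiny_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
# if the loss does not approach zero here, something in the model/training code is broken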
|
st99020
|
I used 5 epochs to train the model whereas they used 300, does it matter? Even with just 5 I should still be seeing some results.
After debugging the code, I see that my training dataloader lists my malicious class (my two classes are “malicious” and “benign”) as follows:
[screenshot: image.png, dataloader output for the malicious class]
Is the timeout significant? I never noticed it before, and have no idea why it's just for that class.
But everything still seems to work. The very first training batch of 8 samples of both classes gives this as output (after being evaluated with torch.exp(y_model)):
Does this mean the samples do have some effect on the training? I'm still not sure what is going on; however, it looks like the model is getting all the data fed in correctly.
I tried to overfit the model on a small sample size but it still only outputs 50% accuracy. I have no idea what is going on.
|
st99021
|
Are you using images as training data?
Try training it for 10 or 20 more epochs. How much time does 1 epoch take?
|
st99022
|
With a small sample size and 20 epochs or a large sample size with 5 epochs the model doesn’t learn anything. I am using vectors comprised of 0’s and 1’s of size 22761.
|
st99023
|
Hey, in the CNN visualisation toolkit https://github.com/utkuozbulak/pytorch-cnn-visualizations/blob/master/src/cnn_layer_visualization.py 67 I am unable to understand:
def hook_layer(self):
    def hook_function(module, grad_in, grad_out):
        # Gets the conv output of the selected filter (from selected layer)
        self.conv_output = grad_out[0, self.selected_filter]
    # Hook the selected layer
    self.model[self.selected_layer].register_forward_hook(hook_function)
What are the hook functions and ‘grad_out’ doing? grad_out has not been defined anywhere, yet the code works fine. Also, later this is called as:
def visualise_layer_with_hooks(self):
    # Hook the selected layer
    self.hook_layer()
without any arguments passed for grad_in or out. What is this line doing and how?
|
st99024
|
The naming is a bit misleading as grad_in and grad_out are used in backward hooks.
In forward hooks the vanilla naming would just be input and output.
You are basically creating a function named hook_function with a specific signature which is expected by register_forward_hook.
register_forward_hook makes sure to call the function you’ve passed with two arguments, the input and output of the nn.Module you’ve registered it to.
This is done automatically, so you don’t actually see in your code where input and output is created.
The last line just tries to register the current selected_layer to hook_function.
selected_layer has to be set beforehand or should have a default value otherwise.
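A minimal sketch of that mechanism (module and names are made up):
import torch
import torch.nn as nn

activations = {}

def save_activation(module, input, output):        # the signature forward hooks expect
    activations['conv1'] = output.detach()

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
handle = model[0].register_forward_hook(save_activation)

_ = model(torch.randn(1, 3, 32, 32))               # the hook fires during this forward pass
print(activations['conv1'].shape)                  # torch.Size([1, 8, 32, 32])
handle.remove()                                    # detach the hook when done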
|
st99025
|
Is grad_out a predefined function in torch? I could not find it in the docs.
But self.conv_output is not taken in by register_forward_hook (as hook_function does not return it).
|
st99026
|
No, they are just variables. It’s actually not grad_in and grad_out, but input and output in the forward function.
You could also name them a and b. The important fact is, that register_forward_hook needs a function with a signature of getting two arguments. The first argument is the input to this layer, the second its output.
self.conv_output just saves the activation of the first sample in the batch at index self.selected_filter.
It's a member of your class so that you can visualize the activation. Later you may call my_class.conv_output to visualize the activation map.
|
st99027
|
How can I extract the activation map from a specific layer? I know how to extract filters but not the maps.
|
st99028
|
Since I've got you on hooks here, can you please answer my other question on hooks: CNN visualisation hooks 150
|
st99029
|
torch = 0.4.0
After running for about 1 hour, the process seems to freeze with no output. What are the possible reasons?
+-------------------------------+----------------------+----------------------+
| 6 TITAN Xp On | 00000000:0E:00.0 Off | N/A |
| 23% 30C P8 15W / 250W | 12011MiB / 12196MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
|
st99030
|
Are you able to reproduce this issue, or did it happen only once?
The issue might be specific to your machine for several reasons, such as maximum RAM/CPU usage.
Nevertheless, if you are able to reproduce the issue, please share a reproducible code snippet with us.
|
st99031
|
It is a bit hard to reproduce.
Currently, I found it may be related to:
the logging in Python, or
CUDA 9.2, or
a PyTorch multi-GPU / NCCL-related problem.
I will figure out more.
|
st99032
|
Hey, my model is giving me around the same loss and not training. Can someone please help me?
trainset = torchvision.datasets.CIFAR10(root='.', train=True,
                                        download=False, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=32,
                                          shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='.', train=False,
                                       download=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=32,
                                         shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, 3),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2, 2),
            nn.Conv2d(6, 16, 5),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 20, 5),
            nn.ReLU(inplace=True),
            torch.nn.Conv2d(20, 27, 3),
            nn.ReLU(inplace=True)
        )
        self.classifier = nn.Sequential(
            nn.Linear(27 * 5 * 5, 120),
            nn.ReLU(inplace=True),
            nn.Linear(120, 84),
            nn.ReLU(inplace=True),
            nn.Linear(84, 10)
        )

    def forward(self, x):
        x = self.features(x)
        x = x.view(-1, 27 * 5 * 5)
        x = self.classifier(x)
        return x
net = Net().cuda()
criterion = nn.CrossEntropyLoss().cuda()
init_lr=0.01
opt = torch.optim.Adam(net.parameters(), lr=init_lr)
for epoch in tqdm_notebook(range(10)):
    forward_times = 4
    running_loss = 0.0
    for i, data in tqdm_notebook(enumerate(trainloader, 0)):
        # get the inputs
        inputs, labels = data
        if torch.cuda.is_available():
            # in versions of Torch < 0.4.0 we have to wrap these into torch.autograd.Variable as well
            inputs, labels = Variable(inputs).cuda(), Variable(labels).cuda()
        # zero the parameter gradients
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        opt.step()
        opt.zero_grad()
        if i % 100:
            print(i, loss)
        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0
print('Finished Training')
|
st99033
|
Play around with hyperparameters. (learning rate, weight decay, optimizers (SGD with momentum, Adam, RMSProp), number of channels in layers etc)
For example, if you change the learning rate to 0.001, the error starts reducing.
|
st99034
|
I have trained a model with PyTorch, and I want to run the model in C/C++. Maybe the model should be converted to Caffe/Caffe2 first. Does anybody know how to do that? Thanks!
|
st99035
|
Probably this new tutorial 29 might be helpful.
EDIT: This will only work with the 1.0 preview.
|
st99036
|
My input to the neural network consists of a tensor, which is (BxCxHxW), with B as the batch size. As I understand it, torch.nn.DataParallel splits the first dimension into chunks and processes them. In my case, I also have some metadata, basically a flag for each input, which is a list of size B. This is essential for me to decide which path to go down in later stages of the network.
My problem is that DataParallel is not splitting the associated list into same chunks as the input tensor, it is just sending the whole list to each child thread. Is there any way in which I can get the input information as well as the meta data information into each GPU?
For example, with 4 GPUs if my input is a tuple of a tensor of shape (12,3,24,24) and my metadata, a list of size 12. Now, each GPU is showing an input size of (3,3,24,24) and metadata a list of size 12, and I have no way of associating each input to its flag.
|
st99037
|
Solved by InnovArul in post #2
If you can make your metadata as a tensor (instead of a python list) , Dataparallel will be able to split appropriately.
|
st99038
|
If you can make your metadata as a tensor (instead of a python list) , Dataparallel will be able to split appropriately.
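A rough sketch of that workaround, assuming a machine with multiple GPUs (the flag names and the string-to-index mapping are made up):
import torch
import torch.nn as nn

class Net(nn.Module):
    def forward(self, x, flags):
        # inside each replica, x and flags are matching chunks of the batch
        print(x.size(), flags.size())
        return x.view(x.size(0), -1).mean(dim=1) + flags.float()

flag_to_idx = {'pathA': 0, 'pathB': 1}                                    # hypothetical mapping
flags = torch.tensor([flag_to_idx[f] for f in ['pathA', 'pathB'] * 6])    # tensor of size 12
images = torch.randn(12, 3, 24, 24)

model = nn.DataParallel(Net()).cuda()
out = model(images.cuda(), flags.cuda())   # both arguments get split along dim 0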
|
st99039
|
Thanks! This seems like a good workaround. My flags are actually strings, so I guess I have to map them to one-hot before converting them to Tensors.
|
st99040
|
Recently, I've been seeing warnings saying that you need to add a ‘dim’ argument to Softmax, as the implicit dimension selection is being deprecated.
I have a model that I found on GitHub that uses a softmax layer (nn.LogSoftmax) in its forward function and an F.softmax() in its inference functions.
The data I'm feeding in has dimensions batch_size x output_classes. So, I replaced the call to F.softmax with F.softmax(___, dim=2).
This results in the very confusing and nonsensical error:
RuntimeError: Dimension out of range (expected to be in range of [-2, 1], but got 2)
I don’t … get why I would ever try to take a softmax over a negative dimension. Additionally, I don’t understand why this is stopping me from taking a softmax over the second dimension, when that is actually the dimension that I want to take the softmax over.
I tried swapping out the call to F.softmax with a call to self.softmax, which refers to the nn.LogSoftmax layer initialized in the model’s init() function with dim=2 as an argument, and got the same error, despite this softmax working during training.
I don’t understand what these errors mean, they seem totally illogical, and I don’t understand what the difference is between an nn.LogSoftmax and F.softmax in the first place.
|
st99041
|
Solved by ptrblck in post #2
Could you print the shape of the tensor you are passing to F.softmax?
It seems dim2 is just missing. Are you adding the batch dimension to your data in the test case? This is often forgotten and yields these kind of errors.
You can call functions like softmax on “negative” dimensions to use revers…
|
st99042
|
Could you print the shape of the tensor you are passing to F.softmax?
It seems dim2 is just missing. Are you adding the batch dimension to your data in the test case? This is often forgotten and yields these kind of errors.
You can call functions like softmax on “negative” dimensions to use reverse indices just like with python lists. -1 for example would be the last dimension.
The difference between nn.Softmax and F.softmax is not that big, as neither has any parameters stored. While nn.Softmax is an nn.Module and must therefore be initialized, F.softmax is just the functional API equivalent.
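To make the indexing concrete, a small sketch with the shape from this thread:
import torch
import torch.nn.functional as F

x = torch.randn(64, 5000)          # batch x classes, i.e. only two dimensions: 0 and 1

p1 = F.softmax(x, dim=1)           # softmax over the class dimension
p2 = F.softmax(x, dim=-1)          # same thing, counted from the end
print(torch.allclose(p1, p2))      # True
# F.softmax(x, dim=2) fails because valid dims for a 2-D tensor are -2..1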
|
st99043
|
awesome!! you are definitely my hero today haha.
Input Dimension:
torch.Size([64, 5000])
Are dimensions then, by the same logic, 0-indexed? Is the first dimension dimension 0?
|
st99044
|
Using PyTorch 0.4.1 on Ubuntu 16.04
Riddle me this:
In my model, I have a list of convolutional filters that I want to train in parallel.
In my model’s forward function, I want to run my embedded input data through each convolutional layer, followed by a relu, followed by squeezing out the last dimension, which is just size 1.
So, I try to do this with a list comprehension, like so:
convs = [F.relu(conv_layer(embedded)).sqeeze(-1) for conv_layer in self.conv_layers]
This results in the following error message:
Traceback (most recent call last):
File "main.py", line 230, in <module>
main()
File "main.py", line 178, in main
loss = train_epoch(discriminator, dis_data_iter, dis_criterion, dis_optimizer)
File "main.py", line 79, in train_epoch
pred = model.forward(data_var)
File "/home/bgenchel/SeqGAN/Curious-Musical-SeqGAN/src/models/baseline/discriminator.py", line 38, in forward
convs = [F.relu(conv(embedded)).sqeeze(-1) for conv in self.conv_layers]
File "/home/bgenchel/SeqGAN/Curious-Musical-SeqGAN/src/models/baseline/discriminator.py", line 38, in <listcomp>
convs = [F.relu(conv(embedded)).sqeeze(-1) for conv in self.conv_layers]
AttributeError: 'Tensor' object has no attribute 'sqeeze'
HOWEVER, if I do the exact same thing using a for loop:
convs = []
for conv_layer in self.conv_layers:
    convs.append(F.relu(conv_layer(embedded)).squeeze(-1))
Then everything is fine. No error message; code proceeds as usual.
???
The way I figured this out was to use pdb, and then first run the list comprehension without the squeeze function, then try to use the squeeze method on each individual element of the resulting list. This worked fine, and I got no complaints about the squeeze method not existing.
|
st99045
|
Solved by tumble-weed in post #2
your spelling of squeeze is wrong, you write it as sqeeze
|
st99046
|
Hi everyone, I am trying to understand if there is a reason why Embedding.from_pretrained() in current code 8 doesn’t take a padding_idx option that’s passed through to the constructor. Is that simply missing or is there something subtle going on here?
@classmethod
def from_pretrained(cls, embeddings, freeze=True, sparse=False):
    assert embeddings.dim() == 2, \
        'Embeddings parameter is expected to be 2-dimensional'
    rows, cols = embeddings.shape
    embedding = cls(
        num_embeddings=rows,
        embedding_dim=cols,
        _weight=embeddings,
        sparse=sparse,
    )
    embedding.weight.requires_grad = not freeze
    return embedding
Thanks!
|
st99047
|
Hi, I ran into an error when using the torch.zeros function and finally found a solution to it.
However, I think there may be something wrong with the implementation itself. Can anyone help me with the root cause? Thanks.
Bug
The program gets a segmentation fault when the size passed to torch.zeros is very large and the second entry is a tensor instead of an integer.
To Reproduce
Steps to reproduce the behavior:
seq_length = torch.LongTensor(range(895))
torch.zeros((69137, seq_length.max(), 13))
Segmentation Fault
Expected behavior
If I do the following
import torch
torch.zeros((69137, torch.LongTensor([895]).max(), 13))
An error of TypeError: an integer is required will be shown, indicating we should change torch.LongTensor([895]) to torch.LongTensor([895]).item().
If I do the following
torch.zeros((69137, torch.LongTensor([1]).max(), 13))
No error will be produced.
Environment
Please copy and paste the output from our
environment collection script
(or fill out the checklist below manually).
PyTorch version: 0.4.1.post2
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Debian GNU/Linux 9.4 (stretch)
GCC version: (Debian 4.9.2-10+deb8u1) 4.9.2
CMake version: version 3.9.4
Python version: 2.7
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
Nvidia driver version: 387.26
cuDNN version: Probably one of the following:
/usr/local/cuda-8.0/lib64/libcudnn.so.6
/usr/local/cuda-9.0/lib64/libcudnn.so
/usr/local/cuda-9.0/lib64/libcudnn.so.7
/usr/local/cuda-9.0/lib64/libcudnn.so.7.0.5
/usr/local/cuda-9.0/lib64/libcudnn.so.7.1.2
/usr/local/cuda-9.0/lib64/libcudnn_static.a
/usr/local/cuda-9.1/lib64/libcudnn.so
/usr/local/cuda-9.1/lib64/libcudnn.so.7
/usr/local/cuda-9.1/lib64/libcudnn.so.7.1.2
/usr/local/cuda-9.1/lib64/libcudnn_static.a
Versions of relevant libraries:
[pip] Could not collect
[conda] magma-cuda90 2.3.0 1 pytorch
[conda] pytorch 0.4.1 py27__9.0.176_7.1.2_2 pytorch
[conda] torch 0.4.0a0+964707e
[conda] torch 0.4.0a0+92a0f78
[conda] torchfile 0.1.0
[conda] torchnet 0.0.2
[conda] torchvision 0.2.0
[conda] torchvision 0.2.1 py27_1 pytorch
|
st99048
|
my_torch:
seq_length = torch.LongTensor(range(895))
torch.zeros((69137, seq_length.max(), 13))
Segmentation Fault
I am unable to reproduce this in pytorch 0.4.0, 0.4.1, 0.5.0a0+ab6afc2, 1.0.0.dev20181008
|
st99049
|
Couldn’t reproduce the error either, in 0.4.1 or the current master build.
@my_torch could you try to run your script with gdb as explained here 4?
|
st99050
|
How did you install pytorch? If you installed it by building from source, it is possible that some libs are missing or wrongly linked. In that case, try installing from a binary source. Here is one such source:
Segmentation fault in torch.svd even for small matrices
I do not know what dependencies are wrong.
I have collected pip wheels from here: torchvision, torch 0.4.0
and installed them. They do not give such segmentation faults.
|
st99051
|
Hi, I followed the instructions in that post and found something really weird.
I created a file pytorch.py
import torch
seq_length = torch.LongTensor([895])
torch.zeros((69137, seq_length.max(), 13))
When I run python2 pytorch.py in my shell, I get a segmentation fault.
However, when I follow the gdb instructions in that post,
I get:
(gdb) run
Starting program: /home/dsk/anaconda2/bin/python2 pytorch.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Traceback (most recent call last):
File "pytorch.py", line 5, in <module>
torch.zeros((69137, seq_length.max(), 13))
I tried 5 times and got the same result every time.
Could you give me more guidance?
Thanks
|
st99052
|
Hi, I just installed pytorch using conda install pytorch torchvision -c pytorch taken from the official website.
|
st99053
|
Running PyTorch 0.4.1 on Ubuntu 16.04
I’m trying to run a network and get the following warning message:
UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
When I add the dim argument to the function call, I get the following error:
File "main.py", line 160, in main
loss = train_epoch(generator, gen_data_iter, gen_criterion, gen_optimizer)
File "main.py", line 78, in train_epoch
pred = model.forward(data_var)
File "/home/bgenchel/SeqGAN/Curious-Musical-SeqGAN/src/models/baseline/generator.py", line 46, in forward
ped = self.softmax(fc_out, dim=2)
File "/home/bgenchel/anaconda3/envs/rl4mg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'dim'
Why is this happening?
|
st99054
|
Solved by ptrblck in post #2
Pass the dim argument to the constructor of the class: self.softmax = nn.Softmax(dim=1).
|
st99055
|
Pass the dim argument to the constructor of the class: self.softmax = nn.Softmax(dim=1).
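A minimal sketch of the two working variants (class and layer names here are illustrative, not from your model):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Head(nn.Module):
    def __init__(self):
        super(Head, self).__init__()
        self.fc = nn.Linear(16, 4)
        self.softmax = nn.Softmax(dim=2)  # dim goes to the constructor ...

    def forward(self, x):
        fc_out = self.fc(x)
        out = self.softmax(fc_out)        # ... not to the module call
        # equivalently, the functional form takes dim at call time:
        # out = F.softmax(fc_out, dim=2)
        return out

y = Head()(torch.randn(2, 3, 16))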
|
st99056
|
I want to add random gaussian noise to my network weights, for every forward pass. When backpropagating, I want to calculate gradients in respect to distorted weights, then update the original weights using those gradients. Am I doing it right in the example below?
import torch
import torch.nn as nn
from torch.distributions import Normal

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(784, 10)
        self.gaussian = Normal(loc=0, scale=torch.ones_like(self.linear.weight))

    def forward(self, x):
        orig_weight = self.linear.weight.clone()
        noise = self.gaussian.sample()
        self.linear.weight.data = self.linear.weight.data + noise
        x = self.linear(x)
        self.linear.weight.data = orig_weight.data
        return x
model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model.train()

for epoch in range(10):
    for i in range(100):
        output = model(input)
        loss = nn.CrossEntropyLoss()(output, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
|
st99057
|
I’ve debugged your code and it seems to do exactly what you wish to achieve.
I have to say I don’t really like the usage of .data in general, but this might be a valid use case.
At least I’m not sure how to make it better without manipulating the linear implementation.
Maybe someone else will have a good idea.
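One possible alternative that avoids .data entirely — just a sketch under a toy setup (the model, inputs and labels below are mine, not your code): perturb the parameters in place under torch.no_grad() before the forward pass and undo the same noise after backward(), so the gradients are taken at the perturbed weights but the update is applied to the originals.

import torch
import torch.nn as nn

model = nn.Linear(784, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
inp = torch.randn(8, 784)
label = torch.randint(0, 10, (8,), dtype=torch.long)

noises = []
with torch.no_grad():
    for p in model.parameters():
        noise = torch.randn_like(p)
        noises.append(noise)
        p.add_(noise)                     # distort the weights in place

out = model(inp)
loss = nn.CrossEntropyLoss()(out, label)
optimizer.zero_grad()
loss.backward()

with torch.no_grad():
    for p, noise in zip(model.parameters(), noises):
        p.sub_(noise)                     # restore the original weights

optimizer.step()                          # step on originals with the "noisy" grads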
|
st99058
|
I appreciate your help! Initially I was surprised by your answer, because shortly after I posted my question, I realized that the (more) correct way to do it is like this:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear = nn.Linear(784, 10)

    def forward(self, x):
        return self.linear(x)

model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
model.train()

for epoch in range(10):
    for i in range(100):
        orig_params = []
        for p in model.parameters():
            orig_params.append(p.clone())
            gaussian = Normal(loc=0, scale=torch.ones_like(p))
            p.data = p.data + gaussian.sample()
        output = model(input)
        loss = nn.CrossEntropyLoss()(output, label)
        optimizer.zero_grad()
        loss.backward()
        for p, orig_p in zip(model.parameters(), orig_params):
            p.data = orig_p.data
        optimizer.step()
However, now I see that the reason you didn’t see any issues with my initial example is that I simplified it too much. Because there’s no hidden layer to backpropagate through, it does not make any difference. However, if we consider, for example, an MLP with a hidden layer:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.linear1 = nn.Linear(784, 100)
        self.linear2 = nn.Linear(100, 10)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.linear1(x)
        x = self.relu(x)
        x = self.linear2(x)
        return x
Well, now we would have a problem with the method in the first example: if we apply noise to both weight matrices (in linear1 and linear2), then the error would backpropagate through the original linear2 weights, while what we really want is to backpropagate through the distorted linear2 weights, to get the true gradient with respect to the distorted weights of the linear1 layer. Do you agree?
By the way, I’m curious, how did you debug/verify the code?
|
st99059
|
You are right, I assumed you were adding the noise and resetting the weights of all layers before and after the update step, respectively.
Well, I just initialized the weights to a constant value, calculated the expected gradients by hand, and had a look at what happens after adding/resetting the weights and the update step.
It was just a numerical check to see if the “right” gradients are used, as I’m always worried about manipulating .data.
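For reference, a minimal sketch of that kind of check (layer sizes and the constant value are arbitrary): with constant weights and an all-ones input, the gradient of a sum() loss w.r.t. a linear layer’s weight is easy to compute by hand (grad_output^T @ x, i.e. all ones here), so any surprise from the add-noise / reset-weights dance shows up immediately.

import torch
import torch.nn as nn

lin = nn.Linear(3, 2, bias=False)
with torch.no_grad():
    lin.weight.fill_(0.5)   # constant weights so the expected grad is obvious

x = torch.ones(1, 3)
loss = lin(x).sum()
loss.backward()
print(lin.weight.grad)      # expected: a 2x3 tensor of ones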
|
st99060
|
I am new to PyTorch. My task is to fix the weights of the network and update the input of the network iteratively many times. In the 1st iteration (t = 0), I can get the grad for the input. However, in the second run (say t = 1), when I call .backward(), I cannot fetch the grad of the input; the grad is None. I would really appreciate it if anyone here could help. Thank you in advance!
for t in range(self._opt.temperature):
    syn_energy = self._Des128(sample_seq)  # deep network
    print(syn_energy.sum())
    syn_energy.sum().backward(retain_graph=True)  # backward the output
    temp = sample_seq + 0.5 * self._opt.sampling_step * self._opt.sampling_step * sample_seq.grad
    sample_seq = sample_seq * (1 - mask) + temp * mask
    sample_seq.clamp_(0.0, 1.0)  # min, max
|
st99061
|
Solved by pytorch-newbie in post #7
Thank you very much!! I had tried detach before but forgot to assign it back to itself ( >____< ). This is my final code, which works.
for t in range(self._opt.temperature):
syn_energy = self._Des128(sample_seq) # deep network
syn_energy.sum().backward() # backward th…
|
st99062
|
pytorch-newbie:
sample_seq
Do you have sample_seq.requires_grad set to True in the 2nd iteration?
|
st99063
|
I didn’t set it every iteration; I only set it once. I have added one line in the loop after forwarding the input:
print(t,syn_energy.sum(),sample_seq.requires_grad)
The ouput is as follows:
0 tensor(-1.0209, device=‘cuda:1’, grad_fn=SumBackward0) True
1 tensor(6.3761, device=‘cuda:1’, grad_fn=SumBackward0) True
However, sample_seq.grad is None.
|
st99064
|
I would say, try creating a new tensor before the forward() call every time.
sample_seq = sample_seq.detach().clone()
sample_seq.requires_grad = True
...forward()...
To me, it looks like you are involving the same variable sample_seq in further calculations, and it’s difficult to trace. I am not sure if this is the issue, but you can try.
|
st99065
|
Thank you!! I tried it and it actually works. But this approach copies the data every iteration, which will slow down the process. I agree the problem exists because I reuse the same input in every iteration. I wonder, is there any in-place method instead of copying the data to new memory?
|
st99066
|
pytorch-newbie:
retain_graph=True
I think retain_graph=True is unnecessary.
I feel bad writing spaghetti code, but try this:
syn_energy.sum().backward()
# creating next input
new_sample_seq = sample_seq.detach()
temp = new_sample_seq + 0.5 * self._opt.sampling_step * self._opt.sampling_step * sample_seq.grad
new_sample_seq = new_sample_seq * (1-mask) + temp * mask
new_sample_seq.clamp_(0.0, 1.0) # min , max
new_sample_seq.requires_grad = True
sample_seq = new_sample_seq
Basically, I am detaching the input variable so that it doesn’t get any gradients from these operations.
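A self-contained sketch of the same detach-and-re-require-grad pattern (the toy network, step size, and update rule below are mine, not the thread’s model): the weights stay frozen and only the input is updated, with the input re-detached after every step so each backward() only sees the current iteration’s graph.

import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
for p in net.parameters():
    p.requires_grad_(False)          # weights are frozen

x = torch.randn(2, 4, requires_grad=True)
step = 0.1

for t in range(5):
    energy = net(x).sum()
    energy.backward()                # populates x.grad
    with torch.no_grad():
        new_x = x - step * x.grad    # gradient step on the input only
    x = new_x.detach()               # cut the old graph
    x.requires_grad_(True)           # fresh leaf for the next iteration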
|
st99067
|
Thank you very much!! I had tried detach before but forgot to assign it back to itself ( >____< ). This is my final code, which works:
for t in range(self._opt.temperature):
    syn_energy = self._Des128(sample_seq)  # deep network
    syn_energy.sum().backward()  # backward the output
    # print(t, syn_energy.sum(), sample_seq.requires_grad)
    temp = sample_seq + 0.5 * self._opt.sampling_step * self._opt.sampling_step * sample_seq.grad
    sample_seq = sample_seq * (1 - mask) + temp * mask
    sample_seq.clamp_(0.0, 1.0)  # min, max
    sample_seq = sample_seq.detach()
    sample_seq.requires_grad = True
|