st102400 | For a multi-class setting you usually use nn.NLLLoss + nn.LogSoftmax, or nn.CrossEntropyLoss applied directly to the logits, as the criterion.
Both loss functions need a target tensor containing class indices, not a one-hot encoded matrix.
Have a look at the docs 35 for more information and an example of the usage. |
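For illustration, a minimal sketch (not from the original post) of how the target for nn.CrossEntropyLoss is expected to look, assuming 4 samples and 6 classes:
import torch
import torch.nn as nn
criterion = nn.CrossEntropyLoss()        # expects raw logits
logits = torch.randn(4, 6)               # 4 samples, 6 classes
targets = torch.tensor([0, 5, 2, 2])     # class indices, NOT one-hot vectors
loss = criterion(logits, targets)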
st102401 | Hi ptrblck,
Thanks for your response. I am fairly new to deep learning; even though I went through the documentation as you suggested, I was having trouble understanding it. Could you point me towards a more practical implementation if possible?
Moreover, I’ll just give you the context of my dataset. I basically have 3 columns in my dataset: user_id, movie_id, and review. The review column is my target output, which has 6 classes in it.
I am trying to build an autoencoder which is able to predict the output class for a movie.
I understand the criterion you mentioned and will be able to implement it; however, much of my doubt revolves around the correct data preprocessing needed to achieve this.
Looking forward to your response.
Best,
S |
st102402 | The simplest possible approach would be to use something like this 49. However the loss-function @ptrblck suggested has been used here 14. |
st102403 | I’m not sure how to pre-process your data, as it seems you just have ids as your input.
Also, do you want to use the latent vector of your autoencoder for classification?
Do you have a starter code or do you need one?
Also, just to be clear on the problem: is your use case about multi-class or multi-label classification? |
st102404 | I have a multiclass problem with my classes belonging to - ‘good’, ‘bad’, ‘interesting’, etc.
While researching, I came across autoencoders as the best way to approach this. However, I have additional data about the movies such as genre, actors, etc. Is there any way I could include this in my model, and perhaps implement not an autoencoder but a different neural network?
What do you think would be the correct way to go about this? |
st102405 | Are different labels possible for one datapoint?
To include your additional data you would have to encode it somehow. The simplest approach would be to use one-hot encoding which, however, is relatively sparse. Another approach would be to use nn.Embedding 2 as this will not be one-hot encoded but in a dense way. |
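As a rough sketch of the dense-encoding idea (my own illustration, with made-up vocabulary sizes for the id columns):
import torch
import torch.nn as nn
num_users, num_movies = 1000, 500            # assumed vocabulary sizes
user_emb = nn.Embedding(num_users, 16)       # dense 16-dim vector per user id
movie_emb = nn.Embedding(num_movies, 16)
user_ids = torch.tensor([0, 3, 7])           # a mini-batch of ids
movie_ids = torch.tensor([10, 2, 42])
features = torch.cat([user_emb(user_ids), movie_emb(movie_ids)], dim=1)  # shape (3, 32)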
st102406 | No, so 1 row corresponds to the following:
user_id, movie_id, actor, genre, date_released, total_ratings and review(output class)
1 row only has one output class which can be either (good / bad/ interesting…etc).
I am trying to investigate whether it’s possible for me to predict ratings based on past behaviour. If not autoencoders, what can I implement for this classification problem? |
st102407 | And are your ids fixed or are they usually changing?
In general it sounds like a problem which might be solvable by neural networks, but intuitively, I would recommend some other methods like clustering. For clustering, however, you would probably also need an embedding.
Edit: by changing I mean whether they are expanding, i.e. will there be any ids added in the future? |
st102408 | Ids are fixed for all the users. Do you recommend a particular type of neural network? |
st102409 | Actually no. I would recommend non-deep data-driven methods like standard clustering (an overview of clustering algorithms in scikit-learn is given here 8) and I would probably start with K-Means or K-Nearest-Neighbor clustering. |
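As a minimal, self-contained sketch of the K-Means suggestion (my own illustration, with random data standing in for the encoded rows):
import numpy as np
from sklearn.cluster import KMeans
X = np.random.rand(100, 8)                 # placeholder for the encoded feature rows
kmeans = KMeans(n_clusters=6, random_state=0)
labels = kmeans.fit_predict(X)             # one cluster index per row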
st102410 | Starting from the VGG-16 source code, I added another layer: an LSTM layer after every convolution. However, when I run the new model with my LSTM layer, it seems that not only is the model not learning, but training is also extremely slow (and I am running this on a GPU!). I was wondering why my added classes (RLSTM, RowLSTMCell) are slow and not learning, and whether anyone has any suggestions on how to fix it. I think the problem lies in my custom classes, but I will show the code of the whole program to give a better sense of what I am doing.
Here is where I define my models:
__all__ = [
'VGG', 'vgg16'
]
class VGG(nn.Module):
'''
VGG model
'''
def __init__(self, features): # features represents the layers array
super(VGG, self).__init__()
self.features = features
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(512,512),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(512, 512),
nn.ReLU(True),
nn.Linear(512, 10),
)
# Initialize weights
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
m.bias.data.zero_()
def forward(self, x): # x is the image, we run x through the layers
x = self.features(x) # runs through all features, where each feature is a function
x = x.view(x.size(0), -1)
# after running through features, does sequential steps to finally classify
x = self.classifier(x)
return x
def make_layers(cfg, batch_norm=False):
print("Making layers!")
layers = []
# clearing the layers for next vgg model
in_channels = 3
count=0
for v in cfg:
count+=1
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels=v
rlstm =RLSTM(v)
rlstm=rlstm.cuda()
layers+=[rlstm]
in_channels = v
return nn.Sequential(*layers)
global total
total = 0
class RLSTM(nn.Module):
def __init__(self,ch):
super(RLSTM,self).__init__()
self.ch=ch
self.input_to_state = torch.nn.Conv2d(self.ch,4*self.ch,kernel_size=(1,3),padding=(0,1)).cuda()
self.state_to_state = torch.nn.Conv2d(self.ch,4*self.ch,kernel_size=(1,3),padding=(0,1)).cuda() # error is here: hidPrev is an array - not a valid number of input channel
def forward(self, image):
size = image.size()
b = size[0]
indvs = list(image.split(1,0))
tensor_array = []
for i in range(b):
tensor_array.append(self.RowLSTM(indvs[i]))
seq=tuple(tensor_array)
trans = torch.cat(seq,0)
global total
total+=1
return trans.cuda()
def RowLSTM(self, image):
# input-to-state (K_is * x_i): 1x3 convolution. generates an h x n x n tensor containing all input-to-state info
cell_list=[]
igates = []
n = image.size()[2]
ch=image.size()[1]
for i in range(n):
if i==0:
isgates = self.splitIS(self.input_to_state(image)) # convolve, then split into gates (4 per row)
cell=RowLSTMCell(0,torch.randn(ch,n,1).cuda(),torch.randn(ch,n,1).cuda(),torch.randn(ch,n,1).cuda(),torch.randn(ch,n,1).cuda(),torch.randn(ch,n,1).cuda(),torch.randn(ch,n,1).cuda())
# now have dummy variables for first row
cell_list.append(cell)
else:
cell_prev = cell_list[i-1]
hid_prev = cell_prev.getHiddenState()
ssgates = self.splitSS(self.state_to_state(hid_prev.unsqueeze(0)))
gates = self.addGates(isgates, ssgates,i)
ig, og, fg, gg = gates[0], gates[1], gates[2], gates[3]
cell = RowLSTMCell(cell_prev, ig, og, fg, gg, 0 ,0)
cell.compute()
cell_list.append(cell)
# now have a list of all cell data, concatenate hidden state into 1 x h x n x n
hidden_layers = []
for i in range(n):
hid = cell_list[i].h
hidden_layers.append(torch.unsqueeze(hid,0))
seq = tuple(hidden_layers)
tensor = torch.cat(seq,3)
return tensor
def splitIS(self, tensor): #always going to be splitting into 4 pieces, so no need to add extra parameters
inputStateGates={}
size=tensor.size() # 1 x 4h x n x n
out_ft=size[1] # get 4h for the nxnx4h tensor
num=size[2] # get n for the nxn image
hh=out_ft/4 # we want to split the tensor into 4, for the gates
tensor = torch.squeeze(tensor).cuda() # 4h x n x n
# First, split by row: Creates n tensors of 4h x n x 1
rows = list(tensor.split(1,2))
for i in range(num):
# Each row is a tensor of 4h x n x 1, split it into 4 of h x n x 1
row=rows[i]
# print("Each row using cuda: "+str(row.is_cuda))
inputStateGates[i]=list(row.split(hh,0))
return inputStateGates
def splitSS(self, tensor): # 1 x 4h x n x 1, create 4 of 1 x h x n x 1
size=tensor.size()
out_ft=size[1] # get 4h for the 1x4hxn tensor
num=size[2] # get n for the 1xhxn row
hh=out_ft/4 # we want to split the tensor into 4, for the gates
tensor = tensor.squeeze(0).cuda() # 4h x n x 1
splitted=list(tensor.split(hh,0))
return splitted
def addGates(self, i2s,s2s,key):
""" these dictionaries are of form {key : [[i], [o], [f], [g]]}
we want to add pairwise elemeents """
# i2s is of form key: [[i], [o], [f], [g]] where each gate is hxn
# s2s is of form [[h,n],[h,n],[h,n], [h,n]]
gateSum = []
for i in range(4): # always of length 4, representing the gates
gateSum.append(torch.sigmoid(i2s[key][i] + s2s[i]))
return gateSum
cfg = {
'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M',
512, 512, 512, 512, 'M'],
}
class RowLSTMCell(): #inherit torch.nn.LSTM?
def __init__(self,prev_row, i, o, f, g, c, h):
self.c=c
self.h=h
self.i=i
self.i = self.i.cuda()
self.o=o
self.o = self.o.cuda()
self.g=g
self.g = self.g.cuda()
self.f=f
self.f = self.f.cuda()
self.prev_row=prev_row
def getStateSize(self):
return self._state_size
def getOutputSize(self):
return self._output_size
def compute(self):
c_prev = self.prev_row.getCellState()
h_prev = self.prev_row.getHiddenState()
self.c = self.f * c_prev + self.i * self.g
self.h = torch.tanh(self.c) * self.o
def getHiddenState(self):
return self.h
def getCellState(self):
return self.c
def vgg16():
"""VGG 16-layer model (configuration "D")"""
return VGG(make_layers(cfg['D']))
Next, this is the main method.
import argparse
import os
import shutil
import time
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.optim
import torch.utils.data
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import vgg
model_names = sorted(name for name in vgg.__dict__ # create all the models
if name.islower() and not name.startswith("__")
and name.startswith("vgg")
and callable(vgg.__dict__[name]))
print("Using the following {}".format(model_names))
parser = argparse.ArgumentParser(description='PyTorch ImageNet Training')
parser.add_argument('--arch', '-a', metavar='ARCH', default='vgg16',
choices=model_names,
help='model architecture: ' + ' | '.join(model_names) +
' (default: vgg16)')
parser.add_argument('-j', '--workers', default=4, type=int, metavar='N',
help='number of data loading workers (default: 4)')
parser.add_argument('--epochs', default=300, type=int, metavar='N',
help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
help='manual epoch number (useful on restarts)')
parser.add_argument('-b', '--batch-size', default=128, type=int,
metavar='N', help='mini-batch size (default: 128)')
parser.add_argument('--lr', '--learning-rate', default=0.05, type=float,
metavar='LR', help='initial learning rate')
parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
help='momentum')
parser.add_argument('--weight-decay', '--wd', default=5e-4, type=float,
metavar='W', help='weight decay (default: 5e-4)')
parser.add_argument('--print-freq', '-p', default=20, type=int,
metavar='N', help='print frequency (default: 20)')
parser.add_argument('--resume', default='', type=str, metavar='PATH',
help='path to latest checkpoint (default: none)')
parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true',
help='evaluate model on validation set')
parser.add_argument('--pretrained', dest='pretrained', action='store_true',
help='use pre-trained model')
parser.add_argument('--half', dest='half', action='store_true',
help='use half-precision(16-bit) ')
parser.add_argument('--save-dir', dest='save_dir',
help='The directory used to save the trained models',
default='save_temp', type=str)
best_prec1 = 0
GPU_INDEX = 0
os.environ["CUDA_VISIBLE_DEVICES"] = str(GPU_INDEX)
def main():
global args, best_prec1
args = parser.parse_args()
# Check the save_dir exists or not
if not os.path.exists(args.save_dir):
os.makedirs(args.save_dir)
model = vgg.__dict__[args.arch]()
model.features = torch.nn.DataParallel(model.features) # features = layers array
for param in model.parameters():
print(param)
model.cuda()
# optionally resume from a checkpoint
if args.resume:
if os.path.isfile(args.resume):
print "=> loading checkpoint '{}'".format(args.resume)
checkpoint = torch.load(args.resume)
args.start_epoch = checkpoint['epoch']
best_prec1 = checkpoint['best_prec1']
model.load_state_dict(checkpoint['state_dict'])
print("=> loaded checkpoint '{}' (epoch {})"
.format(args.evaluate, checkpoint['epoch']))
else:
print "=> no checkpoint found at '{}'".format(args.resume)
cudnn.benchmark = True
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
train_loader = torch.utils.data.DataLoader(
datasets.CIFAR10(root='./data', train=True, transform=transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(32, 4),
transforms.ToTensor(),
normalize,
]), download=True),
batch_size=args.batch_size, shuffle=True,
num_workers=args.workers, pin_memory=True)
val_loader = torch.utils.data.DataLoader(
datasets.CIFAR10(root='./data', train=False, transform=transforms.Compose([
transforms.ToTensor(),
normalize,
])),
batch_size=args.batch_size, shuffle=False,
num_workers=args.workers, pin_memory=True)
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda()
if args.half:
model.half()
criterion.half()
optimizer = torch.optim.SGD(model.parameters(), args.lr,
momentum=args.momentum,
weight_decay=args.weight_decay)
if args.evaluate:
validate(val_loader, model, criterion)
return
for epoch in range(args.start_epoch, args.epochs):
adjust_learning_rate(optimizer, epoch)
# train for one epoch
train(train_loader, model, criterion, optimizer, epoch)
# evaluate on validation set
prec1 = validate(val_loader, model, criterion)
# remember best prec@1 and save checkpoint
is_best = prec1 > best_prec1
best_prec1 = max(prec1, best_prec1)
save_checkpoint({
'epoch': epoch + 1,
'state_dict': model.state_dict(),
'best_prec1': best_prec1,
}, is_best, filename=os.path.join(args.save_dir, 'checkpoint_{}.tar'.format(epoch)))
def train(train_loader, model, criterion, optimizer, epoch):
"""
Run one train epoch
"""
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
top1 = AverageMeter()
# switch to train mode
model.train()
end = time.time()
for i, (input, target) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
target = target.cuda(async=True)
input_var = torch.autograd.Variable(input).cuda()
target_var = torch.autograd.Variable(target)
if args.half:
input_var = input_var.half()
# compute output
output = model(input_var)
loss = criterion(output, target_var)
# compute gradient and do SGD step
optimizer.zero_grad()
loss.backward()
optimizer.step()
output = output.float()
loss = loss.float()
# measure accuracy and record loss
prec1 = accuracy(output.data, target)[0]
losses.update(loss.data[0], input.size(0))
top1.update(prec1[0], input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % args.print_freq == 0:
print('Epoch: [{0}][{1}/{2}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Data {data_time.val:.3f} ({data_time.avg:.3f})\t'
'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
'Prec@1 {top1.val:.3f} ({top1.avg:.3f})'.format(
epoch, i, len(train_loader), batch_time=batch_time,
data_time=data_time, loss=losses, top1=top1))
def validate(val_loader, model, criterion):
"""
Run evaluation
"""
batch_time = AverageMeter()
losses = AverageMeter()
top1 = AverageMeter()
# switch to evaluate mode
model.eval()
end = time.time()
for i, (input, target) in enumerate(val_loader):
target = target.cuda(async=True)
input_var = torch.autograd.Variable(input, volatile=True).cuda()
target_var = torch.autograd.Variable(target, volatile=True)
if args.half:
input_var = input_var.half()
# compute output
output = model(input_var)
loss = criterion(output, target_var)
output = output.float()
loss = loss.float()
# measure accuracy and record loss
prec1 = accuracy(output.data, target)[0]
losses.update(loss.data[0], input.size(0))
top1.update(prec1[0], input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if i % args.print_freq == 0:
print('Test: [{0}/{1}]\t'
'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t'
'Loss {loss.val:.4f} ({loss.avg:.4f})\t'
'Prec@1 {top1.val:.3f} ({top1.avg:.3f})'.format(
i, len(val_loader), batch_time=batch_time, loss=losses,
top1=top1))
print(' * Prec@1 {top1.avg:.3f}'
.format(top1=top1))
return top1.avg
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
"""
Save the training model
"""
torch.save(state, filename)
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def adjust_learning_rate(optimizer, epoch):
"""Sets the learning rate to the initial LR decayed by 2 every 30 epochs"""
lr = args.lr * (0.5 ** (epoch // 30))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def accuracy(output, target, topk=(1,)):
"""Computes the precision@k for the specified values of k"""
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0)
res.append(correct_k.mul_(100.0 / batch_size))
return res
if __name__ == '__main__':
main()
Am I forgetting to train some part of my model, such as the LSTM layer? Are the for loops making it slow? Any help is much appreciated. |
st102411 | Is there some example code out there somewhere for how to best deal with the vanishing gradient problem as outlined in Pascanu et al. 2013?
I have to admit I do not understand how to turn that formula 10 into PyTorch code.
BTW, are there any empirical insights into whether vanishing or exploding gradients happen more often when using a standard LSTM architecture for sequence classification or tagging?
Gradient clipping appears to be fairly straightforward, even in LSTMs, but I am not at all sure about the regularization term to use for preventing a vanishing gradient. |
st102412 | Hello
When I put some nn.BatchNorm2d() layers in my network and use nn.DataParallel(net, device_ids=[0,1]), my network always outputs 0 when put in eval() mode, but works great in train() mode.
It also works great when not using DataParallel in eval() mode.
From what I understand nn.BatchNorm2d() can’t handle stats across 2 GPUs; is that the explanation?
Thank you! |
st102413 | It can handle multi-GPU, but BN generally doesn’t quite work when you have a low batch size. |
st102414 | Even with 200 images per batch it does this.
I had to set track_running_stats=False for every BatchNorm layer when using DataParallel for it to work, but doesn’t that defeat the purpose of differentiating between training and evaluation? |
st102415 | I am using the code below to load a batch during training. However, a batch may have all-zero labels, in which case the gradients will go to zero. I want to skip that case. How could I check whether a batch of labels is all zero during training? Thanks
for index, batch in enumerate(train_data_loader):
images, targets = batch
# Is my code correct?
if (torch.sum(targets)==0):
continue; |
st102416 | Solved by justusschock in post #2
You should be able to do target.nonzero().any() |
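As an alternative illustration (my own self-contained sketch, not from the thread) of skipping all-zero batches inside the loop:
import torch
from torch.utils.data import DataLoader, TensorDataset
images = torch.randn(8, 3, 4, 4)
targets = torch.tensor([0, 0, 1, 0, 0, 0, 0, 0])
loader = DataLoader(TensorDataset(images, targets), batch_size=4)
for index, (imgs, tgts) in enumerate(loader):
    if not (tgts != 0).any():   # every label in this mini-batch is zero
        continue                # skip the batch
    # ... forward / backward pass would go here ...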
st102417 | While doing hyperparameter optimization using Ray Tune in pytorch, getting the below error:
Traceback (most recent call last):
File "/home/vsl3/anaconda2/lib/python2.7/site-packages/ray/worker.py", line 891, in _process_task
*arguments)
File "/home/vsl3/anaconda2/lib/python2.7/site-packages/ray/actor.py", line 261, in actor_method_executor
method_returns = method(actor, *args)
File "/home/vsl3/anaconda2/lib/python2.7/site-packages/ray/tune/trainable.py", line 117, in train
result = self._train()
File "/home/vsl3/anaconda2/lib/python2.7/site-packages/ray/tune/function_runner.py", line 114, in _train
result = self._status_reporter._get_and_clear_status()
File "/home/vsl3/anaconda2/lib/python2.7/site-packages/ray/tune/function_runner.py", line 39, in _get_and_clear_status
raise TuneError("Error running trial: " + str(self._error))
TuneError: Error running trial: 'bool' object is not callable |
st102418 | Solved by ptrblck in post #4
I"m not familiar with Ray Tune, but the error seems to come from a call to a bool value:
mask = True
mask()
> TypeError: 'bool' object is not callable |
st102419 | Could you check the type of self._train and self._status_reporter._get_and_clear_status?
Apparently you are trying to call a bool value as the error suggests. |
st102420 | Actually, below is my code, which creates a boolean array:
mask = np.zeros(len(orig_cmc), dtype=np.bool)
If I am not using Ray Tune, the code works. But how can I use a boolean object with Ray Tune? |
st102421 | I"m not familiar with Ray Tune, but the error seems to come from a call to a bool value:
mask = True
mask()
> TypeError: 'bool' object is not callable |
st102422 | Hi. I’m a beginner with PyTorch, and I’m trying to upgrade PyTorch from 0.2.0_3 to 0.4.0.
Using a conda environment, I tried to follow the instructions they give (https://pytorch.org/ 27).
I ran the commands they gave (like conda install pytorch torchvision -c pytorch),
but the version never changed somehow (the installation worked, but the version did not change).
I checked my version of pytorch with
python
import torch
torch.__version__
But it always returned 0.2.0_3
So I decided to install pytorch from the source.
But I got another error, which is
[ 96%] Linking CXX executable …/bin/broadcast_test
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/undefined_tensor_test.dir/build.make:98: recipe for target ‘bin/undefined_tensor_test’ failed
make[2]: *** [bin/undefined_tensor_test] Error 1
CMakeFiles/Makefile2:1440: recipe for target ‘caffe2/CMakeFiles/undefined_tensor_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/undefined_tensor_test.dir/all] Error 2
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/atest.dir/build.make:98: recipe for target ‘bin/atest’ failed
make[2]: *** [bin/atest] Error 1
CMakeFiles/Makefile2:904: recipe for target ‘caffe2/CMakeFiles/atest.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/atest.dir/all] Error 2
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/scalar_test.dir/build.make:98: recipe for target ‘bin/scalar_test’ failed
make[2]: *** [bin/scalar_test] Error 1
CMakeFiles/Makefile2:1058: recipe for target ‘caffe2/CMakeFiles/scalar_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/scalar_test.dir/all] Error 2
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/half_test.dir/build.make:98: recipe for target ‘bin/half_test’ failed
make[2]: *** [bin/half_test] Error 1
CMakeFiles/Makefile2:865: recipe for target ‘caffe2/CMakeFiles/half_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/half_test.dir/all] Error 2
[ 96%] Linking CXX executable …/bin/apply_utils_test
[ 96%] Linking CXX executable …/bin/scalar_tensor_test
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/broadcast_test.dir/build.make:98: recipe for target ‘bin/broadcast_test’ failed
make[2]: *** [bin/broadcast_test] Error 1
CMakeFiles/Makefile2:982: recipe for target ‘caffe2/CMakeFiles/broadcast_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/broadcast_test.dir/all] Error 2
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/apply_utils_test.dir/build.make:98: recipe for target ‘bin/apply_utils_test’ failed
make[2]: *** [bin/apply_utils_test] Error 1
CMakeFiles/Makefile2:1175: recipe for target ‘caffe2/CMakeFiles/apply_utils_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/apply_utils_test.dir/all] Error 2
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/scalar_tensor_test.dir/build.make:98: recipe for target ‘bin/scalar_tensor_test’ failed
make[2]: *** [bin/scalar_tensor_test] Error 1
CMakeFiles/Makefile2:1401: recipe for target ‘caffe2/CMakeFiles/scalar_tensor_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/scalar_tensor_test.dir/all] Error 2
[ 96%] Linking CXX executable …/bin/basic
[ 96%] Linking CXX executable …/bin/native_test
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/basic.dir/build.make:98: recipe for target ‘bin/basic’ failed
make[2]: *** [bin/basic] Error 1
CMakeFiles/Makefile2:943: recipe for target ‘caffe2/CMakeFiles/basic.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/basic.dir/all] Error 2
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/native_test.dir/build.make:98: recipe for target ‘bin/native_test’ failed
make[2]: *** [bin/native_test] Error 1
CMakeFiles/Makefile2:1596: recipe for target ‘caffe2/CMakeFiles/native_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/native_test.dir/all] Error 2
Scanning dependencies of target integer_divider_test
[ 96%] Linking CXX executable …/bin/integer_divider_test
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnRestoreDropoutDescriptor'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetRNNDescriptor_v6'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionGroupCount'
/home/yhbyun/pytorch/build/lib/libcaffe2_gpu.so: undefined reference to `cudnnSetConvolutionMathType'
collect2: error: ld returned 1 exit status
caffe2/CMakeFiles/integer_divider_test.dir/build.make:94: recipe for target ‘bin/integer_divider_test’ failed
make[2]: *** [bin/integer_divider_test] Error 1
CMakeFiles/Makefile2:1635: recipe for target ‘caffe2/CMakeFiles/integer_divider_test.dir/all’ failed
make[1]: *** [caffe2/CMakeFiles/integer_divider_test.dir/all] Error 2
[ 97%] Linking CXX shared library …/…/lib/libtorch.so
[ 97%] Built target torch
Makefile:140: recipe for target ‘all’ failed
make: *** [all] Error 2
Failed to run ‘bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn nccl caffe2 nanopb libshm gloo THD c10d’
With googling, I found two possibilities:
cuda version problem -> I already have a cuda version > 6 (8.0)
remove onnx and reinstall it -> also failed
What should I do to fix that error, or how can I update the pytorch version?
Thanks for your advice! |
st102423 | Solved by ptrblck in post #4
Are you building PyTorch from source?
If you don’t need the current master bug fixes and features, it’s simpler to install it using the pre-built binaries.
However, if you want to build from source, try to run python setup.py clean before python setup.py install. |
st102424 | Could you try to uninstall PyTorch again.
Run the following commands until you get an error stating the lib cannot be found:
conda uninstall pytorch
pip uninstall torch
pip uninstall torch # run this command twice
Then try to reinstall it. Let me know, if that helps. |
st102425 | Thanks for your reply.
I tried your command. conda, pip uninstall pytorch and torch.
yhbyun@tako:~$ pip uninstall pytorch
Skipping pytorch as it is not installed.
yhbyun@tako:~$ pip uninstall torch
Skipping torch as it is not installed.
(pytorchyh2) yhbyun@tako:~/pytorch$ conda uninstall pytorch
Solving environment: failed
PackagesNotFoundError: The following packages are missing from the target environment:
Also, I checked whether I still had any pytorch library installed, which I don’t:
(pytorchyh2) yhbyun@tako:~/pytorch$ python
Python 3.5.5 |Anaconda, Inc.| (default, May 13 2018, 21:12:35)
[GCC 7.2.0] on linux
Type “help”, “copyright”, “credits” or “license” for more information.
import torch
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/yhbyun/pytorch/torch/__init__.py", line 16, in <module>
    from .version import __version__
ImportError: No module named 'torch.version'
exit()
After that, I installed pytorch again from source with setup.py, but still got the same error:
Failed to run ‘bash tools/build_pytorch_libs.sh --use-cuda --use-nnpack --use-mkldnn nccl caffe2 nanopb libshm gloo THD c10d’ |
st102426 | Are you building PyTorch from source?
If you don’t need the current master bug fixes and features, it’s simpler to install it using the pre-built binaries.
However, if you want to build from source, try to run python setup.py clean before python setup.py install. |
st102427 | Thanks for your reply.
I already tried using python setup.py clean, but that didn’t work either.
I tried to build PyTorch from source because I couldn’t update PyTorch, but after I removed PyTorch like you said, the pre-built binaries worked.
So I don’t know why that kind of error occurs, but now I can use the 0.4.0 version.
Thanks a lot. |
st102428 | That’s good to hear. The latest version is 0.4.1. Are you sure you’ve installed 0.4.0?
If so, you could try to update conda with: conda update -n base conda and re-install PyTorch again. |
st102429 | Hello,
I want to do batch norm over variable-length sequences, and I am curious how to do it properly so that the padded zeros are not taken into account when calculating the mean/std. Or is the only way to implement a batch norm layer myself to solve this issue?
Thank you in advance! |
st102430 | I’m wondering: if you use data parallel, will the submodule also be data parallel? For example, I write one model A, and in B I directly use A. When I use data parallel to wrap B, will A also be data parallel? |
st102431 | nn.DataParallel replicates the passed module on your GPUs, so if A is a member of B, both should run in parallel. |
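As a minimal sketch of that (my own example, assuming a machine with CUDA GPUs is available):
import torch
import torch.nn as nn

class A(nn.Module):
    def __init__(self):
        super(A, self).__init__()
        self.fc = nn.Linear(10, 10)
    def forward(self, x):
        return self.fc(x)

class B(nn.Module):
    def __init__(self):
        super(B, self).__init__()
        self.a = A()                  # A is used directly inside B
        self.out = nn.Linear(10, 2)
    def forward(self, x):
        return self.out(self.a(x))

model = nn.DataParallel(B().cuda())   # the whole module tree, including A, is replicated per GPU
y = model(torch.randn(8, 10).cuda())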
st102432 | How can I reduce over only some ranks? For example:
node1: gpu1 (rank0) … gpu4 (rank3)
node2: gpu1 (rank5) … gpu8 (rank11)
node1 and node2 are in the same process group.
Sometimes I only want to reduce over the ranks that are on the same node (node1: gpu1 … gpu4). torch.distributed can only reduce over all ranks at the same time. |
st102433 | I am a beginner studying mnist example.
I find that both the torch.nn module and torch.nn.functional have dropout and dropout2d. What’s the difference between them?
Besides, I used F.dropout2d instead of the nn.Dropout2d class to train the network, and the training parameter was not set in F.dropout() of the fc layer, but my network still works. I am confused.
My code is as follow:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(F.dropout2d(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x)
x = F.log_softmax(self.fc2(x))
return x
The original code:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1) |
st102434 | The main difference is that Dropout() works on input tensors of any shape, but Dropout2d is a spatial-dropout designed for 4-D tensors such as images or feature maps from convolution layers. In such cases, the adjacent features might be strongly correlated, therefore, standard dropout will not be able to effectively regularize the network. Dropout2d() or also named SpatialDropout is then designed to ensure that adjacent pixels are either all 0s or they are all active. You can read more about it at this arXiv paper https://arxiv.org/abs/1411.4280 452 |
st102435 | Regarding your second issue:
If you are using the functional API (F.dropout), you have to set the training flag yourself as shown in your second example.
It might be a bit easier to initialize dropout as a module in __init__ and use it as such in forward, as shown with self.conv2_drop. This module will be automatically set to train and eval respectively if you call it on your model. |
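To illustrate the module variant with a small standalone sketch of my own: calling .train() / .eval() on the module switches dropout on and off automatically:
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 10)
drop.train()
print(drop(x))   # roughly half the entries are zeroed, the rest scaled by 1/(1-p)
drop.eval()
print(drop(x))   # identity: all ones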
st102436 | I have been learning Hopfield networks 46 recently and wanted to see what was the state of the art for this type of network.
What are other examples of similar or better content-addressable (“associative”) memory 43 networks? |
st102437 | I have a cuda tensor on some device, say 7. I am simply trying to convert it to a numpy-like array with
def np_like(probs):
return probs.data.cpu().numpy().squeeze()
What I have found interesting is the way this simple process triggers a lot of device-side assertion errors:
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/THCTensorIndex.cu:279: void indexSelectSmallIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2]: block: [0,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/generic/THCTensorCopy.c line=70 error=59 : device-side assert triggered
node.value Traceback (most recent call last):
File "varian_main.py", line 382, in <module>
neural_fsp.train_nets(mask.rsplit(sep=".")[0])
File "varian_main.py", line 307, in train_nets
best_node, self.player = mcts.run_tree_search(root_node, player=self.player)
File "/home/lex/Documents/NNs/RadOncol/beam_optim/scripts/monte_carlo/mcts.py", line 127, in run_tree_search
new_node = self.tree_policy(root_node)
File "/home/lex/Documents/NNs/RadOncol/beam_optim/scripts/monte_carlo/mcts.py", line 162, in tree_policy
return self.expand(node)
File "/home/lex/Documents/NNs/RadOncol/beam_optim/scripts/monte_carlo/mcts.py", line 195, in expand
maybe_child = self.action_score(maybe_child)
File "/home/lex/Documents/NNs/RadOncol/beam_optim/scripts/monte_carlo/mcts.py", line 281, in action_score
print('node.value ', node.value)
File "/home/lex/anaconda3/envs/py35/lib/python3.5/site-packages/torch/autograd/variable.py", line 119, in __repr__
return 'Variable containing:' + self.data.__repr__()
File "/home/lex/anaconda3/envs/py35/lib/python3.5/site-packages/torch/tensor.py", line 133, in __repr__
return str(self)
File "/home/lex/anaconda3/envs/py35/lib/python3.5/site-packages/torch/tensor.py", line 140, in __str__
return _tensor_str._str(self)
File "/home/lex/anaconda3/envs/py35/lib/python3.5/site-packages/torch/_tensor_str.py", line 295, in _str
strt = _vector_str(self)
File "/home/lex/anaconda3/envs/py35/lib/python3.5/site-packages/torch/_tensor_str.py", line 271, in _vector_str
fmt, scale, sz = _number_format(self)
File "/home/lex/anaconda3/envs/py35/lib/python3.5/site-packages/torch/_tensor_str.py", line 79, in _number_format
tensor = torch.DoubleTensor(tensor.size()).copy_(tensor).abs_().view(tensor.nelement())
RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1512383260527/work/torch/lib/THC/generic/THCTensorCopy.c:70
I have seen similar errors on the github issues page and discuss page but no suggestion has been able to fix my problem so far.
I am on pytorch 0.3.0 |
st102438 | Ah never mind, I was using index_select somewhere in my code. This was the culprit as pointed out by others erstwhile.
Sorry for the bother. |
st102439 | Hi, I am facing the same error as in your question, and I don’t use index_select, just indexing with a generated index. Would you please tell me how you fixed your problem? |
st102440 | Hello expert PyTorch folks
I have a question regarding loading the pretrain weights for network.
Lets say I am using VGG16 net.
And I can use load_state_dict to reload the weights; pretty straightforward if my network stays the same!
Now let’s say I want to reload the pre-trained VGG16 weights, but I change the architecture of the network in the following way.
I added 2 more channels to my input,
so for e.g. instead of doing
nn.Conv2d( 3, 64, 3, padding=1)
i will do
nn.Conv2d( 5, 64, 3, padding=1)
in the second case, when i want to use the load_state_dict, i get the following error:
RuntimeError: Error(s) in loading state_dict for ModuleList:
size mismatch for 0.weight: copying a param of torch.Size([64, 5, 3, 3]) from checkpoint, where the shape is torch.Size([64, 3, 3, 3]) in current model.
Does anyone know what i need to fix this issue?
Is there any way to just update the first 3 input channels with the pre-trained weights?
To be more specific:
def VGG16(cfg, i, batch_norm=False):
layers = []
in_channels = i
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
elif v == 'C':
layers += [nn.MaxPool2d(kernel_size=2, stride=2, ceil_mode=True)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
else:
layers += [conv2d, nn.ReLU(inplace=True)]
in_channels = v
pool5 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)
conv6 = nn.Conv2d(512, 1024, kernel_size=3, padding=6, dilation=6)
conv7 = nn.Conv2d(1024, 1024, kernel_size=1)
layers += [pool5, conv6,
nn.ReLU(inplace=True), conv7, nn.ReLU(inplace=True)]
return layers
InputChannel = 5
bases = {
'A': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M', 512, 512, 512]}
base = VGG16(bases['A'], InputChannel)
vgg = nn.ModuleList(base)
vgg_weights = torch.load('weights/vgg16_reducedfc.pth')
print('Loading base network...')
vgg.load_state_dict(vgg_weights, strict=False)
For InputChannel = 3 I can load the pretrained weights; however, when I change the input channel (to 5, for example), I get the error:
RuntimeError: Error(s) in loading state_dict for ModuleList:
size mismatch for 0.weight: copying a param of torch.Size([64, 5, 3, 3]) from checkpoint, where the shape is torch.Size([64, 3, 3, 3]) in current model.
Please note that vgg16_reducedfc was trained with InputChannel = 3.
Update:
I also tried this:
vgg = nn.ModuleList(base)
vgg_weights = torch.load('weights/vgg16_reducedfc.pth')
state = vgg.state_dict()
state.update(vgg_weights)
vgg.load_state_dict(state)
but still getting the same error |
st102441 | I did the following; it is kinda stupid, but it apparently works.
Please let me know if there are any other solutions.
import torch
import torch.nn as nn
from vgg import VGG16 as vgg
InputChannel = 5
bases = {
'A': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'C', 512, 512, 512, 'M', 512, 512, 512]}
base = vgg(bases['A'], InputChannel)
vgg = nn.ModuleList(base)
#reading pretrained weights
vgg_weights = torch.load('weights/vgg16_reducedfc.pth')
# Changing the pretrain weights to have the same 5 channels
Main = vgg_weights['0.weight']
SIZEMain = Main.size()
Zeros = torch.zeros(SIZEMain[0],2,SIZEMain[2],SIZEMain[3]).cpu()
NewMain = torch.cat((Main,Zeros),1)
#Update the corresponding weight in state
state = vgg.state_dict()
state.update(vgg_weights)
state['0.weight'] = NewMain
#Give State to VGG
vgg.load_state_dict(state) |
st102442 | The code snippets below work fine, but my implementation makes them very slow. Is there any better way to optimize the code without breaking the autograd mechanism?
**Snippet 1**
theta = torch.tril(torch.Tensor(self.N_parameters, self.N_parameters).to(device)).expand(self.N_subjects, self.N_parameters, self.N_parameters)
for i in range(0, self.N_subjects):
self.chol_cov_theta.data[i] = torch.tril(self.chol_cov_theta.data[i])
self.chol_cov_theta.data[i] -= torch.diag(torch.diag(self.chol_cov_theta.data[i]))
theta[i] = self.chol_cov_theta[i] + torch.diag(torch.exp(self.log_diag_chol_cov_theta[i]))
**Snippet 2**
theta_j = []
for i in range(0, self.N_subjects):
L_j = torch.inverse(theta[i]).t()
theta_j.append((self.m_theta_j[i].view(-1,1) + torch.mm(L_j, sampler_j[i].view(-1,1))).view(1,-1))
theta_j = torch.cat(theta_j) |
st102443 | EDIT: This only seems to be happening on CPU.
Hi,
Is there a quick way to freeze and unfreeze the weights of a network?
Currently I have the two functions to freeze and unfreeze the weights
def freeze_model(model):
model.eval()
for params in model.parameters():
params.requires_grad = False
def unfreeze_model(model):
model.train()
for params in model.parameters():
params.requires_grad = True
I use them in my training code below
# fix adversary take gradient step with classifier and encoder
unfreeze_model(encoder)
unfreeze_model(classifier)
z = encoder(x_batch)
y_hat = classifier(z)
freeze_model(adversary)
a_fixed = adversary(z)
opt_cls.zero_grad()
opt_en.zero_grad()
cls_loss = cls_criterion(y_hat, y_cls_batch)
adv_loss_fixed = adv_criterion(a_fixed, y_adv_batch, y_cls_batch)
cls_en_combinedLoss = cls_loss + adv_loss_fixed
cls_en_combinedLoss.backward()
opt_cls.step()
opt_en.step()
# fix encoder and classifier and take gradient step with adversary
freeze_model(encoder)
freeze_model(classifier)
z_fixed = encoder(x_batch)
y_hat_fixed = classifier(z_fixed)
adversary.train()
a_hat = adversary(z_fixed)
opt_adv.zero_grad()
cls_loss_fixed = cls_criterion(y_hat_fixed, y_cls_batch)
adv_loss = adv_criterion(a_hat, y_adv_batch, y_cls_batch)
adv_combinedLoss = -(cls_loss_fixed + adv_loss)
adv_combinedLoss.backward()
opt_adv.step()
However, due to this freezing and unfreezing for every mini-batch it takes a longer time. Any thoughts? Thank you! |
st102444 | Do you see an unusual long duration for freezing / unfreezing the models?
If freezing and unfreezing takes more time than just calculating the gradients, you could just remove it and clear the gradients corresponding to your use case.
As a side note: Did you forget to unfreeze the adversary at the end of your code? |
st102445 | I cannot understand why window_pool is not on the GPU. The entire module is on the GPU, so wouldn’t window_pool be on the GPU as well? Is gradient info copied when I do sliced tensor copying?
How do I make window_pool be on the GPU as well? Would it be correct just to do window_pool.to("cuda")?
def forward(self, input, lens, args):
window_pool = torch.zeros([lens.shape[0], args.mem_dim*len(self.windows)])
convolved = conv_model(x)[0].transpose(0, 1)
relu_convolved = F.relu(convolved)
start = 0
for i in range(lens.shape[0]):
input_len = relu_convolved[start:start+lens[i]-window+1].shape[0]
input = relu_convolved[start:start+lens[i]-window+1].permute(1,0).unsqueeze(0)
start += lens[i].data.cpu().numpy()-window+1
max_pool1d = F.max_pool1d(input, kernel_size=input_len)
lo = (window-1)*args.mem_dim
hi = lo + args.mem_dim
print("max_pool1d.device ", max_pool1d.device) #GPU
window_pool[i][lo:hi] = max_pool1d
print(window_pool.device) #CPU |
st102446 | You can use
window_pool = torch.zeros([lens.shape[0], args.mem_dim*len(self.windows)]).to("cuda")
or
window_pool = torch.zeros([lens.shape[0], args.mem_dim*len(self.windows)]).cuda() |
st102447 | But if you do this you cannot switch back to the CPU without changing the code.
You could register it as a buffer in your module and then only expand it to the batch size. |
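A rough sketch of the register_buffer idea (my own illustration, with made-up shapes; the buffer follows the module when it is moved to a device):
import torch
import torch.nn as nn

class Pool(nn.Module):
    def __init__(self, feat_dim):
        super(Pool, self).__init__()
        # the buffer is moved by .cuda() / .to(device) together with the module
        self.register_buffer('pool_template', torch.zeros(1, feat_dim))
    def forward(self, x):
        # expand the template to the current batch size; it is already on the right device
        return self.pool_template.expand(x.size(0), -1).clone()

m = Pool(8)
out = m(torch.randn(4, 3))     # torch.Size([4, 8]), on the same device as the module
# m.cuda() would move pool_template to the GPU as well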
st102448 | This one solved the issue
window_pool = torch.cuda.FloatTensor(lens.shape[0], args.mem_dim*len(self.windows)).fill_(0) |
st102449 | How to feed weights of a module from output of another module?
I don’t think I can do:
def __init__(self):
self.h = nn.Linear(...)
def forward(self, x, y):
self.h.weights[:] = x
y = self.h(y)
… because it will give an error about not being able to do an in-place operation on a leaf node (I think?).
However, I’m guessing that assigning x to .data will not result in an error, but will result in the gradients not back-propagating into x? |
st102450 | Solved by InnovArul in post #2
what about using F.linear()? |
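For illustration, a small self-contained sketch (my own, with made-up shapes) of feeding externally produced weights through F.linear so that gradients still flow into them:
import torch
import torch.nn.functional as F

y = torch.randn(4, 16)                       # input to the "linear layer"
x = torch.randn(8, 16, requires_grad=True)   # weights produced elsewhere, e.g. by another module
out = F.linear(y, weight=x)                  # same as y @ x.t(); out has shape (4, 8)
out.sum().backward()
print(x.grad.shape)                          # torch.Size([8, 16])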
st102451 | For example, two processes each call torch.distributed.reduce(...); each process blocks in this function until the other process executes it too, and then they both return, so we can say that they are synchronized in this function.
I would like to know whether the collective functions of torch.cuda.comm or torch.distributed are actually synchronized like this. |
st102452 | Hi, I’m trying to make a simple fit of the form w1 * sin(w2*x + w3), but the results are somehow strange. Maybe someone can help. Here’s my code:
# create model
import torch
from torch import nn
from torch.autograd import Variable
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.w1 = nn.Parameter(torch.Tensor(np.random.randn(1)))
self.w2 = nn.Parameter(torch.Tensor(np.random.randn(1)))
self.w3 = nn.Parameter(torch.Tensor(np.random.randn(1)))
return
def forward(self, input):
w1, w2, w3 = self.w1, self.w2, self.w3
return w1* torch.sin(w2 * input + w3)
import numpy as np
import holoviews as hv
hv.extension('bokeh')
# helper function
def f(w1, w2, w3):
def f2(x):
return w1 * np.sin(w2*x + w3)
return f2
def run_and_plot(w1=1, w2=1, w3=1 ,noise=0.01):
# make some data
x_numpy = np.linspace(-5, 5, 1000)
y_no_noise_numpy = f(w1, w2, w3)(x_numpy)
# add noise
y_numpy = y_no_noise_numpy + noise * np.random.standard_normal(y_no_noise_numpy.size)
# convert
x = torch.autograd.Variable(torch.Tensor(x_numpy),
requires_grad=False)
y = torch.autograd.Variable(torch.Tensor(y_numpy),
requires_grad=False)
# initialize model, optimizer, loss_function
model = MyModel()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01)
loss_function = nn.MSELoss()
for epoch in range(1000):
y_pred = model(x)
loss = loss_function(y_pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
curf = hv.Points((x_numpy, y_numpy))
curf2 = hv.Curve((x_numpy,y_pred.data.numpy()))
return (curf*curf2).relabel('%.2f %.2f %.2f %.2f' % (w1, w2, w3, noise))
np.random.seed(1)
hv.opts('Curve (color="red")')
(run_and_plot(1, 1 ,1) + run_and_plot(1, 2, 1) + run_and_plot(2, 1, 1)
+ run_and_plot(1, 1, 2) + run_and_plot(3, 2, 1) + run_and_plot(1, 4, 4)).cols(3)
Here’s the resulting plot: points to fit in blue, resulting curve in red.
Thanks |
st102453 | Hi,
probably not.
I would venture that what you are observing is a direct result of the problem being “too difficult” (insert your favourite mathematical definition of that here).
You will see that you often do not get much of a gradient for your parameters. Intuitively, if amplitude, frequency and phase are off, changing any one of these parameters just a bit will not help much with the MSE.
Of course, if you have this specific problem and know that your model is actually a good representation of the data, there are ways to estimate the three quantities you look for that work better than minimizing MSE.
Best regards
Thomas |
st102454 | Hi @jimmy, I’m implementing something very similar to this. Did you manage to solve your problem? |
st102455 | Hi,
well I didn’t investigate it really further, so sorry I can’t help. It was more like a pet problem that turned out to be not that easy. Thanks to tom for his answer. |
st102456 | Hi,
I am trying to train a model on several GPUs with DataParallel; it uses several RNNs inside (with pack_padded_sequence and pad_packed_sequence), but I get this error:
ValueError: gather got an input of invalid size: got 10x13x29, but expected 10x14x29
Each RNN is constructed with batch_first=True flag.
Pseudocode of my model is like this.
class Model:
    def forward(self, x, lengths):
        for rnn in self.rnns:
            x = rnn(x, lengths)

class RNN:
    def forward(self, x, lengths):
        # ...
        x = pack_padded_sequence(x, lengths, batch_first=True)
        x, _ = self.rnn(x)
        x, _ = pad_packed_sequence(x, batch_first=True)
If I don’t use pack_padded_sequence, everything works great. What could be the problem? I would appreciate any hints/directions. |
st102457 | Solved by wise_east in post #2
Refer to the FAQ in PyTorch regarding using total lengths. There’s a subtlety in using the pack_padded_sequence and pad_packed_sequence utility functions with DataParallel.
https://pytorch.org/docs/stable/notes/faq.html |
st102458 | Refer to the FAQ in PyTorch regarding using total lengths. There’s a subtlety in using the pack_padded_sequence and pad_packed_sequence utility functions with DataParallel.
https://pytorch.org/docs/stable/notes/faq.html 547 |
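In short, the trick described in that FAQ is to pass total_length when unpacking, so every DataParallel replica pads its output back to the same length. A minimal sketch of my own (made-up sizes):
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

rnn = nn.LSTM(5, 7, batch_first=True)
x = torch.randn(3, 10, 5)                 # batch of 3 sequences, padded to length 10
lengths = [10, 6, 4]

packed = pack_padded_sequence(x, lengths, batch_first=True)
out, _ = rnn(packed)
# total_length forces the padded output back to the full length, so that
# nn.DataParallel can gather equally sized chunks from every GPU
out, _ = pad_packed_sequence(out, batch_first=True, total_length=10)
print(out.shape)                          # torch.Size([3, 10, 7])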
st102459 | Traceback (most recent call last):
File "main.py", line 167, in <module>
test(epoch)
File "main.py", line 142, in test
_, predicted = predict(outputs).max(1)
File "main.py", line 43, in predict
preds = dist.sample()
File "/u/sahariac/.local/lib/python3.6/site-packages/torch/distributions/distribution.py", line 97, in sample
return self.rsample(sample_shape)
File "/u/sahariac/.local/lib/python3.6/site-packages/torch/distributions/dirichlet.py", line 68, in rsample
return _Dirichlet.apply(concentration)
File "/u/sahariac/.local/lib/python3.6/site-packages/torch/distributions/dirichlet.py", line 28, in forward
x = _dirichlet_sample_nograd(concentration)
File "/u/sahariac/.local/lib/python3.6/site-packages/torch/distributions/dirichlet.py", line 12, in _dirichlet_sample_nograd
probs = torch._standard_gamma(concentration)
RuntimeError: _standard_gamma is not implemented for type torch.cuda.FloatTensor |
st102460 | Which version of PyTorch are you running on?
For PyTorch version: 0.4.1 when I run the following I get no error
>>> import torch.distributions.dirichlet as d
>>> m = d.Dirichlet(torch.tensor([0.5, 0.5], device='cuda'))
>>> m.sample()
tensor([0.5290, 0.4710], device='cuda:0')
Could you provide some code that causes this error?
If you are on 0.4 you can try upgrading to 0.4.1 and see if that fixes your error.
You can do that by running conda update pytorch -c pytorch |
st102461 | Hello, I want to ask whether there is a dilated convolution operation in PyTorch? |
st102462 | you can set dilation parameter in nn.Conv2d, see
https://pytorch.org/docs/master/nn.html?highlight=conv#torch.nn.functional.conv2d 18 |
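For illustration, a small sketch of my own (made-up channel sizes):
import torch
import torch.nn as nn

# a 3x3 kernel with dilation=2 covers a 5x5 receptive field
conv = nn.Conv2d(16, 32, kernel_size=3, padding=2, dilation=2)
x = torch.randn(1, 16, 28, 28)
print(conv(x).shape)   # torch.Size([1, 32, 28, 28]); padding=2 keeps the spatial size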
st102463 | In fact, I am trying to understand how and where a function like torch.sum() is implemented. I guess the code is in the C folders somewhere; I was looking in ATen, but had no way to find it.
For instance, I would like to add a simple thing: a reversed cumsum operation. It would be a great simplification for computing the returns in reinforcement learning, instead of:
returns = rewards.flip(dims).cumsum(dim).flip(dims)
This is only a simple example among hundreds.
Is there a generic way to contribute to these basic functions? |
st102464 | I think that depends how and where you plan on implementing it.
If it’s in python, just importing it in the __init__.py 5 should do the trick.
If it’s in cpp implemented within pytorch (not aten), then you need to implement it in torch/csrc in the correct file and then it can be added to the torch.xx python namespace by adding it here 4. Note that the function added there will receive PyObject. You can use it to unpack your data and then send them to a C function somewhere else in torch/csrc.
If it's in cpp inside ATen, I'm less sure. From what I remember they are bound through _C._VariableFunctions. This is created in a templated file here 1 that contains some special functions. The template is filled by functions in this 2 file. The ATen functions are loaded from a declarations yaml file read here 4. This file is in the ATen repo here 2. From there, the cname for each function should be implemented in the libraries for all backends. The ones not implemented won't be available. For example, the isSetTo method is implemented in TH, THC and THD and referenced in the declaration file as you can see in this search 3.
Note that for ATen, I followed along as I wrote this up, so there might be mistakes in it, but that should give you enough pointers to do what you want. Namely, adding your method to the Declarations.cwrap file should create the method in torch (if it actually returns something). Then you just need to add the code in the backend you are interested in. |
st102465 | I’ll chime in with a bit of additional detail:
I think aten/src/ATen/native/native_functions.yaml is probably central and cumsum in particular is in the same directory in ReduceOps.cpp - but only a few wrappers for dealing with the accumulation type. The definition itself is - as @albanD said - in TH/THC. When you start a feature request, it might be worth discussing with the devs if that might be moved to native in that context, too.
I wrote up a short guide about how to go from Python function to the C++ implementation 25, maybe it can be useful as a start (it focuses on native, though, but I think the derivative bits may be relevant to cumsum). I’ll try to flesh out some bits about functions implemented directly in CPU and CUDA.
Best regards
Thomas |
st102466 | Ok, thank you @albanD and @tom. Now I understand better how it’s done.
And the short guide is exactly what I was looking for. |
st102467 | Hey all,
I’m quite new to PyTorch and am currently trying to implement a CNN-based classifier for some multivariate (9 dimensions/axes) timeseries data. I intend to use 1D convolutions and Max pools in the network.
My Dataset class returns each sample (which reflects 125 timesteps) as a 9 x 125 tensor. My (toy) CNN is constructed as described below:
self.conv1 = nn.Conv1d(9, 18, kernel_size=3) #9 input channels, 18 output channels
self.conv2 = nn.Conv1d(18, 36, kernel_size=3) #18 input channels from previous Conv. layer, 36 out
self.conv2_drop = nn.Dropout2d() #dropout
self.fc1 = nn.Linear(36, 72) #Fully-connected classifier layer
self.fc2 = nn.Linear(72, 19) #Fully-connected classifier layer
And the forward method of the CNN is described as follows:
x = F.relu(F.max_pool1d(self.conv1(x), 2))
x = F.relu(F.max_pool1d(self.conv2_drop(self.conv2(x)),2))
#point A
x = x.view(-1, 36)
#point B
x = self.fc1(x)
x = F.relu(x)
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)
The dataloader uses a batch size of 64. At point A in the code above, the shape of x is [64,36,29] - as I understand it, 64 samples (in the batch), each with 36 channels and a length of 29 timesteps. I’d now like to reshape this tensor appropriately for use by the Fully-Connected layer(s) - the code above is how the sample code I’m following (the MNIST example) did it. However, it seems that x.view() is not maintaining the batches, instead folding the tensor into this shape: [1856,36] which is definitely not what I want - or what I think I should be getting. Naturally, I get some error:
ValueError: Expected input batch_size (1856) to match target batch_size (64)
But I do not expect the input batch_size to be anything other than 64 - I believe this is happening because I’m using x.view() wrongly.
Any help with this would be highly appreciated. I’m quite new to PyTorch and don’t know what to do about it.
Thanks in advance |
st102468 | Here is what you might need.
gist.github.com
https://gist.github.com/InnovArul/7fd6898195c2951fb39e5a8398c2f588#file-1d_conv_experiment-py-L11-L21 233
1d_conv_experiment.py
import torch
import torch.nn as nn
import torch.nn.functional as F
class model(nn.Module):
def __init__(self):
super(model, self).__init__()
self.conv1 = nn.Conv1d(9, 18, kernel_size=3) #9 input channels, 18 output channels
self.conv2 = nn.Conv1d(18, 36, kernel_size=3) #18 input channels from previous Conv. layer, 36 out
self.conv2_drop = nn.Dropout2d() #dropout
Basically, the fc1 needs to take input of size 1044 and x.view should be applied to preserve batches. |
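To make that concrete, a minimal sketch of the change (the numbers follow from the shapes discussed above, so treat them as assumptions about this exact architecture):
# in __init__: 36 channels * 29 timesteps = 1044 flattened features
self.fc1 = nn.Linear(36 * 29, 72)

# in forward, at point A x has shape [64, 36, 29]
x = x.view(x.size(0), -1)  # -> [64, 1044], the batch dimension is preserved
x = F.relu(self.fc1(x))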
st102469 | Hello Arul,
Thank you very much - that literally worked like a charm
I’ll edit the post title to reflect that this has been solved.
Thanks again. I really appreciate the help. |
st102470 | Hi there!
I've just finished a very exciting project, and it would be cool if I could showcase it.
What is the best (in terms of ease of use) FREE cloud platform to deploy my deep learning PyTorch project on? I need to process an image the way how-old.net 2 does.
It’s totally possible to run the model on a CPU (low footprint-it’s a resnet18).
A bonus question: what should I learn when it comes to the server-side/client-side handling of an image a user has sent? (I've previously written a simple server using JAX-RS, but there the client posted a JSON text file; it might be very different with an image.) |
st102471 | Is there a simple way to calculate the output size of multiple convolution and pooling layers? I mean the automatical way. Suppose I have 10 such layers, then when I decide to change the first layer, I would need to recalculate the output size of input images fed through the whole 10 layers which is too troublesome.
I suggest the convolution and pooling modules should have a method like: conv.output_size(input_size) to make things easier. |
st102472 | Solved by ptrblck in post #2
You can implement such a method in your model class using random input. It should be something like this:
def calculate_size(self, input_size):
x = torch.randn(input_size).unsqueeze(0)
output = self.layers(x)
return output.size()[1:]
This will use one forward pass to calculate the outp… |
st102473 | You can implement such a method in your model class using random input. It should be something like this:
def calculate_size(self, input_size):
x = torch.randn(input_size).unsqueeze(0)
output = self.layers(x)
return output.size()[1:]
This will use one forward pass to calculate the output shape. You should of course use the layers you need the output shape for. |
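A possible usage sketch (assuming the conv/pool stack is wrapped in self.layers; the layer sizes are just an example):
import torch
import torch.nn as nn
import numpy as np

class Net(nn.Module):
    def __init__(self, input_size=(3, 64, 64), num_classes=10):
        super(Net, self).__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3), nn.MaxPool2d(2))
        # one dummy forward pass gives the feature shape after all conv/pool layers
        feat_shape = self.calculate_size(input_size)
        self.fc = nn.Linear(int(np.prod(feat_shape)), num_classes)

    def calculate_size(self, input_size):
        x = torch.randn(input_size).unsqueeze(0)
        return self.layers(x).size()[1:]

    def forward(self, x):
        x = self.layers(x)
        x = x.view(x.size(0), -1)
        return self.fc(x)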
st102474 | I am trying to assign pre-trained weights to my vgg-face model using the following code:-
netClassifier = Vgg_face()
weight_vggFace = torch.load(weights_path)
netClassifier.load_state_dict(weight_vggFace)
But I'm facing the following error, for which I could not find a proper solution. The error is as follows:
While copying the parameter named "fc8.weight", whose dimensions in the model are torch.Size([2622, 4096, 1, 1]) and whose dimensions in the checkpoint are torch.Size([2622, 4096]).
Could someone kindly guide me on how to go about solving it? |
st102475 | Solved by ntoan in post #2
May I ask how did you define your “fc8”? If it’s just torch.nn.Linear(2622, 4096) then there’s no way your weight would have 4 dimensions. |
st102476 | May I ask how did you define your “fc8”? If it’s just torch.nn.Linear(2622, 4096) then there’s no way your weight would have 4 dimensions. |
st102477 | You are right!! I just copy pasted the 1st line of code and changed only the filter sizes, strides, and paddings. Just skipped out of my mind!.
Thank you very very much for pointing this out. |
st102478 | Hi,
we know
class mymod(nn.Module):
    def __init__(self):
        self.a = nn.Parameter(torch.randn(3))
will add self.a to self._parameters by calling the __setattr__ method.
My question is: why does
class mymod(nn.Module):
    def __init__(self):
        self.a = [nn.Parameter(torch.randn(3)) for i in range(3)]
not add the parameters automatically? |
st102479 | This is because of the __setattr__ method of the module:
def __setattr__(self, name, value):
def remove_from(*dicts):
for d in dicts:
if name in d:
del d[name]
params = self.__dict__.get('_parameters')
if isinstance(value, Parameter):
if params is None:
raise AttributeError(
"cannot assign parameters before Module.__init__() call")
remove_from(self.__dict__, self._buffers, self._modules)
self.register_parameter(name, value)
elif params is not None and name in params:
if value is not None:
raise TypeError("cannot assign '{}' as parameter '{}' "
"(torch.nn.Parameter or None expected)"
.format(torch.typename(value), name))
self.register_parameter(name, value)
There is a type check, and a list is a different type than nn.Parameter. For things like this you have ParameterList 13 |
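A minimal sketch of the ParameterList alternative:
import torch
import torch.nn as nn

class MyMod(nn.Module):
    def __init__(self):
        super(MyMod, self).__init__()
        # every entry of a ParameterList is registered, unlike a plain Python list
        self.a = nn.ParameterList([nn.Parameter(torch.randn(3, 3)) for _ in range(3)])

m = MyMod()
print(len(list(m.parameters())))  # 3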
st102480 | Hi
I found that version v0.4.1 has been released and there is a check in torch/nn/parallel/distributed.py:
if dist._backend not in (dist.dist_backend.NCCL, dist.dist_backend.GLOO):
    raise ValueError('Invalid backend, only NCCL and GLOO backends are supported by DistributedDataParallel')
Does this mean MPI is not supported anymore? |
st102481 | This is for the DistributedDataParallel module which supports NCCL and GLOO backends and we have DistributedDataParallelCPU which supports MPI in addition to these two. |
st102482 | Hi Deepali,
@Deepali Thanks for your reply. Does DistributedDataParallelCPU only work for modules with all variables on the CPU, as its name implies? Or can we also use it for a network that resides on the GPU? I'm asking because in the previous version DistributedDataParallel also worked for a GPU network with the MPI backend. |
st102483 | I am following the sequence tagging example here 2 and noticed that the inner loop really affect performance. Is it possible to avoid the inner loop, i.e. going through each sequence, and run the model with entire minibatch? Any examples like that. Please let me know. |
st102484 | Every loop i will do forward and then backward only once, why still the Error trigger:
Traceback (most recent call last):
File "toy_layer.py", line 635, in <module>
main()
File "toy_layer.py", line 557, in main
g_grads = torch.autograd.grad(l2, g_vars)
File "/hdd1/liangqu/conda/lib/python3.6/site-packages/torch/autograd/__init__.py", line 144, in grad
inputs, allow_unused)
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
Here is my source code:
import torch, math
import numpy as np
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
from torchvision import datasets, transforms
import visdom
from torch.nn import functional as F
import gc
class Generator:
def __init__(self, z_dim, c_dim, device):
# according to Pytorch w/b format, w = [out_dim, in_dim]
# b = [out_dim]
self.vars = [
# [z+c, 128]
torch.ones(128, z_dim + c_dim, requires_grad=True, device=device),
torch.zeros(128, requires_grad=True, device=device),
torch.rand(128, requires_grad=True, device=device),
torch.zeros(128, requires_grad=True, device=device),
# [128, 256]
torch.ones(256, 128, requires_grad=True, device=device),
torch.zeros(256, requires_grad=True, device=device),
torch.rand(256, requires_grad=True, device=device),
torch.zeros(256, requires_grad=True, device=device),
# [256, 512]
torch.ones(512, 256, requires_grad=True, device=device),
torch.zeros(512, requires_grad=True, device=device),
torch.rand(512, requires_grad=True, device=device),
torch.zeros(512, requires_grad=True, device=device),
# [512, 1024]
torch.ones(1024, 512, requires_grad=True, device=device),
torch.zeros(1024, requires_grad=True, device=device),
torch.rand(1024, requires_grad=True, device=device),
torch.zeros(1024, requires_grad=True, device=device),
# [1024, 28*28]
torch.ones(28 * 28, 1024, requires_grad=True, device=device),
torch.zeros(28 * 28, requires_grad=True, device=device),
torch.rand(28 * 28, requires_grad=True, device=device),
torch.zeros(28 * 28, requires_grad=True, device=device),
]
# moving mean and variance for normalization
# no gradients needed.
self.bns = [
torch.zeros(128, device=device),
torch.ones(128, device=device),
torch.zeros(256, device=device),
torch.ones(256, device=device),
torch.zeros(512, device=device),
torch.ones(512, device=device),
torch.zeros(1024, device=device),
torch.ones(1024, device=device),
torch.zeros(28 * 28, device=device),
torch.ones(28 * 28, device=device),
]
def init_weight(self, vars=None):
"""
init vars and self.bns
:param vars:
:return:
"""
if vars is None:
vars = self.vars
vars_idx = 0
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
weight, bias = vars[vars_idx + 2], vars[vars_idx + 3]
weight.uniform_()
bias.zero_()
vars_idx += 4
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
weight, bias = vars[vars_idx + 2], vars[vars_idx + 3]
weight.uniform_()
bias.zero_()
vars_idx += 4
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
weight, bias = vars[vars_idx + 2], vars[vars_idx + 3]
weight.uniform_()
bias.zero_()
vars_idx += 4
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
weight, bias = vars[vars_idx + 2], vars[vars_idx + 3]
weight.uniform_()
bias.zero_()
vars_idx += 4
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
weight, bias = vars[vars_idx + 2], vars[vars_idx + 3]
weight.uniform_()
bias.zero_()
vars_idx += 4
# zero mean and one variance.
for i in range(len(self.bns) // 2 ):
# mean
self.bns[i].zero_()
# variance
self.bns[2 * i + 1].fill_(1)
def forward(self, z, c, vars):
"""
:param z:
:param c:
:param vars:
:return:
"""
vars_idx, bns_idx = 0, 0
# [b, z_dim] + [b, c_dim] => [b, new_dim]
x = torch.cat([z, c], dim=1)
# [b, z+c] => [b, 128]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
x = F.batch_norm(x, self.bns[bns_idx + 0], self.bns[bns_idx + 1],
weight=vars[vars_idx + 2], bias= vars[vars_idx + 3],
training=True, momentum=0.1)
x = F.leaky_relu(x, 0.2)
vars_idx += 4
bns_idx += 2
# [b, 128] => [b, 256]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
x = F.batch_norm(x, self.bns[bns_idx + 0], self.bns[bns_idx + 1],
weight=vars[vars_idx + 2], bias= vars[vars_idx + 3],
training=True, momentum=0.1)
x = F.leaky_relu(x, 0.2)
vars_idx += 4
bns_idx += 2
# [b, 256] => [b, 512]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
x = F.batch_norm(x, self.bns[bns_idx + 0], self.bns[bns_idx + 1],
weight=vars[vars_idx + 2], bias= vars[vars_idx + 3],
training=True, momentum=0.1)
x = F.leaky_relu(x, 0.2)
vars_idx += 4
bns_idx += 2
# [b, 512] => [b, 1024]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
x = F.batch_norm(x, self.bns[bns_idx + 0], self.bns[bns_idx + 1],
weight=vars[vars_idx + 2], bias= vars[vars_idx + 3],
training=True, momentum=0.1)
x = F.leaky_relu(x, 0.2)
vars_idx += 4
bns_idx += 2
# [b, 1024] => [b, 28*28]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
x = F.batch_norm(x, self.bns[bns_idx + 0], self.bns[bns_idx + 1],
weight=vars[vars_idx + 2], bias= vars[vars_idx + 3],
training=True, momentum=0.1)
x = F.tanh(x)
vars_idx += 4
bns_idx += 2
# reshape
x = x.view(-1, 1, 28, 28)
return x
class Discriminator:
def __init__(self, n_class, device):
# according to Pytorch w/b format, w = [out_dim, in_dim]
# b = [out_dim]
self.vars = [
# [28*28, 512]
torch.ones(512, 28 * 28, requires_grad=True, device=device),
torch.zeros(512, requires_grad=True, device=device),
# [512, 256]
torch.ones(256, 512, requires_grad=True, device=device),
torch.zeros(256, requires_grad=True, device=device),
# [256, n]
torch.ones(n_class, 256, requires_grad=True, device=device),
torch.zeros(n_class, requires_grad=True, device=device)
]
def init_weight(self, vars=None):
if vars is None:
vars = self.vars
vars_idx = 0
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
vars_idx += 2
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
vars_idx += 2
weight, bias = vars[vars_idx], vars[vars_idx + 1]
stdv = 1. / math.sqrt(weight.size(1))
weight.uniform_(-stdv, stdv)
bias.uniform_(-stdv, stdv)
# nn.init.xavier_uniform_(weight)
# bias.data.fill_(0.01)
vars_idx += 2
def forward(self, x, vars):
"""
:param x: [b, 1, 28, 28]
:param vars:
:return:
"""
vars_idx = 0
# [b, 1/2, 28, 28]
x = x.view(x.size(0), -1)
# [b, 28*28] => [b, 512]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
# x = self.bn1(x)
x = F.leaky_relu(x, 0.2)
vars_idx += 2
# [b, 512] => [b, 256]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
# x = self.bn2(x)
x = F.leaky_relu(x, 0.2)
vars_idx += 2
# [b, 256] => [b, n_class]
x = F.linear(x, vars[vars_idx], vars[vars_idx + 1])
# x = self.bn3(x)
# here follow by CrossEntroyLoss
# x = F.leaky_relu(x, 0.2)
x = F.sigmoid(x)
vars_idx += 2
return x
def main():
from mnist_class import MNIST
lr_d = 5
lr_g = 2e-4
imagesz = 28
batchsz_d = 100
batchsz_g = 100
z_dim = 100
n_class = 10
device = torch.device('cuda')
vis = visdom.Visdom()
transform = transforms.Compose([transforms.Resize([imagesz, imagesz]),
transforms.ToTensor(),
transforms.Normalize(mean=(0.5,), std=(0.5,))])
# use self defined MNIST
mnist = MNIST('data/mnist', class_idx=range(n_class), train=True, download=True, transform=transform)
db = DataLoader(mnist, batch_size=batchsz_g, shuffle=True)
db_iter = iter(db)
c_dist = torch.distributions.categorical.Categorical(probs=torch.tensor([1/n_class] * n_class))
g = Generator(z_dim, n_class, device)
d = Discriminator(n_class, device)
# init is very important in case gradients==nan
with torch.no_grad():
g.init_weight()
d.init_weight()
g_vars, d_vars = g.vars, d.vars
g_optim = optim.Adam(g_vars, lr=lr_g, betas=(0.5, 0.999))
# d_optim = optim.Adam(d_vars, lr=2e-4, betas=(0.5, 0.999))
# when using MSELoss, append F.sigmoid() at the end of D.
# criteon = nn.CrossEntropyLoss(size_average=True).to(device)
criteon = nn.MSELoss(size_average=True).to(device=device)
criteon2 = nn.MSELoss(size_average=True).to(device=device)
for epoch in range(1000000):
# [b, z]
z = torch.rand(batchsz_d, z_dim).to(device)
# [b] => [b, 1]
y_hat = c_dist.sample((batchsz_d,) ).unsqueeze(1)
# [b, 1] => [b, n_class]
y_hat_oh = torch.zeros(batchsz_d, n_class).scatter_(1, y_hat, 1)
# [b, 1] => [b]
y_hat, y_hat_oh = y_hat.squeeze(1).to(device), y_hat_oh.to(device)
# print(y_hat, y_hat_oh)
# [b, z+c] => [b, 1, 28, 28]
x_hat = g.forward(z, y_hat_oh, g_vars)
# 1. update D nets
losses_d = []
for i in range(10):
pred = d.forward(x_hat, d_vars)
l1 = criteon(pred, y_hat_oh)
# MUST create_graph=True !!! to support 2nd derivate.
d_grads = torch.autograd.grad(l1, d_vars, create_graph=True)
d_vars = list(map(lambda p: p[0] - lr_d * p[1], zip(d_vars, d_grads)))
losses_d += [l1.item()]
# 2. update G nets
# [b, 1, 28, 28], [b]
try:
x, y = next(db_iter)
except StopIteration as err:
db_iter = iter(db)
x, y = next(db_iter)
y_oh = torch.zeros(y.size(0), n_class).scatter_(1, y.unsqueeze(1).long(), 1)
x, y, y_oh = x.to(device), y.to(device), y_oh.to(device)
pred = d.forward(x, d_vars)
l2 = criteon2(pred, y_oh)
g_grads = torch.autograd.grad(l2, g_vars)
# g_vars = list(map(lambda p: p[0] - lr_d * p[1], zip(g_vars, g_grads)))
with torch.no_grad():
for p, grad in zip(g_vars, g_grads):
if p.grad is not None:
p.grad.copy_(grad.detach())
else:
p.grad = grad.detach()
# g_optim.zero_grad()
# l2.backward(retain_graph=True)
g_optim.step()
# # [b, n_class] => [b]
# pred = torch.argmax(pred, dim=1)
# correct = torch.eq(pred, y).sum().float()
# acc = correct.item() / np.prod(y.size()) # NOT y.sum()
# print('>>>>memory check')
# total_tensor, total_mem = 0, 0
# for obj in gc.get_objects():
# if torch.is_tensor(obj):
# total_tensor += 1
# total_mem += np.prod(obj.size())
# print(obj.type(), obj.size())
# print('<<<<', 'tensor:', total_tensor, 'mem:', total_mem//1024//1024)
if __name__ == '__main__':
import argparse
args = argparse.ArgumentParser()
args.add_argument('-g', action='store_true', help='use gan to train')
args = args.parse_args()
main() |
st102485 | I am currently working on patch based super-resolution. Most of the papers divide an image into smaller patches and then use the patches as input to the models. I completed the coding to create patches using custom dataloader. The code is given below:
import torch.utils.data as data
from torchvision.transforms import CenterCrop, ToTensor, Compose, ToPILImage, Resize, RandomHorizontalFlip, RandomVerticalFlip
from os import listdir
from os.path import join
from PIL import Image
import random
import os
import numpy as np
import torch
def is_image_file(filename):
return any(filename.endswith(extension) for extension in [".png", ".jpg", ".jpeg", ".bmp"])
class TrainDatasetFromFolder(data.Dataset):
def __init__(self, dataset_dir, patch_size, is_gray, stride):
super(TrainDatasetFromFolder, self).__init__()
self.imageHrfilenames = []
self.imageHrfilenames.extend(join(dataset_dir, x)
for x in sorted(listdir(dataset_dir)) if is_image_file(x))
self.is_gray = is_gray
self.patchSize = patch_size
self.stride = stride
def _load_file(self, index):
filename = self.imageHrfilenames[index]
hr = Image.open(self.imageHrfilenames[index])
downsizes = (1, 0.7, 0.45)
downsize = 2
w_ = int(hr.width * downsizes[downsize])
h_ = int(hr.height * downsizes[downsize])
aug = Compose([Resize([h_, w_], interpolation=Image.BICUBIC),
RandomHorizontalFlip(),
RandomVerticalFlip()])
hr = aug(hr)
rv = random.randint(0, 4)
hr = hr.rotate(90*rv, expand=1)
filename = os.path.splitext(os.path.split(filename)[-1])[0]
return hr, filename
def _patching(self, img):
img = ToTensor()(img)
LR_ = Compose([ToPILImage(), Resize(self.patchSize//2, interpolation=Image.BICUBIC), ToTensor()])
HR_p, LR_p = [], []
for i in range(0, img.shape[1] - self.patchSize, self.stride):
for j in range(0, img.shape[2] - self.patchSize, self.stride):
temp = img[:, i:i + self.patchSize, j:j + self.patchSize]
HR_p += [temp]
LR_p += [LR_(temp)]
return torch.stack(LR_p),torch.stack(HR_p)
def __getitem__(self, index):
HR_, filename = self._load_file(index)
LR_p, HR_p = self._patching(HR_)
return LR_p, HR_p
def __len__(self):
return len(self.imageHrfilenames)
Suppose the batch size is 1; it takes an image and gives an output of size [x, 3, patchsize, patchsize]. When the batch size is 2, I will have two different outputs of size [x, 3, patchsize, patchsize] (for example, image 1 may give [50, 3, patchsize, patchsize] and image 2 may give [75, 3, patchsize, patchsize]). To handle this, a custom collate function was required. The collate function is given below:
def my_collate(batch):
data = torch.cat([item[0] for item in batch],dim = 0)
target = torch.cat([item[1] for item in batch],dim = 0)
return [data, target]
This collate function concatenates along x (from the above example, I finally get [125, 3, patchsize, patchsize]).
Next, I need to train the model using a minibatch size of, say, 25. Can anyone help me build a sampler which I can use to directly get an output of size [25, 3, patchsize, patchsize] from the dataloader? I saw BatchSampler recently. Any idea how to incorporate that?
Thank you. |
st102486 | Where do I include this? I used a for loop to split it. However, I was hoping for a custom dataloader which gives me 25 patches per batch by taking necessary number of images. |
st102487 | I’m new to pytorch. I see lots of tutorials that focus on how to use the API to train, but my question is, once I have a trained model, what is the definitive way to execute it on some data, such as picture classification? |
st102488 | Do you have a specific use case in mind?
The easiest way would be to store the trained model and load it in another script for inference.
Have a look at the Serialization semantics 437 for more information.
Your inference code might load the data directly from your hard drive / network storage or you could create a web app using e.g. Flask and feed the data to your model.
You can also deploy your model to Caffe2 using ONNX. Here 114 is a nice tutorial. |
st102489 | Well, right now I’m looking at Module.eval as being how it’s done. But basically, any scenario where I would feed some input to a trained model and get a result - any result, any input - back should be suitable enough to teach me where or how the API is used. I’m not so sure there’s more than one method of using a trained model at all, but in all the example code I’ve seen so far, I don’t see eval, and I see a lot of stuff about training. |
st102490 | For inference / evaluation you should set model.eval() to set all nn.Modules to evaluation.
This will change the behavior of some modules, e.g. nn.Dropout which won’t drop any features anymore and nn.BatchNorm which will use the running estimates of mean and std in the default settings.
That’s basically how you would perform the inference for a classification of images:
# Train your model
...
# After training
model.eval()
data = torch.randn(1, 3, 24, 24) # Load your data here, this is just dummy data
output = model(data)
prediction = torch.argmax(output) |
st102491 | That sequence right there was what I was looking for! I’m pretty sure that argmax isn’t the only way to represent the result of some model evaluation in the whole machine learning ecosystem, but all I need to know is what the model is doing for graph traversal on the inside.
I'm familiar with the serialization semantics and I've known about them for a while. I'm sure this will be considered a separate question - I see that there is a C++ version of each of load and save, torch::load and torch::save. But the C++ and Python functions don't use the same format, and the Python version pickles the model so that it's not easily accessible from C++. 1) Is there an easy way I can load and traverse over the model in C++, and 2) will there be any future effort to try and use something a little more portable, like protobuf, for saving models? |
st102492 | Sure, this was just an example for a typical classification use case.
Production-ready PyTorch is planned to be released later this summer / autumn.
Have a look at the road to PyTorch1.0 113 for more information. While it’s not released, you could stick to ONNX and export the models in Caffe2. |
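If you go the ONNX route, a minimal export sketch looks roughly like this (the input shape is an assumption about your model):
import torch

model.eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, 'model.onnx', verbose=True)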
st102493 | The example is very good, and I’m happy with it provided I can dig that traversal I need out. |
st102494 | Does the model traversal get performed at model(data) or argmax?
Also, what is the instance type of model? Because I can’t just do torch.load(xyz), that will give me an OrderedDict. |
st102495 | The forward pass will be performed at model(data).
model itself is an instance of nn.Module (source code 79). |
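Regarding the OrderedDict: torch.load on a saved state_dict only gives you the parameters, so the usual pattern (MyModel and the file name are placeholders) is:
model = MyModel()                      # construct the architecture first
state_dict = torch.load('model.pth')   # this is the OrderedDict of tensors
model.load_state_dict(state_dict)
model.eval()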
st102496 | Hey wait, in this example here 40, you can see at line 207 there is a construction of the module sub-class RNN into the variable rnn. Then, at line 221, there is a call of rnn as though it is a function. What does line 234 resolve to?
#first this:
rnn = RNN(n_letters, n_hidden, n_categories)
#then it’s this:
output, next_hidden = rnn(input, hidden) |
st102497 | If call of a model performs it’s forward path.
In your example you pass input and hidden to your model, the forward pass is performed, and finally you’ll get output and next_hidden as the result. |
st102498 | No, I mean what function on the object RNN does the line ‘rnn(input, hidden)’ correspond to?
For example, constructors correspond to init, len corresponds to len, ect. But it looks like I’ve already constructed the object with RNN, how am I calling an object as though it is a function? |
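For what it's worth, calling an instance like a function maps to the object's __call__ method in Python, and nn.Module implements __call__ so that it runs any registered hooks and then dispatches to your forward. A tiny sketch of the plain-Python mechanism:
class Greeter:
    def __call__(self, name):
        return 'hello ' + name

g = Greeter()
print(g('world'))  # g(...) invokes Greeter.__call__

# nn.Module works the same way: rnn(input, hidden) calls nn.Module.__call__,
# which in turn calls rnn.forward(input, hidden)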
st102499 | I’m running into the following error while using DistributedDataParallel, using code very similar to the code here 1:
Traceback (most recent call last):
File "/root/miniconda2/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/root/miniconda2/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/distributed.py", line 476, in _reduction_thread_fn
_process_batch() # just to have a clear scope
File "/root/miniconda2/lib/python2.7/site-packages/torch/nn/parallel/distributed.py", line 460, in _process_batch
nccl.reduce(dev_coalesced, root=0, streams=nccl_streams)
File "/root/miniconda2/lib/python2.7/site-packages/torch/cuda/nccl.py", line 51, in reduce
torch._C._nccl_reduce(inputs, outputs, root, op, streams, comms)
RuntimeError: NCCL Error 2: system error
This is with PyTorch v0.4.0, built from source. CUDA version is 9.0, cuDNN version is 7.0.5.
I don’t run into an exception when I use the PyTorch available on Conda.
Any help much appreciated |