st115968
|
Hi, which version of PyTorch do you use? I’m using 0.2.0 and there is no error with that code.
|
st115969
|
How do I check the PyTorch version? I downloaded the most recent one from their website.
|
st115970
|
For example, pip list or conda list.
And the reason you cannot backpropagate is that you are using a Tensor. You need to use a Variable instead to back-propagate.
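For example, a minimal sketch (names made up), assuming the 0.2-era Variable API:
import torch
from torch.autograd import Variable

x = Variable(torch.randn(3, 3), requires_grad=True)  # wrap the tensor in a Variable
y = (x * x).sum()
y.backward()   # works because x is a Variable that requires grad
print(x.grad)  # gradient is 2 * x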
|
st115971
|
Since PyTorch 0.1.12 (if I am not wrong), they have included a torch.__version__ attribute that helps you find out which version of PyTorch you are using.
Run torch.__version__ on a Python interpreter after importing PyTorch to find out.
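For example, from a Python prompt:
import torch
print(torch.__version__)  # e.g. '0.2.0'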
|
st115972
|
@mckowski any reason you don't want to make gram a Variable? That would then fix the issue, I think.
(Edit: by the way, if you have requires_grad=False, it won't backprop through gram, in case this was a concern.)
|
st115973
|
I still can't figure out how to make this work yet :'( It is depressing.
When I use the expand function, the forward pass works perfectly; however, the program crashes in the backward pass:
multiplied_mat = CNN_Result.clone() # Clone for each GRU iteration
expanded_alpha_mat = alpha_mat.expand(current_tensor_shape)
multiplied_mat = multiplied_mat * expanded_alpha_mat
alpha_mat is {batchsize} x 16 x 32
multiplied_mat is {batchsize} x 16 x 32 x 128 (this is current_tensor_shape)
And when I run the code, the program crashes with this error:
Traceback (most recent call last):
  File "Main.py", line 40, in <module>
    testnet.train(epoch + 1)
  File "E:\Workbench\DatasetReader\new\LVTN_MER-master\Network\CNNNetwork.py", line 172, in train
    loss.backward()
  File "E:\Anaconda\lib\site-packages\torch\autograd\variable.py", line 144, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
  File "E:\Anaconda\lib\site-packages\torch\autograd\function.py", line 90, in apply
    return self._forward_cls.backward(self, *args)
  File "E:\Anaconda\lib\site-packages\torch\autograd\_functions\tensor.py", line 95, in backward
    return grad_output.contiguous().view(ctx.old_size), None
  File "E:\Anaconda\lib\site-packages\torch\autograd\variable.py", line 468, in view
    return View.apply(self, sizes)
  File "E:\Anaconda\lib\site-packages\torch\autograd\_functions\tensor.py", line 89, in forward
    result = i.view(*sizes)
RuntimeError: size '[1 x 512]' is invalid for input with 65536 elements at D:\Downloads\pytorch-master-1\torch\lib\TH\THStorage.c:59
The size [1 x 512] (in the last line) comes from the code:
alpha_mat = self.alpha_softmax(alpha_mat.view(current_tensor_shape[0], 512)).view(current_tensor_shape[0], 16, 32)
Thank you in advance :’(
|
st115974
|
In the docs:
output_padding (int or tuple, optional): Zero-padding added to one side of the output.
But I don’t really understand what this means.
Can someone explain this with some examples?
|
st115975
|
think of ConvTranspose as the opposite operation of Conv and output_padding of ConvTranspose is input padding of Conv
|
st115976
|
smth:
think of ConvTranspose as the opposite operation of Conv and output_padding of ConvTranspose is input padding of Conv
I’m curious: if output_padding of ConvTranspose is input padding of Conv, why is it single-sided?
I.e.,
# input padding = 1 in Conv:
[[ 0 0 0 ]
[ 0 1 0 ]
[ 0 0 0 ]]
# but output padding = 1 in ConvTransposed:
[[ 1 0 ]
[ 0 0 ]]
|
st115977
|
Output padding is asymmetric; it's only applied on the right and the bottom of the image.
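A minimal sketch of the effect (sizes chosen arbitrarily; for a given stride, the output height/width is (H_in - 1)*stride - 2*padding + kernel_size + output_padding):
import torch
import torch.nn as nn
from torch.autograd import Variable

x = Variable(torch.randn(1, 1, 4, 4))
deconv0 = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1, output_padding=0)
deconv1 = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1, output_padding=1)
print(deconv0(x).size())  # (1, 1, 7, 7)
print(deconv1(x).size())  # (1, 1, 8, 8) -- the extra row/column is added at the bottom/right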
|
st115978
|
This is the definition of RFCN; the RPN and VGG16 inside it are also nn.Module subclasses.
class RFCN(nn.Module):
    n_classes = 21
    classes = np.asarray(['__background__',
                          'aeroplane', 'bicycle', 'bird', 'boat',
                          'bottle', 'bus', 'car', 'cat', 'chair',
                          'cow', 'diningtable', 'dog', 'horse',
                          'motorbike', 'person', 'pottedplant',
                          'sheep', 'sofa', 'train', 'tvmonitor'])
    PIXEL_MEANS = np.array([[[102.9801, 115.9465, 122.7717]]])
    SCALES = (600,)
    MAX_SIZE = 1000

    def __init__(self, classes=None, debug=False):
        super(RFCN, self).__init__()
        if classes is not None:
            self.classes = classes
            self.n_classes = len(classes)
        self.rpn = RPN()
        # self.psroi_pool = PSRoIPool(7, 7, 1.0/16, 7, 15)  # This is for test
        self.psroi_pool_cls = PSRoIPool(7, 7, 1.0/16, 7, self.n_classes)
        self.psroi_pool_loc = PSRoIPool(7, 7, 1.0/16, 7, 8)
        self.new_conv = Conv2d(512, 1024, 1, same_padding=False)
        self.rfcn_score = Conv2d(1024, 7*7*8, 1, 1, bn=False)
        self.rfcn_bbox = Conv2d(1024, 7*7*self.n_classes, 1, 1, bn=False)
        self.bbox_pred = nn.AvgPool2d((7, 7), stride=(7, 7))
        self.cls_score = nn.AvgPool2d((7, 7), stride=(7, 7))
        # loss
        self.cross_entropy = None
        self.loss_box = None
        # for log
        self.debug = debug

    @property
    def loss(self):
        # print self.cross_entropy
        # print self.loss_box
        # print self.rpn.cross_entropy
        # print self.rpn.loss_box
        return self.cross_entropy + self.loss_box * 10

    def forward(self, im_data, im_info, gt_boxes=None, gt_ishard=None, dontcare_areas=None):
        features, rois = self.rpn(im_data, im_info, gt_boxes, gt_ishard, dontcare_areas)
        if self.training:
            roi_data = self.proposal_target_layer(rois, gt_boxes, gt_ishard, dontcare_areas, self.n_classes)
            rois = roi_data[0]
        # roi pool
        conv_new1 = self.new_conv(features)
        r_score_map = self.rfcn_score(conv_new1)
        r_bbox_map = self.rfcn_bbox(conv_new1)
        psroi_pooled_cls = self.psroi_pool_cls(r_score_map, rois)
        psroi_pooled_loc = self.psroi_pool_loc(r_bbox_map, rois)
        bbox_pred = self.bbox_pred(psroi_pooled_loc)
        bbox_pred = torch.squeeze(bbox_pred)
        cls_score = self.cls_score(psroi_pooled_cls)
        cls_score = torch.squeeze(cls_score)
        cls_prob = F.softmax(cls_score)
        if self.training:
            self.cross_entropy, self.loss_box = self.build_loss(cls_score, bbox_pred, roi_data)
        return cls_prob, bbox_pred, rois

    def build_loss(self, cls_score, bbox_pred, roi_data):
        # classification loss
        label = roi_data[1].squeeze()
        fg_cnt = torch.sum(label.data.ne(0))
        bg_cnt = label.data.numel() - fg_cnt
        # for log
        if self.debug:
            maxv, predict = cls_score.data.max(1)
            self.tp = torch.sum(predict[:fg_cnt].eq(label.data[:fg_cnt])) if fg_cnt > 0 else 0
            self.tf = torch.sum(predict[fg_cnt:].eq(label.data[fg_cnt:]))
            self.fg_cnt = fg_cnt
            self.bg_cnt = bg_cnt
        ce_weights = torch.ones(cls_score.size()[1])
        ce_weights[0] = float(fg_cnt) / bg_cnt
        ce_weights = ce_weights.cuda()
        cross_entropy = F.cross_entropy(cls_score, label, weight=ce_weights)
        # bounding box regression L1 loss
        bbox_targets, bbox_inside_weights, bbox_outside_weights = roi_data[2:]
        bbox_targets = torch.mul(bbox_targets, bbox_inside_weights)
        bbox_pred = torch.mul(bbox_pred, bbox_inside_weights)
        loss_box = F.smooth_l1_loss(bbox_pred, bbox_targets, size_average=False) / (fg_cnt + 1e-4)
        return cross_entropy, loss_box

    @staticmethod
    def proposal_target_layer(rpn_rois, gt_boxes, gt_ishard, dontcare_areas, num_classes):
        """
        ----------
        rpn_rois: (1 x H x W x A, 5) [0, x1, y1, x2, y2]
        gt_boxes: (G, 5) [x1, y1, x2, y2, class] int
        # gt_ishard: (G, 1) {0 | 1} 1 indicates hard
        dontcare_areas: (D, 4) [x1, y1, x2, y2]
        num_classes
        ----------
        Returns
        ----------
        rois: (1 x H x W x A, 5) [0, x1, y1, x2, y2]
        labels: (1 x H x W x A, 1) {0,1,...,_num_classes-1}
        bbox_targets: (1 x H x W x A, K x 4) [dx1, dy1, dx2, dy2]
        bbox_inside_weights: (1 x H x W x A, K x 4) 0, 1 masks for computing the loss
        bbox_outside_weights: (1 x H x W x A, K x 4) 0, 1 masks for computing the loss
        """
        rpn_rois = rpn_rois.data.cpu().numpy()
        rois, labels, bbox_targets, bbox_inside_weights, bbox_outside_weights = \
            proposal_target_layer_py(rpn_rois, gt_boxes, gt_ishard, dontcare_areas, num_classes)
        # print labels.shape, bbox_targets.shape, bbox_inside_weights.shape
        rois = network.np_to_variable(rois, is_cuda=True)
        labels = network.np_to_variable(labels, is_cuda=True, dtype=torch.LongTensor)
        bbox_targets = network.np_to_variable(bbox_targets, is_cuda=True)
        bbox_inside_weights = network.np_to_variable(bbox_inside_weights, is_cuda=True)
        bbox_outside_weights = network.np_to_variable(bbox_outside_weights, is_cuda=True)
        return rois, labels, bbox_targets, bbox_inside_weights, bbox_outside_weights

    def interpret_faster_rcnn(self, cls_prob, bbox_pred, rois, im_info, im_shape, nms=True, clip=True, min_score=0.0):
        # find class
        scores, inds = cls_prob.data.max(1)
        scores, inds = scores.cpu().numpy(), inds.cpu().numpy()
        keep = np.where((inds > 0) & (scores >= min_score))
        scores, inds = scores[keep], inds[keep]
        # Apply bounding-box regression deltas
        keep = keep[0]
        box_deltas = bbox_pred.data.cpu().numpy()[keep]
        box_deltas = np.asarray([
            box_deltas[i, (inds[i] * 4): (inds[i] * 4 + 4)] for i in range(len(inds))
        ], dtype=np.float)
        boxes = rois.data.cpu().numpy()[keep, 1:5] / im_info[0][2]
        pred_boxes = bbox_transform_inv(boxes, box_deltas)
        if clip:
            pred_boxes = clip_boxes(pred_boxes, im_shape)
        # nms
        if nms and pred_boxes.shape[0] > 0:
            pred_boxes, scores, inds = nms_detections(pred_boxes, scores, 0.3, inds=inds)
        return pred_boxes, scores, self.classes[inds]

    def detect(self, image, thr=0.3):
        im_data, im_scales = self.get_image_blob(image)
        im_info = np.array(
            [[im_data.shape[1], im_data.shape[2], im_scales[0]]],
            dtype=np.float32)
        cls_prob, bbox_pred, rois = self(im_data, im_info)
        pred_boxes, scores, classes = \
            self.interpret_faster_rcnn(cls_prob, bbox_pred, rois, im_info, image.shape, min_score=thr)
        return pred_boxes, scores, classes

    def get_image_blob_noscale(self, im):
        im_orig = im.astype(np.float32, copy=True)
        im_orig -= self.PIXEL_MEANS
        processed_ims = [im]
        im_scale_factors = [1.0]
        blob = im_list_to_blob(processed_ims)
        return blob, np.array(im_scale_factors)

    def get_image_blob(self, im):
        """Converts an image into a network input.
        Arguments:
            im (ndarray): a color image in BGR order
        Returns:
            blob (ndarray): a data blob holding an image pyramid
            im_scale_factors (list): list of image scales (relative to im) used
                in the image pyramid
        """
        im_orig = im.astype(np.float32, copy=True)
        im_orig -= self.PIXEL_MEANS
        im_shape = im_orig.shape
        im_size_min = np.min(im_shape[0:2])
        im_size_max = np.max(im_shape[0:2])
        processed_ims = []
        im_scale_factors = []
        for target_size in self.SCALES:
            im_scale = float(target_size) / float(im_size_min)
            # Prevent the biggest axis from being more than MAX_SIZE
            if np.round(im_scale * im_size_max) > self.MAX_SIZE:
                im_scale = float(self.MAX_SIZE) / float(im_size_max)
            im = cv2.resize(im_orig, None, None, fx=im_scale, fy=im_scale,
                            interpolation=cv2.INTER_LINEAR)
            im_scale_factors.append(im_scale)
            processed_ims.append(im)
        # Create a blob to hold the input images
        blob = im_list_to_blob(processed_ims)
        return blob, np.array(im_scale_factors)

    def load_from_npz(self, params):
        self.rpn.load_from_npz(params)
        pairs = {'fc6.fc': 'fc6', 'fc7.fc': 'fc7', 'score_fc.fc': 'cls_score', 'bbox_fc.fc': 'bbox_pred'}
        own_dict = self.state_dict()
        for k, v in pairs.items():
            key = '{}.weight'.format(k)
            param = torch.from_numpy(params['{}/weights:0'.format(v)]).permute(1, 0)
            own_dict[key].copy_(param)
            key = '{}.bias'.format(k)
            param = torch.from_numpy(params['{}/biases:0'.format(v)])
            own_dict[key].copy_(param)
When I run train.py, an error occurs in net(im_data, im_info, gt_boxes, gt_ishard, dontcare_areas) (net is an RFCN instance):
RuntimeError: arguments are located on different GPUs
The error appears after I simply add one line: net = torch.nn.DataParallel(model, device_ids=[0, 1])
Why does this error happen, and how should I use multiple GPUs to train? Is there any clue?
Thanks a lot!!
|
st115979
|
Have you converted the model to a CUDA model?
Maybe in that one line of code, you could change it to:
net = torch.nn.DataParallel(model.cuda(), device_ids=[0, 1])
AFAIK, you can’t use DataParallel if the model is on the CPU.
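A minimal sketch of the idea with a placeholder model (the key point is calling .cuda() before wrapping):
import torch
import torch.nn as nn
from torch.autograd import Variable

model = nn.Linear(128, 10).cuda()                 # move the model to the GPU first
net = nn.DataParallel(model, device_ids=[0, 1])   # then wrap it
out = net(Variable(torch.randn(32, 128).cuda()))  # the batch is split across the two GPUs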
|
st115980
|
For a convolutional kernel of size (C)x(H)x(W), I want all the channels at the same (h, w) position to share the same dropout mask. How can I manage this using the dropout function?
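As far as I know, none of the built-in dropout layers do exactly this, but a minimal sketch (assuming an input of shape (N, C, H, W) and standard inverted-dropout scaling) could sample one Bernoulli mask per spatial position and broadcast it over the channels:
import torch
from torch.autograd import Variable

def spatial_shared_dropout(x, p=0.5, training=True):
    # x: Variable of shape (N, C, H, W)
    if not training or p == 0:
        return x
    n, c, h, w = x.size()
    # one mask value per (n, h, w), shared by all channels at that position
    mask = x.data.new(n, 1, h, w).bernoulli_(1 - p).div_(1 - p)
    return x * Variable(mask.expand(n, c, h, w))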
|
st115981
|
I do:
In [1]: import torch
In [2]: a = torch.rand(1024*1024*64)
In [3]: a_cuda = a.cuda()
In [4]: type(a_cuda)
Out[4]: torch.cuda.FloatTensor
Then I do nvidia-smi
What I expect to see:
[something something] 256MB
(i.e. 64 million * 4 bytes per float)
What I actually see:
Tue Aug 15 12:11:49 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.77 Driver Version: 361.77 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M60 Off | 0000:00:1E.0 Off | 0 |
| N/A 36C P0 39W / 150W | 502MiB / 7618MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 52947 C /mldata/conda/envs/pytorch/bin/python 500MiB |
+-----------------------------------------------------------------------------+
i.e. ~512MB.
Why is this?
|
st115982
|
(Note, interestingly, if I create another 64-million-float tensor, the memory only increases by ~256MB, as I'd expect.) I thought it was maybe some gc thing, but calling:
gc.collect()
gc.collect()
gc.collect()
… memory is still ~756MB:
In [1]: import torch
In [2]: a = torch.rand(1024*1024*64)
In [3]: a_cuda = a.cuda()
In [4]: type(a_cuda)
Out[4]: torch.cuda.FloatTensor
In [5]: a.size()
Out[5]: torch.Size([67108864])
In [6]: b = torch.rand(1024*1024*64)
In [7]: b_cuda = b.cuda()
In [8]: import gc
In [9]: gc.collect()
Out[9]: 87
In [10]: gc.collect()
Out[10]: 7
In [11]: gc.collect()
Out[11]: 7
nvidia-smi:
Tue Aug 15 12:16:56 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 361.77 Driver Version: 361.77 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla M60 Off | 0000:00:1E.0 Off | 0 |
| N/A 38C P0 40W / 150W | 758MiB / 7618MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 52947 C /mldata/conda/envs/pytorch/bin/python 756MiB |
+-----------------------------------------------------------------------------+
|
st115983
|
AFAIK, PyTorch uses a caching allocator; while the memory is "free", this is not reflected in the view from the device.
How to clear some GPU memory?
Even if that same process can reuse the GPU memory, it doesn’t look like other processes can. I’m running into a similar utilization concern.
Another process will run into Out of Memory errors, while the original process keeps the GPU memory even after it is done using it.
|
st115984
|
The 64 million float tensor only takes up 256 MB. The other ~250 MB is from all the CUDA kernels in libTHC.so and libTHCUNN.so. They’re loaded when CUDA is first initialized, which happened when you called a.cuda(). We have a lot of CUDA kernels since many are defined for every data type.
|
st115985
|
I thought the double-precision floating-point format occupies 8 bytes, not 4.
Scratch that, I see you have float, not double.
|
st115986
|
I'm trying to train a model with batch normalization.
However, a single sample is quite memory-consuming, so I cannot train with a batch size large enough for batch normalization to work well.
So I'm considering the following steps:
Feed some samples and calculate the running mean, var, and other parameters.
Copy those parameters into the model, then put the batchnorm layers in eval mode.
Here, my questions are:
Even if the batchnorm layers are in eval mode, does autograd still work correctly?
Is there any efficient way to achieve this process with PyTorch or other functions? (A sketch of what I mean by step 2 is below.)
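A minimal sketch of step 2 as I imagine it (assuming the usual nn.BatchNorm layers): switch only the BatchNorm modules to eval mode so they use the stored running statistics while the rest of the model keeps training; autograd still flows through them.
import torch.nn as nn

def freeze_batchnorm(model):
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()  # use running mean/var; stop updating the statistics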
Any help would be appreciated.
Thanks.
|
st115987
|
Hi!
So when trying Scale with a tuple I get a weird error (see the following), while when I pass an int it works fine. Why?!
Here’s what’s happening:
(Pdb) t1=transforms.Scale((5,5))
(Pdb) t2=transforms.Scale(5)
(Pdb) image
<PIL.Image.Image image mode=RGB size=816x816 at 0x7F49A6BA6668>
(Pdb) t1(image)
*** TypeError: unsupported operand type(s) for /: 'tuple' and 'int'
(Pdb) t2(image)
<PIL.Image.Image image mode=RGB size=5x5 at 0x7F49A6B61780>
Thanks in advance guys!
|
st115988
|
You should probably update torchvision, possibly by installing from git: https://github.com/pytorch/vision/
Best regards
Thomas
|
st115989
|
I’ve done that twice (the second time I installed from source) and that didn’t help.
Using /usr/local/lib/python3.5/dist-packages
Finished processing dependencies for torchvision==0.1.9
|
st115990
|
Hello all,
I was wondering whether there is a layer that can perform upsampling in one dimension. For example, Keras has an UpSampling1D layer, but all the upsampling layers in PyTorch seem to be for at least 2-dimensional data.
Any ideas?
|
st115991
|
In the absence of 1D specific methods you could always use the 2D and unsqueeze + squeeze.
a = torch.randn(batch_size, num_chan, 10)
a = a.unsqueeze(dim=3)
a_up = F.upsample(Variable(a), size=(20, 1), mode='bilinear').squeeze(dim=3)
|
st115992
|
I have attempted to use unsqueeze and squeeze to pretend the tensor is actually two-dimensional. The bilinear filter works with the specified size, but the nearest-neighbor one does not (I used nn.upsample2d instead of F.upsample, with size set to (x, 1), and it reports that the aspect ratio isn't respected in the nearest-neighbor case). I was hoping to copy the values to double the size, not interpolate between adjacent values.
I will try using F.upsample instead of the normal 2d upsampling module and see if it makes a difference.
|
st115993
|
I think the core of both module vs functional is the same. Nearest neighbour does appear to have an additional aspect ratio constraint. Likely a bit more wasteful, but you could then scale both dimensions equally and throw away the extra dimension instead of squeezing.
F.upsample(torch.autograd.Variable(a), size=(20,2))[:,:,:,0]
|
st115994
|
In Lua Torch, I needed to preallocate all CUDA tensors, in order to:
avoid sync points associated with allocation
avoid running out of memory…
Is this still a requirement/recommendation for PyTorch?
(I'm getting OOM errors using an LSTM; not sure if this is because I need to pre-allocate stuff, or…?)
|
st115995
|
No.
In PyTorch:
Freeing CUDA tensors does not synchronize because the caching allocator holds onto and reuses the memory segment.
Tensors are freed immediately when they go out of scope (because of Python’s ref-counting). In Lua Torch, tensors were not freed until the garbage collector ran.
|
st115996
|
Can someone give me some advice on implementing a deeply supervised network, which has multiple losses? Thank you.
|
st115997
|
I don’t understand what you are confused about. Don’t you just take a (weighted) sum of the different losses?
|
st115998
|
I want to implement a network like this, but I don't know how to implement multiple outputs and losses. I am just a newbie in PyTorch.
|
st115999
|
If you look at a basic PyTorch tutorial like http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html, you will see code like this:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
But there is nothing stopping you from having forward return outputs from several layers instead of just the last one. You can then use those other outputs to compute losses, just as the tutorial uses the last output to compute a loss, and add the losses together to get your overall loss. A sketch of the idea is below.
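For example, a minimal sketch (not from the tutorial; the layer sizes assume 3x32x32 inputs) that returns an intermediate "side" output alongside the final one and sums the two losses:
import torch.nn as nn
import torch.nn.functional as F

class DeeplySupervisedNet(nn.Module):
    def __init__(self):
        super(DeeplySupervisedNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.aux_fc = nn.Linear(6 * 14 * 14, 10)  # side branch for the auxiliary loss
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc = nn.Linear(16 * 5 * 5, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))          # (N, 6, 14, 14)
        aux_out = self.aux_fc(x.view(x.size(0), -1))  # auxiliary prediction
        x = self.pool(F.relu(self.conv2(x)))          # (N, 16, 5, 5)
        out = self.fc(x.view(x.size(0), -1))          # main prediction
        return out, aux_out

# in the training loop:
# loss = F.cross_entropy(out, target) + 0.3 * F.cross_entropy(aux_out, target)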
|
st116000
|
There are a lot of loss functions available in torch.nn. I am currently working on a speech recognizer, doing end-to-end recognition. I need to have a connectionist temporal classification (CTC) layer as the outermost layer. Is there a neat way to do this?
In short, I want a bidirectional LSTM architecture whose objective is to minimize the CTC loss.
Obviously, I don't want to calculate the error gradients by hand.
|
st116001
|
You might like this:
https://github.com/SeanNaren/deepspeech.pytorch (Speech Recognition using DeepSpeech2 and the CTC activation function)
|
st116002
|
Hey guys, I wrote an encoder-decoder-with-attention model to predict reversed strings; that is, given abc, return cba.
I get good results on most of my runs, but every few runs my network doesn't converge at all.
Any suggestions on how to check what may cause this?
My encoder and decoder:
class EncoderRNN(rnn.RNN):
    def __init__(self, hidden_size, emb_size, vocab_size, pre_trained_emb=None, n_layers=1, bidirect=True):
        super(EncoderRNN, self).__init__()
        self.n_layers = n_layers
        self.hidden_size = hidden_size
        self.bidirect = bidirect
        self.emb = self.create_emb(vocab_size, emb_size, pre_trained_emb)
        self.gru = nn.GRU(emb_size, hidden_size, batch_first=True, num_layers=n_layers, bidirectional=bidirect)
        self.opt = optim.Adam(self.params())

    def forward(self, input_sequence, hidden):
        embeddings = self.emb(input_sequence)
        output, hidden = self.gru(embeddings, hidden)
        return output, hidden

class DecoderRNN(rnn.RNN):
    def __init__(self, hidden_size, emb_size, vocab_size, pre_trained_emb=None):
        super(DecoderRNN, self).__init__()
        self.hidden_size = hidden_size
        self.n_layers = config.values.get('n_layers', 1)
        self.emb = self.create_emb(vocab_size, emb_size, pre_trained_emb)
        self.W1 = Par(hidden_size, hidden_size)  # U_a in the paper
        self.W2 = Par(hidden_size, hidden_size)  # W_a in the paper
        self.W3 = Par(emb_size + hidden_size, hidden_size)
        self.b2 = Par(hidden_size)
        self.b3 = Par(hidden_size)
        self.v = Par(hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, num_layers=self.n_layers)
        self.linear = nn.Linear(hidden_size, vocab_size)
        self.opt = optim.Adam(self.params(), lr=0.01)

    def forward(self, prev_inp, hidden, enc_outputs):
        Uh = torch.matmul(enc_outputs, self.W1)
        Ws = torch.matmul(torch.cat(hidden, 1)[:, :self.hidden_size], self.W2)
        Wsb = torch.add(Ws, self.b2).unsqueeze(1)
        u = F.tanh(torch.add(Uh, Wsb))
        attn_weights = torch.mul(self.v, u).sum(2)
        attn_weights = F.softmax(attn_weights).unsqueeze(2)
        context_vector = torch.mul(attn_weights, enc_outputs).sum(1).squeeze(1)
        # s_i = f(s_i-1, y_i-1, c_i)
        prev_inp_emb = self.emb(prev_inp)
        res = torch.matmul(torch.cat([prev_inp_emb, context_vector], 1), self.W3)
        res = torch.add(res, self.b3).unsqueeze(0)
        res, hidden = self.gru(res, hidden.view(self.n_layers, -1, self.hidden_size))
        res = self.linear(res.squeeze(0))
        res = F.log_softmax(res)
        return res, hidden, attn_weights
A few functions that are shared for the encoder and decoder:
class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

    def initHidden(self, batch_size):
        return cuda(Variable(torch.zeros(self.n_layers * self.num_directions(), batch_size, self.hidden_size)))

    def create_emb(self, output_size, emb_size, pre_trained_emb_mat):
        emb = nn.Embedding(output_size, emb_size)
        if pre_trained_emb_mat is not None:
            emb.load_state_dict({'weight': pre_trained_emb_mat})
            for param in emb.parameters():
                param.requires_grad = False
        return emb

    def params(self):
        return (p for p in self.parameters() if p.requires_grad)

    def num_directions(self):
        return 2 if self.bidirect else 1
And a few more functions
def cuda(var):
    if torch.cuda.is_available():
        return var.cuda()
    return var

def Arr(*sizes): return torch.randn(sizes)
def Par(*sizes): return torch.nn.Parameter(Arr(*sizes))
Where I initialized the encoder and decoder in the following way:
encoder = EncoderRNN(hidden_size, emb_size, vocab.n_words, n_layers=n_layers, bidirect=bidirect)
decoder = DecoderRNN(hidden_size*enc_num_of_directions, emb_size, vocab.n_words)
Notice the decoder state size is twice the size of the encoder.
|
st116003
|
Hello everybody, I want to implement in PyTorch a new loss presented in a recent paper. This loss is used for instance segmentation, but I find it difficult to implement because the number of instances differs from picture to picture, and I need to gather the pixel values of each instance. The formula looks like this:
Can anyone give me some advice? Thank you very much!!
|
st116004
|
Hi everyone,
I recently started working in the deep learning field and I am not familiar with GPU computing. I am looking to set up an 8-GPU server environment, which comes in single and double root complex versions. What difference do those versions make when I run models on the GPU?
st116005
|
I have an RNN, and after every batch I have the option of either detaching the hidden states or re-initializing them. It's not clear to me which one I should choose. If the batches are independent, should you just re-initialize them (e.g. with all zeros), or should you pass the hidden state's data to the next batch (but call detach so you don't backprop through the entire dataset)?
Also, as a sanity check: if the batches were dependent, would you call detach, if you had to choose between re-initializing and detaching?
|
st116006
|
With independent batches, you shouldn’t carry hidden state from one batch to the next by calling detach. That wouldn’t make sense – each batch element would have a different hidden state corresponding to what was appropriate to the end of the last corresponding sequence, but that sequence has no relation to the current one. On the other hand, if the batches are successive parts of a long sequence and you are doing truncated BPTT then you should call detach.
You may find that it helps somewhat to learn the initial hidden state.
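A minimal sketch contrasting the two options (module and sizes are made up):
import torch
import torch.nn as nn
from torch.autograd import Variable

rnn = nn.GRU(input_size=8, hidden_size=16, batch_first=True)

# (a) independent batches: re-initialize the hidden state every batch
for _ in range(3):
    x = Variable(torch.randn(4, 10, 8))
    h0 = Variable(torch.zeros(1, 4, 16))
    out, h = rnn(x, h0)

# (b) truncated BPTT over one long sequence: carry the state, but detach it
h = Variable(torch.zeros(1, 4, 16))
for _ in range(3):
    x = Variable(torch.randn(4, 10, 8))
    out, h = rnn(x, h)
    h = h.detach()  # keep the values, cut the backprop graph here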
|
st116007
|
Q1:
In PyTorch, if we don't use torch.cuda.streams explicitly, does PyTorch only use one CUDA stream (the default stream)? Am I right?
Q2:
I want to use multiple CUDA streams so that different GPU tasks can run concurrently on the same GPU. I think this may improve GPU utilization.
Here is a CUDA copy task: input_B.resize_(input_A.size()).copy_(input_A). input_B is of type torch.cuda.FloatTensor, and input_A is of type torch.Tensor. How do I create a new CUDA stream and put this copy task on it?
Thanks a lot; I only have one GPU in my computer.
|
st116008
|
torch.cuda.stream(your_stream) does what you want, I guess.
And you can use multiprocessing to launch different processes with different streams. Note that multiprocessing with CUDA is only supported in Python 3.
|
st116009
|
Hi alexis-jacq. What is the argument we need to pass to torch.cuda.stream()? I am new to streams, so sorry for the stupid question. Thanks in advance.
|
st116010
|
I am also trying to understand how to use streams. I believe that currently my model is getting much less out of the GPU than it could. (It is hard to understand where the bottlenecks are, but one tipoff is that nvidia-smi reports sm usage of only around 33%.)
A prior confusion that I have about pytorch before even getting to the topic of streams is about when pytorch is waiting for kernels to finish running. I have tried line profiling code using the python line profiler (https://github.com/rkern/line_profiler), and it seems like the numbers I get for how much time is spent on each line roughly correspond to how long I would expect the corresponding computation to take on the GPU (but, as noted below, simple operations are not as much faster than complex ones as I might expect, and a further caveat is that the line profiler doesn’t provide any kind of variance estimate). While seeing such numbers is good for helping me to see where bottlenecks are, it seems to imply that pytorch is waiting for computations to finish after each line (and not just when, say, I try to print out the result of a computation). And I would think that if pytorch just did an asynchronous kernel launch and immediately returned it would be faster.
If I am right that pytorch waits, that explains why my naive attempt to use streams below fails to improve performance.
What I tried doing was making a simple class to run code in parallel on different streams, like so:
class StreamSpreader():
    def __init__(self):
        self.streams = []

    def __call__(self, *tasks):
        if not torch.cuda.is_available():
            return [t() for t in tasks]
        while len(self.streams) < len(tasks):
            self.streams.append(torch.cuda.Stream())
        ret = []
        for s, t in zip(self.streams, tasks):
            with torch.cuda.stream(s):
                ret.append(t())
        return ret
One question I have is: if this implementation doesn’t work because pytorch will wait for each operation to complete before launching the next, is it at least possible to make a working StreamSpreader class with the same API?
I tried three implementations of an LSTM. In the first, I do separate matrix multiplies of the hidden state and the input and add the results. This is what is like pytorch does when not using cudnn, I believe. (By the way, if you are wondering why I don’t just use the built-in LSTM, it is because I actually want to use a somewhat different architecture that is not supported as a built-in.) In the second implementation, I use streams via my StreamSpreader class. In the third, I concatenate the hidden state and the input and do one matrix multiply. I found that that the last approach improved performance significantly but that the stream approach actually decreased performance slightly.
if mode == 'baseline':
    wh_b = torch.addmm(bias_batch, h_0, self.weight_hh)
    wi = torch.mm(input, self.weight_ih)
    preactivations = wh_b + wi
elif mode == 'multistream':
    wh_b, wi = self.spreader(
        lambda: torch.addmm(bias_batch, h_0, self.weight_hh),
        lambda: torch.mm(input, self.weight_ih),
    )
    preactivations = wh_b + wi
elif mode == 'fused':
    combined_inputs = torch.cat((h_0, input), 1)
    preactivations = torch.addmm(bias_batch, combined_inputs, self.combined_weights)
I was a bit surprised at the line-by-line timings though, which makes me wonder if I have indeed misunderstood the pytorch model and that the truth is that it does do something more async. In the baseline model, the two matrix multiples together took an average of 208 microseconds (121 + 87), and adding them to compute the preactivations took an average of 58 microseconds – it surprises me that the simple add operation takes so long. In the fused version, concatenating the hidden state and the input takes an average of 65 microseconds, but the single matrix multiply takes only 117 microseconds. For the stream version, adding the preactivations took an average of 53 microseconds, but doing the matrix multiplies took 262 microseconds.
This is probably obvious, but even if streams don’t make sense for the particular case of implementing an LSTM block, I am still really interested in learning to use them effectively – I just had to choose something as a test case, and this is what I chose.
I didn’t check if there were significant differences in backpropagation speed, by the way, but in general the model spends about half its time backpropagating. I guess a further confusion I may have depending on what the answer to the “prior” confusion is about how asynchronous backpropagation is and whether using multiple streams could be a good way to increase GPU utilization here.
By the way, I have also tried a little using the NVIDIA profiler. An issue I encountered is that my model segfaults after about 30 seconds when run under the profiler. Is there a known bug around this? I can get useful results by quitting the training before the segfault happens, but I am currently still having trouble understanding it due to unfamiliarity with the tool.
|
st116011
|
Hi folks,
Have you ever seen a launched training process not use the GPU (or GPUs) at the beginning? After a while, it starts to use the assigned GPU(s). I have encountered this problem many times. Not sure what the reason is. Any suggestions?
Thanks!
|
st116012
|
Hi all
I’m trying to implement a paper that uses attention networks on a CNN and am a bit lost.
the paper says the attention block is:
attention (w/ 5x1 conv 8 filters + BN + tanh + 5x1 conv 1 filter + BN + softmax)
The input to this block (I think) has dimensions batches x 1 (filter) x length
Can someone explain how the attention aspect works and, if possible, give some (pseudo)code? My current guess is the sketch below.
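My reading of that one-line description, as a minimal sketch (the kernel padding, the softmax axis, and the final weighted sum are all assumptions on my part):
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class AttentionBlock(nn.Module):
    def __init__(self):
        super(AttentionBlock, self).__init__()
        self.conv1 = nn.Conv1d(1, 8, kernel_size=5, padding=2)
        self.bn1 = nn.BatchNorm1d(8)
        self.conv2 = nn.Conv1d(8, 1, kernel_size=5, padding=2)
        self.bn2 = nn.BatchNorm1d(1)

    def forward(self, x):                       # x: (batch, 1, length)
        a = F.tanh(self.bn1(self.conv1(x)))
        a = self.bn2(self.conv2(a))              # (batch, 1, length)
        weights = F.softmax(a.squeeze(1))        # attention weights over length
        return (x.squeeze(1) * weights).sum(1)   # weighted sum over length

block = AttentionBlock()
out = block(Variable(torch.randn(4, 1, 100)))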
Thanks
|
st116013
|
I want to modify train_data and train_labels of a torchvision.datasets.CIFAR10 object:
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=False)
When I modify trainset.train_labels and trainset.train_data, the lengths of these lists change. But when I print len() of the CIFAR-10 object, it still prints the old train_data length. Therefore, when I later use this trainset with a loader:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True)
I get the following error:
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python2.7/dist-packages/torchvision/datasets/cifar.py", line 83, in __getitem__
img, target = self.train_data[index], self.train_labels[index]
IndexError: index 36238 is out of bounds for axis 0 with size 27500
Do you know what I should do to avoid this error?
|
st116014
|
The master branch of torchvision should have this fixed, as per #211 there. If you look at the len() function, it used to return a hardcoded 50,000, but now it returns the actual size of the train data. If you can't get master for some reason, you can do what I do and just subclass, like so:
class CIFAR10(dset.CIFAR10):
    def __len__(self):
        if self.train:
            return len(self.train_data)
        else:
            return len(self.test_data)
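Then construct it exactly like the original dataset, e.g.:
trainset = CIFAR10(root='./data', train=True, download=False)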
|
st116015
|
I have the same problem even after upgrading to 0.2.0_1, but the subclass method works!
Thanks.
|
st116016
|
Hi everyone,
I’ve successfully installed CUDA 8.0.83 on OS 10.12.5 (verified by running the ./deviceQuery and ./bandwidthTest sample scripts from the CUDA install and going through the first PyTorch tutorial). I’ve also installed PyTorch successfully using pip. However, when I run torch.cuda.is_available(), I get False in return.
Not quite sure why this step fails when all the other steps are successful. Any help would be appreciated. Cheers!
|
st116017
|
As you can see on the PyTorch install selection screen, on macOS you have to install from source for CUDA support. Check out https://github.com/pytorch/pytorch#from-source.
|
st116018
|
Yes, I have followed the instructions to install from source. I am assuming the install was successful since both the ./deviceQuery and ./bandwidthTest scripts ran ok. My issue is still that PyTorch can’t seem to detect that cuda is available when I run “torch.cuda.is_available()”.
|
st116019
|
Sorry, to clarify: did you install PyTorch or CUDA from source? deviceQuery and bandwidthTest are scripts to check that CUDA works, and have nothing to do with PyTorch. You need to install PyTorch from source, not with pip, by following the instructions in the link I posted.
|
st116020
|
I installed CUDA from source via the instructions on this site: http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html#axzz4kJrsqXTL. I have Xcode 8.2 and CUDA 8.0.83. This installation passes the deviceQuery and bandwidthTest scripts.
I have also installed PyTorch from source via the link you posted above. I have Anaconda installed and I set "export CMAKE_PREFIX_PATH=~/anaconda". The step that fails is python setup.py install.
When I run python setup.py install, this is what I get in the terminal:
sudo MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
running install
running build_deps
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/_thnn/utils.py:1: RuntimeWarning: Parent module ‘torch._thnn’ not found while handling absolute import
import os
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/_thnn/utils.py:2: RuntimeWarning: Parent module ‘torch._thnn’ not found while handling absolute import
import itertools
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/_thnn/utils.py:3: RuntimeWarning: Parent module ‘torch._thnn’ not found while handling absolute import
import importlib
– Try OpenMP C flag = [-fopenmp=libomp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [ ]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-fopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [/openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-Qopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-xopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [+Oopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-qsmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-mp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-fopenmp=libomp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [ ]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-fopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [/openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-Qopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-xopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [+Oopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-qsmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-mp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Could NOT find OpenMP (missing: OpenMP_C_FLAGS OpenMP_CXX_FLAGS)
– Could not find hardware support for NEON on this machine.
– No OMAP3 processor on this machine.
– No OMAP4 processor on this machine.
– SSE2 Found
– SSE3 Found
– AVX Found
– AVX2 Found
– TH_SO_VERSION: 1
– Atomics: using GCC intrinsics
– Checking for [mkl_intel_lp64 - mkl_intel_thread - mkl_core - iomp5 - pthread - m]
– Library mkl_intel_lp64: /Users/zzhao/miniconda2/lib/libmkl_intel_lp64.dylib
– Library mkl_intel_thread: /Users/zzhao/miniconda2/lib/libmkl_intel_thread.dylib
– Library mkl_core: /Users/zzhao/miniconda2/lib/libmkl_core.dylib
– Library iomp5: /Users/zzhao/miniconda2/lib/libiomp5.dylib
– Library pthread: /usr/lib/libpthread.dylib
– Library m: /usr/lib/libm.dylib
– MKL library found
– Found a library with BLAS API (mkl).
– Found a library with LAPACK API. (mkl)
– Configuring done
CMake Warning (dev):
Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run “cmake
–help-policy CMP0042” for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
MACOSX_RPATH is not specified for the following targets:
TH
This warning is for project developers. Use -Wno-dev to suppress it.
– Generating done
– Build files have been written to: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/TH
[100%] Built target TH
Install the project…
– Install configuration: “Release”
– Installing: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTH.1.dylib
– Installing: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTH.dylib
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/TH.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THAllocator.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THMath.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THBlas.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THDiskFile.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THFile.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THFilePrivate.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGeneral.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateAllTypes.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateDoubleType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateFloatType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateHalfType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateLongType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateIntType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateShortType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateCharType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateByteType.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateFloatTypes.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THGenerateIntTypes.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THLapack.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THLogAdd.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THMemoryFile.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THRandom.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THSize.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THStorage.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THTensor.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THTensorApply.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THTensorDimApply.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THTensorMacros.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THVector.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THAtomic.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/THHalf.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/vector/AVX.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/vector/AVX2.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THBlas.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THBlas.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THLapack.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THLapack.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THStorage.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THStorage.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THStorageCopy.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THStorageCopy.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensor.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensor.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorConv.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorConv.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorCopy.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorCopy.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorLapack.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorLapack.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorMath.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorMath.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorRandom.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THTensorRandom.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THVectorDispatch.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH/generic/THVector.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/share/cmake/TH/THConfig.cmake
Updating install_name for libTH.1.dylib
Updating install_name for libTHNN.1.dylib
Updating install_name for libTHS.1.dylib
– TH_LIBRARIES: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTH.1.dylib
– THS_SO_VERSION: 1
– Configuring done
CMake Warning (dev):
Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run “cmake
–help-policy CMP0042” for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
MACOSX_RPATH is not specified for the following targets:
THS
This warning is for project developers. Use -Wno-dev to suppress it.
– Generating done
– Build files have been written to: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THS
[ 50%] Linking C shared library libTHS.dylib
[100%] Built target THS
Install the project…
– Install configuration: “Release”
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTHS.1.dylib
– Installing: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTHS.dylib
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/THS.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/THSGenerateAllTypes.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/THSGenerateFloatTypes.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/THSGenerateIntTypes.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/THSTensor.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/generic/THSTensor.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/generic/THSTensor.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/generic/THSTensorMath.c
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS/generic/THSTensorMath.h
Updating install_name for libTH.1.dylib
Updating install_name for libTHNN.1.dylib
Updating install_name for libTHS.1.dylib
– TH_LIBRARIES: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTH.1.dylib
– Try OpenMP C flag = [-fopenmp=libomp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [ ]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-fopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [/openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-Qopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-xopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [+Oopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-qsmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP C flag = [-mp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-fopenmp=libomp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [ ]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-fopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [/openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-Qopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-openmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-xopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [+Oopenmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-qsmp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Try OpenMP CXX flag = [-mp]
– Performing Test OpenMP_FLAG_DETECTED
– Performing Test OpenMP_FLAG_DETECTED - Failed
– Could NOT find OpenMP (missing: OpenMP_C_FLAGS OpenMP_CXX_FLAGS)
CMake Warning (dev) at CMakeLists.txt:61 (LINK_DIRECTORIES):
This command specifies the relative path
as a link directory.
Policy CMP0015 is not set: link_directories() treats paths relative to the
source dir. Run “cmake --help-policy CMP0015” for policy details. Use the
cmake_policy command to set the policy and suppress this warning.
This warning is for project developers. Use -Wno-dev to suppress it.
– THNN_SO_VERSION: 1
– Configuring done
CMake Warning (dev):
Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run “cmake
–help-policy CMP0042” for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
MACOSX_RPATH is not specified for the following targets:
THNN
This warning is for project developers. Use -Wno-dev to suppress it.
– Generating done
– Build files have been written to: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THNN
[ 50%] Linking C shared library libTHNN.dylib
[100%] Built target THNN
Install the project…
– Install configuration: “Release”
– Installing: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTHNN.1.dylib
– Installing: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTHNN.dylib
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THNN/THNN.h
– Up-to-date: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THNN/generic/THNN.h
Updating install_name for libTH.1.dylib
Updating install_name for libTHNN.1.dylib
Updating install_name for libTHS.1.dylib
– Removing -DNDEBUG from compile flags
– TH_LIBRARIES: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/lib/libTH.1.dylib
– MAGMA not found. Compiling without MAGMA support
– Automatic GPU detection failed. Building for common architectures.
– Autodetected CUDA architecture(s): 3.0;3.5;5.0;5.2;6.0;6.1;6.1+PTX
– got cuda version 8.0
– Found CUDA with FP16 support, compiling with torch.CudaHalfTensor
– CUDA_NVCC_FLAGS: -DTH_INDEX_BASE=0 -I/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include -I/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/TH -I/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THC -I/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THS -I/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THCS -I/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/tmp_install/include/THPP;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_61,code=compute_61;-DCUDA_HAS_FP16=1
– THC_SO_VERSION: 1
– Configuring done
CMake Warning (dev):
Policy CMP0042 is not set: MACOSX_RPATH is enabled by default. Run “cmake
–help-policy CMP0042” for policy details. Use the cmake_policy command to
set the policy and suppress this warning.
MACOSX_RPATH is not specified for the following targets:
THC
This warning is for project developers. Use -Wno-dev to suppress it.
– Generating done
– Build files have been written to: /Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC
[ 1%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCHalf.cu.o
[ 4%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCBlas.cu.o
[ 4%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCStorageCopy.cu.o
[ 6%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensor.cu.o
[ 6%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCStorage.cu.o
[ 7%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCTensorCopy.cu.o
[ 9%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCSleep.cu.o
[ 9%] Building NVCC (Device) object CMakeFiles/THC.dir/THC_generated_THCReduceApplyUtils.cu.o
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
CMake Error at THC_generated_THCSleep.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCSleep.cu.o
CMake Error at THC_generated_THCTensor.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensor.cu.o
CMake Error at THC_generated_THCStorageCopy.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCStorageCopy.cu.o
CMake Error at THC_generated_THCTensorCopy.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCTensorCopy.cu.o
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
CMake Error at THC_generated_THCStorage.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCStorage.cu.o
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCSleep.cu.o] Error 1
make[2]: *** Waiting for unfinished jobs…
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensorCopy.cu.o] Error 1
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCTensor.cu.o] Error 1
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCStorageCopy.cu.o] Error 1
CMake Error at THC_generated_THCBlas.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCBlas.cu.o
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCStorage.cu.o] Error 1
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCBlas.cu.o] Error 1
CMake Error at THC_generated_THCReduceApplyUtils.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCReduceApplyUtils.cu.o
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCReduceApplyUtils.cu.o] Error 1
nvcc fatal : The version (‘80100’) of the host compiler (‘Apple clang’) is not supported
CMake Error at THC_generated_THCHalf.cu.o.cmake:207 (message):
Error generating
/Users/zzhao/Desktop/pytorch-master/pytorch/torch/lib/build/THC/CMakeFiles/THC.dir//./THC_generated_THCHalf.cu.o
make[2]: *** [CMakeFiles/THC.dir/THC_generated_THCHalf.cu.o] Error 1
make[1]: *** [CMakeFiles/THC.dir/all] Error 2
make: *** [all] Error 2
I’ve seen that the nvcc fatal error is due to having the wrong Xcode CLT version (https://github.com/arrayfire/arrayfire/issues/1384). So I tried switching to Xcode CLT 7.3—however, I still get the same errors when running setup.py.
As of right now, I have CUDA 8.0.83, OS 10.12.5, Apple LLVM Version 8.0.0, and Xcode 8.2 and I’m still having trouble getting torch.cuda.is_available() to return true. Any further help would be appreciated!
|
st116021
|
You should be almost there. This is effectively what I did as well, although I only had to downgrade from Xcode 8.3 to 8.2 (and the same should be true for you with CUDA 8.0.83). What exactly do you get when you run clang --version? You did run xcode-select right?
|
st116022
|
When I run clang --version, I have:
clang --version
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin16.6.0
Thread model: posix
InstalledDir: /Applications/Xcode_8.2.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
I’ve also ran “sudo xcode-select -s /Applications/Xcode_8.2.app/Contents/Developer”.
|
st116023
|
Spruceb—thanks for your help on this topic btw. It seems that this issue will remain unsolved for the time being. Hopefully someone else can work through this issue in the future. Cheers.
|
st116024
|
Unfortunately, no. I ultimately ended up installing PyTorch on Ubuntu 16.04.
Zhen (Tony) Zhao
Research Engineer, Virtual Driver Systems, Autonomous Vehicles and Robotics
Ford Motor Company, Dearborn MI
Semper Fidelis
|
st116025
|
Thanks for the fast reply.
Just as a side thought, “10.9” found in this line:
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install
isn’t true for all machines. Since you are using Mac OS 10.12.x, it must be changed to 10.12, not 10.9
Could you see if this will do it?
EDIT: Actually, it worked for me. torch.cuda.is_available() now prints “True”.
I also updated my NVIDIA drivers.
|
st116026
|
I have a silly question:
THLongStorage *t1 = THLongStorage_newWithAllocator(2, &THDefaultAllocator, NULL);
THLongStorage_free(t1);
it’s ok.
long data[3] = {2, 3, 5};
THLongStorage *t2 = THLongStorage_newWithDataAndAllocator(data, 3, &THDefaultAllocator, NULL);
THLongStorage_free(t2);
there is an error: process finished with exit code 139 (interrupted by signal 11: SIGSEGV)
Thank you in advance!
|
st116027
|
THStorage_(newWithDataAndAllocator) assumes the data was allocated by the passed-in allocator. It doesn’t copy the data, it just steals the pointer.
The snippet breaks because data is allocated on the stack but the THLongStorage_free() tries to free it via the allocator (free()).
Instead do something like:
long data[3] = {2, 3, 5};
THLongStorage *t1 = THLongStorage_newWithAllocator(3, &THDefaultAllocator, NULL);
memcpy(t1->data, data, 3 * sizeof(long));
/* ... use t1 ... */
THLongStorage_free(t1);
|
st116028
|
I’m trying to implement the Model Agnostic Meta-Learning algorithm in PyTorch based on the TensorFlow code. To summarize:
Compute the gradients of a model parametrized by Theta based on the loss from some training samples.
Compute Theta_ = Theta - lr*Theta_grad
Compute loss of the model when parametrized by Theta_ on some test samples.
Compute the gradients all the way back to Theta
Here is the code I wrote:
train_output = forward(train_images, weights)
train_loss = F.cross_entropy(train_output, train_labels)
grads = th.autograd.grad(train_loss, weights.values(), create_graph=True)
gradients = dict(zip(weights.keys(), grads))
fast_weights = Munch(dict(zip(weights.keys(), [weights[key] - args.update_lr * gradients[key] for key in weights.keys()])))
test_output = forward(test_images, fast_weights)
test_loss = F.cross_entropy(test_output, test_labels)
temp_grad = th.autograd.grad(test_loss, fast_weights.b5, retain_graph=True)
new_grads = th.autograd.grad(fast_weights.b5, weights.b5, grad_outputs=temp_grad)
test_loss.backward()
This code works fine when executed on CPU but does not work when using GPU. I use Pytorch 0.2.0. Complete error message:
Traceback (most recent call last):
File "main_f.py", line 260, in <module>
test_loss.backward()
File "/home/lib/python3.5/site-packages/torch/autograd/variable.py", line 156, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, retain_variables)
File "/home/lib/python3.5/site-packages/torch/autograd/__init__.py", line 98, in backward
variables, grad_variables, retain_graph)
RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
Why does this happen?
Related Code:
weights = Munch()
weights.W1, weights.b1 = init_conv(1, args.num_filters, (3, 3))
weights.W2, weights.b2 = init_conv(args.num_filters, args.num_filters, (3, 3))
weights.W3, weights.b3 = init_conv(args.num_filters, args.num_filters, (3, 3))
weights.W4, weights.b4 = init_conv(args.num_filters, args.num_filters, (3, 3))
weights.W5, weights.b5 = init_fc(args.num_filters, args.num_classes)
bns = Munch()
bns.bn1 = nn.BatchNorm2d(args.num_filters)
bns.bn2 = nn.BatchNorm2d(args.num_filters)
bns.bn3 = nn.BatchNorm2d(args.num_filters)
bns.bn4 = nn.BatchNorm2d(args.num_filters)
if args.cuda:
weights = Munch({k: w.cuda() for k, w in weights.items()})
bns = Munch({k: bn.cuda() for k, bn in bns.items()})
def conv_block(input, weight, bias, bn):
out = F.conv2d(input, weight, bias, padding=1)
out = bn(out)
out = F.relu(out)
return F.max_pool2d(out, 2)
def forward(input, weights):
out = conv_block(input, weights.W1, weights.b1, bns.bn1)
out = conv_block(out, weights.W2, weights.b2, bns.bn2)
out = conv_block(out, weights.W3, weights.b3, bns.bn3)
out = conv_block(out, weights.W4, weights.b4, bns.bn4)
out = out.view(-1, 64)
return F.linear(out, weights.W5, weights.b5)
|
st116029
|
I find theano’s dot() and broadcasting of basic operators very convenient (ditto for keras, which is designed to be fully compatible with the theano API for these functions). It saves a lot of unsqueeze()ing and expand_as()ing and makes life a lot easier IMO. It also makes it easier to port code from theano and keras to pytorch.
In summary, dot() handles pretty much any sized tensor arguments and does a dot product of the last axis of the first argument with the 2nd last axis of the 2nd argument. And for +*-/ it broadcasts any empty leading or unit axes as needed to make the arguments compatible. In case anyone is interested, here is a pytorch version of theano’s dot() and broadcasted operators - hope some folks find it useful!
def align(x, y, start_dim=2):
xd, yd = x.dim(), y.dim()
if xd > yd:
for i in range(xd - yd): y = y.unsqueeze(0)
elif yd > xd:
for i in range(yd - xd): x = x.unsqueeze(0)
xs = list(x.size())
ys = list(y.size())
nd = len(ys)
for i in range(start_dim, nd):
td = nd-i-1
if ys[td]==1: ys[td] = xs[td]
elif xs[td]==1: xs[td] = ys[td]
return x.expand(*xs), y.expand(*ys)
def dot(x, y):
x, y = align(x, y)
assert(1<y.dim()<5)
if y.dim() == 2:
return x.mm(y)
elif y.dim() == 3:
return x.bmm(y)
else:
xs,ys = x.size(), y.size()
res = torch.zeros(*(xs[:-1] + (ys[-1],)))
for i in range(xs[0]): res[i] = x[i].bmm(y[i])
return res
def aligned_op(x,y,f):
x, y = align(x,y,0)
return f(x, y)
def add(x, y): return aligned_op(x, y, operator.add)
def sub(x, y): return aligned_op(x, y, operator.sub)
def mul(x, y): return aligned_op(x, y, operator.mul)
def div(x, y): return aligned_op(x, y, operator.truediv)
And here are some tests / examples:
def Arr(*sz): return torch.randn(sz)
m = Arr(3, 2)
v = Arr(2)
b = Arr(4,3,2)
t = Arr(5,4,3,2)
mt = m.transpose(0,1)
bt = b.transpose(1,2)
tt = t.transpose(2,3)
def check_eq(x,y): assert(torch.equal(x,y))
check_eq(dot(m,mt),m.mm(mt))
check_eq(dot(v,mt), v.unsqueeze(0).mm(mt))
check_eq(dot(b,bt),b.bmm(bt))
check_eq(dot(b,mt),b.bmm(mt.unsqueeze(0).expand_as(bt)))
exp = t.view(-1,3,2).bmm(tt.contiguous().view(-1,2,3)).view(5,4,3,3)
check_eq(dot(t,tt),exp)
check_eq(add(m,v),m+v.unsqueeze(0).expand_as(m))
check_eq(add(v,m),m+v.unsqueeze(0).expand_as(m))
check_eq(add(m,t),t+m.unsqueeze(0).unsqueeze(0).expand_as(t))
check_eq(sub(m,v),m-v.unsqueeze(0).expand_as(m))
check_eq(mul(m,v),m*v.unsqueeze(0).expand_as(m))
check_eq(div(m,v),m/v.unsqueeze(0).expand_as(m))
|
st116030
|
I’ve made some minor changes to this code to make it a bit faster - here’s the updated version (tests from above will still work fine):
def unit_prefix(x, n=1):
for i in range(n): x = x.unsqueeze(0)
return x
def align(x, y, start_dim=2):
xd, yd = x.dim(), y.dim()
if xd > yd: y = unit_prefix(y, xd - yd)
elif yd > xd: x = unit_prefix(x, yd - xd)
xs, ys = list(x.size()), list(y.size())
nd = len(ys)
for i in range(start_dim, nd):
td = nd-i-1
if ys[td]==1: ys[td] = xs[td]
elif xs[td]==1: xs[td] = ys[td]
return x.expand(*xs), y.expand(*ys)
def dot(x, y):
assert(1<y.dim()<5)
x, y = align(x, y)
if y.dim() == 2: return x.mm(y)
elif y.dim() == 3: return x.bmm(y)
else:
xs,ys = x.size(), y.size()
res = torch.zeros(*(xs[:-1] + (ys[-1],)))
for i in range(xs[0]): res[i].baddbmm_(x[i], (y[i]))
return res
|
st116031
|
The latest version of pytorch now supports proper broadcasting, so you don’t have to use my hacky version any more
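For reference, a rough sketch of what the built-in replacements look like (assuming PyTorch 0.2+, where broadcasting and torch.matmul are available; sizes below are just illustrative):

import torch

m = torch.randn(3, 2)
v = torch.randn(2)
b = torch.randn(4, 3, 2)

# elementwise ops now broadcast automatically
print((m + v).size())                 # torch.Size([3, 2])
print((b * v).size())                 # torch.Size([4, 3, 2])

# torch.matmul covers the mm/bmm cases and broadcasts over leading dimensions
print(torch.matmul(b, m.t()).size())  # torch.Size([4, 3, 3])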
|
st116032
|
Hello,
I am using PyCUDA, and recently started using PyTorch due to its native GPU tensor support.
I would like to write a new PyTorch method, let’s say converting an RGB image to BW (see the PyCUDA equivalent here: https://github.com/QuantScientist/Data-Science-PyCUDA-GPU/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyCUDA/05%20PyCUDA%20image%20processing.ipynb) that uses CUDA directly.
Is that possible?
Thanks,
|
st116033
|
I am going through this example: https://github.com/pytorch/examples/tree/master/regression
Is there any way I can learn about the DistributedDataParallel functionality via this example?
If yes, then what are the changes which would be necessary for doing the same?
|
st116034
|
Hi there,
I have a variable whose size changes when the input dimension changes. For example, if the input is 10x2, then the variable should be 10x10. If the input is 25x2, then the variable should be 25x25. To my understanding, a variable is normally used to store weights, which have a fixed dimension. However, in my case the dimension of the variable depends on the input data, which can change. Does PyTorch currently support this kind of use?
Thanks!
Yiru
|
st116035
|
I have an input_matrix which is scipy verion of sparse matrix in csr format. It’s a binary representation and consists of only 1’s and 0’s.
> input_matrix
<1500x24995 sparse matrix of type '<type 'numpy.float32'>'
with 1068434 stored elements in Compressed Sparse Row format>
I load it into a DataLoader using the below code:
cuda = torch.cuda.is_available()
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
input_loader = DataLoader(input_matrix.toarray(), batch_size=32, shuffle=True, **kwargs)
Now when I check the input_loader in the interpreter, I see 0’s, 1’s and other values such as 2’s appearing.
> input_loader
1 1 1 ... 0 0 0
0 0 0 ... 0 0 0
0 1 1 ... 0 0 0
... ⋱ ...
0 2 2 ... 0 0 0
0 0 0 ... 0 0 0
1 1 1 ... 0 0 0
[torch.FloatTensor of size 32x24995]
If it helps, when I convert the csr_matrix into a tensor using torch.from_numpy(input_matrix) I do not see values other than 0’s and 1’s.
0 1 1 ... 0 0 0
0 0 0 ... 0 0 0
0 1 0 ... 0 0 0
... ⋱ ...
1 1 1 ... 0 0 0
0 0 0 ... 0 0 0
1 1 1 ... 0 0 0
[torch.FloatTensor of size 1500x24995]
Is the method employed to load the data correct? If not, how can I correctly load the data into the DataLoader?
|
st116036
|
I’ve got all my data in NumPy array files. Which way of storing the data would be most efficient for PyTorch, particularly in regard to multiple workers and shuffling? Would PyTorch’s approach with multiple workers and shuffling make the memmap reading inefficient? Is the overhead of opening tons of tiny files worse? Will the difference between the two approaches be significant? Thank you!
|
st116037
|
I’m trying to get my head around Conv2d. Here are two bits of code I’ve seen, from the MNIST and CIFAR10 examples in pytorch.
The MNIST one has a 1-channel input of a 28x28 image and produces 10 outputs, but the CIFAR10 one takes in 3 channels of a larger 32x32 image and produces only 6 outputs, even though they both look to have the same kernel size of 5. I’m clearly missing something fundamental here!
MNIST
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x)
CIFAR10
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
|
st116038
|
Without double-checking what are the actual parameters to Conv2d, I assume the first two parameters are not input/output size, but number of channels.
The number of channels is the depth of a stack of image. Like, if you have r/g/b channels, you’d have three images, on top of each other, forming a stack of 3 images. Thats 3 channels. But you can, and normally do, have more than 3 channels. Typically, as you go through the cnn, each layer will make the stack deeper (more images per stack), but the width/height of the stack of images will gradually decrease.
ie, the input to a cnn is not 3 dimensional, but 4 dimensions:
batch size
number channels (depth of each stack of images)
width of image
height of image
Think of each batch of inputs as being a bunch of cubes. Each example is a cube of num_channels * image height * image width.
|
st116039
|
yes, the first 2 parameters are in and out for channel
class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
i’m looking at the first layer in each example
mnist: self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
cifar10: self.conv1 = nn.Conv2d(3, 6, 5) # i assume 5 here is the same as kernel_size=5 in the example in the line above
so input channels of 1 for mnist and 3 for cifar10 makes sense.
but how are the outputs 10 for mnist but only 6 for cifar10. these are the numbers i cannot understand.
in my head mnist 1 x 28 x 28 with a kernel of 5 produces 1 x 24 x 24
and cifar10 3 x 32 x 32 with a kernel of 5 produces 3 x 28 x 28.
it’s the concept of how the images get ‘deeper’ that i’m missing.
(thinking of things as cubes when it’s ‘Conv2d’ is a little confusing. why not ‘Conv3d’?)
|
st116040
|
oh hang on. am i just making this more complicated than it is ? !
are the 6 and 10 just arbitrary choices that we’ll create that many clones of the smaller image…
|
st116041
|
well, they’re not really clones, but yeah, you can make the output channel size/count any number you want (within the bounds of available memory etc)
|
st116042
|
great, thanks ! i was thinking it had been calculated as part of the convolutions. d’oh
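For anyone else following along, here is a small sketch that makes the shape bookkeeping explicit: the number of output channels is a free choice, and only the spatial size is computed from the kernel/stride/padding (the exact numbers below are just an illustration):

import torch
import torch.nn as nn
from torch.autograd import Variable

x = Variable(torch.randn(1, 3, 32, 32))  # one CIFAR-style image: batch, channels, H, W
conv = nn.Conv2d(3, 6, 5)                # 6 output channels is an arbitrary design choice
pool = nn.MaxPool2d(2, 2)

y = conv(x)
print(y.size())        # (1, 6, 28, 28): spatially 32 - 5 + 1 = 28
print(pool(y).size())  # (1, 6, 14, 14)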
|
st116043
|
I have pulled out some weight and bias data from a pre-trained tensorflow CNN model and saved them in txt files.
I wonder how I can load this data into an NN model built with nn.Sequential in my PyTorch code, like below?
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(
                in_channels=4,
                out_channels=32,
                kernel_size=8,
                stride=4,
                padding=2,
            ),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
|
st116044
|
You can access and set the convolution kernel by doing
self.conv1[0].weight.data = pretrained_weight
self.conv1[0].bias.data = pretrained_bias
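A minimal sketch of the full round trip, assuming (hypothetically) that the text files were written with numpy.savetxt and that the weight array is reshaped to PyTorch’s (out_channels, in_channels, kH, kW) layout; the file names here are made up:

import numpy as np
import torch

w = np.loadtxt('conv1_weight.txt').reshape(32, 4, 8, 8)  # hypothetical file, reshaped to PyTorch layout
b = np.loadtxt('conv1_bias.txt')

model = CNN()
model.conv1[0].weight.data = torch.from_numpy(w).float()
model.conv1[0].bias.data = torch.from_numpy(b).float()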
|
st116045
|
Thx for your answers!! But I have some puzzles!
I have done below:
mycnn = CNN()
print (mycnn.state_dict().keys())
it shows:
[‘conv1.0.weight’, ‘conv1.0.bias’, ‘conv2.0.weight’, ‘conv2.0.bias’, ‘conv3.0.weight’, ‘conv3.0.bias’, ‘fc1.weight’, ‘fc1.bias’, ‘out.weight’, ‘out.bias’]
Then I try to do below:
print (mycnn.conv1[0].bias.data)
print (mycnn.state_dict()[‘conv1.0.bias’].data)
The outputs are different.
And I check the gradient:
It shows
mycnn.conv1[0].bias.grad = None
mycnn.state_dict()[‘conv1.0.bias’].grad is an ERROR
AttributeError: ‘torch.FloatTensor’ object has no attribute ‘grad’
Can you tell the difference between “mycnn.conv1[0].bias” and “mycnn.state_dict()[‘conv1.0.bias’]” in my Pytorch model?
|
st116046
|
Maybe what’s stored in mycnn.state_dict() are just PyTorch tensors, not Variables.
|
st116047
|
I have built exactly the same model in both TF and PyTorch, and I trained it in TF. For some reason, I have to transfer the pretrained weights to PyTorch.
The network is like:
[attached image: network architecture diagram]
In TF, the Conv2d filter shape is [filter_height, filter_width, in_channels, out_channels], while in PyTorch it is (out_channels, in_channels, kernel_size[0], kernel_size[1]).
So I have done below in TF:
and I transfer to pytorch like:
It turns out that the DQN in PyTorch does not work as well as in TF!
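For reference, the usual axis reordering from the TF layout to the PyTorch layout looks roughly like this (a sketch with stand-in arrays, not the actual code used above):

import numpy as np
import torch
import torch.nn as nn

# stand-in for a kernel exported from TF: (filter_height, filter_width, in_channels, out_channels)
tf_kernel = np.random.randn(8, 8, 4, 32).astype(np.float32)
tf_bias = np.random.randn(32).astype(np.float32)

conv = nn.Conv2d(4, 32, kernel_size=8, stride=4, padding=2)
conv.weight.data = torch.from_numpy(np.transpose(tf_kernel, (3, 2, 0, 1)).copy())  # -> (out, in, kH, kW)
conv.bias.data = torch.from_numpy(tf_bias)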
|
st116048
|
Hi,
I’m new to PyTorch and trying to reimplement FC-DenseNet.
So, I defined the up sampling layer as:
class TransUpBlock(nn.Module):
def __init__(self, num_input_feat, num_output_feat):
super(TransUpBlock, self).__init__()
self.deconv = nn.ConvTranspose2d(num_input_feat,
num_output_feat, kernel_size=3, stride=2, padding=0, bias=False)
def forward(self, input_, skip):
output_ = self.deconv(input_)
output_ = center_crop(output_, skip.size(2), skip.size(3)) # From PyTorch Tiramiso
output_ = torch.cat([output_, skip], 1) # From PyTorch Tiramiso
return output_
where the training code contains:
model = model.cuda()
.
.
.
for i , (input, target) in enumerate(train_loader):
target = target.cuda(async=True)
input = input.cuda()
input_var = torch.autograd.Variable(input)
target_var = torch.autograd.Variable(target)
output = model(input_var)
When I feed the network, I got this error message:
File "My_Train.py", line 271, in train
output = model(input_var) #(A) performs forward pass
File "/usr/lib64/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/boroujerdi/Dokumente/CVPR_2017_Open_Access_Repository/DenseNet/My_IMP/My_Net.py", line 215, in forward
output_ = self.transUpBlocks[i](output_, skip)
File "/usr/lib64/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/boroujerdi/Dokumente/CVPR_2017_Open_Access_Repository/DenseNet/My_IMP/My_Net.py", line 92, in forward
output_ = self.deconv(input_)
File "/usr/lib64/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/usr/lib64/python2.7/site-packages/torch/nn/modules/conv.py", line 524, in forward
output_padding, self.groups, self.dilation)
File "/usr/lib64/python2.7/site-packages/torch/nn/functional.py", line 137, in conv_transpose2d
return f(input, weight, bias)
RuntimeError: CUDNN_STATUS_BAD_PARAM
Do you think there is something wrong with my cudnn?
Any kind of help will be appreciated.
|
st116049
|
It looks like we often provide our own embedding prior to the LSTM, and then assign input_size == hidden_size for the LSTM, e.g. http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html:
self.embedding = nn.Embedding(input_size, hidden_size)
self.gru = nn.GRU(hidden_size, hidden_size)
It seems like this is kind of ‘wasteful’, since it’s adding an additional hidden_size x hidden_size matrix multiply at the input of the LSTM, which we don’t need in fact?
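For reference, nothing in the API forces the two sizes to match; the tutorial just keeps them equal for simplicity. A sketch with a separate embedding dimension (sizes made up):

import torch.nn as nn

vocab_size, embed_dim, hidden_size = 10000, 128, 256
embedding = nn.Embedding(vocab_size, embed_dim)
gru = nn.GRU(embed_dim, hidden_size)  # input_size does not have to equal hidden_size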
|
st116050
|
I want to create a loss function that is able to do ~ this:
> def loss(last_ref , ref, last_ypred, y_pred):
> last = last_ref + last_ypred
> now = ref + y_pred
> return pow(now - last, 2)
It is not clear (to me!) how to do this. I am just trying my 1st crack at moving to PyTorch. Thanks in advance for any thoughts.
-CD
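For what it’s worth, a sketch of one way such a loss can be written: any plain Python function built from tensor ops on Variables works, as long as it returns a scalar you can call .backward() on (the shapes below are made up):

import torch
from torch.autograd import Variable

def my_loss(last_ref, ref, last_ypred, y_pred):
    last = last_ref + last_ypred
    now = ref + y_pred
    return (now - last).pow(2).mean()  # reduce to a scalar so backward() works

y_pred = Variable(torch.randn(10, 1), requires_grad=True)
loss = my_loss(Variable(torch.randn(10, 1)), Variable(torch.randn(10, 1)),
               Variable(torch.randn(10, 1)), y_pred)
loss.backward()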
|
st116051
|
I was using
mdl_sgd = torch.nn.Sequential(
torch.nn.Linear(D_sgd,1)
)
but wanted to know the scale of its random initialization and whether it’s Gaussian or uniform.
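One way to answer this empirically is to just look at the freshly initialized weights, or to overwrite them with an explicit scheme; the default used by nn.Linear can differ between versions, so inspecting it directly is the safest bet. A quick sketch:

import torch

lin = torch.nn.Linear(5, 1)
print(lin.weight.data)              # see what the default init produced
print(lin.weight.data.abs().max())  # rough sense of its scale

# overwrite with a scheme you control, if you don't want to rely on the default
lin.weight.data.uniform_(-0.1, 0.1)
lin.bias.data.zero_()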
|
st116052
|
Hello,
I have a module which uses another module as basis, which I use like this:
class FirstModule(nn.Module):
def __init__(self, secondModule):
self.secondModule = secondModule #of self.add_module('secondModule', secondModule)
#other things...
The problem with this is that the parameters of secondModule will show up in the firstModule parameter list, which I don’t want; I need an instance of the second module there, but I don’t need its parameters / won’t backpropagate through them.
So I resorted to wrap the second module instance in a list, so that it’s parameters are invisible:
class FirstModule(nn.Module):
def __init__(self, secondModule):
self.secondModule = [secondModule]
#other things...
The issue with this (apart from being awkward) is that sometimes I would like pytorch to know that the secondModule is there. For example, when calling firstModule.cuda(), I would like secondModule.cuda() to be called, too, which won’t happen in this case.
So what is the cleanest way of solving the situation? Is there a way to remove the parameters of secondModule from the firstModule parameter list, but in such a way that other functions are aware that secondModule is there?
|
st116053
|
If you do not want to backprop through the parameters of self.secondModule, you could do:
for p in self.secondModule.parameters():
p.requires_grad = False
|
st116054
|
Thank you for your answer! But not only do I not want to backprop, I also don’t want those parameters to show up in self.parameters() (as I need to do something on them that I don’t want to do on the parameters of secondModule).
|
st116055
|
filter through the parameters before handing them to the optimizer, e.g.:
exclude = set(model.secondModule.parameters())
new_params = [p for p in model.parameters() if p not in exclude]
optimizer = torch.optim.Adam(new_params, lr=args.learning_rate)
|
st116056
|
How does one make sure that the updates for parameters indeed happen when one subclasses nn modules (or uses torch.nn.Sequential)? I tried making my own class but I was never able to update the parameters for some reason. The SGD code for the nn module is (https://github.com/brando90/simple_regression/blob/master/minimum_example.py):
mdl_sgd = torch.nn.Sequential( torch.nn.Linear(D_sgd,1,bias=False) )
...
for i in range(nb_iter):
# Forward pass: compute predicted Y using operations on Variables
batch_xs, batch_ys = get_batch2(X,Y,M,dtype) # [M, D], [M, 1]
## FORWARD PASS
y_pred = mdl_sgd.forward(X)
## LOSS
loss = (1/N)*(y_pred - batch_ys).pow(2).sum()
## Manually zero the gradients after updating weights
mdl_sgd.zero_grad()
## BACKARD PASS
loss.backward() # Use autograd to compute the backward pass. Now w will have gradients
## SGD update
for W in mdl_sgd.parameters():
#print(W.grad.data)
W.data = W.data - eta*W.grad.data
which does not work for some reason unknown to me, though when I create the variables explicitly the updates do happen (https://github.com/brando90/simple_regression/blob/master/direct_example.py):
X = poly_kernel_matrix(x_true,Degree_mdl) # maps to the feature space of the model
X = Variable(torch.FloatTensor(X).type(dtype), requires_grad=False)
Y = Variable(torch.FloatTensor(Y).type(dtype), requires_grad=False)
w_init=torch.randn(D_sgd,1).type(dtype)
W = Variable( w_init, requires_grad=True)
...
for i in range(nb_iter):
# Forward pass: compute predicted Y using operations on Variables
batch_xs, batch_ys = get_batch2(X,Y,M,dtype) # [M, D], [M, 1]
## FORWARD PASS
#y_pred = mdl_sgd.forward(X)
y_pred = batch_xs.mm(W)
## LOSS
loss = (1/N)*(y_pred - batch_ys).pow(2).sum()
## BACKARD PASS
loss.backward() # Use autograd to compute the backward pass. Now w will have gradients
## SGD update
W.data = W.data - eta*W.grad.data
## Manually zero the gradients after updating weights
#mdl_sgd.zero_grad()
W.grad.data.zero_()
I am not 100% sure what I am doing wrong, but if you do know, feel free to tell me!
I also made a more detailed SO question since I’ve received very good responses from SO in the past (https://stackoverflow.com/questions/45626848/how-does-one-make-sure-that-the-parameters-are-update-manually-in-pytorch-using).
|
st116057
|
Generally you shouldn’t be reassigning .data of Variables, but it should work I think. All built-in optimizers do the update in-place (W.data.sub_(lr*W.grad.data)).
|
st116058
|
then how should I be updating the variables if I want to do it manually? (note I choose SGD as an example since I knew what should happen but I really wanted to play around with different update rules)
Note: I now tried the update rule you suggested; it didn’t seem to work:
W.data.sub_(eta*W.grad.data)
I wonder if there is something really small and weird that I am doing that makes code that seems like it should work not work…
|
st116059
|
note that I am re-assigning .data because the tutorials do it:
http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#pytorch-nn
for param in model.parameters():
param.data -= learning_rate * param.grad.data
and
w1.data -= learning_rate * w1.grad.data
w2.data -= learning_rate * w2.grad.data
Also, why should we not be re-assigning .data?
|
st116060
|
Have a look at my answer in What is the recommended way to re-assign/update values in a variable (or tensor)?
Whenever we have an underscore in the end of a function in pytorch, that means that the function is in-place.
So x.sub_(w) is the same as x -= w for x and w tensors.
|
st116061
|
so in-place means the usual in-place as in normal algorithms? e.g. [2,1]->[1,2] in place means to me that the 1 and 2 swap places without creating copies of the objects.
|
st116062
|
in-place in here means that there is no extra memory allocation.
One more example, when you do
a = a + 1
you allocate a new tensor whose value is a + 1, and you assign it to a tensor called a, overwriting the previous reference to the tensor a. Still, the memory for a+1 had to be freshly allocated.
But when you do
a += 1
no extra memory is allocated, and the addition is performed directly in the original elements of a.
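A tiny sketch of the difference, using plain Python object identity just to make the rebinding visible:

import torch

a = torch.zeros(3)
obj = id(a)

a += 1            # in-place: same tensor object, same underlying memory
assert id(a) == obj

a = a + 1         # a new tensor is allocated and the name 'a' is rebound to it
assert id(a) != obj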
|
st116063
|
@fma makes sense. What intrigues me right now is why
W = W - eta*W.grad
does not update my parameters. I understand the issue you mentioned about new variable allocation etc, but despite that gradient descent should still work (just memory inefficiently)…this is puzzling me.
|
st116064
|
The reason why it doesn’t update your parameters is simple: you have references of your parameters elsewhere, and you are overwriting the variable that is supposed to reference your parameter.
Simple example:
a = Variable(torch.rand(2)) # for example create in a Module
b = a # when you get the parameter
# now perform operation in b
b = b + 1
print(a)
print(b) # they differ!
|
st116065
|
Thanks that makes sense conceptually.
Also, now I actually figured out what was wrong with some other code I was talking about. If I change the update rule to:
W = W - eta*W.grad
it doesn’t work. The reason is because the above is actually nested in a loop that fetches parameters from my Sequential model, so doing (first):
#1st
for W in mdl.parameters():
W = W - eta*W.grad
vs
#2nd
for W in mdl.parameters():
W.data = W.data - eta*W.grad.data
even if both are conceptually wrong, they are semantically very different. The first one rebinds a temporary variable/name to a new variable. Since the original variable held in mdl is never updated, the model looks as if it never trains. Thus the 2nd method makes the model actually “work” and get trained but the first one does not. This is because the second actually changes the variables inside of mdl.
I know neither is the way pytorch is meant to work (it seems) but at least I understand the behaviour I am seeing now as I change the lines between:
for W in mdl_sgd.parameters():
#print(W.grad.data)
W = W - eta*W.grad
#W.data.sub_(eta*W.grad.data)
#W.data = W.data - eta*W.grad.data
from https://stackoverflow.com/questions/45626848/how-does-one-make-sure-that-the-parameters-are-update-manually-in-pytorch-using#comment78249041_45626848
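Putting the pieces of this thread together, a minimal working sketch of the manual update (in-place, so the parameters stored inside the module really do change; the data and learning rate are made up):

import torch
from torch.autograd import Variable

mdl = torch.nn.Sequential(torch.nn.Linear(3, 1, bias=False))
x = Variable(torch.randn(8, 3))
y = Variable(torch.randn(8, 1))
eta = 0.01

for _ in range(100):
    loss = (mdl(x) - y).pow(2).mean()
    loss.backward()
    for W in mdl.parameters():
        W.data.sub_(eta * W.grad.data)  # in-place update of the tensor the module holds
    mdl.zero_grad()                     # clear grads, since backward() accumulates them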
|
st116066
|
oh wow you beat me by like 4 minutes. Yea I figured that out. Essentially since b=b+1 creates a new variable with a new Python id, it doesn’t mean that a changes (which in your example we are assuming a is bound inside some class or module). Thanks so much for your patience!
|
st116067
|
I was trying to get a mask from multiple conditions like x > 0.5 and x<1
however, if x is a tensor, the framework doesn’t support it (Python’s and cannot be applied elementwise to tensors).
What I’m doing now is
(x>0.5 + x<1) == 2
I feel like this would be slower than boolean calculation if there were a boolean calculation.
Is there any other more decent way?
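For what it’s worth, a sketch of the usual idiom at this point: with explicit parentheses the + trick works, and multiplying the two ByteTensor masks acts as an elementwise logical AND:

import torch

x = torch.rand(5)

mask_sum = ((x > 0.5) + (x < 1)) == 2  # the + trick, with the parentheses it needs
mask_mul = (x > 0.5) * (x < 1)         # elementwise AND of the two byte masks

print(mask_sum)
print(mask_mul)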
|