st101900 | Indeed, this will not send data to the CPU. But you want to add a .detach() to make sure that the computational graph associated with loss is not kept around; otherwise your memory usage is going to quickly explode. |
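A minimal sketch of that pattern (model, loader, criterion and optimizer here are assumptions, not from the post):
running_loss = 0.0
for data, target in loader:
    loss = criterion(model(data), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    running_loss += loss.detach()  # keeps the value, drops the graph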
st101901 | import torch
import torch.nn as nn
from torch.autograd.function import Function

class CenterLoss(nn.Module):
    def __init__(self, num_classes, feat_dim, size_average=True):
        super(CenterLoss, self).__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.centerlossfunc = CenterlossFunc.apply
        self.feat_dim = feat_dim
        self.size_average = size_average

    def forward(self, label, feat):
        batch_size = feat.size(0)
        feat = feat.view(batch_size, -1)
        # To check the dim of centers and features
        if feat.size(1) != self.feat_dim:
            raise ValueError("Center's dim: {0} should be equal to input feature's dim: {1}".format(self.feat_dim, feat.size(1)))
        loss = self.centerlossfunc(feat, label, self.centers)
        loss /= (batch_size if self.size_average else 1)
        return loss

class CenterlossFunc(Function):
    @staticmethod
    def forward(ctx, feature, label, centers):
        ctx.save_for_backward(feature, label, centers)
        centers_batch = centers.index_select(0, label.long())
        return (feature - centers_batch).pow(2).sum() / 2.0

    @staticmethod
    def backward(ctx, grad_output):
        feature, label, centers = ctx.saved_tensors
        centers_batch = centers.index_select(0, label.long())
        diff = centers_batch - feature
        # init every iteration
        counts = centers.new(centers.size(0)).fill_(1)
        ones = centers.new(label.size(0)).fill_(1)
        grad_centers = centers.new(centers.size()).fill_(0)
        counts = counts.scatter_add_(0, label.long(), ones)
        grad_centers.scatter_add_(0, label.unsqueeze(1).expand(feature.size()).long(), diff)
        grad_centers = grad_centers / counts.view(-1, 1)
        return - grad_output * diff, None, grad_centers |
st101902 | A Function is an elementary building block of the autograd engine. It should implement both the forward transformation input -> output and the backward transformation grad_output -> grad_input. This backward operation corresponds to applying the chain rule of differentiation:
If you have a loss L and you are looking for dL/dinput, then using the chain rule, dL/dinput = dL/doutput * doutput/dinput. The first term here is grad_output and the second is what is implemented in the backward function. |
st101903 | The only question I am left with now is why, in return - grad_output * diff, None, grad_centers, there is a "-" negative sign on grad_output. |
st101904 | I guess because even though the forward writes (feature - center_batch), diff = centers_batch - feature. And so there is an extra - to get the proper gradients. |
st101905 | If your loss function is differentiable, you don’t actually need to implement a Function. Just write an nn.Module that contains the forward pass and the gradients will be computed for you (if they exist).
If it’s not differentiable, or the gradients that you want are not the ones for the loss you compute, then yes, creating a Function that way is the way to go. |
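For example, a differentiable center-style loss could be written as a plain nn.Module and autograd derives the gradients (a sketch with assumed inputs; the class name is hypothetical):
import torch.nn as nn

class SimpleCenterLoss(nn.Module):  # hypothetical name
    def forward(self, feat, centers_batch):
        # autograd builds the backward pass automatically
        return (feat - centers_batch).pow(2).sum() / (2.0 * feat.size(0))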
st101906 | During the training of deep learning models, we want to maximise the usage of GPUs.
Optimally we want GPU usage to be at 100% at all times.
I’m going to follow up on my previous post, Recording loss history without I/O, taking reference from https://www.sagivtech.com/2017/09/19/optimizing-pytorch-training-code/ (don’t worry about not opening them).
It is often beneficial to record training progress in deep learning pipelines. Typically we would display the training statistics at the end of every batch. To record the training history on CPU is suboptimal due to the wait time needed for the transfer of data from GPU to CPU after every training step. Below is an example of bad training practice.
# We assume that loss_history is an array
# and loss is a cuda tensor with size of [1]
loss_history.append(loss.item())
Why? Because loss.item() requires the value to be passed from GPU to CPU. This takes time, which otherwise could have been used to run inference and backprop.
I would like to ask for the proper way to collate training statistics to optimize GPU usage.
Thanks in advance! |
st101907 | [screenshot of the error message]
PyTorch 0.4 is OK, but on PyTorch 0.3 I get this error.
Hello guys, does anyone know why? Many thanks for any reply! |
st101908 | target has to be long type, maybe target = target.long() will help
https://pytorch.org/docs/stable/nn.html?highlight=crossentropy#torch.nn.CrossEntropyLoss |
st101909 | Thanks for your reply. I wrote my code for an early version, but it cannot run on the new version of PyTorch. I tried your advice, but it did not work, so I changed back to the old version of PyTorch. |
st101910 | Hi, I want to use Coil20 dataset for my project. I have the dataset in MAT file. Is there any way to use this data in PyTorch? |
st101911 | You can use any Python package that can load MAT files (I think scipy.io can), and then use torch.from_numpy to convert the loaded numpy arrays to PyTorch tensors. |
st101912 | Thanks Adam! I can load MAT files into Python using scipy.io as you said. But there is one doubt: in MATLAB we read a multidimensional array as HxWxDxN, where H denotes the number of rows (1st dimension), W the number of columns (2nd dimension), D the depth (3rd dimension) and N the number of instances (4th dimension). However, in Python the convention is different; we denote it NxDxHxW. So do I need to reshape the multidimensional array, as I have to use the data to train a convolutional neural network? |
st101913 | You don’t need to reshape it, you need a transpose (the permute function might be the most convenient here). |
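A sketch of the full loading path (the file name 'coil20.mat' and the key 'X' are assumptions):
import scipy.io
import torch

mat = scipy.io.loadmat('coil20.mat')
data = torch.from_numpy(mat['X'])             # H x W x D x N, as stored by MATLAB
data = data.permute(3, 2, 0, 1).contiguous()  # -> N x D x H x W for PyTorch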
st101914 | I have used the transpose method and then converted the numpy ndarray into a tensor. With this done, I combined the data (features or images) and target values using data_utils.TensorDataset,
and loaded the data using data_utils.DataLoader. I have used these two things in the following way:
train = data_utils.TensorDataset(features, targets)
train_loader = data_utils.DataLoader(train, batch_size=50, shuffle=True)
is this correct? |
st101915 | Soniya, I also want to load .mat files containing the MNIST data set, so could you please send me your complete code for making the dataset iterable using the PyTorch DataLoader? |
st101916 | Hi everyone.
I wanted to split my MNIST training set, consisting of 60000 images, into a training and a validation set consisting of 50000 and 10000 images respectively.
I got the idea from Stack Overflow. A part of my code is given below, and it’s working correctly. Hope it will be of some help in the near future.
transform= transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,),(0.3081,))])
dataset= dsets.MNIST(root='./data/', transform=transform, train= True)
test_set= dsets.MNIST(root='./data/', transform=transform, train= False)
#Setting up hyper-parameters
batch_size= 32
learning_rate= 0.001
epochs= 2
shuffle_dataset= True
random_seed= 7 # so that we get the same train and val set every time
validation_split= .2 # percentage of trainset to make as validation set
# we are taking to be 20 % = .2
dataset_size= len(dataset)
indices= list(range(dataset_size))
#split= int(np.floor(validation_split * dataset_size))
#Since we want 10000 images as validation set
#we can set split= 10000
split= 10000
if shuffle_dataset:
    np.random.seed(random_seed)
    np.random.shuffle(indices)
train_indices, val_indices = indices[split:], indices[:split]
#Create PyTorch data samplers and loaders
train_sampler= SubsetRandomSampler(train_indices)
val_sampler= SubsetRandomSampler(val_indices)
#Now load the dataset into train and val
train_loader= torch.utils.data.DataLoader(dataset,batch_size= batch_size, sampler= train_sampler)
val_loader= torch.utils.data.DataLoader(dataset, batch_size=batch_size, sampler=val_sampler)
#test loader is as usual from the MNIST test set
test_loader= torch.utils.data.DataLoader(test_set, batch_size= batch_size)
Now |
st101917 | from scipy.io import loadmat
from PIL import Image
import torchvision.transforms as t
img = loadmat(path) # path of images to be loaded
img = img['I'] # img is stored under key 'I' (Assuming)
img = Image.fromarray(img,"L") # Assuming matfile contained np.array.
# we need to convert to PIL image
# "L" for grayscale "RGB" for color
img = t.ToTensor()(img) # convert to channel x height x width
Hope this helps
Happy Coding |
st101918 | I am a little confused about the batch size with DistributedDataParallel. Each machine has a process, and the dataloader loads data with a specific batch size. As in this line (https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py#L49), the distributed sampler handles its own indices of the dataset according to the rank.
So, does that mean that if I set the batch size to 64, the loss will be the average over 128 samples? Please correct me if there is something wrong with my understanding. |
st101919 | Hi !
While doing some multiprocessing, I ran into this error :
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/queues.py", line 241, in _feed
obj = ForkingPickler.dumps(obj)
File "/usr/lib/python3.5/multiprocessing/reduction.py", line 50, in dumps
cls(buf, protocol).dump(obj)
File "/home/dylan/.virtualenvs/testGo/lib/python3.5/site-packages/torch/multiprocessing/reductions.py", line 104, in reduce_storage
metadata = storage._share_cuda_()
RuntimeError: invalid device pointer: 0x204bc6a00 at /home/dylan/Desktop/superGo/pytorch/aten/src/THC/THCCachingAllocator.cpp:259
The context of this error is the following :
I launched my training in another process (works fine), initializing the model (player) and training a deepcopy of it (new_player) on the same newly launched process. At some point during training I want to asynchronously launch a new process to evaluate the models against each other like this :
pool = MyPool(1)
pool.apply_async(evaluate, args=(player, new_player,), callback=new_agent)
(MyPool is an extension of the class Pool from multiprocessing.Pool with the daemon set to False)
It seems that the parameters of the models can’t get copied to the new process for some reason!
Any idea on how to fix this ?
I’m using Python 3.5.2 and PyTorch from source version 0.4.0a0+d93d41b
Thanks ! |
st101920 | So I managed to reproduce the error following this code :
https://gist.github.com/dylandjian/05d872c6d3d74e80c04bb70187090c3e
import multiprocessing
import multiprocessing.pool
import torch
class NoDaemonProcess(multiprocessing.Process):
    # make 'daemon' attribute always return False
    def _get_daemon(self):
        return False
    def _set_daemon(self, value):
(file truncated; see the gist for the full code)
Also, I didn’t see it at first, but the copy is correctly sent to the second new process, yet fails to get copied to the third? |
st101921 | Is it solved now? Sorry for the repeated question. |
st101922 | Hi, help me solve this problem please!
Here is some of my installation experience with PyTorch on Windows 8.1.
JetBrains PyCharm 2017.3.3 x64
Traceback (most recent call last):
  File "C:/Users/DmitrySony/PycharmProjects/PyTorch/555y.py", line 1, in <module>
    import torch
  File "C:\Users\DmitrySony\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\__init__.py", line 78, in <module>
    from torch._C import *
ImportError: DLL load failed: The specified procedure could not be found.
[screenshot of the environment] |
st101923 | I had that problem when I compiled PyTorch for Python 3.6 on Windows and then tried installing it again for a Python 3.7 environment. Instead, I did the following:
Switch to the Python 3.7 environment, e.g. activate python3.7 (or whatever you named the environment)
Run python setup.py clean
Then recompile (using python setup.py install, or cuda90.bat if using Peterjc123’s scripts). |
st101924 | I’ve trained a small autoencoder on MNIST and want to use it to make predictions on an input image. This is what I do, in the same jupyter notebook, after training the model.
example_index = 67
# make example a torch tensor
value = torch.from_numpy(x_train[example_index])
# then put it on the GPU, make it float and insert a fake batch dimension
test_value = Variable(value.cuda())
test_value = test_value.float()
test_value = test_value.unsqueeze(0)
# pass it through the model
prediction = model(test_value)
# get the result out and reshape it
cpu_pred = prediction.cpu()
result = cpu_pred.data.numpy()
array_res = np.reshape(result, (28,28))
then I plot both model output and this input. No matter how I change the input, the output image is exactly the same. Any ideas? |
st101925 | 1. Print out your input values. Make sure they’re actually changing.
2. Print out intermediate layer activations. You can use module hooks: http://pytorch.org/docs/nn.html#torch.nn.Module.register_forward_hook
3. It’s good practice to call model.eval() to switch to “evaluate” mode before predictions. Currently the only layers this affects are batch norm and dropout (so if you don’t have those types of layers it won’t make a difference). |
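A minimal sketch of suggestion 2, assuming model is the trained autoencoder from the question:
def print_stats(module, input, output):
    print(module.__class__.__name__, output.mean().item(), output.std().item())

handles = [m.register_forward_hook(print_stats) for m in model.children()]
prediction = model(test_value)  # prints stats for each submodule
for h in handles:
    h.remove()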
st101926 | Printed the inputs, they do change, they’re fine.
Printed out all intermediary layers. None look like they should. I don’t really understand why this happens.
Don’t understand what model.eval() does. My model class has no method called eval. I included it into my code and it does absolutely nothing.
This is what I get on mnist:
[image of the model output]
Output is exactly the same no matter what input example I give. ( I wanted to put all layers, but this forum does not allow that…)
Still trying to make it work. Here is my training log:
Epoch 1 training loss: 0.148
Epoch 1 validation loss: 0.097
Epoch 2 training loss: 0.083
Epoch 2 validation loss: 0.075
Epoch 3 training loss: 0.071
Epoch 3 validation loss: 0.070
Epoch 4 training loss: 0.067
Epoch 4 validation loss: 0.070
Epoch 5 training loss: 0.067
Epoch 5 validation loss: 0.069
Epoch 6 training loss: 0.067
Epoch 6 validation loss: 0.069
Epoch 7 training loss: 0.067
Epoch 7 validation loss: 0.069
Epoch 8 training loss: 0.067
Epoch 8 validation loss: 0.069
Epoch 9 training loss: 0.067
Epoch 9 validation loss: 0.069
Epoch 10 training loss: 0.067
Epoch 10 validation loss: 0.069
The validation loss is slightly higher than the training loss because I’m averaging it over the entire validation set, as opposed to over 2000 minibatches (as with the training loss). This is a small issue I will solve later.
Loss function is MSE and learning rate is 0.0001.
EDIT: WORKS !
The only change I made was to go from optimizer.SGD to optimizer.Adam. Any idea why this has such a huge effect? |
st101927 | I know this is super late, but I think that problem is related to a local minimum, like the average of all the training images, and SGD gets stuck in that minimum. |
st101928 | Hi Everyone.
I’m confused by the ordering of the parameters when you call model.parameters() and model.named_parameters().
Here is a portion of my model summary (using pytorch-summary) where I’ve added in the divisions between layers/blocks:
# ----------------- conv2dtranspose
ConvTranspose2d-32 [-1, 128, 114, 114] 131,200
# ----------------- conv-block
Conv2d-33 [-1, 128, 112, 112] 295,040
ReLU-34 [-1, 128, 112, 112] 0
Conv2d-35 [-1, 128, 110, 110] 147,584
ReLU-36 [-1, 128, 110, 110] 0
# ----------------- squeeze and excitation block
AdaptiveAvgPool2d-37 [-1, 128, 1, 1] 0
Linear-38 [-1, 8] 1,032
ReLU-39 [-1, 8] 0
Linear-40 [-1, 128] 1,152
Sigmoid-41 [-1, 128] 0
SqueezeExcitation-42 [-1, 128, 110, 110] 0
The corresponding parameters (layer-index, name, shape) looks like…
# ----------------- conv2dtranspose
20 up_blocks.0.up.weight torch.Size([256, 128, 2, 2])
21 up_blocks.0.up.bias torch.Size([128])
# ----------------- squeeze and excitation block
22 up_blocks.0.conv_block.se.fc.0.weight torch.Size([8, 128])
23 up_blocks.0.conv_block.se.fc.0.bias torch.Size([8])
24 up_blocks.0.conv_block.se.fc.2.weight torch.Size([128, 8])
25 up_blocks.0.conv_block.se.fc.2.bias torch.Size([128])
# ----------------- conv-block
26 up_blocks.0.conv_block.conv_layers.0.weight torch.Size([128, 256, 3, 3])
27 up_blocks.0.conv_block.conv_layers.0.bias torch.Size([128])
28 up_blocks.0.conv_block.conv_layers.2.weight torch.Size([128, 128, 3, 3])
29 up_blocks.0.conv_block.conv_layers.2.bias torch.Size([128])
The issue is that model.parameters() is reversing the order of the conv-block and the squeeze-and-excitation block.
Is this the expected behavior? Is there a way to ensure model.parameters() returns the weights in the order they are executed in the forward pass?
CONTEXT: I was looking to transfer weights from a Keras model to PyTorch. My approach was simple - I converted the Keras weights to a list of numpy arrays, where I used np.swapaxes to change from bands-last to bands-first. I was then planning on doing something like
def update_weights(model):
    for i, ((tn, tp), kp) in enumerate(zip(model.named_parameters(), keras_weights)):
        kp = torch.tensor(kp)
        if tp.shape == kp.shape:
            tp.data = kp.data
        else:
            print("SHAPE MISMATCH:", i, tn, tp.shape, kp.shape)
    return model |
st101930 | Hi,
The thing is that parameters are created in the __init__ method and no information about how they’re going to be used in forward is known in advance.
If you want to enforce an order, storing your Modules in an nn.Sequential or a nn.ModuleList should work. |
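A small sketch of that suggestion: if the submodules are registered in execution order (e.g. in an nn.Sequential), named_parameters() follows that order. The layer sizes below are illustrative.
import torch.nn as nn

block = nn.Sequential(
    nn.ConvTranspose2d(256, 128, kernel_size=2, stride=2),
    nn.Conv2d(128, 128, 3),
    nn.ReLU(),
)
for name, p in block.named_parameters():
    print(name, p.shape)  # listed in the order defined above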
st101931 | Hey, I have a time series prediction problem.
I would like to have a model which uses LSTM followed by a linear regression layer and to use the previous time step output from the linear regression layer as an additional feature for the LSTM input in the next time step.
I have added a picture to clarify
Is there a simple way to implement this in Pytorch?
I’m having trouble because the input to the LSTM is a sequence, say x(0), x(1), ..., x(200), and the LSTM simply outputs the cell state and output/hidden state for the whole sequence in one shot, so I can’t incorporate the predictions of the additional linear layer.
[diagram of the proposed architecture]
Or maybe a more suitable way for doing this is to have a linear transformation of previous output: Vy(t)+b and add this to the hidden state of the LSTM?
Any help please? |
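One hedged way to implement the feedback idea is to unroll the sequence manually with nn.LSTMCell, concatenating the previous prediction to each input step. The sizes and names below are assumptions, not from the post.
import torch
import torch.nn as nn

input_size, hidden_size = 1, 32
cell = nn.LSTMCell(input_size + 1, hidden_size)  # +1 for the fed-back prediction
linear = nn.Linear(hidden_size, 1)

x = torch.randn(200, 1, input_size)  # seq_len x batch x feature
h = torch.zeros(1, hidden_size)
c = torch.zeros(1, hidden_size)
y_prev = torch.zeros(1, 1)
outputs = []
for t in range(x.size(0)):
    inp = torch.cat([x[t], y_prev], dim=1)  # feed back the last prediction
    h, c = cell(inp, (h, c))
    y_prev = linear(h)
    outputs.append(y_prev)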
st101932 | Hey, so I am using a convolutional layer as the first layer of a neural network for deep reinforcement learning, to get the spatial features out of a simulation I built. The simulation gives maps of different lengths and heights to process. If I understand convolutional networks, this should not matter since the channel size is kept constant. Between the convolutional network and the fully connected layers there is a spatial pyramid pooling layer, so the varying image sizes should not matter. Also, the spatial data is pretty sparse. Usually it is able to go through a few states, and sometimes a few episodes, before the first convolutional layer spits out all NaNs. Even when I fix the map size this happens. I do not know where the problem lies - where can the problem lie? |
st101933 | Hi, currently I encountered a problem regarding the torch.matmul function. The linear operation in a neural network defined in the functional module was output = input.matmul(weight.t()). I tried to play around with this, and got confused.
First, I tried to modify it to output = input.detach().matmul(weight.t().detach()), and output = input.data.matmul(weight.t().data). They both gave the wrong training results (i.e. the loss didn’t get reduced).
I realized that output generated in these ways has requires_grad set to false; however, even if I manually set this to true using output.requires_grad_(True) after the multiplication, the training still produced the wrong result. Why did this happen?
Also, I noticed the output computed by the modified multiplication is a leaf variable, while the original version is not. Why is that?
Thanks!! |
st101935 | Hi,
This has nothing to do with matmul.
.detach() or .data are used to explicitly ask to break the computational graph, and thus prevent gradients from flowing. So if you add these in the middle of your network, no gradients will flow back and your network won’t be able to learn. |
st101936 | Thanks for your reply. In this case, if I want to use numpy to perform some computations in the middle of my network, is there a way to do it? I noticed the only way to convert a tensor to numpy in a network is to first detach it |
st101937 | If you use non-pytorch operations (like numpy) then the autograd engine will not work.
If you really need these operations, you will have to create a custom autograd.Function for which you have to implement both the forward and the backward pass. In this, you will get Tensors that do not require gradients and you can use numpy. But you will need to implement the backward pass yourself. You can read the doc on how to do this. |
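A hedged sketch of such a Function, wrapping a numpy computation (exp, on CPU tensors) with a hand-written backward:
import numpy as np
import torch
from torch.autograd.function import Function

class NumpyExp(Function):
    @staticmethod
    def forward(ctx, x):
        out = torch.from_numpy(np.exp(x.detach().numpy()))
        ctx.save_for_backward(out)
        return out

    @staticmethod
    def backward(ctx, grad_output):
        out, = ctx.saved_tensors
        return grad_output * out  # d exp(x)/dx = exp(x)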
st101938 | Problem solved! Thanks! Just to make sure: if I want to define some new operations using either PyTorch or numpy, I need to work at the autograd.Function level; when I want to add some features to existing functions (like adding noise to the linear operations), working at the nn.Module level should be enough. Is that correct? |
st101939 | Yes,
As long as you just want to change what your function outputs and you can implement everything with pytorch methods, then you can stick with nn.Module.
If it’s not implemented in pytorch or you want your backward pass not to represent the “true” derivative of your forward pass (to get wrong but smoother gradients for example), then you should work with autograd.Function. |
st101940 | I am trying to write a compact transform, instead of the if-else, if-else statements. I am, thus, getting an error saying Object None is not callable. Any trick around this, or should I use something other than None? Or do I need to go back to the if-else, if-else, etc.?
image_transform = transforms.Compose([
    ImageThinning(p = cf.thinning_threshold) if cf.thinning_threshold > 0 else None,
    PadImage((cf.MAX_IMAGE_WIDTH, cf.MAX_IMAGE_HEIGHT)) if cf.pad_images else None,
    transforms.ToPILImage(),
    transforms.Scale((cf.input_size[0], cf.input_size[1])) if cf.resize_images else None,
    transforms.ToTensor(),
    transforms.Normalize( (0.5, 0.5, 0.5), (0.25, 0.25, 0.25) ) if cf.normalize_images else None,
])
Of course one way to resolve this issue is to write a class called NoneTransform that does nothing to the image and pass it to compose instead of the value None.
Update: The NoneTransform does the job, here it is:
class NoneTransform(object):
    """ Does nothing to the image, to be used instead of None
    Args:
        image in, image out, nothing is done
    """
    def __call__(self, image):
        return image |
st101941 | Hi there,
I was training a model with SGD and decided to move to Adam.
I was using batch size 20 for SGD; however, the max batch size I can use with Adam is 2.
optimizer = torch.optim.SGD([{'params': model.unet_model.parameters()},{'params': model.audio_s.parameters()}, {'params': model.drn_model.parameters(), 'lr': args.DRNlr},
], lr=LR,
weight_decay=WEIGTH_DECAY)
All I did was change this line
optimizer = torch.optim.Adam([{'params': model.unet_model.parameters()},{'params': model.audio_s.parameters()}, {'params': model.drn_model.parameters(), 'lr': args.DRNlr},
], lr=LR,
weight_decay=WEIGTH_DECAY)
Is there any memory usage comparison among all the optimizers? Or is that memory usage normal? |
st101943 | An increased memory usage for optimizers using running estimates is normal.
As the memory usage depends on the number of your parameters, I’m not sure someone already compared it. |
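As a rough back-of-the-envelope check (an estimate, not a measurement): Adam keeps two running estimates per parameter, so its state costs roughly 2x the parameter memory in float32. model below is assumed.
n_params = sum(p.numel() for p in model.parameters())
print("params: %.1f MB, extra Adam state: ~%.1f MB"
      % (n_params * 4 / 1e6, 2 * n_params * 4 / 1e6))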
st101944 | @ptrblck Let me ask an additional question.
How can I choose where those estimates are stored?
I guess they are stored on gpu0 by default, but even when using several GPUs, the GPU which stores those estimates runs out of memory. I would like to use one GPU for computing them and the rest for dealing with the model using DataParallel.
Is that possible in PyTorch? |
st101945 | As far as I know, the optimizer will store the internal parameters on the same GPU the model was transferred to.
Usually it’s cuda:0. Since my multi-GPU machine is currently busy, I cannot test it.
You could check the device placement with print(optimizer.param_groups) and see where the tensors are stored. |
st101946 | @ptrblck
Those parameters are stored on gpu0.
When you talk about transferring the model in the case of data parallel, the main GPU would be
model = torch.nn.DataParallel(model).cuda()
cuda:0 in this case, right?
Well, I’m using 3 GPUs, so I’m trying to do
model = torch.nn.DataParallel(model, device_ids=[1, 2]).cuda()
to force PyTorch to handle the model on cuda:1 and cuda:2, but PyTorch does not allow that; it requires all tensors to be on devices[0]:
Traceback (most recent call last):
File "train.py", line 300, in <module>
main()
File "train.py", line 260, in main
output = model(video,audio)
File "/home/jfmontesinos/.local/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/jfmontesinos/.local/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 113, in forward
replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
File "/home/jfmontesinos/.local/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 118, in replicate
return replicate(module, device_ids)
File "/home/jfmontesinos/.local/lib/python2.7/site-packages/torch/nn/parallel/replicate.py", line 12, in replicate
param_copies = Broadcast.apply(devices, *params)
File "/home/jfmontesinos/.local/lib/python2.7/site-packages/torch/nn/parallel/_functions.py", line 17, in forward
outputs = comm.broadcast_coalesced(inputs, ctx.target_gpus)
File "/home/jfmontesinos/.local/lib/python2.7/site-packages/torch/cuda/comm.py", line 40, in broadcast_coalesced
return torch._C._broadcast_coalesced(tensors, devices, buffer_size)
RuntimeError: all tensors must be on devices[0]
Is it not possible either to store the optimizer’s parameters on an arbitrary GPU, or to reduce the workload on cuda:0 so it does not run out of memory? |
st101947 | I wrote the following code, which is copied from pytorch.org:
input = torch.randn(3, 5)
print(torch.pinverse(input))
AttributeError: module 'torch' has no attribute 'pinverse'.
Can somebody help me? |
st101949 | From the 0.4.0 doc it doesn’t seem to exist. It does exist in the 0.4.1 doc though. |
st101950 | I am using the code of a variational auto-encoder from here. This is the relevant code:
class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()
        self.fc1 = nn.Linear(784, 400)
        self.fc21 = nn.Linear(400, 20)
        self.fc22 = nn.Linear(400, 20)
        self.fc3 = nn.Linear(20, 400)
        self.fc4 = nn.Linear(400, 784)

    def encode(self, x):
        h1 = F.relu(self.fc1(x))
        return self.fc21(h1), self.fc22(h1)

    def reparametrize(self, mu, logvar):
        std = logvar.mul(0.5).exp_()
        if torch.cuda.is_available():
            eps = torch.cuda.FloatTensor(std.size()).normal_()
        else:
            eps = torch.FloatTensor(std.size()).normal_()
        eps = Variable(eps)
        return eps.mul(std).add_(mu)

    def decode(self, z):
        h3 = F.relu(self.fc3(z))
        return F.sigmoid(self.fc4(h3))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparametrize(mu, logvar)
        return self.decode(z), mu, logvar, z
Suppose that, during test time, I would like to transpose the decode function - that is, reverse that part of the network: feed it an input of size (400, 784) and get an output of (20, 400). Is there a way to transpose a network (without manually copying the weights)?
Thanks! |
st101952 | Hi,
Unfortunately there is no way to do this, mainly because for most operation, inverting them is not easy to do or easily defined.
You will need to write this by hand. |
st101953 | Thanks @albanD ! When you say “by hand” - you mean manually defining a new network and copying the weights? (just trying to make sure I didn’t miss something) |
st101954 | Well you don’t really need to create a new net. In your current net you could have a decode_transpose method that does what you want. |
st101955 | I am trying to grab an entire model’s weights before training and also after training, to check their differences, but for some reason PyTorch is telling me the old stored weights are the same as the newly stored weights after my optimizer.step().
My code is below. One thing to notice is that I only trained the input with layer1; layer2 should not update its weight because it is not used. After one forward/backward propagation I should have different weights for layer1 and the activation, while layer2’s weight should be untouched.
import torch
import torch.nn as nn
import torch.optim as optim

class mymodel(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(10, 5)
        self.layer2 = nn.Conv2d(1, 5, 4, 2, 1)
        self.act = nn.Sigmoid()

    def forward(self, x):
        x = self.layer1(x) # only layer1 and act are used; layer2 is ignored, so only layer1's weight should be updated
        x = self.act(x)
        return x

model = mymodel()

weights = []
for param in model.parameters(): # loop over the weights in the model before updating and store them
    print(param.size())
    weights.append(param)

criterion = nn.BCELoss() # criterion and optimizer setup
optimizer = optim.Adam(model.parameters(), lr = 0.001)

foo = torch.randn(3, 10) # fake input
target = torch.randn(3, 5) # fake target (note: BCELoss normally expects targets in [0, 1]; random values are just for this repro)

result = model(foo) # predictions, comparison and backprop
loss = criterion(result, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

weights_after_backprop = [] # weights after backprop
for param in model.parameters():
    weights_after_backprop.append(param) # only layer1's weight should update; layer2 is not used

for i in zip(weights, weights_after_backprop):
    print(torch.equal(i[0], i[1]))
# prints all Trues, when "layer1" and "act" should be different. I have also tried to call param.detach in the loop, but I got the same result. |
st101957 | Hi,
The thing is to reduce memory usage, all optimizer modify the weights inplace.
Which means that the content of your list called weights contains the same tensors that are modified inplace by the optimizer.
You can add a weights.append(param.clone()) to make sure your store a different copy of the weights. |
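For example, changing the storing loop as suggested:
weights = []
for param in model.parameters():
    weights.append(param.clone())  # a separate copy, unaffected by in-place updates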
st101958 | I wanted to install Detectron, so I tried to install Caffe2 before that. I followed the official instructions.
After I typed conda install -c caffe2 caffe2-cuda8.0-cudnn7, I got the following:
Solving environment: failed
UnsatisfiableError: The following specifications were found to be in conflict:
blaze
caffe2-cuda8.0-cudnn7
Use "conda info " to see the dependencies for each package.
Any idea what should I do please? Thank you in advance. |
st101960 | I updated my conda to the latest. Then I tried to build from source and it worked. |
st101961 | I’m working on a dependency tree oracle network, and I’m using the pretrained 50d GLOVE vectors. However, as is common in CL, I also have special tokens – in my case, for a situation where a feature refers to a word that’s not there (e.g. the dependency of a leaf word), for unknown words, and for the root of the tree. I want the network to learn these special tokens vectors starting at the first epoch, but I want to freeze the other non-special token vectors until the network has learned enough to fine-tune them.
My code is set up so these special tokens are indices 0…2 of vocab[], and I initialize the embedding like so (I’m on torch 0.3.1 so I can’t use Embedding.load_pretrained):
word_embedding = nn.Embedding(len(vocab), vocab_embedding_size)
word_embedding.load_state_dict({'weight': torch.cuda.FloatTensor([
get_pretrained(k) or ((np.random.random(vocab_embedding_size) * 2 - 1) * 0.01)
for k in vocab
])})
my forward() method then looks like this:
def forward(self, s_w_idxs, ...):
    s_w_embeds = self.word_embeddings(s_w_idxs).view((1, -1))
    ....
Is there a good way to get this to work? I’m a definite newbie with PyTorch so nothing is coming to mind – I could maybe map over the tensor and mask out the values before passing it into word_embeddings but I’m not sure exactly how to do that.
Thanks! |
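One hedged possibility (a sketch, not a tested recipe): keep a single embedding and zero the gradient rows of the pretrained vectors with a hook, so only indices 0..2 learn at first.
mask = torch.zeros(len(vocab), 1)  # vocab as in the post; move mask with .cuda() if the embedding is on the GPU
mask[:3] = 1.0                     # only the special tokens get gradients
hook = word_embedding.weight.register_hook(lambda grad: grad * mask)
# ... train; later, to unfreeze the pretrained vectors:
hook.remove()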
st101962 | I’m trying to understand how to profile CUDA code - especially understanding bottlenecks in training loops. When I use something like lprun, which profiles lines of code in Jupyter notebooks, I find that a large percentage of the time is spent at the cuda.synchronize lines, which makes it difficult to identify which parts of the code are actually responsible for most of the time. Furthermore, I’d like to understand how GPU utilization impacts total training time for one epoch. For example, if I have multiple dataloaders that are identical except for batch size, I might notice that the time to complete one epoch is shortest at a batch size of 32, and longer for batch sizes both smaller and greater than 32 - yet GPU utilization for all three loaders during training is 100%. Lastly, if GPU utilization is not at 100%, then it’s likely that something with the dataloader is bottlenecking, but since data loading happens in parallel, I don’t really know how to figure out how to improve this. Would anyone care to share how I might start looking into these things? Maybe tools, or workflows?
Thanks |
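One place to start (a sketch): CUDA events time GPU work without blanket synchronize calls skewing line profiles; model and batch below are assumptions.
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
output = model(batch)
end.record()
torch.cuda.synchronize()  # one sync, only to read the timing
print('forward took %.2f ms' % start.elapsed_time(end))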
st101963 | That’s strange, epoch time should be smaller with a larger batch size. I assume you already moved all necessary tensors to the GPU before the start of an epoch. Could you provide a minimal example? |
st101964 | Yes, it’s odd... if I just use a normal 224x224 batch loader for ResNets and increase the batch size, I get the expected behavior of increasing efficiency with larger batch size. But for my particular problem, in which there is a multi-task loss (and thus I am calling next(dataloader_iterator_i) multiple times per loop for each dataloader, with many image transformations), I’m seeing a slight slowdown as batch size goes from 32 to 256... maybe by about 4%. GPU utilization is still 100% to my knowledge, although I can only tell this by sampling nvidia-smi views, so it could be dropping more periodically. It would be great though to know how to inspect what’s going on under the hood with respect to the parallelism between loading and training. |
st101965 | Hi
Say I have 2 GPUs with 8GB of memory each; then I can use nn.DataParallel and set device_ids=[0, 1] to execute on multiple GPUs.
In that case I can load my_tensor onto the GPU using my_tensor.cuda(0) - as, by convention, all tensors must be on the first GPU of device_ids.
This works as long as my_tensor < 8GB; otherwise I get an out-of-memory exception. Is there a way to load tensors > 8GB (i.e. larger than a single GPU’s memory, say my_tensor ~ 12GB) such that they can be shared between the 2 GPUs’ memory?
In other words, if I have to forward-pass 2 tensors of 8GB each through the model, I get an OOM exception when I try to do it the second time as model(my_tensor2.cuda(0)).
Thanks |
st101966 | Hi, I’m having a different (better) accuracy when .eval() is not called, and I can’t figure out why that is.
I use MSE as the loss function, getting an error of 0.002 (avg) on the training set and 0.08 (avg) on the validation/test set, all of this calling .eval(). When I comment out the .eval() line, the MSE of the test set goes down to 0.002.
Here’s my model:
class Net3(nn.Module): # U-net without skip connections
    def __init__(self, kernel_sz):
        super(Net3, self).__init__()
        i_size = 16
        self.mp = nn.MaxPool3d(kernel_size=2, padding=0, stride=2, dilation=1, return_indices=True) # Reduces size by half
        self.batchNorm_i = nn.BatchNorm3d(i_size)
        self.batchNorm_2i = nn.BatchNorm3d(i_size * 2)
        self.batchNorm_4i = nn.BatchNorm3d(i_size * 4)
        self.batchNorm_8i = nn.BatchNorm3d(i_size * 8)
        self.batchNorm_16i = nn.BatchNorm3d(i_size * 16)
        self.dropout50 = nn.Dropout3d(p = 0.5) # p is the probability of being zeroed
        self.dropout20 = nn.Dropout3d(p = 0.2)
        self.dropout10 = nn.Dropout3d(p = 0.1)
        self.conv1_i = nn.Conv3d(1, i_size, kernel_size=kernel_sz, padding=1)
        self.convi_i = nn.Conv3d(i_size, i_size, kernel_size=kernel_sz, padding=1)
        self.convi_2i = nn.Conv3d(i_size, i_size * 2, kernel_size=kernel_sz, padding=1)
        self.conv2i_2i = nn.Conv3d(i_size * 2, i_size * 2, kernel_size=kernel_sz, padding=1)
        self.conv2i_4i = nn.Conv3d(i_size * 2, i_size * 4, kernel_size=kernel_sz, padding=1)
        self.conv4i_4i = nn.Conv3d(i_size * 4, i_size * 4, kernel_size=kernel_sz, padding=1)
        self.conv4i_8i = nn.Conv3d(i_size * 4, i_size * 8, kernel_size=kernel_sz, padding=1)
        self.conv8i_8i = nn.Conv3d(i_size * 8, i_size * 8, kernel_size=kernel_sz, padding=1)
        self.conv8i_16i = nn.Conv3d(i_size * 8, i_size * 16, kernel_size=kernel_sz, padding=1)
        self.conv16i_16i = nn.Conv3d(i_size * 16, i_size * 16, kernel_size=kernel_sz, padding=1)
        self.conv16i_8i = nn.Conv3d(i_size * 16, i_size * 8, kernel_size=kernel_sz, padding=1)
        self.conv8i_4i = nn.Conv3d(i_size * 8, i_size * 4, kernel_size=kernel_sz, padding=1)
        self.conv4i_2i = nn.Conv3d(i_size * 4, i_size * 2, kernel_size=kernel_sz, padding=1)
        self.conv2i_i = nn.Conv3d(i_size * 2, i_size, kernel_size=kernel_sz, padding=1)
        self.convi_1 = nn.Conv3d(i_size, 1, kernel_size=1) # 1x1 conv
        self.upconv16i_16i = nn.ConvTranspose3d(i_size * 16, i_size * 16, kernel_size=2, stride=2)
        self.upconv8i_8i = nn.ConvTranspose3d(i_size * 8, i_size * 8, kernel_size=2, stride=2)
        self.upconv4i_4i = nn.ConvTranspose3d(i_size * 4, i_size * 4, kernel_size=2, stride=2)
        self.upconv2i_2i = nn.ConvTranspose3d(i_size * 2, i_size * 2, kernel_size=2, stride=2)

    def forward(self, x):
        c1 = self.batchNorm_i(self.conv1_i(self.dropout50(x)))
        r1 = F.relu(c1)
        c2 = self.batchNorm_i(self.convi_i(r1))
        r2 = F.relu(c2)
        mp1, idxi = self.mp(r2) # 1st max-pooling
        c3 = self.batchNorm_2i(self.convi_2i(mp1))
        r3 = F.relu(c3)
        c4 = self.batchNorm_2i(self.conv2i_2i(r3))
        r4 = F.relu(c4)
        mp2, idx2i = self.mp(r4) # 2nd max-pooling
        c5 = self.batchNorm_4i(self.conv2i_4i(mp2))
        r5 = F.relu(c5)
        c6 = self.batchNorm_4i(self.conv4i_4i(r5))
        r6 = F.relu(c6)
        mp3, idx4i = self.mp(r6) # 3rd max-pooling
        c7 = self.batchNorm_8i(self.conv4i_8i(mp3))
        r7 = F.relu(c7)
        c8 = self.batchNorm_8i(self.conv8i_8i(r7))
        r8 = F.relu(c8)
        mp4, idx8i = self.mp(r8) # 4th max-pooling
        c9 = self.batchNorm_16i(self.conv8i_16i(mp4)) # Lowest resolution
        r9 = F.relu(c9)
        c10 = self.batchNorm_16i(self.conv16i_16i(r9))
        r10 = F.relu(c10)
        uc1 = self.upconv16i_16i(r10) # 1st upconvolution
        c11 = self.batchNorm_8i(self.conv16i_8i(uc1))
        r11 = F.relu(c11)
        c12 = self.batchNorm_8i(self.conv8i_8i(r11))
        r12 = F.relu(c12)
        uc2 = self.upconv8i_8i(r12) # 2nd upconvolution
        c13 = self.batchNorm_4i(self.conv8i_4i(uc2))
        r13 = F.relu(c13)
        c14 = self.batchNorm_4i(self.conv4i_4i(r13))
        r14 = F.relu(c14)
        uc3 = self.upconv4i_4i(r14) # 3rd upconvolution
        c15 = self.batchNorm_2i(self.conv4i_2i(uc3))
        r15 = F.relu(c15)
        c16 = self.batchNorm_2i(self.conv2i_2i(r15))
        r16 = F.relu(c16)
        uc4 = self.upconv2i_2i(r16) # 4th upconvolution
        c17 = self.batchNorm_i(self.conv2i_i(uc4))
        r17 = F.relu(c17)
        c18 = self.batchNorm_i(self.convi_i(r17))
        r18 = F.relu(c18)
        c19 = self.convi_1(r18)
        return c19
Thanks! |
st101967 | Hi,
The difference in your case is that the batchnorm layers will use the saved statistics instead of the ones from the current batch.
And the dropout layers will be deactivated. |
st101968 | Great, I fixed that, but I’ve figured out that the problem was that I wasn’t setting the momentum parameter in the BatchNorm layers’ initializations. Since my input data is highly variable, the default value of momentum (0.1) was not right for my data distribution.
Thanks! |
st101969 | # calculating loss
cen = self.centers.clone()
feat = features.clone()
x_2 = torch.pow(feat,2)
c_2 = torch.pow(cen,2)
x_2_s = torch.sum(x_2,1)
x_2_s = x_2_s.view(self.num_batch,1)
c_2_s = torch.sum(c_2,1)
c_2_s = c_2_s.view(self.num_class,1)
temp_c = torch.mm(labels.cuda(),c_2_s)
x_2_s_e = x_2_s.repeat(1,self.num_class)
c_2_s_e = c_2_s.t().repeat(self.num_batch,1)
# c_2_s_e = temp_c
xc = 2*torch.mm(feat,cen.t())
# we want only positive values,
dis = x_2_s_e + c_2_s_e - xc
di = dis.type(torch.FloatTensor)
di2 = torch.sqrt(torch.clamp(di, min=0))
# since center loss focuses on intra-class distances, we are not concerned about the distances
# we calculated from other centers; we will use the other centers to increase the inter-class loss.
bl = labels.type(torch.ByteTensor)
dii = torch.masked_select(di2,bl)
center_loss = dii.sum()/self.num_batch
with torch.no_grad():
    self.centers.copy_(centers_update)
return center_loss
This is how I am calculating my loss value |
st101970 | Another question: when should I use .detach() and when .clone()?
I understand .clone() still propagates gradients backward, while .detach() has no gradient.
In the above loss function,
I have created a variable centers which needs a manual update, so I believe it doesn’t require gradients.
Correct me if I am wrong.
I have the features from the data, and here I am very confused, do I need to calculate gradient for the features or not? |
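A small demonstration of the difference (illustrative only):
import torch

a = torch.ones(2, requires_grad=True)
b = (a * 2).clone()   # clone keeps the graph: gradients still flow to a
c = (a * 2).detach()  # detach cuts the graph: c.requires_grad is False
b.sum().backward()    # fills a.grad; backprop through c would fail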
st101971 | Hi,
I can’t find any documentation on the differences between these versions:
cu80/torch-0.3.1-cp27-cp27mu-linux_x86_64.whl
cu80/torch-0.3.1-cp27-cp27m-linux_x86_64.whl
What does the m and mu denote?
Thanks! |
st101973 | The cp27m and cp27mu define which version of python this wheel is for. In that case, CPython 2.7 with some specific compile arguments.
If i’m not mistaken, the m is when it has pymalloc. And the u is when you have wide unicodes. |
st101974 | Hi, I’m trying to train on ImageNet the same way it is done in Torch (based on this). All transformations are present in PyTorch except one: the AlexNet lighting method.
The Torch implementation can be found at fb.resnet.torch/datasets/transforms.lua#L183
However, I don’t know how to port this to PyTorch!
Here is the Torch implementation:
-- Lighting noise (AlexNet-style PCA-based noise)
function M.Lighting(alphastd, eigval, eigvec)
   return function(input)
      if alphastd == 0 then
         return input
      end

      local alpha = torch.Tensor(3):normal(0, alphastd)
      local rgb = eigvec:clone()
         :cmul(alpha:view(1, 3):expand(3, 3))
         :cmul(eigval:view(1, 3):expand(3, 3))
         :sum(2)
         :squeeze()

      input = input:clone()
      for i=1,3 do
         input[i]:add(rgb[i])
      end
      return input
   end
end
I found a seeming PyTorch port (preprocess.py), but it does not work and complains about this line:
alpha = img.new().resize_(3).normal_(0, self.alphastd)
and gives the error:
AttributeError: 'Image' object has no attribute 'new'
Here is how it looks:
__imagenet_pca = {
    'eigval': torch.Tensor([0.2175, 0.0188, 0.0045]),
    'eigvec': torch.Tensor([
        [-0.5675, 0.7192, 0.4009],
        [-0.5808, -0.0045, -0.8140],
        [-0.5836, -0.6948, 0.4203],
    ])
}

# Lighting data augmentation taken from https://github.com/eladhoffer/convNet.pytorch/blob/master/preprocess.py
class Lighting(object):
    """Lighting noise (AlexNet-style PCA-based noise)"""

    def __init__(self, alphastd, eigval, eigvec):
        self.alphastd = alphastd
        self.eigval = eigval
        self.eigvec = eigvec

    def __call__(self, img):
        if self.alphastd == 0:
            return img
        alpha = img.new().resize_(3).normal_(0, self.alphastd)
        rgb = self.eigvec.type_as(img).clone()\
            .mul(alpha.view(1, 3).expand(3, 3))\
            .mul(self.eigval.view(1, 3).expand(3, 3))\
            .sum(1).squeeze()
        return img.add(rgb.view(3, 1, 1).expand_as(img))
Can anyone please help me with this?
Thanks a lot in advance. |
st101976 | img seems to be a PIL.Image. Try to cast it to a tensor before the new() call e.g. with:
img = torch.from_numpy(np.array(img))
# or
import torchvision.transforms.functional as TF
img = TF.to_tensor(img)
The former approach will keep the data as it was loaded (uint8 in range [0, 255]), while the latter will rescale your image to [0, 1] and cast it to float. |
st101977 | Thank you very very much. That indeed helped me spot the culprit!
I had to use transforms.ToTensor() prior to calling the Lighting() transform. Failing to do so resulted in a PIL image being sent as the input to Lighting(), which expects a Tensor, and that’s why the error occurs.
So it should have been like :
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(size),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(.4, .4, .4),
    transforms.ToTensor(),
    Lighting(0.1, __imagenet_pca['eigval'], __imagenet_pca['eigvec']),
    normalize,
]) |
st101978 | Hello all
I’m unable to create a torch.cuda.FloatTensor. Please respond if you have any idea.
>>> temp = torch.cuda.FloatTensor(10, 10).fill_(0)
>>> type(temp)
<class 'torch.Tensor'>
I use Tesla K80 GPU AWS instance, pytorch version 0.4.0 and python 3. Thank you. |
st101980 | That is expected, all tensors are of type torch.Tensor.
You can use .type() to get a more precise type. And attributes like .is_cuda. |
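For example:
x = torch.zeros(2, 2).cuda()
print(type(x))    # <class 'torch.Tensor'>
print(x.type())   # torch.cuda.FloatTensor
print(x.is_cuda)  # True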
st101981 | Hi all. I am a newbie at ML, trying to predict a noisy sine function. The dataset is just x, y coordinates. I split the data into sequences of SEQUENCE_SIZE elements and try to learn with a GRU, but something is totally wrong. Maybe some parameters are wrong, or the forward pass should be implemented some other way. I don’t know what to try.
Here are the original input, the loss curve, and the predicted result:
[plots: the original noisy input, the loss curve, and the predicted output]
Code:
import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

INPUT_SIZE = 1
SEQUENCE_SIZE = 200
BATCH_SIZE = 1
HIDDEN_SIZE = 30
NUM_LAYERS = 2
N_EPOCHS = 20

class MyRNN(nn.Module):
    def __init__(self):
        super(MyRNN, self).__init__()
        self.hidden_size = HIDDEN_SIZE
        self.l1 = nn.GRU(INPUT_SIZE, HIDDEN_SIZE, NUM_LAYERS)
        self.l2 = nn.Linear(HIDDEN_SIZE, 1)
        self.hidden = None

    def forward(self, inputs):
        out, hidden = self.l1(inputs, self.hidden)
        self.hidden = hidden.detach() # reuse hidden
        out = self.l2(out.squeeze(1))
        return out, hidden

x, y = np.loadtxt('series.txt', np.float32, '#', ',', unpack=True)
sequence = []
target = []
model = MyRNN()
criterion = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.0001, momentum=0.5)
lossValues = [] # for plotting

model.train()
for epoch in range(N_EPOCHS):
    for idx, xCoord in enumerate(x):
        if idx % SEQUENCE_SIZE == 0 and idx != 0:
            # perform learning on a sequence of SEQUENCE_SIZE elements
            xTensor = torch.tensor(sequence, dtype=torch.float32, requires_grad=True)
            xTensor = xTensor.unsqueeze(dim=1) # batch size = 1
            xTensor = xTensor.unsqueeze(dim=1) # input size = 1
            yTensor = torch.tensor(target, dtype=torch.float32)
            # reset buffers
            sequence = []
            target = []
            # forward pass
            optimizer.zero_grad()
            out, hidden = model(xTensor)
            loss = criterion(out.squeeze(dim=1), yTensor)
            loss.backward()
            optimizer.step()
            lossValues.append(loss.item())
            print(loss.item())
        sequence.append(xCoord) # buffer x values
        target.append(y[idx])   # buffer y values

# plot after training
model.eval()
sequence = []
xTensor = torch.tensor(x, dtype=torch.float32)
xTensor = xTensor.unsqueeze(dim=1)
xTensor = xTensor.unsqueeze(dim=1)
out, hidden = model(xTensor)
out = out.squeeze(dim=1)
plt.plot(out.detach().numpy())
plt.show()
# plt.plot(lossValues)
# plt.show() |
st101982 | I’m using these functions to save my model and re-load it:
model.load_state_dict(torch.load())
torch.save(model.state_dict())
I want to add a new layer to my model and re-save it as a new model, instead of composing two models inside my new model. How can I load the parameters only into the pre-trained part of the new model that matches the old model? |
st101983 | model.load_state_dict(sd, strict=False)
This argument matches layers by name, so you have to be sure to have clear names. If your new layer has the same name as an old layer, you will probably get an error. |
st101984 | I’ll re-interpret:
you mean I should create my new model and make sure the old layers’ names match in both models, but the new layers should have different names.
Once I use model.load_state_dict(sd, strict=False), the trained layers will be loaded while the new layers will not be altered? And this is enough to migrate from the old to the new model while preserving the pre-trained layers? |
st101985 | Yes. I mean, I guess this works when you add some layers. I guess state_dict is an ordered dict, therefore if you make a very big change it’s easier for this method to fail.
Anyway, the state dict is a Python dictionary, so you can manage it as such. You can access each weight to check its values. You can also access the model parameters and compare some test weights to check whether loading worked as expected. |
st101986 | Hi, I would like to apply a convolution over the embeddings of words of a sentence.
For example, let’s say I have a sentence with 5 words. My embedding size is 128. Then my input shape is 5x128. I would like to convolve this input with a convolution whose output channel count is 4, input channel count is 1, and kernel size is 2x128 (I guess this is how convolution is used with textual input, right?).
How can I do it in PyTorch? Is this the correct way to do it? My aim is to get a combination of the word representations of a sentence.
x = torch.rand(5,128)
w = nn.conv2d(1,2,128,4)
F.conv2d(x,w) |
st101987 | I’m not sure how to use convolutions with textual input, but based on your description, I assume you would like to use the 5 different words as one spatial dimension and the embedding size as the other.
If that’s the case, you could try the following:
x = torch.randn(1, 1, 5, 128) # batch_size, channels, height, width
conv = nn.Conv2d(1, 4, (2, 128))
output = conv(x) |
st101988 | Hi @ptrblck
Sorry for the late response. Actually, that is not what I want to do.
I put an image that visualizes the model in my mind (together with the link to the paper):
paper: https://arxiv.org/pdf/1408.5882.pdf
[figure from the paper: the CNN architecture for sentence classification] |
st101989 | I think I made it but I am not sure. Could you review it if possible. Let’s assume we have 100 unique words in our vocabulary, and the sentence I am interested in contains the first 20 of them. My embedding vectors are 300d and I will consider bi-grams and use 4 filters:
import torch
import torch.nn as nn
from random import randint

e = nn.Embedding(100, 300)
ic = 1; oc = 4; h = 2; w = 300
conv = nn.Conv2d(ic, oc, (h, w))
embeds = e(torch.tensor([randint(0, 99) for i in range(20)], dtype=torch.long))
embeds2 = embeds.view(1, 1, 20, 300) # reshape it so that I can use it as input to the convolution
conv_out = conv(embeds2)
conv_out.shape
torch.Size([1, 4, 19, 1])
If I am correct up to that point, I also wonder how I could perform the pooling operation. Basically I would like to take the maximum number in each of the 1x19 feature maps. Should I use pooling2d or pooling1d for that? I couldn’t work out from the docs which one is the correct choice for me. |
st101990 | If you want to use the current shape, with the trailing 1 for the width, you can use nn.MaxPool2d with a kernel_size of (19, 1):
nn.MaxPool2d(kernel_size=(19, 1), stride=1) |
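An alternative that avoids hard-coding the 19 (which depends on the sentence length) would be adaptive pooling, e.g.:
pool = nn.AdaptiveMaxPool2d((1, 1))
pooled = pool(conv_out)  # -> torch.Size([1, 4, 1, 1]), the max of each feature map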
st101991 | Hi guys
These days I’m interested in PyTorch, and I want to translate it into Korean with others.
But I just want to check for any problems before translating it. If I do that, many Koreans will be able to read and access the PyTorch documentation easily!
Can I translate the PyTorch documentation into a Korean version and share it with everyone?
Thank you. |
st101992 | Hi, I am new to PyTorch.
I use dropout in training, so when I test the data, must I switch
to model.eval()? I find that I get a different accuracy if I don’t add eval().
class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=False, dropout=0.5)
        self.keke_drop = nn.Dropout(p=0.5)
        self.fc = nn.Linear(hidden_size, num_classes)
Thank you for the help. |
st101993 | Yes, otherwise, dropout layer still masks random outputs and the accuracy might be lower. |
st101994 | Sure; dropout works as a regularization to prevent overfitting during training.
It randomly zeroes elements of the Dropout layer’s input on the forward call.
It should be disabled during testing, since you want to use the full model (no element is masked). |
st101995 | Hello.
I am curious about the implementation of backward in PyTorch.
As far as I know, every forward function object generated has a corresponding backward function object. What I am confused about is the order of execution of those backward function objects.
Starting from the code loss.backward(), we need to traverse the computation graph in an order such that a backward function can’t be executed until all of its inputs have been generated. The problem is how to determine that order efficiently; I searched online, but perhaps because of suboptimal keywords I couldn’t find a satisfying answer.
I think either of the following two methods could solve the question, but I want to know about PyTorch’s implementation:
1. Stack-based solution. Suppose variable a relies on b; then a can’t do backward until all of the variables that b relies on have done their backward. This can obviously be solved recursively. However, when the computation graph is huge and complex, will it cause a stack overflow?
2. Counter-based solution. I think we can determine the order of backward from the order of forward, since the two procedures can be symmetric. Therefore, one can assign a counter to every forward function object based on the order of execution. When loss.backward() is called, the backward can run in the inverse order of the counter. I think this solution may be more memory-efficient... However, it is based on the assumption that the computation flow is sequential, which may hinder the parallel execution of some unrelated forward and backward functions...
Now, can anyone tell me something about PyTorch’s solution? Thank you! |
st101996 | PyTorch’s current traversal order is exactly as in 2, but it prioritizes the gradient-accumulation step over all other steps. |
st101997 | Hello all I was wondering if someone could explain to me why this snippet of code causes my GPU to run out of memory:
outputs = []
slices = torch.rand(100, 3, 512, 512)
batch_size=4
for batch_idx in range(0, 100, batch_size):
slice_ = slices[batch_idx: batch_idx + batch_size].cuda()
u_net_output = unet(slice_)
outputs.append(u_net_output.data.cpu())
Since I’ve moved the output of unet to the cpu, doesn’t that mean that it does not reside on the GPU anymore? Why is it causing OOM errors? |
st101998 | Don’t worry, I figured it out: it was because the buffers aren’t flushed, as I’m not doing the update step. |
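For inference-only loops like the one above, a common fix is to wrap the forward pass in torch.no_grad(), so autograd buffers are never built:
with torch.no_grad():
    u_net_output = unet(slice_)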
st101999 | What is the current and future best practice regarding the use of torch.nn.functional.XXX or torch.XXX?
For instance, I see that torch.conv{1,2,3}d exist, but only torch.avg_pool1d ... Should these functions already be used instead of their torch.nn.functional equivalents? If yes, where are the missing average pooling functions? |