st31068
|
Hi there
tenet2001tenet:
model = torch.load('pytorch_model.bin', map_location='cpu')
This will load the pretrained parameters of the model, not the model definition itself; the parameters are stored as an OrderedDict, hence the error.
You should create your model class first.
class Net(nn.Module):
    # Your model definition, for which you want to load the parameters
    ...

model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)  # according to your own configuration
checkpoint = torch.load('pytorch_model.bin', map_location='cpu')
model.load_state_dict(checkpoint['model'])
optimizer.load_state_dict(checkpoint['opt'])
Also, if you want to use a pretrained model in PyTorch, the best way is to get it through the torchvision package. Check this tutorial:
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
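For example (a minimal sketch; newer torchvision releases use the weights= argument instead of pretrained=True):
import torchvision

model = torchvision.models.resnet18(pretrained=True)  # downloads ImageNet weights
model.eval()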
|
st31069
|
Hey! Considering this:
https://github.com/microsoft/unilm/blob/master/layoutlm/layoutlm/modeling/layoutlm.py
import logging
import torch
from torch import nn
from torch.nn import CrossEntropyLoss, MSELoss
from transformers import BertConfig, BertModel, BertPreTrainedModel
from transformers.modeling_bert import BertLayerNorm
logger = logging.getLogger(__name__)
LAYOUTLM_PRETRAINED_MODEL_ARCHIVE_MAP = {}
LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
class LayoutlmConfig(BertConfig):
    pretrained_config_archive_map = LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP
    model_type = "bert"

    def __init__(self, max_2d_position_embeddings=1024, **kwargs):
Should I use one of these?
src: https://github.com/microsoft/unilm/tree/master/layoutlm/layoutlm
|
st31070
|
It depends on what your goal is. If you want to learn, I highly recommend checking the tutorials on the PyTorch website; if you want to test a model on some dataset, you can try this one too, but I guess it would be hard considering the complexity of the model.
https://pytorch.org/tutorials/
|
st31071
|
Firstly I'd like to run it and see the result. And yes, this is the model I'd like to use. Can you give me some help? Thanks!
|
st31072
|
Sure, the community can help you a lot if you run into an error or find it difficult to understand something. That said, I'd still suggest getting some basic understanding of PyTorch first.
|
st31073
|
Hello @tenet2001tenet. Have you run the pre-trained model of LayoutLM? Which model definition did you use when loading the model via state_dict?
|
st31074
|
I have a 3D tensor of size, say, 100x5x2 and the mean of that tensor across axis=1, which gives shape 100x2.
100 here is the batch size. Without the batch dimension, dividing a tensor of shape 5x2 by one of shape 2 works perfectly, but in the case of the 3D tensor with a batch dimension I'm receiving an error:
a = torch.rand(100,5,2)
b = torch.rand(100,2)
z=a/b
The size of tensor a (5) must match the size of tensor b (100) at non-singleton dimension 1.
How can I divide these tensors such that my output has shape 100x5x2? Something like bmm, but for division?
|
st31075
|
Solved by pascal_notsawo in post #3.
|
st31076
|
I think you can unsqueeze b so that a.dim() = b.dim()
import torch
n, m, p = 100,5,2
a = torch.rand(n, m, p)
b = torch.rand(n, p)
b = b.unsqueeze(dim=1) # n x 1 x p
z=a/b
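As a quick follow-up check (a small sketch reusing the names above), the explicit unsqueeze and plain broadcasting via indexing give the same n x m x p result:
import torch

n, m, p = 100, 5, 2
a = torch.rand(n, m, p)
b = torch.rand(n, p)

z1 = a / b.unsqueeze(dim=1)   # n x 1 x p broadcasts against n x m x p
z2 = a / b[:, None, :]        # equivalent, via indexing
print(z1.shape, torch.allclose(z1, z2))  # torch.Size([100, 5, 2]) True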
|
st31077
|
Hello. I am currently in the process of creating a GRU model to guess the location of a user in a simulation. I have quantized the 100x100 2D location into a matrix of 0s and 1s.
Ex:
Agent is in location x = 5 in a 1D map size of 10:
encoded label tensor = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Based on this, I am trying to write a differentiable loss that is based on the distance between the most probable encoded output position and the label:
Ex:
Agent is in location x = 5 in a 1D map size of 10:
encoded label tensor = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0] => argmax = 5th cell
output tensor = [0.5, 0.1, 0.2, 0.1, 0.2, 0.6, 0.3, 0.4, 0.4, 0.2] => argmax = 6th cell
Calculated distance = 5 - 6.
This is a simple example; I work on a 2D space, so I have to calculate the Euclidean distance. I understand that the argmax operation isn't differentiable, but I am looking for a way to incorporate this distance value into the loss so that my penalties make sense.
I tried using the Gumbel-softmax trick to create the one-hot-encoded output tensor, but I'm not sure where to go from there since I would still need an argmax.
Sorry if the problem is confusing or if my explanation of it isn’t good enough.
Any help is appreciated.
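One possible direction (a hedged sketch, not necessarily what you need): replace the hard argmax with a soft-argmax, i.e. take the softmax over the outputs and penalize the distance between the expected position and the true position, which keeps everything differentiable. The tensors below reuse the 1D example above; in 2D you would compute an expected (x, y) and take the Euclidean distance.
import torch
import torch.nn.functional as F

logits = torch.tensor([0.5, 0.1, 0.2, 0.1, 0.2, 0.6, 0.3, 0.4, 0.4, 0.2], requires_grad=True)
target_pos = torch.tensor(4.0)                     # agent at index 4 (the 5th cell)

probs = F.softmax(logits, dim=0)                   # differentiable, unlike argmax
positions = torch.arange(len(logits), dtype=torch.float32)
expected_pos = (probs * positions).sum()           # soft-argmax: expected location
loss = (expected_pos - target_pos).pow(2)          # squared-distance penalty
loss.backward()                                    # gradients flow back to the logits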
|
st31078
|
I am wondering if I can modify __getitem__ in Dataset to accept multiple indices instead of one index at a time, to improve data loading speed from disk using an HDF5 file.
My dataset looks something like this
class HDFDataset(Dataset):
    def __init__(self, path):
        self.path = path
        with h5py.File(self.path, 'r') as hdf:
            self.len = hdf['data'].shape[0]
    def __len__(self):
        return self.len
    def __getitem__(self, idx):
        hdf = h5py.File(self.path, 'r')
        data = hdf['data']
        X = data[idx, :]
        hdf.close()
        return X
dset = HDFDataset(path)
I also have a custom batch sampler
def chunk(indices, chunk_size):
    return torch.split(torch.tensor(indices), chunk_size)

class BatchSampler(Sampler):
    def __init__(self, batch_size, dataset):
        self.batch_size = batch_size
        self.dataset = dataset
    def __iter__(self):
        # some function that builds list_of_list_idx
        return iter(list_of_list_idx)
Sample output from BatchSampler looks as below; each list represents a batch and each value is an index in that batch:
batch_sampler = BatchSampler(batch_size, dataset)
for x in batch_sampler:
    print(x)
[12, 3, 8, 6, 17]
[7, 9, 1, 19, 18]
[13, 4, 2, 5, 14]
[0, 3, 10, 11, 20]
Dataloader looks like
train_dataloader = DataLoader(dset, num_workers=8, batch_sampler=batch_sampler)
This approach works fine for me, but data loading takes time since __getitem__ loads one index at a time from disk.
Since I already know the indices I need to load in each batch via BatchSampler, is there a way I can load the entire batch at once in the Dataset?
For example, if batch 1's indices are
batch_idx = [12, 3, 8, 6, 17]
could __getitem__ accept a list of indices rather than a single index, something like below:
def __getitem__(self, batch_idx):
    hdf = h5py.File(self.path, 'r')
    data = hdf['data']
    X = data[batch_idx, :]
    hdf.close()
    return X
Solution 1: Since I already know the indices in each batch, I could just load the data and feed it to the model as tensors myself; however, I then wouldn't be able to utilize the num_workers parameter of DataLoader to speed things up.
If there is a way to load data in chunks using Dataset & DataLoader, it would solve my issue.
Appreciate any suggestions.
|
st31079
|
In theory, it should work, since the fetcher can take a list or tuple of indices:
https://github.com/pytorch/pytorch/blob/780faf52caf170a39414097654d325f6e128414e/torch/utils/data/_utils/fetch.py#L47-L52
def fetch(self, possibly_batched_index):
    if self.auto_collation:
        data = [self.dataset[idx] for idx in possibly_batched_index]
    else:
        data = self.dataset[possibly_batched_index]
    return self.collate_fn(data)
In __getitem__, you can accept a tuple (or list) of indices to accomplish your request.
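A rough sketch of that idea (hypothetical names, assuming an HDF5 file with a 'data' dataset as in the question): disable automatic batching with batch_size=None and pass the batch sampler as sampler, so each list of indices it yields is handed straight to __getitem__, matching the else branch of fetch() above.
import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class HDFBatchDataset(Dataset):
    def __init__(self, path):
        self.path = path
        with h5py.File(self.path, 'r') as hdf:
            self.length = hdf['data'].shape[0]

    def __len__(self):
        return self.length

    def __getitem__(self, batch_idx):            # receives the whole list of indices
        with h5py.File(self.path, 'r') as hdf:
            batch_idx = sorted(batch_idx)        # h5py fancy indexing wants increasing order
            X = hdf['data'][batch_idx, :]        # one disk read for the whole batch
        return torch.as_tensor(X)

dset = HDFBatchDataset(path)
# batch_size=None disables automatic batching, so each "index" yielded by the sampler
# (here a whole list) is passed to __getitem__ as-is and the return value is already a batch.
loader = DataLoader(dset, sampler=batch_sampler, batch_size=None, num_workers=8)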
|
st31080
|
When I executed the source code from this tutorial: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
it said that:
(base) [yq@local pytorch-examples]$python3 tv-training-code.py
Traceback (most recent call last):
File "tv-training-code.py", line 13, in <module>
from engine import train_one_epoch, evaluate
ModuleNotFoundError: No module named 'engine'
I don’t know which package is called ‘engine’ here.
|
st31081
|
Solved by ptrblck in post #2.
|
st31082
|
From the tutorial:
In references/detection/ , we have a number of helper functions to simplify training and evaluating detection models. Here, we will use references/detection/engine.py , references/detection/utils.py and references/detection/transforms.py . Just copy them to your folder and use them here.
|
st31083
|
So how do we get the references/detection/ folder? What should we download and install? I have installed pytorch and torchvision in my environment, but I could not find those files. Thanks
|
st31084
|
You could copy/paste these files from the torchvision repository or clone it locally.
|
st31085
|
https://github.com/pytorch/vision (Datasets, Transforms and Models specific to Computer Vision)
|
st31086
|
git clone https://github.com/pytorch/vision
git clone https://github.com/cocodataset/cocoapi
In vision/references/detection: copy all .py files to the directory containing tv-training-code.py.
Copy cocoapi/PythonAPI/pycocotools to the directory containing tv-training-code.py.
|
st31087
|
I am working on a binary classification problem for which I am using BCEWithLogitsLoss and (since my data is imbalanced) a binary focal loss. To get the probability of a sample being positive, I apply the sigmoid function to the output of the model. However, even though I could improve my model from 68% AUC to 81% AUC, when I look at the predicted probabilities, they are always below 0.5. Am I doing something wrong?
|
st31088
|
If the probabilities are always below 0.5, which I assume is the threshold used to get the predictions, I would also guess that the model is only predicting class 0, i.e. the negative class in a binary classification setup?
If so, did you check the confusion matrix or other metrics?
|
st31089
|
Thanks for your reply! I will plot the confusion matrix, but how do I know which is the correct threshold? So far I have assumed it is 0.5, but it could be different, couldn't it?
|
st31090
|
0.5 could be a good starting point, and you could check the ROC curve for different thresholds and pick the most suitable one for your use case.
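For example, a rough sketch of picking a threshold from the ROC curve (assuming scikit-learn is available; y_true and y_score stand in for your labels and sigmoid probabilities as numpy arrays):
import numpy as np
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best_idx = np.argmax(tpr - fpr)            # Youden's J statistic, one possible criterion
best_threshold = thresholds[best_idx]
preds = (y_score >= best_threshold).astype(int)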
|
st31091
|
Thanks, I will try do to that as well.
So when using 0.5 as the threshold, I get the following confusion matrix:
That means that my model actually predicts both classes, but its positive predictions are all false positives, so it doesn't get any of the true positives, correct? I don't understand how that is possible. Do you think the threshold could affect this?
|
st31092
|
Yes, it seems that no true positives are returned. Also yes, you can check the ROC curve to select the threshold for the TPR vs. FPR trade-off you want.
|
st31093
|
I adjusted the threshold accordingly (it is 0.25). Does it make sense that my lowest probability is only 0.126 (I’d have assumed something closer to 0) and my highest (best predictions for TP) is still below 0.5? Should I not expect something close to 1?
|
st31094
|
The small range of values might indicate that the model wasn't trained for many epochs and thus the logits have a relatively narrow range. You could try to train the model for more epochs, in case it's not overfitting.
|
st31095
|
Thanks for your reply! So far it is overfitting after more than 30 epochs but I will try to make further adjustments.
|
st31096
|
I need to aggregate the outputs from different networks, then do some calculations based on these aggregated outputs, and finally update the individual networks. I am unclear about how to keep the outputs obtained in the first for loop (from the individual networks) and use them in the second for loop to calculate the loss and update the individual networks. I would very much appreciate any suggestions.
outputs = []
for i in range(noOfModels):
    optimizer[i].zero_grad()
    tmp = net[i](img)  # this will return an output of size 10x10
    outputs.append(tmp)
outputs = torch.stack(outputs, 2)
x = do_something(outputs)  # do some calculations with outputs
for i in range(noOfModels):
    loss = criterion(outputs[:,:,i], x)
    loss.backward()
    optimizer[i].step()
|
st31097
|
Solved by tom in post #2.
|
st31098
|
I would probably just add the losses like this:
loss = 0
for i in range(noOfModels):
    loss = loss + criterion(outputs[:,:,i], x)
loss.backward()
for i in range(noOfModels):
    optimizer[i].step()
This should work if you didn’t break the computational graph somewhere.
|
st31099
|
So I have this network that has the following architecture:
(architecture diagram, 1960×1464, omitted)
The goal of this architecture is to extract features out of a sequence of 30 frames, without knowing what those features are. The 30 frames are represented as images, but they aren’t real images, which is why we can’t have a classification prior to passing it through the CNN.
After the "images" go through the CNN, they become vectors, which are fed to an LSTM that will try to find patterns through time. We have the label that should be output by the LSTM, a 0 or a 1, but we don't have a classification target for the CNN. This architecture's whole goal is basically to get rid of feature engineering (based on this paper: https://www.researchgate.net/publication/319897145_Deep_Model_for_Dropout_Prediction_in_MOOCs).
Now, my question is whether or not our model would backpropagate properly. Here is our code:
class MyCNN(nn.Module):
    def __init__(self):
        super(MyCNN, self).__init__()
        # convolutional layer (sees matrices 24x48x1)
        self.conv1 = nn.Conv2d(1, 20, 5)
        # convolutional layer (sees matrices 20x44x20)
        self.conv2 = nn.Conv2d(20, 50, 5)
        # max pooling layer
        self.pool = nn.MaxPool2d(2, 2)
        # fully connected layer (takes as input a matrix of 3x9x50, makes it into a vector of 20)
        self.fc = nn.Linear(3 * 9 * 50, 20)

    def forward(self, x):
        # add sequence of convolutional and max pooling layers
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        # flatten image input (is it to keep???)
        x = x.view(-1, 3 * 9 * 50)
        # add 1st hidden layer, with relu activation function
        x = F.relu(self.fc(x))
        return x
We then define the ConRec network, which takes as input the output of MyCNN():
class ConRec(nn.Module):
    def __init__(self):
        super(ConRec, self).__init__()
        # lstm layer (20 long vectors)
        self.lstm = nn.LSTM(20, 50, 1, batch_first=True)

    def forward(self, x):
        x = F.relu(self.lstm(x))
        return x
And finally, we gather the output of the 30 frames passed through the CNN and concatenate them (the code might be wrong, I’m still a beginner!). The CNN is the same for all 30 branches, so it only has one set of weights.
And then we feed that through the LSTM.
for d in data:
    output = MyCNN(d)
    full_out = torch.cat([full_out, output], dim=1)
# 30 items in data
result = ConRec(full_out)
When I apply the loss, would it backpropagate properly? I’m afraid it would only apply the loss to the LSTM and not the CNN, since they’re not directly connected.
|
st31100
|
Solved by KFrank in post #2.
|
st31101
|
Hi Louis-Vincent!
lv_Poellhuber:
When I apply the loss, would it backpropagate properly? I’m afraid it would only apply the loss to the LSTM and not the CNN, since they’re not directly connected.
I haven't looked at your code, but yes, in general the kind of thing you propose doing should work just fine.
Make sure you understand the basics of how pytorch uses the requires_grad property of a tensor. Normally the weights, etc., of your model, and the tensors that depend on them, will have requires_grad = True. The input to your CNN would normally have requires_grad = False. However, the output of your CNN will have requires_grad = True (because it depends on the weights of the CNN). The "connection" between the LSTM and the CNN is provided by the fact that the output of your CNN (which has requires_grad = True) is also the input to your LSTM. This is what permits pytorch's autograd system to backpropagate your final loss through the LSTM all the way back to the weights of your upstream CNN.
Best.
K. Frank
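A minimal sketch (hypothetical, simplified shapes) of that check: feed frames through a small CNN, stack the feature vectors into a sequence for an LSTM, backpropagate a loss from the LSTM side, and confirm the CNN weights received gradients.
import torch
import torch.nn as nn

cnn = nn.Sequential(nn.Conv2d(1, 4, 3), nn.Flatten(), nn.Linear(4 * 8 * 8, 20))
lstm = nn.LSTM(20, 50, batch_first=True)
head = nn.Linear(50, 1)

frames = torch.rand(30, 1, 10, 10)           # 30 "images", 1 channel, 10x10 each
feats = cnn(frames).unsqueeze(0)             # (1, 30, 20): one sequence for the LSTM
out, _ = lstm(feats)
loss = torch.nn.functional.binary_cross_entropy_with_logits(
    head(out[:, -1]), torch.ones(1, 1))
loss.backward()
print(cnn[0].weight.grad is not None)        # True: gradients reached the CNN weights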
|
st31102
|
def predict(image_path, model, top_k=top_num):
    model.to(power)
    model.eval()
    torch_image = torch.from_numpy(np.expand_dims(process_image(image_path),
                                                  axis=0)).type(torch.FloatTensor).to(power)
    log_prob = model.forward(torch_image)
    linear_probs = torch.exp(log_prob)
    top_probs, top_labs = linear_probs.topk(top_k, dim=1)
    idx_to_class = {}
    for key, value in model.class_to_idx.items():
        idx_to_class[value] = key
    np_top_labs = top_labs[0].numpy()
    top_labels = []
    for label in np_top_labs:
        top_labels.append(int(idx_to_class[label]))
    top_flowers = [cat_to_name[str(lab)] for lab in top_labels]
    return top_probs, top_labels, top_flowers

image_path = path_image
prob, classes, flowers = predict(image_path, model)
|
st31103
|
As the error describes, you would have to transfer the CUDA tensor to the CPU first, since numpy cannot use GPU tensors.
Assuming this line of code raises the issue:
np_top_labs = top_labs[0].numpy()
use this instead:
np_top_labs = top_labs[0].cpu().numpy()
(and .detach(), if necessary).
|
st31104
|
I have the value tensor([4, 3, 6]) and index tensor([1, 1, 2])
Is there some way that I could get the result tensor([[4, 3], [6,0]]) ?
|
st31105
|
Solved by eduardo4jesus in post #14.
|
st31106
|
I don't quite understand the operation you are trying to achieve.
Are you looking to reshape the tensor, or to create a tensor based on a selection from a tensor of indices? Also, where does the zero come from in the resulting tensor tensor([[4, 3], [6, 0]])?
|
st31107
|
I want to create a 2-D tensor from the value tensor and the index tensor, but here the index tensor is just 1-D. Is there some way to get the resulting tensor tensor([[4, 3], [6,]])?
|
st31108
|
I am sorry, but I am not sure if I fully understand what you want yet.
Is this what you are looking for?
input = torch.randint(10, (10, ))
input
# tensor([2, 9, 7, 9, 4, 4, 6, 1, 3, 3])
selection = torch.Tensor([0, 5, 4, 7])
selection
# tensor([0., 5., 4., 7.])
input[selection.long()].reshape((2, 2))
# tensor([[2, 4],
# [4, 1]])
|
st31109
|
I am very sorry for my poor expression. If the value tensor is tensor([2, 9, 7, 8, 4])
and the index is tensor([0, 0, 1, 2, 3]),
I want to get a tensor: tensor([[2, 9], [7], [8], [4]]) or tensor([[2, 9], [7, 0], [8, 0], [4, 0]]).
If there is still something I did not make clear, please let me know and I will try another way to express it.
Maybe I need to create a zero tensor
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])
and then the index tensor tensor([0, 0, 1, 2, 3]) is the row index.
|
st31110
|
novice7:
index is tensor([0, 0, 1, 2, 3])
You are giving
input = torch.Tensor([2, 9, 7, 8, 4])
input
# tensor([2., 9., 7., 8., 4.])
indexes = torch.Tensor([0, 0, 1, 2, 3])
indexes
# tensor([0., 0., 1., 2., 3.])
desired = torch.Tensor([[2, 9], [7, 0], [8, 0], [4, 0]])
desired
# tensor([[2., 9.],
# [7., 0.],
# [8., 0.],
# [4., 0.]])
What does it mean to have 0 inside the index tensor? Here I see the value 0 appearing twice. If you are selecting elements from the given tensor input = torch.Tensor([2., 9., 7., 8., 4.]), I would expect to see five elements being selected, and the first element, input[0] which is 2, repeated twice. But I don't see that in the desired output you gave, so I am not following what operation you want.
Can you describe what index does to the input tensor? How come the maximum value in index is 3, yet the last element of the input tensor, input[4] with value 4, is present in the desired output tensor?
|
st31111
|
Additionally, as good practice, try to be as precise as possible with the post subject. That way you can lead the right people to click on your post faster.
|
st31112
|
input[4] matches indexes[4]=3; it means: put input[4]=4 in the third row of the desired tensor.
|
st31113
|
I really appreciate your reply!
I do want to get the tensor
tensor([[2., 9.],
        [7., 0.],
        [8., 0.],
        [4., 0.]])
from input = torch.Tensor([2, 9, 7, 8, 4]).
The index tensor([0, 0, 1, 2, 3]) is just the row index; I do not give a column index here.
From the index tensor, we can see that the 0th row has the maximum length 2 (because 0 is repeated twice in the index tensor), while the others have length 1; so, in order to fill the desired tensor, there are some 0 values in it.
|
st31114
|
I also created a new post which describes what I mean; maybe it will help you get my thought:
I have a value tensor and a row index; how can I get a 2-D tensor?
The value tensor is value = torch.Tensor([2, 9, 7, 8, 4])
and the row index is tensor = torch.Tensor([0, 0, 1, 2, 3]).
2 matches 0, which means 2 goes in the 0th row; the 9 also matches 0, but since desired[0][0] == 2 already, desired[0][1] = 9.
I do want to get the 2-D tensor
desired tensor:
tensor([[2., 9.],
        [7., 0.],
        [8., 0.],
        [4., 0.]])
|
st31115
|
Ok, I understand now what you want. I can't see an elegant way to solve this using plain PyTorch; the "naive" way to do it would be via a for loop.
First, let me redefine what I think you are asking:
Having an input tensor, and another tensor selection, I want to produce the output tensor as the given below.
input = torch.Tensor([2, 9, 7, 8, 4])
input
# tensor([2., 9., 7., 8., 4.])
selection = torch.Tensor([0, 0, 1, 2, 3])
selection
# tensor([0., 0., 1., 2., 3.])
desired = torch.Tensor([[2, 9], [7, 0], [8, 0], [4, 0]])
desired
# tensor([[2., 9.],
# [7., 0.],
# [8., 0.],
# [4., 0.]])
Explanation
selection has the same size as input and contains the row into which each input value will be mapped. If a row index repeats, the next value for that row is placed in a new column; all the other rows are padded with zero in that case.
The following assumption was not stated initially, but if it holds true the code is more straightforward, and the provided example allows us to infer it:
Assumption: all row indexes are represented in the selection tensor.
As I said earlier, I can't think of an elegant way to solve it with short nested PyTorch calls; from what I know, I don't think it is possible. But a solution can definitely be written using a for loop.
Solution using a for loop:
import numpy as np
import torch

unique, counters = selection.unique(return_counts=True)
n_rows, n_cols = int(torch.max(unique)) + 1, int(torch.max(counters.int()))
col = np.zeros((n_rows, ), dtype=int)
output = torch.zeros((n_rows, n_cols))
for data, row in zip(input, selection.tolist()):
    output[row, col[row]] = data
    col[row] += 1
print(col)
output
# tensor([[2., 9.],
#         [7., 0.],
#         [8., 0.],
#         [4., 0.]])
PS: I guess the operation you are looking for can also be described as a sort of group by.
|
st31116
|
I think you should be able to rename this post, so you didn't have to create a new one.
Also, I encourage you to improve the formatting of your posts; there is a toolbar with text style options to assist you. That way people can read what you wrote more comfortably.
|
st31117
|
I am trying to load a model, but I am getting this error. I am working on Windows; I searched the web and this forum but I could not find anything.
Thanks for the help.
gpu = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
gpu
model = torch.load("./faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p", map_location=gpu)
----------------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-5-a95c0d9b8209> in <module>
----> 1 model = torch.load("./faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p", map_location=gpu)
c:\users\oscar\appdata\local\programs\python\python36\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
527 with _open_zipfile_reader(f) as opened_zipfile:
528 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
--> 529 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
530
531
c:\users\oscar\appdata\local\programs\python\python36\lib\site-packages\torch\serialization.py in _legacy_load(f, map_location, pickle_module, **pickle_load_args)
700 unpickler = pickle_module.Unpickler(f, **pickle_load_args)
701 unpickler.persistent_load = persistent_load
--> 702 result = unpickler.load()
703
704 deserialized_storage_keys = pickle_module.load(f, **pickle_load_args)
ModuleNotFoundError: No module named 'network'
|
st31118
|
If you store a model directly via torch.save(model, file_path), you would need to restore the file and folder structure in order to load this model again, as explained here.
Based on the error message it seems that some files with the network definition are missing.
|
st31119
|
Oscar_Rangel:
./faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p"
Hey amigo @ptrblck, thanks for the response!
Are you talking about this?
"./faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p"
Does it need to be the same?
Or are you talking about this?
the_model = TheModelClass()  # declare the class and then try to load it?
Thanks.
|
st31120
|
If you are using this approach: model = torch.load(path), you would need to make sure that all necessary files are in the corresponding folders as they were while storing the model.
The other approach of creating the model first and load the state_dict is more flexible, as you might change the actual file and folder structure and would just have to make sure to create a model with matching parameters.
|
st31121
|
@Oscar_Rangel I encountered the same problem a few days ago. You have to replicate the model's GitHub directory structure to be able to open the file. To avoid that in the future, you can save the model's state_dict instead. Below is how I solved the error:
(screenshot of the code, 993×124, omitted)
|
st31122
|
Hi @ayrts, I have tried your code to load the model, however it returns the same error as below:
ModuleNotFoundError: No module named 'network'
Could you help me fix it? Or could you provide me the .pth files? Thank you.
|
st31123
|
@Oscar_Rangel I fixed the error. You cannot load the model directly, since the saved model references a module structure like
network
|--- __init__.py
|--- models.py
|--- xception.py
Therefore, you need to create a folder named network and copy models.py and xception.py from FF++ into it; then you can import the model file. Remember to add __init__.py so you can import these two files without an absolute path.
|
st31124
|
Solved by SimonW in post #2
module.training is the boolean you are looking for.
|
st31125
|
This is almost exactly the same as this question on stackoverflow.com: "PyTorch data loading from multiple different-sized datasets" (python, pytorch; asked by helium4 on 14 Aug 2018).
I have two datasets A and B. A contains tensors of shape [256,4096] and B contains tensors of shape [32,4096].
Now I can use ConcatDataset to merge A and B, but how do I guarantee that each batch only contains elements from either A or B?
Note, I don't want to resize elements within A or B. These are not images.
The answer on stack overflow mentions batch_sampler.
Can somebody elaborate and give a minimal example?
|
st31126
|
Here is an example of a custom batch_sampler for your case:
import random
import torch
from torch.utils.data import ConcatDataset, DataLoader, Sampler

def chunk(indices, size):
    return torch.split(torch.tensor(indices), size)

class MyBatchSampler(Sampler):
    def __init__(self, a_indices, b_indices, batch_size):
        self.a_indices = a_indices
        self.b_indices = b_indices
        self.batch_size = batch_size
    def __iter__(self):
        random.shuffle(self.a_indices)
        random.shuffle(self.b_indices)
        a_batches = chunk(self.a_indices, self.batch_size)
        b_batches = chunk(self.b_indices, self.batch_size)
        all_batches = list(a_batches + b_batches)
        all_batches = [batch.tolist() for batch in all_batches]
        random.shuffle(all_batches)
        return iter(all_batches)

new_dataset = ConcatDataset((dataset_a, dataset_b))
a_len = dataset_a.__len__()
ab_len = a_len + dataset_b.__len__()
a_indices = list(range(a_len))
b_indices = list(range(a_len, ab_len))
batch_sampler = MyBatchSampler(a_indices, b_indices, batch_size)
dl = DataLoader(new_dataset, batch_sampler=batch_sampler)
To verify that each batch only contains elements from either A or B:
for x in batch_sampler:
    print(x)
|
st31127
|
def __len__(self):
    return (len(self.a_indices) + len(self.b_indices)) // self.batch_size
??
|
st31128
|
I am trying to average subword embeddings to form a word-level representation. Each word has a corresponding start and end index, indicating which subwords make up that word.
sequence_output is a tensor of B * 384 * 768, where 384 is the max sequence length, and 768 is the number of features.
all_token_mapping is a tensor of B * 384 * 2, which contains a start and end index. It is padded with [-1, -1].
initial_reps is a tensor of num_nodes * 768, num_nodes is the sum of all the number of words (not subwords) in the different samples.
initial_reps = torch.empty((num_nodes, 768), dtype=torch.float32)
current_idx = 0
for i, feature_tokens_mapping in enumerate(all_token_mapping):
    for j, token_mapping in enumerate(feature_tokens_mapping):
        if token_mapping[0] == -1:  # reached the end for this particular sequence
            break
        initial_reps[current_idx] = torch.mean(sequence_output[i][token_mapping[0]:token_mapping[-1] + 1], 0, keepdim=True)
        current_idx += 1
My current code will create an empty tensor of length num_nodes, and a for loop will calculate the values at each index, by checking token_mapping[0] and token_mapping[1] for the correct slice of sequence_output to average.
Is there a way to vectorize this code?
In addition, I have a list that holds the number of words for each sample. i.e. the sum of all the elements in the list == num_nodes
Thank you.
|
st31129
|
Solved by wetfire in post #4.
|
st31130
|
Not sure how to edit the original post so posting as comment instead.
Will use a simpler example.
sequence_output is a tensor of B * 3 * 2, where 3 is the max sequence length, and 2 is the number of features.
all_token_mapping is a tensor of B * 3 * 2, which contains a start and end index.
initial_reps is a tensor of num_nodes * 2, num_nodes is the sum of all the number of words (not subwords) in the different samples.
sequence_output = torch.arange(2*3*2).float().reshape(2, 3, 2)
tensor([[[ 0., 1.],
[ 2., 3.],
[ 4., 5.]],
[[ 6., 7.],
[ 8., 9.],
[10., 11.]]])
all_token_mapping = torch.tensor([[[0,0],[1,2],[-1,-1]], [[0,2],[-1,-1],[-1,-1]]])
tensor([[[ 0, 0],
[ 1, 2],
[-1, -1]],
[[ 0, 2],
[-1, -1],
[-1, -1]]])
num_nodes = 0
for sample in all_token_mapping:
    for mapping in sample:
        if mapping[0] != -1:
            num_nodes += 1
num_nodes
# 3
initial_reps = torch.empty((num_nodes, 2), dtype=torch.float32)
current_idx = 0
for i, feature_tokens_mapping in enumerate(all_token_mapping):
    for j, token_mapping in enumerate(feature_tokens_mapping):
        if token_mapping[0] == -1:  # reached the end for this particular sequence
            break
        initial_reps[current_idx] = torch.mean(sequence_output[i][token_mapping[0]:token_mapping[-1] + 1], 0, keepdim=True)
        current_idx += 1
initial_reps
tensor([[0., 1.],
[3., 4.],
[8., 9.]])
In the example above, initial_reps[0] will be the mean of sequence_output[0][0:1], initial_reps[1] will be the mean of sequence_output[0][1:3], and initial_reps[2] will be the mean of sequence_output[1][0:3].
My current code will create an empty tensor of length num_nodes, and a for loop will calculate the values at each index, by checking token_mapping[0] and token_mapping[1] for the correct slice of sequence_output to average.
Is there a way to vectorize this code?
In addition, I have a list that holds the number of words for each sample. i.e. the sum of all the elements in the list == num_nodes
|
st31131
|
I think the encoding with "break" makes it a bit hard.
If you encode sequence ids instead (which can be done with a comparison and cumsum if you don't have them already), you can use index_add or some scatter_add implementation.
Last I looked, the public implementations for PyTorch were not terribly optimized, but what can you do.
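A hedged sketch of that index_add idea, reusing the small example from the earlier comment (the segment ids below are written out by hand and padding is ignored for brevity):
import torch

sequence_output = torch.arange(2 * 3 * 2).float().reshape(2, 3, 2)
# one global word id per (non-padded) subword token, derived from all_token_mapping
segment_ids = torch.tensor([0, 1, 1,      # sample 0: word 0 spans token 0, word 1 spans tokens 1-2
                            2, 2, 2])     # sample 1: word 2 spans tokens 0-2
tokens = sequence_output.reshape(-1, 2)   # flatten batch and sequence dims

num_nodes = int(segment_ids.max()) + 1
sums = torch.zeros(num_nodes, 2).index_add_(0, segment_ids, tokens)
counts = torch.zeros(num_nodes).index_add_(0, segment_ids, torch.ones_like(segment_ids, dtype=torch.float))
initial_reps = sums / counts.unsqueeze(1)
print(initial_reps)                       # tensor([[0., 1.], [3., 4.], [8., 9.]])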
|
st31132
|
Thanks to tom, I found out that scatter_add exists, and from there I found torch_scatter’s segment_coo
Here’s my solution now:
initial_reps_list = []
for i, sample_output in enumerate(sequence_output):
    token_mapping = all_token_mapping[i]
    token_mapping = token_mapping[token_mapping != -1]
    non_padded_outputs = sample_output[:num_bert_tokens[i]]
    initial_reps_list.append(torch_scatter.segment_coo(non_padded_outputs, token_mapping, reduce="mean"))
initial_reps = torch.cat(initial_reps_list)
token_mapping is a list of indices in ascending order up to the max sequence length, padded with -1. I loop through the batch, for each sample, I get the token mapping, and only keep the non-negative indices.
num_bert_tokens is a list that holds, for each sample, the number of tokens (no padding). I get the non-padded outputs, use segment_coo to reduce them according to the token_mapping, and append them all to a list.
After the loop, I concatenate all the tensors in the list together.
The method segment_coo reduces all values from the src tensor into out at the indices specified in the index tensor along the last dimension of index. More details can be found at: Segment COO — pytorch_scatter 2.0.6 documentation
Code runs much faster now
|
st31133
|
Are the tensors saved for backward, as below, freed or deleted automatically after the backward pass?
ctx.save_for_backward(input, weight, bias)
I am trying to get around memory used problems.
|
st31134
|
Solved by ptrblck in post #2.
|
st31135
|
Yes, these tensors should be freed after the backward().
To double check it, you could use this example and add some print statements to check the memory:
for t in range(5):
    # To apply our Function, we use Function.apply method. We alias this as 'relu'.
    relu = MyReLU.apply

    # Forward pass: compute predicted y using operations; we compute
    # ReLU using our custom autograd operation.
    print('='*10, t, '='*10)
    print(torch.cuda.memory_allocated()/1024)
    y_pred = relu(x.mm(w1)).mm(w2)
    print(torch.cuda.memory_allocated()/1024)

    # Compute and print loss
    loss = (y_pred - y).pow(2).sum()
    print(torch.cuda.memory_allocated()/1024)
    if t % 100 == 99:
        print(t, loss.item())

    # Use autograd to compute the backward pass.
    loss.backward()
    print(torch.cuda.memory_allocated()/1024)

    # Update weights using gradient descent
    with torch.no_grad():
        w1 -= learning_rate * w1.grad
        w2 -= learning_rate * w2.grad

        # Manually zero the gradients after updating weights
        w1.grad = None
        w2.grad = None
    print(torch.cuda.memory_allocated()/1024)
which shows that the memory falls down to the initial usage:
========== 0 ==========
647.5
700.0
703.0
1045.5
650.5
========== 1 ==========
650.5
700.5
703.0
1045.5
650.5
========== 2 ==========
650.5
700.5
703.0
1045.5
650.5
========== 3 ==========
650.5
700.5
703.0
1045.5
650.5
========== 4 ==========
650.5
700.5
703.0
1045.5
650.5
Note that I’ve replaced the .grad attributes with None to free these tensors as well instead of zeroing them out.
|
st31136
|
ptrblck:
Note that I’ve replaced the .grad attributes with None to free these tensors as well instead of zeroing them out.
Excellent, thank you so much. I had not thought yet about assigning the .grad to None after the backward. I totally missed that!
Currently, I was only deleting tensors whenever they were not needed anymore, such as in X = torch.fft.fft2d(x); del x;. Deleting the model weight’s .grad will definitely improve things!
Thanks a lot!
|
st31137
|
In case you are using an optimizer or are calling zero_grad on the module itself, note that you can use the set_to_none=True argument of zero_grad for the exact same reason of saving memory.
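For example (a tiny sketch; optimizer and model stand for your own objects):
optimizer.zero_grad(set_to_none=True)
# or, when zeroing gradients on the module itself
model.zero_grad(set_to_none=True)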
|
st31138
|
Thank you so much again for these precious tips. I just had another question on this topic. Is there a way to free the tensors saved for backwards or the grad_output before the end of backward?
Say I have something like:
def backward(cls, ctx, grad_output):
    ...
    del grad_output
    ...
I imagine that the above is pointless, since there would still be a reference to grad_output on the call stack, right? Would there be a way to do something like this?
|
st31139
|
It doesn’t seem to make a difference, if I del the tensor and check the memory before and after this operation.
|
st31140
|
ptrblck:
It doesn’t seem to make a difference, if I del the tensor and check the memory before and after this operation.
Yes, I totally agree.
eduardo4jesus:
I imagine that the above is pointless since there would still be a reference for grad_output on the call stack
As far as I know, Python is heavily based on references instead of pointers and it manages memory allocation on its own. But I wonder if there would be a way to do something inside backward that would invalidate the reference on the call stack, or somehow just force a memory deallocation. I am just realizing now that this would be an OFF-(Torch)-Topic question.
|
st31141
|
Tensors saved for the backward pass from forward ARE actually freed automatically during the corresponding backward!
If you are curious, you can check the autograd engine's evaluate function: we call fn.release_variables() if keep_graph (i.e., the retain_graph parameter) is set to False.
github.com
pytorch/pytorch/blob/master/torch/csrc/autograd/engine.cpp#L769
at::ThreadLocalStateGuard tls_guard(graph_task->thread_locals_);

// Switches to a function's CUDA stream (if applicable) before calling it
const auto opt_parent_stream = (*func).stream(c10::DeviceType::CUDA);
c10::OptionalStreamGuard parent_stream_guard{opt_parent_stream};

auto outputs = call_function(graph_task, func, inputs);

auto& fn = *func;
if (!graph_task->keep_graph_) {
  fn.release_variables();
}

int num_outputs = outputs.size();
if (num_outputs == 0) { // Note: doesn't acquire the mutex
  // Records leaf stream (if applicable)
  // See note "Streaming backwards"
  if (opt_parent_stream) {
    std::lock_guard<std::mutex> lock(graph_task->mutex_);
    graph_task->leaf_streams.emplace(*opt_parent_stream);
  }
|
st31142
|
Thanks for showing that piece of code. It's very nice to know where that happens, but that was already pointed out by @ptrblck. Now I am wondering if there is a way to free those tensors (the ones saved for backward) and also (if possible) the grad_output somewhere inside the backward function.
One of the issues I face is that the calculations I do in the backward use a lot of memory. It is not really that much considering only a single layer, but since some CNN architectures can get really deep and wide, I just run out of memory in such scenarios. I am looking to release those tensors right after using them, while still having other computations left.
Maybe my only hope in those scenarios is to rely heavily on in-place operations.
|
st31143
|
Hi, I have a model (an nn.Module with multiple nested nn.Modules as attributes). I want to get all its parameters in a 1D vector, perform some operations on it without changing its length, and put the result back into the model as the new parameters.
For getting the parameters I am thinking of something like:
all_param = []
for param in model.parameters():
    all_param.append(param.view(-1))
vec = torch.cat(all_param, dim=0)
# do some operations on vec
# ? put vec back into model.
I am looking into the state dictionary to put it back, but there can be nested modules in the model. Since parameters() just follows an ordered traversal over the nesting, can I use it to update the Module as well, or is there a shorter way to do so?
|
st31144
|
Solved by soulitzer in post #4.
|
st31145
|
Can you give an example of “some operation”? Because what you are trying to do can be very inefficient on large models, especially if you do it repeatedly.
But one way to do it as you ask is to keep the initial shapes of the parameters in the first loop, along with their lengths after view(). Once your operations are finished, you split the vector resulting from the concatenation according to the stored lengths, then give each parameter back its initial shape.
|
st31146
|
Thanks for the reply.
These are small toy models; the use case is an experiment I am trying to do to generate models.
Do you know, if I have the weights of a model in a vector with the correct shapes, how I can initialise a model with those weights?
|
st31147
|
You can use copy_ under no_grad mode:
model = torch.nn.Linear(10, 10)

def f(t):
    return t * t

params = [f(p) for p in model.parameters()]
with torch.no_grad():
    params = [p.copy_(q) for (p, q) in zip(model.parameters(), params)]
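As a related note (a small sketch, not part of the original answer), torch.nn.utils also provides parameters_to_vector and vector_to_parameters, which do this flatten-and-restore round trip directly:
import torch
from torch.nn.utils import parameters_to_vector, vector_to_parameters

model = torch.nn.Linear(10, 10)
vec = parameters_to_vector(model.parameters())    # 1-D tensor holding all parameters
vec = vec * vec                                    # some operation, same length
with torch.no_grad():
    vector_to_parameters(vec, model.parameters())  # write the values back into the model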
|
st31148
|
Hello,
I was training a modified GAN (One-Class Adversarial Network) for 2-class classification and encountered a weird bug:
As I wanted to get probabilities from logits using softmax (I tried both nn.functional.softmax and nn.Softmax), the output was just close to zero (and this was NOT caused by computing it along the wrong dim). The logits ranged from -20 to 20, which at first I thought could be a problem, but even for small differences like -2 / 2 it output something like 10^-5 and 10^-8. I then used a simple sigmoid on the first coordinate of the logits and it was all fine, but I wanted to discuss this issue and why a robust function like softmax could have this problem. By the way, softmax behaved well at first (for a couple of hundred epochs), but then it suddenly output something wrong.
|
st31149
|
Hi Basile!
Basile:
The logits ranged from -20 to 20, which at first I thought could be a problem, but even for the small differences like -2 / 2 it output like 10^-5 and 10^-8.
As you’ve described things, this can’t happen (unless I give a perverse
interpretation to your description).
Could you capture the issue numerically, and post a complete, runnable
script that reproduces the issue? Please also let us know what version
of pytorch you are running.
Best.
K. Frank
|
st31150
|
Hi -
Is there a way to get the layers in the order of the data flow? I need to change the size of the Conv2d input and output channels one by one, which means the number of input channels should be set to the number of the output channels from the previous convolution. I tried model.children() but it doesn’t work.
|
st31151
|
You can use a forward hook in this way:
import torch
import torch.nn as nn

class Foo(nn.Module):
    def __init__(self):
        super(Foo, self).__init__()
        self.m1 = nn.Conv2d(1, 2, 3)
        self.m2 = nn.BatchNorm2d(2)
        self.m3 = nn.ReLU()
        self.m4 = nn.Conv2d(2, 3, 3)

    def forward(self, x):
        x = self.m1(x)
        x = self.m2(x)
        x = self.m3(x)
        x = self.m4(x)
        return x

modules = []

def add_hook(m):
    def forward_hook(module, input, output):
        modules.append(module)
    m.register_forward_hook(forward_hook)

foo = Foo()
foo.apply(add_hook)  # function `add_hook` is applied to every submodule, including self
input = torch.rand(1, 1, 10, 10)
foo(input)  # hooks are fired sequentially from model input to the output
print(modules)
which prints out:
[Conv2d(1, 2, kernel_size=(3, 3), stride=(1, 1)),
BatchNorm2d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True),
ReLU(),
Conv2d(2, 3, kernel_size=(3, 3), stride=(1, 1)),
Foo(
(m1): Conv2d(1, 2, kernel_size=(3, 3), stride=(1, 1))
(m2): BatchNorm2d(2, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(m3): ReLU()
(m4): Conv2d(2, 3, kernel_size=(3, 3), stride=(1, 1))
)]
|
st31152
|
Also note that model.modules() or model.children() do not guarantee that we get the resident modules sequentially from model input to output; they return modules in the order of registration.
|
st31153
|
Thanks, I’m also looking at torch.nn.Sequential(*list(model.children())) but I’m not sure if it works with residual connections.
|
st31154
|
For a residual connection, I'd recommend writing a customized class. See the official torchvision code here: vision/resnet.py at master · pytorch/vision · GitHub.
|
st31155
|
I’ve searched on google and suggested threads here for an answer to this but couldn’t find one.
So, let’s say I have these 4 images as input:
########################
### Images / Dataset ###
Image1 = torch.rand((3, 255, 255))
Image2 = torch.rand((3, 320, 320))
Image3 = torch.rand((3, 320, 320))
Image4 = torch.rand((3, 120, 120))
I tried to create a simple dataset and data loader for this:
def VariedSizedImagesCollate(batch):
    return [item for item in batch]

class Images_X_Dataset(Dataset):
    def __init__(self, ListOfImages):
        self.data = ListOfImages
    def __getitem__(self, index):
        return self.data[index]
    def __len__(self):
        return len(self.data)

MyDataset = Images_X_Dataset([Image1, Image2, Image3, Image4])
MyDataLoader = torch.utils.data.DataLoader(dataset = MyDataset, batch_size = 4, shuffle = True, collate_fn = VariedSizedImagesCollate, pin_memory = True)
We can take one batch and put it in the variable MyX
MyX = next(iter(MyDataLoader))
Now we have a simple fully convolutional network (so that the network itself can handle different size pictures without any tricks).
#############################################
### Model to work with Varied Size Images ###
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.Conv1 = nn.Conv2d(in_channels = 3, out_channels = 32, kernel_size = (7, 7))
        self.Conv2 = nn.Conv2d(in_channels = 32, out_channels = 64, kernel_size = (5, 5))
        self.Decv1 = nn.ConvTranspose2d(in_channels = 64, out_channels = 1, kernel_size = (4, 4))
        self.Sigm1 = nn.Sigmoid()

    def forward(self, X):
        out = self.Conv1(X)
        out = self.Conv2(out)
        out = self.Decv1(out)
        out = self.Sigm1(out)
        return out

model = Net()
Now if I instantiate the model and pass the 1st image of MyX, I will get an output as I should.
MyOutImg = model(MyX[0].unsqueeze(0))
print("Original shape of 1st image:", MyX[0].shape, "Output's 1st image shape: ", MyOutImg[0].shape)
Original shape of 1st image: torch.Size([3, 120, 120]) Output’s 1st image shape: torch.Size([1, 113, 113])
The problem arises when I want to pass the whole batch:
MyOutImg = model(MyX)
print("Original shape of 1st image:", MyX[0].shape, "Output's 1st image shape: ", MyOutImg[0].shape)
TypeError: conv2d(): argument ‘input’ (position 1) must be Tensor, not list
Which I understand. It couldn't be a tensor (as far as I understand) because the pictures have different sizes, and a tensor requires the same size for all pictures to hold N of them as NxCxHxW, so VariedSizedImagesCollate() does return a list.
I tried torch.cat and torch.stack, but they both seem to require same-size images. So what is a way to pass the whole batch?
I also need it to work for backpropagation as well, but I guess if it works in the forward pass it should work in the backward pass too.
|
st31156
|
You can pad the images to the same size if you truly want to avoid resizing.
There are two fundamental reasons why variable-sized images cannot be combined in a batch:
(1) When trying to stack or concat the tensors when their spatial dimensions are not the same, what should the output tensor shape be? How can the spatial dimensions be unified as there is a single output tensor?
(2) Even suspending the first issue for a moment and considering the possibility that a tensor could exist with different spatial dimensions for different batch indices, this would wreak havoc in terms of batch-level parallelization. This fundamentally introduces data-dependent control flow, which means algorithms that are written to repeat the same computation (e.g., sliding a filter across multiple input examples) have to be modified to consider the potential different dimensions of each input.
|
st31157
|
For the 1st point, yeah, I can imagine.
But I've seen papers where they use different sizes of images as input; they explicitly say that they don't resize, but they don't mention how they handle the different sizes in the end.
So I imagined, since I can use this network with different sizes 1 by 1, just not in a batch, that perhaps there would be a way to make a list-like kind of batch instead of a torch tensor.
You know what I mean?
Certainly it's impossible with tensors, but perhaps there's another way?
On the 2nd point, for this list-like batch, I don't think there would be any fundamental change, since predicting with different-size images on this network already works, as long as you do it 1 picture at a time. Instead of iterating over the 1st dimension of an N x C x H x W tensor, you iterate over the length of a list with C x H_i x W_i tensors as elements.
(If a picture is too small for the current architecture, of course that will throw an error.)
|
st31158
|
N1h1l1sT:
On the 2nd point, for this list-like batch, I don’t think there will be any fundamental change since predicting with different size images on this network already works, so long as you do it 1 picture at a time. Just, instead of iterating on the 1st dimension of N x C x H x W tensor, you iterate over the length of the list with C x H_i x W_i tensors as elements of the list
The issue is that the underlying implementations on CPU and GPU are usually written to take advantage of batching for data reuse. Of course, you can always write your own function that takes in a list of tensors, one per image. The performance will likely be much lower than batching (even if batching means zero-padding is used).
I am referring to changes in the underlying kernel implementations. These kernels are highly shape dependent (e.g., the best axis for data reuse changes depending on the relative sizes), and the batch size is an important parameter.
|
st31159
|
I see.
Well, in that case, I guess the best I can do is abandon hope for a list of 3D tensors and pad the images to create one actual 4D tensor.
Perhaps I could sort by height and width so that each batch uses as little padding as possible, just enough to match the biggest picture in that batch.
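A rough sketch of that per-batch padding idea (a hypothetical collate_fn, reusing the MyDataset instance from the first post): each batch is zero-padded only up to the largest height and width present in that batch.
import torch
import torch.nn.functional as F

def PadToBatchMaxCollate(batch):
    max_h = max(img.shape[1] for img in batch)
    max_w = max(img.shape[2] for img in batch)
    # F.pad takes (left, right, top, bottom) for the last two dims
    padded = [F.pad(img, (0, max_w - img.shape[2], 0, max_h - img.shape[1])) for img in batch]
    return torch.stack(padded, dim=0)   # N x C x max_h x max_w

MyDataLoader = torch.utils.data.DataLoader(dataset=MyDataset, batch_size=4, shuffle=True,
                                           collate_fn=PadToBatchMaxCollate, pin_memory=True)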
|
st31160
|
I am experimenting with training word embeddings, so here is how it is going. I have an nxn input matrix, which is n words of length n. Through my model I get an output of nx2, where 2 is the dimension of each word embedding. My training runs perfectly fine with NLL loss. Now, if I want to train on, say, 100 such nxn inputs, how do I implement that in PyTorch? I've read about custom Datasets and DataLoaders; however, my doubt is about the batches. Do I have to change the shape of my tensors to include the batch size across my whole implementation? Or do I just give 100xnxn as input to the Dataset? What I want is for my model to receive an n x n input from the DataLoader. How is that possible?
|
st31161
|
I'm unsure if the n dimension would already refer to the batch dimension or if it would be a separate dimension (e.g. a temporal dimension).
Since PyTorch models expect inputs that already contain a batch dimension, I would guess that the first dimension is already treated as the batch dim.
Depending on the used layers, you could use a DataLoader, specify a batch_size, pass the data with the additional batch dimension to the model, and check if shape mismatch errors are raised.
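For example (a small sketch with made-up shapes), a DataLoader adds the leading batch dimension for you:
import torch
from torch.utils.data import TensorDataset, DataLoader

n = 8
inputs = torch.rand(100, n, n)                 # 100 samples, each n x n
targets = torch.randint(0, 2, (100,))
loader = DataLoader(TensorDataset(inputs, targets), batch_size=10, shuffle=True)

for x, y in loader:
    print(x.shape)   # torch.Size([10, 8, 8]): the model sees batch_size x n x n
    break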
|
st31162
|
Thanks for your response. n is a separate dimension.
I have created a custom model and I'm not using any of PyTorch's predefined models. I haven't implemented a batch size in my custom model, so my guess is I would have to add it there, right?
|
st31163
|
I have a loop, and I am getting a 10x10 tensor in each iteration of that loop. Let's assume that I am running the loop five times; the output after the loop completes should be the concatenation of these tensors, i.e., a size of 10x10x5. How do I concatenate them?
outx = []
for i in range(5):
    tmp = net(x)  # this will return a 10x10 tensor
    outx =        # need to cat tmp with outx in dim=2
outx should have a dimension of 10x10x5.
|
st31164
|
Solved by tom in post #2.
|
st31165
|
Building the list and then using stack at the end is reasonable:
outx = []
for i in range(5):
    tmp = net(x)  # this will return a 10x10 tensor
    outx.append(tmp)
outx = torch.stack(outx, 2)
Best regards
Thomas
|
st31166
|
Hello everyone,
Why do these two implementations of the KL divergence give me different results? Can anybody find the error? The difference is about 5%.
Note: I've commented out the '.sum()' because if I don't, the code colours in the forum text editor change; can anybody suggest why?
Version 1:
def q_z_dist(self, mu, logvar):
    var = torch.exp(logvar)
    std = torch.sqrt(var)
    cov = torch.diag_embed(var)
    return td.MultivariateNormal(mu, cov)

def p_z_dist(self, mu, logvar):
    mu_prior = torch.zeros_like(mu)
    var_prior = torch.ones_like(logvar)
    cov_prior = torch.diag_embed(var_prior)
    return td.MultivariateNormal(mu_prior, cov_prior)

def KL_divergence(self, mu, logvar):
    p_dist = self.p_z_dist(mu, logvar)
    q_dist = self.q_z_dist(mu, logvar)
    KL = td.kl_divergence(p_dist, q_dist)
    KL_batch = KL  # .sum()
    return KL_batch

Version 2:
def KL_divergence(self, mu, logvar):
    KL = -0.5 * (1 + logvar - mu**2 - torch.exp(logvar)).sum(dim=1)
    KL_batch = KL.sum()
    return KL_batch
|
st31167
|
Solved by tom in post #2
So you have prior = p = N(0,1) and q = N(mu, diag(var))
It seems that you compute D_KL(p || q) in the first and the more common D_KL(q || p) in the second. KL Divergence is not symmetric, so these will differ.
Best regards
Thomas
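For reference, assuming q = N(mu, diag(sigma^2)) and p = N(0, I), the standard closed forms are:
D_KL(q || p) = -0.5 * sum_i (1 + log sigma_i^2 - mu_i^2 - sigma_i^2), which is what Version 2 computes, and
D_KL(p || q) = 0.5 * sum_i (log sigma_i^2 + (1 + mu_i^2) / sigma_i^2 - 1), which is what td.kl_divergence(p_dist, q_dist) in Version 1 computes.
So swapping the argument order, i.e. td.kl_divergence(q_dist, p_dist), should make the two versions agree.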
|