st45668
|
Hi all,
I have a (huge) theoretical question connected with a neural-network regression problem: what approach should I follow if my training dataset is incomplete and/or my measurements are affected by strong biases?
Let's assume I want my net to predict the variable Y given the variables X, W, Z. Unfortunately I know that Y also depends on another variable, K, of which I have no records. Of course the training dataset is essential to produce an "optimal" net, but I am wondering whether there is any approach that considers, say, the inclusion of a stochastic variable to take this lack of data into account.
Connected with the previous point: what if my training dataset is not exact? How can I account for the uncertainty in my measurements when training my net?
Thank you!
|
st45669
|
Solved by qmeeus in post #6
Here is one example that can give you a good starting point: https://en.wikipedia.org/wiki/Errors-in-variables_models
|
st45670
|
Assuming that K is related to X, W, Z (i.e. K is not independent of the data D, so P(K | D) ≠ P(K)), then it should not be a problem:
P(Y, K, D) = P(Y | K, D) * P(K, D) = P(Y | K, D) * P(K | D) * P(D)
where the first and second terms on the right-hand side of the equation are estimated by your model and K is a hidden variable.
If the K variable cannot be predicted from the data (i.e. P(K, D) = P(K) * P(D), or equivalently P(K | D) = P(K)), then your model will be missing a variable. It does not mean that it will not manage to predict anything useful, but rather that it does not have all the information needed to predict your target variable correctly.
|
st45671
|
Correct, thank you for your answer. So you're saying: train with what you have. And what about the uncertainty related to the input measurements?
|
st45672
|
There are methods to account for uncertainty in the training data. If you can quantify or estimate it, then you can model the error, but this has more to do with statistics and statistical analysis in general than with neural networks (I mean, the answer for DNNs will not be different than for any other predictive model).
Also, you should know whether it's your inputs, your outputs or both that are biased. Unfortunately, if your outputs are biased, then your model will be biased. If your inputs are biased, then your model might learn to cope with that (to some extent, of course: the expression "garbage in, garbage out" still holds).
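As a purely illustrative sketch (my own assumption of one simple heuristic, not something discussed in this thread): if you have an estimate sigma of the measurement noise on the inputs, you can expose the network to that uncertainty by perturbing the inputs during training.
import torch

# hypothetical helper: x is a batch of measured inputs, sigma the (assumed known)
# per-feature standard deviation of the measurement error
def add_measurement_noise(x: torch.Tensor, sigma: torch.Tensor) -> torch.Tensor:
    # inject Gaussian noise that matches the estimated measurement uncertainty
    return x + torch.randn_like(x) * sigma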
|
st45673
|
Can you provide the names of some of these statistical analyses you're talking about? Thanks.
|
st45674
|
Here is one example that can give you a good starting point: https://en.wikipedia.org/wiki/Errors-in-variables_models
|
st45675
|
I have CIFAR-10 input data of size x_train: (50000, 3072) and y_train: (50000,). I want to pack x_train and y_train into a trainloader with a batch size of 100, so that when I iterate as follows:
for batch_idx, (inputs, targets) in enumerate(trainloader):
print(inputs.shape)
print(targets.shape)
Output:
(100,32,32,3)
(100,)
I have wasted a lot of time trying to do this. Can someone help me with it?
|
st45676
|
You could write a basic custom dataset and use that with a dataloader.
class DS(torch.utils.data.Dataset):
    def __init__(self, X=None, y=None, mode="train"):
        self.mode = mode
        self.X = X  # maybe do the reshaping to (N, 3, 32, 32) here
        if mode == "train":
            self.y = y

    def __len__(self):
        return self.X.shape[0]

    def __getitem__(self, idx):
        if self.mode == "train":
            # or torch.FloatTensor(self.y[idx]) depending on the use case
            return torch.FloatTensor(self.X[idx]), torch.LongTensor(self.y[idx])
        else:
            return torch.FloatTensor(self.X[idx])

tr_data_setup = DS(X_train, y_train.reshape(-1, 1))
trainloader = torch.utils.data.DataLoader(tr_data_setup, batch_size=100, ...)
You could also expand this to perform augmentations on the image if necessary.
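A quick usage sketch (assuming X_train is reshaped to (N, 3, 32, 32) inside DS, as hinted in the comment above):
for batch_idx, (inputs, targets) in enumerate(trainloader):
    print(inputs.shape)   # e.g. torch.Size([100, 3, 32, 32])
    print(targets.shape)  # torch.Size([100, 1]) with the reshape above, torch.Size([100]) without it
    break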
|
st45677
|
I have a tensor X with dimensions E x p and a second tensor Y with dimensions C x p. I calculated the distance matrix with cdist(), which gives me a tensor Z with dimension E x C. Now I am searching for an operation in PyTorch that maps matrices of size (E, C) and (C, p) to a matrix of size (E, p). What is the best way to do this in PyTorch?
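If a plain matrix product is what is meant here (an assumption on my part, since the question only specifies the shapes), torch.matmul already provides exactly that mapping:
import torch

E, C, p = 8, 5, 3
X = torch.randn(E, p)
Y = torch.randn(C, p)
Z = torch.cdist(X, Y)   # (E, C) pairwise distances
out = Z @ Y             # same as torch.matmul(Z, Y): (E, C) x (C, p) -> (E, p)
print(out.shape)        # torch.Size([8, 3])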
|
st45678
|
Hi, I got an error while trying to install PyTorch:
PS C:\windows\system32> pip install torch===1.7.0+cu110 torchvision===0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch===1.7.0+cu110 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch===1.7.0+cu110
|
st45679
|
Solved by Redcxx in post #5
Ah my bad sorry, I installed the 32 bit python instead of 64 bits, so it is not supported
|
st45680
|
What version of Python and Pip are you running?
Have you tried updating pip? python -m pip install --upgrade pip
|
st45681
|
Hi I am using python 3.8.6 and pip 20.2.4
PS C:\windows\system32> py -m pip install --upgrade pip
Requirement already up-to-date: pip in d:\environments\python\lib\site-packages (20.2.4)
PS C:\windows\system32> py --version
Python 3.8.6
|
st45682
|
I tried to install it directly from the website but it does not work:
PS C:\windows\system32> pip install https://download.pytorch.org/whl/cu110/torch-1.7.0%2Bcu110-cp38-cp38-win_amd64.whl
ERROR: torch-1.7.0+cu110-cp38-cp38-win_amd64.whl is not a supported wheel on this platform.
|
st45683
|
Ah, my bad, sorry. I installed 32-bit Python instead of 64-bit, so it is not supported.
|
st45684
|
Hi,
when using torch.clamp(), the derivative w.r.t. its input is zero if the input is outside [min, max]. Due to the chain rule, this results in all gradients for previous operations in the graph becoming zero.
In TensorFlow, one can use tf.stop_gradient (https://www.tensorflow.org/api_docs/python/tf/stop_gradient) to prevent this behavior. Is there something similar for PyTorch?
|
st45685
|
Below is a minimal working example. A neural net with one weight is supposed to push its input towards 5. To reduce the range of possible values, the output is clipped to [4, 6].
Ideally, the network should still find the optimal value despite this "prior", but it gets stuck due to the zero gradient.
This would not happen if the derivative of the clamp function could be excluded from backpropagation.
import torch
import numpy as np
import torch.nn as nn
x = torch.from_numpy(np.array(([1])).astype(np.float32)) # one scalar as input
layer = nn.Linear(1, 1, bias=False) # neural net with one weight
optimizer = torch.optim.Adam(params=layer.parameters(), lr=1e-3)
for i in range(101):
w = list(layer.parameters())[0] # weight before backprop
y = layer(x) # y = w * x
f_y = torch.clamp(y, min=4, max=6) # f(y) = clip(y)
loss = torch.abs(f_y - 5) # absolute error, zero if f(y) = 5
optimizer.zero_grad()
loss.backward()
grad = w.grad
if (i % 100 == 0) or (i == 0):
print('iteration {}'.format(i))
print('w: {:.2f}'.format(w.detach().numpy()[0][0]))
print('y: {:.2f}'.format(y.detach().numpy()[0]))
print('f_y: {:.2f}'.format(f_y.detach().numpy()[0]))
print('loss: {:.2f}'.format(loss.detach().numpy()[0]))
print('grad: {:.2f}\n'.format(grad.detach().numpy()[0][0]))
optimizer.step()
iteration 0
w: 0.96
y: 0.96
f_y: 4.00
loss: 1.00
grad: 0.00
iteration 100
w: 0.96
y: 0.96
f_y: 4.00
loss: 1.00
grad: 0.00
|
st45686
|
It turns out that the problem can be solved by creating a custom Clamp class with custom backward-method. The only remaining issue is that I do not know how to pass the min/max-values as an argument.
import torch
import numpy as np
import torch.nn as nn
class Clamp(torch.autograd.Function):
@staticmethod
def forward(ctx, input):
return input.clamp(min=4, max=6)
@staticmethod
def backward(ctx, grad_output):
return grad_output.clone()
clamp_class = Clamp()
x = torch.from_numpy(np.array(([1])).astype(np.float32)) # one scalar as input
layer = nn.Linear(1, 1, bias=False) # neural net with one weight
optimizer = torch.optim.Adam(params=layer.parameters(), lr=1e-3)
for i in range(10001):
w = list(layer.parameters())[0] # weight before backprop
y = layer(x) # y = w * x
clamp = clamp_class.apply
f_y = clamp(y) # f(y) = clip(y)
loss = torch.abs(f_y - 5) # absolute error, zero if f(y) = 5
optimizer.zero_grad()
loss.backward()
grad = w.grad
if (i % 100 == 0) or (i == 0):
print('iteration {}'.format(i))
print('w: {:.2f}'.format(w.detach().numpy()[0][0]))
print('y: {:.2f}'.format(y.detach().numpy()[0]))
print('f_y: {:.2f}'.format(f_y.detach().numpy()[0]))
print('loss: {:.2f}'.format(loss.detach().numpy()[0]))
print('grad: {:.2f}\n'.format(grad.detach().numpy()[0][0]))
optimizer.step()
The plot below shows output over iterations. The output finally reaches the target value!
[figure fig1.png: model output over iterations, reaching the target value]
|
st45687
|
Personally, I use torch.sigmoid as a clamping function. It is more expensive, but the gradients (almost) never vanish.
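A minimal sketch of what that could look like (my own guess at the intended usage, not the poster's actual code): rescale a sigmoid so the output stays inside [min, max] while the gradient never becomes exactly zero.
import torch

def soft_clamp(x, min_val, max_val):
    # smooth alternative to torch.clamp: the output lies in (min_val, max_val)
    # and the gradient is non-zero for every finite input
    return min_val + (max_val - min_val) * torch.sigmoid(x)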
|
st45688
|
always:
The only remaining issue is that I do not know how to pass the min/max-values as an argument.
I found @always’s solution quite elegant for my own use case and so have solved the problem of the min/max arguments:
from torch.cuda.amp import custom_bwd, custom_fwd
class DifferentiableClamp(torch.autograd.Function):
"""
In the forward pass this operation behaves like torch.clamp.
But in the backward pass its gradient is 1 everywhere, as if instead of clamp one had used the identity function.
"""
@staticmethod
@custom_fwd
def forward(ctx, input, min, max):
return input.clamp(min=min, max=max)
@staticmethod
@custom_bwd
def backward(ctx, grad_output):
return grad_output.clone(), None, None
def dclamp(input, min, max):
"""
Like torch.clamp, but with a constant 1-gradient.
:param input: The input that is to be clamped.
:param min: The minimum value of the output.
:param max: The maximum value of the output.
"""
return DifferentiableClamp.apply(input, min, max)
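A quick usage sketch, reusing the toy setup from earlier in the thread (layer and x are assumed from there):
y = layer(x)                    # forward pass through the single-weight layer
f_y = dclamp(y, min=4, max=6)   # clamped in the forward pass, identity gradient in the backward pass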
|
st45689
|
I am writing an MLP in PyTorch using a sequential model, but I don't understand whether the model is actually updating the weights when I call:
optimizer.zero_grad()
scores = model(data)
loss = criterion(scores, targets)
# backward
loss.backward()
# gradient descent or adam step
optimizer.step()
My model is as below:
def __init__(self, input_size, out_size):
    super(Feedforward, self).__init__()
    self.layer1 = nn.Sequential()
    self.layer1.add_module("fc1", torch.nn.Linear(input_size, 65))
    self.layer1.add_module("bn1", nn.BatchNorm1d(num_features=65, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
    self.layer1.add_module("Relu1", torch.nn.ReLU())
    self.layer1.add_module("dropout", nn.Dropout(p=0.2))
    self.layer1.add_module("fc2", torch.nn.Linear(65, 60))
    self.layer1.add_module("bn2", nn.BatchNorm1d(num_features=60, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))
    self.layer1.add_module("Relu2", torch.nn.ReLU())
    self.layer1.add_module("dropout2", nn.Dropout(p=0.2))
    self.layer1.add_module("fc4", torch.nn.Linear(60, out_size))
    self.layer1.add_module("Softmax", torch.nn.Softmax(dim=1))

def forward(self, x):
    x = self.layer1(x)
    return self.fc.forward(x)

def initialize_weights(self):
    for m in self.modules():
        if isinstance(m, nn.BatchNorm2d):
            nn.init.constant_(m.weight, 1)
        elif isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)
|
st45690
|
The code should update the model parameters, if you’ve previously passed them to the optimizer.
You can print a specific parameter before and after the optimizer.step() operation and compare the values to make sure it’s working as intended.
PS: you can post code snippets by wrapping them into three backticks ```, which would make debugging easier.
|
st45691
|
Well, the issue is I am not sure whether it's working. I have a Keras program with the same number of layers and the same hyperparameters, and it gives 92% accuracy, but the PyTorch model gives 20% accuracy on the same data. Can you please explain the difference between return x and return self.fc.forward(x) in the forward function?
|
st45692
|
Sorry for the interruption, but I don't see you initializing self.fc anywhere in your code.
Regarding the last question: returning x after x = self.layer1(x) does a forward pass of your data through the whole layer1 and returns the result. Returning self.fc.forward(x) additionally passes the result obtained from layer1 through self.fc.
It is also worth mentioning that you have to make sure whether the loss function you are using expects a 'softmaxed' version or raw logits (the output of the last linear layer) as input.
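To make that last point concrete (this is a general property of the PyTorch losses, not specific to the code above): nn.CrossEntropyLoss expects raw logits and applies log-softmax internally, so the trailing Softmax module would be removed; with an explicit LogSoftmax one would use nn.NLLLoss instead.
import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # raw outputs of the last linear layer
targets = torch.tensor([0, 2, 1, 2])

# option 1: raw logits + CrossEntropyLoss (log-softmax is applied internally)
loss1 = nn.CrossEntropyLoss()(logits, targets)

# option 2: explicit LogSoftmax + NLLLoss gives the same value
loss2 = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)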
|
st45693
|
Hi everyone,
I'm trying to use a couple of different models simultaneously in one big algorithm. Is it possible to run them on a single GPU at the same time? Also, as I see it, it's kind of hard to deploy PyTorch models in a production pipeline. Should I turn to ONNX + Caffe2?
Thanks,
Anton
|
st45694
|
Solved by roaffix in post #5
So, as I understood:
I save a model this way
# ... some code here
torch.save(model.state_dict(), "{}.pt".format(output_name))
# NOTE: output_name is modelX_name in next steps
Then I load model this way
model1 = ... # your model1. E.g, model1 = CatDogClassifier().to('cuda')
model2 = ... # your…
|
st45695
|
It should be possible to run different models on the same GPU; however, I think you could lose a lot of performance, since the models would have to wait for each other to finish processing.
Maybe multiprocessing might help, but I'm not really familiar with all the limitations.
What kind of deployment environment do you have?
You could easily setup a webserver using flask or any other framework and serve your models there.
If you need a lot of throughput on a local machine, I would go for ONNX and Caffe2.
PyTorch 1.0 will support easy deployment with Caffe2 as stated here. You would have to wait a few months though, because the version is scheduled to be released this summer/autumn as far as I know.
|
st45696
|
however I think you could lose a lot of performance, since the models would have to wait for each other to finish the processing.
What kind of deployment environment do you have?
I used a local machine and it was OK for other frameworks. E.g., in Tensorflow I created two different sessions and used them in the “production pipeline”. So each session was used individually and they were manually allocated in memory. Is it possible to make such a trick with pytorch or onnx + caffe2?
PyTorch 1.0 will support easy deployment with Caffe2 as stated here.
As I understood it, release 1.0 will just simplify the ONNX + Caffe2 conversion. Am I wrong, or will there be another solution? Anyway, I would like to find an alternative to the 1.0 solution now, if possible.
|
st45697
|
roaffix:
I used a local machine and it was OK for other frameworks. E.g., in Tensorflow I created two different sessions and used them in the “production pipeline”. So each session was used individually and they were manually allocated in memory. Is it possible to make such a trick with pytorch or onnx + caffe2?
You could just create several different models, push them onto the GPU and feed your data.
I suppose Tensorflow is doing the same.
However, since the GPU has limited resources, your performance might be limited, since the models might have to wait for each other.
You could try to use the CPU instead, if you have single input images for example.
As a side note: maybe glow might be interesting for you.
roaffix:
As I understood, release 1.0 will just simplify the onnx+caffe2 conversion. Am I wrong and it will be another solution? Anyway, I would like to find an alternative 1.0 solution now, if it possible
Yeah, that’s also how I understand it.
|
st45698
|
ptrblck:
You could just create several different models, push them onto the GPU and feed your data.
So, as I understood:
I save a model this way
# ... some code here
torch.save(model.state_dict(), "{}.pt".format(output_name))
# NOTE: output_name is modelX_name in next steps
Then I load model this way
model1 = ... # your model1. E.g, model1 = CatDogClassifier().to('cuda')
model2 = ... # your model2
model1.load_state_dict(torch.load(model1_name))
model2.load_state_dict(torch.load(model2_name))
And If I want to make a prediction I simply type
X = ... #some data.
# e.g, X = torch.tensor(X, requires_grad=False, dtype=torch.float).to('cuda')
out1 = torch.max(model1(X), 1)[1]
# and for numpy output if you use 'cuda':
# out1 = torch.max(model1(X), 1)[1].cpu().numpy()
Please, correct me if I’m wrong. Can you provide something like pseudocode if I’m missed something?
|
st45699
|
Looks perfectly fine!
Your model might not be located at torch.Model(), but I assume that’s just a typo.
You should definitely check whether the CPU won't be faster for single inputs.
|
st45700
|
So, in my typo model1 = torch.Model() I loaded a model as the class. Is it possible to avoid this step?
Also, state_dict raises the error with unexpected and missing keys.
|
st45701
|
I’m not sure, how else you would like to get your predictions.
A model might be the easiest solution.
Why do you want to avoid it?
The state_dict error is thrown if you save a model and change its architecture afterwards.
|
st45702
|
Just tried to avoid long dependencies.
Update: I solved the problem with state_dict. That was my mistake.
|
st45703
|
Note for hackers who look through this post later: I edited the post a little with the solution, so don't get confused by my or @ptrblck's later comments.
|
st45704
|
I am sorry, I don't understand how to run two models at the same time. Could your code do this?
|
st45705
|
Hello!
I'm trying to use the SWA code in PyTorch 1.6 (https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/) and I'm following a sample structure similar to the one in the blog:
from torch.optim.swa_utils import AveragedModel, SWALR
from torch.optim.lr_scheduler import CosineAnnealingLR
loader, optimizer, model, loss_fn = ...
swa_model = AveragedModel(model)
scheduler = CosineAnnealingLR(optimizer, T_max=100)
swa_start = 5
swa_scheduler = SWALR(optimizer, swa_lr=0.05)
for epoch in range(100):
for input, target in loader:
optimizer.zero_grad()
loss_fn(model(input), target).backward()
optimizer.step()
if epoch > swa_start:
swa_model.update_parameters(model)
swa_scheduler.step()
else:
scheduler.step()
# Update bn statistics for the swa_model at the end
torch.optim.swa_utils.update_bn(loader, swa_model)
However, I'm not certain about a couple of things. After entering the SWA regime, if the SWA scheduler learning rate is the default (0.05), the model in training quickly becomes unstable (NaN); if I lower it (to 0.001), it appears to work (at least no NaN).
Should it be the case that the SWA copy of the model affects the model in training? (I had understood that the SWA copy is updated separately from the model in training.)
|
st45706
|
I would also assume that the SWALR object is not interfering with the standard training routine, and it seems an additional swa_lr entry is created in the param_groups as seen here.
However, I'm currently unsure where this swa_lr is used, since the model update doesn't seem to use it here.
|
st45707
|
Hello @ptrblck! I’m still unsure about the logic, but I’ll continue testing. Cheers!
|
st45708
|
I use the pytorch-summary library to summarize the size of a deep learning model.
There are models with a small number of parameters but a large forward pass size, and the forward pass size seems to be related to the speed of the model.
I think the forward pass size refers to the amount of computation. Is it the same as FLOPs?
I also wonder whether the model size we commonly talk about is the sum of both, or just the number of parameters. I am looking forward to your answer.
|
st45709
|
Hi,
All these numbers in MB correspond to the expected memory needed to run the model.
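For reference, a minimal usage sketch of the third-party pytorch-summary package the question seems to refer to (the input size is just an example, and model is assumed to exist):
from torchsummary import summary   # pip install torchsummary

summary(model, input_size=(3, 224, 224))
# the report ends with totals such as:
#   Params size (MB):                 memory taken by the weights
#   Forward/backward pass size (MB):  memory taken by the stored activations
#   Estimated Total Size (MB):        the sum of the above plus the input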
|
st45710
|
I'm not sure about its exact meaning, but I tried changing the input_shape: the "Params" statistics stay the same while the "Forward/backward pass size" changes, so I guess it is related to the amount of computation.
|
st45711
|
I was working on two models that achieve similar accuracy. Can someone help me understand what one can infer from the loss curves of graph1 and graph2 plotted below? I will be posting the other graph in the replies, as the forum allows me to post only one picture.
[image: graph1 loss curves, 727×419]
|
st45712
|
My model is moved to the GPU, the CUDA device is active (cuda.is_available() returns True), and the model is confirmed to be on CUDA (next(model.parameters()).is_cuda returns True). The data X and Y in the training function have also been confirmed to be on the GPU. But when I run the code, my model runs slowly, GPU utilization is only 4 to 5%, and CPU usage hits 100% for some instants.
Please guide me what should I double check.
def main():
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = '0'
print('__Number of CUDA Devices:', cuda.device_count(), ', active:', cuda.current_device())
print ('Device name: .... ', cuda.get_device_name(cuda.current_device()), ', available >', cuda.is_available())
model = BaseNetwork.TestModel()
model = nn.DataParallel(model, device_ids=[0])
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cudnn.benchmark = True
model.to('cuda')
summary(model, (3, 236,236))
base_lr = 0.0001
epochs = 200
workers = 0
momentum = 0.9
weight_decay = 1e-3
best_prec1 = 1e20
k = 0
optimizerr = torch.optim.Adam(model.parameters(), lr=base_lr, weight_decay=weight_decay, betas=(0.9, 0.95))
criterion = nn.MSELoss().cuda()
print(next(model.parameters()).is_cuda)
dataset_path = r'D:\My Research\Video Summarization\VS via Saliency\SIP'
d_type = ['Train', 'Test']
train_data = DatasetLoader(dataset_path, d_type[0])
train_loader = DataLoader(train_data, 4, shuffle=True, num_workers=2, pin_memory=True, drop_last=True)
test_data = DatasetLoader(dataset_path, d_type[1])
test_loader = DataLoader(test_data, 4, shuffle=False, num_workers=2, pin_memory=True, drop_last=True)
for epoch in range(0, epochs):
train(model, optimizerr, criterion, train_loader)
print("Epoch: %d, of epochs: %d"%(epoch,epochs))
torch.save(model, 'SIP_Test.pt')
def train(model, opt, crit, train_loader):
model.train()
for i, (X, Y) in enumerate(train_loader):
X = X.to('cuda')
#print('X in train model is on GPU: ', X.is_cuda)
Y = Y.to('cuda')
#print('Y in train model is on GPU: ', Y.is_cuda)
output = model(X)
loss = crit(output, Y)
opt.zero_grad()
loss.backward()
opt.step()
|
st45713
|
Solved by tinu445 in post #3
Thanks a lot, @ptrblck, this comment from your link worked for me…
Don’t leave the dataloader pin_memory=‘True’ on by default in your code. There was a reason why PyTorch authors left it as False. I’ve run into many situations where True definitely does cause extremely negative paging/memory subsy…
|
st45714
|
Your training might suffer from e.g. a data loading bottleneck (or any other bottleneck that starves the GPU). Have a look at e.g. this post for more information about data loading, and try to profile your code to check which part is the slowest.
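A crude way to check this (my own sketch, reusing the variable names from the code in the question, with optimizer standing in for the optimizer created there): time how long each iteration spends waiting for the DataLoader versus running the actual training step.
import time

data_time, step_time = 0.0, 0.0
t0 = time.perf_counter()
for X, Y in train_loader:
    data_time += time.perf_counter() - t0    # time spent waiting for the next batch
    t1 = time.perf_counter()
    X, Y = X.to('cuda'), Y.to('cuda')
    output = model(X)
    loss = criterion(output, Y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    torch.cuda.synchronize()                 # wait for the GPU so the timing is meaningful
    step_time += time.perf_counter() - t1
    t0 = time.perf_counter()
print('data: {:.1f}s, compute: {:.1f}s'.format(data_time, step_time))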
|
st45715
|
Thanks a lot, @ptrblck, this comment from your link worked for me…
Don't leave the DataLoader pin_memory=True on by default in your code. There was a reason why the PyTorch authors left it as False. I've run into many situations where True definitely does cause extremely negative paging/memory subsystem impact. Try both.
|
st45716
|
I want to linearly interpolate between two trained PyTorch model checkpoints. For all layers except batch normalization, I load the state dicts and simply do the linear interpolation as follows:
def interpolate_state_dicts(state_dict_1, state_dict_2, weight):
return {key: (1 - weight) * state_dict_1[key] + weight * state_dict_2[key]
for key in state_dict_1.keys()}
I do not know whether we can simply do the same for the BN layer parameters (weight, bias, running mean, running var) or not. I guess it is not that simple, as the mean and variance are calculated per batch.
|
st45717
|
The running stats were updated using all training batches, so if you assume that an interpolation of the parameters works fine, it might also work for the running stats.
EDIT: your use case might also be similar to Stochastic Weight Averaging, so you could take a look at how the parameters are averaged there and how the batchnorm layers are treated.
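A small usage sketch of the function from the question (the file names are placeholders and each file is assumed to store a plain state_dict). Note that model.state_dict() already contains the batchnorm buffers, so running_mean and running_var are interpolated by the same formula; the integer num_batches_tracked buffer is the one entry you may prefer to copy rather than average.
sd1 = torch.load('checkpoint_a.pt')
sd2 = torch.load('checkpoint_b.pt')

mixed = interpolate_state_dicts(sd1, sd2, weight=0.5)
for key in mixed:
    # keep integer buffers from one checkpoint instead of interpolating them
    if 'num_batches_tracked' in key:
        mixed[key] = sd1[key]

model.load_state_dict(mixed)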
|
st45718
|
My training dataset has a class ratio of 0 : 1 = 545 : 63 and my validation dataset 11 : 58.
Is it okay if I use nn.CrossEntropyLoss(weight=torch.tensor([0.1036, 0.8964], device='cuda:0'))? I want to do classification with deep learning, but I don't know what to do with this dataset… please help me…
|
st45719
|
If your model is overfitting to the majority class, you could pass the weights as e.g. the inverse of the class counts. I'm not sure how the current weights were calculated, but you could check whether they improve the training by comparing the training and validation losses.
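For what it's worth, the weights quoted in the question look exactly like normalized inverse class counts; a small sketch of that computation:
import torch

counts = torch.tensor([545.0, 63.0])   # training samples per class 0 and 1
weights = 1.0 / counts
weights = weights / weights.sum()      # -> tensor([0.1036, 0.8964])

criterion = torch.nn.CrossEntropyLoss(weight=weights.to('cuda:0'))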
|
st45720
|
I have a list like [x1, x2, ..., xn], and xi is a Tensor with size torch.Size([2, 3]), how to convert this list a Tensor with size torch.Size([n, 2, 3]). I am looking for an elegant method. Thanks for your reply.
|
st45721
|
Solved by ptrblck in post #2
You could use torch.stack:
n = 10
l = [torch.randn(2, 3) for _ in range(n)]
t = torch.stack(l)
print(t.shape)
> torch.Size([10, 2, 3])
|
st45722
|
You could use torch.stack:
n = 10
l = [torch.randn(2, 3) for _ in range(n)]
t = torch.stack(l)
print(t.shape)
> torch.Size([10, 2, 3])
|
st45723
|
I am getting various errors while training my model, which is supposed to segment a given image into different categories. Right now I just want to understand what happens in these steps that are usually used in training scripts:
outputs = model(batch_img_train)
loss = loss_function(outputs, batch_mask_train)
loss.backward()
optimizer.step()
I just don't understand what OUTPUTS contains and what the LOSS function does with my OUTPUTS and MASK; I just want to understand the inner workings. Can someone explain it?
|
st45724
|
I tried doing this, by the way, and I don't understand what I printed:
BATCH_SIZE = 10
EPOCHS = 1
def train(model):
    model.train()
    for epoch in range(EPOCHS):
        for i in tqdm(range(0, len(img_train), BATCH_SIZE)):
            batch_img_train = img_train[i:i+BATCH_SIZE].view(-1, 3, 224, 224)
            batch_mask_train = mask_train[i:i+BATCH_SIZE].view(-1, 1, 224, 224)
            model.zero_grad()
            outputs = model(batch_img_train)
            loss = loss_function(outputs, batch_mask_train)
            loss.backward()
            optimizer.step()
    return outputs, loss
outputs, loss = train(model)
print(outputs[0])
print(loss)
tensor([[[-0.0091, -0.1961, 0.0587, ..., -0.1641, -0.0139, -0.2890],
[-0.0064, 0.0030, -0.1327, ..., 0.0016, -0.0392, 0.0583],
[ 0.0580, -0.1432, 0.0927, ..., -0.0062, -0.0150, -0.2169],
...,
[-0.0160, -0.0555, -0.0218, ..., -0.0440, 0.0779, 0.0119],
[ 0.0780, -0.2582, 0.3273, ..., -0.1301, -0.0121, -0.3491],
[-0.0095, 0.0300, 0.2434, ..., 0.0927, -0.1081, 0.1011]],
[[ 0.0240, 0.0760, 0.1297, ..., -0.0281, 0.1930, -0.0558],
[ 0.2875, -0.0392, 0.1630, ..., -0.2731, 0.1639, -0.1631],
[ 0.1795, 0.1011, 0.0933, ..., -0.1308, 0.1352, -0.1574],
...,
[ 0.2370, -0.0927, 0.1744, ..., 0.0010, 0.2705, -0.2871],
[ 0.2685, 0.0470, 0.0728, ..., -0.0878, 0.3259, -0.0947],
[ 0.0521, -0.0432, 0.2411, ..., -0.0805, 0.0145, -0.1734]],
[[-0.0826, -0.0991, -0.0454, ..., -0.0914, -0.0570, -0.1069],
[-0.0284, -0.2223, 0.2041, ..., -0.2442, -0.0794, -0.2244],
[-0.1062, -0.1029, 0.2294, ..., -0.0914, -0.1032, 0.0496],
...,
[-0.0181, -0.2399, 0.0967, ..., -0.3608, -0.0362, -0.2599],
[ 0.0174, -0.0861, -0.0526, ..., 0.0006, -0.0621, 0.0562],
[-0.0683, -0.2384, -0.1297, ..., -0.2269, -0.1719, -0.2036]],
...,
[[ 0.0466, 0.0729, 0.1712, ..., 0.0808, -0.0174, 0.0344],
[ 0.0591, 0.1214, 0.2544, ..., -0.1711, 0.0215, -0.1528],
[ 0.0919, 0.0274, -0.1394, ..., 0.0419, 0.1209, 0.0010],
...,
[ 0.1275, -0.0068, 0.1960, ..., -0.0925, 0.0209, -0.0808],
[-0.0907, -0.0289, 0.0956, ..., -0.0043, 0.0141, -0.0482],
[-0.0100, -0.0397, 0.1704, ..., -0.0348, 0.0571, 0.0355]],
[[-0.1661, -0.2054, -0.2219, ..., -0.3749, -0.1241, -0.1909],
[ 0.0185, -0.1433, -0.1410, ..., -0.1159, 0.0940, -0.0041],
[-0.1563, -0.1719, -0.0610, ..., 0.0081, 0.0230, -0.1936],
...,
[-0.0505, -0.0652, -0.1203, ..., 0.0068, 0.1381, -0.0275],
[-0.0941, -0.2070, -0.1704, ..., -0.1199, -0.0481, -0.2115],
[-0.0044, -0.0275, -0.1157, ..., 0.0380, -0.0144, 0.1001]],
[[-0.0658, 0.0374, 0.0149, ..., 0.2753, -0.0432, 0.1743],
[ 0.3474, 0.0585, 0.2438, ..., 0.0770, 0.1662, 0.0813],
[-0.0568, 0.0906, 0.1045, ..., 0.1397, 0.1213, 0.0352],
...,
[ 0.3072, 0.2205, 0.1899, ..., 0.0265, 0.2470, 0.0975],
[-0.1063, 0.1827, 0.0146, ..., 0.1447, -0.0308, 0.0969],
[ 0.1026, 0.1702, 0.2469, ..., 0.0686, 0.1107, 0.1228]]],
grad_fn=<SelectBackward>)
tensor(105.4350, grad_fn=<MseLossBackward>)
|
st45725
|
The outputs tensor is created by your model and represents the output of the forward method.
Usually it would contain e.g. class logits for your use case.
The loss_function calculates the loss, which will be used to compute the gradients of this loss w.r.t. all parameters of the model during the loss.backward() call. The optimizer.step() uses the gradients (and internal estimates depending which optimizer is used) to update all passed parameters.
I would recommend taking a look at some courses, e.g. FastAI, which might be a good starting point.
|
st45726
|
I think there’s an error in the documentation for BatchNorm1d. It currently says that num_features is “L from input of size (N,L)”. Shouldn’t it say “C from input of size (N,C)”, as the input shape specification correctly states?
Also, should BatchNorm0d not have its own function? The current API seems confusing since it doesn’t fit the expected pattern of
BatchNorm3d for inputs of shape [C,D,H,W]
BatchNorm2d for inputs of shape [C,H,W]
BatchNorm1d for inputs of shape [C,L]
BatchNorm0d for inputs of shape [C]
|
st45727
|
randbit:
It currently says that num_features is “L from input of size (N,L)”. Shouldn’t it say “C from input of size (N,C)”, as the input shape specification correctly states?
Yeah, it might make sense to use the same naming for the num_features and the input documentation. Would you be interested in fixing the docs? If so, feel free to create a GitHub issue with your description of the issue and create the PR after an initial discussion there.
randbit:
Also, should BatchNorm0d not have its own function?
I don't think it'll improve the usability of these modules, but let's see what others think about it.
|
st45728
|
Hi,
Given 1D differentiable vectors A = [N x 1] and B = [M x 1], I am looking to compute a pairwise kernel operation:
def kernel(a, b):
    return a * b * torch.exp(-torch.abs(a - b) / 0.4)
Is there a way to avoid looping over individual items? I need to perform the kernel operation for pairwise entries in {A, A}, {A, B} and {B, B}, which might make it computationally heavy if done iteratively.
|
st45729
|
You could try to use broadcasting as seen here:
a = torch.arange(4).float().view(4, 1)
b = torch.arange(4).float().view(4, 1)
# element-wise
print(a - b)
# pair-wise
print(a.unsqueeze(1) - b)
which would result in a higher memory footprint, but might be faster than your sequential approach.
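Applying that broadcasting idea to the kernel from the question could look like this (a sketch assuming A and B are column vectors of shape [N, 1] and [M, 1]):
import torch

def pairwise_kernel(a, b):
    # a: [N, 1], b: [M, 1] -> [N, M] with entry (i, j) = k(a[i], b[j])
    a = a.view(-1, 1)
    b = b.view(1, -1)
    return a * b * torch.exp(-torch.abs(a - b) / 0.4)

A = torch.randn(5, 1, requires_grad=True)
B = torch.randn(3, 1, requires_grad=True)
K_AA, K_AB, K_BB = pairwise_kernel(A, A), pairwise_kernel(A, B), pairwise_kernel(B, B)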
|
st45730
|
I'm going to develop a Flask web application using a trained YOLOv5 model. As described in the docs, it works fine from the command line; what I tried was to apply the OOP concept and create a model object to use with every single frame.
class Model(object):
def __init__(self, weights, save_img=False):
self.view_img = True,
self.save_txt = False,
self.imgsz = 640
self.device = select_device()
print(self.device, "llll")
self.output = "output"
if os.path.exists(self.output):
shutil.rmtree(self.output) # delete output folder
os.makedirs(self.output) # make new output folder
self.half = self.device.type != 'cpu' # half precision only supported on CUDA
# Load model
self.model = torch.load(weights, map_location=self.device)['model'].float() # load to FP32
self.model.to(self.device).eval()
if self.half:
self.model.half() # to FP16
Inside camera.py I tried to create the model using the line below:
model = Model('weights/best.pt')
but now I'm getting the error below in the stack trace:
File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\app.py", line 4, in <module>
from camera import Camera
File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\camera.py", line 8, in <module>
from detect_image import detect_image
File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\detect_image.py", line 12, in <module>
model = Model("weights/best.pt")
File "C:\Users\D.ShaN\Documents\projects\python\fyp\helmet_covers_yolo\flask_app\model.py", line 27, in __init__
self.model = torch.load(weights, map_location=self.device)['model'].float() # load to FP32
File "C:\Users\D.ShaN\AppData\Local\conda\conda\envs\fyp\lib\site-packages\torch\serialization.py", line 594, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "C:\Users\D.ShaN\AppData\Local\conda\conda\envs\fyp\lib\site-packages\torch\serialization.py", line 853, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'models'
As described in the YOLO repo, I tried to execute the line below in the same manner, with the same weights parameter in both cases (command line and model = Model("weights/best.pt")):
self.model = torch.load(weights, map_location=self.device)['model'].float()
Any suggestions or solutions?
Thank you!
|
st45731
|
It seems you are trying to load the model directly, which is not the recommended way, since it may break in various ways if you don't keep the file structure the same.
The recommended way would be to save and load the state_dict as described here, which would avoid these import errors.
|
st45732
|
I’m getting following problem, which I’m not able to solve.
RuntimeError: CUDA out of memory. Tried to allocate 598.00 MiB (GPU 0; 14.73 GiB total capacity; 13.46 GiB already allocated; 337.88 MiB free; 13.46 GiB reserved in total by PyTorch)
This is happening when I’m running my model on test images to produce results.
def buildG(UPSCALE_FACTOR=4):
netG = Generator(UPSCALE_FACTOR)
netG.train()
netG.load_state_dict(torch.load(G_weights_load))
netG.cuda()
return netG
netG = buildG()
def test_on_single_image(path='/content/data1/lrtest.jpg',UPSCALE_FACTOR=4):
img = Image.open(path)
layer = ToTensor()
img1 = layer(img)
sh = img1.shape
img1 =img1.reshape((1, sh[0], sh[1], sh[2]))
img2 = netG(img1.cuda(0))
utils.save_image(img2, imgs_save)
files.download('/content/temp1.jpg')
My model runs fine in the training loop (on the GPU), so why is it giving such a GPU out-of-memory error? I don't understand why PyTorch is occupying this much memory; there is no clear indication of what is occupying it. The weights of netG are just 3 MB in size.
|
st45733
|
You might be accidentally storing tensors that are still attached to the computation graph, which should be visible as an increasing usage of device memory.
Your current code snippet looks fine, so you might have the problematic line of code in another function.
Wrap your code in a with torch.no_grad() block during validation to avoid storing intermediate tensors and the computation graph.
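Applied to the test function from the question, that could look like this sketch (same names as above):
def test_on_single_image(path='/content/data1/lrtest.jpg', UPSCALE_FACTOR=4):
    img = Image.open(path)
    img1 = ToTensor()(img).unsqueeze(0)   # add the batch dimension
    with torch.no_grad():                 # no graph is built, so intermediate tensors are freed
        img2 = netG(img1.cuda(0))
    utils.save_image(img2, imgs_save)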
|
st45734
|
I’m training a CNN that classifies small regions (7x7) from 40 large input layers that are all used to do the classification. So the CNN takes in a (40, 7, 7) input to be classified.
But the 40 input layers themselves though are very large (over 10k x 10k pixels), and so I can’t read all 40 files in the dataset constructor because that goes beyond the amount of RAM I have.
So instead, I’m forced to read all 40 files during every __getitem__ call of the dataset loader, and just read the desired 7x7 location inside the input layers for this iteration.
But this has made my training very slow as I think it’s just taking a long time to open and read the windows from all 40 files every iteration.
I am already using multiple workers
torch.utils.data.DataLoader(dset, batch_size=32, num_workers=4)
class MyDataset(Dataset):
    def __init__(self):
        self.all_input_layers = ["file1", "file2", ..., "file40"]

    def __getitem__(self, idx):
        all_layer_data = []
        for file in self.all_input_layers:
            curr_window = read_file_window(file)  # returns a 7x7 np array for this sample
            all_layer_data.append(curr_window)
        data = np.stack(all_layer_data, axis=0)  # 40 x 7 x 7 data that will be fed to the CNN
        return data
What are the other things I should try to speed this up?
Should I use torch.multiprocessing.set_sharing_strategy('file_system')?
I don't have any issues with too many open file descriptors, which is the case this 'file_system' strategy seems to be recommended for, so I'm not sure whether it would help me.
What else?
|
st45735
|
You could profile the method that reads the data as well as the one that creates the windows from these tensors, and check where most of the time is spent.
Once you've isolated it, you could try to accelerate the code (e.g. with a 3rd-party library if possible) or think about changing the overall data loading (e.g. would it be possible to store the data in another format and only load the desired window instead of the whole data array?).
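One concrete option for the "different format" idea (an assumption on my part, not something mentioned in the thread): store each layer as a raw array on disk and open it with np.memmap, so only the requested 7x7 window is actually read.
import numpy as np

# hypothetical layout: each layer saved as a raw float32 array of shape (10000, 10000)
layer = np.memmap('file1.dat', dtype=np.float32, mode='r', shape=(10000, 10000))

r, c = 1234, 5678                            # top-left corner of the requested window
window = np.array(layer[r:r + 7, c:c + 7])   # copies only this 7x7 slice into memory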
For more general advice, have a look at this post.
|
st45736
|
In the past I wrote everything myself in c++ / cuda. So I have seen all this mention of handy libraries for ML. So I thought I would see what it was all about.
So I follow the steps on PyTorch website. I install Anaconda.
First problem: no instructions on what command to use to install PyTorch via Anaconda, just some vague reference to a command you need to run.
So I google it and find a random website with a script to do it!! https://deeplizard.com/learn/video/UWlFM0R_x6I
WTF guys. Anyway, the script is for an old version of CUDA:
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
so I try a current version
conda install pytorch cudatoolkit=11.1 -c pytorch
Next problem:
Anaconda complains that “The following packages are not available from current channels”
cudatoolkit=11.1
so thanks for wasting my time Pytorch. I have no way to use you with the current cuda libraries. So back to doing it myself.
How do people work like this?
|
st45737
|
Hi,
Yes, it can be hard at times, but the instructions are given on the official site. Just navigate to https://pytorch.org/ and you'll see this:
[screenshot: install selector on pytorch.org, 1597×694]
Then you simply choose your method. For example, if I'd like to use pip and install the CUDA 11 version, I click on pip, then CUDA 11.0, and bingo, I get my command to install PyTorch:
pip install torch===1.7.0+cu110 torchvision===0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
It's as simple as that.
You could also post a question and we'd be more than happy to help you out.
Happy PyTorching!
|
st45738
|
Thanks, I was able to install CUDA 11.0, but where is CUDA 11.1? It's 3 months old and has updates for Ampere-based GPUs.
Not just 11.1; 11.1.1 has been released already.
I hate using old versions. You guys should know that with ML every drop counts. So why the super slow updates?
Say you are spending thousands on A100s: you would not use PyTorch because it's not maximising the latest CUDA!
|
st45739
|
It's up to the PyTorch team; @ptrblck may have a better answer as to why.
It may be that CUDA 11 was released back in May and the codebase was tested on it. CUDA 11.1 was released around September 23, I guess, around 1 month before the stable 1.7 release.
So it makes sense to stick to the version you have been working with for the past couple of months and not have to deal with probable regressions/bugs caused by the new update (taking into account the OSes, drivers, etc. as well).
If there is a need for bleeding-edge CUDA, anyone can easily build from source and call it a day. For the majority of users, though, stability is the primary goal.
Nevertheless, we are expecting a 1.7.1 in the near future (possibly in December?) and that might include CUDA 11.1, though I'm not sure.
Again, as I stated earlier, you can always build PyTorch against the latest cudatoolkit and enjoy it. The build process is pretty easy to follow and usually very straightforward.
Cheers.
|
st45740
|
The binaries do not ship with CUDA 11.1, as we couldn't prune libs with this version and were running into errors. If you need to use the latest library versions (CUDA, cudnn, NCCL etc.), you could always build PyTorch from source or use the NGC docker containers.
LukePoga:
Say you are spending thousands on A100s: you would not use PyTorch because it's not maximising the latest CUDA!
Your A100s work with CUDA 11.0, and since sm86 is SASS binary compatible with sm80, the 30XX series also works.
|
st45741
|
Hi everyone
I have a script that trains a CNN and I am able to reproduce the results using:
def set_seed(seed):
torch.manual_seed(seed)
np.random.seed(seed)
random.seed(seed)
# for cuda
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.enabled = False
I also save a checkpoint whenever the accuracy on the validation set increases. I do so like this:
checkpoint = {
'run': run_count,
'epoch': epoch_count,
'model_state': model.state_dict(),
'optimizer_state': optimizer.state_dict()
}
torch.save(checkpoint, 'path to folder')
However, when I resume training from a checkpoint with the same seed, I get different results compared to training the CNN from scratch up to the epoch I compare it to. Say, for example, I train the network for 25 epochs and the best one is at epoch 15. Then I load the checkpoint from epoch 15 and continue training. I would expect these results to be the same as the ones from the first training run at epoch 16 and upwards, but they are not…
Does anyone know why this could be the case? Any help is very much appreciated!
All the best
snowe
|
st45742
|
Solved by ptrblck in post #4
The order of the returned batches from your DataLoader would still be different, which would yield non-deterministic results. Note that this is usually not a problem, as your model should converge with different seeds.
If the DataLoader was the only source of randomness, you could use:
for _ in …
|
st45743
|
The difference might come from e.g. the data shuffling, as you are reseeding the code in epoch 15 again.
You could try to iterate the DataLoader for 15 epochs or could alternatively seed the workers in each epoch with an “epoch seed” so that it would be easier to restore.
|
st45744
|
Hi @ptrblck, thank you for your response!
What do you mean with iterate the DataLoader for 15 epochs?
Also: when I set the seed again within my loop over the epochs and then resume training, I get the same results, you are right. However, these results then differ from the ones I get when I don't seed every epoch, and the difference is quite big. How does that make sense?
All the best
snowe
|
st45745
|
snowe:
How does that make sense?
The order of the returned batches from your DataLoader would still be different, which would yield non-deterministic results. Note that this is usually not a problem, as your model should converge with different seeds.
snowe:
What do you mean with iterate the DataLoader for 15 epochs?
If the DataLoader was the only source of randomness, you could use:
for _ in range(epochs_to_restore):
for batch in loader:
pass
However, this approach is brittle and will not work, if you had other calls into the random number generator.
|
st45746
|
Thank you @ptrblck, it does work like that!
Out of curiosity: is it safe to say that although without this approach the results are not exactly the same as they were when I left off, it will still yield comparable and reproducible results, because the only difference is the order of the batches? This can be seen as almost another layer of data shuffling, which a CNN should be able to handle anyway if we aim to generalize the network?
|
st45747
|
Yes, I think as long as you shuffle the data and stick to your workflow, the final results should be comparable, i.e. the model should converge to the same final accuracy (+/- a small difference).
You might have some use cases where you need to restore exactly the same data ordering etc., which is more tricky as explained before. Usually these steps are necessary to debug some issues and your “standard” training shouldn’t depend on a specific seed or ordering of the shuffled data.
|
st45748
|
Hello everybody. I have recently started using Captum for model interpretability and I’ve found it really interesting. My main question is: Does Captum support Graph Neural Networks too? Thanks a lot.
|
st45749
|
It looks so! There is a Google Colab from Amin in the list of PyTorch Geometric notebooks that uses Captum:
pytorch-geometric.readthedocs.io: Colab Notebooks — pytorch_geometric 1.6.2 documentation
Regards,
Joaquín.
|
st45750
|
Hi,
I have a 3 dimension tensor “prob: tensor([[[0.2793, 0.3314, 0.3893]]], grad_fn=)”
and I am sampling an action by Categorical distribution.
print("prob:",prob)
action = Categorical(prob).sample().detach()
print("action:",action)
log_prob_a = log_prob.gather(1, action)
This is the output I am getting.
prob: tensor([[[0.3201, 0.3268, 0.3531]]], grad_fn=)
action: tensor([[2]])
Traceback (most recent call last):
File “D:\get_data\ind_a3c_lstm.py”, line 99, in train
log_prob_a = log_prob.gather(1, action)
RuntimeError: Index tensor must have the same number of dimensions as input tensor
My question is: why does action not have the same number of dimensions as the prob tensor?
|
st45751
|
Hi,
I want to calculate the mAP score for each epoch. However, keeping every prediction in memory for an entire epoch is not possible on my machine. So I was wondering if I could simply compute the mAP for the predictions of each batch and sum them up at the end of the epoch.
Is that possible or will the result be wrong if I do that?
Thanks in advance!
|
st45752
|
I am trying to reconstruct a neural net for shape and appearance disentanglement. To track progress, I want to produce some color maps that look as presented in the paper:
The maps have the following shape: [batch_size, n_maps, 64, 64].
n_maps equals 16, so I basically want to plot 16 different color maps. Is there an elegant way to do that?
|
st45753
|
I have a large dataset to train on and I am short of cloud RAM and disk space (memory). I think one approach to training on the full dataset is to create a checkpoint that saves the best model parameters based on validation, and likely the last epoch as well. I would be glad for guidance on implementing this, i.e. ensuring training continues from the last epoch with the best saved model parameters from the previous training session.
|
st45754
|
Hi @moreshud
You could do something like:
if accuracy_val > max_accuracy_val:
checkpoint = {
'epoch': epoch,
'model_state': model.state_dict(),
'optimizer_state': optimizer.state_dict(),
}
torch.save(checkpoint, 'path/to/folder/filename.pth')
max_accuracy_val = accuracy_val
That way you save the state of the model whenever you have reached a new maximum accuracy on the validation set.
When you want to continue training you can do:
loaded_checkpoint = torch.load('path/to/folder/filename.pth')
loaded_epoch = loaded_checkpoint['epoch']
loaded_model = model() # instantiate your model
loaded_model.load_state_dict(loaded_checkpoint['model_state'])
loaded_optimizer = torch.optim.SGD(loaded_model.parameters(), lr=0, momentum=0) # or whatever optimizer you use
loaded_optimizer.load_state_dict(loaded_checkpoint['optimizer_state'])
You can then continue training with the loaded model and loaded optimizer.
DISCLAIMER: I am relatively new to PyTorch myself. This approach works for me but I cannot guarantee that there are no better options
Anyways, I hope it helps!
All the best
snowe
|
st45755
|
Hello,
I'm still confused about detach(), although I searched and read a lot… When I'm plotting tensors in each epoch, like input images, or decoding one-hot encoded output images and plotting them, is it correct to access these tensors via .data.cpu(), or do I need .detach().cpu()? I'm running my model on a GPU; it is basically a VAE.
1st example:
self.create_output_image_grids(img_data.detach().cpu(), recon_data.detach().cpu())
In this function I basically use torch.cat() to create a grid of images…
2nd example
If I keep track of latent variables (mu, logvar) to make some plots, e.g. every 5 epochs:
recon_data, mu, logvar, z = model(img_data)
latent_mu.append(mu.detach().cpu().squeeze())
Big thanks in advance!
|
st45756
|
It is the same: although it has been suggested that .data is internal/private, it is used too often to break it.
|
st45757
|
Hi, I am trying to implement BiCOGAN, which has a structure like the one below.
[figure: BiCOGAN architecture, 910×438]
The author says that the encoder is trained jointly with the generator and discriminator. Can anyone give me some advice on how I can implement this in PyTorch? How should the update be implemented, and should I create a separate optimizer just for the encoder? Any advice would be helpful. Thanks.
|
st45758
|
Hi, I am not sure about num_layers in the RNN module. To clarify, could you check whether my understanding is right or not? I uploaded an image for num_layers == 2. In my understanding, num_layers is similar to a CNN's out_channels: it is just an RNN layer with different filters (so we can train different weight variables for outputting h). Right?
[figure: RNN with num_layers = 2, 1252×810]
I am probably right…
class TestLSTM(nn.Module):
def __init__(self, input_size, hidden_size, num_layers):
super(TestLSTM, self).__init__()
self.rnn = nn.LSTM(input_size, hidden_size, num_layers, batch_first=False)
def forward(self, x, h, c):
out = self.rnn(x, (h, c))
return out
bs = 10
seq_len = 7
input_size = 28
hidden_size = 50
num_layers = 2
test_lstm = TestLSTM(input_size, hidden_size, num_layers)
print(test_lstm)
input = Variable(torch.randn(seq_len, bs, input_size))
h0 = Variable(torch.randn(num_layers, bs, hidden_size))
c0 = Variable(torch.randn(num_layers, bs, hidden_size))
output, h = test_lstm(input, h0, c0)
print('output', output.size())
print('h and c', h[0].size(), h[1].size())
---
TestLSTM (
(rnn): LSTM(28, 50, num_layers=2)
)
output torch.Size([7, 10, 50])
h and c torch.Size([2, 10, 50]) torch.Size([2, 10, 50])
|
st45759
|
No, your understanding is wrong. num_layers in RNN is just stacking RNNs on top of each other. So you get a hidden from each layer and an output only from the topmost layer.
|
st45760
|
[figure: stacked multi-layer RNN, 806×588]
I found a nice image. Does this mean num_layers == 2? And we can get the last hidden state, right?
|
st45761
|
I have two questions:
Consider:
self.lstm1 = nn.LSTM(input_dim, hidden_dim, num_layers=1)
self.lstm2 = nn.LSTM(input_dim, hidden_dim, num_layers=2)
Why are the weights the same values? Are the weights reused?
lstm1.weight_ih_l0.size() == lstm2.weight_ih_l0.size()
self.lstm1a = nn.LSTM(input_dim, hidden_dim, num_layers=1)
self.lstm1b = nn.LSTM(hidden_dim, hidden_dim, num_layers=1)
self.lstm2= nn.LSTM(input_dim, hidden_dim, num_layers=2)
y2 = self.lstm2(x, …)
y1 = self.lstm1b(self.lstm1a(x, …),…)
Are y1,y2 the same thing?
|
st45762
|
They are definitely not the same values:
>>> lstm1 = nn.LSTM(input_dim, hidden_dim, num_layers=1)
>>> lstm2 = nn.LSTM(input_dim, hidden_dim, num_layers=2)
>>> lstm1.weight_ih_l0
Parameter containing:
-0.3027 -0.2689 -0.3551
0.5509 0.1728 0.0360
-0.1964 0.1770 0.2209
-0.4915 0.3696 0.5712
0.2401 0.0593 -0.4117
0.4066 0.3684 0.3482
0.2870 -0.0531 0.1953
0.0928 -0.4165 0.5613
-0.4697 0.4112 0.1346
0.3438 -0.1885 0.5242
0.3756 0.2288 0.2949
-0.1401 0.0173 -0.0247
[torch.FloatTensor of size 12x3]
>>> lstm2.weight_ih_l0
Parameter containing:
-0.3672 -0.0299 0.1597
0.0828 -0.2755 0.4451
0.1861 0.1213 -0.5596
-0.2776 -0.4791 -0.2322
-0.5063 0.0437 0.1145
-0.2652 -0.0932 0.0865
-0.3323 0.4274 -0.3038
-0.1449 -0.1430 0.5393
0.5589 0.1293 -0.5174
-0.4502 0.5351 0.2430
-0.5448 -0.4007 -0.2560
0.5424 -0.1821 -0.0779
[torch.FloatTensor of size 12x3]
No, they are computed by different LSTMs with different parameters. They are different.
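To make the relationship concrete, here is a small illustration of my own (not from the answer above): a num_layers=2 LSTM computes the same kind of stacked function as two chained single-layer LSTMs, but since each module gets its own randomly initialized weights, the outputs differ numerically.
import torch
import torch.nn as nn

input_dim, hidden_dim, seq_len, bs = 3, 5, 7, 2
x = torch.randn(seq_len, bs, input_dim)

lstm2 = nn.LSTM(input_dim, hidden_dim, num_layers=2)
out2, _ = lstm2(x)                     # output of the top layer only

lstm1a = nn.LSTM(input_dim, hidden_dim, num_layers=1)
lstm1b = nn.LSTM(hidden_dim, hidden_dim, num_layers=1)
out1a, _ = lstm1a(x)
out1, _ = lstm1b(out1a)

print(out1.shape, out2.shape)          # both: torch.Size([7, 2, 5])
print(torch.allclose(out1, out2))      # False: different, independently initialized weights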
|
st45763
|
@SimonW @smth
Hello guys, I’d like to ask you one thing about this parameter (num_layers in RNN module) and how it relates to the LSTM stable documentation.
Looking at the picture posted above, I’d say that the hidden state at time t of the first hidden layer receives as input the hidden state at time (t-1) of the same layer. Similarly, the hidden state at time t of the second layer receives as input the hiddent state at time (t-1) of the second layer.
Yet, in the nn.LSTM doc (https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM) there is:
“h(t−1) is the hidden state of the previous layer at time t-1”
Considering that the gates receive h(t-1) as input, does this mean that the l-th layer should look at the (l-1)-th layer? Or am I reading it wrong?
Thanks.
|
st45764
|
That’s a bug in the documentation, your interpretation of the picture is right.
The t-dimension stays in the same layer. The connection between the layers is that the output of the (l-1)-th layer is the input of the l-th layer, possibly multiplied by a dropout mask, i.e. i^(l)(t) = h^(l-1)(t) * delta^(l-1)(t).
Best regards
Thomas
|
st45765
|
Hi @FAlex,
thanks for pointing out the potential for improvement in the documentation!
I’ve put this into a PR on GitHub, so hopefully PyTorch 1.0 ships with clearer documentation.
Best regards
Thomas
|
st45766
|
I trained an autoencoder that accepts an image and produces an encoding from the encoder. The encoding is 64x64x64, so flattening it gives a row vector of size 262144.
I have 5000 images, and given a test image (which goes through the encoder), I need to find the n most similar encodings and corresponding images, and maybe later cluster the whole dataset.
One GitHub repo owner seems to have done it by concatenating the image encodings into a (5000, 262144) matrix and then running a kNN on it with the test encoding.
I can't do this on Colab because the instance crashes when RAM fills up.
I tried to use a regular for loop to convert each picture and save it as a .npy, but that was extremely painful since Colab just freezes when the loop is longer than about 500 iterations.
Even if I do have the .npy files, I'm not sure how to classify or cluster, since I can't put the whole thing in RAM.
|
st45767
|
My computer shuts down when running PyTorch. I have the following specs:
Ubuntu 20.04
GPU: GTX1080
CUDA 10.2
Python 3.7
PyTorch 1.6.0
I'm running some models on images from the webcam or a video, but the CPU heats up and then the machine shuts down. At first I thought the model/code might be causing this, but I've run several different models and the issue persists.
|