st48868 | I fixed this temporarily by exporting LD_LIBRARY_PATH=/usr/lib64, but it’s strange because I could use ctypes.CDLL to import “libstdc++.so.6” directly without doing this…idk |
st48869 | I have a network with following architecture
def forward(self, W, z):
W1 = self.SM(W)
W2 = torch.argmax(W1, dim=3).float()
h_5 = W2.view(-1, self.n_max_atom * self.n_max_atom)
h_6 = self.leaky((self.dec_fc_5(h_5)))
h_6 = h_6.view(-1, self.n_max_atom, self.n_atom_features)
return h_6
where self.SM = nn.Softmax(dim=3). When running this, I receive the error:
File “/Users/Blade/model/VAEtrain.py”, line 154, in trainepoch
loss.backward()
File “/Users/Blade/anaconda3/lib/python3.7/site-packages/torch/tensor.py”, line 198, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File “/Users/Blade/anaconda3/lib/python3.7/site-packages/torch/autograd/init.py”, line 100, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Expected isFloatingType(grads[i].scalar_type()) to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.) (validate_outputs at /Users/distiller/project/conda/conda-bld/pytorch_1587428061935/work/torch/csrc/autograd/engine.cpp:476)
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&) + 135 (0x112602ab7 in libc10.dylib)
frame #1: torch::autograd::validate_outputs(std::__1::vector<torch::autograd::Edge, std::__1::allocatortorch::autograd::Edge > const&, std::__1::vector<at::Tensor, std::__1::allocatorat::Tensor >&, std::__1::function<std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > (std::__1::basic_string<char, std::__1::char_traits, std::__1::allocator > const&)> const&) + 5884 (0x1193d614c in libtorch_cpu.dylib)
frame #2: torch::autograd::Engine::evaluate_function(std::__1::shared_ptrtorch::autograd::GraphTask&, torch::autograd::Node*, torch::autograd::InputBuffer&) + 1996 (0x1193d18ec in libtorch_cpu.dylib)
frame #3: torch::autograd::Engine::thread_main(std::__1::shared_ptrtorch::autograd::GraphTask const&, bool) + 497 (0x1193d08b1 in libtorch_cpu.dylib)
frame #4: torch::autograd::Engine::thread_init(int) + 152 (0x1193d0648 in libtorch_cpu.dylib)
frame #5: torch::autograd::python::PythonEngine::thread_init(int) + 52 (0x111b43a04 in libtorch_python.dylib)
frame #6: void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_deletestd::__1::__thread_struct >, void (torch::autograd::Engine::)(int), torch::autograd::Engine, int> >(void*) + 66 (0x1193dfc32 in libtorch_cpu.dylib)
frame #7: _pthread_body + 126 (0x7fff6e8632eb in libsystem_pthread.dylib)
frame #8: _pthread_start + 66 (0x7fff6e866249 in libsystem_pthread.dylib)
frame #9: thread_start + 13 (0x7fff6e86240d in libsystem_pthread.dylib)
It seems that the torch.argmax function breaks the backpropagation. But I believe that it should work fine. Does the model architecture have a problem? |
st48870 | Solved by KFrank in post #6
Hello Blade!
I have two networks that are trained together.
…
What you see is my downstream network: it takes in output of the upstream W
…
I want to turn them into either a one-hot vector or class labels W2
Once you turn the output of your upstream network into a one-hot
vector or a class… |
st48871 | Hello Blade!
blade:
It seems that the torch.argmax function breaks the backpropagation.
argmax() is not usefully differentiable, and so, indeed, does break
backpropagation.
Does the model architecture have a problem?
Yes, the argmax() piece isn’t differentiable.
Make sure you understand why argmax() isn’t differentiable, and then
see if you can reformulate what you are doing in a way that avoids the
discrete jumps inherent in argmax().
Good luck.
K. Frank |
st48872 | Hello Blade!
blade:
Is there a work around for this?
“Work around” isn’t really the right term, as it implies that you are
trying to do something that makes sense, and you need to work
around a bug or limitation to reach the same result by a different
path.
Conceptually, what are the index values in W2 supposed to mean?
How do you want your optimizer to respond when one of the values
in W2 suddenly makes a discrete jump from 2 to 1?
To illustrate my point, here is a “work around”:
Replace W2 = torch.argmax (W1, dim = 3).float()
with W2 = 0.0 * torch.sum (W1, dim = 3).float().
Backpropagation will now work (but all of your gradients will be zero).
softmax() is a smooth (differentiable) approximation to the one-hot
encoding of argmax(). But this comment will only be helpful if you
understand the conceptual role you want the piece-wise constant (not
usefully differentiable) W2 values to play in your network training.
Good luck.
K. Frank |
st48873 | KFrank:
Conceptually, what are the index values in W2 supposed to mean?
I have two networks that are trained together. The networks are in series, i.e. output of one is the input of the other. Upstream network is solving a classification problem for graph edge types [Batch, node, node, class]. What you see is my downstream network: it takes in output of the upstream W, applies Softmax to turn them into probabilities W1, and then I want to turn them into either a one-hot vector or class labels W2 before reshaping and feeding it to the linear layer dec_fc_5. |
st48874 | Hello Blade!
blade:
KFrank:
Conceptually, what are the index values in W2 supposed to mean?
I have two networks that are trained together.
…
What you see is my downstream network: it takes in output of the upstream W
…
I want to turn them into either a one-hot vector or class labels W2
Once you turn the output of your upstream network into a one-hot
vector or a class label, you have done something that is not
differentiable and that breaks backpropagation.
So … You have to do something else.
Why not just pass the output of your upstream directly to your
downstream network (i.e., directly to fc_5)? What breaks?
If the output of your upstream network comes directly from a Linear
without any subsequent activation function, you will want an activation
function in between. How about relu() or sigmoid() (or even
softmax())?
If passing a one-hot vector to your downstream network makes the
most sense (ignoring the fact that it isn’t differentiable), perhaps you
should consider my observation that softmax() is a differentiable
approximation to the one-hot encoding of argmax().
You can sharpen this result – making softmax() “less soft” – by
scaling its argument, e.g., softmax (scale * W1). As scale is
increased to approach +inf, the result of softmax (scale * W1)
will approach one_hot (argmax (W1)).
Best.
K. Frank |
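To illustrate the scaling trick K. Frank describes, here is a small sketch with made-up values and shapes (not code from the thread): as the scale grows, the softmax output approaches the one-hot encoding of argmax() while staying differentiable.
import torch
import torch.nn.functional as F
W1 = torch.tensor([0.1, 0.3, 0.6])               # e.g. a row of softmax probabilities
for scale in (1.0, 10.0, 100.0):
    sharpened = F.softmax(scale * W1, dim=0)     # "less soft" softmax
    print(scale, sharpened)
# the limit as scale -> +inf is the one-hot encoding of argmax(W1)
print(F.one_hot(W1.argmax(), num_classes=3).float())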
st48875 | I have a 2d Tensor A of shape (d, d) and I want to get the indices of its maximal element.
torch.argmax only returns a single index. For now I got the result doing the following, but it seems convoluted.
vals, row_idx = A.max(0)
col_idx = vals.argmax(0)
And then, A[row_idx, col_idx] is the correct maximal value. Is there any more straightforward way to get this? |
st48876 | Solved by ptrblck in post #2
Alternatively, this code should also work:
x = torch.randn(10, 10)
print((x==torch.max(x)).nonzero()) |
st48877 | Alternatively, this code should also work:
x = torch.randn(10, 10)
print((x==torch.max(x)).nonzero()) |
st48878 | For anyone who stumbles in here and wonders which approach is faster (that provided by GeoffNN or ptrblck), the one-liner by ptrblck appears to be at least twice as fast.
In my (not rigorous) benchmarking, ptrblck’s code found the max indices of 100k random tensors in an average of 1.6 seconds, and the solution by GeoffNN found the max indices of 100k random tensors in an average of 3.5 seconds. |
st48879 | Is there any way to batch this?
Suppose that now I have a 3D tensor of shape (batch_size, d, d) and I want to get, for each 2D datapoint in the batch, the index of its maximal element. I can’t figure out how to generalize the (x==torch.max(x)).nonzero() approach. |
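One possible way to get per-sample max indices for a batched tensor (a sketch with made-up sizes, not a direct generalization of the (x==torch.max(x)).nonzero() trick): flatten each (d, d) slice, take argmax over the flattened dimension, and convert the flat index back to row/column indices.
import torch
x = torch.randn(16, 5, 5)                        # (batch_size, d, d)
d = x.size(-1)
flat_idx = x.view(x.size(0), -1).argmax(dim=1)   # flat index of the max per batch element
rows = flat_idx // d                             # recover the row index
cols = flat_idx % d                              # recover the column index
b = 0
print(x[b].max(), x[b, rows[b], cols[b]])        # both print the same value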
st48880 | I was wondering if there was an implementation of MAML++ in pytorch?
Ideally in higher for example.
See: https://github.com/facebookresearch/higher/issues/84 17 |
st48881 | Hi,
I know that the softmax function outputs probabilities with sum equal to 1. However, if we give it a probability vector (which already sums up to 1), why doesn’t it return the same values? For example, if I input [0.1 0.8 0.1] to softmax, it returns [0.2491 0.5017 0.2491]; isn’t this wrong in some sense? |
st48882 | It is because of the way softmax is calculated. When you compute exp(0.1)/(exp(0.1)+exp(0.8)+exp(0.1)), the value turns out to be 0.2491. |
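For reference, that arithmetic can be checked directly in a couple of lines (a quick sketch):
import torch
p = torch.tensor([0.1, 0.8, 0.1])
print(torch.softmax(p, dim=0))      # tensor([0.2491, 0.5017, 0.2491])
print(p.exp() / p.exp().sum())      # the same computation written out explicitly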
st48883 | Thanks for the answer. Yeah yeah that I know. But my question is, isn’t it wrong in some sense? |
st48884 | Softmax is an activation function. The purpose is not just to ensure that the values are normalized (or rescaled) to sum to 1, but also to allow the output to be used as input to the cross-entropy loss (hence the function needs to be differentiable).
For your case, the inputs can be arbitrary values (not necessarily probability vectors). It is possible that there’s a mix of positive and negative values which still sums to 1 (e.g., [0.4, 0.8, -0.2]).
Softmax emphasizes the class with the highest value while softly rescaling the values, hence the name ‘soft’-‘max’. |
st48885 | Hi mbehzad!
mbehzad:
is, isn’t it wrong in some sense?
Well, I suppose it depends on what your expectations are …
But you might wish to base your expectations on some other functions:
x**2 maps (-inf, inf) to [0.0, inf), but we don’t expect x**2 = x
to hold true for x >= 0.0, that is for values of x in [0.0, inf).
Or, back in the pytorch activation function world, torch.sigmoid() maps
(-inf, inf) to (0.0, 1.0), but torch.sigmoid (torch.sigmoid (x))
isn’t equal to torch.sigmoid (x).
Here’s another thing to consider:
softmax ([0.0 + delta, 1.0 - delta])
How would you like softmax() to behave when a negative delta
becomes zero and then crosses over to become positive? Bear in
mind, you want this behavior to be usefully differentiable to support
backpropagation.
K. Frank |
st48886 | I am training a custom patch-based network (4 layers) and I realized that it only starts converging, somehow, if I set the lr to 0.0000001.
I feel like something is wrong with such a tuning.
However, if I run the training with lr = 0.01, the loss gets huge, like several billion, and eventually NaN.
Can this be related to initialisation? |
st48887 | Solved by ptrblck in post #6
From skimming your code, it looks like you are not zeroing out the gradients after the weight update.
In this case the gradients get accumulated and the weight updates will be quite useless.
Add this line into your for loop and run it again:
self.optimizer.zero_grad()
It is also recommended to c… |
st48888 | It could be related to the weight init or other hyper-parameters.
How do you initialize your model?
Which optimizer are you using? Did you change the momentum (if available)? |
st48889 | Weight initialisation is done through Xavier’s approach :
m.weight.data.normal_(0, math.sqrt(2. / n)), for each conv module m in the network
m.weight.data.normal_(0, 0.01), for the fc layer on top of the net.
As for momentum, I used the common 0.9 value, although I do not know if that suits the learning rate I have, since this one is really low.
I have to admit I am not really familiar with the relation between learning rate and momentum, since both of them seem to impact the weight update in different ways.
I was looking for more documentation about these two hyper-parameters when I came across weight decay. I understood that it aims at reducing the magnitude increase of the weights. Is that correct?
Not sure how to tune momentum (and weight decay?) so as to get a decent learning rate. |
st48890 | Ok, thanks for the update!
Could you post the code calculating the loss, the optimizer and if possible the model?
Maybe your gradients are really high somehow.
Usually a momentum of 0.9 should work fine.
Yes, weight decay penalizes the weight magnitude, forcing it to get lower values. |
st48891 | Thank you for such quick and helpful answers, really nice community!
Model (just the initialisation) --> comes from torchvision:
def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
m.weight.data.normal_(0, math.sqrt(2. / n))
if m.bias is not None:
m.bias.data.zero_()
elif isinstance(m, nn.BatchNorm2d):
m.weight.data.fill_(1)
m.bias.data.zero_()
elif isinstance(m, nn.Linear):
m.weight.data.normal_(0, 0.01)
m.bias.data.zero_()
Optimizer:
optimizer = optim.SGD(net.parameters(),
lr=0.00000001,
momentum=0.9,
weight_decay=0.0005)
Loss computation:
def runEpoch(self):
loss_list = np.zeros(100000)
for it, batch in enumerate(tqdm(self.data_loader)):
data = Variable(batch['image'])
target = Variable(batch['class_code'])
# forward
if self.mode == 'cuda':
data = data.cuda()
target = target.long().cuda()
output = self.model.forward(data)
loss = self.criterion(output.float(), target)
loss.backward()
self.optimizer.step()
loss_list[it] = loss.item()
self.avg_loss = np.mean(loss_list[np.nonzero(loss_list)])
Also, note that I am using 4-band 8-bit encoded images as input. |
st48892 | From skimming your code, it looks like you are not zeroing out the gradients after the weight update.
In this case the gradients get accumulated and the weight updates will be quite useless.
Add this line into your for loop and run it again:
self.optimizer.zero_grad()
It is also recommended to call the model directly to compute the forward pass instead of model.forward().
If you use model.forward() the hooks won’t be called, which might be unimportant in your current code, but might lead to errors in the future.
Also, you could update to the latest stable release (0.4.0). You can find the install instructions on the website. |
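Putting those suggestions together, a minimal self-contained version of the corrected loop could look like the sketch below (placeholder model and data, just to show where optimizer.zero_grad() and the direct model call go):
import torch
import torch.nn as nn
model = nn.Linear(10, 2)                   # placeholder model for illustration
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0005)
data = torch.randn(64, 10)                 # placeholder batch
target = torch.randint(0, 2, (64,))
for epoch in range(5):
    optimizer.zero_grad()                  # zero out gradients before each update
    output = model(data)                   # call the model directly, not model.forward()
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()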
st48893 | Thank you very much, just tried it and it improves the performance by almost 8%!
I am also able to use a much more reasonable learning rate (0.001, i will try with learning rate decay as well) !
Why is it so important to zero gradients? Is that because it would otherwise compute gradients and add them to those from the previous iteration (sorry… not sure of the meaning of accumulating here)?
Can you imagine a situation where you would not do such a thing? From what I have seen, it could be implicit when using optimizer.step().
I am already using 0.4.0 release
Thank you again ! |
st48894 | Yes, your explanation is right. The gradients from each backward pass would be summed to the previous gradients.
You could use it to artificially increase your batch size.
If you don’t have enough GPU memory, but need a larger batch size, you could sum the gradients for several smaller batches, average them, and finally perform the optimization step.
Ah ok, then you don’t need the Variable wrapper anymore, since they were merged with tensors. |
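As a sketch of the gradient-accumulation idea (placeholder model, data, and accumulation count): the small-batch gradients are summed into .grad by repeated backward() calls, the loss is divided by the number of accumulated batches so the result is an average, and a single optimizer.step() applies the update.
import torch
import torch.nn as nn
model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
accumulation_steps = 4                                   # e.g. 4 small batches ~ 1 larger batch
optimizer.zero_grad()
for i in range(accumulation_steps):
    x = torch.randn(8, 10)                               # a small batch that fits into memory
    y = torch.randn(8, 1)
    loss = criterion(model(x), y) / accumulation_steps   # average over the accumulated batches
    loss.backward()                                      # gradients are summed into .grad
optimizer.step()                                         # one update from the accumulated gradients
optimizer.zero_grad()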
st48895 | I am using the Adamax optimizer with a learning rate of 0.001 and momentum of 0.9. It is working fine for a small number of classes (1-50 classes), but for a large number of classes (200-300) I am getting low accuracy and high loss. What should I do in this case?
Thanks in advance |
st48896 | Hi,
I have converted the tensors which I’m getting as output (test set) from my UNet to images. When I test the model, it throws an out-of-index exception, which I did not get before converting the tensors to images. I have 74 images in my test set, but after one image is tested it throws an out-of-index exception in my custom dataset class. Any idea whether a conversion like that could cause a problem?
# this is where im converting
def test_step(self, batch, batch_nb):
x, y = batch
y_hat = self.forward(x)
# print(y_hat.size())
img = y_hat.cpu().detach().numpy()
directory = r'D:\Mobile-research project\dataset\gaussian_blur\target_outputs'
os.chdir(directory)
print(os.listdir(directory))
filename = 'savedImage.jpg'
cv2.imwrite(filename, img)
loss = torch.nn.MSELoss()
op_loss = loss(y_hat, y)
return {'test_loss': op_loss}
#custom dataset
def __getitem__(self, i):
idx = self.ids[i]
img_files = glob.glob(os.path.join(self.img_dir, idx+'.*'))
mask_files = glob.glob(os.path.join(self.mask_dir, idx+'.*'))
assert len(img_files) == 1, f'{idx}: {img_files}'
assert len(mask_files) == 1, f'{idx}: {mask_files}'
# use Pillow's Image to read .gif mask
# https://answers.opencv.org/question/185929/how-to-read-gif-in-python/
img = Image.open(img_files[0])
mask = Image.open(mask_files[0])
# to_tensor = transforms.ToTensor()
# img_t = to_tensor(img)
# a = img_t.shape
assert img.size == mask.size, f'{img.shape} # {mask.shape}'
return self.transforms(img),\
self.transforms(mask)
I’m getting this error:
File “D:\unet-lightning-master -original\unet-lightning-master-original\dataset.py”, line 35, in getitem
assert len(img_files) == 1, f’{idx}: {img_files}’
AssertionError: 0101434: []
Testing: 1%|▏ | 1/72 [00:00<00:40, 1.77it/s]
Process finished with exit code 1 |
st48897 | Hi folks!
I’m writing an autoencoder using Resnet as an encoder and then some convolutions and concatenations with skip connections for the decoder. The use of this is image denoising. Following this, I’m testing a novel idea of using Pytorch to do some parallelised frequency domain filtering. It works ok.
However, my second stage frequency-domain filter expects my tensor values to be scaled between 0 and 1. I tried using a sigmoid output layer, however I was given the impression that this is a bad idea. I’m now considering using softmax.
Does anyone with more knowledge have any ideas?
My network backpropagates through my second stage frequency domain/empirical filter, so if my autoencoder spits out values greater than 1 it makes the network trend to zero. I should also note that it is important that I stay between 0 and 1 for the autoencoder because I use a measurement of noise power on this scale for the second stage of my algorithm.
Thanks folks. |
st48898 | Related to Subclassing torch.Tensor
But I am trying to extend tensor with dtype=torch.int64
My code like this:
class MyTensor(torch.LongTensor):
@staticmethod
def __new__(cls, data, stats, *args, **kwargs):
return super().__new__(cls, data, *args, **kwargs)
def __init__(self, data, stats, *args, **kwargs):
self._stats = stats
x = MyTensor([0,1,2], 3)
This works for subclassing torch.Tensor; however, it does not work for torch.LongTensor:
TypeError: type ‘torch.LongTensor’ is not an acceptable base type
How can I subclass torch.LongTensor? |
st48899 | Don’t use torch.LongTensor, it’s not a proper class/type but a hack!
The proper thing is to subclass Tensor and have the dtype set fixed to torch.long. |
st48900 | I have tried to subclass torch.Tensor; however, dtype cannot be a param of torch.Tensor.__new__. I could not find out how to init the dtype.
Here is what I have tried
def __new__(self, data, stats):
tensor = torch.as_tensor(data, dtype=torch.long)
return torch.Tensor.__new__(self, tensor) # Error: expected Float (got Long)
or like
def __new__(self, data, stats):
tensor = torch.as_tensor(data, dtype=torch.long)
return tensor
def __init__(self, data, stats):
self.stats = stats
x = MyTensor([1,2,3],3) # Actually it returns a torch.LongTensor not MyTensor object
x.stats # Error |
st48901 | You’re diving right into the tricky bits, but:
__new__ is a class method implicitly (don’t ask).
You can use nn.Parameter as an example. Adapted this gives:
class MyTensor(torch.Tensor):
def __new__(cls, data, stats, requires_grad=False):
data = torch.as_tensor(data, dtype=torch.long)
tensor = torch.Tensor._make_subclass(cls, data, requires_grad)
tensor.stats = stats
return tensor
So I’m still not 100% sure what you’re trying to achieve, but if it is a subclass that is similar to nn.Parameter, this would be what I’d use.
Best regards
Thomas |
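For completeness, a small usage sketch of the class above (the class definition is repeated only to make the snippet self-contained; the stats value is arbitrary):
import torch
class MyTensor(torch.Tensor):
    def __new__(cls, data, stats, requires_grad=False):
        data = torch.as_tensor(data, dtype=torch.long)
        tensor = torch.Tensor._make_subclass(cls, data, requires_grad)
        tensor.stats = stats
        return tensor
x = MyTensor([1, 2, 3], stats=3)
print(type(x).__name__)   # MyTensor
print(x.dtype)            # torch.int64
print(x.stats)            # 3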
st48902 | I was wondering if the function specified in module.register_forward_hook(function_to_call) is executed in a separate Python process or not? |
st48903 | Solved by ErikJ in post #2
Never mind, I just checked. They are executed in the same process. |
st48904 | Hello. I have a script which processes given data and produces a PyTorch binary output (.bin and .idx files). When I run this script in isolation everything works fine. But when I run multiple instances of this script in parallel (using a thread pool), I see that some of the output folders have missing .idx files. Any ideas why this could be happening?
Also, can we regenerate the .idx files based on the .bin files? |
st48905 | I’m not sure if this issue is PyTorch-related or is rather an issue in your multi-threaded application.
Are these files only missing if you create them from PyTorch, or also as plain Python files? |
st48906 | Help!!!
I have a problem during training. I get the error message in the title when using:
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
The error is triggered by the following line going to NaN
BCE = F.binary_cross_entropy(out,real_data_I,size_average=False)
I have tried using binary cross entropy with logits and this gives the same error. Since the error happens at random times, I deduced that it’s something to do with the input data; however, I have looked at the input data and it’s not the same input data that triggers the error.
Does anyone have any ideas?
Chaslie |
st48907 | Start off by checking if you have any NaNs in the input. The fact that another loss function causes the error as well suggests that there is likely a problem with the input. |
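A quick way to run that check on every batch (just a sketch with a made-up tensor):
import torch
def has_bad_values(t):
    # True if the tensor contains any NaNs or infs
    return bool(torch.isnan(t).any() or torch.isinf(t).any())
x = torch.randn(4, 3)
x[1, 2] = float('nan')
print(has_bad_values(x))   # True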
st48908 | Pchandrasekaren,
I have checked the inputs and there are no NaNs in the input.
I have re-run the model, removing the sigmoid and using the binary cross entropy with logits loss function, and it seems to be running at the moment.
The model consists of 2 CVAEs, and both run and work well with the input data, both with BCE = F.binary_cross_entropy(out,real_data_I,size_average=False) plus a sigmoid and with BCE_withLogits and no sigmoid…
Update: the model has crashed with **RuntimeError: after reduction step 2: cudaErrorAssert: device-side assert triggered** after 5 epochs; this is using F.binary_cross_entropy_with_logits
chaslie |
st48909 | I have also set batch size to 1 and shuffle to False in the dataloader, then run the model; the model failed at increments 251, 286, and 145, which suggests it’s not a problem with the input data. |
st48910 | Which PyTorch version are you using?
If you are using 1.5, could you please update, as assert statements in 1.5.0 were not working properly.
Also, could you post the complete stack trace you are seeing? |
st48911 | Hi Ptrblk,
How are you? I hope you are avoiding the worst that covid is throwing at you.
I am using 1.4.0 version.
Changing the loss function from F.binary_cross_entropy_with_logits to torch.nn.BCELoss and putting the sigmoid function at end of the network seems to have resolved the error.
I think there may be an issue with F.binary_cross_entropy_with_logits and binary_cross_entropy?
I have decreased the learning rate as well by an order of magnitude and the model is now running.
I will post the stack trace tomorrow once the model has finished training.
chaslie |
st48912 | I have a few lines of code that are contributing to a growing memory leak (torch 1.6.0 and cuda 11.0). I don’t know how to fix them and I would appreciate some feedback.
They are all in one method: agent_update_network_parameters() (shown below). I have marked the lines that have a leak with ‘# MEMORY LEAK’. There are three lines, each with a leak that is contributing to increasing memory usage (1 MB per call).
@profile
def agent_update_network_parameters(self):
"""
Update the parameters for the NN(s).
Note: This is performed in the following order:
- value network
- both q value network's
- policy network
"""
self.num_updates += 1
state, action, reward, next_state, terminal = self.replay_buffer.sample(self.batch_size)
state = torch.FloatTensor(state).to(device=self.device)
action = torch.FloatTensor(action).to(device=self.device)
reward = torch.FloatTensor(reward).unsqueeze(1).to(device=self.device)
next_state = torch.FloatTensor(next_state).to(device=self.device)
# MEMORY LEAK in terminal (originally was numpy array of boolean values)
terminal = torch.FloatTensor(terminal).unsqueeze(1).to(device=self.device)
# q_value network
predicted_q_value_1, predicted_q_value_2 = self.q_network(state, action)
with torch.no_grad():
next_state_sampled_action, next_state_log_prob, _ = self.policy_network.sample(next_state)
predicted_target_q_value_1, predicted_target_q_value_2 = self.target_q_network(next_state, next_state_sampled_action)
estimated_value = torch.min(predicted_target_q_value_1, predicted_target_q_value_2) - self.alpha * next_state_log_prob
estimated_q_value = reward + self.gamma * (1 - terminal) * estimated_value
q_value_loss_1 = self.q_criterion_1(predicted_q_value_1, estimated_q_value)
q_value_loss_2 = self.q_criterion_2(predicted_q_value_2, estimated_q_value)
self.q_optimizer_1.zero_grad()
q_value_loss_1.backward()
self.q_optimizer_1.step()
self.q_optimizer_2.zero_grad()
q_value_loss_2.backward()
self.q_optimizer_2.step()
# policy network
sampled_action, log_prob, _ = self.policy_network.sample(state)
sampled_q_value_1, sampled_q_value_2 = self.q_network(state, sampled_action)
sampled_q_value = torch.min(sampled_q_value_1, sampled_q_value_2)
policy_loss = ((self.alpha * log_prob) - sampled_q_value).mean()
self.policy_optimizer.zero_grad()
# MEMORY LEAK in call to policy_loss.backward
policy_loss.backward()
self.policy_optimizer.step()
# adjust temperature
if self.automatic_entropy_tuning:
alpha_loss = -(self.log_alpha * (log_prob + self.target_entropy).detach()).mean()
self.alpha_optimizer.zero_grad()
alpha_loss.backward()
self.alpha_optimizer.step()
self.alpha = self.log_alpha.exp()
else:
alpha_loss = torch.tensor(0.).to(self.device)
# (soft update) target q_value network
if self.num_updates % self.target_update_interval == 0:
for target_param, param in zip(self.target_q_network.parameters(), self.q_network.parameters()):
# MEMORY LEAK in polyak averaging below
target_param.data.copy_(self.tau * param.data + (1.0 - self.tau) * target_param.data)
index = self.num_updates - 1
# self.loss_data[index] = [self.num_updates, q_value_loss_1.item(), q_value_loss_2.item(), policy_loss.item(), alpha_loss.item(), self.alpha.item()]
del state, action, reward, next_state, terminal
del predicted_q_value_1, predicted_q_value_2
del next_state_sampled_action, next_state_log_prob
del predicted_target_q_value_1, predicted_target_q_value_2
del estimated_value, estimated_q_value
del q_value_loss_1, q_value_loss_2
del sampled_action, log_prob
del sampled_q_value_1, sampled_q_value_2
del sampled_q_value, policy_loss
del alpha_loss
del zipped
del target_param, param
del index
I am also providing code for my policy network.
class GaussianPolicyNetwork(nn.Module):
"""
Agent policy.
"""
LOG_SIG_MAX = 2
LOG_SIG_MIN = -20
epsilon = 1e-6
def __init__(self, state_dim, action_dim, hidden_dim):
"""
Initialize policy network.
@param state_dim: int
environment state dimension
@param action_dim: int
action dimension
@param hidden_dim: int
hidden layer dimension
"""
super(GaussianPolicyNetwork, self).__init__()
self.linear1 = nn.Linear(state_dim, hidden_dim)
self.linear2 = nn.Linear(hidden_dim, hidden_dim)
self.mean_linear = nn.Linear(hidden_dim, action_dim)
self.log_std_linear = nn.Linear(hidden_dim, action_dim)
self.apply(init_weights)
def forward(self, state):
"""
Calculate the mean and log standard deviation of the policy distribution.
@param state: torch.float32 tensor with shape torch.Size([1, state_dim]) or torch.Size([batch_size, state_dim])
state of the environment
@return (mean, log_std):
mean: torch.float32 tensor with shape torch.Size([1, action_dim]) or torch.Size([batch_size, action_dim])
mean of the policy distribution
log_std: torch.float32 tensor with shape torch.Size([1, action_dim]) or torch.Size([batch_size, action_dim])
log standard deviation of the policy distribution
"""
x = torch.relu(self.linear1(state))
x = torch.relu(self.linear2(x))
mean = self.mean_linear(x)
log_std = self.log_std_linear(x)
log_std = torch.clamp(log_std, min=self.LOG_SIG_MIN, max=self.LOG_SIG_MAX)
return mean, log_std
def sample(self, state):
"""
Sample an action using the reparameterization trick:
- sample noise from a normal distribution,
- multiply it with the standard deviation of the policy distribution,
- add it to the mean of the policy distribution, and
- apply the tanh function to the result.
@param state: torch.float32 tensor with shape torch.Size([1, state_dim]) or torch.Size([batch_size, state_dim])
state of the environment
@return action, log_prob, mean, log_std:
action: torch.float32 tensor with shape torch.Size([1, action_dim]) or torch.Size([batch_size, action_dim])
(normalized) action selected by the agent
log_prob: torch.float32 tensor with shape torch.Size([1, 1]) or torch.Size([batch_size, 1])
log probability of the action
mean: torch.float32 tensor with shape torch.Size([1, action_dim]) or torch.Size([batch_size, action_dim])
mean of the policy distribution
"""
mean, log_std = self.forward(state) # torch.float32 torch.Size([batch_size, action_dim])
std = log_std.exp() # torch.float32 torch.Size([batch_size, action_dim])
normal = Normal(mean, std)
z = normal.rsample()
action = torch.tanh(z)
log_prob = normal.log_prob(z) - torch.log(1 - action.pow(2) + self.epsilon)
log_prob = log_prob.sum(1, keepdim=True)
return action, log_prob, mean |
st48913 | I had a memory leak in a similar situation (to your first instance), making sarsd tensors from numpy arrays for rl. The solution for my instance was changing the dtype in a tensor constructor. I can’t offer a principled solution but you could try changing the constructor for terminal to:
torch.as_tensor(terminal, dtype=torch.bool).unsqueeze(1).to(device=self.device)
and estimated q_value can be made like:
estimated_q_value = reward + self.gamma * (~terminal) * estimated_value
Boolean dtype should be preferred in this case right? |
st48914 | Hello everyone,
I’m trying to create a simple example with Pytorch that would detect if an input number is odd or even.
I do not know exactly if it is possible. If my understanding is correct, PyTorch uses a linear function whose coefficients are adjusted during training, followed by a sigmoid function which decides whether or not to fire, producing the classification.
I started coding something that creates the dataset and then performs the training.
Note that I use the EarlyStopping algorithm to stop the learning when the error starts increasing.
X, y = split_sequences(dataset, 1)
model = nn.Sequential(
nn.Linear(1, 1),
nn.Sigmoid())
patience = 3
early_stopping = EarlyStopping(patience=patience, verbose=True)
train_losses = []
valid_losses = []
avg_train_losses = []
avg_valid_losses = []
criterion = nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.003)
epochs = 1000
for e in range(epochs):
running_loss = 0
model.train()
batchi1 = 1
for batchi in range(5000,len(X),5000):
for x in range(batchi1,batchi):
line = torch.tensor([X[x]],dtype=torch.float32)
out = torch.tensor([y[x]],dtype=torch.float32)
optimizer.zero_grad()
output = model(line)
loss = criterion(output, out)
train_losses.append(loss.item())
loss.backward()
optimizer.step()
running_loss += loss.item()
model.eval()
for x in range(batchi1,batchi):
line = torch.tensor([X[x]],dtype=torch.float32)
out = torch.tensor([y[x]],dtype=torch.float32)
optimizer.zero_grad()
output = model(line)
loss = criterion(output, out)
valid_losses.append(loss.item())
train_loss = np.average(train_losses)
valid_loss = np.average(valid_losses)
avg_train_losses.append(train_loss)
avg_valid_losses.append(valid_loss)
epoch_len = len(str(train_episodes))
print_msg = (f'[{e:>{epoch_len}}/{train_episodes:>{epoch_len}}] ' +
f'train_loss: {train_loss:.5f} ' +
f'valid_loss: {valid_loss:.5f}')
train_losses = []
valid_losses = []
early_stopping.trace_func = noprint
early_stopping.path = 'NEURALNN.pt'
early_stopping(valid_loss, model)
print(f"Training loss: {running_loss/len(X)}")
if early_stopping.early_stop:
print("Early stopping")
break
batchi1 = batchi1 + batchi
model.load_state_dict(torch.load('NEURALNN.pt'))
My example doesn’t really work because I’m getting a very high error:
Training loss: 24.53215186495483
Training loss: 24.53096993828714
Training loss: 24.530958538565038
Training loss: 24.537694978424906
Training loss: 24.537682025301457
Training loss: 24.53767285807431
Training loss: 24.53766483396888
Training loss: 24.537656717956065
Training loss: 24.53767231979668
Training loss: 24.537667768600585
Training loss: 24.537658959439398
Training loss: 24.537649419358374
Early stopping
<All keys matched successfully>
And with a test :
print(model(torch.tensor([[1]],dtype=torch.float32) ))
print(model(torch.tensor([[2]],dtype=torch.float32) ))
print(model(torch.tensor([[3]],dtype=torch.float32) ))
print(model(torch.tensor([[4]],dtype=torch.float32) ))
print(model(torch.tensor([[5]],dtype=torch.float32) ))
I get :
tensor([[0.4762]], grad_fn=<SigmoidBackward>)
tensor([[0.5165]], grad_fn=<SigmoidBackward>)
tensor([[0.5567]], grad_fn=<SigmoidBackward>)
tensor([[0.5961]], grad_fn=<SigmoidBackward>)
tensor([[0.6343]], grad_fn=<SigmoidBackward>)
Can you give me some advice on how to implement this problem?
Of course I am a beginner and looking to learn about pytorch, thank you for your indulgence.
Thank you in advance.
Note that the code to make my dataset is
dataset = []
for A in range(10000):
B = 2
C = 0
if A%B == 0:
C=1
dataset.append([A,C])
dataset = np.array(dataset) |
st48915 | Based on your current model implementation (single linear layer with 1 weight and bias value) I doubt the model will be able to learn the dataset.
The increasing input numbers in [0, 10000] would only be multiplied with the weight value (thus scaled) and then the bias would be added (thus shifted). I don’t see how this operation can predict an oscillating target so you would need to change the model architecture (e.g. adding hidden layers etc.). |
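For reference, "adding hidden layers" could look like the sketch below (the sizes are arbitrary). Whether such a model can actually learn parity from raw integers in [0, 10000] is a separate question; how the input is encoded may matter more than depth here.
import torch.nn as nn
model = nn.Sequential(
    nn.Linear(1, 32),    # hidden layer 1
    nn.ReLU(),
    nn.Linear(32, 32),   # hidden layer 2
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),        # output in (0, 1) for BCELoss
)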
st48916 | Hello,
@ptrblck Thank you for your answer. I took your advice and created a model that uses the sine function with a trainable parameter corresponding to the frequency of the oscillating function. The objective of training is then to find the right frequency; in the case of even vs. odd it is pi/2. Here is my code below.
Do you think it could be made simpler?
class AbsSinTensorFunction(nn.Module):
def __init__(self,alpha = None):
super().__init__()
if alpha == None:
self.alpha = Parameter(torch.tensor(1.0))
else:
self.alpha = Parameter(torch.tensor(alpha))
self.alpha.requires_grad = True
#alpha is a learnable parameter
def forward(self, input):
return torch.abs(torch.sin(self.alpha*input)) # Will return 0 for even numbers and 1 for odd numbers if alpha is pi/2
## CASE of even/odd
absSinInput = AbsSinTensorFunction() # to be oscillating, cos(x)+1
model = nn.Sequential(OrderedDict([
('absSinInput', absSinInput)
]))
optimizer = optim.Adam(model.parameters(), lr=0.001)
optimizer.zero_grad()
loss = nn.BCELoss() #because it's binary
##
savedalpha = absSinInput.alpha
print("**************************************")
print("Start alpha = " + str(savedalpha.item()))
print("**************************************")
#It should stop when alpha is around a multiple of pi/2
step = 0
bestvalidloss = np.inf
for epoch in range(50):
for i in range(1,200,1):
inputz = torch.tensor([[float(i)]], requires_grad=True, dtype=torch.float)
target = torch.tensor([[float(i%2)]])
bestloss = np.inf
print("###############learn with " + str(i) + "###############")
for improve in range(10): #for each sample try to find best alpha
result = model(inputz)
lossoutput = loss(result, target)
lossoutput.backward()
optimizer.step()
step +=1
print("loss output = " + str(lossoutput.item())+"#alpha = " + str(absSinInput.alpha.item()))
if(lossoutput.item() < bestloss):
bestloss = lossoutput.item()
savedalpha = absSinInput.alpha
else:
absSinInput.alpha = savedalpha
#print("stop and restore the best #alpha = " + str(absSinInput.alpha.item()))
break
validpred = model(torch.tensor([[555.0]]))
validloss = loss(validpred, torch.tensor([[1.0]]))
print("**************************************")
print("valid loss = " + str(validloss.item()))
print("**************************************")
if(validloss.item() < bestvalidloss):
bestvalidloss = validloss.item()
else:
break
print("**************************************")
print("Best alpha = " + str(absSinInput.alpha.item()))
print("pi/2 = " + str(np.pi/2))
print("**************************************")
print("Predict(1) = " + str(model(torch.tensor([[1]])).item()))
print("Predict(2) = " + str(model(torch.tensor([[2]])).item()))
print("Predict(3) = " + str(model(torch.tensor([[3]])).item()))
print("Predict(4) = " + str(model(torch.tensor([[4]])).item()))
print("Predict(5) = " + str(model(torch.tensor([[5]])).item()))
print("Predict(11111111) = " + str(model(torch.tensor([[11111111]])).item()))
print("Predict(990) = " + str(model(torch.tensor([[990]])).item()))
I get the output below: it’s 1 when the input is an odd number and 0 for an even number.
**************************************
Best alpha = 1.5897823572158813
pi/2 = 1.5707963267948966
**************************************
Predict(1) = 0.9998197555541992
Predict(2) = 0.037962935864925385
Predict(3) = 0.998378336429596
Predict(4) = 0.07587113976478577
Predict(5) = 0.9954975247383118
Predict(11111111) = 0.6603634357452393
Predict(990) = 0.053372591733932495 |
st48917 | pytorchtester0:
model.eval()
I was wondering if you’re setting model.eval() but not back to model.train() for the next batch.
Although the Universal Approximation Theorem says that we should be able to approximate most functions with just 1 hidden layer, your architecture looks too simple to learn even vs odd. So, as mentioned by @ptrblck, try increasing the layer size or add more linear layers and see if the results improve. |
st48918 | Hello,
I increased the layer size to six and it is not really better:
**************************************
valid loss = 0.8264439105987549
**************************************
**************************************
Best alpha = Parameter containing:
tensor([1.0429, 0.9948, 0.9344, 0.9563, 0.9674, 1.0093], requires_grad=True)
pi/2 = 1.5707963267948966
**************************************
Predict([1.00,2.00,3.00,4.00,5.00,6.00]) = tensor([[0.8639, 0.9135, 0.3321, 0.6317, 0.9923, 0.2253]],
grad_fn=<AbsBackward>)
Predict([7.00,8.00,9.00,10.00,11.00,12.00]) = tensor([[0.8506, 0.9945, 0.8497, 0.1382, 0.9379, 0.4391]],
grad_fn=<AbsBackward>)
Predict([11.00,21.00,31.00,41.00,51.00,61.00]) = tensor([[0.8886, 0.8909, 0.6373, 0.9982, 0.8008, 0.9532]],
grad_fn=<AbsBackward>)
Predict([122.00,222.00,333.00,444.00,564.00,688.00]) = tensor([[1.0000, 0.8100, 0.1261, 0.4811, 0.8569, 0.1180]],
grad_fn=<AbsBackward>)
Predict([122.00,222.00,333.00,444.00,564.00,688.00]) = tensor([[0.0077, 0.5432, 0.9584, 0.3508, 0.9991, 0.3045]],
grad_fn=<AbsBackward>)
Predict([1223.00,211.00,3334.00,411.00,544.00,644.00]) = tensor([[0.9194, 0.9865, 0.5616, 0.3956, 0.8123, 0.5632]],
grad_fn=<AbsBackward>)
I added train() and eval() calls on the model; what are they for?
Another try with a linear layer (6,6) before the sine layer:
**************************************
Best alpha = Parameter containing:
tensor([0.9966, 1.0089, 1.0270, 1.0027, 0.9761, 1.0345], requires_grad=True)
pi/2 = 1.5707963267948966
**************************************
Predict([1.00,2.00,3.00,4.00,5.00,6.00]) = tensor([[0.6157, 0.0282, 0.8108, 0.4695, 0.9369, 0.5707]],
grad_fn=<AbsBackward>)
Predict([7.00,8.00,9.00,10.00,11.00,12.00]) = tensor([[0.2550, 0.4073, 0.5556, 0.7958, 0.4690, 0.8168]],
grad_fn=<AbsBackward>)
Predict([11.00,21.00,31.00,41.00,51.00,61.00]) = tensor([[0.7027, 0.7106, 0.2578, 0.8775, 0.8660, 0.9601]],
grad_fn=<AbsBackward>)
Predict([122.00,222.00,333.00,444.00,564.00,688.00]) = tensor([[0.9732, 0.3742, 0.2200, 0.0033, 0.2818, 0.1370]],
grad_fn=<AbsBackward>)
Predict([122.00,222.00,333.00,444.00,564.00,688.00]) = tensor([[0.6943, 0.5853, 0.9817, 0.3463, 0.6813, 0.2033]],
grad_fn=<AbsBackward>) |
st48919 | The code is kind of confusing and hard to debug. Make sure to use optimizer.zero_grad() within the loop each time before:
pytorchtester0:
result = model(inputz) |
st48920 | I have a feature map of size 8 x 32 x 10 x 10, where 8 is the batch size, 32 is the number of channels and 10 represents the width and the height respectively.
I want to apply cross-channel pooling over the 32 channels of each of the 8 feature maps (the 32 channels are organized as 4 groups of 8). The output I am expecting is 8x4x10x10.
How can I do it? |
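One way to pool over fixed groups of channels is to reshape the channel dimension into (groups, channels_per_group) and reduce over the per-group dimension. A sketch using max pooling across channels (mean would work the same way with .mean(dim=2)):
import torch
x = torch.randn(8, 32, 10, 10)               # (batch, channels, H, W)
groups = 4                                   # 32 channels -> 4 groups of 8
x_grouped = x.view(x.size(0), groups, -1, x.size(2), x.size(3))  # (8, 4, 8, 10, 10)
pooled, _ = x_grouped.max(dim=2)             # cross-channel max within each group
print(pooled.shape)                          # torch.Size([8, 4, 10, 10])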
st48921 | I’m trying to display the optimizer currently in use by the model for documentation purposes.
So the following snippet
optimizer = torch.optim.SGD(model.parameters(), lr=1e-5)
print(optimizer)
gives the following output
SGD (
Parameter Group 0
dampening: 0
lr: 1e-05
momentum: 0
nesterov: False
weight_decay: 0
)
Is there a way in which I could just get SGD as the output (and same for other optimizers also)?
I figured I could take the substring of the displayed output till the first parenthesis. Is there a cleaner way of doing this? |
st48922 | Solved by KFrank in post #2
Hi Nihal!
print (type (optimizer).__name__)
will do what you want.
(This is generally true for python objects, and is not specific to pytorch.)
Best.
K. Frank |
st48923 | Hi Nihal!
Eulerian:
print(optimizer)
Is there a way in which I could just get SGD as the output (and same for other optimizers also)?
print (type (optimizer).__name__)
will do what you want.
(This is generally true for python objects, and is not specific to pytorch.)
Best.
K. Frank |
st48924 | def load_pretrained_weights(model, model_name, load_fc=True):
""" Loads pretrained weights, and downloads if loading for the first time. """
state_dict = model_zoo.load_url(url_map[model_name])
train.py:302: UserWarning: You have chosen to seed training. This will turn on the CUDNN deterministic setting, which can slow down your training considerably! You may see unexpected behavior when restarting from checkpoints.
warnings.warn('You have chosen to seed training. ’
Downloading: “http://storage.googleapis.com/public-models/efficientnet/efficientnet-b0-355c32eb.pth” to /home/jake/.cache/torch/hub/checkpoints/efficientnet-b0-355c32eb.pth
Traceback (most recent call last):
File “train.py”, line 333, in <module>
main()
File “train.py”, line 329, in main
main_worker(args.gpu, ngpus_per_node, args)
File “train.py”, line 232, in main_worker
D_class=EFFICIENTDET[args.network][‘D_class’]
File “/home/jake/Gits/EfficientDet.Pytorch/models/efficientdet.py”, line 33, in init
self.backbone = EfficientNet.from_pretrained(MODEL_MAP[network])
File “/home/jake/Gits/EfficientDet.Pytorch/models/efficientnet.py”, line 243, in from_pretrained
model, model_name, load_fc=(num_classes == 1000))
File “/home/jake/Gits/EfficientDet.Pytorch/models/utils.py”, line 319, in load_pretrained_weights
state_dict = model_zoo.load_url(url_map[model_name])
File “/home/jake/venv/lib/python3.6/site-packages/torch/hub.py”, line 483, in load_state_dict_from_url
download_url_to_file(url, cached_file, hash_prefix, progress=progress)
File “/home/jake/venv/lib/python3.6/site-packages/torch/hub.py”, line 381, in download_url_to_file
u = urlopen(req)
File “/usr/lib/python3.6/urllib/request.py”, line 223, in urlopen
return opener.open(url, data, timeout)
File “/usr/lib/python3.6/urllib/request.py”, line 532, in open
response = meth(req, response)
File “/usr/lib/python3.6/urllib/request.py”, line 642, in http_response
‘http’, request, response, code, msg, hdrs)
File “/usr/lib/python3.6/urllib/request.py”, line 570, in error
return self._call_chain(*args)
File “/usr/lib/python3.6/urllib/request.py”, line 504, in _call_chain
result = func(*args)
File “/usr/lib/python3.6/urllib/request.py”, line 650, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden |
st48925 | Could you post an executable code snippet to reproduce this issue and the repository you are using for the model implementation? |
st48926 | I think the problem is when loading the URL.
url_map = {
'efficientnet-b0': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b0-355c32eb.pth',
'efficientnet-b1': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b1-f1951068.pth',
'efficientnet-b2': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b2-8bb594d6.pth',
'efficientnet-b3': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b3-5fb5a3c3.pth',
'efficientnet-b4': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b4-6ed6700e.pth',
'efficientnet-b5': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b5-b6417697.pth',
'efficientnet-b6': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b6-c76e70fd.pth',
'efficientnet-b7': 'http://storage.googleapis.com/public-models/efficientnet/efficientnet-b7-dcc49843.pth',
}
def load_pretrained_weights(model, model_name, load_fc=True):
""" Loads pretrained weights, and downloads if loading for the first time. """
state_dict = model_zoo.load_url(url_map[model_name])
if load_fc:
model.load_state_dict(state_dict)
else:
state_dict.pop('_fc.weight')
state_dict.pop('_fc.bias')
res = model.load_state_dict(state_dict, strict=False)
assert set(res.missing_keys) == set(
['_fc.weight', '_fc.bias']), 'issue loading pretrained weights'
print('Loaded pretrained weights for {}'.format(model_name)) |
st48927 | I have opencv running in the same virtual environment in which I installed pytorch from source. However, when compiling pytorch I saw a warning (not an error):
“Excluding image processing operators due to no opencv”
Thank You
Tom |
st48928 | PyTorch itself uses PIL; however, I recommend installing OpenCV. It is easy to use and there are various functions which are helpful for data augmentation. |
st48929 | Thank you so much for your reply.
Yes, I use OpenCV and it is running in the same virtual environment. However, I do get a message while compiling pytorch
-- Excluding image processing operators due to no opencv
-- Excluding video processing operators due to no opencv
In a terminal, I can run
>>> import torch
>>> import cv2
What is the difference between compiling OpenCV with torch or separately?
Thank You |
st48930 | Some pip or conda OpenCV packages do not include some functions because of license or patent issues or CMake flags.
I usually use the link below for OpenCV in Python:
https://github.com/conda-forge/opencv-feedstock
I also use the miniconda system, so it does not take up much space in your storage. |
st48931 | It entirely depends on the user. Most of the PyTorch transformations are performed on PIL images. You can convert from PIL to Tensor and vice versa, and the same with PIL to NumPy. |
st48932 | I have been struggling to create batches for a 3D tensor. I have done this before for a 1D tensor. However, in my current research, I need to create batches out of a tensor with shape (1024,1024,2).
I created a custom dataset to use as my input for the DataLoader in PyTorch. I created the following for the 1D case:
class CustomDataset(Dataset):
def __init__(self, x_tensor, y_tensor):
self.xdomain = x_tensor
self.ydomain = y_tensor
def __getitem__(self, index):
return (self.xdomain[index], self.ydomain[index])
def __len__(self):
return len(self.xdomain)
It works pretty well; however, I realized that this doesn’t work for tensors x_tensor and y_tensor of shape (1024,1024,2) and (1024,1024,1) respectively. I understand that I have to change the __getitem__ and __len__ functions in a way that divides the tensors into batches.
I tried many things, but one that I know could work is to flatten these tensors into shapes (1024x1024,2) and (1024x1024,1). However, I would then have to change not only my NN definition but most of my code.
So I want to keep it as is and try to understand how to create these functions, if possible. What I understand of these functions is:
__len__ so that len(dataset) returns the size of the dataset.
__getitem__ to support indexing such that dataset[i] can be used to get the ith sample.
With this knowledge, I created this class, which finds the indices of the first 2 dimensions (to find the ith sample). However, this made the input of the NN (1024x1024,2) and the output (1024x1024,1), and I want them to be (1024,1024,2) and (1024,1024,1).
If someone with a better understanding of DataLoader and mini-batches could explain what I am missing, that would be amazing. And first of all, is this possible?
Thanks for reading this; sorry if this question is too basic. I hope it is clear. |
st48933 | I don’t completely understand the issue and the posted shapes, as I get the expected output of [batch_size, 1024, 2] and [batch_size, 1024, 1], as seen here:
class CustomDataset(Dataset):
def __init__(self, x_tensor, y_tensor):
self.xdomain = x_tensor
self.ydomain = y_tensor
def __getitem__(self, index):
return (self.xdomain[index], self.ydomain[index])
def __len__(self):
return len(self.xdomain)
x_tensor = torch.randn(1024, 1024, 2)
y_tensor = torch.randn(1024, 1024, 1)
dataset = CustomDataset(x_tensor, y_tensor)
loader = DataLoader(dataset, batch_size=2)
for x, y in loader:
print(x.shape, y.shape)
> torch.Size([2, 1024, 2]) torch.Size([2, 1024, 1])
torch.Size([2, 1024, 2]) torch.Size([2, 1024, 1])
torch.Size([2, 1024, 2]) torch.Size([2, 1024, 1])
[...] |
st48934 | I have 3 tensors with:
x.shape = (2500,2) # in this case each value in this array is unique (like a grid)
y.shape = (2500,1)
The last tensor is a subset of x; let’s call it z, with shape (650,2). I would like to find the indices where x = z, so I can use those indices to slice tensor “y” and find the corresponding values.
I decided to approach this problem the way I did the one-dimensional case: I used torch.where(). However, I found several problems with this way of using it.
First of all, I can’t use torch.where(x==z), since the tensors have different lengths.
Second of all, I believe this function isn’t pairwise, meaning that it can’t check each pair of numbers against each other. (This might not be true.)
I created a way that finds these indices:
def findIndexes(domain,quadrants):
noleaf = torch.tensor([])
for j in range(0, quadrants.shape[0]):
index = torch.where((domain[:,0] == quadrants[j,0]) & (domain[:,1] == quadrants[j,1]))[0]
noleaf = torch.cat((noleaf,index))
return noleaf
I was wondering if this way could be too inefficient, in terms of memory. I wanted to pick your brains to see if there is a more sophisticated way to do it. If so how could I do it? |
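One possible vectorized alternative, assuming every row of z occurs exactly once in x (broadcasting builds a (650, 2500) boolean match matrix, so keep an eye on memory for much larger inputs):
import torch
x = torch.randn(2500, 2)                     # the "domain" grid with unique rows
z = x[torch.randperm(2500)[:650]]            # a subset of x, for illustration
# match[i, j] is True when row i of z equals row j of x in both coordinates
match = (z.unsqueeze(1) == x.unsqueeze(0)).all(dim=2)   # (650, 2500)
indices = match.nonzero()[:, 1]              # index into x (and y) for each row of z
print(indices.shape)                         # torch.Size([650])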
st48935 | I am trying to do batch learning on a large dataset that will not fit on the GPU. I am not sure where to clear the gradients and compute the loss. Is this the correct way to use a DataLoader and move the data to the GPU in pieces for batch learning?
train_dataset = torch.utils.data.TensorDataset(X_train, y_train)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size, shuffle=True)
for epoch in range(n_epochs):
for loc_X_train,loc_y_train in train_dataloader:
# move data to the device
loc_X_train, loc_y_train = loc_X_train.cuda(), loc_y_train.cuda()
# clear any calculated gradients
optimizer.zero_grad()
# forward pass, compute outputs
outputs = model.forward(loc_X_train)
# compute loss
loss = loss_function(outputs, loc_y_train)
# backward pass, compute gradients
loss.backward()
# update learnable parameters
optimizer.step() |
st48936 | Solved by ptrblck in post #4
Yes, in that case you shouldn’t zero out the gradients in each iteration and note that each backward() call would accumulate the gradients. In batch training the gradients are usually calculated using the mean, so you might also need to scale the gradients before applying the optimizer.step() operat… |
st48937 | The DataLoader loop looks alright.
You shouldn’t call model.forward, but the model directly via outputs = model(loc_X_train) so that registered hooks will be properly called.
Also, usually you would update the parameters after the backward call in each iteration not once per epoch, so you might want to call optimizer.step() inside the DataLoader loop. |
st48938 | In batch learning, the weights are only updated once per epoch, so the optimizer.step() call needs to be at the end of the outer loop. I’m starting to think that the optimizer.zero_grad() call should be just before the inner loop. |
st48939 | Yes, in that case you shouldn’t zero out the gradients in each iteration and note that each backward() call would accumulate the gradients. In batch training the gradients are usually calculated using the mean, so you might also need to scale the gradients before applying the optimizer.step() operation. |
st48940 | What’s the best way to do this? If I put the following between the loss calculation and backward step, would it take care of it?
if loss_function.reduction != "sum":
loss *= len(outputs) |
st48941 | No, this would increase the loss even further.
Often you are scaling the loss via:
loss = loss / accumulation_steps
in each iteration. |
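Putting the pieces together, full-batch learning with this kind of accumulation could look roughly like the sketch below (placeholder model and data, .cuda() calls omitted; the scaling assumes a mean-reduction loss and equally sized batches):
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader
model = nn.Linear(20, 1)
loss_function = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
X_train, y_train = torch.randn(1000, 20), torch.randn(1000, 1)
train_dataloader = DataLoader(TensorDataset(X_train, y_train), batch_size=100, shuffle=True)
n_batches = len(train_dataloader)
for epoch in range(5):
    optimizer.zero_grad()                                       # once per epoch (full-batch learning)
    for loc_X_train, loc_y_train in train_dataloader:
        outputs = model(loc_X_train)
        loss = loss_function(outputs, loc_y_train) / n_batches  # scale so the accumulated gradient is a mean
        loss.backward()                                         # gradients accumulate across batches
    optimizer.step()                                            # single parameter update per epoch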
st48942 | /pytorch/aten/src/THC/THCTensorIndex.cu:272: indexSelectLargeIndex: block: [192,0,0], thread: [127,0,0] Assertion srcIndex < srcSelectDimSize failed.
0
TITAN RTX
Memory Usage:
Allocated: 0.0 GB
Cached: 0.0 GB
Traceback (most recent call last):
File “/users/sbhatta9/emnlp_code/train_ebrwt.py”, line 331, in <module>
train(args)
File “/users/sbhatta9/emnlp_code/train_ebrwt.py”, line 317, in train
valid_bleu = run_eval_bleu(epoch, (rebatch(b, field_names) for b in valid_iter), model_par, field_names)
File “/users/sbhatta9/emnlp_code/train_ebrwt.py”, line 187, in run_eval_bleu
hypo_scores = torch.stack([model(getattr(batch, field).cuda()) for field in field_names[1:1 + sample_count]]).view(sample_count, -1).t()
File “/users/sbhatta9/emnlp_code/train_ebrwt.py”, line 187, in <listcomp>
hypo_scores = torch.stack([model(getattr(batch, field).cuda()) for field in field_names[1:1 + sample_count]]).view(sample_count, -1).t()
File “/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/users/sbhatta9/emnlp_code/train_ebrwt.py”, line 71, in forward
_, sample_hidden = self.bert_model(hypo_sample)
File “/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/users/sbhatta9/sumanta/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py”, line 845, in forward
attention_mask=attention_mask, head_mask=head_mask)
File “/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/users/sbhatta9/sumanta/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py”, line 707, in forward
embedding_output = self.embeddings(input_ids, position_ids=position_ids, token_type_ids=token_type_ids)
File “/users/sbhatta9/sumanta/lib/python3.7/site-packages/torch/nn/modules/module.py”, line 722, in _call_impl
result = self.forward(*input, **kwargs)
File “/users/sbhatta9/sumanta/lib/python3.7/site-packages/pytorch_transformers/modeling_bert.py”, line 255, in forward
embeddings = words_embeddings + position_embeddings + token_type_embeddings
What does this error mean, and why does it appear in the middle of training? |
st48943 | Pasting the code here will give a better idea.
Maybe the id for a train (or validation) example token is not found in the vocab. You may want to double-check the vocab creation. |
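The device-side assert in the stack trace comes from an embedding index lookup, so one quick sanity check is to compare the maximum input id against the size of the embedding table. A self-contained sketch of the idea (the sizes and ids below are made up):
import torch
import torch.nn as nn
emb = nn.Embedding(num_embeddings=30522, embedding_dim=768)   # a BERT-base-sized table, for illustration
input_ids = torch.tensor([[101, 2023, 30999, 102]])           # 30999 >= 30522 would trigger the assert on GPU
if input_ids.max().item() >= emb.num_embeddings:
    print("out-of-range token id:", input_ids.max().item())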
st48944 | I have a very simple discriminator for a toy GAN problem where I’m trying to find the magnitude of the gradient in order to apply a penalty to the gradient. In order to do that, I need the gradient norm to be differentiable.
When I calculate the loss function for the generator I get the following computational graph:
# This code produces a tensor with a GradFn
bce = -1 * d_z.mean()
And then when I differentiate the loss with respect to the parameters I get a valid graph:
# This code produces a tensor with a GradFn
gradient = grad(bce, gan.G.parameters(), retain_graph=True, create_graph=True)
Here is the code for the generator:
class Linear(nn.Module):
def __init__(self, in_features: int, out_features: int):
super(Linear, self).__init__()
self.W = nn.Linear(in_features, out_features)
def forward(self, x: Tensor) -> Tensor:
return self.W(x)
Now… when I do the same thing for the discriminator, it doesn’t work. The loss works fine:
# This code produces a tensor with a GradFn
d_x = gan.D(X)
d_z = gan.D(g_X.detach())
bce = d_z.mean() - d_x.mean()
But when I try to differentiate it again I get no graph:
[screenshot: the returned gradient tensors have no grad_fn]
And here is the code for the discriminator. As you can see, I’m only using building blocks from torch.nn so requires_grad should be true for everything:
class Quadratic(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super(Quadratic, self).__init__()
        self.a = nn.Bilinear(in_features, in_features, out_features, bias=False)
        self.b = nn.Linear(in_features, out_features, bias=False)

    def forward(self, x: Tensor) -> Tensor:
        return self.a(x, x) + self.b(x)
I’ve been trying to debug this all day, any help would be greatly appreciated! |
st48945 | Solved by import-antigravity in post #13
Ah, I figured it out! The problem is that I’m using the pytorch-lightning package, and it was freezing the weights of the other model for each step. On the discriminator step, the gradient norm is actually fully a function of the generator weights, so that was causing the problem. So it looks like I… |
st48946 | Hi,
Do you set requires_grad afterwards for some reason on the params of gan.D?
Are you sure that your function is twice differentiable with respect to your parameters (and non-zero) though?
What happens if for the sake of the experiment you do bce = bce.exp() to ensure the function is properly differentiable? |
st48947 | I’m using just nn.Linear() and nn.Bilinear() layers so it should be twice differentiable.
Well I’ll be damned, when I do that I get a proper graph:
[screenshot: the gradients now show grad_fn entries, i.e. a proper graph]
st48948 | They are twice differentiable, but at least for Linear the second derivative is 0. And the autograd detects that the gradient is independent of the input and does not create a graph for it. |
st48949 | Well that’s the thing… The equation for the generator is Ax + b, so if anything it should be the one not returning a gradient, but it does. |
st48950 | Unfortunately, in the autograd we don’t distinguish between “is independent” and “is 0”.
Also such gradient can be represented as:
A Tensor full of 0s
None (or undefined tensor on the c++ side)
An error if you use autograd.grad(..., allow_unused=False)
And the reason for this is that sometimes it will create a graph that produces only 0s and sometimes won’t create the graph at all.
The problem is that it is very hard to always be consistent here: ideally we would never create the graph, but we also don’t want to do extra work just to figure out whether we should create it or not. |
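A tiny illustration of the behaviours listed above, using toy tensors (not from the thread):
import torch
from torch.autograd import grad

x = torch.randn(3, requires_grad=True)
w = torch.randn(3, requires_grad=True)  # never used below

out = (2 * x).sum()

# w does not appear in the graph of out, so its gradient is returned as None
print(grad(out, [x, w], retain_graph=True, allow_unused=True))  # (tensor([2., 2., 2.]), None)

# with the default allow_unused=False the same call raises a RuntimeError
try:
    grad(out, [x, w])
except RuntimeError as e:
    print("RuntimeError:", e)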
st48951 | I feel like there might be a miscommunication here. Ultimately, what I’m trying to do is get the 2-norm of the gradient (first derivative), and add it as a term to the overall loss function. While it is true that the generator is linear, I’ve worked the math out in closed form and the second derivative (Hessian) of the loss function with respect to the parameters is in fact non-zero. Once again, the generator returns a proper graph and the gradient is non-zero, as expected. The problem is the discriminator, which is bilinear and should have a non-zero hessian no matter what. Sorry if I’m not doing a good job of explaining. |
st48952 | But the example you showed takes the mean, not the 2 norm! That makes a big difference! |
st48953 | Ho that is a different part of the code ok!
The thing is that if the autograd does not create the graph, that means that the gradient will just always be 0 based on what you computed.
Maybe you can share a full code sample that shows your problem? |
st48954 | Here is a self-contained example, but as you can see it actually works now! This confirms that there is a bug somewhere in my code. I’ll take a closer look and report back once I find it:
import torch as tr
from torch.autograd import grad
# Generate data, 1000 samples from a normal distribution
X = tr.normal(tr.ones(1000)).view(-1, 1)
Z = tr.normal(tr.zeros(1000)).view(-1, 1)
# Define the generator and discriminator
# Generator: Az + b
class G(tr.nn.Module):
    def __init__(self):
        super(G, self).__init__()
        self.Ab = tr.nn.Linear(1, 1)

    def forward(self, x):
        return self.Ab(x)
# Discriminator: x^T C x + d^T x
class D(tr.nn.Module):
    def __init__(self):
        super(D, self).__init__()
        self.C = tr.nn.Bilinear(1, 1, 1, bias=False)
        self.d = tr.nn.Linear(1, 1, bias=False)

    def forward(self, x):
        return self.C(x, x) + self.d(x)
g = G()
d = D()
# Test to make sure outputs are valid
print("Generator Test:", g(Z).mean())
print("Discriminator Test:", d(X).mean(), d(g(Z)).mean())
# Define loss:
generator_loss = -1 * d(g(Z)).mean()
discriminator_loss = d(g(Z)).mean() - d(X).mean()
# Test to make sure loss is differentiable
print("Generator Loss:", generator_loss)
print("Discriminator Loss:", discriminator_loss)
# Get gradient wrt parameters
generator_gradient = grad(generator_loss, g.parameters(), retain_graph=True, create_graph=True)
discriminator_gradient = grad(discriminator_loss, d.parameters(), retain_graph=True, create_graph=True)
# Both tensors should still have grad_fn
print("Generator Gradient:", generator_gradient)
print("Discriminator Gradient:", discriminator_gradient)
# Take 2-norm of gradient components
generator_norm = tr.norm(tr.cat([tr.flatten(i) for i in generator_gradient]))
discriminator_norm = tr.norm(tr.cat([tr.flatten(i) for i in discriminator_gradient]))
# Add to loss function, should STILL have grad_fn
final_gen_loss = generator_loss + generator_norm
final_disc_loss = discriminator_loss + discriminator_norm
print("Final Generator Loss:", final_gen_loss)
print("Final Discriminator Loss:", final_disc_loss)
Output:
Generator Test: tensor(-0.4425, grad_fn=<MeanBackward0>)
Discriminator Test: tensor(-0.9662, grad_fn=<MeanBackward0>) tensor(0.3021, grad_fn=<MeanBackward0>)
Generator Loss: tensor(-0.3021, grad_fn=<MulBackward0>)
Discriminator Loss: tensor(1.2683, grad_fn=<SubBackward0>)
Generator Gradient: (tensor([[-0.0995]], grad_fn=<TBackward>), tensor([0.6430], grad_fn=<ViewBackward>))
Discriminator Gradient: (tensor([[[-1.6031]]], grad_fn=<AddBackward0>), tensor([[-1.3847]], grad_fn=<AddBackward0>))
Final Generator Loss: tensor(0.3486, grad_fn=<AddBackward0>)
Final Discriminator Loss: tensor(3.3866, grad_fn=<AddBackward0>) |
st48955 | Ah, I figured it out! The problem is that I’m using the pytorch-lightning package, and it was freezing the weights of the other model for each step. On the discriminator step, the gradient norm is actually fully a function of the generator weights, so that was causing the problem. So it looks like I’ll have to probably write a custom optimizer to get the functionality I want. |
st48956 | If I’m bringing a tensor back from the GPU to the CPU and need to use it in multiple processes, can I move it directly into shared memory and avoid the copy with .share_memory_()? |
st48957 | I’m running into problems with training (fairseq code) across 2 machines. The script worked in one of our cloud environments, but not in another and I’m trying to figure out why. The drivers are not exactly the same across the machines but we don’t have permissions to fix that in the second environment.
The following code:
Code sample
NUM_NODES=2
NODE_RANK=0
MASTER_IP=192.168.0.34
MASTER_PORT=1234
DATA_DIR=~/wikitext_103
# Change the above for every node #####
TOTAL_UPDATES=125000 # Total number of training steps
WARMUP_UPDATES=10000 # Warmup the learning rate over this many updates
PEAK_LR=0.0005 # Peak learning rate, adjust as needed
TOKENS_PER_SAMPLE=512 # Max sequence length
MAX_POSITIONS=512 # Num. positional embeddings (usually same as above)
MAX_SENTENCES=4 # Number of sequences per batch (batch size)
UPDATE_FREQ=24 # Increase the batch size 16x
python3 -m torch.distributed.launch --nproc_per_node=1 \
--nnodes=$NUM_NODES --node_rank=$NODE_RANK --master_addr=$MASTER_IP \
--master_port=$MASTER_PORT \
$(which fairseq-train) --fp16 $DATA_DIR \
--task masked_lm --criterion masked_lm \
--arch roberta_large --sample-break-mode complete --tokens-per-sample $TOKENS_PER_SAMPLE \
--optimizer adam --adam-betas '(0.9,0.98)' --adam-eps 1e-6 --clip-norm 0.0 \
--lr-scheduler polynomial_decay --lr $PEAK_LR --warmup-updates $WARMUP_UPDATES --total-num-update $TOTAL_UPDATES \
--dropout 0.1 --attention-dropout 0.1 --weight-decay 0.01 \
--max-sentences $MAX_SENTENCES --update-freq $UPDATE_FREQ \
--max-update $TOTAL_UPDATES --log-format simple --log-interval 1
yields the following error:
-- Process 2 terminated with the following error:
Traceback (most recent call last):
File "/usr/local/lib64/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/tmp/src/fairseq/fairseq_cli/train.py", line 281, in distributed_main
main(args, init_distributed=True)
File "/tmp/src/fairseq/fairseq_cli/train.py", line 46, in main
args.distributed_rank = distributed_utils.distributed_init(args)
File "/tmp/src/fairseq/fairseq/distributed_utils.py", line 100, in distributed_init
dist.all_reduce(torch.zeros(1).cuda())
File "/usr/local/lib64/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 902, in all_reduce
work = _default_pg.allreduce([tensor], opts)
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/../c10d/NCCLUtils.hpp:78, invalid argument, NCCL version 2.4.8
Any tips or hints for where to look would be greatly appreciated!
Environment
PyTorch Version:1.4.0
fairseq Version: 0.9.0
OS: CentOS Linux release 7.6.1810
Python version: 3.6.8
CUDA/cuDNN version: Cuda compilation tools, release 10.2, V10.2.89
GPU models and configuration: V100’s across 2 machines |
st48958 | Absolutely:
| distributed init (rank 1): env://
| distributed init (rank 5): env://
| distributed init (rank 3): env://
| distributed init (rank 4): env://
| distributed init (rank 0): env://
| distributed init (rank 7): env://
| initialized host seskscpg054.prim.scp as rank 7
| initialized host seskscpg054.prim.scp as rank 0
| distributed init (rank 6): env://
| initialized host seskscpg054.prim.scp as rank 6
| distributed init (rank 8): env://
| initialized host seskscpg054.prim.scp as rank 8
| distributed init (rank 2): env://
| initialized host seskscpg054.prim.scp as rank 2
| distributed init (rank 9): env://
| initialized host seskscpg054.prim.scp as rank 9
| initialized host seskscpg054.prim.scp as rank 1
| initialized host seskscpg054.prim.scp as rank 5
| initialized host seskscpg054.prim.scp as rank 3
| initialized host seskscpg054.prim.scp as rank 4
seskscpg054:70822:70822 [0] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70822:70822 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70822:70822 [0] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70822:70822 [0] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
NCCL version 2.4.8+cuda10.1
seskscpg054:70824:70824 [2] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70829:70829 [7] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70831:70831 [9] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70824:70824 [2] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70829:70829 [7] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70831:70831 [9] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70824:70824 [2] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70829:70829 [7] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70831:70831 [9] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70828:70828 [6] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70830:70830 [8] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70828:70828 [6] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70830:70830 [8] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70828:70828 [6] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70830:70830 [8] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70829:70829 [7] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70824:70824 [2] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70831:70831 [9] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70829:70829 [7] init.cc:981 NCCL WARN Invalid rank requested : 7/2
seskscpg054:70824:70824 [2] init.cc:981 NCCL WARN Invalid rank requested : 2/2
seskscpg054:70831:70831 [9] init.cc:981 NCCL WARN Invalid rank requested : 9/2
seskscpg054:70828:70828 [6] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70830:70830 [8] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70828:70828 [6] init.cc:981 NCCL WARN Invalid rank requested : 6/2
seskscpg054:70830:70830 [8] init.cc:981 NCCL WARN Invalid rank requested : 8/2
seskscpg054:70822:71520 [0] NCCL INFO Setting affinity for GPU 0 to 5555,55555555,55555555
seskscpg054:70827:70827 [5] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70827:70827 [5] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70827:70827 [5] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70827:70827 [5] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70827:70827 [5] init.cc:981 NCCL WARN Invalid rank requested : 5/2
seskscpg054:70823:70823 [1] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70823:70823 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70823:70823 [1] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70823:70823 [1] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70826:70826 [4] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70826:70826 [4] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70826:70826 [4] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70826:70826 [4] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70826:70826 [4] init.cc:981 NCCL WARN Invalid rank requested : 4/2
seskscpg054:70825:70825 [3] NCCL INFO Bootstrap : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70825:70825 [3] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so).
seskscpg054:70825:70825 [3] misc/ibvwrap.cc:63 NCCL WARN Failed to open libibverbs.so[.1]
seskscpg054:70825:70825 [3] NCCL INFO NET/Socket : Using [0]enp124s0f0:10.96.65.162<0> [1]enp124s0f1:fe80::42a6:b7ff:fe02:f131%enp124s0f1<0>
seskscpg054:70825:70825 [3] init.cc:981 NCCL WARN Invalid rank requested : 3/2
seskscpg054:70823:71521 [1] NCCL INFO Setting affinity for GPU 1 to 5555,55555555,55555555
seskscpg054:70822:71520 [0] NCCL INFO Channel 00 : 0 1
seskscpg054:70823:71521 [1] NCCL INFO Ring 00 : 1[1] -> 0[0] via P2P/IPC
seskscpg054:70822:71520 [0] NCCL INFO Ring 00 : 0[0] -> 1[1] via P2P/IPC
seskscpg054:70823:71521 [1] NCCL INFO comm 0x7f909c001e30 rank 1 nranks 2 cudaDev 1 nvmlDev 1 - Init COMPLETE
Thank you for taking the time! |
st48959 | Never got to the bottom of the problem unfortunately, but after reinstalling everything on all machines, the error disappeared and it ran smoothly. |
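For anyone hitting a similar invalid argument from the initial all_reduce, a minimal standalone script (a sketch, independent of fairseq) can help confirm whether NCCL works across the machines at all; launch it on both nodes with the same torch.distributed.launch arguments as in the original command so the env:// variables are set:
import argparse
import torch
import torch.distributed as dist

# parse the --local_rank argument that torch.distributed.launch passes to each process
parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend='nccl', init_method='env://')

t = torch.zeros(1).cuda()
dist.all_reduce(t)  # the call that fails inside fairseq's distributed_init
print('rank %d/%d all_reduce ok' % (dist.get_rank(), dist.get_world_size()))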
st48960 | I’m training a model, performing a forward pass, and comparing the output to that of the same model loaded from a checkpoint. The outputs are similar and perform similarly when used for classification, but they are not exactly identical, as I would expect them to be.
I set up the model and optimizer:
class LinearNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(98, 98*3),
            nn.ReLU(inplace=True),
            nn.Linear(98*3, 98*2),
            nn.ReLU(inplace=True),
            nn.Linear(98*2, 98*1)
        )

    def forward(self, x):
        x = x.view(-1, 98*1)
        x = self.classifier(x)
        return x
net = LinearNet()
optimizer = optim.Adam(net.parameters(), **{'lr':0.001, 'betas':(0.9, 0.999), 'eps':1e-08, 'weight_decay':0, 'amsgrad':False})
Train the model and save a checkpoint at every 2000 mini-batches:
for epoch in range(1, epochs+1):
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels.flatten())
        loss.backward()
        optimizer.step()
        # get statistics every 2000 mini-batches
        running_loss += loss.item()
        if i % 2000 == 1999:
            # log the running training loss
            training_loss.append(running_loss / 2000)
            # log the running validation loss
            with torch.no_grad():
                running_val_loss = 0.0
                for i_val, data_val in enumerate(valloader, 0):
                    inputs_val, labels_val = data_val
                    outputs_val = net(inputs_val)
                    loss_val = criterion(outputs_val, labels_val.flatten()).item()
                    running_val_loss += loss_val
                validation_loss.append(running_val_loss / len(valloader))
            print('[%d, %5d] train_loss: %.3f | val_loss: %.3f' %
                  (epoch, i + 1, running_loss / 2000, running_val_loss / len(valloader)))
            # save checkpoint
            torch.save({
                'epoch': epoch,
                'model_state_dict': net.state_dict(),
                'optimizer_state_dict': optimizer.state_dict(),
                'training_loss': running_loss / 2000,
                'validation_loss': running_val_loss / len(valloader)
            }, PATH+'/epoch{}_model.pt'.format(epoch))
            running_loss = 0.0
net.eval()
Load the last saved checkpoint:
checkpoint = torch.load(PATH+'/epoch{}_model.pt'.format(epoch))
loaded_net = LinearNet()
loaded_net.load_state_dict(checkpoint['model_state_dict'])
loaded_net.to(device)
for parameter in loaded_net.parameters():
    parameter.requires_grad = False
loaded_net.eval()
And finally compare results with the same code:
output = net(torch.tensor(inputs).float().to(device)).cpu().detach().numpy()
I’m wondering why the outputs are not exactly the same. Maybe my checkpointing process is incorrect somehow? |
st48961 | Solved by Abhilash_Srivastava in post #2
Before calculating the validation_loss, set net.eval() and back to net.train() for training.
Also, note that the model checkpoint is saved at the following point, not at the end of the loop. So, ensure that the same checkpoint and data is used for fair evaluation.
If these conditions are met, you… |
st48962 | Before calculating the validation_loss, set net.eval(), and switch back to net.train() for training.
Also, note that the model checkpoint is saved at the following point, not at the end of the loop. So, ensure that the same checkpoint and data are used for a fair evaluation.
dpalbrecht:
if i % 2000 == 1999:
If these conditions are met, your loss values and predictions should match exactly. |
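A minimal sketch of the suggested toggle around the validation block (the toy model and data below are placeholders; the dropout layer is only there to make the mode switch visible):
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(98, 32), nn.Dropout(0.5), nn.Linear(32, 98))
criterion = nn.MSELoss()
inputs_val, targets_val = torch.randn(4, 98), torch.randn(4, 98)

# ... inside the training loop, at the "if i % 2000 == 1999:" point ...
net.eval()                 # switch dropout/batchnorm layers to eval behaviour
with torch.no_grad():      # skip autograd bookkeeping during validation
    val_loss = criterion(net(inputs_val), targets_val).item()
net.train()                # back to training mode before the next update
print(val_loss)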
st48963 | Thank you very much! Of course, the model checkpoint is saved before the end of the epoch which I was not accounting for. The number of mini-batches before saving happened to be enough for a single epoch and so I overlooked that. This should definitely solve it. I’ll give it a try.
Further, could you explain when to use net.eval() vs. torch.no_grad() in the training loop above? I’ve read a few topics on this but am still unsure. |
st48964 | Your use of torch.no_grad() looks fine to me. model.eval should be set just before you enter the validation (or evaluation loop). Set the mode back to model.train once you exit that loop. Similarly, ensure whenever you’re performing evaluation of any kind, mode is set to eval.
The reason is nicely discussed here 10. |
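For reference, a tiny sketch of what each call controls (toy module, not from the thread): model.eval() switches layers such as Dropout and BatchNorm to their inference behaviour, while torch.no_grad() only disables gradient tracking.
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.Sequential(nn.Linear(4, 4), nn.Dropout(0.5))
x = torch.randn(1, 4)

m.train()
print(m(x))                 # dropout active, gradients tracked
with torch.no_grad():
    print(m(x))             # dropout still active, but no gradient tracking

m.eval()
with torch.no_grad():
    print(m(x))             # deterministic output, no gradient tracking
print(m(x).requires_grad)   # True: eval() alone does not disable autograd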
st48965 | Hi! I was wondering if there is a way to have the data loading class grab data from a somewhat complex folder structure. The main issue is that __getitem__ takes a single idx input and __len__ only returns one length value. For example, a folder structure like this seems to fail:
–train
----Date1
-----------rgb
-----------depth
----Date2
-----------rgb
-----------depth
----Date3
-----------rgb
-----------depth
The file count under each date folder varies; is there a way to iterate through all the data without resorting to piling all the files into one large folder? |
st48966 | Solved by RoySadaka in post #4
Hmm, I believe the official DataLoader has some logic regarding indexing (like you mentioned); it may also depend on whether you set shuffle=True/False, a sampler, etc., but I always add an extra guard with a modulo operation ("%") even if it is redundant.
I recommend reading the official guide to have a … |
st48967 | Although __getitem__ receives an index, you can do whatever you want in it: you can ignore it and maintain your own indexing logic, or use it if it helps. See the example below (I didn’t test it, and for multiple workers it probably needs more work, but it should give you the idea):
import os
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, root_folder):
        self.root_folder = root_folder
        self.idx_to_date, self.idx_to_len = self._get_idx_to_date_folder_name()
        self.next_index = 0  # or any random int

    def _get_idx_to_date_folder_name(self):
        # iterate over the inner folders/files in the root folder, and collect meta data
        prefix = 'Date'
        idx_to_date_folder = {}
        idx_to_len = {}
        idx = 0
        for _, dir_names, file_names in os.walk(self.root_folder):
            for dir_name in dir_names:
                if not dir_name.startswith(prefix):
                    continue
                idx_to_date_folder[idx] = dir_name
                idx_to_len[idx] = len(file_names)
                idx += 1
        return idx_to_date_folder, idx_to_len

    def _load_data_from_folder(self, folder_name):
        raise NotImplementedError('your loading logic')

    def __len__(self):
        # next data from __getitem__ will have this len (for single worker mode)
        return self.idx_to_len[self.next_index]

    def __getitem__(self, index):
        # ignore index for now, use self.next_index, it will help you return the correct "__len__"
        date_index = self.next_index % len(self.idx_to_date)
        folder_name = self.idx_to_date[date_index]
        X, y = self._load_data_from_folder(folder_name)
        # get ready for next time you request __len__ or __getitem__
        self.next_index = index
        return X, y
Also, it’s worth mentioning that you don’t have to use the DataLoader or Dataset; you can create your own Python generator (using yield) if that is easier for your situation, as sketched below.
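For example, a minimal generator over the folder layout from the question might look like this (a sketch only; it assumes the rgb and depth folders contain matching filenames):
import os

def iter_samples(root_folder):
    # yield (rgb_path, depth_path) pairs from every date folder under root_folder
    for date_dir in sorted(os.listdir(root_folder)):
        rgb_dir = os.path.join(root_folder, date_dir, 'rgb')
        depth_dir = os.path.join(root_folder, date_dir, 'depth')
        if not (os.path.isdir(rgb_dir) and os.path.isdir(depth_dir)):
            continue
        for fname in sorted(os.listdir(rgb_dir)):
            yield os.path.join(rgb_dir, fname), os.path.join(depth_dir, fname)

# usage: for rgb_path, depth_path in iter_samples('train'): load, transform and batch as needed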
Roy. |