st32368
|
Thanks for the clarification! I am indeed using DataParallel()
Please see the edit I made. The split batch dimensions are getting coalesced for the output from my LSTM encoder, but for the hidden features the dimensions are not being coalesced in the same way. The hidden features are not being returned as 1x16x128 (num_layers x batch_size x hidden_size) but instead as 4x4x128 on multi-GPU training.
|
st32369
|
The coalescing might be getting confused by the ordering of the dimensions. Can you temporarily permute() the batch dimension to the first dimension and then permute() again as necessary for the desired data layout?
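A minimal sketch of that workaround (a hypothetical wrapper, assuming the encoder is an nn.LSTM with batch_first=False): put the batch dimension first in every returned tensor so DataParallel gathers them along the batch, then permute back outside.
import torch
import torch.nn as nn

class EncoderWrapper(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, x):
        output, (h, c) = self.encoder(x)
        # h, c are (num_layers, batch, hidden); return them as (batch, num_layers, hidden)
        return output, h.permute(1, 0, 2), c.permute(1, 0, 2)

# outside the DataParallel call, restore the original layout if needed:
# h = h.permute(1, 0, 2)  # back to (num_layers, batch, hidden)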
|
st32370
|
Sure I could always do that! But I feel the default behaviour for data parallelisation should be to re-organise the batches for any tensor that gets returned from the model. I wonder if I should create a GitHub issue to get the devs’ opinions
|
st32371
|
The issue is that I don’t think tensors have “named” dimensions at the moment (a prototype feature here). So there is no way for PyTorch to actually know which dimension is the batch dimension (e.g., imagine your features have dimensions of (4, 4, 4, 4) or (16, 16, 16, 16)). So the current default behavior often assumes the first dimension is the batch.
|
st32372
|
I am trying to run a Google Colaboratory notebook, but the program crashes in the second cell. How can I fix the error?
|
st32373
|
Briefly: you have installed:
!pip install -q torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+cu101.html
!pip install -q torch-sparse -f https://pytorch-geometric.com/whl/torch-1.7.0+cu101.html
!pip install -q torch-geometric
!pip install ogb
and when you do this import:
from torch_geometric.datasets import TUDataset
you get this error:
---------------------------------------------------------------------------
/usr/local/lib/python3.7/dist-packages/torch_geometric/__init__.py in <module>()
3
4 from .debug import is_debug_enabled, debug, set_debug
----> 5 import torch_geometric.data
6 import torch_geometric.transforms
7 import torch_geometric.utils
/usr/local/lib/python3.7/dist-packages/torch_geometric/data/__init__.py in <module>()
----> 1 from .data import Data
2 from .temporal import TemporalData
3 from .batch import Batch
4 from .dataset import Dataset
5 from .in_memory_dataset import InMemoryDataset
/usr/local/lib/python3.7/dist-packages/torch_geometric/data/data.py in <module>()
6 import torch
7 import torch_geometric
----> 8 from torch_sparse import coalesce, SparseTensor
9 from torch_geometric.utils import (contains_isolated_nodes,
10 contains_self_loops, is_undirected)
/usr/local/lib/python3.7/dist-packages/torch_sparse/__init__.py in <module>()
13 ]:
14 torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
---> 15 f'{library}_{suffix}', [osp.dirname(__file__)]).origin)
16
17 if torch.cuda.is_available(): # pragma: no cover
/usr/local/lib/python3.7/dist-packages/torch/_ops.py in load_library(self, path)
102 # static (global) initialization code in order to register custom
103 # operators with the JIT.
--> 104 ctypes.CDLL(path)
105 self.loaded_libraries.add(path)
106
/usr/lib/python3.7/ctypes/__init__.py in __init__(self, name, mode, handle, use_errno, use_last_error)
362
363 if handle is None:
--> 364 self._handle = _dlopen(self._name, mode)
365 else:
366 self._handle = handle
OSError: /usr/local/lib/python3.7/dist-packages/torch_sparse/_convert_cuda.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIdEEPKNS_6detail12TypeMetaDataEv
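Not part of the original post, but a quick way to see why this usually happens: the torch-scatter/torch-sparse wheels above were built for torch 1.7.0+cu101, and an undefined-symbol error typically means the runtime now ships a different torch/CUDA combination. A small check:
import torch

# Print the installed torch and CUDA versions so the matching wheel index
# (.../whl/torch-<torch version>+cu<cuda version>.html) can be used instead
# of the hard-coded 1.7.0+cu101 one.
print(torch.__version__)   # e.g. 1.8.1+cu101
print(torch.version.cuda)  # e.g. 10.1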
|
st32374
|
My training code takes up about 9 GB and I have a 2080 Ti, which has 11 GB of memory.
When I run this on a single GPU, the process is pretty stable, as the GPU usage stays under 10 GB.
However, when I add an extra 2080 Ti device and use torch.multiprocessing for parallel training,
I have noticed that it takes up a little more GPU memory and occasionally causes an out-of-memory issue.
I am wondering if the increase in GPU memory usage is expected.
The documentation says that when sharing CUDA memory, the original tensor will exist until it is moved to the other process (Multiprocessing package - torch.multiprocessing — PyTorch 1.8.1 documentation).
Is this why the multiprocessing case consumes a little more memory?
|
st32375
|
I want to create a model with shared weights; for example: given two inputs A and B, the first 3 NN layers share the same weights, and the next 2 NN layers are separate for A and B respectively.
How can I create such a model, and have it perform optimally?
|
st32376
|
EDIT: we do support sharing Parameters between modules, but it’s recommended to decompose your model into many pieces that don’t share parameters if possible.
We don’t support using the same Parameters in many modules. Just reuse the base for two inputs:
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.base = ...
        self.head_A = ...
        self.head_B = ...

    def forward(self, input1, input2):
        return self.head_A(self.base(input1)), self.head_B(self.base(input2))
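As an illustration (not part of the original answer), a tiny runnable version of this pattern with hypothetical nn.Linear stand-ins for the elided modules, showing that the shared base accumulates gradients from both inputs:
import torch
import torch.nn as nn

base = nn.Linear(4, 8)     # shared trunk (stand-in)
head_A = nn.Linear(8, 2)   # head for input A (stand-in)
head_B = nn.Linear(8, 2)   # head for input B (stand-in)

x1, x2 = torch.randn(3, 4), torch.randn(3, 4)
loss = head_A(base(x1)).sum() + head_B(base(x2)).sum()
loss.backward()
# base.weight.grad contains contributions from both inputs, because the
# shared module appears twice in the autograd graph.
print(base.weight.grad.shape)  # torch.Size([8, 4])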
|
st32377
|
In your example, what will happen to the gradients of self.base? Will they be calculated taking into account both input1 and input2?
|
st32378
|
There are lots of cases where you can’t just reuse a Module but you still want to share parameters (e.g. in language modeling you sometimes want your word embeddings and output linear layers to share weight matrices). I thought reusing Parameters was ok? It’s used in the PTBLM example https://github.com/pytorch/examples/blob/master/word_language_model/model.py and it’s something people will keep doing (and expect to work) unless it throws an error.
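For reference, the tying pattern mentioned here usually looks roughly like this (a sketch with made-up sizes, not the exact code from the linked example):
import torch.nn as nn

vocab_size, emb_dim = 10000, 256          # hypothetical sizes
embedding = nn.Embedding(vocab_size, emb_dim)
decoder = nn.Linear(emb_dim, vocab_size)
decoder.weight = embedding.weight         # the same Parameter is used by two modules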
|
st32379
|
Yeah they are supported, sorry for this. But it’s still considered better practice to not do it. I’ve updated the answer.
|
st32380
|
github.com
seokinj/coPassGAN/blob/master/models.py#L27
        nn.ReLU(True),
        nn.Conv1d(DIM, DIM, 5, padding=2),  # nn.Linear(DIM, DIM),
        nn.ReLU(True),
        nn.Conv1d(DIM, DIM, 5, padding=2),  # nn.Linear(DIM, DIM),
    )

    def forward(self, input):
        output = self.res_block(input)
        return input + (0.3 * output)

G_block_share1 = ResBlock()
G_block_share2 = ResBlock()
D_block_share4 = ResBlock()
D_block_share5 = ResBlock()

class Generator_A(nn.Module):
    def __init__(self, charmap):
        super(Generator_A, self).__init__()
In this code, he creates the shared modules (G_block_share/D_block_share) outside the classes, and then uses these shared modules inside two different classes (Generator A & B, or Discriminator A & B).
Is this the right way to share weights between the two generators/discriminators?
|
st32381
|
apaszke:
Yeah they are supported, sorry for this. But it’s still considered better practice to not do it. I’ve updated the answer.
Could you please tell us why it is better to not do it? Thanks
|
st32382
|
Dear Apaszke, thank you for your updates! But I am still a little confused about your answer. Like in your example, you have 3 modules (base, headA, headB), but how could you decompose them into pieces that don’t share parameters? Looking forward to your answer, please! Thank you for your attention.
|
st32383
|
I think it is wrong. Just defining a single G_block_share = ResBlock() and reusing it is right.
https://pytorch.org/tutorials/beginner/examples_nn/dynamic_net.html#pytorch-control-flow-weight-sharing
|
st32384
|
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.base1 = ...
        self.base2 = ...
        self.head_A = ...
        self.head_B = ...

    def forward(self, input1, input2):
        return self.head_A(self.base1(input1)), self.head_B(self.base2(input2))
|
st32385
|
But in this case, how would base1 and base2 share the same weights? It seems like base1 + head_A and base2 + head_B are totally separate models.
|
st32386
|
Hi @apaszke, with regard to this mechanism for sharing weights, what is the standard way of masking?
I mean, in a batch of inputs, not all the members of the batch have the same number of elements;
for instance, in a batch of 16 sequences you may well have some sequences with 10 elements while others have only 2,
so you can pad the shorter ones with 0s in order to match the maximum sequence length, but you do not want your weights to be adjusted based on such padding inputs.
Therefore, what is the standard technique in PyTorch for masking in this weight-sharing setup?
Thanks!
|
st32387
|
Is there a general way to change the output shape of any PyTorch architecture?
Based on this article (Finetuning Torchvision Models — PyTorch Tutorials 1.2.0 documentation) it seems like each architecture requires a different method to change the number of output nodes (classes) - for instance,
for resnet18 you can replace model.fc with an nn.Linear layer, but for vgg11_bn you have to replace the model.classifier[6] layer. Unfortunately, that article does not cover all of the currently implemented PyTorch architectures. In general, as more architectures are added it would be hard to keep adding new wrapper functions for changing the output shape. It would be very helpful if there were a consistent way to modify the output shape of any PyTorch architecture.
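For the two cases mentioned above, the replacement looks roughly like this (a sketch; layer names follow the torchvision models referenced in that tutorial, and num_classes is a hypothetical target class count):
import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical number of target classes

resnet = models.resnet18(pretrained=True)
resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)

vgg = models.vgg11_bn(pretrained=True)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)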
|
st32388
|
I’m trying to find the definition of Tensor as used in the C++ parts of PyTorch, for example in pytorch/aten/src/ATen/native/ReduceOps.cpp in line 746 we have the function
Tensor prod(const Tensor& self, int64_t dim, bool keepdim, c10::optional<ScalarType> opt_dtype) {
  ScalarType dtype = get_dtype_from_self(self, opt_dtype, true);
  Tensor result = create_reduction_result(self, dim, keepdim, dtype);
  native::prod_out_impl(result, self, dim, keepdim, dtype);
  return result;
}
but for some reason the IDE I’m using (CLion) can’t find the declaration of the type Tensor.
Could you kindly direct me to the declaration?
|
st32389
|
This is declared in a generated header include/ATen/core/TensorBody.h which you find in LibTorch / PyTorch installations. The template is in aten/src/ATen/templates/TensorBody.h.
|
st32390
|
I asked this because I wanted to understand the following: I have a function
- func: function(Tensor self) -> Tensor
  variants: method, function
  dispatch:
    CPU: function_cpu
    CUDA: function_cuda
whose implementation I want to change.
The current version’s return tensor is declared this way:
Tensor result;
result = at::empty({}, self.options().dtype(dtype));
I computed the result I want to return with std::vectors (probably not optimal, feel free to suggest something better). Now I’m wondering how I can pass this vector to the result tensor. I think I have to use the following in some way, and was trying to find its implementation by asking about the location of the Tensor class:
self.data_ptr<scalar_t>()
I can’t seem to infer this from the information you gave me above. Can you explain to me how I can properly copy my data into the return tensor? Is there maybe an instructive function somewhere I could look at?
|
st32391
|
I have a convolutional neural network that predicts 3 quantities: Ux, Uy, and P. They are all 2D arrays of size [100,60], and my batch size is 10.
I want to compute the loss and update the network by comparing the curl of the predicted velocity with the curl of the target velocity. I have a function that does this, discrete_curl, and it computes the curl from (Ux_pred, Uy_pred), the predicted Ux and Uy. I want to compute the loss by comparing it to the ground-truth targets that I have: true_curl = curl(Ux_true, Uy_true).
To do this, I need to compute the curl loss in terms of Ux and Uy. I have not gotten past this error in PyTorch. This is my code so far:
# Curl function defined separately
def discrete_curl(self, x, y, curl):
    for m in range(100):
        for n in range(60):
            if n <= 58:
                if m <= 98:
                    if x[m, n] != 0 and y[m, n] != 0:
                        curl[m, n] = ((y[m+1, n] - y[m-1, n]) / 2*1) - ((x[m, n+1] - x[m, n-1]) / 2*1)
    return curl
Code:
inputs = torch.from_numpy(x)
targets = torch.from_numpy(y[:, 0:3, :, :])  # Ux, Uy, P target
pred = self.forward(inputs)
pred_ux = pred[:, 0, :, :]  # Ux of batch: size [10,100,60]
pred_uy = pred[:, 1, :, :]  # Uy of batch: size [10,100,60]
predicted_curl = np.zeros((len(pred), 100, 60), dtype=float)
predicted_curl = torch.from_numpy(predicted_curl)
for i in range(len(pred)):
    pred_ux[i] = Variable(pred_ux[i], requires_grad=True)  # Ux_pred
    pred_uy[i] = Variable(pred_uy[i], requires_grad=True)  # Uy_pred
    predicted_curl[i] = Variable(predicted_curl[i], requires_grad=True)  # curl from predicted velocity values, to be updated with the curl function below
    predicted_curl[i] = self.discrete_curl(pred_ux[i], pred_uy[i], predicted_curl[i])
    grad_tensor = torch.autograd.grad(outputs=predicted_curl[i], inputs=(pred_ux[i], pred_uy[i]), grad_outputs=torch.ones_like(new_arr[i]), retain_graph=True)
    print(grad_tensor)
However, it fails when I try to define grad_tensor, before I can even compute the loss. It fails with this error:
grad_tensor = torch.autograd.grad(outputs=new_arr[i], inputs=(pred_ux[i], pred_uy[i]), grad_outputs=torch.ones_like(new_arr[i]), retain_graph=True)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/torch/autograd/__init__.py", line 204, in grad
inputs, allow_unused)
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
How do I get past this?
|
st32392
|
Have you tried adding allow_unused=True in torch.autograd.grad? If you enable it, the gradients will be None for the unused variables. You can then check which variables’ gradients are None and either set those variables’ requires_grad=False, or set the gradients of all variables with None gradients to 0, since you do not need to learn these things.
|
st32393
|
I haven’t tried that, I’m not sure what it means because the variables should be used.
|
st32394
|
The error means not all the tensors for which requires_grad=True are used in gradient computation. An example of this would be
a = torch.randn(3, 4, requires_grad=True)
b = torch.randn(3, 4, requires_grad=True)
loss = a.sum()
torch.autograd.grad(loss, (a,b))
So try setting allow_unused=True and it should work.
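With the flag set, the same call returns None for the unused input instead of raising an error (a short sketch continuing the example above):
grads = torch.autograd.grad(loss, (a, b), allow_unused=True)
print(grads[1])  # None, because b never contributed to loss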
|
st32395
|
Hi,
I get the following error when using tune.run() and I do not know why. Could someone please advise me?
AttributeError Traceback (most recent call last)
<ipython-input-145-af69a8390d75> in <module>()
13 best_trial = result.get_best_trial(metric = "loss", mode="min")
14
---> 15 print("Best trial config: {}".format(best_trial.config))
16 print("Best trial final validation loss: {}".format(
17 best_trial.last_result["loss"]))
AttributeError: 'NoneType' object has no attribute 'config'
My code is:
checkpoint_dir = '/content/gdrive/MyDrive/MSc_Thesis/Exon_Mobil_Data/Checkpoint_dir'
data_dir = '/content/gdrive/MyDrive/MSc_Thesis/Exon_Mobil_Data/Data_dir'
epochs=5
def custom_train_part(config, checkpoint_dir=None, data_dir=None):
    model = LSTM(len(True_IMF_df.T), config["Hidden"], config["Layers"], 1)
    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if torch.cuda.device_count() > 1:
            model = nn.DataParallel(model)
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
    criterion = nn.MSELoss()
    if checkpoint_dir:
        model_state, optimizer_state = torch.load(
            os.path.join(checkpoint_dir, "checkpoint"))
        model.load_state_dict(model_state)
        optimizer.load_state_dict(optimizer_state)
    for e in range(epochs):
        running_loss = 0.0
        epoch_steps = 0
        model.train()  # put model to training mode
        x = x_hht_train.to(device)
        y = y_hht_train.to(device)
        scores = model(x)
        loss = criterion(scores, y_hht_train)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        # print(f"Running loss: {running_loss}")
        epoch_steps += 1
        if e % 5 == 0:
            print(f'Epoch: {e}, loss = {loss.cpu().item()}')
            # check_accuracy(loader_val, model)
            print()
        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        with torch.no_grad():
            x = x_hht_val.to(device)  # move to device, e.g. GPU
            y = y_hht_val.to(device)
            scores = model(x)
            scores = scores.cpu()
            y = y.cpu()
            correct += (np.sign(scores) == np.sign(y)).sum().item()
            # print(f"Correct: {correct}")
            loss = criterion(scores, y)
            # print(f"Val Loss: {loss.item()}")
            val_loss += loss.cpu()
            val_steps += 1
        with tune.checkpoint_dir(e) as checkpoint_dir:
            path = os.path.join(checkpoint_dir, "checkpoint")
            torch.save((model.state_dict(), optimizer.state_dict()), path)
        tune.report(loss=(val_loss / val_steps), accuracy=correct / len(y))
    print("Finished Training")
config = {
    "lr": tune.loguniform(1e-6, 1e-1),
    "Layers": tune.sample_from(lambda _: np.random.randint(1, 10)),
    "Hidden": tune.sample_from(lambda _: np.random.randint(2, 100))
}
scheduler = ASHAScheduler(
    metric="loss",
    mode="min",
    max_t=10,
    grace_period=1,
    reduction_factor=2)
reporter = CLIReporter(
    # parameter_columns=["lr", "Layers", "Hidden"],
    metric_columns=["loss", "accuracy", "training_iteration"]
)
result = tune.run(
    partial(custom_train_part, data_dir=data_dir),
    resources_per_trial={"cpu": 4, "gpu": 1},
    config=config,
    num_samples=1,
    scheduler=scheduler,
    progress_reporter=reporter)
print(f"Results: {result}")
#print(f"Best Config: {result.get_best_config(metric = "loss", mode = "min")}")
df = result.results_df
df
best_trial = result.get_best_trial(metric = "loss", mode="min")
print("Best trial config: {}".format(best_trial.config))
print("Best trial final validation loss: {}".format(
best_trial.last_result["loss"]))
print("Best trial final validation accuracy: {}".format(
best_trial.last_result["accuracy"]))
##############################################################
# AFTER TUNNING #
##############################################################
best_trained_model = LSTM(len(True_IMF_df.T), best_trial.config["Hidden"], best_trial.config["Layers"], 1)
best_checkpoint_dir = best_trial.checkpoint.value
model_state, optimizer_state = torch.load(os.path.join(
best_checkpoint_dir, "checkpoint"))
best_trained_model.load_state_dict(model_state)
|
st32396
|
I’m not familiar with Ray Tune, but it seems that result.get_best_trial doesn’t return anything, so best_trial is a None object and the following operation fails.
Based on the docs it seems that the return value is optional, and the source also shows that best_trial might be None and will raise a warning:
if not best_trial:
    logger.warning(
        "Could not find best trial. Did you pass the correct `metric` "
        "parameter?")
return best_trial
|
st32397
|
Thanks for your response. Do you know why it is returning a None object? I used the same code roughly 6 months ago and it worked perfectly.
I thought it may be an issue that the state_dicts are not being saved to the file path, so there is nothing to ‘get’ when calling get_best_trial.
|
st32398
|
Can you recommend any other hyperparameter optimisation libraries with good documentation that are easy to use with PyTorch?
|
st32399
|
I’m not sure how the state_dict interacts with this method, but you could check the linked source code to see which conditions are not met and why the return value is None.
No, unfortunately I cannot recommend a specific hyperparameter optimization library, so let’s wait for others to chime in.
|
st32400
|
I’ve trained a Transformer to perform translation. The final loss before saving the state dict was ~20; on reloading and running inference it is ~82 (close to the starting loss, which was 103).
I’ve been banging my head against the wall for days on this one. Does anyone have any idea on where to start with finding the issue?
Setup details:
torch version: 1.7.1
device: 2x GPU
Below I put my code snippets:
Saving the model:
torch.save(model.state_dict(), best_path)
Loading the model:
model = Model(config, vocab)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.load_state_dict(torch.load(best_path))
model.to(config.device)
model.eval()
I set the random seed at the start of each session like this:
def set_seed(seed):
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    np.random.seed(seed)
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)

set_seed(config.seed)
I use the same pre-tokenised (SPM) data each time. I load it in the following way:
def dataset_fn(df, root, fields, seed=1234, dev_size=1000):
    train, dev, test = np.split(df.sample(frac=1, random_state=seed), [len(df) - 2 * dev_size, len(df) - dev_size])
    train.to_csv(os.path.join(root, "train.csv"), index=False)
    dev.to_csv(os.path.join(root, "dev.csv"), index=False)
    test.to_csv(os.path.join(root, "test.csv"), index=False)
    train, dev, test = TabularDataset.splits(
        path=root,
        train='train.csv',
        validation='dev.csv',
        test='test.csv',
        format='csv',
        fields=fields)
    return train, dev, test
device = config.device
print("Device:{}".format(device))
root = config.data_path
seed = config.seed
data_paths = {'pre_src': os.path.join(root, config.pre_src_path),
              'pre_trg': os.path.join(root, config.pre_trg_path)}
dfs = {}  # collect the loaded dataframes
for key, value in data_paths.items():
    df = pd.read_csv(value, sep='delimiter', index_col=None, header=None, skip_blank_lines=False)
    dfs[key] = df
pre_df = make_df(dfs['pre_src'], dfs['pre_trg'])
print("Data loaded.\n")
TEXT = Field(tokenize=tokenize,
             init_token='<sos>',
             eos_token='<eos>',
             lower=False,
             batch_first=True)
print("Field defined")
data_fields = [('src', TEXT), ('trg', TEXT)]
train_pre_set, dev_pre_set, test_pre_set = dataset_fn(pre_df, root, data_fields, seed=seed, dev_size=1000)
# vocab is read using pickle from a .pkl file, previously generated using .build_vocab on this data and saved to .pkl
vocab = read_vocab(config.vocab_file)
TEXT.vocab = vocab
dataiter_fn = lambda dataset, train: BucketIterator(
    dataset=dataset,
    batch_size=config.batch_size,
    shuffle=train,
    repeat=train,
    sort_key=lambda x: len(x.trg),
    sort_within_batch=False,
    device=device
)
# Create iterators
train_pre_iter = dataiter_fn(train_pre_set, True)
dev_pre_iter = dataiter_fn(dev_pre_set, False)
test_pre_iter = dataiter_fn(test_pre_set, False)
# This is wrapped in a function which returns iters & vocab
#return train_pre_iter, dev_pre_iter, test_pre_iter, vocab
Let me know if you need any more info!
|
st32401
|
Solved by st-vincent1 in post #3
|
st32402
|
These issues are often caused by a difference in the model usage or the data loading.
To isolate the root cause further, you could store the outputs of the trained model using a static input (e.g. torch.ones) after calling model.eval() and before saving the state_dict().
Afterwards, load the model in your inference script and compare the reference outputs to a new run using the same static inputs. If these outputs differ, the difference would come from the model itself and you could debug further what might be causing the difference (missing keys in the state_dict etc.).
On the other hand, if the outputs are equal (up to floating point precision), you could compare the data loading pipelines in your training and inference scripts and check, if the same preprocessing was applied (e.g. normalization etc.).
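A rough sketch of that check (the shapes, dict keys, and dummy input are hypothetical, not from the original thread):
# in the training script, before saving:
model.eval()
ref_input = torch.ones(2, 10, dtype=torch.long)  # any fixed dummy batch
with torch.no_grad():
    ref_output = model(ref_input)
torch.save({"state_dict": model.state_dict(), "ref_output": ref_output}, best_path)

# in the inference script, after loading:
ckpt = torch.load(best_path)
model.load_state_dict(ckpt["state_dict"])
model.eval()
with torch.no_grad():
    new_output = model(torch.ones(2, 10, dtype=torch.long))
print(torch.allclose(new_output, ckpt["ref_output"]))  # True if the model itself matches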
|
st32403
|
Thanks @ptrblck . I was about to do this after reading this suggestion of yours somewhere else, when miraculously it worked now (touch wood).
The only thing I changed was the way I save and load the model like this (I’m using DataParallel):
model = MyModel(config, vocab)
model.to(config.device)
print(config.device)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.module.load_state_dict(torch.load(load_path))
model.eval()
And save:
torch.save(model.module.state_dict(), best_path)
In other words, I’m saving the state dict directly rather than it wrapped in the module.
Does it make any sense that this could be causing the issue?
|
st32404
|
It could explain the issue, but you should get a warning while loading e.g. the module.state_dict() into the nn.DataParallel model.
The reason for this is that the nn.DataParallel(model).state_dict() will add the .module names to each parameter, which will then create a mismatch in the model.load_state_dict() operation (the same applies for the reversed workflow).
However, if no errors were raised, I’m unsure what might have been the root cause of the issue.
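A tiny example of the key mismatch being described (my own illustration, not from the thread):
import torch.nn as nn

m = nn.Linear(2, 2)
dp = nn.DataParallel(m)
print(list(m.state_dict().keys()))   # ['weight', 'bias']
print(list(dp.state_dict().keys()))  # ['module.weight', 'module.bias']
# Loading one of these state_dicts into the other kind of model produces
# missing/unexpected key messages in load_state_dict.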
|
st32405
|
Hi @st-vincent1, I’m having a similar issue with the loaded model. Were you able to find what’s causing the problem?
|
st32406
|
It shouldn’t raise a warning, because I saved model.state_dict() while the model was wrapped in nn.DataParallel(), and then loaded it by first wrapping an initialised model in DataParallel and then loading the state dict into that. But that didn’t work - hence my original post. Once I changed to saving the state dict explicitly (model.module.state_dict(), i.e. not wrapped in the module) and loading it into model.module, it all worked.
@pattiJane , see above how I worked it out.
|
st32407
|
I am going through my dataset using the data loader and I get the following error:
ERROR: Unexpected segmentation fault encountered in worker.
Traceback (most recent call last):
File “/home/kvougiou/miniconda3/envs/dev/lib/python3.8/site-packages/torch/utils/data/dataloader.py”, line 986, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File “/home/kvougiou/miniconda3/envs/dev/lib/python3.8/multiprocessing/queues.py”, line 107, in get
if not self._poll(timeout):
File “/home/kvougiou/miniconda3/envs/dev/lib/python3.8/multiprocessing/connection.py”, line 257, in poll
return self._poll(timeout)
File “/home/kvougiou/miniconda3/envs/dev/lib/python3.8/multiprocessing/connection.py”, line 424, in _poll
r = wait([self], timeout)
File “/home/kvougiou/miniconda3/envs/dev/lib/python3.8/multiprocessing/connection.py”, line 931, in wait
ready = selector.select(timeout)
File “/home/kvougiou/miniconda3/envs/dev/lib/python3.8/selectors.py”, line 415, in select
fd_event_list = self._selector.poll(timeout)
File “/home/kvougiou/miniconda3/envs/dev/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py”, line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 89199) is killed by signal: Segmentation fault.
The dataset that I am using is massive (1.6M videos), but I can never seem to complete even a single pass over my data before I see this crash. In the dataset, I am reading videos (using decord), reading audio (using torchaudio, which shouldn’t be the problem since I have used it before on similar data and it works), and loading numpy arrays.
Specs of my machine:
128 GB Ram
CPU: AMD Ryzen Threadripper 1950X 16 Core CPU
The versions of the libraries I am using are:
torch 1.8.1
torchaudio 0.8.0a0+e4e171a
torchmetrics 0.3.2
torchvision 0.9.1
My shared memory is:
kernel.shm_rmid_forced = 0
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
Other info:
I am using 10 workers in the dataloader
I have now looked at similar issues and I have no idea why this happens. It has happened even when setting workers=0 (this was before some changes I tried, but I expect it to happen again). Also, when I run the exact same code on a server with 256 GB of RAM and way more cores it works, so this is specific to this one machine. Does anyone have any idea how I can debug this further, because I am now stumped?
Update:
I noticed that if I increase the number of workers, the issue happens sooner. If I increase it to 32 (same as the number of cores) it sometimes happens instantly. Then my machine goes into a state where the code keeps segfaulting instantly until I reduce the number of workers.
|
st32408
|
I have a single model (Net) that contains two separate ResNet models (without FC layers; the FC comes after the concatenation).
The input of net1 is images with resolution (51,50,4) and the input of net2 is images with resolution (15,15,40).
import torch
import torch.nn as nn
class block(nn.Module):
    def __init__(
        self, in_channels, intermediate_channels, identity_downsample=None, stride=1
    ):
        super(block, self).__init__()
        self.expansion = 4
        self.conv1 = nn.Conv2d(
            in_channels, intermediate_channels, kernel_size=1, stride=1, padding=0, bias=False
        )
        self.bn1 = nn.BatchNorm2d(intermediate_channels)
        self.conv2 = nn.Conv2d(
            intermediate_channels,
            intermediate_channels,
            kernel_size=3,
            stride=stride,
            padding=1,
            bias=False
        )
        self.bn2 = nn.BatchNorm2d(intermediate_channels)
        self.conv3 = nn.Conv2d(
            intermediate_channels,
            intermediate_channels * self.expansion,
            kernel_size=1,
            stride=1,
            padding=0,
            bias=False
        )
        self.bn3 = nn.BatchNorm2d(intermediate_channels * self.expansion)
        self.relu = nn.ReLU()
        self.identity_downsample = identity_downsample
        self.stride = stride

    def forward(self, x):
        identity = x.clone()
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.conv2(x)
        x = self.bn2(x)
        x = self.relu(x)
        x = self.conv3(x)
        x = self.bn3(x)
        if self.identity_downsample is not None:
            identity = self.identity_downsample(identity)
        x += identity
        x = self.relu(x)
        return x

class ResNet(nn.Module):
    def __init__(self, block, layers, image_channels, num_classes):
        super(ResNet, self).__init__()
        self.expansion = 4
        self.in_channels = 64
        self.conv1 = nn.Conv2d(image_channels, 64, kernel_size=1, stride=1, padding=0, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU()
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # Essentially the entire ResNet architecture is in these lines below
        self.layer1 = self._make_layer(
            block, layers[0], intermediate_channels=64, stride=1
        )
        self.layer2 = self._make_layer(
            block, layers[1], intermediate_channels=128, stride=2
        )
        self.layer3 = self._make_layer(
            block, layers[2], intermediate_channels=128, stride=2
        )
        self.layer4 = self._make_layer(
            block, layers[3], intermediate_channels=128, stride=2
        )
        self.layer5 = self._make_layer(
            block, layers[4], intermediate_channels=256, stride=2
        )
        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = self.avgpool(x)
        x = x.reshape(x.shape[0], -1)
        return x

    def _make_layer(self, block, num_residual_blocks, intermediate_channels, stride):
        identity_downsample = None
        layers = []
        # Either if we halve the spatial size, e.g. 56x56 -> 28x28 (stride=2), or the channels change,
        # we need to adapt the identity (skip connection) so it can be added
        # to the layer that's ahead
        if stride != 1 or self.in_channels != intermediate_channels * 4:
            identity_downsample = nn.Sequential(
                nn.Conv2d(
                    self.in_channels,
                    intermediate_channels * 4,
                    kernel_size=1,
                    stride=stride,
                    bias=False
                ),
                nn.BatchNorm2d(intermediate_channels * 4),
            )
        layers.append(
            block(self.in_channels, intermediate_channels, identity_downsample, stride)
        )
        # The expansion size is always 4 for ResNet 50, 101, 152
        self.in_channels = intermediate_channels * 4
        # For example, for the first ResNet layer: 256 will be mapped to 64 as the intermediate
        # channels, then finally back to 256. Hence no identity downsample is needed, since stride = 1
        # and the number of channels is the same.
        for i in range(num_residual_blocks - 1):
            layers.append(block(self.in_channels, intermediate_channels))
        return nn.Sequential(*layers)

def ResNet50(img_channel=4, num_classes=2):
    return ResNet(block, [3, 4, 6, 3, 3], img_channel, num_classes)
import torch.optim as optim
net1 = ResNet50(img_channel=4, num_classes=2)
When I run this model the accuracy is 0.114%.
Can anyone help me find the problem in my code? Thank you.
|
st32409
|
I am trying to create a CAM for my binary classification model. My model has a CNN (ResNet50) + bidirectional LSTM structure for which the last layers look like this:
[screenshot of the model’s final layers, 711×379]
I know that for CAM usually we grab the last Conv layer and then use a softmax. But which layer do I have to use considering my model structure?
|
st32410
|
Hello!
I’d like to train a very basic mixture of 2 Gaussians to segment the background in a 2D image.
However, I think I’m confused about how to use torch.distributions.
This is what I’m doing:
first I prepare my 2D numpy array by doing:
x = torch.from_numpy(image.reshape((image.size, 1)))
then I define a Module as below:
class GaussianMixtureModel(torch.nn.Module):
    def __init__(self, n_components: int = 2):
        super().__init__()
        weights = torch.ones(n_components, )
        means = torch.randn(n_components, )
        stdevs = torch.tensor(np.abs(np.random.randn(n_components, )))
        self.weights = torch.nn.Parameter(weights)
        self.means = torch.nn.Parameter(means)
        self.stdevs = torch.nn.Parameter(stdevs)

    def forward(self, x):
        mix = D.Categorical(self.weights)
        comp = D.Normal(self.means, self.stdevs)
        gmm = D.MixtureSameFamily(mix, comp)
        return -gmm.log_prob(x).mean()
then I define a basic training procedure:
model = GaussianMixtureModel(n_components=2)
optim = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
n_iter = 1_000
for _ in range(n_iter):
    loss = model(x)
    loss.backward()
    optim.step()
    optim.zero_grad()
But I get either:
The parameter probs has invalid values (error caught when defining mix in the forward method), or
The parameter scale has invalid values (error caught when defining comp in the forward method).
So I guess I’m doing something wrong with my distribution definitions.
Could you please help me?
Thanks a lot!
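One common cause of these two errors (my own note, not from the thread) is that plain SGD can push the raw weights and stdevs to non-positive values, which Categorical and Normal then reject. A sketch of a constrained parameterization that avoids this:
import torch
import torch.distributions as D

class ConstrainedGMM(torch.nn.Module):
    def __init__(self, n_components: int = 2):
        super().__init__()
        # unconstrained parameters; constraints are applied in forward()
        self.logits = torch.nn.Parameter(torch.zeros(n_components))
        self.means = torch.nn.Parameter(torch.randn(n_components))
        self.log_stdevs = torch.nn.Parameter(torch.zeros(n_components))

    def forward(self, x):
        mix = D.Categorical(logits=self.logits)              # always a valid simplex
        comp = D.Normal(self.means, self.log_stdevs.exp())   # scale is always positive
        gmm = D.MixtureSameFamily(mix, comp)
        return -gmm.log_prob(x).mean()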
|
st32411
|
Hi,
I wondered if anyone could help me with hyperparameter tuning an LSTM? I have elected to go with Ray Tune, as I used it previously with CNNs for a piece of coursework, but I seem to constantly run into errors that I don’t know how to solve when using it to tune an LSTM.
I am not set on Ray Tune - if someone knows an easier option please let me know! I have yet to see a tutorial online that does not use a CNN, which is not helpful!
I would greatly appreciate some help on this as it is for my masters project!
Here is my code:
checkpoint_dir = '/content/Checkpoint'
epochs=5
def custom_train_part(config, checkpoint_dir=checkpoint_dir, data_dir=None):
    model = LSTM(len(True_IMF_df.T), config["Hidden"], config["Layers"], 1)
    optimizer = torch.optim.Adam(hht_model.parameters(), lr=config["lr"])
    model = model.to(device)  # move the model parameters to CPU/GPU
    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if torch.cuda.device_count() > 1:
            net = nn.DataParallel(model)
    model.to(device)
    if checkpoint_dir:
        model_state, optimizer_state = torch.load(
            os.path.join(checkpoint_dir, "checkpoint"))
        model.load_state_dict(model_state)
        optimizer.load_state_dict(optimizer_state)
    for e in range(epochs):
        running_loss = 0.0
        epoch_steps = 0
        model.train()  # put model to training mode
        x = x_hht_train.to(device)  # move to device, e.g. GPU
        y = y_hht_train.to(device)
        print(f"The shape of the data is: {x.shape}")
        scores = model(x)
        loss = F.mse_loss(scores, y_hht_train)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        epoch_steps += 1
        if e % 5 == 0:
            print('Epoch: %d, Iteration %d, loss = %.4f' % (e, t, loss.item()))
            # check_accuracy(loader_val, model)
            print()
        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        with torch.no_grad():
            x = x_hht_val.to(device)  # move to device, e.g. GPU
            y = y_hht_val.to(device)
            scores = model(x)
            correct += (np.sign(scores) == np.sign(y)).sum().item()
            loss = F.mse_loss(scores, y)
            val_loss += loss.cpu().numpy()
            val_steps += 1
        with tune.checkpoint_dir(e) as checkpoint_dir:
            path = os.path.join(checkpoint_dir, "checkpoint")
            torch.save((model.state_dict(), optimizer.state_dict()), path)
        tune.report(loss=(val_loss / val_steps), accuracy=correct / len(y_hht_val))
    print("Finished Training")
config = {
    "lr": tune.loguniform(1e-6, 1e-1),
    "Layers": tune.sample_from(lambda _: np.random.randint(1, 20)),
    "Hidden": tune.sample_from(lambda _: np.random.randint(2, 200))
}
reporter = CLIReporter(
    parameter_columns=["lr", "Layers", "Hidden"],
    metric_columns=["loss", "accuracy", "training_iteration"]
)
result = tune.run(
    partial(custom_train_part, checkpoint_dir=checkpoint_dir, data_dir=None),
    resources_per_trial={"cpu": 2, "gpu": 1},
    config=config,
    num_samples=1,
    progress_reporter=reporter
)
best_trial = result.get_best_trial("loss", "min", "last")
print("Best trial config: {}".format(best_trial.config))
print("Best trial final validation loss: {}".format(best_trial.last_result["loss"]))
print("Best trial final validation accuracy: {}".format(best_trial.last_result["accuracy"]))
##############################################################
#                       AFTER TUNING                         #
##############################################################
best_trained_model = hht_model(len(True_IMF_df.T), best_trial.config["Hidden"], best_trial.config["Layers"], 1)
best_checkpoint_dir = best_trial.checkpoint.value
model_state, optimizer_state = torch.load(os.path.join(
    best_checkpoint_dir, "checkpoint"))
best_trained_model.load_state_dict(model_state)
The error message:
2021-05-19 12:06:48,454 WARNING experiment.py:294 -- No name detected on trainable. Using DEFAULT.
2021-05-19 12:06:48,455 INFO registry.py:65 -- Detected unknown callable for trainable. Converting to class.
== Status ==
Memory usage on this node: 5.4/25.5 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 8.000: None | Iter 4.000: None | Iter 2.000: None | Iter 1.000: None
Resources requested: 0/4 CPUs, 0/1 GPUs, 0.0/14.99 GiB heap, 0.0/7.5 GiB objects (0.0/1.0 accelerator_type:V100)
Result logdir: /root/ray_results/DEFAULT_2021-05-19_12-06-48
Number of trials: 1/1 (1 PENDING)
+---------------------+----------+-------+----------+----------+-------------+
| Trial name | status | loc | Hidden | Layers | lr |
|---------------------+----------+-------+----------+----------+-------------|
| DEFAULT_b0772_00000 | PENDING | | 75 | 1 | 4.51358e-05 |
+---------------------+----------+-------+----------+----------+-------------+
2021-05-19 12:06:50,877 WARNING worker.py:1115 -- Warning: The actor ImplicitFunc has size 70352637 when pickled. It will be stored in Redis, which could cause memory issues. This may mean that its definition uses a large array or other object.
2021-05-19 12:06:50,950 WARNING util.py:162 -- The `start_trial` operation took 0.934 s, which may be a performance bottleneck.
(pid=1741) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/rnn.py:63: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
(pid=1741) "num_layers={}".format(dropout, num_layers))
(pid=1741) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([4208, 1])) that is different to the input size (torch.Size([4208, 75])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
(pid=1741) return F.mse_loss(input, target, reduction=self.reduction)
(pid=1741) 2021-05-19 12:06:55,607 ERROR function_runner.py:254 -- Runner Thread raised error.
(pid=1741) Traceback (most recent call last):
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 248, in run
(pid=1741) self._entrypoint()
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 316, in entrypoint
(pid=1741) self._status_reporter.get_checkpoint())
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 580, in _trainable_func
(pid=1741) output = fn()
(pid=1741) File "<ipython-input-198-4b7222b1d1ac>", line 58, in custom_train_part
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/torch/tensor.py", line 621, in __array__
(pid=1741) return self.numpy()
(pid=1741) TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
(pid=1741) Exception in thread Thread-2:
(pid=1741) Traceback (most recent call last):
(pid=1741) File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
(pid=1741) self.run()
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 267, in run
(pid=1741) raise e
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 248, in run
(pid=1741) self._entrypoint()
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 316, in entrypoint
(pid=1741) self._status_reporter.get_checkpoint())
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 580, in _trainable_func
(pid=1741) output = fn()
(pid=1741) File "<ipython-input-198-4b7222b1d1ac>", line 58, in custom_train_part
(pid=1741) File "/usr/local/lib/python3.7/dist-packages/torch/tensor.py", line 621, in __array__
(pid=1741) return self.numpy()
(pid=1741) TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
(pid=1741)
2021-05-19 12:06:55,747 ERROR trial_runner.py:732 -- Trial DEFAULT_b0772_00000: Error processing event.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trial_runner.py", line 702, in _process_trial
results = self.trial_executor.fetch_result(trial)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/ray_trial_executor.py", line 686, in fetch_result
result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 47, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1481, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=1741, ip=172.28.0.2)
File "python/ray/_raylet.pyx", line 505, in ray._raylet.execute_task
File "python/ray/_raylet.pyx", line 449, in ray._raylet.execute_task.function_executor
File "/usr/local/lib/python3.7/dist-packages/ray/_private/function_manager.py", line 556, in actor_method_executor
return method(__ray_actor, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 173, in train_buffered
result = self.train()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 232, in train
result = self.step()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 366, in step
self._report_thread_runner_error(block=True)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 513, in _report_thread_runner_error
("Trial raised an exception. Traceback:\n{}".format(err_tb_str)
ray.tune.error.TuneError: Trial raised an exception. Traceback:
ray::ImplicitFunc.train_buffered() (pid=1741, ip=172.28.0.2)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 248, in run
self._entrypoint()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 316, in entrypoint
self._status_reporter.get_checkpoint())
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 580, in _trainable_func
output = fn()
File "<ipython-input-198-4b7222b1d1ac>", line 58, in custom_train_part
File "/usr/local/lib/python3.7/dist-packages/torch/tensor.py", line 621, in __array__
return self.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
(pid=1741) Running loss: 17.041641235351562
(pid=1741) Epoch: 0, loss = 17.041641235351562.4f
(pid=1741)
Result for DEFAULT_b0772_00000:
{}
== Status ==
Memory usage on this node: 7.1/25.5 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 8.000: None | Iter 4.000: None | Iter 2.000: None | Iter 1.000: None
Resources requested: 0/4 CPUs, 0/1 GPUs, 0.0/14.99 GiB heap, 0.0/7.5 GiB objects (0.0/1.0 accelerator_type:V100)
Result logdir: /root/ray_results/DEFAULT_2021-05-19_12-06-48
Number of trials: 1/1 (1 ERROR)
+---------------------+----------+-------+----------+----------+-------------+
| Trial name | status | loc | Hidden | Layers | lr |
|---------------------+----------+-------+----------+----------+-------------|
| DEFAULT_b0772_00000 | ERROR | | 75 | 1 | 4.51358e-05 |
+---------------------+----------+-------+----------+----------+-------------+
Number of errored trials: 1
+---------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|---------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------|
| DEFAULT_b0772_00000 | 1 | /root/ray_results/DEFAULT_2021-05-19_12-06-48/DEFAULT_b0772_00000_0_Hidden=75,Layers=1,lr=4.5136e-05_2021-05-19_12-06-49/error.txt |
+---------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------+
== Status ==
Memory usage on this node: 7.1/25.5 GiB
Using AsyncHyperBand: num_stopped=0
Bracket: Iter 8.000: None | Iter 4.000: None | Iter 2.000: None | Iter 1.000: None
Resources requested: 0/4 CPUs, 0/1 GPUs, 0.0/14.99 GiB heap, 0.0/7.5 GiB objects (0.0/1.0 accelerator_type:V100)
Result logdir: /root/ray_results/DEFAULT_2021-05-19_12-06-48
Number of trials: 1/1 (1 ERROR)
+---------------------+----------+-------+----------+----------+-------------+
| Trial name | status | loc | Hidden | Layers | lr |
|---------------------+----------+-------+----------+----------+-------------|
| DEFAULT_b0772_00000 | ERROR | | 75 | 1 | 4.51358e-05 |
+---------------------+----------+-------+----------+----------+-------------+
Number of errored trials: 1
+---------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|---------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------|
| DEFAULT_b0772_00000 | 1 | /root/ray_results/DEFAULT_2021-05-19_12-06-48/DEFAULT_b0772_00000_0_Hidden=75,Layers=1,lr=4.5136e-05_2021-05-19_12-06-49/error.txt |
+---------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------+
---------------------------------------------------------------------------
TuneError Traceback (most recent call last)
<ipython-input-198-4b7222b1d1ac> in <module>()
95 num_samples=1,
96 scheduler = scheduler,
---> 97 progress_reporter=reporter
98 )
99
/usr/local/lib/python3.7/dist-packages/ray/tune/tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, loggers, ray_auto_init, run_errored_only, global_checkpoint_period, with_server, upload_dir, sync_to_cloud, sync_to_driver, sync_on_checkpoint, _remote)
541 if incomplete_trials:
542 if raise_on_failed_trial and not state[signal.SIGINT]:
--> 543 raise TuneError("Trials did not complete", incomplete_trials)
544 else:
545 logger.error("Trials did not complete: %s", incomplete_trials)
TuneError: ('Trials did not complete', [DEFAULT_b0772_00000])
|
st32412
|
It seems the code is failing with:
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
while trying to call .numpy() on a tensor, which is still on the GPU, so you might need to move it to the CPU first.
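For instance, the failing line from the posted loop could be rewritten along these lines (a sketch of the kind of change meant here):
correct += (torch.sign(scores) == torch.sign(y)).sum().item()  # stays on the GPU
# or move the tensors to the CPU before handing them to numpy:
correct += (np.sign(scores.cpu().numpy()) == np.sign(y.cpu().numpy())).sum()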
You should also consider fixing this warning:
UserWarning: Using a target size (torch.Size([4208, 1])) that is different to the input size (torch.Size([4208, 75])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
|
st32413
|
Hello,
Thanks for responding. I managed to fix the first error, although the second remains, because I am using an LSTM where the input is the 75 previous values and the prediction is the next point in the series.
|
st32414
|
I am trying to code a test case for multi-task learning, using my own loss function. The idea is that the output layer is 3-dimensional: the first output is used for 1D regression, and the last two are used for 2-class classification. So my combined loss function is a weighted sum of L1 loss and CELoss.
However, PyTorch is complaining about my datatypes.
class my_loss:
    def __init__(self, weights):
        self.weights = weights
        # age loss:
        self.L1 = nn.SmoothL1Loss(reduction='mean', beta=0.05)
        # gender loss:
        self.CE = nn.CrossEntropyLoss(reduction='mean')

    def __call__(self, output, target):
        loss = self.weights[0]*self.L1(output[:, 0], target[:, 0]) + self.weights[1]*self.CE(output[:, 1:3], target[:, 1])
        return loss

# testing:
it = iter(dataloaders_dict['train'])
X, y = it.next()
criterion = my_loss(weights=(1, 1))
out = model_ft(X)
loss = criterion(out, y)
print(y.dtype)
print(X.dtype)
print(out.dtype)
print(loss.dtype)
loss.backward()
This returns:
torch.int64
torch.float32
torch.float32
torch.float32
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-88-fc96d4e3072b> in <module>()
31 print(loss.dtype)
32
---> 33 loss.backward()
1 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
145 Variable._execution_engine.run_backward(
146 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 147 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
148
149
RuntimeError: Found dtype Long but expected Float
If only my targets are Long, how come loss.backward() is finding Longs, and how do I fix it?
|
st32415
|
Solved by ptrblck in post #2
|
st32416
|
You are using a target tensor as a LongTensor, which is fine for nn.CrossEntropyLoss, but will fail for nn.SmoothL1Loss, so you would have to transform it into a FloatTensor:
loss = self.weights[0]*self.L1(output[:,0],target[:,0].float())+self.weights[1]*self.CE(output[:,1:3],target[:,1])
|
st32417
|
Thanks, yes, that solves it. But do you have any idea why the error does not arise in the forward pass of nn.SmoothL1Loss?
|
st32418
|
The error is raised in nn.SmoothL1Loss, but unfortunately in the backward pass, and I’m unsure if accepting LongTensors in the forward is a bug or a feature, so feel free to create an issue on GitHub, so that it can be tracked and fixed, if needed.
|
st32419
|
I’m training a CNN using transfer learning, resnet34 to be precise, and the accuracies on the training and validation sets are not improving. They are stuck at 72% and 18% respectively. I trained the model for 25 epochs and it didn’t show any signs of improvement at all. But when I tested the model, the accuracy was nowhere near the training and validation sets: it was 86%. Can anyone explain why this is happening?
|
st32420
|
I was watching some very good videos by Aladdin Persson on YouTube, and he shows a simple sequence-to-sequence model for machine translation + teacher forcing. Technically I adapted this model for time-series analysis, but the example is fine. The original code is below. The key issue is that, due to teacher forcing, the forward() method of the Seq2Seq module takes both the input sentence and the label, meaning the correct answer.
My question is: in the case of actual inference on the model, I won’t have a label. During inference I will only have the input sentence. So when trying to run the model, the model function will expect model(input, label), and we won’t have any label to provide. What is the way to deal with that?
Here is the code.
class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder):
        super(Seq2Seq, self).__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, source, target, teacher_force_ratio=0.5):
        batch_size = source.shape[1]
        target_len = target.shape[0]
        target_vocab_size = len(english.vocab)
        outputs = torch.zeros(target_len, batch_size, target_vocab_size).to(device)
        hidden, cell = self.encoder(source)
        # Grab the first input to the Decoder which will be the <SOS> token
        x = target[0]
        for t in range(1, target_len):
            # Use previous hidden, cell as context from encoder at start
            output, hidden, cell = self.decoder(x, hidden, cell)
            # Store next output prediction
            outputs[t] = output
            # Get the best word the Decoder predicted (index in the vocabulary)
            best_guess = output.argmax(1)
            # With probability of teacher_force_ratio we take the actual next word,
            # otherwise we take the word that the Decoder predicted it to be.
            # Teacher Forcing is used so that the model gets used to seeing
            # similar inputs at training and testing time; if teacher forcing is 1
            # then inputs at test time might be completely different than what the
            # network is used to.
            x = target[t] if random.random() < teacher_force_ratio else best_guess
        return outputs
As you can see, the forward() function takes a source and a target, where the source is the input sentence and the target is the actual translated sentence. I have to use the model as below.
model = Seq2Seq(encoder_net, decoder_net).to(device)
prediction = model(data, label)
Can anyone explain how to do inference on a Sequence-to-Sequence model, or if there is a better way to train or write these models to deal with teacher forcing, etc. Thanks.
|
st32421
|
During inference, you use the encoder as normal. For the decoder, you pass the input (the encoder output), and then the output of the decoder at each step is used as the input for the next timestep (and repeat).
So in your Seq2Seq model, imagine an arrow going from the decoder output at step t to the decoder input at step t+1.
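A rough sketch of that loop using the encoder/decoder from the posted Seq2Seq class (sos_idx, eos_idx and max_len are hypothetical names, not from the original code):
import torch

def greedy_decode(model, source, sos_idx, eos_idx, max_len=50):
    model.eval()
    with torch.no_grad():
        hidden, cell = model.encoder(source)
        x = torch.tensor([sos_idx], device=source.device)  # start with <SOS>
        tokens = []
        for _ in range(max_len):
            output, hidden, cell = model.decoder(x, hidden, cell)
            best_guess = output.argmax(1)   # feed the prediction back in
            if best_guess.item() == eos_idx:
                break
            tokens.append(best_guess.item())
            x = best_guess
    return tokens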
|
st32422
|
Adding to what @Kushaj says, special symbols are usually added to the sequences when forming them, in most cases: <BOS> target_sequence <EOS>
Now in production, once we have the encoder output for the input sequence whose output we want to predict (encoder_output), the first thing we give to the decoder is: source = encoder_output and target = <BOS>.
With that it will predict the first token (token_1) of the output sequence, and the next thing we give to the decoder is: source = encoder_output and target = <BOS> token_1.
It will output a second token, and so on until it outputs <EOS>, at which point the decoding process stops: the sequence between <BOS> and <EOS> will be our prediction.
In practice, to predict each token we generally don’t take only the one token that maximizes the probability at the output of the model, but the k (top_k) tokens that maximize it: several possible outputs are thus explored before choosing the one with the maximum probability.
The idea of beam search here is that the token that maximizes the probability at the current step does not necessarily lead to the final output sequence with the maximum probability, so several paths are explored in the hope of making the right choice.
|
st32423
|
I think in the future PyTorch will replace TensorFlow. What do you think, friends? And which framework do you think will become more widespread in the future?
|
st32424
|
In my humble opinion, I don’t think this is the right place to discuss this, @David_Smit.
There are discussions elsewhere on the subject, for example on reddit.
But if you want to know whether to use TensorFlow or PyTorch for a particular task, I could try to give my opinion on that.
|
st32425
|
I agree that this discussion board might not be the best place to discuss these vague questions, as you might get biased answers in the best case.
I’m sure a lot of ML practitioners could give you pro and cons for all frameworks and I see it in a pragmatic way: use whichever framework fits into your mindset and use case.
As always, if you are missing something in PyTorch, let us know and we’ll certainly take a look into it.
|
st32426
|
If you are in academia and are getting started, go for Pytorch. It will be easier to learn and use. If you are in the industry where you need to deploy models in production, Tensorflow is your best choice. You can use Keras/Pytorch for prototyping if you want.
|
st32427
|
Hi! I’m training a CNN and this error comes up: “cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.”
I first used a tensor with shape (1,1,96,96,36) and the model worked well, but the data I need to input has the shape (1,1,512,512,336). I know that this error may happen because of a lack of GPU memory, but I checked nvidia-smi and the memory was more than enough.
I wonder whether there are any methods to solve this problem without changing the firmware. Thanks in advance.
|
st32428
|
Could you post an executable code snippet as well as the output of python -m torch.utils.collect_env, please?
|
st32429
|
I was trying to implement several runs for one model to see the confidence interval of my model prediction. However, it becomes very slow. The following are the sample codes.
for i in range(opt.ite + 1):
    model = getattr(models, opt.model)()
    if opt.load_model_path:
        model.load(opt.load_model_path)
    model.to(device)
    model.apply(weight_init)
    # optimizer
    op = Optim(model.parameters(), opt)
    optimizer = op._makeOptimizer()
    previous_err = torch.tensor(10000)
    best_epoch = 0
    for epoch in range(opt.max_epoch + 1):
        # load the data
        for data in zip(datI_loader, datB_loader):
            # train part
            loss.backward()
            optimizer.step()
        if epoch % 100 == 0:
            test_err = val(model, grid, sol)
            if test_err < previous_err:
                previous_err = test_err
                best_epoch = epoch
    test_meter.add(previous_err.to('cpu'))
    epoch_meter.add(best_epoch)
For example, if I take opt.ite to be 2 or 3, it becomes extremely slow and might take 5 or 6 hours. However, if I remove the outer for loop, so the code looks like below,
model = getattr(models, opt.model)()
if opt.load_model_path:
    model.load(opt.load_model_path)
model.to(device)
model.apply(weight_init)
# optimizer
op = Optim(model.parameters(), opt)
optimizer = op._makeOptimizer()
previous_err = torch.tensor(10000)
best_epoch = 0
for epoch in range(opt.max_epoch + 1):
    # load the data
    for data in zip(datI_loader, datB_loader):
        # train part
        loss.backward()
        optimizer.step()
    if epoch % 100 == 0:
        test_err = val(model, grid, sol)
        if test_err < previous_err:
            previous_err = test_err
            best_epoch = epoch
It only takes about 20 minutes to finish.
Any ideas about this problem?
|
st32430
|
I don’t know what val is doing exactly, but it seems you are appending its output to some object.
If you are not running this code (inside val) in a with torch.no_grad() context, it could store the entire computation graph and thus increase memory usage and lead to slowdowns.
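A minimal sketch of the recommended pattern (the body of val below is an assumption; the point is only the torch.no_grad() wrapper):
def val(model, grid, sol):
    model.eval()
    with torch.no_grad():                  # no graph is recorded, so nothing is kept alive between runs
        pred = model(grid)
        err = (pred - sol).abs().mean()    # placeholder error metric
    model.train()
    return err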
|
st32431
|
Hello! I have been trying to understand the dynamics for a while but I cannot make sense of the error below:
RuntimeError: input.size(-1) must be equal to input_size. Expected 1, got 2
My training data contains only one feature column, therefore I pass “n_features” = 1. Labels can only be 0 or 1. Since this is binary classification, I pass “n_classes” = 1.
class ModuleLSTM(nn.Module):
    def __init__(self, n_features, n_classes, n_hidden=256, n_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size = n_features,
            hidden_size = n_hidden,
            num_layers = n_layers,
            batch_first = True,
            dropout = 0.75
        )
        self.classifier = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        self.lstm.flatten_parameters()
        _, (hidden, _) = self.lstm(x)
        return self.classifier(hidden[-1])
When I debug the ‘x’ in the forward method, I see that the shape is torch.Size([64, 5, 2]), where the 64 and 5 correspond to batch size and sequence length, respectively. I don’t understand why ‘x’ has the second dimension full of zeros below:
tensor([[[-4.3775e-01, 0.0000e+00],
[-4.7356e-01, 0.0000e+00],
[-4.9494e-01, 0.0000e+00],
[-5.2778e-01, 0.0000e+00],
[-5.5412e-01, 0.0000e+00]],
...
[[ 2.7826e+00, 0.0000e+00],
[ 2.7535e+00, 0.0000e+00],
[ 2.7076e+00, 0.0000e+00],
[ 2.6636e+00, 0.0000e+00],
[ 2.6562e+00, 0.0000e+00]]]
|
st32432
|
n_features should be the word embedding vector size (i.e. the length of each word represented as a vector).
|
st32433
|
The x tensor in your example is the input, so I assume your input just contains zeros in one dimension?
If I use a randomly initialized input, neither hidden nor out contains all zeros.
|
st32434
|
I’m sorry for the confusion. The second column is the label column, which consists of zeros and ones. In the debugged part (up to the error), all labels happened to be 0, so I thought there was a mistake. I don’t know why, but I am not able to edit my first message.
Concisely, this tensor seems okay but the problem is still the error below:
RuntimeError: input.size(-1) must be equal to input_size. Expected 1, got 2
My training data contains only one feature column, therefore I pass “n_features” = 1. Labels can only be 0 or 1; since this is binary classification, I pass “n_classes” = 1. With this setup, I get the error above.
If and only if I pass “n_features” = 2 and “n_classes” = 2, the model works.
I would be grateful if you help me on this. Thanks @ptrblck !
|
st32435
|
Is this error raised by the model in the forward pass or later by the loss function?
Could you post the input and target shapes as an executable code snippet to reproduce the issue?
|
st32436
|
The error is being raised in the forward pass, directly with the following line:
_, (hidden, _) = self.lstm(x)
I don’t know how to build an executable code snippet to reproduce the issue here, but I trimmed all unnecessary parts in the Colab link below. The error is present there and if you want, you can quickly run it again (I gathered into 2 cells):
Google Colab model.ipynb 2
Thank you for the effort!
|
st32437
|
Hi, I looked at your code.
The error is:
model = GamestagePredictor(n_features = 1, n_classes = 1)
you are giving n_features = 1, and this n_features is passed to an LSTM layer:
self.lstm = nn.LSTM(
    input_size = n_features,
    hidden_size = n_hidden,
    num_layers = n_layers,
    batch_first = True,
    dropout = 0.75
)
The input_size argument of an LSTM layer should be a 3-D tensor not an “integer”
like:
(batch_size, seq_len, dim)
|
st32438
|
Thank you @Tejan_Mehndiratta
According to the documentation 2, “input_size” should be an integer. As I stated above, if I set “n_features” = 2, it works without a problem. However, I think that I should be able to set it to 1, since my training data has only one feature column. Setting it 1 causes the error.
|
st32439
|
Thanks for the code. I’ve removed all unnecessary code, since the forward pass should raise the issue, which works fine on my setup:
class ModuleLSTM(nn.Module):
    def __init__(self, n_features, n_classes, n_hidden=256, n_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size = n_features,
            hidden_size = n_hidden,
            num_layers = n_layers,
            batch_first = True,
            dropout = 0.75
        )
        self.classifier = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        self.lstm.flatten_parameters()
        _, (hidden, _) = self.lstm(x)
        out = hidden[-1]
        return self.classifier(out)
batch_size = 10
seq_len = 50
nb_features = 1
model = ModuleLSTM(nb_features, n_classes=10)
x = torch.randn(batch_size, seq_len, nb_features)
out = model(x)
print(out)
Could you check this minimal code snippet and compare it to the input shapes you are using?
|
st32440
|
Thanks. I will try but why do we need to set “n_classes” to 10? Shouldn’t it be 1 in my case?
|
st32441
|
I just picked a random number, as the number of output features shouldn’t make a difference.
Also, since my small code snippet is running fine, please feel free to add any code snippets to it, which would reproduce your issue.
|
st32442
|
As far as I understood, the cause of the problem is as follows:
batch_size = 64
seq_len = 5
n_features = 1
n_class = 1
model = ModuleLSTM(n_features, n_class)
If I feed the setup above with the ‘x’ below, there is no problem as you’ve said:
x = torch.randn(batch_size, seq_len, n_features)
BUT the problem is that, in the same setup above, my real data comes into ‘x’ with shape (64, 5, 2), not (64, 5, 1). This is because of the way I generate the data, I guess, but I don’t know how to fix it. Let me explain step by step:
1-> I create X_train, y_train, X_val, y_val, X_test, y_test, I scale the X… parts. Nothing unusual. In the rest, I will just mention the training data. I merge the scaled X_train with y_train:
trainData = pd.concat([X_train_scaled, y_train],axis=1)
2-> I create my data sequences with the function below. It fetches the first “seq_len” feature rows, saves them into “sequence”, fetches the “seq_len+1”-th label, and appends this sequence and label together to the “sequences” list. Then it moves on to the next sequence-label pair (until the end).
def create_sequences(input_data, target_name, sequence_length):
    sequences = []
    for i in range(0, len(input_data) - sequence_length, sequence_length):
        sequence = input_data[i : i + sequence_length]
        label = input_data.iloc[i + sequence_length][target_name]
        sequences.append((sequence, label))
    return sequences
trainSequences = create_sequences(trainData, 'gamestageEMA', seq_len)
3-> I pass the sequences to my “DataModule” class which inherits the pl.LightningDataModule. (You can see this class and the rest in my Colab)
dataModule = DataModule(trainSequences, valSequences, testSequences, BATCH_SIZE)
It first passes “trainSequences” to my “DoomFrameDataset” class, which inherits from the torch Dataset, and as a result its “self.trainDataset” is filled. Then it returns the train DataLoader in the following way:
DataLoader(self.trainDataset, batch_size=self.batchSize, shuffle=False, num_workers=cpu_count())
4-> I create the model:
model = GamestagePredictor(n_features = 1, n_classes = 1)
and you know the rest. My GameStagePredictor class fetches the sequence and label pairs and passes them to LSTM:
def training_step(self, batch, batch_idx):
    sequences, labels = batch["sequence"], batch["label"]
    loss, outputs = self(sequences, labels)
    ...
I am sorry for this long message; I just wanted to be more specific. Concisely, the problem is the way I pass my sequence and label pairs to my model. Since I set n_features = 1 and n_classes = 1, it expects an input with shape (64, 5, 1), but receives (64, 5, 2) instead. I don’t know how to deal with this.
|
st32443
|
I’m unfortunately not familiar with Lightning’s DataModule and don’t know if any reshaping is done internally.
Since the shape is apparently wrong for the input, I would recommend to add print statements in all data loading classes and check the current shape of the input to further isolate the additional values.
|
st32444
|
Thank you for your time and effort @ptrblck, I appreciate it.
I realized that the problem was being caused by the sequence creation function and solved the issue there. Now, if I pass n_features=1 and n_classes=2 (I was thinking that I should set this to 1 since this is binary classification, but setting it to 1 raises another error), the model runs without any problem.
The only problem now is that if I run the model on the CPU there is no problem; however, if I try to use the GPU, I get:
RuntimeError: CUDA error: device-side assert triggered
|
st32445
|
Often device assertions are triggered by e.g. invalid indices. You could run the script via CUDA_LAUNCH_BLOCKING=1 python script.py args and check the stacktrace for the failing operation.
Once you know which line of code is raising this error, you can add print statements to debug the issue further.
|
st32446
|
I am training a Sequence to Sequence LSTM model. The problem is that during the training loop, the loss calculation is complaining with the error: “Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!”.
So it seems like computing the “score” from the model generates a tensor on the CPU instead of the GPU. I can fix this by just adjusting the line scores = model(data, targets) to scores = model(data, targets).to(device), but that seems like an unnecessary passing of a tensor back and forth between the GPU and the CPU.
model = Seq2SeqPF(encoder_net, decoder_net).to(device)
load_from_checkpoint = False
if load_from_checkpoint:
    load_checkpoint(torch.load(os.path.join(CHECKPOINT_DIRECTORY, CHECKPOINT_NAME)), model, device)

for epoch in range(EPOCHS):
    print(f"Epoch: {epoch + 1}/{EPOCHS}")
    kbar = pkbar.Kbar(target=batches_per_epoch, width=8)
    if epoch % 5 == 0:
        checkpoint = {'state_dict': model.state_dict(),
                      'optimizer': optimizer.state_dict()}
        save_checkpoint(checkpoint,
                        CHECKPOINT_DIRECTORY,
                        CHECKPOINT_NAME)
    for batch_idx, (data, targets) in enumerate(train_loader):
        data = data.to(device=device)
        targets = targets.to(device=device)
        # forward pass and compute error
        scores = model(data, targets)
        loss = criterion(scores, targets)  # <--- GENERATES ERROR ABOUT CPU AND GPU
        # backward pass and apply gradients.
        optimizer.zero_grad()
        loss.backward()
        # gradient descent step
        optimizer.step()
        kbar.update(batch_idx, values=[("loss", loss)])
The code seems standard. I also did push the model itself to the GPU device, and the encoder_net, and decoder_net are also network layers that are pushed to the GPU before the model code is pushed to the GPU.
Any suggestions on the right way to handle this?
|
st32447
|
@eqy Oh that makes sense. I did not know I needed that. I did not realize that I needed to explicitly indicate where the criterion is computed. Hmm. Thanks for finding that for me. I appreciate it.
|
st32448
|
m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
# input is of size N x C = 1 x 3
input = torch.tensor([[0,0,0]], requires_grad=True, dtype=torch.float)
# each element in target has to have 0 <= value < C
target = torch.tensor([1])
output = loss(m(input), target)
print(output)
Output: tensor(1.0986, grad_fn=<NllLossBackward>)
I read the Documentation 2.3k but it’s not clear. Can someone explain the math behind this example?
|
st32449
|
In your example your output has the same “probability” for all three classes, i.e. the logits have the same value.
Their probability should therefore be approx [0.33, 0.33, 0.33].
Since you are using LogSoftmax we can check, if this is true by calling exp on it (thus getting rid of the log):
print(m(input))
> tensor([[-1.0986, -1.0986, -1.0986]], grad_fn=<LogSoftmaxBackward>)
print(m(input).exp())
> tensor([[0.3333, 0.3333, 0.3333]], grad_fn=<ExpBackward>)
You will get the same values every time you pass the same logits into LogSoftmax.
Now we just have to get the right index using target, multiply with -1, and end up with a loss value of 1.0986.
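That last step, restated as a tiny sketch using the tensors above:
log_probs = m(input)                     # tensor([[-1.0986, -1.0986, -1.0986]])
manual_loss = -log_probs[0, target[0]]   # pick the log-probability at the target index and negate it
print(manual_loss)                       # tensor(1.0986, ...)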
|
st32450
|
loss = nn.NLLLoss()
a = torch.tensor(([0.88, 0.12], [0.51, 0.49]), dtype = torch.float)
target = torch.tensor([1, 0])
output = loss(a, target)
print(output)
Don’t know if it’s right to post this question here, but I’m trying: why is the output of this piece of code tensor(-0.3150)? I was expecting it to be (-1/2) * (1 * ln(0.88) + 0 * ln(0.12) + 1 * ln(0.51) + 0 * ln(0.49)), which would be equal to 0.4005, not -0.3150.
I found the formula for log likelihood here 92
|
st32451
|
nn.NLLLoss expects the inputs to be log probabilities, while you are passing the probabilities into the criterion.
Also, your manual calculation seems to mix up the target indices, as the first sample has class1 as its target and the second one class0.
Here is an example showing the same result:
loss = nn.NLLLoss()
a = torch.tensor(([0.88, 0.12], [0.51, 0.49]), dtype = torch.float)
target = torch.tensor([1, 0])
output = loss(torch.log(a), target)
print(output)
> tensor(1.3968)
print((-torch.log(a[0, 1]) - torch.log(a[1, 0])) / 2)
> tensor(1.3968)
|
st32452
|
Ahh ok, thanks for the answer! What I am actually trying to figure out is how nn.NLLLoss works for multidimensional tensors, but I couldn’t find an example. Could you give me a simple example of how that loss is calculated for a 2D or 3D tensor?
|
st32453
|
Hi Calin!
Calin_Serban:
What I am trying to figure out actually is how the nn.NLLLoss works for multidimensional tensors
Please see (if I understand what you are asking) the description of
the “K-dimensional case” in the documentation for NLLLoss 145.
Here is an illustrative (pytorch 0.3.0) script:
import torch
torch.__version__
torch.manual_seed (2020)
nBatch = 2
nClass = 4
width = 3
height = 5
input = torch.randn (nBatch, nClass, width, height)
target = torch.multinomial (torch.ones (nClass) / nClass, nBatch * width * height, replacement = True).resize_ (nBatch, width, height)
input.shape
target.shape
target.min()
target.max()
input = torch.autograd.Variable (input)
input = torch.nn.functional.log_softmax (input, dim = 1)
target = torch.autograd.Variable (target)
torch.nn.NLLLoss() (input, target)
And here is the output:
>>> import torch
>>> torch.__version__
'0.3.0b0+591e73e'
>>>
>>> torch.manual_seed (2020)
<torch._C.Generator object at 0x00000170D6456630>
>>>
>>> nBatch = 2
>>> nClass = 4
>>> width = 3
>>> height = 5
>>> input = torch.randn (nBatch, nClass, width, height)
>>> target = torch.multinomial (torch.ones (nClass) / nClass, nBatch * width * height, replacement = True).resize_ (nBatch, width, height)
>>>
>>> input.shape
torch.Size([2, 4, 3, 5])
>>> target.shape
torch.Size([2, 3, 5])
>>> target.min()
0
>>> target.max()
3
>>>
>>> input = torch.autograd.Variable (input)
>>> input = torch.nn.functional.log_softmax (input, dim = 1)
>>> target = torch.autograd.Variable (target)
>>>
>>> torch.nn.NLLLoss() (input, target)
Variable containing:
1.9742
[torch.FloatTensor of size 1]
Note that target has one less dimension than input. In particular,
target does not have an nClass dimension, while input does.
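For reference, a minimal modern-PyTorch sketch equivalent to the script above (no Variable wrapper needed; the sizes are the same illustrative values):
import torch
import torch.nn.functional as F

nBatch, nClass, width, height = 2, 4, 3, 5
input = F.log_softmax(torch.randn(nBatch, nClass, width, height), dim=1)
target = torch.randint(0, nClass, (nBatch, width, height))
loss = torch.nn.NLLLoss()(input, target)   # averaged over all nBatch * width * height positions
print(loss)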
Best.
K. Frank
|
st32454
|
Hi Frank, so I took a simpler example to try to understand:
m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
# input is of size N x C = 2 X 2
input = torch.randn(2, 2, requires_grad=True)
# each element in target has to have 0 <= value < C
target = torch.tensor([1, 0])
output = loss(m(input), target)
print(m(input))
print(target)
print(output)
, and one of its outputs was:
tensor([[-1.1722, -0.3706],
[-0.5150, -0.9100]], grad_fn=<LogSoftmaxBackward>)
tensor([1, 0])
tensor(0.4428, grad_fn=<NllLossBackward>)
So which formula is involved in this example for getting the value 0.4428 based on the given input and target? It’s not really clear to me how l1,…,ln from the l(x, y) formula (official NLLLoss documentation) are calculated, since the loss weight is None.
Thanks, Calin!
|
st32455
|
Hello Calin!
Calin_Serban:
tensor([[-1.1722, -0.3706],
[-0.5150, -0.9100]], grad_fn=<LogSoftmaxBackward>)
tensor([1, 0])
tensor(0.4428, grad_fn=<NllLossBackward>)
So which is the formula involved in this example for getting the value 0.4428
0.4428 = -(-0.3706 + -0.5150) / 2.
That is, the value of your output is the average of the losses for each
of the two samples in your batch.
it’s not really clear for me how l1,…,ln from the l(x, y) formula (official NLLLoss documentation) are calculated since the loss weight is None?
Quoting from the NLLLoss 49 documentation:
weight ( Tensor , optional ) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
Optional means that the argument is allowed to be None, i.e, absent.
In such a case there is no reweighting (or, equivalently, the reweighting
factors are all equal to 1).
Best.
K. Frank
|
st32456
|
I don’t know if I can post a question here, but it’d be great to find a solution because honestly I don’t understand what’s wrong here:
optimizer = optim.Adam(net.parameters(), lr=0.001)

EPOCHS = 3
for epoch in range(EPOCHS):
    for data in trainset:
        X, y = data
        # Make the zero_grad()
        net.zero_grad()
        # output of the loss created
        input = net(X.view(-1, 784)
        # calculating the loss
        loss = F.nll_loss(input, y)  # calc and grab the loss value
        loss.backward()  # apply this loss backwards thru the network's parameters
        optimizer.step()  # attempt to optimize weights to account for loss/gradients
    print(loss)  # print loss. We hope loss (a measure of wrong-ness) declines!
But this keeps showing an error at loss = F.nll_loss(input, y), and the error is a SyntaxError. How do I solve it?
I am new to deep learning, but I have experience with sklearn.
|
st32457
|
What kind of error are you seeing?
Could you post the complete error message with the stack trace here, please?
Often F.nll_loss creates a shape mismatch error, since for a multi-class classification use case the model output is expected to contain log probabilities (applied F.log_softmax as the last activation function on the output) and have the shape [batch_size, nb_classes]. The target should be a LongTensor in the shape [batch_size] and should contain the class indices in the range [0, nb_classes-1].
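A minimal sketch of those expected shapes (the sizes here are arbitrary examples):
import torch
import torch.nn.functional as F

batch_size, nb_classes = 4, 10
output = F.log_softmax(torch.randn(batch_size, nb_classes), dim=1)   # log probabilities, [batch_size, nb_classes]
target = torch.randint(0, nb_classes, (batch_size,))                 # LongTensor of class indices, [batch_size]
loss = F.nll_loss(output, target)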
|
st32458
|
I am creating positive and negative pairs for the contrastive loss calculation in a Siamese network. For positive pairs I take two similar images; for a negative pair I randomly pair two different images. I am still working on the model architecture. Every time I train a new model, a random combination of dissimilar pairs is picked, which causes different accuracy and loss values. Am I right to pick dissimilar pairs randomly, or should I create fixed similar and dissimilar pairs for training and testing?
Also, what should be the ratio of similar to dissimilar pairs for every image?
|
st32459
|
It seems that your pairing approach is reasonable. Could you share your network architecture, maybe the feature extractor?
|
st32460
|
Hello,
I wanted to raise the question of what is the most general and elegant way to load a pruned model (pruned using the utils.prune functionality). There is an easy fix, which I’ll post as well, but I don’t think it is satisfactory.
Problem
Given a model (I am using MS-D here: https://github.com/ahendriksen/msd_pytorch 5), pruning introduces new parameters weight_orig, weight_mask, etc., and also makes sure they are properly applied in forward/backward passes via hooks. If I simply define a model and try to load a pruned network, I get a key error:
class MSDModel:
    def __init__(...):
        ....

    def save(self, path, epoch):
        state = {
            "epoch": int(epoch),
            "state_dict": self.net.state_dict(),
            "optimizer": self.optimizer.state_dict(),
        }
        torch.save(state, path)

    def simple_load(self, path, strict=True):
        state = torch.load(path)
        self.net.load_state_dict(state["state_dict"], strict=strict)
        self.optimizer.load_state_dict(state["optimizer"])
        self.net.cuda()
        epoch = state["epoch"]
        return epoch
model = MSDSegmentationModel(...)
model.simple_load(pruned_nets_path + netname)
This will give an unexpected key error, and rightly so: it doesn’t know about weight_orig etc. Calling model.load(..., strict=False) will load the network without errors, but then the new parameters are ignored, i.e. the model is loaded without the masks and such.
Easy solution
I wanted to include a quick workaround for anyone who just wants it to work: call the predefined pruning method on the modules that are pruned in the network you are trying to load, with a pruning percentage of 0%. This will set the correct hooks and introduce the correct parameters, after which loading works fine.
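A minimal sketch of that workaround (which layer types were pruned is an assumption here; use whatever was actually pruned in the network you are loading):
import torch
import torch.nn.utils.prune as prune

# no-op prune: adds weight_orig / weight_mask and the forward pre-hook,
# so the keys in the pruned state_dict now exist in the model
for module in model.net.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.0)

model.simple_load(pruned_nets_path + netname)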
General solution? (does not work yet)
In my opinion, the above is not really a proper solution. Instead, I would like to have a loading function which introduces the missing parameters in the dicts and sets the hooks properly. I have made a start below (including some code from: https://github.com/KaiyangZhou/deep-person-reid/blob/master/torchreid/utils/torchtools.py 4) but this does not properly set the buffers, hooks etc. and simply does not work. The idea is that the expanded_loading flag would allow the function to add new parameters etc. to the existing model.
from collections import OrderedDict
import warnings

class MSDModel:
    def __init__(...):
        ...

    def load(self, path, strict=True, expanded_loading=False):
        state = t.load(path)
        if 'state_dict' in state:
            state_dict = state['state_dict']
        else:
            state_dict = state
        if strict:
            self.net.load_state_dict(state_dict)
        else:
            model_dict = self.net.state_dict()
            new_state_dict = OrderedDict()
            matched_keys, discarded_keys = [], []
            if not expanded_loading:
                for k, v in state_dict.items():
                    if k in model_dict and model_dict[k].size() == v.size():
                        new_state_dict[k] = v
                        matched_keys.append(k)
                    else:
                        discarded_keys.append(k)
            else:
                for k, v in state_dict.items():
                    new_state_dict[k] = v
                    matched_keys.append(k)
            model_dict.update(new_state_dict)
            self.net.load_state_dict(state_dict, strict=strict)  # unnecessary?
            if len(matched_keys) == 0:
                warnings.warn(
                    'The pretrained weights cannot be loaded, '
                    'please check the key names manually '
                    '(** ignored and continue **)'
                )
            else:
                print(
                    'Successfully loaded pretrained weights.'
                )
                if len(discarded_keys) > 0:
                    print(
                        '** The following keys are discarded '
                        'due to unmatched keys or layer size: {}'.
                        format(discarded_keys)
                    )
        self.optimizer.load_state_dict(state["optimizer"])
        self.net.cuda()
        epoch = state["epoch"]
        return epoch
I am out of my depth here so I was hoping somebody would like to help me to write a proper loading function for pruned networks.
Kind regards,
Richard
|
st32461
|
Solved by Michela in post #3
Yep, as @Jayant_Parashar said: remove the pruning reparametrization prior to saving the state_dict.
Yet another solution is to save out the whole model instead of the state dict while it’s still pruned:
torch.save(pruned_model, 'pruned_model.pth'), and then restore it as pruned_model = torch.load(…
|
st32462
|
Remove the pruning before saving using prune.remove(layername, "weight"). This makes pruning permanent.
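A minimal sketch of that (the layer attribute name is a placeholder):
import torch
import torch.nn.utils.prune as prune

prune.remove(model.layer, "weight")                      # folds weight_orig * weight_mask back into .weight
torch.save(model.state_dict(), "pruned_state_dict.pth")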
|
st32463
|
Yep, as @Jayant_Parashar said: remove the pruning reparametrization prior to saving the state_dict.
Yet another solution is to save out the whole model instead of the state dict while it’s still pruned:
torch.save(pruned_model, 'pruned_model.pth'), and then restore it as pruned_model = torch.load('pruned_model.pth'). This might be a bit risky because it assumes the model class can be easily found.
If, however, you care about retaining the masks, or you have inherited a state_dict from somewhere else which contains the pruned reparametrization (so the various weight_mask and weight_orig buffers and parameters), then the solution is to: 1) put your newly instantiated model in a pruned state using prune.identity, which creates all the objects you’d expect, but with masks of ones; 2) load the state_dict, which should now fit the model.
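A minimal sketch of that last approach (the model class and layer attribute are placeholders):
import torch
import torch.nn.utils.prune as prune

model = MyModel()                             # fresh, unpruned instance
prune.identity(model.layer, name="weight")    # creates weight_orig and an all-ones weight_mask
model.load_state_dict(torch.load("pruned_state_dict.pth"))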
|
st32464
|
Let me also add that, in the last scenario, loading the state_dict into a newly instantiated model will make it such that all your weight_origs and weight_masks will be properly filled in with the info from the state_dict, BUT the weight will still be a randomly sampled tensor from the new model instantiation.
Why does this matter? weight_orig and weight_mask together can be used to recompute the weight on the fly, through the forward_pre_hook that PyTorch pruning uses. But this hook needs a forward call to act and recompute the weight . Without that call, the weight will just be some random tensor that has nothing to do with the loaded weight_orig and weight_mask.
Therefore, either serialize your models after removing the pruning parametrization, or remember to set the weight correctly (by hand or with a forward call) before trying to prune it again. In practice, this means preferably calling _ = model(X), where X is some (even fake) input data batch, or, alternatively, setting the weight by hand: weight = weight_orig * weight_mask (be careful with this).
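Continuing the prune.identity sketch above, the recompute step might look like this (the input shape is a placeholder):
_ = model(torch.randn(1, 1, 28, 28))   # the forward pass triggers the pre-hook that recomputes
                                       # weight = weight_orig * weight_mask from the loaded buffers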
|
st32465
|
Hi Chamroukhi!
Chamroukhi:
any help for which metric in Pytorch used for multi-labeling ?
The appropriate loss function for a multi-label, multi-class classification
task is BCEWithLogitsLoss 2.
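For example, a minimal multi-label setup might look like this (the sizes are arbitrary):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()
logits = torch.randn(4, 5)                        # raw model outputs: 4 samples, 5 labels
targets = torch.randint(0, 2, (4, 5)).float()     # multi-hot targets, each label independently 0 or 1
loss = criterion(logits, targets)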
Best.
K. Frank
|
st32466
|
Hello Guys,
where and how do you store and download/upload your files while training your models in the cloud (Kaggle, Colab)?
I’m currently using Google Cloud Storage, but the 300 USD free bonus is used up immediately just for storing the checkpoints. I bought 100 GB of Gdrive storage, but that’s just not enough if you use Colab Pro and train multiple models at once.
Thank you in advance for your suggestions.
|
st32467
|
I trained my module with 4 GPUs, like:
mymodule = nn.parallel.DistributedDataParallel(mymodule, device_ids=[local_rank])
But I saved my module by
torch.save(mymodule.state_dict() , '%s/modelG_%d.pth' % (opt.outf, epoch))
When I load it by:
mymodule.load_state_dict(torch.load(f))
I got:
RuntimeError: storage has wrong size: expected -4763383137013773690 got 128
What is wrong? And is there any way to deal with it without re-training?
Thanks.
|