st46368 | This error is raised by PIL, which gets unexpected images and cannot blend them.
Is the error raised in the forward pass of self.net?
If so, it seems you are using PIL methods inside the model, while you are also passing tensors to its forward. Could you explain the use case a bit? |
st46369 | I want to run a model that uses SyncBatchNorm on the CPU, so I have to convert SyncBatchNorm back to BatchNormNd. How can I do that?
I only found a way to convert from BatchNorm to SyncBatchNorm (convert_sync_batchnorm), but how do I do the opposite? Thanks in advance. |
st46370 | I think the best approach would be to not use convert_sync_batchnorm in your model, which should be executed on the CPU.
Would this work or why do you have to apply this transformation? |
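As far as I know, PyTorch does not ship a built-in inverse of convert_sync_batchnorm, but a minimal revert helper could look like the sketch below (revert_sync_batchnorm is a made-up name, and the BatchNorm2d choice assumes the model uses 2D batchnorm layers):

```python
import torch
import torch.nn as nn

def revert_sync_batchnorm(module):
    # Recursively replace SyncBatchNorm layers with regular BatchNorm layers
    # so the model can be executed on the CPU (sketch, not an official API).
    module_output = module
    if isinstance(module, nn.SyncBatchNorm):
        module_output = nn.BatchNorm2d(
            module.num_features, module.eps, module.momentum,
            module.affine, module.track_running_stats)
        if module.affine:
            with torch.no_grad():
                module_output.weight.copy_(module.weight)
                module_output.bias.copy_(module.bias)
        module_output.running_mean = module.running_mean
        module_output.running_var = module.running_var
        module_output.num_batches_tracked = module.num_batches_tracked
    for name, child in module.named_children():
        module_output.add_module(name, revert_sync_batchnorm(child))
    return module_output
```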
st46371 | Solved by ptrblck in post #8
You could pass an input with a larger spatial size (height and width) or alternatively remove some pooling layers or other layers, such as strided convs, which would reduce the spatial size too much. |
st46372 | The tensors with the provided shapes should work fine in torch.cat:
x = torch.randn(1, 128, 32, 160, 160)
x1 = torch.randn(1, 64, 32, 160, 160)
x = torch.cat((x, x1), dim=1)
print(x.shape)
> torch.Size([1, 192, 32, 160, 160]) |
st46373 | Thanks for your help. At the beginning of the code I write from torch import cat. It runs fine for the first few cat operations of the network, but when I debugged to the last cat operation it stopped working, and I don’t know the reason. |
st46374 | When I run train.py on Windows, I get this error (see screenshot).
When I run train.py on Ubuntu, I get this error (see screenshot). |
st46375 | These are all different errors.
The out-of-memory issue is raised if your GPU doesn’t have enough memory, so you would have to reduce e.g. the batch size.
The second error is raised if you are passing an input tensor to the model that is too small, so that a layer would create an empty output tensor (a pooling layer in your case).
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier. |
st46376 | You could pass an input with a larger spatial size (height and width) or alternatively remove some pooling layers or other layers, such as strided convs, which would reduce the spatial size too much. |
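For example, a pooling layer raises an error once its input’s spatial size is smaller than the kernel, which gives a quick way to check how small the input may become (the shapes below are made up for illustration):

```python
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2)
print(pool(torch.randn(1, 3, 8, 8)).shape)  # works: torch.Size([1, 3, 4, 4])
pool(torch.randn(1, 3, 1, 1))               # raises a RuntimeError: the output would be empty
```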
st46377 | Thank you for your kind answer. This is the same code on different operating systems. I don’t know why x6 can be calculated on Windows (see screenshot), but on Ubuntu it can’t calculate x6 (see screenshot). |
st46378 | I don’t know how the model is defined, but one often overlooked difference between different OSes is the sorting of file names, so I guess you might be facing the same issue for a later input.
Feel free to post a reproducible code snippet by wrapping it into three backticks, as mentioned before, so that we can take a look. |
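If the file ordering is the suspect, one way to rule it out is to sort the file lists explicitly in the Dataset instead of relying on the OS; a small sketch (the directory path is a placeholder):

```python
import os

# os.listdir() does not guarantee any order and can differ between OSes,
# so sort explicitly to make the sample order reproducible.
image_dir = "./data/images"  # placeholder path
file_names = sorted(os.listdir(image_dir))
```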
st46379 | I am working on vessel segmentation. I use BCEWithLogitsLoss as the loss function and accuracy, precision, recall, specificity, F1, and AUC as metrics.
The loss keeps going down during training, but most of these metrics go down too.
I don’t understand how this could happen. Can anyone explain this, please? Thank you. |
st46380 | I want to be able to wrap all convolutional parameters (w) of my model with a function e.g. softmax(w), sigmoid(w), or alpha*w, etc. Is there any way of making this function part of the graph using hooks etc.? |
st46381 | Performing distributed training, I have the following code:
training_sampler = DistributedSampler(training_set, num_replicas=2, rank=0)
training_generator = data.DataLoader(training_set, **params, sampler=training_sampler)
for x, y, z in training_generator: # Error occurs here.
...
Overall, I get the following message:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/home/ubuntu/VC/ppg_training_extraction/ppg_training_scripts/train_ASR_trim_scp.py", line 336, in train
for local_batch_src, local_batch_tgt, lengths in dataloaders[phase]:
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 352, in __iter__
return self._get_iterator()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 294, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 827, in __init__
self._reset(loader, first_iter=True)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 857, in _reset
self._try_put_index()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1091, in _try_put_index
index = self._next_index()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 227, in __iter__
for idx in self.sampler:
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/torch/utils/data/distributed.py", line 97, in __iter__
indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore
RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
Now at that line, I ran the following instructions in pdb:
(Pdb) g = torch.Generator()
(Pdb) g.manual_seed(0)
<torch._C.Generator object at 0x7ff7f8143110>
(Pdb) indices = torch.randperm(4556, generator=g).tolist()
(Pdb) indices = torch.randperm(455604, generator=g).tolist()
*** RuntimeError: Expected a 'cuda' device type for generator but found 'cpu'
Why am I getting the runtime error when the upper bound is high, but not when it’s low enough?
Note: I ran this in a clean Python session and found
>>> import torch
>>> g = torch.Generator()
>>> g.manual_seed(0)
<torch._C.Generator object at 0x7f9d2dfb39f0>
>>> indices = torch.randperm(455604, generator=g).tolist()
that this worked fine. Is it some configuration in how I’m handling distributed training among multiple GPUs? Any sort of insights would be appreciated! |
st46382 | Solved by shawnbzhang in post #2
So I found out why this error was occurring. It was because earlier in my code, I had the following line:
torch.set_default_tensor_type('torch.cuda.FloatTensor') |
st46383 | So I found out why this error was occurring. It was because earlier in my code, I had the following line:
torch.set_default_tensor_type('torch.cuda.FloatTensor') |
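In other words, with the CUDA default tensor type active, randperm tries to create a CUDA tensor while the DistributedSampler passes a CPU generator. A minimal sketch of the fix, assuming a CUDA build (either remove the set_default_tensor_type call entirely or restore the CPU default before creating the DataLoader):

```python
import torch

torch.set_default_tensor_type('torch.cuda.FloatTensor')  # the problematic line
g = torch.Generator()  # CPU generator, as created inside DistributedSampler
# torch.randperm(455604, generator=g)  # would raise: expected a 'cuda' generator

torch.set_default_tensor_type('torch.FloatTensor')       # restore the CPU default
indices = torch.randperm(455604, generator=g).tolist()   # works again
```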
st46384 | Here is my personal mapper for augmentation:
def mapper(dataset_dict):
dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below
image = utils.read_image(dataset_dict["file_name"], format="BGR")
transform_list = [
#T.Resize((800,800)),
T.ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice')
,T.RandomApply(T.RandomFlip(prob=0.5, horizontal=True, vertical=False),prob=0.3)
,T.RandomApply(T.RandomRotation((-10,10)),prob=0.4)
,T.RandomApply(T.RandomSaturation(0.8, 1.2),prob=0.3)
,T.RandomApply(T.RandomBrightness(0.8, 1.2),prob=0.2)
,T.RandomApply(T.RandomContrast(0.6, 1.3),prob=0.2)
,T.RandomApply(T.RandomLighting(0.7),prob=0.4)
]
image, transforms = T.apply_transform_gens(transform_list, image)
dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
annos = [
utils.transform_instance_annotations(obj, transforms, image.shape[:2])
for obj in dataset_dict.pop("annotations")
if obj.get("iscrowd", 0) == 0
]
instances = utils.annotations_to_instances(annos, image.shape[:2])
dataset_dict["instances"] = instances
return dataset_dict
So far there is no problem, but when I add
T.RandomApply(T.RandomCrop('relative_range', (0.4, 0.6)), prob=0.05)
to the augmentation, the loss diverges and shows NaN or infinite values.
Here is my configuration for detectron2:
import os
from detectron2.data import DatasetMapper, build_detection_train_loader # the default mapper
import detectron2.data.transforms as T
## Adding the new CascadeDropoutROIHeads to the config file
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("kogas_train",)
cfg.DATASETS.TEST = ("kogas_test",)
cfg.TEST.EVAL_PERIOD = 200
cfg.DATALOADER.NUM_WORKERS = 4
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 12#6
cfg.SOLVER.MAX_ITER = 15000 # Need to train longer
cfg.MODEL.RESNETS.DEPTH = 50 #RESNET 50,101
cfg.SOLVER.CHECKPOINT_PERIOD = 200
#cfg.MODEL.ROI_HEADS.NAME= "CascadeROIHeads"
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4 # 4 classes
cfg.MODEL.MASK_ON = False
cfg.OUTPUT_DIR = save_path
cfg.SOLVER.LR_SCHEDULER_NAME = 'WarmupCosineLR'
cfg.SOLVER.BASE_LS = 0.0001
os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
dataloader = build_detection_train_loader(cfg, mapper=mapper2)
trainer = CocoTrainer(cfg)
trainer.resume_or_load(resume=False) |
st46385 | Where should I add the resize module in my code to make the images the same size? I have tried many times, but it does not work at all.
from torchvision import transforms
from segmentation.data_loader.segmentation_dataset import SegmentationDataset
from segmentation.data_loader.transform import Rescale, ToTensor
from segmentation.trainer import Trainer
from segmentation.predict import *
from segmentation.models import all_models
from util.logger import Logger
train_images = r'./plant/images/train'
test_images = r'./plant/images/test'
train_labled = r'./plant/labeled/train'
test_labeled = r'./plant/labeled/test'
if __name__ == '__main__':
model_name = "unet_mobilenet_v2"
device = 'cuda'
batch_size = 8
n_classes = 256
num_epochs = 10
image_axis_minimum_size = 200
pretrained = True
fixed_feature = False
logger = Logger(model_name=model_name, data_name='example')
### Loader
compose = transforms.Compose([
Rescale(image_axis_minimum_size),
ToTensor()
])
train_datasets = SegmentationDataset(train_images, train_labled, n_classes, compose)
train_loader = torch.utils.data.DataLoader(train_datasets, batch_size=batch_size, shuffle=True, drop_last=True)
test_datasets = SegmentationDataset(test_images, test_labeled, n_classes, compose)
test_loader = torch.utils.data.DataLoader(test_datasets, batch_size=batch_size, shuffle=True, drop_last=True)
### Model
model = all_models.model_from_name[model_name](n_classes, batch_size,
pretrained=pretrained,
fixed_feature=fixed_feature)
model.to(device)
###Load model
###please check the foloder: (.segmentation/test/runs/models)
#logger.load_model(model, 'epoch_15')
### Optimizers-
if pretrained and fixed_feature: #fine tunning
params_to_update = model.parameters()
print("Params to learn:")
params_to_update = []
for name, param in model.named_parameters():
if param.requires_grad == True:
params_to_update.append(param)
print("\t", name)
optimizer = torch.optim.Adadelta(params_to_update)
else:
optimizer = torch.optim.Adadelta(model.parameters())
### Train
#scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
trainer = Trainer(model, optimizer, logger, num_epochs, train_loader, test_loader)
trainer.train()
#### Writing the predict result.
predict(model, r'./plant/input.png',
r'./plant/output.png')
(screenshot of the error) |
st46386 | Could you try passing the size value as a tuple to the Resize function? Resize behaves differently if your images have non-equal dimensions and you pass only a single value. |
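For reference, a short sketch of the difference (the size value 200 is just an example):

```python
from torchvision import transforms

resize_exact = transforms.Resize((200, 200))  # resizes every image to exactly 200x200
resize_short = transforms.Resize(200)         # only scales the shorter edge to 200,
                                              # so outputs can still differ in shape
```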
st46387 | Sorry for the late reply. The error looks to be from the Dataloader, which in turn is coming from the Dataset due to incorrect dimensions. Try using the compose as below,
compose = transforms.Compose([
    transforms.Resize((image_axis_minimum_size, image_axis_minimum_size)),
    ToTensor()
]) |
st46388 | I am working on vessel segmentation and I have three datasets: DRIVE, STARE, and ChaseDB. Should my model be trained on these three datasets together to get one model in the end, or should I train on them separately and get three models? Which option is better? I just want the metrics to look nice.
Thank you!! |
st46389 | I am trying to constraint the final layer of my NN to have non negative weights in the final layer, for my binary classification task ( the reason for me wanting to have non negative weights does not matter right now)
This is basically what my code looks like :
class Classifier(nn.Module):
    def __init__(self, in_dim, hidden_dim1, hidden_dim2, hidden_dim3, n_classes):
        super(Classifier, self).__init__()
        # other layers
        self.classify = nn.Linear(hidden_dim3, n_classes)

    def forward(self, g, h):
        # other layers
        hg = self.classify(h)
        self.classify.weight.data = self.classify.weight.data.clamp(min=0)
        hg = torch.sigmoid(hg)
        return hg
So am I doing this right? Is this the proper way of forcing the final layer to only have positive weights, and therefore only look for “positive” features to do the classification?
Wouldn’t there be a problem, because a sigmoid with only positive inputs only outputs probabilities above 50%? The bias should fix this problem, right?
Note that Keras has
tf.keras.constraints.NonNeg()
which does the same thing, and I am trying to do that in PyTorch. |
st46390 | Hi Richard!
It looks like this is related to your previous post:
How to use sigmoid on only positive values?
I have a binary classification model, that in the latest linear layer, it outputs only positive values (don’t ask why, that’s a different matter), now when i give the final layer’s output to torch.sigmoid, all the results are above 50%, because the final linear layer is only outputting positive values, how can i fix this and output probability? is there any “positive only” sigmoid in pytorch?
Richard_S:
I am trying to constraint the final layer of my NN to have non negative weights in the final layer, for my binary classification task ( the reason for me wanting to have non negative weights does not matter right now)
My guess is that you’re not going about your problem in a sensible way.
But unless you describe your actual use case, it’s hard to know.
self.classify.weight.data = self.classify.weight.data.clamp(min=0)
is this proper way of forcing the final layer to only have positive weights
.data is deprecated, and the forum experts will threaten you with
the specter of computation-graph gremlins if you use it.
If you really want to do this, something like:
with torch.no_grad():
self.classify.weight.copy_ (self.classify.weight.data.clamp(min=0))
might be better.
and therefore only looks for “positive” features to do classification ?
Is this the key to what you are trying to do? What would it mean to
build a classifier that “only looks for ‘positive’ features?”
wouldn’t there be problems because sigmoid with only positive input only outputs +50% probabilities?
Well, if you only look for positive features, I suppose that it would
be natural to only find positive features, and therefore only output
probabilities greater than 50%.
But yes, this does seem problematic to me.
the bias should fix this problem, right?
I doubt anything I say where will be relevant to your actual problem,
but here are some observations:
Just because your last layer has only positive weights (as distinct
from biases) doesn’t mean that your output values (even if you had
no biases) would be positive. Negative inputs to a positive-weight
layer will produce negative outputs.
But yes, a negative bias could flip the value of what would have been
a positive output to negative.
However, as it stands, I don’t see how your optimizer would know that
you want positive weights. So even if your biases could “fix” the
problem, your optimizer could well just take steps that leave your
biases alone, and make your weights negative (even though you
then force them by hand to be positive).
Good luck.
K. Frank |
st46391 | I’m trying to implement the idea of this paper for a more robust model :
arxiv.org
1806.06108.pdf 23
375.14 KB
The idea is that an “attacker” adding more benign features will not evade the model: by forcing the weights to be positive only, adding benign features will not affect the output, because only features that drive the model towards a positive prediction affect it. Pretty simple but effective.
They implemented this in Keras using
tf.keras.constraints.NonNeg()
So what is the most optimal way of implementing this in a multi-layer NN in PyTorch (for binary classification)? Meaning, how should I force the weights to be non-negative, and what activation function and optimization parameters should I use? |
st46392 | Hi Richard!
Richard_S:
they implemented this in keras using
tf.keras.constraints.NonNeg()
So what is the most optimal way of implementing this in a multi layer NN in pytorch?
According to the keras documentation, Layer weight constraints 12:
“They are per-variable projection functions applied to the target
variable after each gradient update.”
So following along with what keras claims it does, you could try:
optimizer.step()
with torch.no_grad():
self.classify.weight.copy_ (self.classify.weight.data.clamp(min=0))
to force the constraint after each optimizer step.
You would then hope that the training process causes the optimizer
to move the last layer’s bias into negative territory so that you get
predicted logits centered around zero, and therefore make “negative”
as well as “positive” predictions.
Such an approach would seem to be training the model with one hand
tied behind its back, but, in principle, it ought to be able to train the
biases to become negative.
Good luck.
K. Frank |
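Putting that together, a hedged sketch of one training step with the non-negativity projection applied after each update (model, criterion, optimizer and the classify layer name are taken from the code above; the loop details are assumptions):

```python
import torch

for x, y in train_loader:        # assumed DataLoader yielding inputs and targets
    optimizer.zero_grad()
    out = model(x)
    loss = criterion(out, y)
    loss.backward()
    optimizer.step()
    # project the last layer's weights back onto the non-negative set,
    # mirroring what Keras' NonNeg constraint does after each update
    with torch.no_grad():
        model.classify.weight.clamp_(min=0)
```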
st46393 | Thank you for the answer. Any suggestion on which optimizer and loss function to choose? Does it really matter in this case? I am currently using:
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001) |
st46394 | Hi Richard!
Richard_S:
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)
From your previous posts it appears that you have a multi-label,
multi-class problem. For this, BCELoss is reasonable. For
improved numerical stability, however, you should prefer using
BCEWithLogitsLoss and remove the sigmoid() call from your
forward() function.
Would it be practical for you to experiment with different optimizers,
or is your model too slow and expensive to play around with?
I would recommend starting with plain-vanilla SGD, if only to get a
baseline for comparison.
Both SGD with momentum and Adam “remember” things from
previous steps, so the fact that you will be munging your weights
after the optimization step might confuse this process. It’s not that
you shouldn’t try SGD with momentum or Adam – they might work
fine – but be on the lookout for potential issues and use plain-vanilla
SGD as a sanity check.
In general, weight decay is worthwhile, so you should probably turn
it on (but run a baseline without it).
Your BCEWithLogitsLoss (or BCELoss) loss function should cause
your model to train to make some “negative” predictions (assuming
that your training data is sensible), but your weight-munging scheme
is, at best, going to make your training process more difficult. So you
might have to carry out training runs that are longer than normal.
(Also, experiment with your learning rate.)
Lastly, if your training data is unbalanced, that is, some given class
has many more “negative” samples than “positive,” you should also
consider using BCEWithLogitsLoss's pos_weight argument to
account for this.
Good luck.
K. Frank |
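For reference, pos_weight takes one value per class; a small sketch assuming 3 output classes where positives are roughly 10x rarer than negatives (the ratio is an illustrative assumption):

```python
import torch
import torch.nn as nn

# one weight per class: the negative/positive sample ratio is a common choice
pos_weight = torch.tensor([10.0, 10.0, 10.0])
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 3)                    # raw model outputs (no sigmoid)
targets = torch.randint(0, 2, (8, 3)).float()
loss = criterion(logits, targets)
```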
st46395 | I’m trying to go seq2seq with a Transformer model. My input and output are the same shape (torch.Size([499, 128]) where 499 is the sequence length and 128 is the number of features.
My input looks like: (spectrogram screenshot, spec.x)
My output looks like: (spectrogram screenshot, spec.y)
My training loop is:
for batch in tqdm(dataset):
optimizer.zero_grad()
x, y = batch
x = x.to(DEVICE)
y = y.to(DEVICE)
pred = model(x, torch.zeros(x.size()).to(DEVICE))
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
My model is:
import math
from typing import final
import torch
import torch.nn as nn
class Reconstructor(nn.Module):
def __init__(self, input_dim, output_dim, dim_embedding, num_layers=4, nhead=8, dim_feedforward=2048, dropout=0.5):
super(Reconstructor, self).__init__()
self.model_type = 'Transformer'
self.src_mask = None
self.pos_encoder = PositionalEncoding(d_model=dim_embedding, dropout=dropout)
self.transformer = nn.Transformer(d_model=dim_embedding, nhead=nhead, dim_feedforward=dim_feedforward, num_encoder_layers=num_layers, num_decoder_layers=num_layers)
self.decoder = nn.Linear(dim_embedding, output_dim)
self.decoder_act_fn = nn.PReLU()
self.init_weights()
def init_weights(self):
initrange = 0.1
nn.init.zeros_(self.decoder.weight)
nn.init.uniform_(self.decoder.weight, -initrange, initrange)
def forward(self, src, tgt):
pe_src = self.pos_encoder(src.permute(1, 0, 2)) # (seq, batch, features)
transformer_output = self.transformer_encoder(pe_src)
decoder_output = self.decoder(transformer_output.permute(1, 0, 2)).squeeze(2)
decoder_output = self.decoder_act_fn(decoder_output)
return decoder_output
My output has a shape of torch.Size([32, 499, 128]) where 32 is batch, 499 is my sequence length and 128 is the number of features. But the output has the same values:
tensor([[[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
...,
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017],
[0.0014, 0.0016, 0.0017, ..., 0.0018, 0.0021, 0.0017]]],
grad_fn=<PreluBackward>)
What am I doing wrong? Thank you so much for any help. |
st46396 | Hello again, i add my model to here but when i decrease learning rate its giving right outputs, now new problem arisen, whatever i give input its predict same value; example:
Input:
[90, 91, 26, 62, 92, 93, 26, 94, 95, 96]
incumbering soil and washed into immediate and glittering popularity possibly
Masked Input:
[90, 91, 26, 62, 92, 93, 26, 1, 95, 96]
incumbering soil and washed into immediate and unnk popularity possibly
Output:
[90, 91, 26, 62, 92, 93, 26, 33, 95, 96]
incumbering soil and washed into immediate and the popularity possibly
As you can see, it always predicts the “the” token.
Model:
class Kemal(nn.Module):
def __init__(self, src_vocab_size, embedding_size, num_heads, dim_forward, num_encoder_layers, max_len, src_pad_idx, dropout, device):
super(Kemal, self).__init__()
self.src_word_embedding = nn.Embedding(src_vocab_size, embedding_size)
self.src_position_embedding = nn.Embedding(max_len, embedding_size)
self.device = device
self.encoder_norm = nn.LayerNorm(embedding_size)
self.encoder_layer = nn.TransformerEncoderLayer(embedding_size, num_heads, dim_feedforward=dim_forward, dropout=dropout, activation='gelu')
self.encoder = nn.TransformerEncoder(self.encoder_layer, num_encoder_layers, self.encoder_norm)
self.fc = nn.Linear(embedding_size, src_vocab_size)
self.src_pad_idx = src_pad_idx
def make_src_pad_mask(self, src):
src_mask = src.transpose(0, 1) == self.src_pad_idx
return src_mask
# (N, src_len)
def forward(self, src):
src_seq_lenght, N = src.shape
src_mask = nn.Transformer.generate_square_subsequent_mask(None, src_seq_lenght).to(self.device)
src_positions = (
torch.arange(0, src_seq_lenght).unsqueeze(1).to(self.device)
)
embed_src = (self.src_word_embedding(src) + self.src_position_embedding(src_positions))
src_padding_mask = self.make_src_pad_mask(src)
out = self.encoder(embed_src, mask=src_mask, src_key_padding_mask=src_padding_mask)
out = self.fc(out)
return out
With CrossEntropyLoss |
st46397 | key_padding_mask – if provided, specified padding elements in the key will be ignored by the attention. When given a binary mask and a value is True, the corresponding value on the attention layer will be ignored. When given a byte mask and a value is non-zero, the corresponding value on the attention layer will be ignored
from https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html 2 |
st46398 | I’m not using it now. I’m new to PyTorch, so I followed the tutorials, and the pad_mask remained from the examples; it’s not in the current version.
Do you have any idea about my problem? |
st46399 | Sorry for the late update, I solved my problem.
I was giving the input to the model in (N, seq_len) shape, but I needed to give (seq_len, N).
All that time I was generating the wrong src_mask and positional embeddings.
The problem of predicting the wrong tokens was because of word frequencies. |
st46400 | Hello, I am trying to make a BERT-like model with nn.TransformerEncoder, but when I predict a masked word in a sequence, it usually picks the most frequent words in the vocab.
When I thought about it, it kind of made sense: predicting the most frequent class for imbalanced data gives high accuracy ‘for free’.
What can I do about this situation? I thought about changing the loss function. |
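One common option is to down-weight frequent tokens in the loss, e.g. via the weight argument of CrossEntropyLoss. A sketch under assumed token counts (the inverse-frequency weighting and the pad index are illustrative choices, not the only ones):

```python
import torch
import torch.nn as nn

# token_counts: how often each vocab id appears in the training corpus (assumed)
token_counts = torch.tensor([50000., 120., 3000., 45.])  # toy vocab of 4 tokens
weights = 1.0 / (token_counts + 1.0)                     # rarer tokens get larger weight
weights = weights / weights.sum() * len(weights)         # optional renormalization

criterion = nn.CrossEntropyLoss(weight=weights, ignore_index=0)  # 0 = pad id (assumed)
```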
st46401 | I have a checkpoint file which was trained with torch and the file extension is t7. Is it possible for me to load it in a pytorch model? Thanks. |
st46402 | Solved by albanD in post #2
Hi,
Very old versions of pytorch used to have a compatibility layer to do this but it was removed a while ago.
It might be simpler for you to dump the file to JSON from Lua and reload it in Python. It will be a bit slow, but should be fairly simple. |
st46403 | Hi,
Very old versions of pytorch used to have a compatibility layer to do this but it was removed a while ago.
It might be simpler for you to dump the file to JSON from Lua and reload it in Python. It will be a bit slow, but should be fairly simple. |
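The Python side of that workflow could look roughly like the sketch below (the JSON layout, the file name and MyModel are assumptions; they depend on how the Lua model was dumped and re-implemented):

```python
import json
import torch

# load the parameters dumped from Lua torch as plain nested lists
with open('legacy_model.json') as f:          # assumed file name
    legacy = json.load(f)

model = MyModel()                             # the PyTorch re-implementation (assumed)
state_dict = {name: torch.tensor(values)      # convert the lists back to tensors
              for name, values in legacy.items()}
model.load_state_dict(state_dict)
```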
st46404 | Hello everyone, I am currently facing a problem regarding small GPU memory in my deep learning project. To handle this, I am currently training with batch size = 4, but this requires significant subsampling of the initial data to fit into my GPU. Hence, I think I have to use batch size = 1, which is stochastic gradient descent. However, I have read some posts saying that batch normalization does not work well with batch size = 1. If that is true, what should I do with the BN layers in my model? Do I have to remove them? |
st46405 | The batch statistics might be noisy with a single sample, or your model might even raise an error if no statistics can be computed from the input.
However, as usual, it depends on your use case and I would recommend running some experiments and playing around with the momentum term in the batchnorm layers. |
st46406 | @ptrblck Hi, thank you for your reply!
(screenshot of the running-stats update equation)
I realized that my momentum value is currently set to a really low value, but apparently it is recommended to set a high value when training with a small batch.
Using the equation above, could you please explain to me how that statement is valid? |
st46407 | Be a bit careful about the momentum definition in batchnorm layers, as they might differ from other definitions.
From the docs 11:
This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is x_hat_new = (1 - momentum) * x_hat + momentum * x_t, where x_hat is the estimated statistic and x_t is the new observed value.
Based on your posted formula (assuming alpha is the momentum) the definition differs as explained in the docs. |
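A tiny numerical check of that update rule (the values are arbitrary):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(1, momentum=0.1)
x = torch.full((4, 1), 2.0)   # batch mean is 2.0

print(bn.running_mean)        # starts at tensor([0.])
bn(x)
# x_hat_new = (1 - 0.1) * 0.0 + 0.1 * 2.0 = 0.2
print(bn.running_mean)        # tensor([0.2000])
```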
st46408 | @ptrblck, thank you for the reply.
Based on the documentation, the momentum in PyTorch is the opposite of the equation I posted. Hence, it should be set really low, like 0.1, during training with a small batch.
Is this correct? |
st46409 | The default value is already set to 0.1, so you might want to decrease it even further. |
st46410 | @ptrblck, hi, I just tried changing the momentum for batch norm, but unfortunately it did not work.
The error is triggered in BatchNorm1d with batch size 1.
It seems that it is not possible to train with batch size 1 and BatchNorm1d.
Is this correct? If so, is there any alternative solution? |
st46411 | You cannot use batchnorm layers with a single sample, if the temporal dimension also contains only a single time step as seen here:
bn = nn.BatchNorm1d(3)
x = torch.randn(1, 3, 10)
out = bn(x)
x = torch.randn(1, 3, 1)
out = bn(x) # error
since the mean would just be the channel values and the stddev cannot be calculated.
I’m not sure, if any normalization layer would make sense in such a use case, but lets wait for others to chime in. |
st46412 | edshkim98:
If so, is there any alternative solution that I can do?
You can try LayerNorm as a substitute, but it is not always performing as well, being too different from batch norm. Or something like https://github.com/Cerebras/online-normalization 25 |
st46413 | @googlebot Thank you for recommending an excellent paper. I will have a read and try it!! |
st46414 | Wouldn’t LayerNorm calculate the stats from the single pixel and thus return a zero output or do I misunderstand the use case? |
st46415 | It is applied to vectors in feature space, though I’ve read that layer norm doesn’t work well with convolutions, stats would be per image region. I think the problem is rather with channel importance equalization, as layer norm “ties” all dimensions; I guess that is bad for early vision filters.
PS: if that’s not clear, I meant group norm applied to a channels last permuted tensor |
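For reference, a batch-size-independent layer such as GroupNorm can be dropped in where batch statistics are unreliable; a sketch (num_groups=1 makes it behave like a layer norm over the channels, with no claim that it matches batchnorm accuracy):

```python
import torch
import torch.nn as nn

gn = nn.GroupNorm(num_groups=1, num_channels=3)  # statistics are computed per sample
x = torch.randn(1, 3, 1)                         # batch size 1, single time step
out = gn(x)                                      # works, unlike BatchNorm1d in this case
```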
st46416 | I ran DenseNet-121 with the CIFAR-10 dataset. I want to run it with another dataset. Does it require a large dataset like CIFAR-10? If I use a small dataset of around 1000 samples, how should I set up the training to get a good result? Please help. Thanks in advance!! |
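With only around 1000 samples, the usual approach is transfer learning: start from ImageNet-pretrained weights, replace the classifier head, and fine-tune with augmentation. A hedged sketch (the class count and learning rate are placeholders):

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.densenet121(pretrained=True)   # ImageNet weights
num_classes = 10                                           # placeholder
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

# optionally freeze the feature extractor and train only the new head first
for param in model.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```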
st46417 | Hello! I tried for a while to initialize my network using a fixed random seed, but it doesn’t seem to work (if I run the code multiple times and print the parameters, they are different every time). Here is what I have so far:
torch.cuda.manual_seed_all(0)
class Encoder(nn.Module):
def __init__(self):
super().__init__()
N = 32
self.encoder = nn.Sequential(
nn.Conv2d(3, N, 4, 2, 1),
nn.ReLU(True),
nn.Conv2d(N, N, 4, 2, 1),
nn.ReLU(True),
nn.Conv2d(N, 2*N, 4, 2, 1),
nn.ReLU(True),
nn.Conv2d(2*N, 2*N, 4, 2, 1),
nn.ReLU(True),
nn.Conv2d(2*N, 8*N, 4, 1),
nn.ReLU(True),
View((-1, 8*N*1*1)),
nn.Linear(8*N, 4),
)
def forward(self, x):
z = self.encoder(x)
return z
model_E = Encoder().cuda()
I tried to place torch.cuda.manual_seed_all(0) at different places in my code, but it seems like no matter where I put it, it doesn’t make a difference. How should I do it the right way? Thank you! |
st46418 | Add this at the starting of the first file you call (eg: main.py)
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True |
st46419 | Hi, everyone! I’m stuck with a silly problem. Check out my code for a toy example:
def train(model, optim, criterion, train_dataloader, val_dataloader, device, epochs):
train_acc_ = []
val_acc_ = []
for ep in range(1, epochs + 1):
model.train(True)
train_acc = 0
for x, y in tqdm(train_dataloader):
optim.zero_grad()
x = x.to(device)
y = y.to(device)
out = model(x)
loss = criterion(out, y)
loss.backward()
optim.step()
train_acc += torch.mean((y == torch.argmax(out, 1)).type(torch.float))
train_acc /= len(train_dataloader)
val_acc = 0
model.train(False)
with torch.no_grad():
for x, y in tqdm(val_dataloader):
x = x.to(device)
y = y.to(device)
out = model(x)
val_acc += torch.mean((y == torch.argmax(out, 1)).type(torch.float))
val_acc /= len(val_dataloader)
train_acc = train_acc.detach().cpu().item()
val_acc = val_acc.detach().cpu().item()
train_acc_.append(train_acc)
val_acc_.append(val_acc)
clear_output()
plt.plot(np.arange(ep), train_acc_, label=f"Train acc (last: {round(train_acc, 3)})")
plt.plot(np.arange(ep), val_acc_, label=f"Val acc (last: {round(val_acc, 3)})")
plt.legend()
plt.show()
return train_acc_, val_acc_
tr = torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(train_mean, train_std)
])
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model = torchvision.models.resnet18()
model.conv1 = torch.nn.Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
model.fc = torch.nn.Linear(512, 10)
model = model.to(device)
train_data = datasets.CIFAR10("./cifar10_data", train=True, transform=tr)
val_data = datasets.CIFAR10("./cifar10_data", train=False, transform=tr)
train_dataloader = torch.utils.data.DataLoader(train_data, batch_size=128)
val_dataloader = torch.utils.data.DataLoader(val_data, batch_size=256)
criterion = torch.nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters())
When I run this code, my accuracy plot looks like this (see screenshot).
But when I remove model.train(False) from my train function, everything is OK.
Where did I go wrong? |
st46420 | Try model.eval() instead of model.train(False).
Pardon me if I couldn’t understand your question well. |
st46421 | I’m not sure why you are getting a random validation accuracy, but your code seems to work at least for 3 epochs (I stopped it afterwards): |
st46422 | Hi all,
Can someone please help with the below error?
I have no idea what went wrong. Here are the error and a code link from my GitHub; please help me fix it.
github.com
Batmancity/Detection/blob/main/Code.py
from .imports import *
from .torch_imports import *
from .core import *
from .layer_optimizer import *
def cut_model(m, cut):
return list(m.children())[:cut] if cut else [m]
def predict_to_bcolz(m, gen, arr, workers=4):
arr.trim(len(arr))
lock=threading.Lock()
m.eval()
for x,*_ in tqdm(gen):
y = to_np(m(VV(x)).data)
with lock:
arr.append(y)
arr.flush()
def num_features(m):
c=children(m)
This file has been truncated. |
st46423 | As the error message suggests, you should use tensor.item() instead of tensor[0].
Also, don’t use the .data attribute, as it might yield unwanted side effects. |
st46424 | Hi everyone, suppose I have two tensors holding N RGB images of size 16x32, so:
Input_0 = N × 3 × 32 × 16
Input_1 = N × 3 × 32 × 16
now I want to iteratively concatenate these images (e.g. the first image of input_0 with all N images of input_1, the second image of input_0 with all N images of input_1, and so on…), in order to get an output like:
Output = N × N × 3 × 32 × 32
How can I do that? |
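One way to do this without a Python loop is to broadcast both tensors to N x N pairs and concatenate along the width dimension; a sketch assuming the two 16-pixel-wide images should be stitched side by side:

```python
import torch

N = 4
input_0 = torch.randn(N, 3, 32, 16)
input_1 = torch.randn(N, 3, 32, 16)

a = input_0.unsqueeze(1).expand(N, N, 3, 32, 16)  # repeat each image of input_0 N times
b = input_1.unsqueeze(0).expand(N, N, 3, 32, 16)  # repeat all of input_1 for each of them
out = torch.cat((a, b), dim=-1)                   # concatenate along width: 16 + 16 = 32
print(out.shape)                                  # torch.Size([4, 4, 3, 32, 32])
```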
st46425 | Hi, I am going through the beginners examples, and on the part of PyTorch: optim, it is not clear to me why the optimisation package receives all model parameters (including the Tensors that describe the states of the layers) instead of only the Tensors that describe the weights. Maybe I am missing a more fundamental concept, but it was my understanding that we just wanted to updated the weights.
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
Thank you! |
st46426 | model.parameters() will return all registered nn.Parameters not the arguments to layers.
You can check it via:
print(list(model.parameters()))
# or
print(dict(model.named_parameters())) |
st46427 | Hey,
I built a CNN-LSTM model to forecast the monthly demand of some product item in the future (1, 3 and 6 month in the future) based on the sales and order history + some chosen indicators.
(It is kind of a time series except for some dates I have multiple entries…)
The data consists of the sales history for many, many items from different product groups (for the last 7 years). It is a mix of categorical and continuous data - I included embedding layers in the preparation of the data to take care of the categorical features.
So far I trained and tested the NN with subsets of the main dataset, only containing information for some selected item and some shortened time frame (3-4 years).
This works quite well for the moment (just a little bit overfitting that needs to be taken care of).
Is there a possibility to train the same neural net on multiple different data sets, e.g. different items or different time frames and combine this “knowledge” to one model.
I can’t just feed the whole dataset to the model and adjust the output, because the data will be too big.
To make the input the same length, I had to pad my sequences, which makes the input data even bigger. And if the data is too big my computation crashes when I want to create the dataset that I need for my dataloaders. (all available ram is used…) |
st46428 | Username2:
And if the data is too big my computation crashes when I want to create the dataset that I need for my dataloaders. (all available ram is used…)
Would it be possible to lazily load the data, i.e. each call into Dataset.__getitem__ would load a single sample and the DataLoader would create the final batch? |
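A minimal sketch of such a lazily loading Dataset (the per-sample .pt files and their keys are assumptions; the point is that __getitem__ touches only one sample at a time):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class LazySequenceDataset(Dataset):
    def __init__(self, sample_paths):
        # only store the list of file paths, not the data itself
        self.sample_paths = sample_paths

    def __len__(self):
        return len(self.sample_paths)

    def __getitem__(self, index):
        # load a single padded sequence and its target from disk on demand
        sample = torch.load(self.sample_paths[index])  # assumed: one .pt file per sample
        return sample['x'], sample['y']

# the DataLoader then assembles batches from individual samples
# loader = DataLoader(LazySequenceDataset(paths), batch_size=32, num_workers=4)
```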
st46429 | I am new to PyTorch and I am trying to do semantic segmentation.
I am trying to do semantic segmentation with two classes - Edge and Non-Edge.
I have 224x224x3 images and 224x224 binary segmentation masks. I am reshaping the masks to be 224x224x1 (I read somewhere that this is the format that I should pass to the model). Here is a sample image and mask: https://imgur.com/IfAO2zv 3
I want to try whatever model, loss, and optimizer to proceed with the training. I am currently trying torchvision.models.segmentation.fcn_resnet50, which I found in the docs can be used for segmentation (I am not sure if I have to modify it or use it as it is).
I get the following errors when I try different loss functions:
BCELoss
AttributeError: 'collections.OrderedDict' object has no attribute 'size'
CrossEntropyLoss
AttributeError: 'collections.OrderedDict' object has no attribute 'log_softmax'
NLLLoss
AttributeError: 'collections.OrderedDict' object has no attribute 'dim'
Here is the code:
roof_edges_dataset.py
import os
import cv2
from torch.utils.data import Dataset
from torchvision.transforms import transforms
from utils import create_binary_mask, get_labelme_shapes, plot_segmentation_dataset
class RoofEdgesDataset(Dataset):
def __init__(self, im_path, ann_path, transform=None):
self.im_path = im_path
self.ann_path = ann_path
self.transform = transform
self.im_fn_list = sorted(os.listdir(im_path), key=lambda x: int(x.split('.')[0]))
self.ann_fn_list = sorted(os.listdir(ann_path), key=lambda x: int(x.split('.')[0]))
def __len__(self):
return len(self.im_fn_list)
def __getitem__(self, index):
im_path = os.path.join(self.im_path, self.im_fn_list[index])
im = cv2.imread(im_path)
ann_path = os.path.join(self.ann_path, self.ann_fn_list[index])
ann = create_binary_mask(im, get_labelme_shapes(ann_path))
ann = ann.reshape(ann.shape[0], ann.shape[1], 1)
ann = transforms.ToTensor()(ann)
if self.transform:
im = self.transform(im)
return im, ann
main.py
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
from torch import optim
from torch.utils.data import DataLoader
from roof_edges_dataset import RoofEdgesDataset
# Device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Hyperparameters
in_im_shape = (3, 224, 224)
num_classes = 2 # Edge / Non-edge
learning_rate = 0.001
batch_size = 4
n_epochs = 10
# Data - 60% Train - 20% Val - 20% Test
transformations = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
dataset = RoofEdgesDataset(im_path='data/images', ann_path='data/annotations', transform=transformations)
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [train_size, test_size])
train_size = int(0.8 * len(train_dataset))
val_size = len(train_dataset) - train_size
train_dataset, val_dataset = torch.utils.data.random_split(train_dataset, [train_size, val_size])
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
val_loader = DataLoader(dataset=val_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=True)
# Model
model = torchvision.models.segmentation.fcn_resnet50(pretrained=False, progress=True, num_classes=2)
model.to(device)
print(model)
# Loss and Optimizer
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
# Train
for epoch in range(n_epochs):
for batch_idx, (image, annotation) in enumerate(train_loader):
image = image.to(device=device)
annotation = annotation.to(device=device)
# forward
output = model(image)
loss = criterion(output, annotation)
# backward
optimizer.zero_grad()
loss.backward()
# gradient descent (adam step)
optimizer.step()
if (batch_idx + 1) % 2 == 0:
print(
f'Epoch [{epoch + 1}/{n_epochs}], Step [{batch_idx + 1}/{len(train_loader)}], Loss: {loss.item():.4f}')
# Evaluate
How should I proceed? What am I doing wrong? How can I fix it?
Also, any examples, guides, tutorials, references, and everything that will help me solve my issue and understand the topic better is welcome. |
st46430 | Solved by ptrblck in post #2
The model output is an OrderedDict, while tensors are expected in the loss functions.
Use ['out'] to get the class logits:
model = models.segmentation.fcn_resnet50()
output = model(torch.randn(1, 3, 224, 224))
print(output['out'].shape)
> torch.Size([1, 21, 224, 224])
Also note that PyTorch uses … |
st46431 | The model output is an OrderedDict, while tensors are expected in the loss functions.
Use ['out'] to get the class logits:
model = models.segmentation.fcn_resnet50()
output = model(torch.randn(1, 3, 224, 224))
print(output['out'].shape)
> torch.Size([1, 21, 224, 224])
Also note that PyTorch uses “channels-first” tensors, so the class and channel dimensions should be in dim1. |
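Building on that, the loss call could then look roughly like the sketch below; for nn.CrossEntropyLoss the target should be a LongTensor of class indices with shape [N, H, W], i.e. without the extra channel dimension (shapes and class ids follow the posted use case):

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.segmentation.fcn_resnet50(num_classes=2)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)
masks = torch.randint(0, 2, (4, 224, 224))  # class indices: 0 = non-edge, 1 = edge

logits = model(images)['out']               # [4, 2, 224, 224]
loss = criterion(logits, masks)
```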
st46432 | I get the error described above when attempting to perform batch norm on my network
The isolated code for the section that fails to run is
print(layer_f)
print(layer_b)
x = self.af(layer_f(x))
print(x.shape)
x = layer_b(x)
The print out for the objects are
Linear(in_features=84, out_features=1024, bias=True)
BatchNorm1d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
torch.Size([11, 32, 1024])
From what I understand I am passing my 11,32,84 tensor into the linear layer to get a 11,32,1024 tensor then pushing that tensor through to a batch norm layer of size 1024. I do not understand what has gone wrong here?
The input dimensions are [Batch,channel,element] and the number of channels changes for each input. |
st46433 | As you’ve already described the number of channels in the input activation to the batchnorm layer doesn’t match the expected channels, so you would have to permute the activation via:
x = x.permute(0, 2, 1)
before passing it to layer_b. |
st46434 | If I do this wouldn’t things be normalized with respect to the channels rather than to the features or have I misunderstood something? |
st46435 | Permuting the output of the linear layer would assign the out_features dimension to dim1, which is the “channels” dimension in batchnorm layers.
I assumed this is the expected use case, since @Michael_Moran defined the size of these dimensions both as 1024. dim1 in the original output of the linear layer (size of 32) is the “additional” dimension and is often used e.g. for the temporal dimension.
If this approach is correct or not depends on the use case and might be wrong for you so you would have to explain a bit what you are trying to achieve. |
st46436 | In the end I got the desired behaviour with the following snippet:
i,j,k = x.shape
x = x.view(i*j,k)
x = batch_layer(x)
x = x.view(i,j,k)
This seems to have worked. The use case is that I have a minibatch of sets / unordered sequences, and I wanted the embedding of each token in these sets / sequences to be normalized in the same way with the same parameters.
This isn’t an NLP task, but the language is appropriate, so I will use it.
I wanted to normalize each word token consistently with the same parameters. In the context of the normalization, the batch of sentences (each sentence containing words) was abstracted down to just being a large batch of words. I was unsure how BatchNorm1d would actually handle things with the permutation solution.
I apologise if I’m unclear as I’m unsure how batch_norm1d behaves with 3d tensors. |
st46437 | I created an embedding, and passing out-of-range index tensors raises an error on the CPU, but on the GPU it returns a tensor (all except the first embedding keep changing with every call, and even the first tensor is not equal to embedding.weight).
Torch version = 1.5.0
a = torch.nn.Embedding(1,768)
>>> a(torch.LongTensor([1,2,3]))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<path>/envs/vl-bert/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "<path>/envs/vl-bert/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "<path>/vl-bert/lib/python3.6/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
a.cuda()(torch.LongTensor([1,2,3]).cuda())
tensor([[1.4013e-45, 0.0000e+00, 2.8026e-45, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 0.0000e+00, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00]], device='cuda:0', grad_fn=<EmbeddingBackward>)
a.cuda()(torch.LongTensor([1,2,3]).cuda())
tensor([[ 1.4013e-45, 0.0000e+00, 2.8026e-45, ..., 1.0869e+00,
-1.7131e+00, -6.9908e-01],
[-5.6997e-01, 1.6486e+00, 1.7096e+00, ..., 1.0869e+00,
1.7131e+00, 6.9908e-01],
[ 5.6996e-01, 1.6486e+00, 1.7096e+00, ..., 1.7427e+00,
2.0000e+00, 1.7774e+00]], device='cuda:0',
grad_fn=<EmbeddingBackward>)
a.cuda()(torch.LongTensor([1,2,3]).cuda())
tensor([[1.4013e-45, 0.0000e+00, 2.8026e-45, ..., 0.0000e+00, 0.0000e+00,
0.0000e+00],
[0.0000e+00, 2.0000e+00, 0.0000e+00, ..., 1.0869e+00, 1.7131e+00,
6.9908e-01],
[6.5202e+06, 4.3207e+00, 8.6057e-02, ..., 1.7427e+00, 2.0000e+00,
1.7774e+00]], device='cuda:0', grad_fn=<EmbeddingBackward>)
>>> a.weight
Parameter containing:
tensor([[ 0.7804, 1.5051, 0.0861, -0.9269, -0.8105, -2.7018, -1.2860, -0.4517,
0.6019, 1.2832, 2.1942, 0.3216, 1.9599, 0.8146, 0.0085, 0.6976,
1.9618, 0.0783, 1.3515, 0.8830, 0.8101, -2.4665, 2.6164, 1.1543,
-0.8128, -0.9217, 1.3534, -0.3387, 0.1712, 1.1185, -0.5681, 0.2406,
1.8387, 0.7704, 1.6712, 0.4060, -1.2792, -0.3 |
st46438 | Solved by ptrblck in post #2
Some CUDA assert statements were accidentally disabled in PyTorch 1.5.0, so you should update to the latest stable version, as it was fixed in 1.5.1. |
st46439 | Some CUDA assert statements were accidentally disabled in PyTorch 1.5.0, so you should update to the latest stable version, as it was fixed in 1.5.1. |
st46440 | I’m working on a detection problem with Cascade R-CNN in detectron2.
I’m testing the augmentation right now.
T.RandomFlip(prob=0.5, horizontal=True, vertical=False),
T.ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice')
With these two options there is no problem, but when I add
T.RandomCrop('relative_range', (0.4, 0.5))
the error occurs:
FloatingPointError: Loss became infinite or NaN at iteration=1!
loss_dict = {'loss_cls_stage0': 1.613979458808899, 'loss_box_reg_stage0': 0.011088866740465164, 'loss_cls_stage1': 1.668488621711731, 'loss_box_reg_stage1': 0.00575869157910347, 'loss_cls_stage2': 1.4664241075515747, 'loss_box_reg_stage2': 0.0028667401056736708, 'loss_rpn_cls': 1.469098448753357, 'loss_rpn_loc': inf}
[11/21 06:53:37 d2.engine.hooks]: Total training time: 0:00:00 (0:00:00 on hooks)
[11/21 06:53:37 d2.utils.events]: iter: 1 total_loss: 4.9 loss_cls_stage0: 1.609 loss_box_reg_stage0: 0.005077 loss_cls_stage1: 1.685 loss_box_reg_stage1: 0.002878 loss_cls_stage2: 1.464 loss_box_reg_stage2: 0.002352 loss_rpn_cls: 0.09238 loss_rpn_loc: 0.03937 data_time: 0.1260 lr: 2e-05 max_mem: 5386M
My initial parameter values are as follows:
cfg.DATALOADER.NUM_WORKERS = 4
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("Misc/cascade_mask_rcnn_R_50_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 6#6
cfg.SOLVER.MAX_ITER = 15000 # Need to train longer
#cfg.SOLVER.CHECKPOINT_PERIOD = 1000
cfg.MODEL.RESNETS.DEPTH = 50 #RESNET 50,101
#RESNEXT parameters
cfg.SOLVER.CHECKPOINT_PERIOD = 200
cfg.MODEL.ROI_HEADS.NAME= "CascadeROIHeads"
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4 # 4 classes
cfg.MODEL.MASK_ON = False
cfg.OUTPUT_DIR = save_path
cfg.SOLVER.LR_SCHEDULER_NAME = 'WarmupCosineLR'
cfg.SOLVER.BASE_LS = 0.0001 |
st46441 | I want to use dropout on input1 and input2 (they are both tensors with the same size), but I want it to be the same dropout for both of them, meaning that it zeros (drops) the same elements in both. How can I do that? |
st46443 | I see, thank you. Would it be possible to see which elements the dropout will zero out, if we know the size of the tensor?
I mean, I could check the output tensor to see where it is zero, but that is kind of risky (if the tensor already has zero elements, that would cause an issue, though it is not too difficult to check, I guess).
If I can get the mask from the dropout, then the problem is solved. |
st46444 | I think you can use your mentioned approach, but I would rather sample the mask manually instead of checking the input and output for zeros. |
st46445 | Just to confirm, is this the right way of creating the mask?
Let’s say I want to do dropout with p = 0.1:
input1 = torch.rand(5,2,2)
mask = torch.bernoulli(input1.data.new(input1.data.size()).fill_(1-p))
input_after_dropout = mask*input1 |
st46446 | torch.bernoulli expects an input tensor containing the probabilities of drawing a 1 value. So depending on how p is defined, your code should be correct.
Note that you would have to take care of the scaling in dropout layers (either the inverse scaling during training or the vanilla scaling during evaluation).
To switch the behavior between training and evaluation you could create a custom nn.Module and use the internal self.training flag to switch between the behaviors.
The self.training flag will be changed through calls into model.train() and model.eval(). |
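Putting the pieces together, a hedged sketch of such a custom module that samples one shared mask for both inputs and uses self.training for the train/eval switch (it assumes both inputs have the same shape):

```python
import torch
import torch.nn as nn

class SharedDropout(nn.Module):
    def __init__(self, p=0.1):
        super().__init__()
        self.p = p

    def forward(self, input1, input2):
        if not self.training or self.p == 0.0:
            # evaluation mode: inverted dropout needs no scaling here
            return input1, input2
        # sample one keep-mask and reuse it for both tensors (inverted dropout scaling)
        keep_prob = 1.0 - self.p
        mask = torch.bernoulli(torch.full_like(input1, keep_prob)) / keep_prob
        return input1 * mask, input2 * mask
```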
st46447 | I am trying to build libtorch like so:
USE_CUDA=0 cmake -DBUILD_SHARED_LIBS:BOOL=ON -DCMAKE_BUILD_TYPE:STRING=Release -DPYTHON_EXECUTABLE:PATH=`which python3` -DCMAKE_INSTALL_PREFIX:PATH=../pytorch-install ../pytorch &&\
USE_CUDA=0 cmake --build . --target install
However, I see that it is still building with CUDA; I’m not sure why this is… |
st46448 | Solved by albanD in post #2
Hi,
Why are you building with CMake directly? If you want to do a python install, you should be using setup.py and the USE_CUDA flag will be properly picked up in that case.
If you are trying to do a libtorch install (I am not sure what is the process supposed to be), it might be that the flag is … |
st46449 | Hi,
Why are you building with CMake directly? If you want to do a python install, you should be using setup.py and the USE_CUDA flag will be properly picked up in that case.
If you are trying to do a libtorch install (I am not sure what is the process supposed to be), it might be that the flag is not the same for it. |
st46450 | My goal is to build a minimalist CPU-only build of libtorch.
I now see what you mean that I can use Python to build libtorch:
github.com
pytorch/pytorch/blob/master/docs/libtorch.rst 4
libtorch (C++-only)
===================
The core of pytorch does not depend on Python. A
CMake-based build system compiles the C++ source code into a shared
object, libtorch.so.
Building libtorch using Python
------------------------------
You can use a python script/module located in tools package to build libtorch
::
cd <pytorch_root>
# Make a new folder to build in to avoid polluting the source directories
mkdir build_libtorch && cd build_libtorch
# You might need to export some required environment variables here.
# Normally setup.py sets good default env variables, but you'll have to do
# that manually.
This file has been truncated.
but I think the problem is that I am not sure if I can link to it from other C++ programs.
Anyway, I will try it now. |
st46451 | Based on the documentation, tensor.argmax should return the index of the first occurrence of multiple largest values.
But when the tensor contains all the same values, it returns the index of the last value.
For example, torch.tensor([0, 0, 0, 0]).argmax(dim=0) outputs tensor(3). Shouldn’t it be 0? |
st46452 | Solved by albanD in post #2
Hi,
Which version of pytorch are you using?
This was fixed in 1.7+ versions only IIRC |
st46453 | Hi,
Which version of pytorch are you using?
This was fixed in 1.7+ versions only IIRC |
st46454 | Suppose we have 2 minibatches (each with 10 data points). When dropout is turned on for the forward pass of the first minibatch, a dropout mask with dimension 10 is generated. What if we want to use the same mask for the second batch of data? |
st46455 | You can do this dropout operation yourself, instead of using nn.Dropout.
You can generate a Bernoulli mask using torch.bernoulli and then multiply both mini-batches by the same mask.
For example:
# generate a mask of same shape as input1
mask = Variable(torch.bernoulli(input1.data.new(input1.data.size()).fill_(0.5)))
output1 = input1 * mask
output2 = input2 * mask |
st46456 | Is it correct to rescale the mask to output the same magnitude in the following way?
mask = Variable(torch.bernoulli(input1.data.new(input1.data.size()).fill_(0.4)))/0.6 |
st46457 | For future readers I would like to mention that the rescaling is not correct.
Please note that the Bernoulli distribution samples 0 with the probability (1-p), contrary to dropout implementations, which sample 0 with probability p.
Therefore, if you want dropout with p=0.4, the mask has to be
mask = Bernoulli(torch.full_like(input1, 0.6)).sample()/0.6
For dropout with p=0.6 the mask is
mask = Bernoulli(torch.full_like(input1, 0.4)).sample()/0.4 |
st46458 | Hi,
I am trying to wrap DistributedDataParallel() with the Transformer model.
But, I am facing the below error
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class ‘torchtext.data.example.Example’>
Code:
def train_1(args):
#init_process_group
#rank = args.nr * args.gpus + gpu
rank = int(os.environ['LOCAL_RANK'])
gpu = torch.device(f'cuda:{rank}')
torch.distributed.init_process_group(backend='nccl', init_method='env://')
torch.cuda.set_device(gpu)
TEXT = torchtext.data.Field(tokenize=get_tokenizer("basic_english"),
init_token='<sos>',
eos_token='<eos>',
lower=True)
train_txt, val_txt, test_txt = torchtext.datasets.WikiText2.splits(TEXT)
TEXT.build_vocab(train_txt)
batch_size = 20
eval_batch_size = 10
sampler = torch.utils.data.distributed.DistributedSampler(train_txt);
loader = torch.utils.data.DataLoader(train_txt, shuffle=(sampler is None), sampler=sampler)
bptt = 35
ntokens = len(TEXT.vocab.stoi) # the size of vocabulary
emsize = 200 # embedding dimension
nhid = 200 # the dimension of the feedforward network model in nn.TransformerEncoder
nlayers = 2 # the number of nn.TransformerEncoderLayer in nn.TransformerEncoder
nhead = 2 # the number of heads in the multiheadattention models
dropout = 0.2 # the dropout value
model = TransformerModel(ntokens, emsize, nhead, nhid, nlayers, dropout).to(gpu)
#DDP
model = nn.parallel.DistributedDataParallel(model,device_ids=[gpu])
criterion = nn.CrossEntropyLoss()
lr = 5.0 # learning rate
optimizer = torch.optim.SGD(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.95)
def train():
model.train() # Turn on the train mode
total_loss = 0.
start_time = time.time()
ntokens = len(TEXT.vocab.stoi)
for i, (data, targets) in enumerate(loader):
optimizer.zero_grad()
output = model(data)
loss = criterion(output.view(-1, ntokens), targets)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
total_loss += loss.item()
log_interval = 200
if batch % log_interval == 0 and batch > 0:
cur_loss = total_loss / log_interval
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches | '
'lr {:02.2f} | ms/batch {:5.2f} | '
'loss {:5.2f} | ppl {:8.2f} | device{:3d}'.format(
epoch, batch, len(train_data) // bptt, scheduler.get_lr()[0],
elapsed * 1000 / log_interval,
cur_loss, math.exp(cur_loss), torch.cuda.current_device()))
total_loss = 0
start_time = time.time() |
st46459 | Please share the full error log. Most likely this error is not due to DistributedDataParallel. |
st46460 | Traceback (most recent call last):
File “standard_ddp_7.py”, line 219, in
main()
File “standard_ddp_7.py”, line 216, in main
train_1(args)
File “standard_ddp_7.py”, line 174, in train_1
train()
File “standard_ddp_7.py”, line 127, in train
for i, (data, targets) in enumerate(loader):
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py”, line 435, in next
data = self._next_data()
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py”, line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py”, line 47, in fetch
return self.collate_fn(data)
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py”, line 85, in default_collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class ‘torchtext.data.example.Example’>
Traceback (most recent call last):
File “standard_ddp_7.py”, line 219, in
main()
File “standard_ddp_7.py”, line 216, in main
train_1(args)
File “standard_ddp_7.py”, line 174, in train_1
train()
File “standard_ddp_7.py”, line 127, in train
for i, (data, targets) in enumerate(loader):
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py”, line 435, in next
data = self._next_data()
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/dataloader.py”, line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py”, line 47, in fetch
return self.collate_fn(data)
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py”, line 85, in default_collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class ‘torchtext.data.example.Example’>
Traceback (most recent call last):
File “/home/nadaf/anaconda3/lib/python3.8/runpy.py”, line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File “/home/nadaf/anaconda3/lib/python3.8/runpy.py”, line 87, in _run_code
exec(code, run_globals)
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/distributed/launch.py”, line 260, in
main()
File “/home/nadaf/anaconda3/lib/python3.8/site-packages/torch/distributed/launch.py”, line 255, in main
raise subprocess.CalledProcessError(returncode=process.returncode,
subprocess.CalledProcessError: Command ‘[’/home/nadaf/anaconda3/bin/python’, ‘-u’, ‘standard_ddp_7.py’]’ returned non-zero exit status 1. |
st46461 | gnadaf:
for i, (data, targets) in enumerate(loader):
optimizer.zero_grad()
output = model(data)
....
Try modifying this to:
for i, batch in enumerate(loader):
data, targets = batch.text, batch.target
optimizer.zero_grad()
output = model(data)
.... |
st46462 | Abhilash_Srivastava:
Try modifying this to:
@Abhilash_Srivastava
I modified the code as you suggested, but still facing the same error |
st46463 | The error doesn’t seem to be related to DDP, but the DataLoader and torchtext:
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class ‘torchtext.data.example.Example’>
I’m unsure how the torchtext.data.example.Example class is implemented, but could you try to return tensors in the Dataset.__getitem__ instead of this object? |
st46464 | I am loading the dataset like this:
train_txt, val_txt, test_txt = torchtext.datasets.WikiText2.splits(TEXT) |
st46465 | @Abhilash_Srivastava @ptrblck
Still facing the same error:
I do not see any online example where we wrap the transformer model with DDP.
Transformer model:
pytorch.org
Sequence-to-Sequence Modeling with nn.Transformer and TorchText — PyTorch... |
st46466 | Are you able to run and train your model successfully (say for a smaller dataset) without DDP? |
st46467 | I want to wrap all of my convolution parameters (.weight and .bias) with a sigmoid function which I want to make part of the graph i.e. sigmoid’s should also be part of the backprop. In other words, weight and bias values should never be out of range [0, 1].
What is the suggested way of doing this?
Thanks. |
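As far as I know there is no built-in constraint API like in Keras, but recent PyTorch versions (1.9+) provide torch.nn.utils.parametrize, which keeps the wrapping function inside the autograd graph. A sketch that constrains every conv layer's weight and bias to (0, 1) via a sigmoid (the tiny model is a placeholder, and whether a sigmoid constraint is sensible for your network is a separate question):

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Sigmoid01(nn.Module):
    # the stored parameter stays unconstrained; the layer sees sigmoid(param)
    def forward(self, w):
        return torch.sigmoid(w)

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))

for module in list(model.modules()):
    if isinstance(module, nn.Conv2d):
        parametrize.register_parametrization(module, "weight", Sigmoid01())
        if module.bias is not None:
            parametrize.register_parametrization(module, "bias", Sigmoid01())

# gradients now flow through the sigmoid into the underlying parameters
out = model(torch.randn(1, 3, 16, 16))
out.sum().backward()
```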