instruction | input | output | source
---|---|---|---|
Dealing with large slowdown when moving PyTorch code to GPU | I have a Graph Neural Network model I have written using PyTorch. On my CPU I am not getting fantastic performance, so I tried to port it over to a V100 GPU I have access to. In this process, I have received a huge performance decrease (around 10 times slower).
I have two ideas of where the issue might be, but I would like some input to try to get the optimum performance from my model. The first problem might be coming from my custom graph convolutional layer:
class GraphConvLayer(torch.nn.Module):
"""
Based, basically, on https://arxiv.org/abs/1609.02907
Have some modifications:
https://towardsdatascience.com/how-to-do-deep-learning-on-graphs-with-graph-convolutional-networks-7d2250723780
This helped:
https://pytorch.org/docs/master/notes/extending.html
"""
def __init__(self, input_features, output_features, device, bias=True):
super(GraphConvLayer, self).__init__()
self.input_features = input_features
self.output_features = output_features
self.device = device
self.weight = nn.Parameter(torch.FloatTensor(self.input_features, self.output_features))
if bias:
self.bias = nn.Parameter(torch.FloatTensor(self.output_features))
else:
self.register_parameter('bias', None)
# Not a very smart way to initialize weights
self.weight.data.uniform_(-0.1, 0.1)
if bias:
self.bias.data.uniform_(-0.1, 0.1)
def forward(self,input, adj):
# Here, we put in the forward pass:
# Our forward pass needs to be:
# D^-1 * (A + 1) * X * weights
input, adj = input.float(), adj.float()
Identity = torch.eye( len(adj[0]), device = self.device)
A_hat = adj + Identity
D = torch.sum(A_hat, dim=0)
len_D = len(D)
zero = torch.zeros(len_D,len_D, device = self.device)
mask = torch.diag(torch.ones_like(D, device = self.device))
D = mask*torch.diag(D) + (1. - mask)*zero
D_inv = torch.inverse(D)
out = torch.mm(input, self.weight)
out = torch.spmm(A_hat,out)
out = torch.spmm(D_inv, out)
if self.bias is not None:
return out + self.bias
else:
return out
def extra_repr(self):
# (Optional)Set the extra information about this module. You can test
# it by printing an object of this class.
return 'node_features={}, length of weights={}, bias={}'.format(
self.node_features, self.input_features, self.bias is not None
)
Specifically, in the forward pass I am doing a selection of transformations that are described in the towardsdatascience link in the class. Is there something here that is causing this large slow-down? It seems to me that the tensors are all being initialised on the GPU.
Secondly, as all my graphs are different sizes I am being forced into using a batch size of 1. In my training loop I have this:
for batch in tqdm(train_loader):
opt.zero_grad()
adjacency, features, _, nodes = batch
adjacency = adjacency.to(device)
features = features.to(device)
nodes = nodes.to(device)
output = model(features[0], adjacency[0])
loss = F.nll_loss(output, nodes[0])
loss.backward()
opt.step()
This means (as I interpret it) that every single piece of data is being moved to the GPU individually, every loop. This seems like an obvious cause of inefficiency. Is there a way to move all the data into GPU memory at once, outside the training loop, allowing me to remove the adjacency = adjacency.to(device) lines?
Any help would be really appreciated.
| Your problem is almost guaranteed to be bound by the memory movement to the GPU, especially since you mention your single-sample batches.
The only thing that may help you speed up the current implementation is to look into memory maps, but we cannot tell from the provided code whether you are already using them.
Other than that, even with differently sized adjacency matrices, padding might be a valid strategy if you manage to sort your batches into groups of roughly equal size.
Your forward() function is also clearly not optimized and might be able to deliver some speedup, but I would expect optimization towards better batching to yield a much greater improvement.
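As a rough sketch of the padding idea (the collate function and tensor names here are assumptions of mine, not from the original post), you could pad every adjacency and feature matrix in a batch up to the largest graph and stack them:
import torch
import torch.nn.functional as F

def pad_collate(batch, max_nodes):
    # batch: list of (adjacency, features, _, nodes) tuples with varying graph sizes
    adjs, feats = [], []
    for adjacency, features, _, nodes in batch:
        pad = max_nodes - adjacency.shape[0]
        adjs.append(F.pad(adjacency, (0, pad, 0, pad)))  # pad rows and columns of the adjacency matrix
        feats.append(F.pad(features, (0, 0, 0, pad)))    # pad only the node dimension of the features
    return torch.stack(adjs), torch.stack(feats)
Once graphs share a common size, a whole batch can be pushed to the GPU in one transfer instead of one sample per loop iteration.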
| https://stackoverflow.com/questions/60189355/ |
Get the value of '[UNK]' in BERT | I have designed a model based on BERT to solve an NER task. I am using the transformers library with the "dccuchile/bert-base-spanish-wwm-cased" pre-trained model. The problem comes when my model detects an entity but the token is '[UNK]'. How can I know which string is behind that token?
I know that an unknown token can't be reverted to the original one, but I would like to at least capture those values before passing the inputs to the model.
The code is really simple:
sentenceIds = tokenizer.encode(sentence,add_special_tokens = True)
inputs = pad_sequences([sentenceIds], maxlen=256, dtype="long",
value=0, truncating="post", padding="post")
att_mask = torch.tensor([[int(token_id > 0) for token_id in inputs[0]]]).to(device)
inputs = torch.tensor(inputs).to(device)
with torch.no_grad():
outputs = model(inputs,
token_type_ids=None,
attention_mask=att_mask)
As you can see, it is really simple: just tokenizing, padding or truncating, creating the attention mask and calling the model.
I have tried using regex, trying to find the two tokens that are around it and things like that, but I can't solve it properly.
| The tokenizer works in two steps. First, it does pre-tokenization, which is basically splitting on spaces and separating punctuation. Let's have a look at it on a random Czech sentence:
tokenizer.basic_tokenizer.tokenize("Kočka leze dírou.")
This gives you: ['kocka', 'leze', 'dirou', '.']
In the second step, it applies the word piece splitting algorithm, so you get:
tokenizer.tokenize("Kočka leze dírou.")
You get: ['[UNK]', 'le', '##ze', 'di', '##ro', '##u', '.']
If there is no way to split the token into subwords, the whole word becomes [UNK]. Tokens starting with ## get appended to the previous ones, so this way you can find out where the [UNK] originally came from.
(And it seems weird to me that a Spanish WordPiece tokenizer cannot parse a word that only consists of Latin characters.)
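A minimal sketch of that idea (assuming a transformers BertTokenizer, which exposes basic_tokenizer and wordpiece_tokenizer as used above): pre-tokenize the sentence yourself and record which words collapse to [UNK]:
words = tokenizer.basic_tokenizer.tokenize(sentence)
unk_words = [w for w in words
             if tokenizer.wordpiece_tokenizer.tokenize(w) == ['[UNK]']]
print(unk_words)  # the original strings the model will only ever see as [UNK]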
| https://stackoverflow.com/questions/60192523/ |
How to make early stopping in image classification pytorch | I'm new to PyTorch and machine learning. I'm following this tutorial: https://www.learnopencv.com/image-classification-using-transfer-learning-in-pytorch/ and using my own custom dataset. I have the same problem as in this tutorial, but I don't know how to make early stopping in PyTorch, and if you have a better way without creating an early stopping process, please tell me.
| This is what I did in each epoch
val_loss += loss
val_loss = val_loss / len(trainloader)
if val_loss < min_val_loss:
    # Saving the model
    if min_loss > loss.item():
        min_loss = loss.item()
        best_model = copy.deepcopy(loaded_model.state_dict())
        print('Min loss %0.2f' % min_loss)
    epochs_no_improve = 0
    min_val_loss = val_loss
else:
    epochs_no_improve += 1
    # Check early stopping condition
    if epochs_no_improve == n_epochs_stop:
        print('Early stopping!')
        loaded_model.load_state_dict(best_model)
Don't know how correct it is (I took most parts of this code from a post on another website, but forgot where, so I can't put the reference link; I have just modified it a bit). Hope you find it useful; in case I'm wrong, kindly point out the mistake. Thank you.
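For reference, a compact sketch of the same idea wrapped in a small helper class (the names are my own, not from the tutorial):
class EarlyStopper:
    def __init__(self, patience=5):
        self.patience = patience
        self.best_loss = float('inf')
        self.counter = 0

    def step(self, val_loss):
        # returns True when training should stop
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.counter = 0
            return False
        self.counter += 1
        return self.counter >= self.patience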
| https://stackoverflow.com/questions/60200088/ |
Subsetting A Pytorch Tensor Using Square-Brackets | I came across a line of code used to reduce a 3D Tensor to a 2D Tensor in PyTorch. The 3D tensor x is of size torch.Size([500, 50, 1]) and this line of code:
x = x[lengths - 1, range(len(lengths))]
was used to reduce x to a 2D tensor of size torch.Size([50, 1]). lengths is also a tensor of shape torch.Size([50]) containing values.
Please can anyone explain how this works? Thank you.
| After being quite stumped by the behavior, I did some more digging into this, and found that it is consistent behavior with the indexing of multi-dimensional NumPy arrays. What makes this counter-intuitive is the less obvious fact that both arrays have to have the same length, i.e. in this case len(lengths).
In fact, it works as the following:
* lengths is determining the order in which you access the first dimension. I.e., if you have a 1D array a = [0, 1, 2, ...., 500] and access it with the list b = [300, 200, 100], then the result is a[b] = [300, 200, 100] (this also explains the lengths - 1 operator, which simply turns each length into the index of the last element along that dimension).
* range(len(lengths)) then simply chooses the i-th element in the i-th row. If you have a square matrix, you can interpret this as the diagonal of the matrix. Since you only access a single element for each position along the first two dimensions, this can be stored in a single dimension (thus reducing your 3D tensor to 2D). The latter dimension is simply kept "as is".
If you want to play around with this, I strongly recommend to change the range() value to something longer/shorter, which will result in the following error:
IndexError: shape mismatch: indexing arrays could not be broadcast
together with shapes (x,) (y,)
where x and y are your specific length values.
To write this accessing method out in the long form to understand what happens "under the hood", also consider the below example:
import torch

x = torch.randn(500, 50, 1)
lengths = torch.tensor([2, 30, 1, 4])  # random examples to explore
diag = list(range(len(lengths)))       # [0, 1, 2, 3]
result = []
for i, row in enumerate(lengths):
    temp_tensor = x[row, :, :]          # temp_tensor.shape = [50, 1]
    temp_tensor = temp_tensor[diag[i]]  # temp_tensor.shape = [1]
    result.append(temp_tensor)
# back to pytorch
result = torch.stack(result)
result.shape  # [4, 1]
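For completeness, a vectorized equivalent of the original one-liner using torch.gather (just a sketch, with the shapes from the question: x is [500, 50, 1], lengths is [50]):
out = x.gather(0, (lengths - 1).view(1, -1, 1)).squeeze(0)  # out.shape = [50, 1], same as x[lengths - 1, range(len(lengths))]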
| https://stackoverflow.com/questions/60201895/ |
The result is different when I apply torch.manual_seed before loading cuda() after loading the model | I tried to make sure my code is reproducible (always gets the same results),
so I applied the settings below before my code:
os.environ['PYTHONHASHSEED'] = str(args.seed)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
torch.cuda.manual_seed_all(args.seed) # if you are using multi-GPU.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
With these settings, I always achieved the same results with the same environment and GPU.
However, when I applied torch.manual_seed() after loading the model:
torch.manual_seed(args.seed)
model = Net()
Net.cuda()
torch.manual_seed(args.seed)
model = Net()
torch.manual_seed(args.seed)
Net.cuda()
The above two results were different.
How should I understand this situation?
Does seed reinitialize after loading the model?
| The Net.cuda() has no effect on the random number generator. Under the hood it just calls cuda() for all of the model parameters. So basically it's multiple calls to Tensor.cuda().
https://github.com/pytorch/pytorch/blob/ecd3c252b4da3056797f8a505c9ebe8d68db55c4/torch/nn/modules/module.py#L293
We can test this by doing the following:
torch.random.manual_seed(42)
x = torch.rand(1)
x.cuda()
y = torch.rand(1)
y.cuda()
print(x, y)
# the above prints the same as below
torch.random.manual_seed(42)
print(torch.rand(1), torch.rand(1))
So that means Net() is using the number generator to initialize random weights within the layers.
torch.manual_seed(args.seed)
model = Net()
print(torch.rand(1))
# the below will print a different result
torch.manual_seed(args.seed)
model = Net()
torch.manual_seed(args.seed)
print(torch.rand(1))
I would recommend narrowing the scope of how random numbers are managed within your Python source code, so that a global block of code outside of the model isn't responsible for how internal values are generated.
Simply said, pass the seed as a parameter to the __init__ of the model.
model = Net(args.seed)
print(torch.rand(1))
This will force developers to always provide a seed for consistency when using the model, and you can make the parameter optional if seeding isn't always necessary.
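A minimal sketch of what such a constructor could look like (the layer here is just a placeholder, not from the original post):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self, seed=None):
        super().__init__()
        if seed is not None:
            torch.manual_seed(seed)  # seed before creating layers so the initial weights are reproducible
        self.fc = nn.Linear(10, 2)   # placeholder layer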
I'd avoid using the same seed all the time, because you're going to learn to use parameters that work best with that seed.
| https://stackoverflow.com/questions/60221715/ |
Huggingface Transformers ByteLevelBPETokenizer tokenizer not found | I'm trying to run through the (new) tutorial here: https://huggingface.co/blog/how-to-train, but hit an error trying to load the ByteLevelBPETokenizer. I started from an existing conda env and also tried with a totally fresh env, but both give the same error:
Exception has occurred: ImportError
cannot import name 'ByteLevelBPETokenizer' from 'tokenizers' (/home/james/anaconda3/envs/torch/lib/python3.7/site-packages/tokenizers/__init__.py)
Any thoughts as to what might be wrong?
I'm on Ubuntu 18.04, Python 3.7
| Okay, turns out the transformers installer pulls an older version (0.0.11). So...
pip uninstall tokenizers
pip install tokenizers==0.4.2
...fixes it.
It does issue a warning: ERROR: transformers 2.4.1 has requirement tokenizers==0.0.11, but you'll have tokenizers 0.4.2 which is incompatible., but this can safely be ignored (this answer came from @julien-c at huggingface/tokenizers).
| https://stackoverflow.com/questions/60244001/ |
No such operator torchvision::nms | When I try to run YOLOv3 detection, this error happens:
op = torch._C._jit_get_operation(qualified_op_name)
RuntimeError: No such operator torchvision::nms
Though this code is the source code of torchvision, I have tried several times to correct the code following the tips, without success.
| I had the same problem on Ubuntu 18.04. Upgrading python to 3.8 and Installing fresh torch and torchvision libraries worked for me.
virtualenv -p python3.8 torch17
source torch17/bin/activate
pip install cython matplotlib tqdm scipy ipython ninja yacs opencv-python ffmpeg opencv-contrib-python Pillow scikit-image scikit-learn lmfit imutils pyyaml jupyterlab==3
pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
I tried the solutions discussed in some pytorch forums and github but that did not help.
| https://stackoverflow.com/questions/60247432/ |
broadcasting across tensors in `pytorch` | I am using pytorch as an array processing language (not for the traditional deep learning purposes), and I am wondering what the canonical way is to do "batching" parallelism.
For example, suppose I want to compute svds of two dimensional layers of a 3-d tensor (using torch.svd(), say), and I want to return a tuple of stacked us, stacked s, stacked v.
Presumably, through the magic of SIMD parallelism, this should be doable in roughly the same time as a single layer svd (on gpu), but how to program it?
| PyTorch is a high level software library with lots of python wrappers for highly optimized compiled code. A function or operator either supports batch data or not.
There is no other way around it than writing your own C/C++/CUDA code and invoking it with Python.
Luckily, most functions support batch processing (including torch.svd(), as pointed out by jodag) and it can be assumed that the developers (or the compiler) paid attention to data parallelism in the implementation. I recommend stacking your tensors wherever you can. It usually leads to significant speedups.
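For the SVD case from the question, a small sketch of what batched usage looks like (the shapes are my own example):
x = torch.randn(8, 5, 4)  # a batch of 8 matrices, each 5x4
u, s, v = torch.svd(x)    # u: (8, 5, 4), s: (8, 4), v: (8, 4, 4); one SVD per layer in a single call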
Note that the batch dimension is always the first dimension of a tensor. PyTorch supports broadcasting for common operators like +, -, *, / as documented here. Because of possible ambiguities you are sometimes required to reshape your data accordingly to make clear what you want. For example if you want to add a batch of scalars to a batch of vectors you need to do something like:
a = torch.zeros(2, 2)
b = torch.arange(2)
a + b.view(2, 1) # or b.reshape(2, 1)
# tensor([[0., 0.],
#         [1., 1.]])
| https://stackoverflow.com/questions/60250696/ |
Implementing SmoothL1Loss for specific case | I have been experimenting with L1 and MSE losses in Pytorch and noticed that L1Loss performs better because it is more robust to outliers. I discovered SmoothL1Loss which seems to be the best of both worlds. I understand that it behaves like MSELoss for error<1 and like L1Loss otherwise. My dataset only contains values between 0 and 1. Therefore the largest possible error is 1. Does this mean the function behaves identically to MSELoss? Is it possible to adjust the threshold in any way to work better for my problem?
| Yes, in this case it acts just like torch.nn.MSELoss; all in all, it is called the Huber loss.
Due to its nature a threshold doesn't make much sense; let's look at an example of why that is the case:
How it works
Let's compare errors being larger than 1.0 in case of MSELoss and SmoothL1Loss. Assume our absolute error (|f(x) - y|) is 10. MSELoss would give it value of 100 (or 50 in case of pytorch implementation), while SmoothL1Loss gives just this value of 10, hence it won't punish the model so much for large errors.
In case of value below 1.0 SmoothL1Loss punishes the model less than L1Loss. E.g. 0.5 would become 0.5*0.5 so 0.25 for Huber and 0.5 for L1Loss.
It's not "best of both worlds" it depends what you are after. Mean Squared Error - amplifies large errors and downplays the small ones, L1Loss gives errors "equal" weight let's say.
Custom loss function
Though it's not usually done you could use any loss function you'd like, depending on your goal (threshold doesn't really make sense here). If you want smaller errors to be more severe you could, for example, do something like this:
import torch
def fancy_squared_loss(y_true, y_pred):
return torch.mean(torch.sqrt(torch.abs(y_true - y_pred)))
For value 0.2 you would get ~0.447, for 0.5 ~0.7 and so on. Experiment and check whether any specific loss functions exist for task at hand, though I think it's unlikely those experiments will give you significant boost over L1Loss if any.
Custom threshold
If you really want to set custom threshold for MSELoss and L1Loss you could implement it on your own though:
import torch
class CustomLoss:
def __init__(self, threshold: float = 0.5):
self.threshold = threshold
def __call__(self, predicted, true):
errors = torch.abs(predicted - true)
mask = errors < self.threshold
return (0.5 * mask * (errors ** 2)) + ~mask * errors
Everything below threshold would get MSELoss while all above would have L1Loss.
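A quick usage sketch of the class above (it returns per-element values, so reduce them to a scalar yourself):
loss_fn = CustomLoss(threshold=0.3)
predicted = torch.rand(8, requires_grad=True)
true = torch.rand(8)
loss = loss_fn(predicted, true).mean()
loss.backward()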
| https://stackoverflow.com/questions/60252902/ |
pytorch conv2d value cannot be converted to type uint8_t without overflow | I'm passing a torch.Tensor with a dtype of torch.uint8 to an nn.Conv2d module and it is giving the error
RuntimeError: value cannot be converted to type uint8_t without
overflow: -0.0344873
My conv2d is defined as self.conv1 = nn.Conv2d(3, 6, 5). The error comes in my forward method when I pass the tensor to the module like self.conv1(x). The tensor has shape (4, 3, 480, 640). I'm not sure how to fix this. Here is the stack trace
Traceback (most recent call last):
File "cnn.py", line 54, in <module>
outputs = net(inputs)
File "/Users/my_repos/venv_projc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "cnn.py", line 24, in forward
test = self.conv1(x)
File "/Users/my_repos/venv_projc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/Users/my_repos/venv_projc/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 345, in forward
return self.conv2d_forward(input, self.weight)
File "/Users/my_repos/venv_projc/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 342, in conv2d_forward
self.padding, self.dilation, self.groups)
RuntimeError: value cannot be converted to type uint8_t without overflow: -0.0344873
| Converting the tensor to a float seemed to fix it: self.conv1(x.float())
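If the uint8 tensor holds raw image bytes, it is usually also worth scaling it while converting (an assumption about the data, not stated in the question):
x = x.float() / 255.0  # float32 input with pixel values in [0, 1]
out = self.conv1(x)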
| https://stackoverflow.com/questions/60253449/ |
Unable to import torch.distributed.rpc | I was trying to run the RPC rnn example from the following link - https://github.com/pytorch/examples/tree/master/distributed/rpc/rnn
but I am unable to import RPC module of the torch.distributed and getting the following error.
Traceback (most recent call last):
File ".\main.py", line 6, in <module>
import torch.distributed.rpc as rpc
File "C:\Users\Public\Anaconda\lib\site-packages\torch\distributed\rpc\__init__.py", line 9, in <module>
from . import backend_registry
File "C:\Users\Public\Anaconda\lib\site-packages\torch\distributed\rpc\backend_registry.py", line 8, in <module>
import torch.distributed.distributed_c10d as dc10d
File "C:\Users\Public\Anaconda\lib\site-packages\torch\distributed\distributed_c10d.py", line 9, in <module>
from .rendezvous import rendezvous, register_rendezvous_handler # noqa: F401
File "C:\Users\Public\Anaconda\lib\site-packages\torch\distributed\rendezvous.py", line 9, in <module>
from . import FileStore, TCPStore
ImportError: cannot import name 'FileStore' from 'torch.distributed' (C:\Users\Public\Anaconda\lib\site-packages\torch\distributed\__init__.py)
Torch Version:
torch 1.4.0+cpu
torchvision 0.5.0+cpu
| PyTorch Distributed package does not support Windows yet. Requests for this feature is tracked here: https://github.com/pytorch/pytorch/issues/37068
| https://stackoverflow.com/questions/60257756/ |
(pytorch) I want to normalize [0 255] integer tensor to [0 1] float tensor | I want to normalize [0 255] integer tensor to [0 1] float tensor.
I used the CIFAR-10 dataset and wanted to deal with integer image tensors,
so I made them integer tensors.
When I loaded the dataset, I used "transforms.ToTensor()", so the values were set to [0, 1] floats:
tensor([[[0.4588, 0.4588, 0.4588, ..., 0.4980, 0.4980, 0.5020],
[0.4706, 0.4706, 0.4706, ..., 0.5098, 0.5098, 0.5137],
[0.4824, 0.4824, 0.4824, ..., 0.5216, 0.5216, 0.5294],
...,
[0.3098, 0.3020, 0.2863, ..., 0.4549, 0.3608, 0.3137],
[0.2902, 0.2902, 0.2902, ..., 0.4431, 0.3333, 0.3020],
[0.2706, 0.2941, 0.2941, ..., 0.4157, 0.3529, 0.3059]],
[[0.7725, 0.7725, 0.7725, ..., 0.7569, 0.7569, 0.7608],
[0.7765, 0.7765, 0.7765, ..., 0.7608, 0.7608, 0.7686],
[0.7765, 0.7765, 0.7765, ..., 0.7608, 0.7608, 0.7725],
...,
[0.6510, 0.6314, 0.6078, ..., 0.6941, 0.6510, 0.6392],
[0.6314, 0.6235, 0.6118, ..., 0.6784, 0.6196, 0.6275],
[0.6157, 0.6235, 0.6157, ..., 0.6549, 0.6431, 0.6314]],
To make them a [0, 255] integer tensor, I did:
temp = np.floor(temp_images*256)
temp_int = torch.tensor(temp, dtype=torch.int32)
temp_images = torch.clamp(temp, 0, 255)
and the result was
torch.IntTensor
tensor([[[[ 94., 100., 100., ..., 98., 100., 102.],
[ 86., 100., 101., ..., 83., 91., 103.],
[ 90., 100., 99., ..., 80., 66., 86.],
...,
[ 92., 92., 90., ..., 77., 107., 119.],
[ 76., 91., 100., ..., 95., 158., 170.],
[ 86., 83., 87., ..., 97., 176., 205.]],
[[105., 111., 111., ..., 109., 112., 113.],
[ 97., 111., 112., ..., 94., 102., 114.],
[101., 111., 110., ..., 90., 77., 97.],
...,
[111., 110., 108., ..., 88., 120., 131.],
[ 95., 108., 114., ..., 105., 165., 172.],
[106., 100., 101., ..., 108., 183., 206.]],
[[ 62., 68., 68., ..., 66., 68., 70.],
[ 55., 69., 70., ..., 51., 59., 71.],
[ 59., 69., 68., ..., 48., 34., 54.],
...,
[ 59., 59., 56., ..., 54., 95., 107.],
[ 49., 61., 66., ..., 76., 152., 166.],
[ 61., 55., 54., ..., 73., 170., 206.]]],
before forwarding them to the network,
I want to make them [0 1] float tensor again.
So I tried
transforms.Normalize((0.5,0.5,0.5),(0.5,0.5,0.5))
But, the result is not normalized to [0 1] and rather it becomes bigger...!
tensor([[[117., 117., 117., ..., 127., 127., 128.],
[120., 120., 120., ..., 130., 130., 131.],
[123., 123., 123., ..., 133., 133., 135.],
...,
[ 79., 77., 73., ..., 116., 92., 80.],
[ 74., 74., 74., ..., 113., 85., 77.],
[ 69., 75., 75., ..., 106., 90., 78.]],
[[197., 197., 197., ..., 193., 193., 194.],
[198., 198., 198., ..., 194., 194., 196.],
[198., 198., 198., ..., 194., 194., 197.],
to
tensor([[[233., 233., 233., ..., 253., 253., 255.],
[239., 239., 239., ..., 259., 259., 261.],
[245., 245., 245., ..., 265., 265., 269.],
...,
[157., 153., 145., ..., 231., 183., 159.],
[147., 147., 147., ..., 225., 169., 153.],
[137., 149., 149., ..., 211., 179., 155.]],
[[393., 393., 393., ..., 385., 385., 387.],
[395., 395., 395., ..., 387., 387., 391.],
[395., 395., 395., ..., 387., 387., 393.],
...,
[331., 321., 309., ..., 353., 331., 325.],
[321., 317., 311., ..., 345., 315., 319.],
[313., 317., 313., ..., 333., 327., 321.]],
How can I normalize a [0, 255] integer tensor to a [0, 1] float tensor?
| The problem is that you seem to misunderstand what transforms.Normalize does. To quote from the PyTorch documentation:
Normalize a tensor image with mean and standard deviation. Given mean:
(M1,...,Mn) and std: (S1,..,Sn) for n channels, this transform will
normalize each channel of the input torch.*Tensor i.e. input[channel] = (input[channel] - mean[channel]) / std[channel]
The calculation for a value of, say, 100 with the std and mean you provided would then be: (100 - 0.5) / 0.5 = 199.
Of course, you could increase std and mean, but this does not guarantee you the exact result that you might expect.
As suggested in the comments, the best way would probably be to invert the operations that you performed in order to get the tensor to [0 255] in the first place.
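For example, since the integer tensor was produced with floor(x * 256) followed by a clamp to 255, a simple sketch of the inverse is a float cast and division:
imgs_float = temp_images.float() / 255.0  # back to [0, 1] (use 256.0 instead if you want to mirror the original scaling exactly)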
Edit:
As it turns out, according to this forum post, it seems that the transformations from PIL images to tensors automatically turn your value range to [0 1] (and to [0 255] if you transform to a PIL image, respectively), as is written in the fine-print of transforms.ToTensor. For the return transformation it is not explicitly stated, but can be enforced via the mode.
| https://stackoverflow.com/questions/60257898/ |
cnn IndexError: Target 2 is out of bounds | I got this error after I executed my code and it seems that the below portion of the code is throwing this error. I tried different ways but nothing could solve it. The error is given by the loss function.
for i, data in enumerate(train_loader, 0):
# import pdb;pdb.set_trace()
inputs, labels = data
print(type(inputs))
for input in inputs:
inputs = torch.Tensor(input)
inputs, labels= Variable(inputs), Variable(labels)
inputs=inputs.unsqueeze(1)
optimizer.zero_grad()
outputs = net(inputs)
#import pdb;pdb.set_trace()
loss_size = loss(outputs, labels)
loss_size.backward()
optimizer.step()
running_loss += loss_size.data[0]
total_train_loss += loss_size.data[0]
if (i + 1) % (print_every + 1) == 0:
print("Epoch {}, {:d}% \t train_loss: {:.2f} took: {:.2f}s".format(
epoch+1, int(100 * (i+1) / n_batches), running_loss / print_every, time.time() - start_time))
running_loss = 0.0
start_time = time.time()
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-10-7d1b8710defa> in <module>
      1 CNN = Net()
----> 2 trainNet(CNN, learning_rate=0.001)
      3 #test()

<ipython-input-7-3208c0794681> in trainNet(net, learning_rate)
     23         outputs = net(inputs)
     24         #import pdb;pdb.set_trace()
---> 25         loss_size = loss(outputs, labels)
     26         loss_size.backward()
     27         optimizer.step()

~\Documents\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    530             result = self._slow_forward(*input, **kwargs)
    531         else:
--> 532             result = self.forward(*input, **kwargs)
    533         for hook in self._forward_hooks.values():
    534             hook_result = hook(self, input, result)

~\Documents\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    914     def forward(self, input, target):
    915         return F.cross_entropy(input, target, weight=self.weight,
--> 916                                ignore_index=self.ignore_index, reduction=self.reduction)
    917
    918

~\Documents\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction)
   2019     if size_average is not None or reduce is not None:
   2020         reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 2021     return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
   2022
   2023

~\Documents\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
   1836                          .format(input.size(0), target.size(0)))
   1837     if dim == 2:
-> 1838         ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
   1839     elif dim == 4:
   1840         ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)

IndexError: Target 2 is out of bounds.
| I faced the same problem. The problem was solved by changing the number of classes.
num_classes = 10 (changed to the actual class number, instead of 1)
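In other words, the final layer of the network must output one value per class, and the labels must fit that range (a sketch with hypothetical names, since the Net class isn't shown in the question):
num_classes = 10  # the actual number of distinct labels in the dataset
self.fc_out = nn.Linear(hidden_size, num_classes)  # final layer: one logit per class
# labels passed to CrossEntropyLoss / nll_loss must be integers in [0, num_classes - 1]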
| https://stackoverflow.com/questions/60259836/ |
Can someone help me solve this problem? I have looked for many solutions but they have not worked for me | Error when training
the images have a size of ([64, 3, 224, 224])
I tried to change the batch-size or image size but I still get errors
Epoch 1/30
----------
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-160-dbcdb17ea6ee> in <module>()
1 epochs = 30
2 net.to(device)
----> 3 net = train_model(net, criterion, optimizer, sched, epochs)
2 frames
<ipython-input-157-d34ea1683b12> in forward(self, x)
12 x = self.pool(F.relu(self.conv1(x)))
13 x = self.pool(F.relu(self.conv2(x)))
---> 14 x = x.view(x.size(0), 16 * 38 * 38)
15 x = F.relu(self.fc1(x))
16 x = F.relu(self.fc2(x))
RuntimeError: shape '[64, 23104]' is invalid for input of size 2876416
| This is because the product of your per-sample channel and spatial dimensions is not 23104 (16 * 38 * 38) but 44944 (the total input of 2876416 values divided by the batch size of 64, i.e. 16 * 53 * 53). To flatten your tensor, you can try out = out.view(out.size(0), -1) instead, which should work fine.
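A sketch of how that looks inside the question's forward (the fc1 input size must then also match the flattened size):
x = self.pool(F.relu(self.conv2(x)))
x = x.view(x.size(0), -1)  # flatten everything except the batch dimension
x = F.relu(self.fc1(x))    # fc1 must then be defined as nn.Linear(16 * 53 * 53, ...)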
| https://stackoverflow.com/questions/60270418/ |
How does one have parameters in a pytorch model not be leafs and be in the computation graph? | I am trying to update/change the parameters of a neural net model and then having the forward pass of the updated neural net be in the computation graph (no matter how many changes/updates we do).
I tried this idea but whenever I do it pytorch sets my updated tensors (inside the model) to be leafs, which kills the flow of gradients to the networks I want to receive gradients. It kills the flow of gradients because leaf nodes are not part of the computation graph the way I want them to be (since they aren't truly leafs).
I've tried multiple things but nothing seems to work. I created self-contained dummy code that prints the gradients of the networks I want to have gradients:
import torch
import torch.nn as nn
import copy
from collections import OrderedDict
# img = torch.randn([8,3,32,32])
# targets = torch.LongTensor([1, 2, 0, 6, 2, 9, 4, 9])
# img = torch.randn([1,3,32,32])
# targets = torch.LongTensor([1])
x = torch.randn(1)
target = 12.0*x**2
criterion = nn.CrossEntropyLoss()
#loss_net = nn.Sequential(OrderedDict([('conv0',nn.Conv2d(in_channels=3,out_channels=10,kernel_size=32))]))
loss_net = nn.Sequential(OrderedDict([('fc0', nn.Linear(in_features=1,out_features=1))]))
hidden = torch.randn(size=(1,1),requires_grad=True)
updater_net = nn.Sequential(OrderedDict([('fc0',nn.Linear(in_features=1,out_features=1))]))
print(f'updater_net.fc0.weight.is_leaf = {updater_net.fc0.weight.is_leaf}')
#
nb_updates = 2
for i in range(nb_updates):
print(f'i = {i}')
new_params = copy.deepcopy( loss_net.state_dict() )
## w^<t> := f(w^<t-1>,delta^<t-1>)
for (name, w) in loss_net.named_parameters():
print(f'name = {name}')
print(w.size())
hidden = updater_net(hidden).view(1)
print(hidden.size())
#delta = ((hidden**2)*w/2)
delta = w + hidden
wt = w + delta
print(wt.size())
new_params[name] = wt
#del loss_net.fc0.weight
#setattr(loss_net.fc0, 'weight', nn.Parameter( wt ))
#setattr(loss_net.fc0, 'weight', wt)
#loss_net.fc0.weight = wt
#loss_net.fc0.weight = nn.Parameter( wt )
##
loss_net.load_state_dict(new_params)
#
print()
print(f'updater_net.fc0.weight.is_leaf = {updater_net.fc0.weight.is_leaf}')
outputs = loss_net(x)
loss_val = 0.5*(target - outputs)**2
loss_val.backward()
print()
print(f'-- params that dont matter if they have gradients --')
print(f'loss_net.grad = {loss_net.fc0.weight.grad}')
print('-- params we want to have gradients --')
print(f'hidden.grad = {hidden.grad}')
print(f'updater_net.fc0.weight.grad = {updater_net.fc0.weight.grad}')
print(f'updater_net.fc0.bias.grad = {updater_net.fc0.bias.grad}')
if anyone knows how to do this please give me a ping... I set the number of times to update to be 2 because the update operation should be in the computation graph an arbitrary number of times... so it MUST work for 2.
Strongly related post:
SO: How does one have parameters in a pytorch model not be leafs and be in the computation graph?
pytorch forum: https://discuss.pytorch.org/t/how-does-one-have-the-parameters-of-a-model-not-be-leafs/70076
Cross-posted:
Quora: https://www.quora.com/unanswered/How-does-one-have-parameters-in-a-PyTorch-model-not-be-leaves-and-be-in-the-computation-graph
reddit: https://www.reddit.com/r/pytorch/comments/f5gu3g/how_does_one_have_parameters_in_a_pytorch_model/
| DOESN'T WORK PROPERLY because the named parameter modules get deleted.
Seems this works:
import torch
import torch.nn as nn
from torchviz import make_dot
import copy
from collections import OrderedDict
# img = torch.randn([8,3,32,32])
# targets = torch.LongTensor([1, 2, 0, 6, 2, 9, 4, 9])
# img = torch.randn([1,3,32,32])
# targets = torch.LongTensor([1])
x = torch.randn(1)
target = 12.0*x**2
criterion = nn.CrossEntropyLoss()
#loss_net = nn.Sequential(OrderedDict([('conv0',nn.Conv2d(in_channels=3,out_channels=10,kernel_size=32))]))
loss_net = nn.Sequential(OrderedDict([('fc0', nn.Linear(in_features=1,out_features=1))]))
hidden = torch.randn(size=(1,1),requires_grad=True)
updater_net = nn.Sequential(OrderedDict([('fc0',nn.Linear(in_features=1,out_features=1))]))
print(f'updater_net.fc0.weight.is_leaf = {updater_net.fc0.weight.is_leaf}')
#
def del_attr(obj, names):
if len(names) == 1:
delattr(obj, names[0])
else:
del_attr(getattr(obj, names[0]), names[1:])
def set_attr(obj, names, val):
if len(names) == 1:
setattr(obj, names[0], val)
else:
set_attr(getattr(obj, names[0]), names[1:], val)
nb_updates = 2
for i in range(nb_updates):
print(f'i = {i}')
new_params = copy.deepcopy( loss_net.state_dict() )
## w^<t> := f(w^<t-1>,delta^<t-1>)
for (name, w) in list(loss_net.named_parameters()):
hidden = updater_net(hidden).view(1)
#delta = ((hidden**2)*w/2)
delta = w + hidden
wt = w + delta
del_attr(loss_net, name.split("."))
set_attr(loss_net, name.split("."), wt)
##
#
print()
print(f'updater_net.fc0.weight.is_leaf = {updater_net.fc0.weight.is_leaf}')
print(f'loss_net.fc0.weight.is_leaf = {loss_net.fc0.weight.is_leaf}')
outputs = loss_net(x)
loss_val = 0.5*(target - outputs)**2
loss_val.backward()
print()
print(f'-- params that dont matter if they have gradients --')
print(f'loss_net.grad = {loss_net.fc0.weight.grad}')
print('-- params we want to have gradients --')
print(f'hidden.grad = {hidden.grad}') # None because this is not a leaf, it is overriden in the for loop above.
print(f'updater_net.fc0.weight.grad = {updater_net.fc0.weight.grad}')
print(f'updater_net.fc0.bias.grad = {updater_net.fc0.bias.grad}')
make_dot(loss_val)
output:
updater_net.fc0.weight.is_leaf = True
i = 0
i = 1
updater_net.fc0.weight.is_leaf = True
loss_net.fc0.weight.is_leaf = False
-- params that dont matter if they have gradients --
loss_net.grad = None
-- params we want to have gradients --
hidden.grad = None
updater_net.fc0.weight.grad = tensor([[0.7152]])
updater_net.fc0.bias.grad = tensor([-7.4249])
Acknowledgement: mighty albanD from pytorch team: https://discuss.pytorch.org/t/how-does-one-have-the-parameters-of-a-model-not-be-leafs/70076/9?u=pinocchio
| https://stackoverflow.com/questions/60271131/ |
Error relating to conversion from list to tensor in Pytorch | There is a variable 'tmp' (3 dimension).
tmp = [torch.tensor([1]),torch.tensor([2,3])]
type(tmp) -> <class 'list'>
type(tmp[0]) -> <class 'torch.Tensor'>
type(tmp[0][0]) -> <class 'torch.Tensor'>
I want to convert 'tmp' into torch.Tensor type.
But, when I run this code below, an error occurs.
torch.Tensor(tmp)
>> ValueError: only one element tensors can be converted to Python scalars
How can I fix this?
torch.stack cannot be used directly in this case because the tensors in 'tmp' are not the same shape.
| Use torch.stack - All tensors need to be of the same size in the list.
>>> torch.stack(tmp)
Ex:
>>> tmp = [torch.rand(2,2),torch.rand(2,2)]
>>> tmp = torch.stack(tmp)
>>> tmp
tensor([[[0.0212, 0.1864],
[0.0070, 0.3381]],
[[0.1607, 0.9568],
[0.9093, 0.1835]]])
>>> type(tmp)
<class 'torch.Tensor'>
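Since the tensors in the question have different lengths, stacking only works after padding them to a common size; one option is torch.nn.utils.rnn.pad_sequence (a sketch, assuming zero-padding is acceptable for your use case):
from torch.nn.utils.rnn import pad_sequence
padded = pad_sequence(tmp, batch_first=True)  # tensor([[1, 0], [2, 3]])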
| https://stackoverflow.com/questions/60274667/ |
AttributeError: dataset object has no attribute 'c' FastAI | I am trying to train a ResNet based UNet for image segmentation. I have the location of images and mask images in a csv file, that's why I have created my own dataloader, which is as follows:
X = list(df['input_img'])
y = list(df['mask_img'])
X_train, X_valid, y_train, y_valid = train_test_split(
X, y, test_size=0.33, random_state=42)
class NumbersDataset():
def __init__(self, inputs, labels):
self.X = inputs
self.y = labels
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
img_train = cv2.imread(self.X[idx])
img_mask = cv2.imread(self.y[idx])
img_train = cv2.resize(img_train, (427,240), interpolation = cv2.INTER_LANCZOS4)
img_mask = cv2.resize(img_mask, (427,240), interpolation = cv2.INTER_LANCZOS4)
return img_train, img_mask
I then call this datagenerator in the __main__ function:
if __name__ == '__main__':
dataset_train = NumbersDataset(X_train, y_train)
dataloader_train = DataLoader(dataset_train, batch_size=4, shuffle=True, num_workers=2)
dataset_valid = NumbersDataset(X_valid, y_valid)
dataloader_valid = DataLoader(dataset_valid, batch_size=4, shuffle=True, num_workers=2)
datas = DataBunch(train_dl = dataloader_train, valid_dl = dataloader_valid)
leaner = unet_learner(data = datas, arch = models.resnet34)
But I end up getting the following error:
Traceback (most recent call last):
File "dataset_test.py", line 70, in <module>
leaner = unet_learner(data = datas, arch = models.resnet34)
File "/home/sarvagya/miniconda3/envs/gr/lib/python3.6/site-packages/fastai/vision/learner.py", line 118, in unet_learner
model = to_device(models.unet.DynamicUnet(body, n_classes=data.c, img_size=size, blur=blur, blur_final=blur_final,
File "/home/sarvagya/miniconda3/envs/gr/lib/python3.6/site-packages/fastai/basic_data.py", line 122, in __getattr__
def __getattr__(self,k:int)->Any: return getattr(self.train_dl, k)
File "/home/sarvagya/miniconda3/envs/gr/lib/python3.6/site-packages/fastai/basic_data.py", line 38, in __getattr__
def __getattr__(self,k:str)->Any: return getattr(self.dl, k)
File "/home/sarvagya/miniconda3/envs/gr/lib/python3.6/site-packages/fastai/basic_data.py", line 20, in DataLoader___getattr__
def DataLoader___getattr__(dl, k:str)->Any: return getattr(dl.dataset, k)
AttributeError: 'NumbersDataset' object has no attribute 'c'
I tried searching and even tried using SegmentationItemList.from_df but nothing helped. What am I getting wrong here?
| You should add the attribute c into your NumbersDataset, like this:
def __init__(self, inputs, labels, c):
self.inputs = inputs
self.labels = labels
self.c = c
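For example, the datasets would then be created like this (the value of c is an assumption; in fastai it is the number of classes the learner should predict):
dataset_train = NumbersDataset(X_train, y_train, c=2)  # e.g. 2 classes for a binary segmentation mask
dataset_valid = NumbersDataset(X_valid, y_valid, c=2)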
| https://stackoverflow.com/questions/60296710/ |
How to take depth of neural network as argument while constructing Network in Pytorch | I have written the following code to take the depth of the network as a parameter in PyTorch.
Later I realized that even if I use many hidden layers, the learnable parameters remain the same.
class Net3(torch.nn.Module):
def __init__(self, n_feature, n_hidden, n_output, depth, init):
super(Net3, self).__init__()
self.input = torch.nn.Linear(n_feature, n_hidden).float().to(device)
self.hidden = torch.nn.Linear(n_hidden, n_hidden).float().to(device)
self.predict = torch.nn.Linear(n_hidden, n_output).float().to(device)
self.depth = depth
def forward(self, x):
x = F.relu(self.input(x)) # activation function for hidden layer
for i in range(self.depth):
x = F.relu(self.hidden(x)) # activation function for hidden layer
x = self.predict(x)
return x
Is there any other way to achieve this ?
| In __init__ you need to create multiple hidden layers; currently you're only making one. One possibility to do this with little overhead is using a torch.nn.ModuleDict that will give you named layers:
class Net3(torch.nn.Module):
def __init__(self, n_feature, n_hidden, n_output, depth, init):
super(Net3, self).__init__()
self.layers = nn.ModuleDict() # a collection that will hold your layers
self.layers['input'] = torch.nn.Linear(n_feature, n_hidden).float().to(device)
for i in range(1, depth):
self.layers['hidden_'+str(i)] = torch.nn.Linear(n_hidden, n_hidden).float().to(device)
self.layers['output'] = torch.nn.Linear(n_hidden, n_output).float().to(device)
self.depth = depth
def forward(self, x):
for layer in self.layers:
x = F.relu(self.layers[layer](x))
return x
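A quick sanity-check sketch (hypothetical sizes; it assumes device is defined as in the question) showing that the parameter count now grows with depth:
net_shallow = Net3(n_feature=10, n_hidden=32, n_output=2, depth=2, init=None)
net_deep = Net3(n_feature=10, n_hidden=32, n_output=2, depth=6, init=None)
print(sum(p.numel() for p in net_shallow.parameters()))
print(sum(p.numel() for p in net_deep.parameters()))  # larger now, unlike with the original single hidden layer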
| https://stackoverflow.com/questions/60298457/ |
RuntimeError: expected scalar type Long but found Int in loss = criterion(outputs, y_train) | I built this acoustic model with features of dim = [1124823, 13] and labels of dim = [1124823, 1], and I split both into train, test, and dev sets. The problem is that when I try to run my model I get this error:
RuntimeError: expected scalar type Long but found Int in
loss = criterion(outputs, y_train)
import torch
import torch.nn as nn
from fela import feat, labels
from Dataloader import train_loader, test_loader, X_train, X_test, X_val, y_train, y_test, y_val
################################################################################################
input_size = 13
hidden1_size = 13
hidden2_size = 128
hidden3_size = 64
output_size = 50
################################################################################################
class DNN(nn.Module):
def __init__(self, input_size, hidden2_size, hidden3_size, output_size):
super(DNN, self).__init__()
self.fc1 = nn.Linear(input_size, hidden1_size)
self.relu1 = nn.ReLU()
self.fc2 = nn.Linear(hidden1_size, hidden2_size)
self.relu2 = nn.ReLU()
self.fc3 = nn.Linear(hidden2_size, hidden3_size)
self.relu3 = nn.ReLU()
self.fc4 = nn.Linear(hidden3_size, output_size)
self.relu4 = nn.ReLU()
def forward(self, x):
out = self.fc1(x)
out = self.relu1(out)
out = self.fc2(out)
out = self.relu2(out)
out = self.fc3(out)
out = self.relu3(out)
out = self.fc4(out)
out = self.relu4(out)
return out
################################################################################################
# Instantiate the model
batch_size = 50
n_iterations = 50
no_epochs = 80
model = DNN(input_size, hidden2_size, hidden3_size, output_size)
################################################################################################
# Define the loss criterion and optimizer
criterion = nn.CrossEntropyLoss()
learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
print(model)
########################################################################################################################
# train the network
iter = 0
for epoch in range(no_epochs):
for i, (X_train, y_train) in enumerate(train_loader):
optimizer.zero_grad()
outputs = model(X_train)
loss = criterion(outputs, torch.max(labels, 1)[1])
loss.backward()
optimizer.step()
iter += 1
if iter % 500 == 0:
correct = 0
total = 0
for X_test, y_test in test_loader:
outputs = model(X_test)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum()
accuracy = 100 * correct / total
print(iter, loss.data[0], accuracy)
| I think no_epochs ends up as 0 with this initialization: possibly (len(train_loader) / batch_size) > n_iterations, and then int(no_eps) = 0. Try changing no_epochs to 100 manually, for example.
no_eps = n_iterations / (len(train_loader) / batch_size)
no_epochs = int(no_eps)
for epoch in range(no_epochs):
| https://stackoverflow.com/questions/60300668/ |
What does the copy_initial_weights documentation mean in the higher library for Pytorch? | I was trying to use the higher library for meta-learning and I was having issues understanding what the copy_initial_weights mean. The docs say:
copy_initial_weights – if true, the weights of the patched module are copied to form the initial weights of the patched module, and thus are not part of the gradient tape when unrolling the patched module. If this is set to False, the actual module weights will be the initial weights of the patched module. This is useful when doing MAML, for example.
but that doesn't make much sense to me because of the following:
For example, "the weights of the patched module are copied to form the initial weights of the patched module" doesn't make sense to me because when the context manager is initiated a patched module does not exist yet. So it is unclear what we are copying from and to where (and why copying is something we want to do).
Also, "unrolling the patched module" does not make sense to me. We usually unroll a computaiton graph caused by a for loop. A patched module is just a neural net that has been modified by this library. Unrolling is ambiguous.
Also, there isn't a technical definition for "gradient tape".
Also, when describing what false is, saying that it's useful for MAML isn't actually useful because it doesn't even hint why it's useful for MAML.
Overall, it's impossible to use the context manager.
Any explanations and examples of what the that flag does in more precise terms would be really valuable.
Related:
gitissue: https://github.com/facebookresearch/higher/issues/30
new gitissue: https://github.com/facebookresearch/higher/issues/54
pytorch forum: https://discuss.pytorch.org/t/why-does-maml-need-copy-initial-weights-false/70387
pytorch forum: https://discuss.pytorch.org/t/what-does-copy-initial-weights-do-in-the-higher-library/70384
important question related to this on how the fmodel parameters are copied so that the optimizers work (and the use of deep copy): Why does higher need to deep copy the parameters of the base model to create a functional model?
| I think it's more or less clear what this means now to me.
First I'd like to make some notation clear, specially with respect to indices wrt inner time step and outer time step (also known as episodes):
W^<inner_i, outer_i> = denotes the value a tensor has at time step inner_i, outer_i.
At the beginning of training a neural net has params:
W^<0,0>
and are held inside it's module. For the sake of explanation the specific tensor (for the base model) will be denoted:
W = the weight holding the weights for the model. This can be thought as the initialization of the model.
and will be updated with an in-place operation (this is important since W is the placeholder for all W^<0,outer_i> for all outer step values during "normal" meta-learning) by the outer optimizer. I want to emphasize that W is the tensor for the normal Pytorch neural net base model. By changing this in-place with an outer optimizer (like Adam) we are effectively training the initialization. The outer optimizer will use the gradients wrt this tensor to do the update through the whole unrolled inner loop process.
When we say copy_initial_weights=False we mean that we will have a gradient path directly to W with whatever value it currently has. Usually the context manager is done before a inner loop after an outer step has been done so W will have W^<0,outer_i> for the current step. In particular the code that does this is this one for copy_initial_weight=False:
params = [ p.clone() if device is None else p.clone().to(device) for p in module.parameters() ]
this might look confusing if you're not familiar with clone, but what it's doing is making a copy of the current weight of W. The unusual thing is that clone also remembers the gradient history from the tensor it came from (.clone() acts as an identity for autograd). Its main use is to add an extra layer of safety against the user doing dangerous in-place ops in their differentiable optimizer. Assuming the user never did anything crazy with in-place ops, one could in theory remove the .clone(). The reason this is confusing imho is because "copying in Pytorch" (cloning) does not automatically block gradient flows, which is what a "real" copy would do (i.e. create a 100% totally separate tensor). This is not what clone does and that is not what copy_initial_weights does.
When copy_initial_weights=True what really happens is that the weights are cloned and detached. See the code it eventually runs (here and here):
params = [_copy_tensor(p, safe_copy, device) for p in module.parameters()]
which runs copy tensor (assuming they are doing a safe copy i.e. doing the extra clone):
t = t.clone().detach().requires_grad_(t.requires_grad)
Note that .detach() does not allocate new memory. It shares the memory with the original tensor, which is why the .clone() is needed to have this op be "safe" (usually wrt in-place ops).
So when copy_initial_weights is true they are copying and detaching the current value of W. This is usually W^<0,outer_i> if it's doing usual meta-learning in the inner adaptation loop. So the intended semantics of copy_initial_weights is that the "initial weights" simply mean W. The important thing to note is that the intermediate tensors for the net in the inner loop are not denoted in my notation but they are fmodel.parameters(t=inner_i). Also if things are usually meta-learning we have fmodel.parameters(t=0) = W and it gets updated in-place by the outer optimizer.
Note that because of the outer optimizer's in-place op and the freeing of the graphs we never take the derivative Grad_{W^<0,0>} with respect to the initial value of W, which was something I initially thought we were doing.
| https://stackoverflow.com/questions/60311183/ |
How does one reset the dataloader in pytorch? | I was trying to reset the dataloader manually but was unable. I tried everything here https://discuss.pytorch.org/t/how-could-i-reset-dataloader-or-count-data-batch-with-iter-instead-of-epoch/22902/4 but no luck. Anyone know how to reset the data loader AND also have the suffle/randomness of the batches not be broken?
| To reset a DataLoader, just enumerate the loader again. Each call to enumerate(loader) starts from the beginning.
To not break transforms that use random values, reset the random seed each time the DataLoader is initialized.
def seed_init_fn(x):
seed = args.seed + x
np.random.seed(seed)
random.seed(seed)
torch.manual_seed(seed)
return
loader = torch.utils.data.DataLoader(...., worker_init_fn = seed_init_fn)
while True:
for i,data in enumerate(loader):
# will always yield same data
See worker_init_fn in the documents:
https://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader
Here is a better example:
https://github.com/pytorch/pytorch/issues/5059#issuecomment-404232359
| https://stackoverflow.com/questions/60311307/ |
Pytorch - how to undersample using weightedrandomsampler | I have an unbalanced dataset and would like to undersample the class that is overrepresented. How do I go about it? I would like to use the WeightedRandomSampler, but I am also open to other suggestions.
So far I am assuming that my code will have to be structured kind of like the following. But I dont know how to exaclty do it.
trainset = datasets.ImageFolder(path_train,transform=transform)
...
sampler = data.WeightedRandomSampler(weights=..., num_samples=..., replacement=...)
...
trainloader = data.DataLoader(trainset, batchsize = batchsize, sampler=sampler)
I hope someone can help. Thanks a lot
| From my understanding, the PyTorch WeightedRandomSampler 'weights' argument is somewhat similar to the numpy.random.choice 'p' argument, which is the probability that a sample will get randomly selected. PyTorch uses weights instead to randomly sample training examples, and they state in the doc that the weights don't have to sum to 1, which is what I mean when I say it's not exactly like numpy's random choice. The stronger the weight, the more likely that sample will get sampled.
When you have replacement=True, it means that training examples can be drawn more than once which means you can have copies of training examples in your train set that get used to train your model; oversampling. Alongside, if the weights are low COMPARED TO THE OTHER TRAINING SAMPLE WEIGHTS the opposite occurs which means that those samples have a lower chance of being selected for random sampling; undersampling.
I have no clue how the num_samples argument works when using it with the train loader but I can warn you to NOT put your batch size there. Today, I tried putting the batch size and it gave horrible results. My co-worker put the number of classes*100 and his results were much better. All I know is that you should not put the batch size there. I also tried putting the size of all my training data for num_samples and it had better results but took forever to train. Either way, play around with it and see what works best for you. I would guess that the safe bet is to use the number of training examples for the num_samples argument.
Here's the example I saw somebody else use and I use it as well for binary classification. It seems to work just fine. You take the inverse of the number of training examples for each class and you set all training examples with that class its respective weight.
A quick example using your trainset object
labels = np.array(trainset.samples)[:,1] # turn to array and take all of column index 1 which are the labels
labels = labels.astype(int) # change to int
majority_weight = 1/num_of_majority_class_training_examples
minority_weight = 1/num_of_minority_class_training_examples
sample_weights = np.array([majority_weight, minority_weight]) # This is assuming that your minority class is the integer 1 in the labels object. If not, switch places so it's minority_weight, majority_weight.
weights = sample_weights[labels] # this goes through each training example and uses the labels 0 and 1 as the index into the sample_weights object, which is the weight you want for that class.
sampler = WeightedRandomSampler(weights=weights, num_samples=len(labels), replacement=True) # num_samples set to the number of training examples, as suggested above
trainloader = data.DataLoader(trainset, batch_size=batchsize, sampler=sampler)
Since the pytorch doc says that the weights don't have to sum to 1, I think you can also just use the ratio which between the imbalanced classes. For example, if you had 100 training examples of the majority class and 50 training examples of the minority class, it would be a 2:1 ratio. To counterbalance this, I think you can just use a weight of 1.0 for each majority class training example and a weight 2.0 for all minority class training examples because technically you want the minority class to be 2 times more likely to be selected which would balance your classes during random selection.
I hope this helped a little bit. Sorry for the sloppy writing, I was in a huge rush and saw that nobody answered. I struggled through this myself without being able to find any help for it either. If it doesn't make sense just say so and I'll re-edit it and make it more clear when I get free time.
| https://stackoverflow.com/questions/60320232/ |
How can I matrix-multiply two PyTorch quantized Tensors? | I am new to tensor quantization, and tried doing something as simple as
import torch
x = torch.rand(10, 3)
y = torch.rand(10, 3)
x@y.T
with PyTorch quantized tensors running on CPU. I thus tried
scale, zero_point = 1e-4, 2
dtype = torch.qint32
qx = torch.quantize_per_tensor(x, scale, zero_point, dtype)
qy = torch.quantize_per_tensor(y, scale, zero_point, dtype)
qx@qy.T # I tried...
..and got as error
RuntimeError: Could not run 'aten::mm' with arguments from the
'QuantizedCPUTensorId' backend. 'aten::mm' is only available for these
backends: [CUDATensorId, SparseCPUTensorId, VariableTensorId,
CPUTensorId, SparseCUDATensorId].
Is matrix multiplication just not supported, or am I doing something wrong?
| It is not straightforward to implement matrix multiplication for quantized matrices. Therefore, the "conventional" matrix multiplication (@) does not support it (as your error message suggests).
You should look at quantized operations, e.g., torch.nn.quantized.functional.linear:
torch.nn.quantized.functional.linear(qx[None,...], qy.T)
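A possible workaround sketch (not from the original answer, so treat it as an assumption): dequantize, do the matmul in float, and re-quantize the result:
res = torch.quantize_per_tensor(qx.dequantize() @ qy.dequantize().T, scale, zero_point, dtype)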
| https://stackoverflow.com/questions/60325913/ |
bias dimension definition in coding neural network | In the following figure showing the code for defining the dimension of the bias term b1, I wonder why the first dimension of the bias b1 is not the batch size. Does it mean it just assumes this bias is applied to all batches?
If I specify the bias b1 dimension to be (batch_size, 256), does it mean I am applying a different b1 to each element of the batch? But theoretically it should still work, right? Also, what is the difference between tensors of shape (256), (256,) and (256,1)...?
Figure: dimension definition of nn
| The weights and biases of your neural network layer are not specified in terms of batch size.
eg: w1 = torch.randn(784,256) :
This is a 2D matrix you're going to use for a matrix multiply.
784 is the dimension of your input image without considering batch size (I'm guessing this is for MNIST? It looks like you're flattening the 2D images to a 1D vector, so 28*28=784).
256 is the dimension of your output (how many logits you're using)
Similarly, b1 = torch.randn(256):
This is a 1D vector you're just adding to the logits.
256 is the dimension of the logits
PyTorch automatically broadcasts (repeats) these over the batch dimension for all your operations, so it doesn't matter what the batch size is.
E.g. in adding, b1 is automatically repeated over the first dimension, so its actual shape for the add is (batch_size, 256).
By convention, pytorch "aligns" dimensions from right to left.
If any are missing, it then repeats the tensor over the missing dimension.
If any dimension is 1, it repeats the tensor over that dimension to match the other operand.
Eg (copied from the docs on broadcasting)
>>> x=torch.empty(5,7,3)
>>> y=torch.empty(5,7,3)
# same shapes are always broadcastable (i.e. the above rules always hold)
>>> x=torch.empty((0,))
>>> y=torch.empty(2,2)
# x and y are not broadcastable, because x does not have at least 1 dimension
# can line up trailing dimensions
>>> x=torch.empty(5,3,4,1)
>>> y=torch.empty( 3,1,1)
# x and y are broadcastable.
# 1st trailing dimension: both have size 1
# 2nd trailing dimension: y has size 1
# 3rd trailing dimension: x size == y size
# 4th trailing dimension: y dimension doesn't exist
# but:
>>> x=torch.empty(5,2,4,1)
>>> y=torch.empty( 3,1,1)
# x and y are not broadcastable, because in the 3rd trailing dimension 2 != 3
This is really convenient because it means you don't have to redefine your neural net every time you want to use a different batch_size
Here's a link if you want to learn more about broadcasting in pytorch
Also what is the difference between tensor (256), (256,) and (256,1)
The first two are exactly the same; Python generally allows trailing commas in tuple expressions. You are creating a 1D vector of 256 elements.
The last one is different; you are creating a 2D tensor where the first dimension is 256 and the second dimension is 1. The underlying data is the same, and it doesn't matter as long as you're consistent about which you're using, but if you mix them, it often leads to undesired behavior:
Eg:
a = torch.randn(256)
b = torch.randn(256)
c = a + b
c.shape
>>> torch.Size([256])
Simple: they just add element-wise.
But notice what happens when one of them is shaped (-1,1):
b = b.view(-1,1) # -1 here means torch will infer the shape of this dimension based on the known size of the tensor and all other specified dimensions
b.shape
>>> torch.Size([256, 1])
c = a + b
Now because of broadcasting rules
a gets a leading dimension so it has the same number of dimensions as b and is repeated along it, so it is effectively interpreted as a tensor of shape (256, 256)
b is repeated so its last dimension (1) now matches the last dimension of a (256)
so:
c.shape
>>> torch.Size([256, 256])
Hint: The broadcasting rules can be hard to remember, and are often the source of bugs. When in doubt about tensor shapes, it's worth running your code in an interpreter line by line with dummy data and just checking what the shape of each tensor is eg print(torch.mm(input,w1).shape)
| https://stackoverflow.com/questions/60328668/ |
Why is there an error (numpy.float64 cannot be interpreted as in integer) in pytorch sample code | I was trying to run the sample code found here:
https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
I get a crash in the class CocoEvaluator() constructor in coco_eval.py where the following line of code:
for iou_type in iou_types:
self.coco_eval[iou_type] = COCOeval(coco_gt, iouType=iou_type)
will crash with the warning "object of type class 'numpy.float64' cannot be safely interpreted as an integer."
iou_type is a string 'bbox'
COCOeval is a class from pycocotools (pycocotools.cocoeval.COCOeval)
coco_gt is the return value from get_coco_api_from_dataset(data_loader.dataset)
It's not clear to me where the numpy.float64 value is being used here, or what I can change to fix this.
| The problem most likely lies in the NumPy version. NumPy versions 1.18+ usually throw this error; downgrading to NumPy 1.17.4 fixes the problem.
as shown here
-> https://github.com/pytorch/vision/issues/1700
-> https://www.kaggle.com/questions-and-answers/90865
# check for version number
import numpy as np
np.version.version

# downgrade version
!pip install numpy==1.17.4
This fixed it for me, hope it helps.
| https://stackoverflow.com/questions/60331464/ |
Gradient Computation broken by Sigmoid function in Pytorch | Hey I have been struggling with this weird problem. Here is my code for the Neural Net:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv_3d_=nn.Sequential(
nn.Conv3d(1,1,9,1,4),
nn.LeakyReLU(),
nn.Conv3d(1,1,9,1,4),
nn.LeakyReLU(),
nn.Conv3d(1,1,9,1,4),
nn.LeakyReLU()
)
self.linear_layers_ = nn.Sequential(
nn.Linear(batch_size*32*32*32,batch_size*32*32*3),
nn.LeakyReLU(),
nn.Linear(batch_size*32*32*3,batch_size*32*32*3),
nn.Sigmoid()
)
def forward(self,x,y,z):
conv_layer = x + y + z
conv_layer = self.conv_3d_(conv_layer)
conv_layer = torch.flatten(conv_layer)
conv_layer = self.linear_layers_(conv_layer)
conv_layer = conv_layer.view((batch_size,3,input_sizes,input_sizes))
return conv_layer
The weird problem I am facing is that running this NN gives me an error
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [3072]], which is output 0 of SigmoidBackward, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
The stack trace shows that the issue is in line
conv_layer = self.linear_layers_(conv_layer)
However, if I replace the last activation function of my FCN from nn.Sigmoid() to nn.LeakyRelu(), the NN executes properly.
Can anyone tell me why Sigmoid activation function is causing my backward computation to break?
| I found the problem with my code. I delved deeper into what in-place actually meant. So, if you check the line
conv_layer = self.linear_layers_(conv_layer)
linear_layers_ in the assignment is changing the values of conv_layer in place; as a result, the values needed for the backward pass are overwritten and gradient computation fails. An easy solution for this problem is to use the clone() function,
i.e.
conv_layer = self.linear_layers_(conv_layer).clone()
This creates a copy of the right-hand-side computation, so Autograd is able to keep a reference to it in the computation graph.
| https://stackoverflow.com/questions/60337608/ |
Validation loss for pytorch Faster-RCNN | I’m currently doing object detection on a custom dataset using transfer learning from a pytorch pretrained Faster-RCNN model (like in torchvision tutorial).
I would like to compute validation loss dict (as in train mode) at the end of each epoch.
I can just run model in train mode for validation like this:
model.train()
for images, targets in data_loader_val:
images = [image.to(device) for image in images]
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
with torch.no_grad():
val_loss_dict = model(images, targets)
print(val_loss_dict)
but I don't think that's the "correct" way to validate (because some special layers like dropout and batch norm work differently in eval/train mode). And in eval mode the model returns predicted bboxes (as expected). Can I use some built-in function for this?
Thanks.
| There was some discussion about this issue here. The conclusion there is that it is absolutely valid to calculate validation loss in train mode. The numerical value of the val loss in itself is not meaningful, only the trend is important to prevent overfitting. Therefore while train mode does alter the numerical value of the loss, it's still valid to use.
There is however another issue with efficiency here, in case you also need the model outputs in the validation process (for calculating IoU, accuracy, etc. as is often the case). Right now RCNN in torchvision gives you either losses or outputs, depending on training/eval mode.
UPDATE:
I realized this fix is not working unfortunately. All submodules would have to be patched to calculate both losses and outputs. Too bad.
My dirty solution was patching the GeneralizedRCNN class from which FasterRCNN inherits. The problem is in this line, in eager_outputs(). The workaround is to make it always return both:

return losses, detections

model = fasterrcnn_resnet50_fpn()
model.eager_outputs = eager_outputs_patch

Now you can get both outputs after a single inference run:

model.train()
with torch.no_grad():
    loss_dict, outputs = model(images, targets)  # yaay, now we have both!

Note that you still need to put your model in train mode in order to have the losses too. In eval mode GeneralizedRCNN's submodules (rpn, roi_heads) don't calculate any loss, and loss_dict is empty.
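The patch function itself is not shown above; a minimal sketch of what eager_outputs_patch could look like (my assumption based on the described workaround, not the original author's code):

def eager_outputs_patch(losses, detections):
    # unlike the stock GeneralizedRCNN.eager_outputs, always return both
    return losses, detections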
| https://stackoverflow.com/questions/60339336/ |
challenging special numpy operation | I have a NumPy array input and another array indices. I want to check which values of input appear in indices. Say that the i-th row input[i] has entries at positions j_1 < ... < j_n whose values belong to indices; I would like to replace each such value with a random value drawn from indices. How can I do it elegantly and efficiently?
Example:
input = [[1,2,3],[4,5,6],[7,8,9]]
indices = [2,4,6]
input --> [[1,4,3],[4,5,4],[7,8,9]]
| Something like this:
import random
input = [[i if i not in indices else random.choice(indices) for i in x] for x in input]
It's better, of course, to check membership in a set instead of a list:
import random
d = {*indices}
input = [[i if i not in d else random.choice(indices) for i in x] for x in input]
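Since input is a NumPy array, a vectorized alternative (my own sketch, not part of the answer above) avoids the Python-level loops:

import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
indices = np.array([2, 4, 6])

mask = np.isin(arr, indices)                             # True where a value must be replaced
arr[mask] = np.random.choice(indices, size=mask.sum())   # independent random draws from indices
print(arr)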
| https://stackoverflow.com/questions/60340357/ |
Best way to run a trained PyTorch LSTM/GRU model fully in the browser | I'm looking into running a trained PyTorch model (containing LSTM/GRU layers) fully in the browser (no backend) as part of an interactive blog post. I've looked at ONNX.js, and that works great, but not for a model containing a GRU layer. I saw someone comment on the ONNX.js github that Gated RNN's are not supported yet, but that was over half a year ago and I can't find any other information about this.
Other than that, it seems like the best option would be to just rewrite the model in Tensorflow and export to Tensorflow.js.
Is there an easier and more direct solution?
| There is this thread, that describes the options but is not receiving a lot of attention.
In summary, as of May 2020, there are only two options:
1) ONNX.js but its development is currently stale.
2) Converting the model to Tensorflow.
Technically there is a third option that requires no server: running the model in a mobile application.
| https://stackoverflow.com/questions/60340552/ |
Pytorch Runtime Error - The size of tensor a (5) must match the size of tensor b (3) at non-singleton dimension | I am trying to train a Faster RCNN Network on a custom dataset consisting of images for object detection. However, I don't want to directly give an RGB image as input, I actually need to pass it through another network (a feature extractor) along with the corresponding thermal image and give the extracted features as the input to the FRCNN Network. The feature extractor combines these two images into a 4 channel tensor and the output is a 5 channel tensor. It is this 5 channel tensor that I wish to give as input to the Faster RCNN Network.
I followed the PyTorch docs for Object Detection Finetuning (link here) and came up with the following code to suit my dataset.
class CustomDataset(torch.utils.data.Dataset):
def __getitem__(self, idx):
self.num_classes = 5
img_rgb_path = os.path.join(self.root, "rgb/", self.rgb_imgs[idx])
img_thermal_path = os.path.join(self.root, "thermal/", self.thermal_imgs[idx])
img_rgb = Image.open(img_rgb_path)
img_rgb = np.array(img_rgb)
x_rgb = TF.to_tensor(img_rgb)
x_rgb.unsqueeze_(0)
img_thermal = Image.open(img_thermal_path)
img_thermal = np.array(img_thermal)
img_thermal = np.expand_dims(img_thermal,-1)
x_th = TF.to_tensor(img_thermal)
x_th.unsqueeze_(0)
print(x_rgb.shape) # shape of [3,640,512]
print(x_th.shape) # shape of [1,640,512]
input = torch.cat((x_rgb,x_th),dim=1) # shape of [4,640,512]
img = self.feature_extractor(input) # My custom feature extractor which returns a 5 dimensional tensor
print(img.shape) # shape of [5,640,512]
filename = os.path.join(self.root,'annotations',self.annotations[idx])
tree = ET.parse(filename)
objs = tree.findall('object')
num_objs = len(objs)
boxes = np.zeros((num_objs, 4), dtype=np.uint16)
labels = np.zeros((num_objs), dtype=np.float32)
seg_areas = np.zeros((num_objs), dtype=np.float32)
boxes = []
for ix, obj in enumerate(objs):
bbox = obj.find('bndbox')
x1 = float(bbox.find('xmin').text)
y1 = float(bbox.find('ymin').text)
x2 = float(bbox.find('xmax').text)
y2 = float(bbox.find('ymax').text)
cls = self._class_to_ind[obj.find('name').text.lower().strip()]
boxes.append([x1, y1, x2, y2])
labels[ix] = cls
seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)
boxes = torch.as_tensor(boxes, dtype=torch.float32)
seg_areas = torch.as_tensor(seg_areas, dtype=torch.float32)
labels = torch.as_tensor(labels, dtype=torch.float32)
target = {'boxes': boxes,
'labels': labels,
'seg_areas': seg_areas,
}
return img,target
My main function code is as follows
import utils
def train_model(model, criterion,dataloader,num_epochs):
since = time.time()
best_model = model
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
optimizer = torch.optim.SGD(params, lr=0.005,
momentum=0.9, weight_decay=0.0005)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
step_size=3,
gamma=0.1)
# optimizer = lr_scheduler(optimizer, epoch)
model.train() # Set model to training mode
running_loss = 0.0
running_corrects = 0
for data in dataloader:
inputs, labels = data[0][0], data[1]
inputs = inputs.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs, labels)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
running_corrects += torch.sum(preds == labels).item()
epoch_loss = running_loss / len(dataloader)
epoch_acc = running_corrects / len(dataloader)
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
backbone = torchvision.models.mobilenet_v2(pretrained=True).features
backbone.out_channels = 1280
anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=[0],
output_size=7,
sampling_ratio=2)
num_classes = 5
model = FasterRCNN(backbone = backbone,num_classes=5,rpn_anchor_generator=anchor_generator,box_roi_pool=roi_pooler)
dataset = CustomDataset('train_folder/')
data_loader_train = torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=True,collate_fn=utils.collate_fn)
train_model(model, criterion, data_loader_train, num_epochs=10)
The collate_fn defined in the utils.py file is the following
def collate_fn(batch):
return tuple(zip(*batch))
I, however, get the following error while training
Traceback (most recent call last):
File "train.py", line 147, in <module>
train_model(model, criterion, data_loader_train, num_epochs)
File "train.py", line 58, in train_model
outputs = model(inputs, labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/generalized_rcnn.py", line 66, in forward
images, targets = self.transform(images, targets)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py", line 46, in forward
image = self.normalize(image)
File "/usr/local/lib/python3.6/dist-packages/torchvision/models/detection/transform.py", line 66, in normalize
return (image - mean[:, None, None]) / std[:, None, None]
RuntimeError: The size of tensor a (5) must match the size of tensor b (3) at non-singleton dimension 0
I am a newbie in Pytorch.
| The backbone network you are using for the FasterRCNN is a pretrained mobilenet_v2.
The input channel of a network is decided by the number of channels of the input data. Since the (backbone) model is pretrained (on natural images?) with 3 channels 3xNxM, you cannot use it for tensors of dimension 5xPxQ (skipping the singleton <batch_size> dimension).
Basically, you have 2 options,
1. Reduce the output channel dimension of the 1st network to 3 (better if you are training it from scratch)
2. Make a new backbone for the FasterRCNN with 5 channels in input and train it from scratch.
As for explaining the error message,
return (image - mean[:, None, None]) / std[:, None, None]
PyTorch is trying to normalize the input image, where your input image has dimension (5, M, N) but the mean and std tensors have 3 channels instead of 5.
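A minimal sketch of the 2nd option (an illustration under my own assumptions, not the original code): build a small 5-channel backbone from scratch and also give the detector 5-channel normalization statistics, since the failing normalize() call otherwise keeps its 3-channel defaults:

import torch.nn as nn
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# toy backbone trained from scratch on 5-channel input
backbone = nn.Sequential(
    nn.Conv2d(5, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
backbone.out_channels = 256

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))

model = FasterRCNN(
    backbone=backbone,
    num_classes=5,
    rpn_anchor_generator=anchor_generator,
    # the internal GeneralizedRCNNTransform must also get 5-channel statistics,
    # otherwise normalize() fails exactly as in the traceback above
    image_mean=[0.5, 0.5, 0.5, 0.5, 0.5],
    image_std=[0.5, 0.5, 0.5, 0.5, 0.5],
)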
| https://stackoverflow.com/questions/60342869/ |
Torch.sort and argsort sorting randomly in case of same element | When equal elements are encountered, torch.sort and argsort order them in a random (non-deterministic) manner.
This is not the case in numpy.
I have a list of elements already sorted according to the second column, and now I want to sort it using the first column but preserve the earlier order in case of a tie in the new sorting.
import torch
a = torch.tensor(
[[ 0., 3.],
[ 2., 3.],
[ 2., 2.],
[10., 2.],
[ 0., 2.],
[ 6., 2.],
[10., 1.],
[ 2., 1.],
[ 0., 1.],
[ 6., 1.],
[10., 0.],
[12., 0.]]
)
print(a[torch.argsort(a[:, 0])])
Output:
tensor([[ 0., 3.],
[ 0., 2.],
[ 0., 1.],
[ 2., 1.],
[ 2., 2.],
[ 2., 3.],
[ 6., 1.],
[ 6., 2.],
[10., 1.],
[10., 2.],
[10., 0.],
[12., 0.]])
Numpy:
import numpy as np
a = np.array(
[[ 0., 3.],
[ 2., 3.],
[ 2., 2.],
[10., 2.],
[ 0., 2.],
[ 6., 2.],
[10., 1.],
[ 2., 1.],
[ 0., 1.],
[ 6., 1.],
[10., 0.],
[12., 0.]]
)
print(a[np.argsort(a[:, 0])])
Output:
[[ 0. 3.]
[ 0. 2.]
[ 0. 1.]
[ 2. 3.]
[ 2. 2.]
[ 2. 1.]
[ 6. 2.]
[ 6. 1.]
[10. 2.]
[10. 1.]
[10. 0.]
[12. 0.]]
What could be the reason for this? And what can I do to avoid it?
| As per torch 1.9.0 you can run the sort with option stable=True. See https://pytorch.org/docs/1.9.0/generated/torch.sort.html?highlight=sort#torch.sort
>>> x = torch.tensor([0, 1] * 9)
>>> x.sort()
torch.return_types.sort(
values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
indices=tensor([ 2, 16, 4, 6, 14, 8, 0, 10, 12, 9, 17, 15, 13, 11, 7, 5, 3, 1]))
>>> x.sort(stable=True)
torch.return_types.sort(
values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
indices=tensor([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 1, 3, 5, 7, 9, 11, 13, 15, 17]))
The documentation says this is only on the CPU, but it will come to GPU sorting soon, since that documentation warning has been removed in the master branch at github (as per https://github.com/pytorch/pytorch/pull/61685)
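Applied to the tensor from the question (assuming torch >= 1.9), a stable sort on the first column preserves the earlier ordering by the second column among ties:

_, idx = torch.sort(a[:, 0], stable=True)
print(a[idx])  # rows with equal first-column values keep their previous order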
| https://stackoverflow.com/questions/60366033/ |
Install Pytorch GPU with pre-installed CUDA and cudnn | As the title suggests, I have pre-installed CUDA and cudnn (my Tensorflow is using them).
The version of CUDA is 10.0 from nvcc --version.
The version of cudnn is 7.4.
I am trying to install pytorch in a conda environment using conda install pytorch torchvision cudatoolkit=10.0 -c pytorch.
However, the installed pytorch does not detect my GPU successfully.
Does anyone know if there is a way to install GPU-version pytorch with a specific CUDA and cudnn version? I do not want to change CUDA and cudnn version because my Tensorflow is using them.
Any ideas would be appreciated!
| So I solved this myself finally. The issue is that I didn't reboot my system after installing pytorch. After rebooting, torch.cuda.is_available() returns True as expected.
| https://stackoverflow.com/questions/60368896/ |
Pythonic Nested for - loops in Python | I am working on this code where I have nested for loops. a_list and b_list are lists of tuples, where each tuple is made up of two tensors [(tens1, tens2), ...]. I am trying to compute the similarity of every tens1 in a_list to every tens1 in b_list. Below is the code I have, and the nested loop appears to be a bottleneck. Is there a better (more Pythonic) way to rewrite the loops?
a2b= defaultdict(dict)
b2a= defaultdict(dict)
ab_sim = []
for a, vec_a in a_list:
for b, vec_b in b_list:
# Ignore combination if the first element in both a and b are same
if a[0] == b[0]:
continue
# Calculate cosine similarity of combination
sim = self.calculate_similarity(vec_a, vec_b )
a2b[a][b] = sim
b2a[b][a] = sim
ab_sim.append(sim)
The calculate_similarity is just a method computing cosine similarity. a_list and b_list could be of any size. I have b2a and a2b because I need them for other computations.
| You could use a dictionary comprehension:
a2b = {a: {b: self.calculate_similarity(vec_a, vec_b )
for (b, vec_b) in b_list if a[0] != b[0]} for (a, vec_a) in a_list}
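Since the question also needs b2a and ab_sim, both can be derived from a2b afterwards without recomputing the similarities (a small sketch):

from collections import defaultdict

b2a = defaultdict(dict)
for a, inner in a2b.items():
    for b, sim in inner.items():
        b2a[b][a] = sim

ab_sim = [sim for inner in a2b.values() for sim in inner.values()]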
| https://stackoverflow.com/questions/60378598/ |
Indexing in two dimensional PyTorch Tensor using another Tensor | Suppose that tensor A is defined as:
1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
I'm trying to extract a flat array out of this matrix by using another tensor as indices. For example, if the second tensor is defined as:
0
1
2
3
I want the result of the indexing to be 1-D tensor with the contents:
1
6
11
16
It doesn't seem to behave like NumPy; I've tried A[:, B] but it just throws an error for not being able to allocate an insane amount of memory and I've no idea why!
| 1st Approach: using torch.gather
torch.gather(A, 1, B.unsqueeze(dim=1))
If you want a one-dimensional vector, you can add squeeze to the end (the out-of-place unsqueeze avoids modifying B between the two calls):
torch.gather(A, 1, B.unsqueeze(dim=1)).squeeze()
2nd Approach: using list comprehensions
You can use list comprehensions to select the items at specific indexes, then they can be concatenated using torch.stack. An important point here is that you should not use torch.tensor to create a new tensor from the list; if you do, you will break the chain (you cannot calculate the gradient through that node):
torch.stack([A[i, B[i]] for i in range(A.size()[0])])
| https://stackoverflow.com/questions/60399734/ |
TypeError: can't multiply sequence by non-int type of 'tuple' in pytorch | The code as below:
class L2Norm(nn.Module):
def __init__(self):
super(L2Norm, self).__init__()
self.eps = 1e-10
def forward(self, x):
norm = torch.sqrt(torch.sum(x * x, dim = 1) + self.eps)
x = x / norm.unsqueeze(-1).expand_as(x)
return x
I want to normalize the features. The input x is the output of nn.Linear() in FC layer (x = self.fc1(x)).
def forward(self, x):
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = x.view(x.size(0), -1)
x = self.fc1(x)
if self.feature:
return x
# x = self.last_bn(x)
x = self.fc2(x)
return x
I printed x; it looks like this:
(tensor([[-0.8409, 8.6126, -1.6639, ..., -3.3563, 10.0872, 2.4730],
[-1.3959, 0.5608, -0.9233, ..., 0.4385, -0.7089, -1.3401],
[ 0.5742, -3.8479, 1.7756, ..., -4.2798, -5.0684, -0.9032],
...,
[ 0.9205, 3.1602, -3.9247, ..., -2.1396, 4.0262, 2.8075],
[-0.2024, 0.5603, 0.0491, ..., -0.1716, -0.2513, 0.1179],
[ 4.8053, 0.3062, -1.6867, ..., -1.5749, 0.5193, 0.8671]],
device='cuda:0', grad_fn=<GatherBackward>), tensor([[-156.3423, -145.1505, -156.6586, ..., -157.9570, -141.1895,
-155.2964],
[ -31.9854, -30.2333, -31.1459, ..., -30.3290, -30.8740,
-31.8696],
[-141.5926, -144.1404, -141.1151, ..., -144.7264, -145.9508,
-141.7867],
...,
[-193.6224, -192.4931, -195.6285, ..., -194.2939, -191.7269,
-192.7527],
[ -5.8791, -4.5035, -5.6987, ..., -5.8316, -5.9696,
-5.6506],
[ -77.5002, -83.8829, -84.7204, ..., -84.6169, -83.8326,
-83.6949]], device='cuda:0', grad_fn=<GatherBackward>))
However, the error occurs in torch.sum(x*x, dim=1). Is there any solution for this?
| x is a tuple of two tensors, as shown in your output. x * x would require a way to multiply two tuples.
If I simply define x as a tuple of ints, e.g. x=(1, 1), and try the same code, x * x, I get the same error:
>>> x=(1,1)
>>> x * x
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't multiply sequence by non-int of type 'tuple'
# though tuple times an int does work:
>>> x * 3
(1, 1, 1, 1, 1, 1)
In your case, x should probably be a tensor, not a tuple.
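If the upstream module really does return a pair of tensors (as the printout suggests), one possible fix, purely as a sketch and assuming the first element is the feature tensor you want to normalize, is to unpack the tuple inside L2Norm:

def forward(self, x):
    if isinstance(x, tuple):
        x = x[0]  # keep only the feature tensor (assumption: the first element is the one to normalize)
    norm = torch.sqrt(torch.sum(x * x, dim=1) + self.eps)
    return x / norm.unsqueeze(-1).expand_as(x)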
| https://stackoverflow.com/questions/60416907/ |
how visualize multi channel of feature from PyTorch? | I'm almost a newbie at PyTorch.
One of my output sizes from a conv layer is [1, 25, 8, 32]
(25=channel, 8=height, 32=width)
I can use squeeze and make it [25, 8, 32].
But I'm confused about the 25 channels.
When I want to visualize the sum of the 25 channels and make one GRAY or RGB image (1 or 3 x 8 x 32), how can I deal with this in code?
I can use matplotlib or tensorboardX for visualizing.
| It is difficult to visualize images with more than 3 channels and it is unclear what a feature vector in 25 dimensional space actually looks like.
The most straightforward approach would be to visualize the 8x32 feature maps you have as 25 separate gray-scale images of size 8x32. Each image will show how "sensitive" a specific neuron/conv filter/channel (these are all equivalent) is to the input at a certain spatial location.
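A minimal sketch with matplotlib (feature below stands in for your [1, 25, 8, 32] conv output):

import torch
import matplotlib.pyplot as plt

feature = torch.randn(1, 25, 8, 32)         # stand-in for the conv output
fmap = feature.squeeze(0).detach().cpu()    # -> [25, 8, 32]

fig, axes = plt.subplots(5, 5, figsize=(12, 6))
for i, ax in enumerate(axes.flat):
    ax.imshow(fmap[i], cmap='gray')
    ax.set_title(f'channel {i}', fontsize=6)
    ax.axis('off')
plt.tight_layout()
plt.show()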
There are more intricate methods for feature visualization, you can find more details about them in this blog post.
| https://stackoverflow.com/questions/60425609/ |
How to change certain values in a torch tensor based on an index in another torch tensor? | This is an issue I'm running into while converting DQN to Double DQN for the cartpole problem. I'm getting close to figuring it out.
tensor([0.1205, 0.1207, 0.1197, 0.1195, 0.1204, 0.1205, 0.1208, 0.1199, 0.1206,
0.1199, 0.1204, 0.1205, 0.1199, 0.1204, 0.1204, 0.1203, 0.1198, 0.1198,
0.1205, 0.1204, 0.1201, 0.1205, 0.1208, 0.1202, 0.1205, 0.1203, 0.1204,
0.1205, 0.1206, 0.1206, 0.1205, 0.1204, 0.1201, 0.1206, 0.1206, 0.1199,
0.1198, 0.1200, 0.1206, 0.1207, 0.1208, 0.1202, 0.1201, 0.1210, 0.1208,
0.1205, 0.1205, 0.1201, 0.1193, 0.1201, 0.1205, 0.1207, 0.1207, 0.1195,
0.1210, 0.1204, 0.1209, 0.1207, 0.1187, 0.1202, 0.1198, 0.1202])
tensor([ True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, False, True, True, True,
True, True, True, True, True, True, True, False, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True])
As you can see, there are two tensors here.
The first has the Q-values I want, but some values need to be changed to zero because they correspond to an end state.
The second tensor shows where the zeros should go: at every index where the Boolean value is False, the corresponding entry in the first tensor needs to be zero.
I am not sure how to do that.
| You can use torch.where - torch.where(condition, x, y)
Ex.:
>>> x = tensor([0.2853, 0.5010, 0.9933, 0.5880, 0.3915, 0.0141, 0.7745,
0.0588, 0.4939, 0.0849])
>>> condition = tensor([False, True, True, True, False, False, True,
False, False, False])
>>> # It's equivalent to `torch.where(condition, x, tensor(0.0))`
>>> x.where(condition, tensor(0.0))
tensor([0.0000, 0.5010, 0.9933, 0.5880, 0.0000, 0.0000, 0.7745,
0.0000, 0.0000,0.0000])
| https://stackoverflow.com/questions/60442272/ |
How forward() method is used when it have more than one two input parameters in pytorch | Can someone tell me the concept behind the multiple parameters in forward() method?
Generally, the implementation of forward() method has two parameters
self
input
If a forward method has more than these parameters, how does PyTorch use the forward method?
Let's consider this codebase:
https://github.com/bamps53/kaggle-autonomous-driving2019/blob/master/models/centernet.py
Here, on line 236, the authors have used the forward method with two more parameters:
centers
return_embeddings
I am unable to find a single article that answers my question about under what conditions line 254 (return_embeddings:) and line 257 (if centers is not None:) will execute. As far as I know, the forward method is called internally by the nn module. Can someone please shed some light on this?
| The forward function is defined by you, which means you can add as many parameters as you want. For example, you could add inputs as shown below:
def forward(self, input1, input2, input3):
x = self.layer1(input1)
y = self.layer2(input2)
z = self.layer3(input3)
net = torch.cat((x,y,z),1)
return net
You have to control your parameters while feeding the network. A single layer cannot be fed more than one input tensor, so you need to extract features from each input one by one and concatenate them with torch.cat((x, y, z), 1) (1 is the dimension).
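At call time, nn.Module.__call__ simply passes all positional and keyword arguments on to forward, so a module with the three-input forward above is used like this (a sketch; the layer sizes are made up):

import torch
import torch.nn as nn

class ThreeInputNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(10, 4)
        self.layer2 = nn.Linear(10, 4)
        self.layer3 = nn.Linear(10, 4)

    def forward(self, input1, input2, input3):
        x = self.layer1(input1)
        y = self.layer2(input2)
        z = self.layer3(input3)
        return torch.cat((x, y, z), 1)

net = ThreeInputNet()
out = net(torch.randn(2, 10), torch.randn(2, 10), torch.randn(2, 10))  # arguments go straight to forward()
print(out.shape)  # torch.Size([2, 12])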
| https://stackoverflow.com/questions/60463821/ |
Training TFBertForSequenceClassification with custom X and Y data | I am working on a TextClassification problem, for which I am trying to train my model on TFBertForSequenceClassification from the huggingface-transformers library.
I followed the example given on their github page, I am able to run the sample code with given sample data using tensorflow_datasets.load('glue/mrpc').
However, I am unable to find an example on how to load my own custom data and pass it in
model.fit(train_dataset, epochs=2, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7).
How can I define my own X, do tokenization of my X, and prepare train_dataset with my X and Y, where X represents my input text and Y represents the classification category of a given X?
Sample Training dataframe :
text category_index
0 Assorted Print Joggers - Pack of 2 ,/ Gray Pri... 0
1 "Buckle" ( Matt ) for 35 mm Width Belt 0
2 (Gagam 07) Barcelona Football Jersey Home 17 1... 2
3 (Pack of 3 Pair) Flocklined Reusable Rubber Ha... 1
4 (Summer special Offer)Firststep new born baby ... 0
| Fine Tuning Approach
There are multiple approaches to fine-tune BERT for the target tasks.
Further Pre-training the base BERT model
Custom classification layer(s) on top of the base BERT model being trainable
Custom classification layer(s) on top of the base BERT model being non-trainable (frozen)
Note that the BERT base model has been pre-trained only for two tasks as in the original paper.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
3.1 Pre-training BERT ...we pre-train BERT using two unsupervised tasks
Task #1: Masked LM
Task #2: Next Sentence Prediction (NSP)
Hence, the base BERT model is like half-baked which can be fully baked for the target domain (1st way). We can use it as part of our custom model training with the base trainable (2nd) or not-trainable (3rd).
1st approach
How to Fine-Tune BERT for Text Classification? demonstrated the 1st approach of Further Pre-training, and pointed out the learning rate is the key to avoid Catastrophic Forgetting where the pre-trained knowledge is erased during learning of new knowledge.
We find that a lower learning rate, such as 2e-5,
is necessary to make BERT overcome the catastrophic forgetting problem. With an aggressive learn rate of 4e-4, the training set fails to converge.
Probably this is the reason why the BERT paper used 5e-5, 4e-5, 3e-5, and 2e-5 for fine-tuning.
We use a batch size of 32 and fine-tune for 3 epochs over the data for all GLUE tasks. For each task, we selected the best fine-tuning learning rate (among 5e-5, 4e-5, 3e-5, and 2e-5) on the Dev set
Note that the base model pre-training itself used higher learning rate.
bert-base-uncased - pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, β1=0.9 and β2=0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Will describe the 1st way as part of the 3rd approach below.
FYI:
TFDistilBertModel is the bare base model with the name distilbert.
Model: "tf_distil_bert_model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
distilbert (TFDistilBertMain multiple 66362880
=================================================================
Total params: 66,362,880
Trainable params: 66,362,880
Non-trainable params: 0
2nd approach
Huggingface takes the 2nd approach as in Fine-tuning with native PyTorch/TensorFlow where TFDistilBertForSequenceClassification has added the custom classification layer classifier on top of the base distilbert model being trainable. The small learning rate requirement will apply as well to avoid the catastrophic forgetting.
from transformers import TFDistilBertForSequenceClassification
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased')
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=model.compute_loss) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)
Model: "tf_distil_bert_for_sequence_classification_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
distilbert (TFDistilBertMain multiple 66362880
_________________________________________________________________
pre_classifier (Dense) multiple 590592
_________________________________________________________________
classifier (Dense) multiple 1538
_________________________________________________________________
dropout_59 (Dropout) multiple 0
=================================================================
Total params: 66,955,010
Trainable params: 66,955,010 <--- All parameters are trainable
Non-trainable params: 0
Implementation of the 2nd approach
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import (
DistilBertTokenizerFast,
TFDistilBertForSequenceClassification,
)
DATA_COLUMN = 'text'
LABEL_COLUMN = 'category_index'
MAX_SEQUENCE_LENGTH = 512
LEARNING_RATE = 5e-5
BATCH_SIZE = 16
NUM_EPOCHS = 3
# --------------------------------------------------------------------------------
# Tokenizer
# --------------------------------------------------------------------------------
tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'):
"""Tokenize using the Huggingface tokenizer
Args:
sentences: String or list of string to tokenize
padding: Padding method ['do_not_pad'|'longest'|'max_length']
"""
return tokenizer(
sentences,
truncation=True,
padding=padding,
max_length=max_length,
return_tensors="tf"
)
# --------------------------------------------------------------------------------
# Load data
# --------------------------------------------------------------------------------
raw_train = pd.read_csv("./train.csv")
train_data, validation_data, train_label, validation_label = train_test_split(
raw_train[DATA_COLUMN].tolist(),
raw_train[LABEL_COLUMN].tolist(),
test_size=.2,
shuffle=True
)
# --------------------------------------------------------------------------------
# Prepare TF dataset
# --------------------------------------------------------------------------------
train_dataset = tf.data.Dataset.from_tensor_slices((
dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary
train_label
)).shuffle(1000).batch(BATCH_SIZE).prefetch(1)
validation_dataset = tf.data.Dataset.from_tensor_slices((
dict(tokenize(validation_data)),
validation_label
)).batch(BATCH_SIZE).prefetch(1)
# --------------------------------------------------------------------------------
# training
# --------------------------------------------------------------------------------
model = TFDistilBertForSequenceClassification.from_pretrained(
'distilbert-base-uncased',
num_labels=NUM_LABELS
)
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(
x=train_dataset,
y=None,
validation_data=validation_dataset,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
)
3rd approach
Basics
Please note that the images are taken from A Visual Guide to Using BERT for the First Time and modified.
Tokenizer
Tokenizer generates the instance of BatchEncoding which can be used like a Python dictionary and the input to the BERT model.
BatchEncoding
Holds the output of the encode_plus() and batch_encode() methods (tokens, attention_masks, etc).
This class is derived from a python dictionary and can be used as a dictionary. In addition, this class exposes utility methods to map from word/character space to token space.
Parameters
data (dict) – Dictionary of lists/arrays/tensors returned by the encode/batch_encode methods (‘input_ids’, ‘attention_mask’, etc.).
The data attribute of the class is the tokens generated which has input_ids and attention_mask elements.
input_ids
The input ids are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model.
attention_mask
Attention mask
This argument indicates to the model which tokens should be attended to, and which should not.
If the attention_mask is 0, the token id is ignored. For instance if a sequence is padded to adjust the sequence length, the padded words should be ignored hence their attention_mask are 0.
Special Tokens
BertTokenizer addes special tokens, enclosing a sequence with [CLS] and [SEP]. [CLS] represents Classification and [SEP] separates sequences. For Question Answer or Paraphrase tasks, [SEP] separates the two sentences to compare.
BertTokenizer
cls_token (str, optional, defaults to "[CLS]")The Classifier Token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens.
sep_token (str, optional, defaults to "[SEP]")The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens.
A Visual Guide to Using BERT for the First Time shows the tokenization.
[CLS]
The embedding vector for [CLS] in the output from the base model final layer represents the classification that has been learned by the base model. Hence feed the embedding vector of [CLS] token into the classification layer added on top of the base model.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
The first token of every sequence is always a special classification token ([CLS]). The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks. Sentence pairs are packed together into a single sequence. We differentiate the sentences in two ways. First, we separate them with a special token ([SEP]). Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B.
The model structure will be illustrated as below.
Vector size
In the model distilbert-base-uncased, each token is embedded into a vector of size 768. The shape of the output from the base model is (batch_size, max_sequence_length, embedding_vector_size=768). This accords with the BERT paper about the BERT/BASE model (as indicated in distilbert-base-uncased).
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
BERT/BASE (L=12, H=768, A=12, Total Parameters=110M) and BERT/LARGE (L=24, H=1024, A=16, Total Parameters=340M).
Base Model - TFDistilBertModel
Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks
TFDistilBertModel class to instantiate the base DistilBERT model without any specific head on top (as opposed to other classes such as TFDistilBertForSequenceClassification that do have an added classification head).
We do not want any task-specific head attached because we simply want the pre-trained weights of the base model to provide a general understanding of the English language, and it will be our job to add our own classification head during the fine-tuning process in order to help the model distinguish between toxic comments.
TFDistilBertModel generates an instance of TFBaseModelOutput whose last_hidden_state parameter is the output from the model last layer.
TFBaseModelOutput([(
'last_hidden_state',
<tf.Tensor: shape=(batch_size, sequence_lendgth, 768), dtype=float32, numpy=array([[[...]]], dtype=float32)>
)])
TFBaseModelOutput
Parameters
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) – Sequence of hidden-states at the output of the last layer of the model.
Implementation
Python modules
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from transformers import (
DistilBertTokenizerFast,
TFDistilBertModel,
)
Configuration
TIMESTAMP = datetime.datetime.now().strftime("%Y%b%d%H%M").upper()
DATA_COLUMN = 'text'
LABEL_COLUMN = 'category_index'
MAX_SEQUENCE_LENGTH = 512 # Max length allowed for BERT is 512.
NUM_LABELS = len(raw_train[LABEL_COLUMN].unique())
MODEL_NAME = 'distilbert-base-uncased'
NUM_BASE_MODEL_OUTPUT = 768
# Flag to freeze base model
FREEZE_BASE = True
# Flag to add custom classification heads
USE_CUSTOM_HEAD = True
if USE_CUSTOM_HEAD == False:
# Make the base trainable when no classification head exists.
FREEZE_BASE = False
BATCH_SIZE = 16
LEARNING_RATE = 1e-2 if FREEZE_BASE else 5e-5
L2 = 0.01
Tokenizer
tokenizer = DistilBertTokenizerFast.from_pretrained(MODEL_NAME)
def tokenize(sentences, max_length=MAX_SEQUENCE_LENGTH, padding='max_length'):
"""Tokenize using the Huggingface tokenizer
Args:
sentences: String or list of string to tokenize
padding: Padding method ['do_not_pad'|'longest'|'max_length']
"""
return tokenizer(
sentences,
truncation=True,
padding=padding,
max_length=max_length,
return_tensors="tf"
)
Input layer
The base model expects input_ids and attention_mask whose shape is (max_sequence_length,). Generate Keras Tensors for them with Input layer respectively.
# Inputs for token indices and attention masks
input_ids = tf.keras.layers.Input(shape=(MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='input_ids')
attention_mask = tf.keras.layers.Input((MAX_SEQUENCE_LENGTH,), dtype=tf.int32, name='attention_mask')
Base model layer
Generate the output from the base model. The base model generates TFBaseModelOutput. Feed the embedding of [CLS] to the next layer.
base = TFDistilBertModel.from_pretrained(
MODEL_NAME,
num_labels=NUM_LABELS
)
# Freeze the base model weights.
if FREEZE_BASE:
for layer in base.layers:
layer.trainable = False
base.summary()
# [CLS] embedding is last_hidden_state[:, 0, :]
output = base([input_ids, attention_mask]).last_hidden_state[:, 0, :]
Classification layers
if USE_CUSTOM_HEAD:
# -------------------------------------------------------------------------------
# Classifiation leayer 01
# --------------------------------------------------------------------------------
output = tf.keras.layers.Dropout(
rate=0.15,
name="01_dropout",
)(output)
output = tf.keras.layers.Dense(
units=NUM_BASE_MODEL_OUTPUT,
kernel_initializer='glorot_uniform',
activation=None,
name="01_dense_relu_no_regularizer",
)(output)
output = tf.keras.layers.BatchNormalization(
name="01_bn"
)(output)
output = tf.keras.layers.Activation(
"relu",
name="01_relu"
)(output)
# --------------------------------------------------------------------------------
# Classifiation leayer 02
# --------------------------------------------------------------------------------
output = tf.keras.layers.Dense(
units=NUM_BASE_MODEL_OUTPUT,
kernel_initializer='glorot_uniform',
activation=None,
name="02_dense_relu_no_regularizer",
)(output)
output = tf.keras.layers.BatchNormalization(
name="02_bn"
)(output)
output = tf.keras.layers.Activation(
"relu",
name="02_relu"
)(output)
Softmax Layer
output = tf.keras.layers.Dense(
units=NUM_LABELS,
kernel_initializer='glorot_uniform',
kernel_regularizer=tf.keras.regularizers.l2(l2=L2),
activation='softmax',
name="softmax"
)(output)
Final Custom Model
name = f"{TIMESTAMP}_{MODEL_NAME.upper()}"
model = tf.keras.models.Model(inputs=[input_ids, attention_mask], outputs=output, name=name)
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
metrics=['accuracy']
)
model.summary()
---
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_ids (InputLayer) [(None, 256)] 0
__________________________________________________________________________________________________
attention_mask (InputLayer) [(None, 256)] 0
__________________________________________________________________________________________________
tf_distil_bert_model (TFDistilB TFBaseModelOutput(la 66362880 input_ids[0][0]
attention_mask[0][0]
__________________________________________________________________________________________________
tf.__operators__.getitem_1 (Sli (None, 768) 0 tf_distil_bert_model[1][0]
__________________________________________________________________________________________________
01_dropout (Dropout) (None, 768) 0 tf.__operators__.getitem_1[0][0]
__________________________________________________________________________________________________
01_dense_relu_no_regularizer (D (None, 768) 590592 01_dropout[0][0]
__________________________________________________________________________________________________
01_bn (BatchNormalization) (None, 768) 3072 01_dense_relu_no_regularizer[0][0
__________________________________________________________________________________________________
01_relu (Activation) (None, 768) 0 01_bn[0][0]
__________________________________________________________________________________________________
02_dense_relu_no_regularizer (D (None, 768) 590592 01_relu[0][0]
__________________________________________________________________________________________________
02_bn (BatchNormalization) (None, 768) 3072 02_dense_relu_no_regularizer[0][0
__________________________________________________________________________________________________
02_relu (Activation) (None, 768) 0 02_bn[0][0]
__________________________________________________________________________________________________
softmax (Dense) (None, 2) 1538 02_relu[0][0]
==================================================================================================
Total params: 67,551,746
Trainable params: 1,185,794
Non-trainable params: 66,365,952 <--- Base BERT model is frozen
Data allocation
# --------------------------------------------------------------------------------
# Split data into training and validation
# --------------------------------------------------------------------------------
raw_train = pd.read_csv("./train.csv")
train_data, validation_data, train_label, validation_label = train_test_split(
raw_train[DATA_COLUMN].tolist(),
raw_train[LABEL_COLUMN].tolist(),
test_size=.2,
shuffle=True
)
# X = dict(tokenize(train_data))
# Y = tf.convert_to_tensor(train_label)
X = tf.data.Dataset.from_tensor_slices((
dict(tokenize(train_data)), # Convert BatchEncoding instance to dictionary
train_label
)).batch(BATCH_SIZE).prefetch(1)
V = tf.data.Dataset.from_tensor_slices((
dict(tokenize(validation_data)), # Convert BatchEncoding instance to dictionary
validation_label
)).batch(BATCH_SIZE).prefetch(1)
Train
# --------------------------------------------------------------------------------
# Train the model
# https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
# Input data x can be a dict mapping input names to the corresponding array/tensors,
# if the model has named inputs. Beware of the "names". y should be consistent with x
# (you cannot have Numpy inputs and tensor targets, or inversely).
# --------------------------------------------------------------------------------
history = model.fit(
x=X, # dictionary
# y=Y,
y=None,
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
validation_data=V,
)
To implement the 1st approach, change the configuration as below.
USE_CUSTOM_HEAD = False
Then FREEZE_BASE is changed to False and LEARNING_RATE is changed to 5e-5 which will run Further Pre-training on the base BERT model.
Saving the model
For the 3rd approach, saving the model will cause issues. The save_pretrained method of the Huggingface Model cannot be used, as the model is not a direct subclass of the Huggingface PreTrainedModel.
Keras save_model causes an error with the default save_traces=True, or causes a different error with save_traces=False when loading the model with Keras load_model.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-71-01d66991d115> in <module>()
----> 1 tf.keras.models.load_model(MODEL_DIRECTORY)
11 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/saved_model/load.py in _unable_to_call_layer_due_to_serialization_issue(layer, *unused_args, **unused_kwargs)
865 'recorded when the object is called, and used when saving. To manually '
866 'specify the input shape/dtype, decorate the call function with '
--> 867 '`@tf.function(input_signature=...)`.'.format(layer.name, type(layer)))
868
869
ValueError: Cannot call custom layer tf_distil_bert_model of type <class 'tensorflow.python.keras.saving.saved_model.load.TFDistilBertModel'>, because the call function was not serialized to the SavedModel.Please try one of the following methods to fix this issue:
(1) Implement `get_config` and `from_config` in the layer/model class, and pass the object to the `custom_objects` argument when loading the model. For more details, see: https://www.tensorflow.org/guide/keras/save_and_serialize
(2) Ensure that the subclassed model or layer overwrites `call` and not `__call__`. The input shape and dtype will be automatically recorded when the object is called, and used when saving. To manually specify the input shape/dtype, decorate the call function with `@tf.function(input_signature=...)`.
Only Keras Model save_weights worked as far as I tested.
Experiments
As far as I tested on the Toxic Comment Classification Challenge, the 1st approach gave better recall (identifying true toxic comments and true non-toxic comments). The code can be accessed below. Please provide corrections/suggestions if you find anything.
Code for 1st and 3rd approach
Related
BERT Document Classification Tutorial with Code - Fine tuning using TFDistilBertForSequenceClassification and Pytorch
Hugging Face Transformers: Fine-tuning DistilBERT for Binary Classification Tasks - Fine tuning using TFDistilBertModel
| https://stackoverflow.com/questions/60463829/ |
Pairwise similarity matrix between a set of vectors in PyTorch | Let's suppose that we have a 3D PyTorch tensor, where the first dimension represents the batch_size, as follows:
import torch
import torch.nn as nn
x = torch.randn(32, 100, 25)
That is, for each i, x[i] is a set of 100 25-dimensional vectors. I would like to compute the similarity (e.g., the cosine similarity -- but in general any such pairwise distance/similarity matrix) of these vectors for each batch item.
That is, for each x[i] I need to compute a [100, 100] matrix which will contain the pairwise similarities of the above vectors. More specifically, the (i,j)-th element of this matrix should contain the similarity (or the distance) between the i-th and the j-th row of (the 100x25) x[t], for all t=1, ..., batch_size.
If I use torch.nn.CosineSimilarity(), no matter what dim I'm using, the result is either [100, 25] (dim=0), or [32, 25] (dim=1) , where I need a tensor of size [32, 100, 100]. I would expect torch.nn.CosineSimilarity() to work this way (since, at least to me, it looks more intuitive), but it doesn't.
Could that be done using something like below?
torch.matmul(x, x.permute(0, 2, 1))
I guess that this could give a distance matrix, but what if I need an arbitrary pairwise operation? Should I build this operation using the above?
Or maybe should I repeat x in a way so I can use the built-in torch.nn.CosineSimilarity()?
Thank you.
| The documentation implies that the shapes of the inputs to cosine_similarity must be equal but this is not the case. Internally PyTorch broadcasts via torch.mul, inserting a dimension with a slice (or torch.unsqueeze) will give you the desired result. This is not optimal due to duplicate computations and memory for the upper and lower triangles but it's simple:
import torch
from torch.nn import functional as F
from scipy.spatial import distance
# compute once in pytorch
x = torch.randn(32, 100, 25)
y = F.cosine_similarity(x[..., None, :, :], x[..., :, None, :], dim=-1)
assert y.shape == torch.Size([32, 100, 100])
# test against scipy by iterating over each batch element
z = []
for i in range(x.shape[0]):
slice = x[i, ...].numpy()
z.append(torch.tensor(distance.cdist(slice, slice, metric='cosine'), dtype=torch.float32))
# convert similarity to distance and ensure they're reasonably close
assert torch.allclose(torch.stack(z), 1.0-y)
| https://stackoverflow.com/questions/60467264/ |
pytorch model summary - forward func has more than one argument | I am using torch summary
from torchsummary import summary
I want to pass more than one argument when printing the model summary, but the examples mentioned here: Model summary in pytorch taken only one argument. for e.g.:
model = Network().to(device)
summary(model,(1,28,28))
The reason is that the forward function takes two arguments as input, e.g.:
def forward(self, img1, img2):
How do I pass two arguments here?
| You can use the example given here: pytorch summary multiple inputs
summary(model, [(1, 16, 16), (1, 28, 28)])
| https://stackoverflow.com/questions/60480686/ |
creating a common embedding for two languages | My task deals with multiple languages (English and Hindi). For that I need a common embedding to represent both languages.
I know there are methods for learning multilingual embeddings like 'MUSE', but these represent the two embeddings in a common vector space; obviously they are similar, but not the same.
So I wanted to know if there is any method or approach that can learn to represent both embedding in form of a single embedding that represents the both the language.
Any lead is strongly appreciated!!!
| I think a good lead would be to look at past work that has been done in the field. A good overview to start with is Sebastian Ruder's talk, which gives you a multitude of approaches, depending on the level of information you have about your source/target language. This is basically what MUSE is doing, and I'm relatively sure that it is considered state-of-the-art.
The basic idea in most approaches is to map embedding spaces such that you minimize some (usually Euclidean) distance between the two (see p. 16 of the link). This obviously works best if you have a known dictionary and can precisely map the different translations, and works even better if the two languages have similar linguistic properties (not so sure about Hindi and English, to be honest).
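To make the "minimize some distance" idea concrete, here is a tiny sketch (my own illustration, not MUSE's actual code) of the classic Procrustes alignment that many of these methods use, given a seed dictionary of matched word pairs:

import numpy as np

X = np.random.randn(5000, 300)   # source-language vectors for the dictionary pairs
Y = np.random.randn(5000, 300)   # target-language vectors for the same pairs

U, _, Vt = np.linalg.svd(Y.T @ X)
W = U @ Vt                       # orthogonal map minimizing ||X W^T - Y||_F
X_mapped = X @ W.T               # source vectors moved into the target space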
Another recent approach is the one by Multilingual-BERT (mBERT), or similarly, XLM-RoBERTa, but those learn embeddings based on a shared vocabulary. This might again be less desirable if you have morphologically dissimilar languages, and also has the drawback that they incorporate a bunch of other, unrelated, languages.
Otherwise, I'm unclear on what exactly you are expecting from a "common embedding", but happy to extend the answer once clarified.
| https://stackoverflow.com/questions/60481990/ |
How to normalize images in PyTorch | transform = transforms.Compose([
transforms.ToTensor()
])
trainset = torchvision.datasets.ImageFolder(root='C:/Users/beomseokpark/Desktop/CNN/train_data', transform = transform)
data_loader = DataLoader(dataset = trainset, batch_size = 8, shuffle = True, num_workers=2)
with torch.no_grad():
for num, data in enumerate(trainset):
imgs, label = data
I loaded images with ImageFolder from the torchvision library. How can I get the mean and std of each channel of my images?
Can anyone please help me out?
| There's the "lazy man" approach: You can simply plug a nn.BatchNorm2d as the very first layer of your network. With the appropriate momentum, and track_running_stats=True this layer will estimate your data's mean and variance for you.
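A minimal sketch of that idea (the channel count and momentum value here are placeholders):
import torch.nn as nn

model = nn.Sequential(
    nn.BatchNorm2d(3, momentum=0.01, track_running_stats=True),  # estimates the input statistics
    # ... the rest of your network goes here
)
# after some forward passes in train() mode, model[0].running_mean and
# model[0].running_var hold the estimated per-channel mean and variance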
Alternatively, you can compute the mean and variance using
mu = torch.zeros((3,), dtype=torch.float)
sig = torch.zeros((3,), dtype=torch.float)
n = 0
with torch.no_grad():
for num, data in enumerate(trainset):
        imgs, _ = data  # each item from ImageFolder is a single (3, H, W) tensor
        mu += torch.sum(imgs, dim=(1, 2))
        sig += torch.sum(imgs**2, dim=(1, 2))
        n += imgs.numel() // imgs.shape[0]  # H*W pixels per channel
n = float(n)
mu = mu / n  # per-channel mean
sig = sig / n - (mu ** 2)  # per-channel variance; take sig.sqrt() for the std
| https://stackoverflow.com/questions/60485362/ |
PyTorch FasterRCNN TypeError: forward() takes 2 positional arguments but 3 were given | I am working on object detection and I have a dataset containing images and their corresponding bounding boxes (ground-truth values).
I actually have built my own feature extractor which takes an image as input and outputs a feature map(basically an encoder-decoder system where the final output of the decoder is the same as the image size and has 3 channels). Now, I want to feed this feature map as an input to a FasterRCNN model for detection instead of the original image. I am using the following code to add the feature map(using RTFNet to generate feature map - code at this link) on top the FRCNN detection module
frcnn_model = fasterrcnn_resnet50_fpn(pretrained=True)
in_features = frcnn_model.roi_heads.box_predictor.cls_score.in_features
frcnn_model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
fpn_block = frcnn_model.backbone.fpn
rpn_block = frcnn_model.rpn
backbone = RTFNet(num_classes)  # RTFNet is a feature extractor that takes a 4-channel image (fused RGB and thermal) as input
model = nn.Sequential(backbone, nn.ReLU(inplace=True))
model = nn.Sequential(model,fpn_block)
model = nn.Sequential(model,rpn_block)
model = nn.Sequential(model,FastRCNNPredictor(in_features, num_classes))
I am just trying to test and see if it is working by using the following code which generates random images and bounding boxes
images, boxes = torch.rand(1, 4, 512, 640), torch.rand(4, 11, 4)
labels = torch.randint(1, num_classes, (4, 11))
images = list(image for image in images)
targets = []
for i in range(len(images)):
d = {}
d['boxes'] = boxes[i]
d['labels'] = labels[i]
targets.append(d)
output = model(images, targets)
Running this gives me the following error
TypeError Traceback (most recent call last)
<ipython-input-22-2637b8c27ad2> in <module>()
----> 1 output = model(images, targets)
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
530 result = self._slow_forward(*input, **kwargs)
531 else:
--> 532 result = self.forward(*input, **kwargs)
533 for hook in self._forward_hooks.values():
534 hook_result = hook(self, input, result)
TypeError: forward() takes 2 positional arguments but 3 were given
However, when I replace my model with a normal FasterRCNN Model with the following,
model = fasterrcnn_resnet50_fpn(pretrained=True)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
there is no error and it works fine
Can anyone let me know where I am going wrong? Thanks in advance
| This is because only the image inputs should be passed into the model, not both the images and the ground-truth targets. So instead of doing output = model(images, targets), you can do output = model(images).
As for why the error message talks about being given 3 positional arguments: forward is defined with the implicit self parameter, which represents the class instance. So in addition to self, you should only pass 1 more argument, which would be the input image.
| https://stackoverflow.com/questions/60513469/ |
Why torch.FloatTensor([[[0,1,2],[3,4,5]],[[6,7,8],[9,10,11]]]) size is [2,2,3]? | >>> ft = torch.FloatTensor([[[0,1,2],[3,4,5]],[[6,7,8],[9,10,11]]])
>>> print(ft.shape)
torch.Size([2, 2, 3])
I can't understand this result.
I think the torch size should be [2,3,2], but the result is [2,2,3].
| Because
len([[[0,1,2],[3,4,5]],[[6,7,8],[9,10,11]]]) = 2
This is the first 2.
and each item inside:
len([[0,1,2],[3,4,5]]) = 2
This is the second 2.
and each item inside:
len([0,1,2]) = 3
This is the 3.
| https://stackoverflow.com/questions/60519646/ |
Pytorch - select region of a tensor using torch function | I am looking for a way to select a region of a PyTorch tensor using a torch function (without using numpy). Do you have suggestions on how to proceed?
In other words, I'm looking for a way to crop a region of a matrix. Using numpy, it would be something like
import numpy as np
A = np.random.rand(16,16)
B = A[0:8, 0:8]
The approach I am trying is the following:
from torchvision import transforms
A = torch.randn([1,3,64,64])
B = torch.split(A, [16,32,16], dim =2)
C = torch.split(B, [16,32,16], dim =3)
Which gives the error
'tuple' object has no attribute 'split'
| What's wrong with regular slicing? (Your attempt fails because torch.split returns a tuple of tensors, so the second torch.split call receives a tuple rather than a tensor.)
import torch
A = torch.randn([1,3,64,64])
B = A[..., 16:32, 16:32]
| https://stackoverflow.com/questions/60527036/ |
What is the difference between sample() and rsample()? | When I sample from a distribution in PyTorch, both sample and rsample appear to give similar results:
import torch, seaborn as sns
x = torch.distributions.Normal(torch.tensor([0.0]), torch.tensor([1.0]))
sns.distplot(x.sample((100000,)))
sns.distplot(x.rsample((100000,)))
When should I use sample(), and when should I use rsample()?
| Using rsample allows for pathwise derivatives:
The other way to implement these stochastic/policy gradients would be to use the reparameterization trick from the rsample() method, where the parameterized random variable can be constructed via a parameterized deterministic function of a parameter-free random variable. The reparameterized sample therefore becomes differentiable.
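A minimal check of the difference (the distribution and its parameters here are arbitrary):
import torch

mu = torch.tensor([0.0], requires_grad=True)
sigma = torch.tensor([1.0], requires_grad=True)
dist = torch.distributions.Normal(mu, sigma)

z = dist.rsample()   # reparameterized: z = mu + sigma * eps with eps ~ N(0, 1)
z.sum().backward()   # gradients flow back to mu and sigma
print(mu.grad, sigma.grad)

# dist.sample() returns a tensor that is detached from mu and sigma,
# so calling backward() through it raises an error.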
| https://stackoverflow.com/questions/60533150/ |
getting the classification labels for torchvision's pretrained networks | Pytorch's torchvision package provides pre-trained neural networks for image classification. I've been using the following code to classify an image using Alexnet (note: some of this code is from this webpage):
from PIL import Image
import torch
from torchvision import transforms
from torchvision import models
# function to transform image
transform = transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])])
# image
img = Image.open('/path/to/image.jpg')
img = transform(img)
img = torch.unsqueeze(img, 0)
# alexnet
alexnet = models.alexnet(pretrained=True)
alexnet.eval()
out = alexnet(img)
percents = torch.nn.functional.softmax(out, dim=1)[0] * 100
top5_vals, top5_inds = percents.topk(5)
There are 1,000 total classes, and the top5_inds variable gives me the indices of the top 5 classes. But how do I get the associated labels (e.g. snail, basketball, banana)? I can't seem to find any sort of list as part of Pytorch's documentation or the alexnet variable.
| Torchvision models are pretrained on the ImageNet dataset. Due to its comprehensiveness and size, ImageNet is the most commonly used dataset for pretraining & transfer learning. As you noted, it has 1000 classes. The complete class list can be searched, or you can refer to this listing on GitHub: https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a
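To map the indices from the question back to human-readable labels, one option (the local filename is an assumption) is to save the 1000 class names from that gist, one per line and in index order, and then:
with open("imagenet_classes.txt") as f:
    labels = [line.strip() for line in f]

for val, idx in zip(top5_vals, top5_inds):
    print('{}: {:.2f}%'.format(labels[idx.item()], val.item()))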
| https://stackoverflow.com/questions/60536972/ |
Tensor output from final layer is of the wrong shape in PyTorch | I am building a sequence-to-label classifier, where the input data are text sequences and output labels are binary. The model is very simple, with GRU hidden layers and a Word Embeddings input layer. I want a [n, 60] input to output a [n, 1] label, but the Torch model returns a [n, 60] output.
My model, with minimal layers:
class Model(nn.Module):
def __init__(self, weights_matrix, hidden_size, num_layers):
super(Model, self).__init__()
self.embedding, num_embeddings, embedding_dim = create_emb_layer(weights_matrix, True)
self.hidden_size = hidden_size
self.num_layers = num_layers
self.gru = nn.GRU(embedding_dim, hidden_size, num_layers, batch_first=True)
self.out = nn.Linear(hidden_size, 1)
def forward(self, inp, hidden):
emb = self.embedding(inp);
out, hidden = self.gru(emb, hidden)
out = self.out(out);
return out, hidden;
def init_hidden(self, batch_size):
return torch.zeros(self.num_layers, batch_size, self.hidden_size).to(device);
Model Layers:
Model(
(embedding): Embedding(184901, 100)
(gru): GRU(100, 60, num_layers=3, batch_first=True)
(out): Linear(in_features=60, out_features=1, bias=True)
)
Input shapes of my data are: X : torch.Size([64, 60]), and Y : torch.Size([64, 1]), for a single batch of size 64.
When I run the X tensor through the model, it should output a single label, however, the output from the classifier is torch.Size([64, 60, 1]). To run the model, I do the following:
for epoch in range(1):
running_loss = 0.0;
batch_size = 64;
hidden = model.init_hidden(batch_size)
for ite, data in enumerate(train_loader, 0):
x, y = data[:,:-1], data[:,-1].reshape(-1,1)
optimizer.zero_grad();
outputs, hidden = model(x, hidden);
hidden = Variable(hidden.data).to(device);
loss = criterion(outputs, y);
loss.backward();
optimizer.step();
running_loss = running_loss + loss.item();
if ite % 2000 == 1999:
print('[%d %5d] loss: %.3f'%(epoch+1, ite+1, running_loss / 2000))
running_loss = 0.0;
When I print the shape of outputs, it is 64x60x1 rather than 64x1. What I also don't get is how the criterion function is able to calculate the loss when the shapes of outputs and labels are inconsistent. With Tensorflow, this would always throw an error, but it doesn't with Torch.
| The output from your model is of shape torch.Size([64, 60, 1]), i.e. 64 is the batch size, and (60, 1) corresponds to [n, 1] as expected.
Assuming you're using nn.CrossEntropyLoss(input, target), it expects the input to be (N, C) and the target to be (N), where C is the number of classes.
Your output is consistent, and hence loss is evaluated.
For example,
import torch
import torch.nn as nn

outputs = torch.randn(3, 2, 1)
target = torch.empty(3, 1, dtype=torch.long).random_(2)
criterion = nn.CrossEntropyLoss(reduction='mean')
print(outputs)
print(target)
loss = criterion(outputs, target)
print(loss)
# outputs
tensor([[[ 0.5187],
[ 1.0320]],
[[ 0.2169],
[ 2.4480]],
[[-0.4895],
[-0.6096]]])
tensor([[0],
[1],
[0]])
tensor(0.5731)
Read more here.
| https://stackoverflow.com/questions/60537594/ |
Pytorch dataset and shared memory? | I would want to cache data in a torch.utils.data.Dataset. The simple solution is to just persist certain tensors in a member of the dataset. However, since the torch.utils.data.DataLoader class spawns multiple processes, the cache would only be local to each instance and would cause me to possibly cache multiple copies of the same tensors. Is there a way to use Python's multiprocessing library to share data between the different loader processes?
| The answer depends on your OS and settings. If you are using Linux with the default process start method, you don't have to worry about duplicates or process communication, because worker processes share memory! This is efficiently implemented as Inter Process Communication (IPC) through shared memory (some more details here).
For Windows, things are more complicated. From the documentation:
Since workers rely on Python multiprocessing, worker launch behavior
is different on Windows compared to Unix.
On Unix, fork() is the default multiprocessing start method. Using
fork(), child workers typically can access the dataset and Python
argument functions directly through the cloned address space.
On Windows, spawn() is the default multiprocessing start method. Using
spawn(), another interpreter is launched which runs your main script,
followed by the internal worker function that receives the dataset,
collate_fn and other arguments through pickle serialization.
This means that your dynamically cached Dataset members would be automatically shared between all processes on Linux. That's great! However, on Windows, processes will not have received copies of them (they only received the Dataset upon spawning), so you should use a process communication scheme, e.g. through multiprocessing Pipe, Queue or Manager (preferred for broadcasting to multiple processes, but you would have to convert tensors to lists). This is not very efficient, besides rather bothersome to implement.
Nevertheless, there is another method: memory mapping (memmaping). This means that your objects will be written to virtual memory, and again all processes will have access to it, while a respective "shadow copy" of these objects will at some point be flushed and exist on your hard drive (can be placed in a /tmp directory). You can use memmaping with the mmap module, in which case your objects will have to be serialized as a binary file, or you can use numpy.memmap. You can find more details here.
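A rough sketch of the memmap idea (all names, the feature size, the cache path, and the "row of zeros means not cached yet" convention are my own assumptions, not part of the answer above):
import numpy as np
import torch
from torch.utils.data import Dataset

class MemmapCachedDataset(Dataset):
    def __init__(self, raw_samples, cache_path="/tmp/feature_cache.dat", feat_dim=128):
        self.raw_samples = raw_samples
        self.cache_path = cache_path
        self.shape = (len(raw_samples), feat_dim)
        # create the backing file once, in the main process
        np.memmap(cache_path, dtype=np.float32, mode="w+", shape=self.shape).flush()
        self._cache = None  # opened lazily, so each worker gets its own handle

    def __len__(self):
        return len(self.raw_samples)

    def __getitem__(self, idx):
        if self._cache is None:
            # every worker maps the same file on disk
            self._cache = np.memmap(self.cache_path, dtype=np.float32,
                                    mode="r+", shape=self.shape)
        row = self._cache[idx]
        if not row.any():  # all zeros is taken to mean "not cached yet"
            row[:] = expensive_preprocess(self.raw_samples[idx])  # hypothetical helper
        return torch.from_numpy(np.array(row))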
| https://stackoverflow.com/questions/60542153/ |
torch.nn.functional.conv2d for several channels/batches | I have an image with I want to pad (to maintain the same shape) and then perform a convolution with a given kernel. It works ok if I have only one channel and one image in the batch. But how to properly rewrite it for several batches & channels? I suppose, for batches I can just duplicate the kernel along dimension 0. But what about channels? What is the proper way to do it in torch? See the toy example below.
import torch.nn.functional as f
x = torch.zeros((1,1,16,16))
x[...,6:10,6:10] = 1.
ker = torch.ones(3,3)
ker[1,1] = -4
padding = (ker.shape[1] // 2, ker.shape[1] // 2,
ker.shape[0] // 2, ker.shape[0] // 2)
x = f.pad(x, padding, mode='replicate')
kernel = ker.reshape((1, 1, ker.shape[0], -1))
result = f.conv2d(x, ker, groups=1)
How to rewrite this piece of code to deal with multiple channels and batches? I read documentation but to be honest it did not seem very detailed to me.
| The documentation at https://pytorch.org/docs/stable/nn.functional.html#torch.nn.functional.conv2d seems to answer your question:
input – input tensor of shape (minibatch,in_channels,iH,iW)
weight – filters of shape (out_channels,in_channels/groups,kH,kW)
so your x must be size (batch_size, in_channels, 16, 16)
and your kernel must have shape (out_channels, in_channels/groups, kH, kW); note that the kernel shape does not depend on the batch size
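For instance, a sketch of how the toy example from the question could be extended: the 2D kernel is replicated once per channel and groups=C gives a depthwise filter, so every channel of every image in the batch is filtered independently.
import torch
import torch.nn.functional as f

B, C, H, W = 4, 3, 16, 16
x = torch.zeros(B, C, H, W)
x[..., 6:10, 6:10] = 1.

ker = torch.ones(3, 3)
ker[1, 1] = -4

x = f.pad(x, (1, 1, 1, 1), mode='replicate')
weight = ker.repeat(C, 1, 1, 1)         # shape (out_channels, in_channels/groups, kH, kW) = (3, 1, 3, 3)
result = f.conv2d(x, weight, groups=C)  # works for any batch size
print(result.shape)                     # torch.Size([4, 3, 16, 16])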
| https://stackoverflow.com/questions/60551548/ |
How do I crop a Landsat image into smaller chunks for training and then predict on the original image | I am looking at using Landsat imagery to train a CNN for unsupervised pixel-wise semantic segmentation classification. That said, I have been unable to find a method that allows me to crop images from the larger Landsat image for training and then predict on the original image. Essentially here is what I am trying to do:
Original Landsat image (5,000 x 5,000 - this is an arbitrary size, not exactly sure of the actual dimensions off-hand) -> crop the image into (100 x 100) chunks -> train the model on these cropped images -> output a prediction for each pixel in the original (uncropped) image.
That said, I am not sure if I should predict on the cropped images and stitch them together after they are predicted or if I can predict on the original image.
Any clarification/code examples would be greatly appreciated. For reference, I use both pytorch and tensorflow.
Thank you!
Lance D
| Borrowing from Ronneberger et al., what we have been doing is to split the input Landsat scene and corresponding ground truth mask into overlapping tiles. Take the original image and pad it by the overlap margin (we use reflection for the padding) then split into tiles. Here is a code snippet using scikit-image:
from skimage.util import view_as_windows

# self.tile_height, self.tile_width, self.image_margin and
# raster_value['channels'] come from our own tiling configuration
patches = view_as_windows(
    image,
    (self.tile_height + 2 * self.image_margin,
     self.tile_width + 2 * self.image_margin,
     raster_value['channels']),
    (self.tile_height, self.tile_width, raster_value['channels']))
I don't know what you are using for a loss function for unsupervised segmentation. In our case with supervised learning, we crop the final segmentation prediction to match the ground truth output shape. In the Ronneberger paper they relied on shrinkage due to the use of valid padding.
For predictions you would do the same (split into overlapping tiles) and stitch the result.
| https://stackoverflow.com/questions/60555060/ |
padding and attention mask does not work as intended in batch input in GPT language model | The following code is without batch:
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
context=torch.tensor([tokenizer.encode("This is")])
output, past = model(context)
token = torch.argmax(output[..., -1, :])
print(tokenizer.decode(token.item()))
output: ' a'
This is working fine. Now, I extended this to batch setting:
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
context=[torch.tensor(tokenizer.encode("This is ")),torch.tensor(tokenizer.encode("Hello How are "))]
context=pad_sequence(context,batch_first=True)
mask=torch.tensor([[1,1,0],[1,1,1]])
output, past = model(context,attention_mask=mask)
token = torch.argmax(output[..., -1, :],dim=1)
tokenizer.decode(token)
output: '\n you'
Here \n is the next token for the first context and you is the next token for the second context of the batch.
But the expected next token for the first context is a, since all the settings are the same. Furthermore, if you reduce the second context to 2 tokens you will get a in this batch setting. So clearly, the model cannot understand the padding.
Also, the attention mask does not work. Because,
after padding the next token of sequence this is is 0 (zero). And according to the attention mask ([1,1,0]), this zero should be avoided and only the tokens this and is should be attended. The proofs that this attention masking is not working are:
Use attention mask [1,1,1], that means attend even on the padding zero, you get the same output
which is \n.
Use the the string this is!. Here ! has the zero index in the vocabulary matrix. Again you get the same output which is \n.
Only time, it is possible to get desirable output is without the batch settings and attention mask ( now it seems, it does not matter because it has no effect anyway)
Then I found this, which suggested to use pad_token. So I used like following:
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
from torch.nn.utils.rnn import pad_sequence
tokenizer = GPT2Tokenizer.from_pretrained("gpt2",pad_token="<PAD>")
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()
context=[torch.tensor(tokenizer.encode("This is <PAD> ")),torch.tensor(tokenizer.encode("Hello How are"))]
context=torch.stack(context)
print(context)
mask=torch.tensor([[1,1,0],[1,1,1]])
output, past = model(context,attention_mask=mask)
token = torch.argmax(output[..., -1, :],dim=1)
tokenizer.decode(token)
output: 'The you'
Here The is the next token for the first context and you is the next token for the second context of the batch. This is also not working, because The is not expected for the first context.
How do I use variable length sequence in batch setting in gpt/gpt2 model?
| I'm not sure if this helps, but you don't need to implement your own attention masking and padding. The Transformers library provides the encode_plus() and batch_encode_plus() functions that will perform tokenization, generate the attention masks, and do padding for you. The results come out as Python dictionaries.
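For example (a sketch; depending on your transformers version the padding flag is pad_to_max_length=True or padding=True, and GPT-2 needs a pad token to be set first):
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

batch = tokenizer.batch_encode_plus(
    ["This is", "Hello How are"],
    pad_to_max_length=True,
    return_tensors="pt",
)
# batch["input_ids"] and batch["attention_mask"] can then be passed to the model:
# output, past = model(batch["input_ids"], attention_mask=batch["attention_mask"])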
| https://stackoverflow.com/questions/60579343/ |
How is the number of channels adjusted in efficientnet | I was reading the code at efficientnet and was shocked by its clever ideas. But I don't quite understand how it adjusts the number of channels.
def round_filters(filters, width_coefficient, depth_divisor):
filters *= width_coefficient
new_filters = int(filters + depth_divisor / 2) // depth_divisor * depth_divisor
new_filters = max(depth_divisor, new_filters)
# Make sure that round down does not go down by more than 10%.
if new_filters < 0.9 * filters:
new_filters += depth_divisor
return int(new_filters)
I know the number of channels has to be adjusted by the width factor, but why do I do the following? What is depth_divisor?
It's making the scaled width divisible by depth_divisor. You can view it as rounding the scaled width to the nearest multiple of depth_divisor, with one additional consideration (round up when rounding down would lose more than 10%). In almost all applications of this function in various MobileNets and EfficientNets the depth_divisor is 8.
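To make the rounding concrete, plugging a couple of values into the round_filters function quoted above (with the usual depth_divisor of 8):
print(round_filters(32, 1.2, 8))  # 32 * 1.2 = 38.4 -> nearest multiple of 8 is 40
print(round_filters(16, 1.1, 8))  # 16 * 1.1 = 17.6 -> rounds down to 16, which is within 10%, so 16 is kept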
Why 8? Mostly due to the constraints of common hardware accelerators like GPU and TPU. See a recent cuDNN developer guide and count the number of 'multiple of 8', 'multiple of 32', etc guidelines for optimal performance of certain operations:
https://docs.nvidia.com/deeplearning/sdk/cudnn-developer-guide/index.html
For algorithms other than *_ALGO_WINOGRAD_NONFUSED, when the following
requirements are met, the cuDNN library will trigger the Tensor Core
operations:
Input, filter, and output descriptors (xDesc, yDesc,
wDesc, dxDesc, dyDesc and dwDesc as applicable) are of the dataType =
CUDNN_DATA_HALF (i.e., FP16). For FP32 dataType see FP32-to-FP16
Conversion.
The number of input and output feature maps (i.e., channel dimension
C) is a multiple of 8. When the channel dimension is not a multiple of
8, see Padding.
The filter is of type CUDNN_TENSOR_NCHW or CUDNN_TENSOR_NHWC.
If using a filter of type CUDNN_TENSOR_NHWC, then the input, filter,
and output data pointers (X, Y, W, dX, dY, and dW as applicable) are
aligned to 128-bit boundaries.
| https://stackoverflow.com/questions/60583868/ |
Is there a nice way to check if a numpy array and a torch tensor point to the same underlying data? | I want to check if a numpy array and a torch tensor point to the same underlying memory.
So far I've came up with a simple check but it doesn't look super elegant.
import numpy as np
import torch
# example
a = np.random.randn(3,3)
b = torch.from_numpy(a)
assert a.__array_interface__['data'][0] == b.data_ptr()
Is there a nicer way to do it? Also, could some potential undefined/incorrect behaviour occur if using this assertion?
Thanks in advance for the answers :)
| This is a completely valid way to access and compare the pointers. The array interface is designed to allow sharing data buffers, so it will have the correct pointer. With that said, if you prefer a less verbose solution, you could also grab it directly like so:
import numpy as np
import torch
# example
a = np.random.randn(3,3)
b = torch.from_numpy(a)
print(a.ctypes.data)
print(b.data_ptr())
140413464706720
140413464706720
| https://stackoverflow.com/questions/60587536/ |
I don't understand pytorch input sizes of conv1d, conv2d | I have data consisting of 2 temporal series of 18 points each. So I organized it in a matrix of 18 rows and 2 columns (with 180 samples to classify into 2 classes - activated and non-activated).
So, I want to build a CNN with this data, where my kernel walks in one direction, along the rows (temporal). An example figure is attached.
My data 18x2
In my code, I don't know how many channels I have, in comparison to RGB with 3 channels. I also don't know the input sizes of the layers, or how to calculate the size of the fully connected layer.
I need to use conv1d ? conv2d? conv3d ?
Based on Understand conv 1D 2D 3D, I have 2D inputs and I want to do 1D convolution (because I move my kernel in one direction). Is that correct?
How do I pass a kernel size of (3,2), for example?
My data is in this form, after using DataLoader with batch_size= 4:
print(data.shape, label.shape)
torch.Size([4, 2, 18]) torch.Size([4, 1])
My Convolutional Model is:
OBS: I just put any number of input/output size.
# Creating our CNN Model -> 1D convolutional with 2D input (HbO, HbR)
class ConvModel(nn.Module):
def __init__(self):
super(ConvModel, self).__init__()
self.conv1 = nn.Conv1d(in_channels=1, out_channels= 18, kernel_size=3, stride = 1)
# I dont know the in/out channels of the first conv
self.maxpool = nn.MaxPool1d(kernel_size=3, stride=3)
self.conv2 = nn.Conv1d(18, 32, kernel_size=3)
self.fc1 = nn.Linear(200, 100) #What I put in/out here ?
self.fc2 = nn.Linear(100, 50)
self.fc3 = nn.Linear(50, 2)
def forward(self, x):
x = F.relu(self.mp(self.conv1(x)))
x = self.maxpool(x)
x = F.relu(self.mp(self.conv2(x)))
x = self.maxpool(x)
x = x.view(-1, ??) # flatten the tensor, which number here ?
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
| You will want to use a two-channel conv1d as the first convolution layer. I.e. it will take in a tensor of shape [B, 2, 18]. Having a 2-channel input with kernel size 3 will define kernels of shape [2, 3], where the kernel slides along the last dimension of the input. The number of channels C1 in your output feature map is up to you. C1 defines how many independent [2, 3] kernels you learn. Each convolution with a [2, 3] kernel produces an output channel.
Note that if you don't define any zero padding during conv1d then the output for a size 3 kernel will be reduced by 2, i.e. you will get [B, C1, 16]. If you include a padding of 1 (which effectively pads both sides of input with a column of zeros before convolving) then the output would be [B, C1, 18].
Max-pooling doesn't change the number of channels. If you use a kernel size of 3, stride of 3, and no padding then the last dimension will be reduced down to floor(x.size(2) / 3), where x is the input tensor to the max-pooling layer. If the input isn't a multiple of 3 then the values at the end of the feature map x will be ignored (AKA a kernel/window alignment issue).
I recommend taking a look at the documentation for nn.Conv1d and nn.MaxPool1d since it provides equations to compute the output shape.
Let's consider two examples. You can define C1, C2, F1, F2 however you like. The optimal values will depend on your data.
Without padding we get
import torch.nn as nn
import torch.nn.functional as F

class ConvModel(nn.Module):
    def __init__(self):
        super().__init__()
        # input [B, 2, 18]
        self.conv1 = nn.Conv1d(in_channels=2, out_channels=C1, kernel_size=3)
        # [B, C1, 16]
        self.maxpool = nn.MaxPool1d(kernel_size=3, stride=3)
        # [B, C1, 5] (WARNING last column of activations in previous layer are ignored b/c of kernel alignment)
        self.conv2 = nn.Conv1d(C1, C2, kernel_size=3)
        # [B, C2, 3]
        self.fc1 = nn.Linear(C2*3, F1)
        # [B, F1]
        self.fc2 = nn.Linear(F1, F2)
        # [B, F2]
        self.fc3 = nn.Linear(F2, 2)
        # [B, 2]

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.maxpool(x)
        x = F.relu(self.conv2(x))
        x = x.flatten(1)  # flatten the tensor starting at dimension 1
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
Notice the kernel alignment issue with the max-pooling layer. This occurs because the input to max-pooling isn't a multiple of 3. To avoid the kernel alignment issue and to make output sizes more consistent I recommend including an additional padding of 1 to both the convolution layers. Then you would have
class ConvModel(nn.Module):
    def __init__(self):
        super().__init__()
        # input [B, 2, 18]
        self.conv1 = nn.Conv1d(in_channels=2, out_channels=C1, kernel_size=3, padding=1)
        # [B, C1, 18]
        self.maxpool = nn.MaxPool1d(kernel_size=3, stride=3)
        # [B, C1, 6] (no alignment issue b/c 18 is a multiple of 3)
        self.conv2 = nn.Conv1d(C1, C2, kernel_size=3, padding=1)
        # [B, C2, 6]
        self.fc1 = nn.Linear(C2*6, F1)
        # [B, F1]
        self.fc2 = nn.Linear(F1, F2)
        # [B, F2]
        self.fc3 = nn.Linear(F2, 2)
        # [B, 2]

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = self.maxpool(x)
        x = F.relu(self.conv2(x))
        x = x.flatten(1)  # flatten the tensor starting at dimension 1
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
| https://stackoverflow.com/questions/60591140/ |
Error: AttributeError: module 'transformers' has no attribute 'TFBertModel' | I am applying transfer learning with the Python framework (PyTorch). I am getting the below error when loading a PyTorch pre-trained model in Google Colab. After changing code 1 to code 2, I got the same error.
CODE 1: BertModel.from_pretrained
CODE 2: TFBertModel.from_pretrained
Error: AttributeError: module 'transformers' has no attribute 'TFBertModel'
I tried to search the internet, but I didn't find any useful content.
| You should probably list the installed packages and their versions in your Python environment / Colab notebook, because TFBertModel is only available when TensorFlow is installed.
In order to reproduce your error. I play around in the Colab as following:
No tensorflow cause error when you import TFBertModel
!pip install transformers
from transformers import BertModel, TFBertModel # no attribute 'TFBertModel'
!pip install tensorflow-gpu
from transformers import BertModel, TFBertModel # good to go
Directly use BertModel
!pip install transformers
from transformers import BertModel
BertModel.from_pretrained # good to go
Based on my testing, you should check whether you are importing TFBertModel while TensorFlow is not installed.
Transformers under the master branch import the TFBertModel only if is_tf_available() is set to True. Here is the code for if_is_tf_available():
# transformers/src/transformers/file_utils.py
# >>> 107 lines
def is_tf_available():
return _tf_available
# >>> 48 lines
try:
USE_TF = os.environ.get("USE_TF", "AUTO").upper()
USE_TORCH = os.environ.get("USE_TORCH", "AUTO").upper()
if USE_TF in ("1", "ON", "YES", "AUTO") and USE_TORCH not in ("1", "ON", "YES"):
import tensorflow as tf
assert hasattr(tf, "__version__") and int(tf.__version__[0]) >= 2
_tf_available = True # pylint: disable=invalid-name
logger.info("TensorFlow version {} available.".format(tf.__version__))
else:
logger.info("Disabling Tensorflow because USE_TORCH is set")
_tf_available = False
except (ImportError, AssertionError):
_tf_available = False # pylint: disable=invalid-name
| https://stackoverflow.com/questions/60593173/ |
How to implement a PyTorch NN from a directed graph | I'm new to PyTorch and teaching myself, and I want to create ANNs that take in a directed graph. I also want to pass predefined weights & biases for each connection into it, but I'm willing to ignore that for now.
My motivation for these conditions is that I'm trying to implement the NEAT algorithm, which is basically using a Genetic Algorithm to evolve the network.
For example, let
graph = dict{'1':[[], [4, 7]], '2':[[], [6]], '3':[[], [6]], '4':[[1, 7], []], '5':[[7], []], '6':[[2, 3], [7]], '7':[[1, 6], [4, 5]]}
represent the directed graph.
My code for what I'm thinking is:
class Net(torch.nn.Module):
def __init__(self, graph):
super(Net, self).__init__()
self.graph = graph
self.walk_graph()
def walk_graph(self):
graph_remaining = copy.deepcopy(self.graph)
done = False # Has every node/connection been processed?
while not done:
processed = [] # list of tuples, of a node and the nodes it outputs to
for node_id in graph_remaining.keys():
if len(graph_remaining[node_id][0]) == 0: # if current node has no incoming connections
try:
# if current node has been processed, but waited for others to finish
if callable(getattr(self, 'layer{}'.format(node_id))):
D_in = len(eval('self.layer{}'.format(node_id)).in_features)
D_out = len(eval('self.layer{}'.format(node_id)).out_features)
setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(D_in, D_out))
cat_list = [] # list of input tensors
for i in self.graph[node_id][0]: # search the entire graph for inputs
cat_list.append(globals()['out_{}'.format(i)]) # add incoming tensor to list
# create concatenated tensor for incoming tensors
# I'm not confident about this
globals()['in_{}'.format(node_id)] = torch.cat(cat_list, len(cat_list))
except AttributeError: # if the current node hasn't been waiting
try:
setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(len(self.graph[node_id][0]), len(self.graph[node_id][1])))
except ZeroDivisionError: # Input/Output nodes have zero inputs/outputs in the graph
setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(1, 1))
globals()['out_{}'.format(node_id)] = getattr(self, 'layer' + node_id)(globals()['in_{}'.format(node_id)])
processed.append((node_id, graph_remaining[node_id][1]))
for node_id, out_list in processed:
for out_id in out_list:
try:
graph_remaining[str(out_id)][0].remove(int(node_id))
except ValueError:
pass
try:
del graph_remaining[node_id]
except KeyError:
pass
done = True
for node_id in self.graph.keys():
if len(graph_remaining[node_id][0]) != 0 or len(graph_remaining[node_id][1]) != 0:
done = False
return None
I'm a little out of my comfort zone on this, but if you have a better idea, or can point out how this is fatally flawed, I'm all ears. I know I'm missing a forward function, and could use some advice about how to restructure.
| Since you don't plan on doing any actual training of the network, PyTorch might not be your best option in this case.
NEAT is about recombining and mutating neural networks - both their structure and their weights and biases - and thereby achieving better results. PyTorch generally is a deep learning framework, meaning that you define the structure (or architecture) of your network and then use algorithms like stochastic gradient descent to update the weights and biases in order to improve your performance. As a consequence of this, PyTorch works based on modules and submodules of neural networks, like fully connected layers, convolutional layers and so on.
The problem with this discrepancy is that NEAT not only requires you to store a lot more information (like their ID for recombination etc.) about the individual nodes than PyTorch supports, it also doesn't fit in very well with the "layer-wise" approach of deep learning frameworks.
In my opinion, you will be better off implementing the forward pass through the network yourself. If you're unsure how to do that, this video gives a very good explanation.
| https://stackoverflow.com/questions/60605251/ |
How to get top-k elements of each row in a 2D tensor? | How to get the top-k elements of each row in a 2D tensor in an elegant way instead of using for-loop as below?
import torch
elements = torch.rand(5,10)
topk_list = [2,3,1,2,0] # means top2 for 1st row, top3 for 2nd row, top1 for 3rd row,....
index_list = [] # record the topk index in elements
for i in range(5):
index_list.append(elements[i].topk(topk_list[i]))
| If your k's don't vary too much and you want to vectorize your code you can first take the maximum top k per row and then gather the desired results.
# Code from OP
import torch
elements = torch.rand(5,10)
topk_list = [2,3,1,2,0] # means top2 for 1st row, top3 for 2nd row, top1 for 3rd row,....
index_list = [] # record the topk index in elements
for i in range(5):
index_list.append(elements[i].topk(topk_list[i]))
# Print the result
print(index_list)
# Get topk for max_k
max_k = max(topk_list)
topk_vals, topk_inds = elements.topk(max_k, dim=-1)
# Select desired topk using mask
mask = torch.arange(max_k)[None, :] < torch.tensor(topk_list)[:, None]
vals, inds = topk_vals[mask], topk_inds[mask]
rows, _ = mask.nonzero().T
print("-" * 10)
print("rows", rows)
print("inds", inds)
print("vals", vals)
# Or split
vals_per_row = vals.split(topk_list)
inds_per_row = inds.split(topk_list)
print("-" * 10)
print("vals_per_row", vals_per_row)
print("inds_per_row", inds_per_row)
# Or zip (for loop but should be cheap)
index_list = zip(vals_per_row, inds_per_row)
print("-" * 10)
print("zipped results", list(index_list))
This gives the following output:
[torch.return_types.topk(
values=tensor([0.8148, 0.7443]),
indices=tensor([8, 4])), torch.return_types.topk(
values=tensor([0.7529, 0.7352, 0.6354]),
indices=tensor([8, 1, 9])), torch.return_types.topk(
values=tensor([0.8792]),
indices=tensor([7])), torch.return_types.topk(
values=tensor([0.9626, 0.8728]),
indices=tensor([6, 2])), torch.return_types.topk(
values=tensor([]),
indices=tensor([], dtype=torch.int64))]
----------
rows tensor([0, 0, 1, 1, 1, 2, 3, 3])
inds tensor([8, 4, 8, 1, 9, 7, 6, 2])
vals tensor([0.8148, 0.7443, 0.7529, 0.7352, 0.6354, 0.8792, 0.9626, 0.8728])
----------
vals_per_row (tensor([0.8148, 0.7443]), tensor([0.7529, 0.7352, 0.6354]), tensor([0.8792]), tensor([0.9626, 0.8728]), tensor([]))
inds_per_row (tensor([8, 4]), tensor([8, 1, 9]), tensor([7]), tensor([6, 2]), tensor([], dtype=torch.int64))
----------
zipped results [(tensor([0.8148, 0.7443]), tensor([8, 4])), (tensor([0.7529, 0.7352, 0.6354]), tensor([8, 1, 9])), (tensor([0.8792]), tensor([7])), (tensor([0.9626, 0.8728]), tensor([6, 2])), (tensor([]), tensor([], dtype=torch.int64))]
| https://stackoverflow.com/questions/60614116/ |
AttributeError: module 'tensorflow' has no attribute 'value' | I am training pytorch-yolov3 on a custom dataset. I prepared all the required txt, data and names files.
While running the following command:
python3 train.py --model_def config/yolov3.cfg --data_config config/custom.data
I got following error:
Warning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. (expandTensors at /pytorch/aten/src/ATen/native/IndexingUtils.h:20)
Warning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead. (expandTensors at /pytorch/aten/src/ATen/native/IndexingUtils.h:20)
Traceback (most recent call last):
File "train.py", line 136, in <module>
logger.list_of_scalars_summary(tensorboard_log, batches_done)
File "/home/sudip/torch/PyTorch-YOLOv3/utils/logger.py", line 16, in list_of_scalars_summary
summary = tf.summary(value=[tf.summary.Value(tag=tag, simple_value=value) for tag, value in tag_value_pairs])
File "/home/sudip/torch/PyTorch-YOLOv3/utils/logger.py", line 16, in <listcomp>
summary = tf.summary(value=[tf.summary.Value(tag=tag, simple_value=value) for tag, value in tag_value_pairs])
AttributeError: module 'tensorboard.summary._tf.summary' has no attribute 'Value'
This is logger.py file:
import tensorflow as tf
class Logger(object):
def __init__(self, log_dir):
self.writer = tf.summary.create_file_writer(log_dir)
def scalar_summary(self, tag, value, step):
"""Log a scalar variable."""
summary = tf.summary(value=[tf.summary.Value(tag=tag, simple_value=value)])
self.writer.add_summary(summary, step)
def list_of_scalars_summary(self, tag_value_pairs, step):
"""Log scalar variables."""
summary = tf.summary(value=[tf.summary.Value(tag=tag, simple_value=value) for tag, value in tag_value_pairs])
self.writer.add_summary(summary, step)
Any idea or suggestions to solve this problem?
Any help would be appreciated.
Thank you
| Since your writer is created with tf.summary.create_file_writer, you are using the TF2 summary API, where scalars are written directly rather than built as Summary protos. Change
summary = tf.summary(value=[tf.summary.Value(tag=tag, simple_value=value)])
self.writer.add_summary(summary, step)
to
with self.writer.as_default():
    tf.summary.scalar(tag, value, step=step)
| https://stackoverflow.com/questions/60614678/ |
Install Detectron2 on Windows 10 | I am trying to install Facebook's Detectron2 following the official repo. According to that repo, Detectron2 can only be installed on Linux. However, I'm working on a server running Windows. Does anybody know how to install it on Windows?
| Answer found through this issue: https://github.com/facebookresearch/detectron2/issues/9
These steps worked for me on my RTX 3070.
Install Anaconda https://docs.anaconda.com/anaconda/install/windows/
Create a environment.yml file containing the following code.
name: detectron2
channels:
- pytorch
- conda-forge
- anaconda
- defaults
dependencies:
- python=3.8
- numpy
- pywin32
- cudatoolkit=11.0
- pytorch==1.7.1
- torchvision
- git
- pip
- pip:
- git+https://github.com/facebookresearch/[email protected]
Launch the Anaconda terminal, navigate to the yml file and run conda env create -f environment.yml
Activate the environment conda activate detectron2
And you're good to go.
Edit: This works without issue if you run your script within the anaconda terminal but I was also having this issue ImportError: DLL load failed: The specified module could not be found. with numpy and Pillow when running the script from VS Code so if you happen to have this issue, I fixed it by uninstalling and reinstalling the troubled modules from within the anaconda terminal.
pip uninstall numpy
pip install numpy
| https://stackoverflow.com/questions/60631933/ |
Inverted colors in Tensorboard SummaryWriter add_image() function | There is an image stored in image_tensor (image_tensor of size (3,256,512), storing values in the interval 0,255) which I would like to display in Tensorboard (TensorboardX for PyTorch, more specifically) via the add_image() function for SummaryWriter. When I add the image to the Tensorboard via writer.add_image("color_image",image_tensor,self.step), the colors are inverted.
When I write the image to a file via scipy.misc.imsave("/write/to/path/image.png",np.transpose(image_tensor.data.cpu().numpy(),(1,2,0))), the image is perfectly fine.
Only thing I change for the second line is changing CxHxW to HxWxC, but I don't think that this is the root of this color inversion issue. What might be the problem?
| I had a similar problem, where I was also unable to track the problem. The solution that worked for me but is unfortunately a little bit cumbersome is:
Take your image and plug it into a matplotlib figure then use add_figure.
For example:
import matplotlib.pyplot as plt

fig, ax = plt.subplots(2, 3)
# add your subplots with some images eg.
ax[0,0].imshow(image_1)
# etc.
writer_semisuper.add_figure("testfig", fig, 0)
This shows exactly the same plot that you created, but with lower resolution. So if your plot works in jupyter or saved on disk it should also work in tensorboard.
| https://stackoverflow.com/questions/60651684/ |
Python coverage report covering only test file | I’m pretty new to contributing to open source projects and am trying to get some coverage reports so I can find out what needs more / better testing. However, I am having trouble getting the full coverage of a test. This is for pytorch
For example, lets say I want to get the coverage report of test_indexing_py.
I run the command:
pytest test_indexing.py --cov=../ --cov-report=html
Resulting in this:
================================================= test session starts =================================================
platform win32 -- Python 3.7.4, pytest-5.2.1, py-1.8.0, pluggy-0.13.0
rootdir: C:\Projects\pytorch
plugins: hypothesis-5.4.1, arraydiff-0.3, cov-2.8.1, doctestplus-0.4.0, openfiles-0.4.0, remotedata-0.3.2
collected 62 items
test_indexing.py ............................s................................. [100%]
----------- coverage: platform win32, python 3.7.4-final-0 -----------
Coverage HTML written to dir htmlcov
=========================================== 61 passed, 1 skipped in 50.43s ============================================
Ok, looks like the tests ran. Now when I check the html coverage report, I only get the coverage for the test file and not for the classes tested (the tests are ordered by coverage percentage).
As you can see, I am getting coverage for only test_indexing.py. How do I get the full coverage report including the classes tested?
Any guidance will be greatly appreciated.
| I think it's because you are measuring coverage from the directory the tests run in, i.e. where test_indexing.py is.
A better approach is to run the tests from the project root rather than the test directory; among other advantages, this ensures configuration files are picked up correctly.
And regarding your question, try running the test from the root directory and try
pytest path/to/test/ --cov --cov-report=html
| https://stackoverflow.com/questions/60658028/ |
Sharing parameters in different nn.Modules in pytorch | I've got the model that you can see below, but I need to create two instances of it that share x2h and h2h.
Does anyone know how to do it?
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.x2h = nn.Linear(input_size, hidden_size)
self.h2h = nn.Linear(hidden_size, hidden_size)
self.h2o = nn.Linear(hidden_size, output_size)
#self.softmax = nn.LogSoftmax(dim=1)
self.softmax = nn.Softmax(dim=1)
def forward(self, input, hidden):
hidden1 = self.x2h(input)
hidden2 = self.h2h(hidden)
hidden = hidden1 + hidden2
output = self.h2o(hidden)
output = self.softmax(output)
return output, hidden
def initHidden(self):
return torch.zeros(1, self.hidden_size)
| This is a Python question, I assume.
Variables declared inside the class body (not inside a method) are class/static variables, shared across all instances of the class.
Ref:
https://radek.io/2011/07/21/static-variables-and-methods-in-python/
| https://stackoverflow.com/questions/60659971/ |
How to append a list with arrays of two different sizes based on conditions | I was wondering how to do this in a more efficient way for arbitrary arrays. The code is written in PyTorch, but it only handles 1-d tensors.
Thank you!
test=[]
data=np.random.uniform(0,1,[20,])
x=torch.from_numpy(data).float()
x,_=torch.sort(x)
v=torch.rand(5).float()
v,_=torch.sort(v)
for i in range(len(x)):
if x[i] < v[0]:
test.append(v[0])
elif x[i] < v[1]:
test.append(v[1])
elif x[i] < v[2]:
test.append(v[2])
elif x[i] < v[3]:
test.append(v[3])
else:
test.append(v[4])
test
| you can use the built-in function next:
for i in x:
test.append(next((e for e in v[:4] if i < e), v[4]))
you can also use a list comprehension instead of for loop:
s = v[:4]
d = v[4]
test = [next((e for e in s if i < e), d)) for i in x]
if the test variable has already some elements you can use the in-place assignment += operator:
test += [next((e for e in s if i < e), d) for i in x]
| https://stackoverflow.com/questions/60663054/ |
How to visualize 3d joints of a SMPL model based on pose params | I am trying to use demo.py in nkolot
/
GraphCMR | GitHub. I am interested in obtaining joints from the inferred SMPL image and visualize it similar to described in README of this project: gulvarol
/
smplpytorch | GitHub.
I also posted the issue here: https://github.com/nkolot/GraphCMR/issues/36.
What I tried that didn't work.
I changed https://github.com/nkolot/GraphCMR/blob/4e57dca4e9da305df99383ea6312e2b3de78c321/demo.py#L118 to
pred_vertices, pred_vertices_smpl, pred_camera, smpl_pose, smpl_shape = model(...) to get smpl_pose (of shape torch.Size([1, 24, 3, 3])). Then I just flattened it by doing smpl_pose.cpu().data.numpy()[:, :, :, -1].flatten('C').reshape(1, -1) and used the resulting (1, 72) pose params as input in pose_params variable of smplpytorch demo.
The resulting visualization doesn't look correct to me. Is this the right approach? Perhaps there is an easier way to do what I am doing.
How to get 3d joints from demo.py and visualize it | nkolot
/
GraphCMR
| The problem is that
smpl_pose (of shape torch.Size([1, 24, 3, 3]))
is the SMPL pose parameters expressed as 24 rotation matrices.
You need to convert from the rotation-matrix representation to the axis-angle representation, which is (72, 1). You can use the Rodrigues formula to do it, as described in the paper:
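One way to do the conversion (a sketch using SciPy, assuming scipy >= 1.4; smpl_pose is the [1, 24, 3, 3] tensor returned by the model):
import numpy as np
from scipy.spatial.transform import Rotation

rotmats = smpl_pose.detach().cpu().numpy().reshape(-1, 3, 3)  # (24, 3, 3)
axis_angle = Rotation.from_matrix(rotmats).as_rotvec()         # (24, 3)
pose_params = axis_angle.reshape(1, 72)                        # the (1, 72) shape smplpytorch expects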
Get more information from the paper:
SMPL: A Skinned Multi-Person Linear Model
| https://stackoverflow.com/questions/60667134/ |
An appropriate way of adding a feature to a time series forecasting model input | I have been working on a demand forecasting model for a while. I am using an LSTM model to predict the future demand of a product family of a company. To solidify and exemplify my raw data, an example is as below;
Unprocessed data
np.random.seed(1)
raw_data = pd.DataFrame({"product_type": ["A"]*3 + ["B"]*3 + ["C"]*3, "product_family": ["x", "y", "z", "t", "u", "y", "p", "k", "l"]})
for col in [str(x)+"-"+str(y) for x in range(2015, 2020) for y in range(1, 13)]:
raw_data[col] = np.random.randint(10, 50, 9)
raw_data.head()
product_type product_family 2015-1 ... 2019-10 2019-11 2019-12
0 A x 47 ... 15 39 38
1 A y 22 ... 37 28 29
2 A z 18 ... 41 41 37
3 B t 19 ... 32 44 29
4 B u 21 ... 22 29 25
[5 rows x 62 columns]
As can be seen above, the data has two nominal feature, and the rest are the past demand data.
First, let me interpret what I do in my case:
I first select the product_family to be predicted and let that product_family be "x":
prod_family_data = raw_data.loc[raw_data.product_family == "x", raw_data.columns[2:]].to_numpy()
Then I create the x and y of the training set:
x_train, y_train = [], []
for i in range(0, len(prod_family_data) - 12):
x_train.append(prod_family_data[i: i + 12])
y_train.append(prod_family_data[i + 12])
x_train = np.array(x_train)
y_train = np.array(y_train)
array([[47, 11, 21, 32, 34, 14, 35, 49, 44, 42, 31, 18],
.
.
.
[14, 20, 45, 13, 48, 43, 45, 49, 49, 37, 15, 39]], dtype=object)
y_train
array([28, 38, 12, 12, 23, 29, 19, 23, 39, 38, 18, 40, 46, 48, 44, 27, 10,
24, 25, 22, 15, 28, 44, 46, 22, 12, 45, 47, 38, 21, 46, 26, 12, 21,
18, 14, 20, 45, 13, 48, 43, 45, 49, 49, 37, 15, 39, 38])
x_train = x_train.reshape(x_train.shape[0], x_train.shape[1], 1)
x_train.shape
(48, 12, 1)
y_train.shape
(48,)
Then I predict the product_family's demand with a LSTM model, then I go back to the start, select another product_family, rinse and repeat.
What I wonder is if there is a way to add the product_family feature to the input (and may be product_type and other nominal qualities of products in the future too) of the model, and feed it to the model all at once?
Also, is there a way to bind the demand data to its timestamps in the input, so that the model can catch the trend or seasonality of the data?
| I would first recommend you to re-think the shape of your dataset. A classic time series dataset "X" fed to an LSTM network will have a 3D shape:
X.shape[0] : number of time series (to use for training / testing)
X.shape[1] : number of timesteps in the time series
X.shape[2] : number of features of each time series
In your example, assuming you have only one time series per unique pair (product_type, product_family), grouping your time series by product family "x" should translate into:
X.shape[0] : number of product_types that include the "x" product_family
X.shape[1] : number of timesteps in the "x" product_family demand series
X.shape[2] : 1, because your only feature seems to be the demand amount
You could add the product_type or product_family directly in the 3rd dimension (X.shape[2]) of your dataset.
Even though this information never changes across timesteps, it will still be taken into account during learning and can be used at prediction time. Whether it is a good idea to do so, I'm not sure.
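As a rough sketch of that option (the integer product code here is an arbitrary assumption), you could append a constant channel to each window so x_train goes from (48, 12, 1) to (48, 12, 2):
import numpy as np

product_id = 3.0  # e.g. an integer (or one-hot) code for family "x"
id_channel = np.full((x_train.shape[0], x_train.shape[1], 1), product_id, dtype=np.float32)
x_train_with_id = np.concatenate([x_train.astype(np.float32), id_channel], axis=-1)
print(x_train_with_id.shape)  # (48, 12, 2)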
Now, the comment of Sergey Bushmanov is worth considering. Detrending your data is a good step in this kind of task; you could look at algorithms like STL to do that for you.
What I would advise you to do, if you haven't already, is to have a look at models like ARMA, which can include a seasonal component in their modelling and are tailor-made for forecasting univariate time series like yours.
| https://stackoverflow.com/questions/60667909/ |
How to compare one picture to all test data in a siamese neural network? | I've been building a siamese neural network using PyTorch. But so far I've only tested it by inserting 2 pictures and calculating the similarity score, where 0 says the pictures are different and 1 says they are the same.
import numpy as np
import os, sys
from PIL import Image
dir_name = "/Users/tania/Desktop/Aksara/Compare" #this should contain 26 images only
X = []
for i in os.listdir(dir_name):
if ".PNG" in i:
X.append(torch.from_numpy(np.array(Image.open("./Compare/" + i))))
x1 = np.array(Image.open("/Users/tania/Desktop/Aksara/TEST/Ba/B/B.PNG"))
x1 = transforms(x1)
x1 = torch.from_numpy(x1)
#x1 = torch.stack([x1])
closest = 0.0 #highest similarity
closest_letter_idx = 0 #index of closest letter 0=A, 1=B, ...
cnt = 0
for i in X:
output = model(x1,i) #assuming x1 is your input image
output = torch.sigmoid(output)
if output > closest:
closest_letter_idx = cnt
closest = output
cnt += 1
Both pictures are different, so the output
File "test.py", line 83, in <module>
X.append(torch.from_numpy(Image.open("./Compare/" + i)))
TypeError: expected np.ndarray (got PngImageFile)
this is the directory
| Yes there is a way, you could use the softmax function:
output = torch.softmax(output, dim=1)  # dim is required by torch.softmax
This returns a tensor of 26 values, each corresponding to the probability that the image corresponds to each of the 26 classes. Hence, the tensor sums to 1 (100%).
However, this method is suitable for classification tasks, as opposed to Siamese Networks. Siamese networks compare between inputs, instead of sorting inputs into classes. From your question, it seems like you're trying to compare 1 picture with 26 others. You could loop over all the 26 samples to compare with, compute & save the similarity score for each, and output the maximum value (that is if you don't want to modify your model):
import os
import numpy as np
import torch
from PIL import Image

dir_name = '/Aksara/Compare' #this should contain 26 images only
X = []
for i in os.listdir(dir_name):
    if ".PNG" in i:
        X.append(torch.from_numpy(np.array(Image.open(os.path.join(dir_name, i)))))
x1 = np.array(Image.open("test.PNG"))
#do your transformations on x1
x1 = torch.from_numpy(x1)
closest = 0.0 #highest similarity
closest_letter_idx = 0 #index of closest letter 0=A, 1=B, ...
cnt = 0
for i in X:
output = model(x1,i) #assuming x1 is your input image
output = torch.sigmoid(output)
if output > closest:
closest_letter_idx = cnt
closest = output
cnt += 1
print(closest_letter_idx)
| https://stackoverflow.com/questions/60680091/ |
Can't convert Pytorch to ONNX | Trying to convert this pytorch model with ONNX gives me this error. I've searched github and this error came up before in version 1.1.0 but was apparently rectified. Now I'm on torch 1.4.0. (python 3.6.9) and I see this error.
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/init.py", line 148, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 416, in _export
fixed_batch_size=fixed_batch_size)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 296, in _model_to_graph
fixed_batch_size=fixed_batch_size, params_dict=params_dict)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 135, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/init.py", line 179, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 657, in _run_symbolic_function
return op_fn(g, *inputs, **attrs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py", line 128, in wrapper
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py", line 128, in
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py", line 81, in _parse_arg
"', since it's not constant, please try to make "
RuntimeError: Failed to export an ONNX attribute 'onnx::Gather', since it's not constant, please try to make things (e.g., kernel size) static if possible
How to fix it? I've also tried latest nightly build, same error comes up.
My code:
from model import BiSeNet
import torch.onnx
import torch
net = BiSeNet(19)
net.cuda()
net.load_state_dict(torch.load('/content/drive/My Drive/Collab/fp/res/cp/79999_iter.pth'))
net.eval()
dummy = torch.rand(1,3,512,512).cuda()
torch.onnx.export(net, dummy, "Model.onnx", input_names=["image"], output_names=["output"])
I added print (v.node ()) to symbolic_helper.py just before the runtime error is raised to see what's causing the error.
This is the output: %595 : Long() = onnx::Gather[axis=0](%592, %594) # /content/drive/My Drive/Collab/fp/model.py:111:0
And that line in 111 in model.py is: avg = F.avg_pool2d(feat32, feat32.size()[2:])
This source suggests that tensor.size method in pytorch cannot be recognized by onnx and needs to be modified into a constant.
| I used to have a similar error when exporting using
torch.onnx.export(model, x, ONNX_FILE_PATH)
and I fixed it by specifying the opset_version like so:
torch.onnx.export(model, x, ONNX_FILE_PATH, opset_version = 11)
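Applied to the export call from your question (untested; the only change is the added opset_version argument):
torch.onnx.export(net, dummy, "Model.onnx", input_names=["image"],
                  output_names=["output"], opset_version=11)
Newer opsets handle more dynamic-shape operations, which is often what resolves this kind of Gather-related export failure.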
| https://stackoverflow.com/questions/60682622/ |
TypeError: h5py objects cannot be pickled | I am trying to run a PyTorch implementation of some code which is supposed to work on the SBD dataset.
The training labels are originally available in .bin file, which are then converted to HDF5 (.h5) files.
Upon running the algorithm, I get an error as: " TypeError: h5py objects cannot be pickled "
I think the error is stemming from torch.utils.data.DataLoader.
Any idea if I am missing any concept here? I read that pickling is generally not preferred but as of now, my dataset is in HDF5 format only.
For your reference, the error's stack trace is as follows:
File "G:\My Drive\Debvrat - shared\Codes\CASENet PyTorch Implementations\SBD-lijiaman\main.py", line 130, in <module>
main()
File "G:\My Drive\Debvrat - shared\Codes\CASENet PyTorch Implementations\SBD-lijiaman\main.py", line 85, in main
win_feats5, win_fusion, viz, global_step)
File "G:\My Drive\Debvrat - shared\Codes\CASENet PyTorch Implementations\SBD-lijiaman\train_val\model_play.py", line 31, in train
for i, (img, target) in enumerate(train_loader):
File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 819, in __iter__
return _DataLoaderIter(self)
File "C:\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 560, in __init__
w.start()
File "C:\Anaconda3\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "C:\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "C:\Anaconda3\lib\site-packages\h5py\_hl\base.py", line 308, in __getnewargs__
raise TypeError("h5py objects cannot be pickled")
TypeError: h5py objects cannot be pickled
I am using Conda version 4.8.2, Python 3.7.4, PyTorch 1.0.0 with Cuda 10.2.89
Thanks,
| Setting num_workers=0 solved this issue for me.
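For reference, that just means constructing the loader without worker processes, e.g. (the other arguments here are placeholders):
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True, num_workers=0)
With num_workers=0 the dataset (and its open h5py file handle) never has to be pickled and sent to worker processes, which is exactly the step your traceback shows failing.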
| https://stackoverflow.com/questions/60684061/ |
How to load large multi file parquet files for tensorflow/pytorch | I am trying to load a few parquet files from a directory into Python for tensorflow/pytorch.
The files are too large to be loaded through the pyarrow.parquet functions
import pyarrow.parquet as pq
dataset = pq.ParquetDataset('dir')
table = dataset.read()
This gives out of memory error.
I have also tried using petastorm, but that doesn't work for make_reader() because it isn't of the petastorm type.
with make_batch_reader('dir') as reader:
dataset = make_petastorm_dataset(reader)
When I used the make_batch_reader() and then the make_petastorm_dataset(reader), it again gave a 'zip is not iterable' error or something along those lines.
I am not sure how to load the file into Python for ML training.
Some quick help would be greatly appreciated.
Thanks
Zash
| For pyarrow, you can list the directory with Python, iterate over *.parquet files, open each one as pq.ParquetFile, and read it one row group at a time. This will alleviate the memory pressure, but won't be super fast without parallelization.
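A minimal sketch of that pyarrow suggestion, assuming the 'dir' layout from the question (the glob pattern and chunk handling are placeholders):
import glob
import pyarrow.parquet as pq

for path in glob.glob('dir/*.parquet'):
    pf = pq.ParquetFile(path)
    for rg in range(pf.num_row_groups):
        chunk = pf.read_row_group(rg).to_pandas()
        # feed this chunk to your training loop / convert to tensors here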
For petastorm, you are right to use make_batch_reader(). Indeed, the error messages are not always helpful; but you can inspect the stack trace and investigate where in petastorm code it originates from.
| https://stackoverflow.com/questions/60685684/ |
Google Colab TensorBoard in another Chrome Tab | I am going through the PyTorch tutorials and am currently on the TensorBoard one.
Through research, I have been able to get it to work inline and through another tab.
My preference is to have it persisent in another tab that will update automatically.
The method described below uses ngrok:
https://medium.com/@iamsdt/using-tensorboard-in-google-colab-with-pytorch-458f9bb95212
However, it is unstable because it gets too many connections and then drops:
Too many connections! The tunnel session '1Z8HZHdl5gLd1xJVdx5Vqqpl9dW' has violated the rate-limit policy of 20 connections per minute by initiating 133 connections in the last 60 seconds. Please decrease your inbound connection volume or upgrade to a paid plan for additional capacity.
The error encountered was: ERR_NGROK_702
When this happens I have to restart the whole kernel and start over. Very frustrating.
This really isn't the solution I am looking for.
Does anyone have a solution for this?
I figured google would have a solution for this.....
| Update:
Here's an example cell with two buttons to open Tensorboard in another window and hide it on Colab notebook:
%load_ext tensorboard
%tensorboard --logdir="logdir"
import IPython
display(IPython.display.HTML('''
<button id='open_tb'>Open TensorBoard</button>
<button id='hide_tb'>Hide TensorBoard</button>
<script>document.querySelector('#open_tb').onclick = () => { window.open(document.querySelector('iframe').src, "__blank") }
document.querySelector('#hide_tb').onclick = () => { document.querySelector('iframe').style.display = "none" }</script>'''))
You can simply do this by copying the embedded Colab TensorBoard iFrame URL to another tab.
See example here in a gif
| https://stackoverflow.com/questions/60686617/ |
Can I combine Monte Carlo policy gradient algorithm with other policy gradient algorithms | I know that the Monte Carlo REINFORCE policy gradient algorithm differs in how it calculates the reward values, by computing the discounted cumulative future reward at each step.
Here is the piece of code that calculates the discounted cumulative future reward at each time step.
G = np.zeros_like(self.reward_memory, dtype=np.float64)
for t in range(len(self.reward_memory)):
G_sum = 0
discount = 1
for k in range(t, len(self.reward_memory)):
G_sum += self.reward_memory[k] * discount
discount *= self.gamma
G[t] = G_sum
Another technique for improving accuracy is to use only the rewards obtained after each action, called "reward to go".
Another is to add an entropy bonus.
Is it possible to add the entropy bonus and the rewards to go, or either one, to the Monte Carlo method?
Another step sometimes taken in the Monte Carlo method, after computing the rewards, is to normalize the values.
“In practice it can can also be important to normalize these. For example, suppose we compute [discounted cumulative reward] for all of the 20,000 actions in the batch of 100 Pong game rollouts above. One good idea is to “standardize” these returns (e.g. subtract mean, divide by standard deviation) before we plug them into backprop. This way we’re always encouraging and discouraging roughly half of the performed actions. Mathematically you can also interpret these tricks as a way of controlling the variance of the policy gradient estimator”.
Does it affect the accuracy if both or either of the entropy bonus or reward-to-go modifications is added?
That is from the research PDF https://arxiv.org/pdf/1506.02438.pdf
I am studying policy gradient algorithms and I want to know how to improve them. I would greatly appreciate it if you could help me out.
Edit:
I would also like to ask whether the advantage function could be added as well.
A(s,a) is the advantage function; is it possible to add this to the Monte Carlo approach, assuming we also add both reward to go and the entropy bonus?
| You are mixing some things up here.
The Monte Carlo approach is a way to compute the returns for the state-action pairs: as the discounted sum of all the future rewards after that state-action pair (s, a) following the current policy π.
(It is also worth noting that REINFORCE is not an especially good RL algorithm, and that Monte Carlo estimates of the returns have a rather high variance in comparison to e. g. TD(λ).)
The entropy bonus and the advantage function on the other hand are part of the loss (the function you use to train your actor), and therefore have nothing to do with the return computation.
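As an illustration only (the log_prob_memory and entropy_memory lists are assumptions — you would need to save those quantities during the episode, the way you already save reward_memory), combining your reward-to-go returns G with the standardization step quoted above and an entropy bonus in the loss could look like:
import torch

returns = torch.as_tensor(G, dtype=torch.float32)
returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # standardize the returns

log_probs = torch.stack(log_prob_memory)   # log pi(a_t | s_t) saved when each action was sampled
entropies = torch.stack(entropy_memory)    # entropy of pi(. | s_t) at each step

entropy_coef = 0.01                         # assumed coefficient
loss = -(log_probs * returns).mean() - entropy_coef * entropies.mean()
loss.backward()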
I would suggest you read the Reinforcement Learning Book to get a deeper understanding of what you're doing.
| https://stackoverflow.com/questions/60689453/ |
Can someone explain this pytorch neural network code ? Are there two different neural networks here or one? | class doubleNetwork(nn.Module):
def __init__(self, input_dim, output_dim):
super(doubleNetwork, self).__init__()
self.policy1 = nn.Linear(input_dim, 256)
self.policy2 = nn.Linear(256, output_dim)
self.value1 = nn.Linear(input_dim, 256)
self.value2 = nn.Linear(256, 1)
def forward(self, state):
logits = F.relu(self.policy1(state))
logits = self.policy2(logits)
value = F.relu(self.value1(state))
value = self.value2(value)
return logits, value
Are policy1, value1 in different networks or the same?
Are there two different neural networks here or single?
What is happening in the code here?
| You have two networks in parallel. You can see it in the forward method:
state -> policy1 -> policy2 -> logits
state -> value1 -> value2 -> value
policy1, policy2, value1 and value2 are 4 different and independent fully connected (Linear) layers. The nn.Linear method creates a new layer of neurons every time it's called.
Edit for more details:
In your code you define a doubleNetwork class; its __init__ method will be called when you create an object of this class.
So this line:
my_network = doubleNetwork(10,15)
calls the __init__ method and creates a new doubleNetwork object. The network will have 4 attributes — value1, value2, policy1, policy2 — that are fully connected layers.
the line:
self.policy1 = nn.Linear(input_dim, 256)
creates a new Linear object, which is a fully connected layer; when this line is executed, the weights for that layer are initialized.
The forward method of the network defines what happens when the network object is called. For example, with a line like this:
output1, output2 = my_network(input)
The code written in forward is the function applied to the input. Here the input, state, is passed through the two policy layers on one side and through the two value layers on the other, and then both outputs are returned. So the network has the form of a fork with one input and 2 outputs.
In this code it's one network, but because the two outputs depend only on the input and are independent of each other, we could have defined them as two separate networks with the same result. See this code for example:
class SingleNetwork(nn.Module):
    def __init__(self, input_dim, output_dim):
        super(SingleNetwork, self).__init__()
        self.layer1 = nn.Linear(input_dim, 256)
        self.layer2 = nn.Linear(256, output_dim)
    def forward(self, state):
        output = F.relu(self.layer1(state))
        output = self.layer2(output)
        return output
my_network1 = SingleNetwork(10, 15)
my_network2 = SingleNetwork(10, 1)
then:
output1 = my_network1(input)
output2 = my_network2(input)
Will be equivalent to
output1, output2 = my_network(input)
| https://stackoverflow.com/questions/60701648/ |
Pytorch device and .to(device) method | I'm trying to learn RNN and Pytorch.
So I saw some codes for RNN where in the forward probagation method, they did a check like this:
def forward(self, inputs, hidden):
if inputs.is_cuda:
device = inputs.get_device()
else:
device = torch.device("cpu")
embed_out = self.embeddings(inputs)
logits = torch.zeros(self.seq_len, self.batch_size, self.vocab_size).to(device)
I think the point of the check is to see if we can run the code on faster GPU instead of CPU?
To understand the code a bit more, I did the following:
ex= torch.zeros(3,10,5)
ex1= torch.tensor(np.array([[0,0,0,1,0], [1,0,0,0,0],[0,1,0,0,0]]))
print(ex)
print("device is")
print(ex1.get_device())
print(ex.to(ex1.get_device()))
And the output was:
...
[[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]]])
device is
-1
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-b09342e2ba0f> in <module>()
67 print("device is")
68 print(ex1.get_device())
---> 69 print(ex.to(ex1.get_device()))
RuntimeError: Device index must not be negative
I don't understand the "device" in the code and I don't understand the .to(device) method. Can you help me understand it?
| This code is deprecated. Just do:
def forward(self, inputs, hidden):
embed_out = self.embeddings(inputs)
logits = torch.zeros((self.seq_len, self.batch_size, self.vocab_size), device=inputs.device)
Note that to(device) is essentially free if the tensor is already on the requested device. Also, do not use get_device(); use the device attribute instead — it works for both CPU and GPU tensors out of the box, whereas get_device() is only meaningful for CUDA tensors (for a CPU tensor it gives -1, which is not a valid device index and is exactly what caused the error above).
Also note that torch.tensor(np.array(...)) is bad practice for several reasons. First, to convert a numpy array to a torch tensor, use torch.as_tensor or torch.from_numpy. Second, going through numpy gives you a tensor with numpy's default dtype instead of torch's; in this case they are the same (int64), but for floats they would differ. Finally, torch.tensor can be initialized from a list just like a numpy array, so you can drop numpy entirely and call torch directly.
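For example:
torch.tensor(np.array([1.5])).dtype   # torch.float64 (numpy's default float)
torch.tensor([1.5]).dtype             # torch.float32 (torch's default float)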
| https://stackoverflow.com/questions/60713781/ |
What does 'Epoch' mean in training Generative Adversarial Networks | I am training a GAN with text data. When I train the discriminator, I randomly sample m positive data points from the dataset and generate m negative data points with the generator. I found many papers mention implementation details such as training epochs. Regarding training epochs, I have a question about sampling positive data:
Sample from the dataset (maybe shuffled) in order, when the whole dataset is covered, we call 1 epoch
Just as I did, randomly sample positive data, when the total amount of sampled data is the same size as the dataset, we call 1 epoch
Which one is right? or which one is commonly used? or which one is better?
| In my opinion, an epoch is when you have passed through the whole training data once, and I think that when papers mention an epoch they also mean a pass through the whole training set.
However, an epoch can also be defined as processing k elements, where k can be smaller than n (the size of the training set). Such a definition might make sense when you want to get some evaluation of your model on the dev set, which you normally do after every single epoch.
After all, that is my opinion and my understanding of GAN papers.
Good luck!
| https://stackoverflow.com/questions/60715524/ |
Deploying a pytorch model in java | I have a pytorch model trained and saved and now I want to use it in a java (not android) environment in windows os (since I'm using some library only available in java), Is it possible? I couldn't find a straight answer in the pytorch docs, and when clicking java api docs the link is broken.
| @Gilad, You can do this with Deep Java Library (djl.ai). Check out: https://github.com/awslabs/djl/tree/master/pytorch/pytorch-engine
| https://stackoverflow.com/questions/60721831/ |
Is there any way to convert pytorch tensor to tensorflow tensor | https://github.com/taoshen58/BiBloSA/blob/ec67cbdc411278dd29e8888e9fd6451695efc26c/context_fusion/self_attn.py#L29
I need to use multi_dimensional_attention from the above link, which is implemented in TensorFlow, but I am using PyTorch. Can I convert a PyTorch tensor to a TensorFlow tensor, or do I have to implement it in PyTorch?
Here is the code I am trying to use. I have to pass 'rep_tensor' as a TensorFlow tensor, but I have a PyTorch tensor:
def multi_dimensional_attention(rep_tensor, rep_mask=None, scope=None,
keep_prob=1., is_train=None, wd=0., activation='elu',
tensor_dict=None, name=None):
# bs, sl, vec = tf.shape(rep_tensor)[0], tf.shape(rep_tensor)[1], tf.shape(rep_tensor)[2]
ivec = rep_tensor.shape[2]
with tf.variable_scope(scope or 'multi_dimensional_attention'):
map1 = bn_dense_layer(rep_tensor, ivec, True, 0., 'bn_dense_map1', activation,
False, wd, keep_prob, is_train)
map2 = bn_dense_layer(map1, ivec, True, 0., 'bn_dense_map2', 'linear',
False, wd, keep_prob, is_train)
# map2_masked = exp_mask_for_high_rank(map2, rep_mask)
soft = tf.nn.softmax(map2, 1) # bs,sl,vec
attn_output = tf.reduce_sum(soft * rep_tensor, 1) # bs, vec
# save attn
if tensor_dict is not None and name is not None:
tensor_dict[name] = soft
return attn_output
| You can convert a pytorch tensor to a numpy array and convert that to a tensorflow tensor and vice versa:
import torch
import tensorflow as tf
pytorch_tensor = torch.zeros(10)
np_tensor = pytorch_tensor.numpy()
tf_tensor = tf.convert_to_tensor(np_tensor)
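Going the other way is similar; assuming eager execution (the default in TensorFlow 2.x) so that .numpy() is available on the tensor:
pytorch_tensor_again = torch.from_numpy(tf_tensor.numpy())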
That being said, if you want to train a model that uses a combination of PyTorch and TensorFlow, that's going to be... awkward, slow, buggy and take a long time to write, to say the least, since the libraries have to figure out how to backpropagate the cost.
So unless the pytorch attention block you have is pre-trained, I'd recommend just sticking to one library or another, there's plenty of examples for implementing anything you want in either and plenty of pretrained models for both. Tensorflow is usually a bit faster but the speed differences aren't that significant and the kind of "hack" I presented above will likely make the whole thing slower than using either of the libraries standalone.
| https://stackoverflow.com/questions/60722008/ |
Predicting the surface of the car using its 2d bbox and plate bbox | I'm trying to solve an interesting problem without using a GPU-intensive model at inference time (no deep learning).
Input: 2D Image which contains car(s) in it, with accurate bboxes, and also a bbox of the plate's car. (We also know that the cameras are located just a bit above the cars)
Output: Surface of the car prediction (the bottom side of a cuboid in 3d bbox)
Approach 1: I'm trying to leverage the fact that I have some prior knowledge beyond the 2d bbox of the car, namely the 2d bbox of the plate, which can give me the orientation of the car. I thought about taking the angle between the center of the car's bbox and the center of the plate's 2d bbox to understand which direction the car is facing.
After I know the direction the car is facing to, I also can roughly know where should be one of the edges of the surface because of the fact that the 3d bbox is bounded by the 2d bbox (thus the surface is also bounded), and the fact that the 2d bbox of the plate is a few pixels far from the surface, so one of the edges of the surface can be estimated.
But, the problem here is determining the lateral edges, how 'long' should they be. I'm not quite sure how to estimate the lateral sides of the bottom surface, but I think it can be somehow inferred by the size of the 2d bbox of the car (which again, should bound that surface). Maybe I'll be able to solve it after finding the edge of the surface, and then exploring ways to infer the lateral edges of that surface.
Approach 2: Annotating the data with 3d bboxes with a pre-trained model, and trying to predict the 3d bbox from a 2d bbox (and probably some more priors like 2d bbox of the plate), but I'm not using a deep model to do it, but a simple NN with a few layers to predict the 3d bbox. (trained in a supervised manner)
| Deep learning-based object detection methods tend to achieve really high detection accuracy. Using a deep neural network is the usual way to improve bounding-box accuracy, and designing a reasonable regression loss function is also important. So, if accuracy is an important factor in the project, you may need to consider using deep learning.
But if accuracy doesn't matter that much and you really prefer not to use deep learning, then you can use other, simpler approaches.
Conventional 2D object detection yields 4-degrees-of-freedom (DoF) axis-aligned bounding boxes with center (x, y) and 2D size (w, h), while 3D bounding boxes in an autonomous-driving context generally have 7 DoF: 3D physical size (w, h, l), 3D center location (x, y, z) and yaw. Note that roll and pitch are normally assumed to be zero. Now the question is, how do we recover a 7-DoF object from a 4-DoF one?
You can find a solution and an explanation of the approach based on this research, but it is a little complex since it comes from a research paper.
In your 2nd Approach:
"Annotating the data with 3d bboxes with a pre-trained model"
You can try that, then do all the work of creating the 3D bbox during inference. This is a very specific and complex problem to answer directly, even more so without deep learning, but I hope my answer can help a bit.
Here is another approach I can share just in case you want to consider:
You can also train your own model that has different classes for each direction of the car. It actually may take you a lot of time to prepare the dataset for it. Using that model, you can easily detect car direction.
With that, you may be able to have a dedicated function create a 3D bbox based on the detected car direction. However, I cannot recommend this approach if you would rather not build your own annotated dataset, since it really takes a lot of time.
You can use OpenCV for creating the 3D bbox by getting the specific values you'll need from the 2D bbox.
But do take note that it will not give you the best accuracy; using deep learning instead is still the best way to get better accuracy. You can find a lot of implementations of this on the net.
| https://stackoverflow.com/questions/60723535/ |
Understanding tf.nn.depthwise_conv2d | From
https://www.tensorflow.org/api_docs/python/tf/nn/depthwise_conv2d
Given a 4D input tensor ('NHWC' or 'NCHW' data formats) and a filter
tensor of shape [filter_height, filter_width, in_channels,
channel_multiplier] containing in_channels convolutional filters of
depth 1, depthwise_conv2d applies a different filter to each input
channel (expanding from 1 channel to channel_multiplier channels for
each), then concatenates the results together. The output has
in_channels * channel_multiplier channels
What does it mean "expanding from 1 channel to channel_multiplier channels for each" ?
Is it possible to have out_channels < in_channels?
Is it possible to divide input tensor to groups like in Pytorch https://pytorch.org/docs/stable/nn.html#conv2d?
Example:
import tensorflow as tf
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
np.random.seed(2020)
print('tf.__version__', tf.__version__)
def get_data_batch():
bs = 2
h = 3
w = 3
c = 4
x_np = np.random.rand(bs, h, w, c)
x_np = x_np.astype(np.float32)
print('x_np.shape', x_np.shape)
return x_np
def run_conv_dw():
print('='*60)
x_np = get_data_batch()
in_channels = x_np.shape[-1]
kernel_size = 3
channel_multiplier = 1
with tf.Session() as sess:
x_tf = tf.convert_to_tensor(x_np)
filter = tf.get_variable('w1', [kernel_size, kernel_size, in_channels, channel_multiplier],
initializer=tf.contrib.layers.xavier_initializer())
z_tf = tf.nn.depthwise_conv2d(x_tf, filter=filter, strides=[1, 1, 1, 1], padding='SAME')
sess.run(tf.global_variables_initializer())
z_np = sess.run(fetches=[z_tf], feed_dict={x_tf: x_np})[0]
print('z_np.shape', z_np.shape)
if '__main__' == __name__:
run_conv_dw()
Channel multiplier can't be float:
If channel_multiplier = 1:
x_np.shape (2, 3, 3, 4)
z_np.shape (2, 3, 3, 4)
If channel_multiplier = 2:
x_np.shape (2, 3, 3, 4)
z_np.shape (2, 3, 3, 8)
| In pytorch terms:
1. always one input channel per group, with 'channel_multiplier' output channels per group;
2. not in one step;
3. see 1.
I see a way to emulate several input channels per group. For two, do depthwise_conv2d, then split the result tensor in half like a deck of cards along the channel axis, and sum the two halves elementwise (before relu etc.). Note that input channel number i will be grouped with channel i + inputs/2.
EDIT: The trick above is useful for small groups; for big ones, just split the input tensor into N parts, where N is the group count, run conv2d on each independently, then concatenate the results, as in the sketch below.
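A rough sketch of that EDIT (the group count and the conv parameters below are placeholders, using the TF1-style tf.layers API alongside the question's snippet):
groups = 2  # in_channels must be divisible by this
x_parts = tf.split(x_tf, num_or_size_splits=groups, axis=3)
out_parts = [tf.layers.conv2d(part, filters=8, kernel_size=3, padding='same',
                              name='group_%d' % g)
             for g, part in enumerate(x_parts)]
z_grouped = tf.concat(out_parts, axis=3)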
| https://stackoverflow.com/questions/60724571/ |
tensorboard colab tensorflow._api.v1.io.gfile' has no attribute 'get_filesystem | I am trying to use tensorboard on colab. I managed to make it work, but not for all commands. add_graph and add_scalar work, but when I tried to run add_embedding I got the following error:
AttributeError: module 'tensorflow._api.v1.io.gfile' has no attribute 'get_filesystem'
This is the relevant code (I think);
import os
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter(log_dir ="logs" )
images, labels = select_n_random(trainset.data, trainset.targets)
images = torch.from_numpy(images)
labels = torch.from_numpy(np.array(labels))
class_labels = [classes[lab] for lab in labels]
# log embeddings
features = images.reshape((-1,32*32*3))
writer.add_embedding(features,metadata=class_labels) #, label_img=images.unsqueeze(1))
The complete error is:
/tensorflow-1.15.0/python3.6/tensorflow_core/python/util/module_wrapper.py in __getattr__(self, name)
191 def __getattr__(self, name):
192 try:
--> 193 attr = getattr(self._tfmw_wrapped_module, name)
194 except AttributeError:
195 if not self._tfmw_public_apis:
AttributeError: module 'tensorflow._api.v1.io.gfile' has no attribute 'get_filesystem'
using
tensorflow-1.15.0 (tried to install 2.0 but had different issues)
Python 3.6.9
torch 1.4.0
tensorboard 2.1.1 (tried also with 1.15.0 but same issue)
I also tried using the "magic" command:
%load_ext tensorboard
%tensorboard --logdir logs
But I wasn't able to make it work that way (other issues).
Any suggestions how can I make it work?
| For me, this fixed the problem:
import tensorflow as tf
import tensorboard as tb
tf.io.gfile = tb.compat.tensorflow_stub.io.gfile
| https://stackoverflow.com/questions/60730544/ |
Approximate the q-function with NN in the FrozenLake exercise | import numpy as np
import gym
import random
import time
from IPython.display import clear_output
env = gym.make("FrozenLake-v0")
action_space_size = env.action_space.n
state_space_size = env.observation_space.n
q_table = np.zeros((state_space_size, action_space_size))
num_episodes = 10000
max_steps_per_episode = 100
learning_rate = 0.1
discount_rate = 0.99
exploration_rate = 1
max_exploration_rate = 1
min_exploration_rate = 0.01
exploration_decay_rate = 0.01
reward_all_episodes = []
#Q-learning algorithm
for episode in range(num_episodes):
state = env.reset()
done = False
reward_current_episode = 0
for step in range(max_steps_per_episode):
exploration_rate_threshold = random.uniform(0, 1)
if exploration_rate_threshold > exploration_rate: #Exploit
action = np.argmax(q_table[state, :])
else:
action = env.action_space.sample() #Explore
new_state, reward, done, info = env.step(action)
q_table[state, action] = (1 - learning_rate) * q_table[state, action] + \
learning_rate * (reward + discount_rate * np.max(q_table[new_state]))
state = new_state
reward_current_episode += reward
if done == True:
break
exploration_rate = min_exploration_rate + \
(max_exploration_rate - min_exploration_rate) * np.exp(-exploration_decay_rate * episode)
reward_all_episodes.append(reward_current_episode)
reward_per_thousand_episodes = np.split(np.array(reward_all_episodes), num_episodes/1000)
count = 1000
print("Average Reward per thousand episode \n")
for r in reward_per_thousand_episodes:
print(count, ":", str(sum(r/1000)))
count += 1000
print("\n ***************Q-table****************\n\n")
print(q_table)
I am new to AI and I need a bit of help. I have completed the FrozenLake exercise with MVP / Q-learning. Someone told me I can approximate the q-function using a deep neural network; they explained that this is called deep Q-learning. How can I improve this code using deep Q-learning and PyTorch? In other words, how can I approximate the q-function using a NN here?
| This is a slightly broad question, but here's a breakdown.
Firstly NNs are just function approximators.
Give them some input and output and they will find f(input) = output, provided such a function exists and the loss/cost is differentiable.
So the Q function is Q(state,action) = futureReward for that action taken in that state
Alternatively,
we can change the Q function to take in just the current state, and output an array of the estimated future rewards for each action.
Example:
[7,5,1,8] for action a,b,c,d
So now the Q function => Q(state) = futureRewardMatrix[action*]
Now all you need is a neural network that takes in the current state and outputs the rewards for each action.(Only works for discrete actions [a,b,c,d..])
How to train the network.
Collect a training batch,by collecting states,actions,rewards, nextState
To get actions you use nn.predict(state), factoring epsilon to choose
random actions
Training:
x_train = state
y_train[action] = reward + self.gamma * (np.amax(nn.predict(nextState))
next we train on a relatively large batch of x_trains and y_trains
nn.train_on_batch(x_train_batch,y_train_batch)
Then repeat the process, of collecting batches for every step of the environment.
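Since you asked about PyTorch specifically, here is a minimal sketch of that idea (the layer sizes, learning rate and one-hot encoding are my own assumptions, and it leaves out the replay buffer and target network that a full DQN normally uses):
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_space_size, action_space_size):
        super(QNetwork, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(state_space_size, 64),
            nn.ReLU(),
            nn.Linear(64, action_space_size))

    def forward(self, state):
        return self.net(state)

def one_hot(s, n):
    v = torch.zeros(n)
    v[s] = 1.0
    return v

q_net = QNetwork(state_space_size, action_space_size)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# one update for a single transition (state, action, reward, new_state, done),
# replacing the tabular q_table update from the question
q_values = q_net(one_hot(state, state_space_size))
with torch.no_grad():
    target = q_values.clone()
    next_q = q_net(one_hot(new_state, state_space_size))
    target[action] = reward + (0.0 if done else discount_rate * next_q.max())
loss = nn.functional.mse_loss(q_values, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()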
I recommend you check out medium and towardsdatascience DQN articles and their respective Github repos to get full code implementation
| https://stackoverflow.com/questions/60749628/ |
Deploying a hosted deep learning model on Heroku? | I currently want to deploy a deep learning REST API using Flask on Heroku. The weights (it's a pre-trained BERT model) are stored here as a .zip file. Is there a way I can directly deploy these?
From what I currently understand I have to have these uploaded on Github/S3. That's a bit of a hassle and seems pointless since they are already hosted. Do let me know!
| Generally you can write a bash script that unzips the content and then you execute your program. However...
Time Concern: Unpacking costs time. And the free tier heroku workers only work for roughly a day before being forcefully restarted. If you are operating a web dyno the restarts will be even more frequent and if it takes too long to boot up the process fails (60 seconds to bind to $PORT)
Size Concern: That zip file is 386 MB and, once unpacked, likely to be even bigger.
Heroku has a slug size limit of 500 MB see: https://devcenter.heroku.com/changelog-items/1145
Once the zip file is unpacked you will be over the limit: the zip file itself plus its unpacked content is well over 500 MB. You would need to pre-unpack it and make sure the files stay under 500 MB, but given that the data is already 386 MB zipped, it will be bigger unpacked. Furthermore, you will rely on some buildpacks (python, javascript, ...) that take up space as well. You will go well over 500 MB.
Which means: You will need to pay for Heroku services or look for a different hosting provider.
| https://stackoverflow.com/questions/60757087/ |
Understanding log_prob for Normal distribution in pytorch | I'm currently trying to solve Pendulum-v0 from the openAi gym environment which has a continuous action space. As a result, I need to use a Normal Distribution to sample my actions. What I don't understand is the dimension of the log_prob when using it :
import torch
from torch.distributions import Normal
means = torch.tensor([[0.0538],
[0.0651]])
stds = torch.tensor([[0.7865],
[0.7792]])
dist = Normal(means, stds)
a = torch.tensor([1.2,3.4])
d = dist.log_prob(a)
print(d.size())
I was expecting a tensor of size 2 (one log_prob for each action) but it outputs a tensor of size (2,2).
However, when using a Categorical distribution for discrete environment the log_prob has the expected size:
logits = torch.tensor([[-0.0657, -0.0949],
[-0.0586, -0.1007]])
dist = Categorical(logits = logits)
a = torch.tensor([1, 1])
print(dist.log_prob(a).size())
give me a tensor a size(2).
Why is the log_prob for Normal distribution of a different size ?
| If one takes a look in the source code of torch.distributions.Normal and finds the definition of the log_prob(value) function, one can see that the main part of the calculation is:
return -((value - self.loc) ** 2) / (2 * var) - some other part
where value is a variable containing values for which you want to calculate the log probability (in your case, a), self.loc is the mean of the distribution (in you case, means) and var is the variance, that is, the square of the standard deviation (in your case, stds**2). One can see that this is indeed the logarithm of the probability density function of the normal distribution, minus some constants and logarithm of the standard deviation that I don't write above.
In the first example, you define means and stds to be column vectors, while the values to be a row vector
means = torch.tensor([[0.0538],
[0.0651]])
stds = torch.tensor([[0.7865],
[0.7792]])
a = torch.tensor([1.2,3.4])
But subtracting a row vector from a column vector, which is what the code does in value - self.loc, gives a matrix in Python (try it!), so the result you obtain is one log_prob value for each of your two defined distributions and for each of the values in a.
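To see this concretely with the tensors from the question:
print((a - means).shape)                     # torch.Size([2, 2]) -> one value per (distribution, sample) pair
print(dist.log_prob(a.unsqueeze(-1)).shape)  # torch.Size([2, 1]) once a is made a column vector too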
If you want to obtain a log_prob without the cross terms, then define the variables consistently, i.e., either
means = torch.tensor([[0.0538],
[0.0651]])
stds = torch.tensor([[0.7865],
[0.7792]])
a = torch.tensor([[1.2],[3.4]])
or
means = torch.tensor([0.0538,
0.0651])
stds = torch.tensor([0.7865,
0.7792])
a = torch.tensor([1.2,3.4])
This is what you do in your second example, which is why you obtain the result you expected.
| https://stackoverflow.com/questions/60765000/ |
Pytorch w/ GPU on Docker Container Error - no CUDA-capable device is detected | I am trying to use Pytorch with a GPU on my Docker Container.
1. On the Host -
I have nvidia-docker installed, CUDA Driver etc
Here is the nvidia-smi output from host:
Fri Mar 20 04:29:49 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00 Driver Version: 440.64.00 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 33C P8 28W / 149W | 16MiB / 11441MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1860 G /usr/lib/xorg/Xorg 15MiB |
+-----------------------------------------------------------------------------+
2. On the Docker Container (Dockerfile for app - Docker Compose File below) -
FROM ubuntu:latest
FROM dsksd/pytorch:0.4
#FROM nvidia/cuda:10.1-base-ubuntu18.04
#FROM nablascom/cuda-pytorch
#FROM nvidia/cuda:10.0-base
RUN apt-get update -y --fix-missing
RUN apt-get install -y python3-pip python3-dev build-essential
RUN apt-get install -y sudo curl
#RUN sudo apt-get install -y nvidia-container-toolkit
#RUN apt-get install -y curl python3.7 python3-pip python3.7-dev python3.7-distutils build-essential
#RUN apt-get install -y curl
#RUN apt-get install -y sudo
#RUN curl -O http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
#RUN sudo dpkg -i cuda-repo-ubuntu1604_10.0.130-1_amd64.deb
#RUN sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
#RUN sudo apt-get install cuda -y
#----------
# Add the package repositories
#RUN distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
#RUN curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
#RUN curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
#RUN sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
#RUN sudo systemctl restart docker
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
ENV LD_LIBRARY_PATH $LD_LIBRARY_PATH:/usr/local/cuda-10.1/compat/
ENV PYTHONPATH $PATH
#----------
ENV LC_ALL=mylocale.utf8
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
ENTRYPOINT ["python3"]
EXPOSE 5000
CMD ["hook.py"]
When I try running my code on the GPU I run into:
>>> torch.cuda.current_device()
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=50 error=100 : no CUDA-capable device is detected
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 386, in current_device
_lazy_init()
File "/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py", line 193, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:50
I invoke the container using : docker-compose up --build
Here is my docker-compose.yaml file:
version: '3.6'
services:
rdb:
image: mysql:5.7
#restart: always
environment:
MYSQL_DATABASE: 'c_rdb'
MYSQL_USER: 'user'
MYSQL_PASSWORD: 'password'
MYSQL_ROOT_PASSWORD: '123123'
#ports:
# - '3306:3306'
#expose:
# - '3306'
volumes:
- rdb-data:/var/lib/mysql
- ./init-db/init.sql:/docker-entrypoint-initdb.d/init.sql
mongo:
image: mongo
#restart: always
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: 12312323
MONGO_INITDB_DATABASE: chronicler_ndb
volumes:
- ndb-data:/data/db
- ./init-db/init.js:/docker-entrypoint-initdb.d/init.js
ports:
- '27017-27019:27017-27019'
mongo-express:
image: mongo-express
#restart: always
depends_on:
- mongo
- backend
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: rooer
ME_CONFIG_MONGODB_ADMINPASSWORD: 123123
redis:
image: redis:latest
command: ["redis-server", "--appendonly", "yes"]
hostname: redis
#ports:
# - "6379:6379"
volumes:
- cache-data:/data
backend:
build: ./app
ports:
- "5000:5000"
volumes:
- backend-data:/code
links:
- rdb
- redis
volumes:
rdb-data:
name: c-relational-data
ndb-data:
name: c-nosql-data
cache-data:
name: redis-data
backend-data:
name: backend-engine
| It needs the runtime option, but that option is not available in compose file format 3. So there are some options:
Downgrade your compose file to version 2 (the runtime key is supported from file format 2.3), so something like this:
version: "2.3"
services:
  backend:
    build: ./app
    ports:
      - "5000:5000"
    volumes:
      - backend-data:/code
    links:
      - rdb
      - redis
    runtime: nvidia
Or, manually run the container using docker run with --runtime=nvidia argument
Also, I recommend using an image built by nvidia instead of ubuntu:latest.
For more information, you can read the issue here
| https://stackoverflow.com/questions/60768583/ |
Torch.nn.Transformer Example Code Throwing Tensor Shape Errors | I was trying to implement a Transformer model with Pytorch and was experimenting with the example from this GitHub repo, which was linked from here in the documentation, and ran into a problem within the PositionalEncoding class, found within model.py.
The code for the class's __init__() function is as follows:
def __init__(self, d_model, dropout=0.1, max_len=5000):
super(PositionalEncoding, self).__init__()
self.dropout = nn.Dropout(p=dropout)
pe = torch.zeros(max_len, d_model)
position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
pe[:, 0::2] = torch.sin(position * div_term)
pe[:, 1::2] = torch.cos(position * div_term)
pe = pe.unsqueeze(0).transpose(0, 1)
self.register_buffer('pe', pe)
This code, run with d_model = 103, threw the following error on the fourth-last line (pe[:, 0::2] = ...):
RuntimeError: The expanded size of the tensor (52) must match the existing size (51) at non-singleton dimension 1. Target sizes: [5000, 52]. Tensor sizes: [5000, 51]
I've found this error fairly impenetrable and haven't had much success writing my own similarly effective implementation.
My first guess would be that this is a problem with version changes within Python/PyTorch, but it could of course be something else I'm missing.
| Try
0:51:2 instead of 0::2
0::2 will generate this -> [0, 2, 4, ..., until the end of the elements]
| https://stackoverflow.com/questions/60769118/ |
In pytorch, how can I sum some elements, and get a tensor of smaller shape? | Specifically, I have a tensor of dimension 298x160x160 (faces in 298 frames), and I need to sum every 4x4 block in the last two dimensions so that I get a 298x40x40 tensor.
How can I achieve that?
| You could create a Convolutional layer with a single 4x4 channel and set its weights to 1, with a stride of 4 (also see Conv2D doc):
a = torch.ones((298,160,160))
# add a dimension for the channels. Conv2D expects the input to be : (N,C,H,W)
# where N=number of samples, C=number of channels, H=height, W=width
a = a.unsqueeze(1)
a.shape
Out: torch.Size([298, 1, 160, 160])
with torch.no_grad(): # I assume you don't need to backprop, otherwise remove this check
m = torch.nn.Conv2d(in_channels=1, out_channels=1, kernel_size=4,stride=4,bias=False)
# set the kernel values to 1
m.weight.data = m.weight.data * 0. + 1.
# apply the kernel and squeeze the channel dim out again
res = m(a).squeeze()
res.shape
Out: torch.Size([298, 40, 40])
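Since summing a 4x4 block is the same as averaging it and multiplying by 16, an arguably simpler alternative (a sketch, reusing the unsqueezed a from above) is average pooling:
import torch.nn.functional as F
res = F.avg_pool2d(a, kernel_size=4) * 16   # a has shape (298, 1, 160, 160) here
res = res.squeeze(1)                        # back to (298, 40, 40)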
| https://stackoverflow.com/questions/60769227/ |
Have you encountered a similar problem, like loss jitter during training? | Background: It's about loss jitter that appears at the beginning of every training epoch. When the dataloader loads the first batch of data to feed into the network, the loss value always rises suddenly, then returns to normal from the second batch onward and continues to decline. The curve looks strange. I need your help!
for epoch in range(begin_epoch, end_epoch):
print('PROGRESS: %.2f%%' % (100.0 * epoch / end_epoch))
# set epoch as random seed of sampler while distributed training
if train_sampler is not None and hasattr(train_sampler, 'set_epoch'):
train_sampler.set_epoch(epoch)
# reset metrics
metrics.reset()
# set net to train mode
net.train()
# clear the paramter gradients
# optimizer.zero_grad()
# init end time
end_time = time.time()
if isinstance(lr_scheduler, torch.optim.lr_scheduler.ReduceLROnPlateau):
name, value = validation_monitor.metrics.get()
val = value[name.index(validation_monitor.host_metric_name)]
lr_scheduler.step(val, epoch)
# training
train_loader_iter = iter(train_loader)
for nbatch in range(total_size):
try:
batch = next(train_loader_iter)
except StopIteration:
print('reset loader .. ')
train_loader_iter = iter(train_loader)
batch = next(train_loader_iter)
global_steps = total_size * epoch + nbatch
os.environ['global_steps'] = str(global_steps)
# record time
data_in_time = time.time() - end_time
# transfer data to GPU
data_transfer_time = time.time()
batch = to_cuda(batch)
data_transfer_time = time.time() - data_transfer_time
# forward
forward_time = time.time()
outputs, loss = net(*batch)
loss = loss.mean()
if gradient_accumulate_steps > 1:
loss = loss / gradient_accumulate_steps
forward_time = time.time() - forward_time
# backward
backward_time = time.time()
if fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
backward_time = time.time() - backward_time
optimizer_time = time.time()
if (global_steps + 1) % gradient_accumulate_steps == 0:
# clip gradient
if clip_grad_norm > 0:
if fp16:
total_norm = torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer),
clip_grad_norm)
else:
total_norm = torch.nn.utils.clip_grad_norm_(net.parameters(),
clip_grad_norm)
if writer is not None:
writer.add_scalar(tag='grad-para/Total-Norm',
scalar_value=float(total_norm),
global_step=global_steps)
optimizer.step()
# step LR scheduler
if lr_scheduler is not None and not isinstance(lr_scheduler,
torch.optim.lr_scheduler.ReduceLROnPlateau):
lr_scheduler.step()
# clear the parameter gradients
optimizer.zero_grad()
optimizer_time = time.time() - optimizer_time
# update metric
metric_time = time.time()
metrics.update(outputs)
if writer is not None and nbatch % 50 == 0:
with torch.no_grad():
for group_i, param_group in enumerate(optimizer.param_groups):
writer.add_scalar(tag='Initial-LR/Group_{}'.format(group_i),
scalar_value=param_group['initial_lr'],
global_step=global_steps)
writer.add_scalar(tag='LR/Group_{}'.format(group_i),
scalar_value=param_group['lr'],
global_step=global_steps)
writer.add_scalar(tag='Train-Loss',
scalar_value=float(loss.item()),
global_step=global_steps)
name, value = metrics.get()
for n, v in zip(name, value):
if 'Logits' in n:
writer.add_scalar(tag='Train-Logits/' + n,
scalar_value=v,
global_step=global_steps)
else:
writer.add_scalar(tag='Train-' + n,
scalar_value=v,
global_step=global_steps)
for k, v in outputs.items():
if 'score' in k:
writer.add_histogram(tag=k,
values=v,
global_step=global_steps)
metric_time = time.time() - metric_time
| You have a batch in your dataset that has a high loss, that's it.
It is not that common for people to store metrics for every batch; usually it is the average over the epoch (or over multiple batch steps) that is stored. You won't see such spikes if you store averages (see the sketch below).
You could also reduce these spikes by shuffling your data so that the problematic batch is spread out across the epoch. In general it is good practice to do so at the beginning of each epoch.
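For example, a sketch of logging the per-epoch average instead of the raw per-batch value, fitted into your existing loop (the tag name is a placeholder):
epoch_loss, n_batches = 0.0, 0
for nbatch in range(total_size):
    # ... forward / backward / optimizer step as in your loop ...
    epoch_loss += loss.item()
    n_batches += 1
if writer is not None:
    writer.add_scalar(tag='Train-Loss/epoch-mean',
                      scalar_value=epoch_loss / n_batches,
                      global_step=epoch)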
| https://stackoverflow.com/questions/60774620/ |
Installing torchvision from source libavcodec/avcodec.h not found | I am trying to install torchvision from source, was able to get pytorch installed (needed it from source to use GPU) and now can't get torchvision to work.
I am getting the following error when I run the setup.py:
C:\Users\hoski\vision\torchvision\csrc\cpu\decoder\defs.h(11): fatal error C1083: Cannot open include file: 'libavcodec/avcodec.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.24.28314\bin\HostX86\x64\cl.exe' failed with exit status 2
I downloaded the source for ffmpeg and added it to my path, and I checked the libavcodec folder and the file is there, it just isn't seeing it I guess?
Any advice would be greatly appreciated!
| Try changing has_ffmpeg = ffmpeg_exe is not None in setup.py to has_ffmpeg = False
| https://stackoverflow.com/questions/60781599/ |
how to assign value to a tensor using index | I defined four tensors that represent index_x, index_y, index_z, and value, respectively, and assigned value to a new tensor using these three indices. Why were the results of the two assignments different?
import torch
import numpy as np
import random
import os
def seed_torch(seed=0):
random.seed(seed)
np.random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
seed_torch(1)
a_list, b_list, c_list = [], [], []
for i in range(0, 512*512):
a_ = random.randint(0, 399)
b_ = random.randint(0, 399)
c_ = random.randint(0, 199)
a_list.append(a_)
b_list.append(b_)
c_list.append(c_)
a = torch.tensor(a_list)
b = torch.tensor(b_list)
c = torch.tensor(c_list)
v = torch.rand(512*512)
matrix1 = torch.zeros(400,400,200)
matrix2 = torch.zeros(400,400,200)
index=[a,b,c]
matrix1[index]=v
matrix2[index]=v
m = matrix1 - matrix2
print(m.sum())
print(m.sum()) is not zero
| Can't add a comment, but when I run your exact code it returns tensor(0.) on my machine, so it seems to work just fine.
Also, just a tip, instead of the for loop
a_list, b_list, c_list = [], [], []
for i in range(0, 512*512):
a_ = random.randint(0, 399)
b_ = random.randint(0, 399)
c_ = random.randint(0, 199)
a_list.append(a_)
b_list.append(b_)
c_list.append(c_)
a = torch.tensor(a_list)
b = torch.tensor(b_list)
c = torch.tensor(c_list)
you could also do:
a = torch.randint(400, (512*512,))
b = torch.randint(400, (512*512,))
c = torch.randint(200, (512*512,))
| https://stackoverflow.com/questions/60808314/ |
How to update weights of two separate neural networks with a computed loss? | I have an encoder and a proxy network that helps the encoder maximize the information between its input (an image) and output (the image's feature vector). To get this done, I use a loss function that estimates MI, and an optimizer updates the weights of both networks with the computed loss, but I'm not sure whether this is done correctly or not. I used the following code (in PyTorch):
# Clear the previous gradients
discriminator_net_optim.zero_grad()
encoder_net_optim.zero_grad()
autograd.backward(loss)
torch.nn.utils.clip_grad_norm_(discriminator.parameters(), 2)
torch.nn.utils.clip_grad_norm_(encoder.parameters(), 2)
# adjust weights in discriminator and encoder
discriminator_net_optim.step()
encoder_net_optim.step()
Any help or suggestion is appreciated.
| If you have multiple networks, this is an example of how they would train
encoder = Encoder(args).to(device)
decoder = Decoder(args).to(device)
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, learning_rate)
And this is called on each batch:
optimizer.zero_grad()
loss.backward()
optimizer.step()
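Applied to the variable names from your question, the single-optimizer version would be a sketch like:
params = list(discriminator.parameters()) + list(encoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)  # learning rate is a placeholder
Using one optimizer over both parameter lists behaves the same as your two-optimizer setup as long as both optimizers use the same hyperparameters; the important part is zeroing the gradients before the backward pass and stepping after it, which your code already does.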
Hope it helps
| https://stackoverflow.com/questions/60815938/ |
Dimension mismatch CNN LibTorch/PyTorch | I have a CNN structure in LibTorch but the dimensions are not OK. My objective is to input a 3-channel 64x64 image and output a logistic regression float for a DGAN. For the last layer I set 36 input channels because, if I remove that layer, the output neuron had 6x6 dimensions, so I guessed that was the required input dimension for the fully connected layer. I would like to know:
What do you normally do to check dimensions in LibTorch or Pytorch (i.e. check the required size for the last module, check how many trainable parameters has each layer ...)
What is the error in this case
#include <torch/torch.h>
#include "parameters.h"
using namespace torch;
class DCGANDiscriminatorImpl: public nn::Module {
private:
nn::Conv2d conv1, conv2, conv3, conv4;
nn::BatchNorm2d batch_norm1, batch_norm2;
nn::Linear fc1;
public:
DCGANDiscriminatorImpl()
:conv1(nn::Conv2dOptions(3, 64, 4).stride(2).padding(1).bias(false)),
conv2(nn::Conv2dOptions(64, 128, 4).stride(2).padding(1).bias(false)),
batch_norm1(128),
conv3(nn::Conv2dOptions(128, 256, 4).stride(2).padding(1).bias(false)),
batch_norm2(256),
conv4(nn::Conv2dOptions(256, 1, 3).stride(1).padding(0).bias(false)),
fc1(6*6, 1)
{
register_module("conv1", conv1);
register_module("conv2", conv2);
register_module("conv3", conv3);
register_module("conv4", conv4);
register_module("batch_norm1", batch_norm1);
register_module("batch_norm2", batch_norm2);
register_module("fc1", fc1);
}
Tensor forward(torch::Tensor x)
{
x = leaky_relu(conv1(x), cte::NEGATIVE_SLOPE);
x = leaky_relu(batch_norm1(conv2(x)), cte::NEGATIVE_SLOPE);
x = leaky_relu(batch_norm2(conv3(x)), cte::NEGATIVE_SLOPE);
x = sigmoid(fc1(x));
return x;
}
};
TORCH_MODULE(DCGANDiscriminator);
The error I get is:
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: size mismatch, m1: [131072 x 8], m2: [36 x 1] at ../aten/src/TH/generic/THTensorMath.cpp:136
| I had several issues, but in the end this architecture worked.
using namespace torch;
class DCGANDiscriminatorImpl: public nn::Module {
private:
nn::Conv2d conv1, conv2, conv3, conv4;
nn::BatchNorm2d batch_norm1, batch_norm2;
nn::Linear fc1;
public:
DCGANDiscriminatorImpl()
:conv1(nn::Conv2dOptions(3, 64, 4).stride(2).padding(1).bias(false)),
conv2(nn::Conv2dOptions(64, 128, 4).stride(2).padding(1).bias(false)),
batch_norm1(128),
conv3(nn::Conv2dOptions(128, 256, 4).stride(2).padding(1).bias(false)),
batch_norm2(256),
conv4(nn::Conv2dOptions(256, 64, 3).stride(1).padding(0).bias(false)),
fc1(6*6*64, 1)
{
register_module("conv1", conv1);
register_module("conv2", conv2);
register_module("conv3", conv3);
register_module("conv4", conv4);
register_module("batch_norm1", batch_norm1);
register_module("batch_norm2", batch_norm2);
register_module("fc1", fc1);
}
Tensor forward(torch::Tensor x)
{
x = leaky_relu(conv1(x), cte::NEGATIVE_SLOPE);
x = leaky_relu(batch_norm1(conv2(x)), cte::NEGATIVE_SLOPE);
x = leaky_relu(batch_norm2(conv3(x)), cte::NEGATIVE_SLOPE);
x = leaky_relu(conv4(x), cte::NEGATIVE_SLOPE);
x = x.view({x.size(0), -1});
x = sigmoid(fc1(x));
return x;
}
};
TORCH_MODULE(DCGANDiscriminator);
| https://stackoverflow.com/questions/60826846/ |
Pytorch crashes on input in eval mode | My model trains perfectly fine, but when I switch it to evaluation mode it does not like the data types of the input samples:
Traceback (most recent call last):
File "model.py", line 558, in <module>
main_function(train_sequicity=args.train)
File "model.py", line 542, in main_function
out = model(user, bspan, response_, degree)
File "/home/memduh/git/project/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "model.py", line 336, in forward
self.params['bspan_size'])
File "model.py", line 283, in _greedy_decode_output
out = decoder(input_, encoder_output)
File "/home/memduh/git/project/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "model.py", line 142, in forward
tgt = torch.cat([go_tokens, tgt], dim=0) # concat GO_2 token along sequence lenght axis
RuntimeError: Expected object of scalar type Long but got scalar type Float for sequence element 1 in sequence argument at position #1 'tensors'
This seems to occur in a part of the code where concatenation happens. This is in an architecture similar to the pytorch transformer, just modified to have two decoders:
def forward(self, tgt, memory):
""" Call decoder
the decoder should be called repeatedly
Args:
tgt: input to transformer_decoder, shape: (seq, batch)
memory: output from the encoder
Returns:
output from linear layer, (vocab size), pre softmax
"""
go_tokens = torch.zeros((1, tgt.size(1)), dtype=torch.int64) + 3 # GO_2 token has index 3
tgt = torch.cat([go_tokens, tgt], dim=0) # concat GO_2 token along sequence lenght axis
+
mask = tgt.eq(0).transpose(0,1) # 0 corresponds to <pad>
tgt = self.embedding(tgt) * self.ninp
tgt = self.pos_encoder(tgt)
tgt_mask = self._generate_square_subsequent_mask(tgt.size(0))
output = self.transformer_decoder(tgt, memory, tgt_mask=tgt_mask, tgt_key_padding_mask=mask)
output = self.linear(output)
return output
The concatenation bit in the middle of the codeblock is where the problem happens. The odd thing is that it works perfectly fine and trains, with loss going down in train mode. This issue only comes up in eval mode. What could the problem be?
| The error seems to be clear: tgt is Float, but it was expected to be Long. Why?
In your code, you define that go_tokens is torch.int64 (i.e., Long):
def forward(self, tgt, memory):
go_tokens = torch.zeros((1, tgt.size(1)), dtype=torch.int64) + 3 # GO_2 token has index 3
tgt = torch.cat([go_tokens, tgt], dim=0) # concat GO_2 token along sequence lenght axis
# [...]
You can avoid that error by saying that go_tokens should have the same data type as tgt:
def forward(self, tgt, memory):
go_tokens = torch.zeros((1, tgt.size(1)), dtype=tgt.dtype) + 3 # GO_2 token has index 3
tgt = torch.cat([go_tokens, tgt], dim=0) # concat GO_2 token along sequence lenght axis
# [...]
Now, if the rest of the code relies on tgt being torch.int64, then you should identify why tgt is torch.int64 at training time and torch.float32 at test time, otherwise another error will be thrown.
| https://stackoverflow.com/questions/60838718/ |
Pytorch - Concatenating Datasets before using Dataloader | I am trying to load two datasets and use them both for training.
Package versions: python 3.7;
pytorch 1.3.1
It is possible to create data_loaders seperately and train on them sequentially:
from torch.utils.data import DataLoader, ConcatDataset
train_loader_modelnet = DataLoader(ModelNet(args.modelnet_root, categories=args.modelnet_categories,split='train', transform=transform_modelnet, device=args.device),batch_size=args.batch_size, shuffle=True)
train_loader_mydata = DataLoader(MyDataset(args.customdata_root, categories=args.mydata_categories, split='train', device=args.device),batch_size=args.batch_size, shuffle=True)
for e in range(args.epochs):
for idx, batch in enumerate(tqdm(train_loader_modelnet)):
# training on dataset1
for idx, batch in enumerate(tqdm(train_loader_custom)):
# training on dataset2
Note: MyDataset is a custom dataset class which has def __len__(self): and def __getitem__(self, index): implemented. As the above configuration works, it seems that this implementation is OK.
But I would ideally like to combine them into a single dataloader object. I attempted this as per the pytorch documentation:
train_modelnet = ModelNet(args.modelnet_root, categories=args.modelnet_categories,
split='train', transform=transform_modelnet, device=args.device)
train_mydata = CloudDataset(args.customdata_root, categories=args.mydata_categories,
split='train', device=args.device)
train_loader = torch.utils.data.ConcatDataset(train_modelnet, train_customdata)
for e in range(args.epochs):
for idx, batch in enumerate(tqdm(train_loader)):
# training on combined
However, on random batches I get the following 'expected a tensor as element X in argument 0, but got a tuple instead' type of error. Any help would be much appreciated!
> 40%|████ | 53/131 [01:03<02:00, 1.55s/it]
> Traceback (most recent call last): File
> "/home/chris/Programs/pycharm-anaconda-2019.3.4/plugins/python/helpers/pydev/pydevd.py",
> line 1434, in _exec
> pydev_imports.execfile(file, globals, locals) # execute the script File
> "/home/chris/Programs/pycharm-anaconda-2019.3.4/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
> exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/chris/Documents/4yp/Data/my_kaolin/Classification/pointcloud_classification_combinedset.py",
> line 83, in <module>
> for idx, batch in enumerate(tqdm(train_loader)): File "/home/chris/anaconda3/envs/4YP/lib/python3.7/site-packages/tqdm/std.py",
> line 1107, in __iter__
> for obj in iterable: File "/home/chris/anaconda3/envs/4YP/lib/python3.7/site-packages/torch/utils/data/dataloader.py",
> line 346, in __next__
> data = self._dataset_fetcher.fetch(index) # may raise StopIteration File
> "/home/chris/anaconda3/envs/4YP/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py",
> line 47, in fetch
> return self.collate_fn(data) File "/home/chris/anaconda3/envs/4YP/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py",
> line 79, in default_collate
> return [default_collate(samples) for samples in transposed] File "/home/chris/anaconda3/envs/4YP/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py",
> line 79, in <listcomp>
> return [default_collate(samples) for samples in transposed] File "/home/chris/anaconda3/envs/4YP/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py",
> line 55, in default_collate
> return torch.stack(batch, 0, out=out) TypeError: expected Tensor as element 3 in argument 0, but got tuple
| If I got your question right, you have train and dev sets (and their corresponding loaders) as follows:
train_set = CustomDataset(...)
train_loader = DataLoader(dataset=train_set, ...)
dev_set = CustomDataset(...)
dev_loader = DataLoader(dataset=dev_set, ...)
And you want to concatenate them in order to use train+dev as the training data, right? If so, you just simply call:
train_dev_sets = torch.utils.data.ConcatDataset([train_set, dev_set])
train_dev_loader = DataLoader(dataset=train_dev_sets, ...)
The train_dev_loader is the loader containing data from both sets.
Now, be sure your data has the same shapes and the same types, that is, the same number of features, or the same categories/numbers, etc.
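As a quick hedged sanity check (variable names follow the question; the assumption is that __getitem__ returns a tuple of tensors), compare one sample from each dataset before concatenating, since the default collate_fn fails exactly as in the traceback when the per-sample structures differ:
sample_a = train_modelnet[0]
sample_b = train_mydata[0]
print(len(sample_a), [type(el) for el in sample_a])
print(len(sample_b), [type(el) for el in sample_b])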
| https://stackoverflow.com/questions/60840500/ |
Compute grads of cloned tensor Pytorch | I am having a hard time with gradient computation using PyTorch.
I have the outputs and the hidden states of the last time step T of an RNN.
I would like to clone my hidden states and compute its grad after backpropagation but it doesn't work.
After reading pytorch how to compute grad after clone a tensor, I used retain_grad() without any success.
Here's my code
hidden_copy = hidden.clone()
hidden.retain_grad()
hidden_copy.retain_grad()
outputs_T = outputs[T]
targets_T = targets[T]
loss_T = loss(outputs_T,targets_T)
loss_T.backward()
print(hidden.grad)
print(hidden_copy.grad)
hidden.grad gives an array while hidden_copy.grad gives None.
Why does hidden_copy.grad give None? Is there any way to compute the gradients of a cloned tensor?
| Based on the comments the problem is that hidden_copy is never visited during the backward pass.
When you perform backward, PyTorch follows the computation graph backwards starting at loss_T and works back to all the leaf nodes. It only visits the tensors which were used to compute loss_T. If a tensor isn't part of that backward path, then it won't be visited and its grad member will not be updated. Basically, by creating a copy of the tensor and then not using it to compute loss_T, you create a "dead-end" in the computation graph.
To illustrate, take a look at this diagram representing a simplified view of a computation graph. Each node in the graph is a tensor, and the edges point back to direct descendants.
Notice that if we follow the path back from loss_T to the leaves, we never visit hidden_copy. Note that a leaf is a tensor with no descendants, and in this case input is the only leaf.
This is an extremely simplified computation graph used to demonstrate a point. Of course in reality there are probably many more nodes between input and hidden and between hidden and output_T as well as other leaf tensors since the weights of layers are almost certainly leaves.
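A minimal, self-contained sketch of this dead-end behaviour (the tensor names here are illustrative, not the ones from the question):
import torch

x = torch.ones(3, requires_grad=True)
h = x * 2            # on the path to the loss
h_copy = h.clone()   # never used afterwards -> dead-end in the graph

h.retain_grad()
h_copy.retain_grad()

loss = h.sum()
loss.backward()

print(h.grad)        # tensor([1., 1., 1.])
print(h_copy.grad)   # None, because h_copy did not contribute to loss
If you want hidden_copy.grad to be populated, the loss has to be computed through hidden_copy; otherwise, simply read hidden.grad after backward and clone that instead.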
| https://stackoverflow.com/questions/60853680/ |
Slice 4d tensor into 4D tensor of smaller subtensors (slice in last 2 dimensions only) | The question is analogous to Slice 2d array into smaller 2d arrays except for the fact that I use tensors (torch) & I have a 4D, not 2D, tensor of the shape eg. (3, 1, 32, 32) - in my case, it is 3 images of size 32x32.
I want to split each tensor of form [i, 0, :, :] into smaller subarrays, so the output would have a shape eg. (3, 16, 8, 8), where each [:, j, :, :] is a small square cut from the original image. I cannot find a way to modify the proposed solution for a 4D tensor.
I also tried to just use
subx = x.reshape(3, 16, 8, 8)
but this does not reshape it as I want.
| reshape will not work for this purpose. You could look into skimage's view_as_blocks, where the resulting blocks are non-overlapping views of the input array:
from skimage.util.shape import view_as_blocks

# view_as_blocks expects a NumPy array; each (1, 1, 8, 8) block is one 8x8 patch of a single image
subx = view_as_blocks(x.numpy(), block_shape=(1, 1, 8, 8)).reshape(3, 16, 8, 8)
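If you would rather stay in PyTorch, a hedged sketch using torch.Tensor.unfold gives the same non-overlapping blocks (shapes assume the (3, 1, 32, 32) input from the question):
subx = x.unfold(2, 8, 8).unfold(3, 8, 8)   # (3, 1, 4, 4, 8, 8)
subx = subx.reshape(x.size(0), -1, 8, 8)   # (3, 16, 8, 8)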
| https://stackoverflow.com/questions/60865167/ |
k-fold cross validation using DataLoaders in PyTorch | I have split my training dataset into 80% train and 20% validation data and created DataLoaders as shown below. However, I do not want to limit my model's training. So I thought of splitting my data into K (maybe 5) folds and performing cross-validation. However, I do not know how to combine the resulting splits into my DataLoaders after splitting them.
train_size = int(0.8 * len(full_dataset))
validation_size = len(full_dataset) - train_size
train_dataset, validation_dataset = random_split(full_dataset, [train_size, validation_size])
full_loader = DataLoader(full_dataset, batch_size=4,sampler = sampler_(full_dataset), pin_memory=True)
train_loader = DataLoader(train_dataset, batch_size=4, sampler = sampler_(train_dataset))
val_loader = DataLoader(validation_dataset, batch_size=1, sampler = sampler_(validation_dataset))
| I just wrote a cross validation function that works with a DataLoader and Dataset.
Here is my code, hope this is helpful.
import pandas as pd
import torch

# define a cross validation function
def crossvalid(model=None,criterion=None,optimizer=None,dataset=None,k_fold=5):
train_score = pd.Series()
val_score = pd.Series()
total_size = len(dataset)
fraction = 1/k_fold
seg = int(total_size * fraction)
# tr:train,val:valid; r:right,l:left; eg: trrr: right index of right side train subset
# index: [trll,trlr],[vall,valr],[trrl,trrr]
for i in range(k_fold):
trll = 0
trlr = i * seg
vall = trlr
valr = i * seg + seg
trrl = valr
trrr = total_size
# msg
# print("train indices: [%d,%d),[%d,%d), test indices: [%d,%d)"
# % (trll,trlr,trrl,trrr,vall,valr))
train_left_indices = list(range(trll,trlr))
train_right_indices = list(range(trrl,trrr))
train_indices = train_left_indices + train_right_indices
val_indices = list(range(vall,valr))
train_set = torch.utils.data.dataset.Subset(dataset,train_indices)
val_set = torch.utils.data.dataset.Subset(dataset,val_indices)
# print(len(train_set),len(val_set))
# print()
train_loader = torch.utils.data.DataLoader(train_set, batch_size=50,
shuffle=True, num_workers=4)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=50,
shuffle=True, num_workers=4)
# train() and valid() are the author's own training/evaluation helpers
train_acc = train(model,criterion,optimizer,train_loader,epoch=1)
train_score.at[i] = train_acc
val_acc = valid(model,criterion,optimizer,val_loader)
val_score.at[i] = val_acc
return train_score,val_score
train_score,val_score = crossvalid(res_model,criterion,optimizer,dataset=tiny_dataset)
In order to give an intuition of correctness for what we are doing, see the output below:
train indices: [0,0),[3600,18000), test indices: [0,3600)
14400 3600
train indices: [0,3600),[7200,18000), test indices: [3600,7200)
14400 3600
train indices: [0,7200),[10800,18000), test indices: [7200,10800)
14400 3600
train indices: [0,10800),[14400,18000), test indices: [10800,14400)
14400 3600
train indices: [0,14400),[18000,18000), test indices: [14400,18000)
14400 3600
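An alternative hedged sketch (not part of the original answer) that lets scikit-learn generate the fold indices and wraps them in Subset; full_dataset and the batch sizes follow the question and are otherwise assumptions:
import numpy as np
from sklearn.model_selection import KFold
from torch.utils.data import DataLoader, Subset

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(np.arange(len(full_dataset)))):
    train_loader = DataLoader(Subset(full_dataset, train_idx.tolist()), batch_size=4, shuffle=True)
    val_loader = DataLoader(Subset(full_dataset, val_idx.tolist()), batch_size=1)
    # train a freshly initialized model on train_loader, then evaluate it on val_loader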
| https://stackoverflow.com/questions/60883696/ |
pytorch .cuda() can't get the tensor to cuda | I am trying to get my data onto the GPU, but it doesn't work;
in my train.py
if __name__ == '__main__':
ten = torch.FloatTensor(2)
ten = ten.cuda()
print(ten)
args = config()
train_net(args, args.train_net, loss_config=net_loss_config[args.train_net])
when it runs, it prints this
tensor([0., 0.])
the tensor is not on cuda
but in test.py
import torch
ten=torch.FloatTensor(2)
ten=ten.cuda()
print(ten)
it prints this
tensor([1.4013e-45, 0.0000e+00], device='cuda:0')
now the tensor is on cuda
| The error means that the ten variable in your model is of type torch.FloatTensor (CPU), while the input you provide to the model is of type torch.cuda.FloatTensor (GPU).
The most likely scenario is that you have nn.Parameter or other modules such as nn.Conv2d defined in the __init__() method of your model, and additional weights or layers defined in the forward() method of your model.
In this case, the layers defined in the forward() method are not modules of the model, and they won’t be mapped to GPU when you call cuda().
You also need to explicitly add the parameters to your optimizer if you want them to be updated with gradient descent.
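A hedged, minimal sketch of that difference (the names are illustrative and it needs a CUDA device to run):
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(2))    # registered -> moved by .cuda()

    def forward(self, x):
        extra = torch.randn(2)                   # created in forward -> stays on CPU
        return x * self.w + extra.to(x.device)   # move it explicitly instead

model = Model().cuda()
print(next(model.parameters()).device)             # cuda:0
print(model(torch.ones(2, device="cuda")).device)  # cuda:0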
| https://stackoverflow.com/questions/60899711/ |
LSTM in PyTorch Classifying Names | I am trying the example presented in https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html but I am using an LSTM model instead of an RNN. The dataset is composed of different names (of different lengths) and their corresponding languages (18 languages in total), and the objective is to train a model that, given a certain name, outputs the language it belongs to.
My problems right now are:
How to deal with variable-size names, e.g. Hector and Kim, in the LSTM
A whole name (sequence of characters) is processed every time in the LSTM, so the output of the softmax function has shape (#characters of name, #target classes), but I would like to obtain just (1, #target classes) in order to decide which class each name corresponds to. I have tried to just take the last row, but the results are very bad.
class LSTM(nn.Module):
def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
super(LSTM, self).__init__()
self.hidden_dim = hidden_dim
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
# The LSTM takes word embeddings as inputs, and outputs hidden states
# with dimensionality hidden_dim.
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
# The linear layer that maps from hidden state space to tag space
self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
self.softmax = nn.LogSoftmax(dim = 1)
def forward(self, word):
embeds = self.word_embeddings(word)
lstm_out, _ = self.lstm(embeds.view(len(word), 1, -1))
tag_space = self.hidden2tag(lstm_out.view(len(word), -1))
tag_scores = self.softmax(tag_space)
return tag_scores
def initHidden(self):
return Variable(torch.zeros(1, self.hidden_dim))
lstm = LSTM(n_embedding_dim,n_hidden,n_characters,n_categories)
optimizer = torch.optim.SGD(lstm.parameters(), lr=learning_rate)
criterion = nn.NLLLoss()
def train(category_tensor, line_tensor):
# i.e. line_tensor = tensor([37, 4, 14, 13, 19, 0, 17, 0, 10, 8, 18]) and category_tensor = tensor([11])
optimizer.zero_grad()
output = lstm(line_tensor)
loss = criterion(output[-1:], category_tensor) # VERY BAD
loss.backward()
optimizer.step()
return output, loss.data.item()
Where line_tensor is of variable size (depending on the length of each name) and is a mapping between characters and their indices in the dictionary.
| Let's dig into the solution step by step.
Frame the problem
Given your problem statement, you will have to use the LSTM for classification rather than its typical use of tagging. The LSTM is unrolled for a certain number of timesteps, and this is the reason why the input and output dimensions of a recurrent model are
Input: batch size X time steps X input size
Output: batch size X time steps X hidden size
Now since you want to use it for classification, you have two options:
Put a dense layer over the output of all the timesteps/unrollings [My example below uses this]
Ignore all the timestep outputs except the last, and put a dense layer over the last timestep [a short sketch of this option follows below]
So the input to our LSTM model is a name fed in as one character per LSTM timestep, and the output is the class corresponding to its language.
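A hedged, runnable sketch of option 2 (using only the last timestep); the layer sizes, batch size, and number of classes are arbitrary assumptions for illustration:
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
linear = nn.Linear(32, 3)                # e.g. 3 target classes

x = torch.randn(4, 10, 1)                # (batch, time steps, features)
lstm_out, _ = lstm(x)                    # (batch, time steps, hidden)
logits = linear(lstm_out[:, -1])         # last timestep only -> (batch, classes)
print(logits.shape)                      # torch.Size([4, 3])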
How to handle variable length inputs/names
We have two options again here.
Batch same length names together. This is called bucketing
Fix max length based on the average size of names you have. Pad the smaller names and chop off the longer names [My example below uses max length of 10]
Do we need an Embedding layer?
No. Embedding layers are typically used to learn good vector representations of words. But in the case of a character model, the input is a character, not a word, so adding an embedding layer does not help. Characters can be encoded directly as numbers, and an embedding layer does very little to capture relationships between different characters. You can still use an embedding layer, but I strongly believe it will not help.
Toy character LSTM model code
import numpy as np
import torch
import torch.nn as nn
# Model architecture
class Recurrent_Model(nn.Module):
def __init__(self, output_size, time_steps=10):
super(Recurrent_Model, self).__init__()
self.time_steps = time_steps
self.lstm = nn.LSTM(1, 32, bidirectional=True, num_layers=2, batch_first=True)  # batch_first: input is (batch, time, features)
self.linear = nn.Linear(32*2*time_steps, output_size)
def forward(self, x):
lstm_out, _ = self.lstm(x)
return self.linear(lstm_out.reshape(-1, 32*2*self.time_steps))
# Sample input and output
names = ['apple', 'dog', 'donkey', "elephant", "hippopotamus"]
lang = [0,1,2,1,0]
def pad_sequence(names, max_len=10):
x = np.zeros((len(names), max_len))
for i, name in enumerate(names):
for j, c in enumerate(name):
if j >= max_len:
break
x[i,j] = ord(c)
return torch.FloatTensor(x)
x = pad_sequence(names)
x = torch.unsqueeze(x, dim=2)
y = torch.LongTensor(lang)
model = Recurrent_Model(3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), 0.01)
for epoch in range(500):
model.train()
output = model(x)
loss = criterion(output, y)
print (f"Train Loss: {loss.item()}")
optimizer.zero_grad()
loss.backward()
optimizer.step()
Note
All the tensors are loaded into memory, so if you have a huge dataset you will have to use a Dataset and DataLoader to avoid an OOM error.
You will have to split the data into train and test sets and validate on the test dataset (the standard model building stuff)
You will have to normalize the input tensors before passing them to the model (again the standard model building stuff)
Finally
So how do you make sure your model architecture does not have bugs and is actually learning? As Andrej Karpathy says, overfit the model on a small dataset; if it overfits, we are fine.
| https://stackoverflow.com/questions/60900346/ |
How to prevent inf while working with exponential | I'm trying to create a function in a network with trainable parameters. In my function I have an exponential that for large tensor values goes to infinity. What would the best way to avoid this be?
The function is as follows:
step1 = Pss-(k*Pvv)
step2 = step1*s
step3 = torch.exp(step2)
step4 = torch.log10(1+step3)
step5 = step4/s
#or equivalently
# train_curve = torch.log(1+torch.exp((Pss-k*Pvv)*s))/s
If it makes it easier to understand, the basic function is log10(1+e^(x-const)*10)/10. The exponential inside the log gets too big and goes to inf.
I think I might have to normalize my tensor x, and this would mean normalizing the constants and the rest of the function also. Would someone have any thoughts on the best way to go about this?
Thanks so much.
| One solution is to just use a more stable computation. Notice that log(1 + exp(x)) is approximately equal to x when x is large enough. Intuitively this can be observed by noting that, for example, exp(50) is approximately 5.18e+21 for which adding 1 will have no effect when using 32-bit floating point arithmetic like PyTorch does. Further verification using an arbitrary precision calculator shows that the error in this approximation at 50 is far outside the maximum 32-bit floating point precision (which is about 7 decimal digits).
Using this information we can implement a simple piecewise function in PyTorch, for which we use log1p(exp(x)) for values less than 50 and x for values greater than or equal to 50. Also note that this function is autograd compatible.
import math
import torch

def log1pexp(x):
# more stable version of log(1 + exp(x))
return torch.where(x < 50, torch.log1p(torch.exp(x)), x)
This gets us most of the way to a solution, since you actually want to evaluate torch.log10(1+torch.exp((Pss-k*Pvv)*s))/s
Now we can use our new log1pexp function to compute this expression without worrying about infinities
(log1pexp((Pss - k*Pvv)*s) / math.log(10)) / s
and mind the conversion from natural log to log base-10 by dividing by log(10).
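A quick hedged check of the stable formula against the naive one, reusing log1pexp from above (printed values are approximate):
x = torch.tensor([1.0, 100.0])
naive = torch.log10(1 + torch.exp(x))   # tensor([0.5703, inf]) -- exp(100) overflows in float32
stable = log1pexp(x) / math.log(10)     # tensor([0.5703, 43.4294])
print(naive)
print(stable)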
| https://stackoverflow.com/questions/60903821/ |