st31468
|
Hi,
How can simultaneous training of NNs that generate features for a final NN be implemented with PyTorch? It would look something like this in Keras:
import tensorflow as tf
import numpy as np
class Regressor(tf.keras.layers.Layer):
    def __init__(self, dims=[32, 8]):
        super(Regressor, self).__init__()
        self.dims = dims
        for i, d in enumerate(self.dims):
            setattr(self, f'dense_{i}', tf.keras.layers.Dense(d))
        setattr(self, f'dense_{i+1}', tf.keras.layers.Dense(1))

    def call(self, inputs):
        x = inputs
        for i, _ in enumerate(self.dims):
            x = getattr(self, f'dense_{i}')(x)
            x = tf.nn.relu(x)
        x = getattr(self, f'dense_{i+1}')(x)
        x = tf.nn.sigmoid(x)
        return x

class FeatureRegressor(Regressor):
    def __init__(self, dims=[32, 8], latent_idx=1):
        super(FeatureRegressor, self).__init__(dims)
        self.latent_idx = latent_idx

    def call(self, inputs):
        x = inputs
        for i, _ in enumerate(self.dims):
            x = getattr(self, f'dense_{i}')(x)
            if i == self.latent_idx:
                latent = x
            x = tf.nn.relu(x)
        return latent, getattr(self, f'dense_{i+1}')(x)

class Model(tf.keras.Model):
    def __init__(self,
                 input_dims=10,
                 feature_regressor_dims=[32, 8],
                 feature_latent_idx=1,
                 target_regressor_dims=[32, 8]):
        super(Model, self).__init__()
        self.input_dims = input_dims
        self.feature_regressor_dims = feature_regressor_dims
        self.target_regressor_dims = target_regressor_dims
        for i in range(input_dims):
            setattr(self, f'feature_regressor_{i}', FeatureRegressor(feature_regressor_dims, feature_latent_idx))
        self.target_regressor = Regressor(target_regressor_dims)

    def call(self, inputs):
        # Perform feature regressor inference
        features_latens = []
        features_preds = []
        for f in range(self.input_dims):
            # Prepare input without target feature
            mask = np.array([d != f for d in range(self.input_dims)])
            input_feature = tf.boolean_mask(inputs, mask, axis=1)
            # Regress target feature
            feature_latent, feature_pred = getattr(self, f'feature_regressor_{f}')(input_feature)
            features_latens.append(feature_latent)
            features_preds.append(feature_pred)
        # Perform target regressor inference
        features_latens = tf.concat(features_latens, axis=-1)
        input_target = tf.concat([inputs, features_latens], axis=-1)
        target_pred = self.target_regressor(input_target)
        # Concat predictions
        output = tf.concat(features_preds + [target_pred], axis=-1)
        return output
Thanks!
|
st31469
|
I’m not sure what simultaneous training means here, as it seems like the overall model is still feedforward.
You can likely port most of this implementation over by simply pattern matching (e.g., tf.keras.layers.Layer → torch.nn.Module, def call(self, inputs) → def forward(self, x)).
torchvision has many good reference model implementations that are good examples of this (e.g., ResNet: torchvision.models.resnet — Torchvision master documentation 1).
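For illustration, a minimal sketch of what such a pattern-matched port of the Regressor above could look like (the in_features argument and the layer sizes are assumptions, not part of the original code):
import torch
import torch.nn as nn

class Regressor(nn.Module):
    # Rough PyTorch counterpart of the Keras Regressor above.
    def __init__(self, in_features, dims=[32, 8]):
        super().__init__()
        layers = []
        prev = in_features
        for d in dims:
            layers += [nn.Linear(prev, d), nn.ReLU()]
            prev = d
        layers += [nn.Linear(prev, 1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = Regressor(in_features=10)
out = model(torch.randn(4, 10))   # -> shape (4, 1)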
|
st31470
|
Thanks, @eqy! I didn’t know I could create and initialize other NNs in the forward method.
Regarding “simultaneous”, I didn’t use the best word here. The idea is to create a model for each feature and predict it using the other features, then use the last intermediate layer of each NN as input features to the final model that has to predict the real target. So the final model would get the initial features plus features engineered by the other NNs.
The idea is from here https://towardsdatascience.com/automated-feature-engineering-using-neural-networks-5310d6d4280a
|
st31471
|
class UNET(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv1 = self.contract_block(in_channels, 32, 7, 3)
        self.conv2 = self.contract_block(32, 64, 3, 1)
        self.conv3 = self.contract_block(64, 128, 3, 1)
        self.upconv3 = self.expand_block(128, 64, 3, 1)
        self.upconv2 = self.expand_block(64*2, 32, 3, 1)
        self.upconv1 = self.expand_block(32*2, out_channels, 3, 1)

    def __call__(self, x):
        # downsampling part
        conv1 = self.conv1(x)
        conv2 = self.conv2(conv1)
        conv3 = self.conv3(conv2)
        upconv3 = self.upconv3(conv3)
        upconv2 = self.upconv2(torch.cat([upconv3, conv2], 1))
        upconv1 = self.upconv1(torch.cat([upconv2, conv1], 1))
        return upconv2

    def contract_block(self, in_channels, out_channels, kernel_size, padding):
        contract = nn.Sequential(
            torch.nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding),
            torch.nn.BatchNorm2d(out_channels),
            torch.nn.ReLU(),
            torch.nn.Conv2d(out_channels, out_channels, kernel_size=kernel_size, stride=1, padding=padding),
            torch.nn.BatchNorm2d(out_channels),
            torch.nn.ReLU(),
            torch.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        return contract

    def expand_block(self, in_channels, out_channels, kernel_size, padding):
        expand = nn.Sequential(
            torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=padding),
            torch.nn.BatchNorm2d(out_channels),
            torch.nn.ReLU(),
            torch.nn.Conv2d(out_channels, out_channels, kernel_size, stride=1, padding=padding),
            torch.nn.BatchNorm2d(out_channels),
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(out_channels, out_channels, kernel_size=3, stride=2, padding=1, output_padding=1)
        )
        return expand
I get this error:
Sizes of tensors must match except in dimension 1. Got 4 and 3 in dimension 2 (The offending index is 1)
help me
|
st31472
|
Since each epoch runs for about 30 minutes and I may run training at night, I cannot wait for it to finish. Can I start it, then close the computer and see the result in the morning?
|
st31473
|
I don’t think it is guaranteed to be connected to that same session.
In such a case, I would checkpoint the state every couple of epochs and just leave the machine on. If you get disconnected, at least you have the checkpoint and you could continue training from there.
Also, as far as I know, you need your machine to be active. From time to time Colab asks the user to interact with the session. Though there are some JavaScript snippets (e.g. here 2 and here 1) to handle that interaction, the computer would still need to be on.
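For illustration, a minimal checkpointing sketch (the model, the optimizer, and the checkpoint path below are placeholders; the path would typically point at the mounted Drive):
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                                   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    # ... training step ...
    if epoch % 2 == 0:                                     # every couple of epochs
        torch.save({'epoch': epoch,
                    'model_state_dict': model.state_dict(),
                    'optimizer_state_dict': optimizer.state_dict()},
                   'checkpoint.pt')                        # e.g. a path on the mounted Drive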
|
st31474
|
Having the drive mounted does not change the requirements for interactivity from Colab. However, one of the biggest advantages of using the Drive is to save data in the mounted paths, so that you don’t lose that data when the Virtual Machine of the current session gets reset.
|
st31475
|
Intro: I have a dataset where instances are in the form of time series, but I’m generally interested in solving instance-wise anomaly detection problems (is an instance an anomaly or not) with different types of autoencoders, such as a plain autoencoder, GRU-based and LSTM-based autoencoders, etc.
Question 1
Which is the best/recommended cost function for autoencoders on the anomaly detection problem, and why?
Binary Cross Entropy Loss (BCELoss)
From the documentation the target y has to be normalized into [0…1], which is usually done with a Sigmoid on the last layer of the decoder, but I guess I also have to normalize the input into [0…1] before training. That means I have to save the min and max calculated with MinMaxScaler from sklearn and use them in prediction before applying the autoencoder. Is there a more elegant way to do the same normalization with PyTorch and save it together with the PyTorch model?
Mean Squared Error Loss (MSELoss)
Here I believe that I need neither normalization nor a Sigmoid at the end.
Or… is there some other, more appropriate cost function for this problem, related to time series or not?
Question 2
How do I implement a predict_proba method for this type of autoencoder?
I understand that I need to set a threshold that will be used to predict 1 (outlier) if the difference between the input and the predicted values is greater than the threshold, and 0 (inlier) otherwise, but I’m not sure how to implement this efficiently and how to calculate the probabilities.
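For what it’s worth, a minimal sketch of one common approach to Question 2: compute a per-sample reconstruction error and threshold it (the autoencoder, the threshold choice, and the score-to-probability mapping below are assumptions, not the only option):
import torch
import torch.nn as nn

autoencoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))  # placeholder model

def anomaly_scores(model, x):
    with torch.no_grad():
        recon = model(x)
        return ((recon - x) ** 2).flatten(1).mean(dim=1)   # per-sample reconstruction error

x = torch.randn(32, 20)
scores = anomaly_scores(autoencoder, x)
threshold = scores.quantile(0.95)              # e.g. chosen on a validation set
pred = (scores > threshold).long()             # 1 = outlier, 0 = inlier
# a rough "probability" could be obtained by squashing the score, e.g.:
proba = torch.sigmoid(scores - threshold)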
|
st31476
|
Hello,
I am trying to create an end-to-end pipeline of models that can do OCR in a “standard” way: a first model receives an image, detects lines, and outputs line coordinates; a second model takes images + lines, crops the lines out, and does the text recognition. My question is how to build the data pipeline for the second model so as to keep latency as low as possible and GPU load high, while being able to accept new files constantly.
As it is rather CPU-heavy (things like non-max suppression and image rotation do take time), I need multiprocessing. I have tried the following pseudocode setup for __getitem__:
if there is a crop in the buffer, return it
otherwise, look for images to crop from;
if you find one (using file locks to avoid interactions with other workers), crop all the lines in that image and return the first, adding the rest to the buffer
otherwise sleep 10ms and check again
But this creates two issues for me:
The main one is that if I have, say, 4 workers and one of them doesn’t have any file left to process, the dataloader will dutifully wait for it to end without asking data from the other workers. So at any time there will be three unfinished files left until a new one arrives. I could partially fix this by adding crops to a “common” buffer across all workers by saving crops on disk, but (1) passing through the disk is very suboptimal and (2) it would only reduce the problem from having “3 files waiting” to having “3 lines waiting”, but that still means at least one file left unprocessed…
The second problem, similar but more easily solvable with some hacking, comes in with batching. Even if the above is solved, I would at some point have an incomplete batch of lines and no further file to process. In that case it should be more efficient to pad to a full batch and disregard the final images, but I guess that to do that I have to disable batching and do it by hand inside the inference code (i.e. loop until I either get N images if at least one worker is working, or pad if all workers are waiting or after a small timeout), or is there a cleaner way?
So, long story short: is there a command to make a dataset worker tell the data loader “skip me, go to the next worker”? Like a specific exception that could be raised and somehow caught inside the data loader?
Thanks in advance for any hint!
|
st31477
|
I’m trying to install PyTorch through conda but I’m getting “conflict” errors:
I first activated the conda virtual environment:
(base) raphy@pc:~$ source activate pytorch_env
Then, tried to install the packages:
(pytorch_env) raphy@pc:~$ conda install -n pytorch_env pytorch torchvision torchaudio cpuonly -c pytorch
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package pytorch conflicts for:
torchvision -> pytorch[version='1.1.*|1.2.0+cu92|1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|>=1.1.0|>=1.0.0|>=0.4|>=0.3|1.7.1.*|1.3.1.*|1.2.0.*']
torchaudio -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|>=1.1.0']
pytorch
Package six conflicts for:
pytorch -> mkl-service[version='>=2,<3.0a0'] -> six
torchvision -> six
Package _libgcc_mutex conflicts for:
torchvision -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
pytorch -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
python=3.9 -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
(pytorch_env) raphy@pc:~$
If I install it with pip3 it seems to work fine:
(pytorch_env) raphy@pc:~$ pip3 install torch==1.8.1+cpu torchvision==0.9.1+cpu torchaudio==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.8.1+cpu
Downloading https://download.pytorch.org/whl/cpu/torch-1.8.1%2Bcpu-cp39-cp39-linux_x86_64.whl (169.1 MB)
|████████████████████████████████| 169.1 MB 126 kB/s
Collecting torchvision==0.9.1+cpu
Downloading https://download.pytorch.org/whl/cpu/torchvision-0.9.1%2Bcpu-cp39-cp39-linux_x86_64.whl (13.3 MB)
|████████████████████████████████| 13.3 MB 37.6 MB/s
Collecting torchaudio==0.8.1
Downloading torchaudio-0.8.1-cp39-cp39-manylinux1_x86_64.whl (1.9 MB)
|████████████████████████████████| 1.9 MB 14.3 MB/s
Requirement already satisfied: numpy in ./anaconda3/envs/pytorch_env/lib/python3.9/site-packages (from torch==1.8.1+cpu) (1.20.1)
Requirement already satisfied: typing-extensions in ./anaconda3/envs/pytorch_env/lib/python3.9/site-packages (from torch==1.8.1+cpu) (3.7.4.3)
Requirement already satisfied: pillow>=4.1.1 in ./anaconda3/envs/pytorch_env/lib/python3.9/site-packages (from torchvision==0.9.1+cpu) (8.2.0)
Installing collected packages: torch, torchvision, torchaudio
Attempting uninstall: torch
Found existing installation: torch 1.8.1
Uninstalling torch-1.8.1:
Successfully uninstalled torch-1.8.1
Successfully installed torch-1.8.1+cpu torchaudio-0.8.1 torchvision-0.9.1+cpu
(pytorch_env) raphy@pc:~$ cd pythonMatters/
(pytorch_env) raphy@pc:~/pythonMatters$ ls -lah
total 12K
drwxrwxr-x 2 raphy raphy 4,0K mag 31 16:54 .
drwxr-xr-x 39 raphy raphy 4,0K mag 31 17:22 ..
-rw-rw-r-- 1 raphy raphy 44 mag 31 16:54 pytorch_verification.py
(pytorch_env) raphy@pc:~/pythonMatters$ nano pytorch_verification.py
(pytorch_env) raphy@pc:~/pythonMatters$ python3 pytorch_verification.py
tensor([[0.3230, 0.9078, 0.4617],
[0.8623, 0.9219, 0.2986],
[0.1736, 0.4657, 0.5214],
[0.8163, 0.1591, 0.7434],
[0.1205, 0.6854, 0.3539]])
(pytorch_env) raphy@pc:~/pythonMatters$
OS: Ubuntu 20.04 Desktop
python: Python 3.9.4
PyTorch Build: 1.8.1
conda 4.10.1
|
st31478
|
I am performing cosine similarity (nn.CosineSimilarity()) between two 2D tensors (of the same shape, of course). The resultant output is a 1D tensor which contains n single tensors. These single tensors are the pairwise cosine similarities.
Now, my question is what can I do with these pairwise cosine similarities. For training, I am passing them directly to my custom loss function and things seem to work fine.
For Prediction
I have M such outputs, out of which I have to choose the one with the max score/output. But how can I select a max from arrays?
My approach so far:
I am flattening the 2D tensors (which are outputs from a GRU) and then calculating the cosine similarity, thus getting a single tensor. This approach seems to work but the accuracy of the model is not as expected.
Hence, I am looking for other alternatives.
Another thing I had tried was to just select the array with the maximum single tensor among all.
What can I do with an array of pairwise similarities?
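For reference, a minimal sketch of what nn.CosineSimilarity produces here, and of one possible (assumed, not the only) way to collapse the pairwise similarities into a single score per candidate and pick the maximum:
import torch
import torch.nn as nn

cos = nn.CosineSimilarity(dim=1)
query = torch.randn(5, 128)                             # e.g. 5 rows of GRU output
candidates = [torch.randn(5, 128) for _ in range(3)]    # M = 3 candidate outputs

# one scalar score per candidate: mean of the row-wise (pairwise) similarities
scores = torch.stack([cos(query, c).mean() for c in candidates])
best = torch.argmax(scores)                             # index of the best-matching candidate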
|
st31479
|
[Screenshot: Screen Shot 2021-05-31 at 9.16.57 AM]
I found that the input of the LSTM in the figure does not have a hidden state value. Does this setting make sense? How does the LSTM then obtain the value of the hidden state from the previous state during training?
|
st31480
|
As can be seen in the source code, the initial hidden state and the initial cell state are all initialized to 0 in this case.
github.com
pytorch/pytorch/blob/master/torch/nn/modules/rnn.py#L662-L671
if hx is None:
    num_directions = 2 if self.bidirectional else 1
    real_hidden_size = self.proj_size if self.proj_size > 0 else self.hidden_size
    h_zeros = torch.zeros(self.num_layers * num_directions,
                          max_batch_size, real_hidden_size,
                          dtype=input.dtype, device=input.device)
    c_zeros = torch.zeros(self.num_layers * num_directions,
                          max_batch_size, self.hidden_size,
                          dtype=input.dtype, device=input.device)
    hx = (h_zeros, c_zeros)
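For illustration, a minimal sketch showing that omitting the hidden state is equivalent to passing zero-initialized (h_0, c_0) explicitly (the sizes below are illustrative):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=1, batch_first=True)
x = torch.randn(4, 7, 10)                # (batch, seq_len, input_size)

out, (h_n, c_n) = lstm(x)                # hx=None -> h_0 and c_0 default to zeros
h_0 = torch.zeros(1, 4, 20)              # (num_layers * num_directions, batch, hidden_size)
c_0 = torch.zeros(1, 4, 20)
out2, _ = lstm(x, (h_0, c_0))            # same result as the call above
print(torch.allclose(out, out2))         # True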
|
st31481
|
Could anyone explain how this Gumbel-Max 1 pytorch code implementation works ?
[Screenshot: Screenshot from 2021-05-15 19-14-49]
|
st31482
|
Hello,
the standard formula for the Gumbel-Max is
The problem is that argmax is not differentiable.
Your Medium source derives this correlation:
So I would usually use softargmax as a nice approximation of argmax, but the implementation you’ve found does something quite similar.
It basically calculates this formula:
probs = exp((gumbel + log(pi)) / tau) / sum_j exp((gumbel_j + log(pi_j)) / tau)
tau is a scaling/temperature parameter: if you decrease it, the result will approximate the argmax, and if you increase it, the probs will become more uniform.
Last but not least, the max value is taken so that a one-hot encoding can be made.
(I still find softargmax nicer, as you would matmul the probs with a range tensor; you would choose a relatively small tau (with softargmax it’s called beta, I guess, and would be chosen high, while not being used as a denominator).
Please let me know if there is a performance or other reason why one should do it the way it is done in this implementation.)
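For illustration, a minimal sketch of the formula above (this mirrors what torch.nn.functional.gumbel_softmax provides; the straight-through one-hot step corresponds to the “max value is taken” part, and the logits stand in for log(pi)):
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, hard=False):
    # gumbel noise g = -log(-log(U)), U ~ Uniform(0, 1)
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    probs = F.softmax((logits + gumbel) / tau, dim=-1)
    if hard:
        # straight-through: one-hot in the forward pass, soft gradient in backward
        index = probs.argmax(dim=-1, keepdim=True)
        one_hot = torch.zeros_like(probs).scatter_(-1, index, 1.0)
        probs = (one_hot - probs).detach() + probs
    return probs

sample = gumbel_softmax_sample(torch.randn(2, 5), tau=0.5, hard=True)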
|
st31483
|
For the NAS multigraph 1 and equation (7) of GDAS paper 1 , how to do backpropagation across multiple parallel edges between two nodes ?
[Image: NAS multigraph figure]
|
st31484
|
I got this error while trying to evaluate my GAN model:
Code:
fpr, tpr, _ = roc_curve(labels, img_distance)
precision, recall, _ = precision_recall_curve(labels, img_distance)
roc_auc = auc(fpr, tpr)
pr_auc = auc(recall, precision)
Output Error:
C:\Users\nafti\anaconda3\envs\st\lib\site-packages\sklearn\metrics\_ranking.py:943: UndefinedMetricWarning: No negative samples in y_true, false positive value should be meaningless
UndefinedMetricWarning)
How can I solve it?
|
st31485
|
This warning is created by sklearn, as it seems that the target array doesn’t contain any negative samples, and thus the metric calculation would be meaningless.
You could check labels and make sure that it contains positive and negative samples.
|
st31486
|
Could you post the label tensor (or just a part of it containing the positive and negative samples) so that we could reproduce this issue, please?
|
st31487
|
I train the model on healthy patients and am trying to test it on unhealthy patients,
but the labels contain only positive samples!
array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1])
|
st31488
|
This would explain the raised warning, since scikit-learn wouldn’t be able to calculate the metrics.
Is this a new issue, as you previously mentioned that you have both positive and negative labels?
|
st31489
|
My dataset contains both positive and negative samples but they cannot be read; I think there is a problem in the data splitting.
Do you have any idea how I can split the data in that case (train: healthy data; test: healthy and unhealthy data)? I did it manually but it doesn’t work.
|
st31490
|
Hi.
I have a 3-dimensional input tensor with size (1, 128, 100) when the agent selects an action and (batch_size, 128, 100) when the agent trains. The input is a sequence of words that are tokenized; a vector for every token is obtained from a Word2Vec model and the vectors are concatenated into a tensor. So 128 is the number of tokens and 100 is the W2V vector size. In this convolutional network:
class Actor(nn.Module):
    def __init__(self, state_dim, hidden_dim, action_dim):
        super(Actor, self).__init__()
        self.action_layer = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=32, kernel_size=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3),
            nn.Conv2d(32, 64, kernel_size=4),
            nn.ReLU(),
            nn.AdaptiveMaxPool2d(output_size=64),
            nn.Linear(64, action_dim),
            nn.Softmax(dim=-1)).float().to(device)

    def forward(self, state):
        action_probs = self.action_layer(state)
        return action_probs
I got this error:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [32, 1, 2, 2], but got 3-dimensional input of size [1, 128, 100] instead
Also, I am confused about some parameter values. Is in_channels=1 correct here because of the input type? Please guide me on how to fix this error.
Thanks in Advance
|
st31491
|
Solved by ptrblck in post #2
nn.Conv2d layers expect a 4-dimensional input tensor in the shape [batch_size, channels, height, width]. Based on your error and description I guess the channel dimension is missing, so you could add it via x = x.unsqueeze(1) before passing the tensor to the model.
|
st31492
|
nn.Conv2d layers expect a 4-dimensional input tensor in the shape [batch_size, channels, height, width]. Based on your error and description I guess the channel dimension is missing, so you could add it via x = x.unsqueeze(1) before passing the tensor to the model.
|
st31493
|
For a tensor:
x = torch.tensor([
[
[[0.4495, 0.2356],
[0.4069, 0.2361],
[0.4224, 0.2362]],
[[0.4357, 0.6762],
[0.4370, 0.6779],
[0.4406, 0.6663]]
],
[
[[0.5796, 0.4047],
[0.5655, 0.4080],
[0.5431, 0.4035]],
[[0.5338, 0.6255],
[0.5335, 0.6266],
[0.5204, 0.6396]]
]
])
Firstly, I would like to split it into 2 (x.shape[0]) tensors and then concat them. Here, I don’t really have to actually split it as long as I get the correct output, but it makes a lot more sense to me visually to split it and then concat them back together.
For example:
# the shape of the splits are always the same
split1 = torch.tensor([
[[0.4495, 0.2356],
[0.4069, 0.2361],
[0.4224, 0.2362]],
[[0.4357, 0.6762],
[0.4370, 0.6779],
[0.4406, 0.6663]]
])
split2 = torch.tensor([
[[0.5796, 0.4047],
[0.5655, 0.4080],
[0.5431, 0.4035]],
[[0.5338, 0.6255],
[0.5335, 0.6266],
[0.5204, 0.6396]]
])
split1 = torch.cat((split1[0], split1[1]), dim=1)
split2 = torch.cat((split2[0], split2[1]), dim=1)
what_i_want = torch.cat((split1, split2), dim=0).reshape(x.shape[0], split1.shape[0], split1.shape[1])
For the above result, I thought directly reshaping with x.reshape([2, 3, 4]) would work; it resulted in the correct dimensions but an incorrect result.
In general I am:
not sure how to split the tensor into x.shape[0] tensors.
confused about how reshape works. Most of the time I am able to get the dimensions right, but the order of the numbers is always incorrect.
Thank you
|
st31494
|
reshape is an alias for contiguous().view(); these commands:
1) copy data, synchronizing the physical format (i.e. the one for sequential memory reading as indexes increase) with the logical one (which may have been changed, for example, by permute())
2) change strides, i.e. dimension split points, but still maintain the contiguous nested-block format
As you can see, the logical format is not affected, i.e. no reordering is done. In contrast, cat() with dim>0 produces interleaved data, so it is different. A better alternative to cat() is permute().reshape().
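For illustration, a minimal sketch comparing the cat approach with permute().reshape() for a tensor of the same shape as the x in the question (the stand-in values are arbitrary):
import torch

x = torch.arange(24.).reshape(2, 2, 3, 2)              # stand-in with the same [2, 2, 3, 2] shape

split1 = torch.cat((x[0, 0], x[0, 1]), dim=1)           # [3, 4]
split2 = torch.cat((x[1, 0], x[1, 1]), dim=1)           # [3, 4]
via_cat = torch.stack((split1, split2), dim=0)          # [2, 3, 4]

# same result without cat: move the "split" dim next to the last dim, then merge them
via_permute = x.permute(0, 2, 1, 3).reshape(2, 3, 4)
print(torch.equal(via_cat, via_permute))                # True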
|
st31495
|
I’m getting the following error and would like to know how to debug this in general.
terminate called after throwing an instance of ‘c10::Error’
what(): Output 0 of BackwardHookFunctionBackward is a view and is being modified inplace. This view was created inside a custom Function (or because an input was returned as-is) and the autograd logic to handle view+inplace would override the custom backward associated with the custom Function, leading to incorrect gradients. This behavior is forbidden. You can fix this by cloning the output of the custom Function.
|
st31496
|
Hello, I am trying to reduce the size (in MB) of my model either during training or post-training. I have tried post-training quantization for this purpose, which works. I am looking for any other methods that might be available in PyTorch. I have also tried pruning, but that does not reduce the size of the model.
Any advice would be appreciated.
|
st31497
|
I have built a DNN with only one hidden layer, the following are the parameters:
input_size = 100
hidden_size = 20
output_size = 2
def __init__(self):
    self.linear1 = nn.Linear()
    self.linear2 = nn.Linear()

def forward(self, x):
    x1 = F.leaky_relu()
    return F.leaky_relu()

# unimportant code omitted
loss_function = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.02)
Normalized word vectors of size 100 from an authoritative GitHub repository are used as input.
My purpose is to identify whether a word is an event. For example, 'drought' is an event but 'dog' is not.
After training, the 2-dimensional output tensors are almost the same (say, (-0.8, -1.20) and (-0.8, -1.21), (-0.2, -1.01) and (-0.2, -1.02)) even if the activation function and loss function are changed.
Could someone tell me the reason? I tried my best but failed to solve it.
|
st31498
|
Could you check the weight and bias in both layers?
Sometimes, e.g. when the learning rate is too high, the model just learns the “mean prediction”, i.e. the bias is responsible for most of the prediction, while the weights and input became more or less useless.
For example when I was playing with a facial keypoint dataset, some models just predicted the “mean position” of the keypoints, regardless of the input image.
|
st31499
|
Thank you! I set the bias to 0 and the problem is solved!!
unfortunately this method did not work~~~~lol
|
st31500
|
Could you please kindly elaborate more on the bias and “mean prediction” part? I’ve seen this explanation multiple times on the Internet but cannot get it. When the learning rate is too high, my understanding is that the model wouldn’t converge. Why would that result in the bias being responsible for most of the prediction? Thanks!
|
st31501
|
I’m not sure if there is an underlying mathematical explanation for this effect.
In the past I experienced that basically the bias in the last layer took the mean values of the regression task, so regardless of the input, I always got the average of my targets.
Could be an edge case and I don’t have a proper explanation for it.
|
st31502
|
That’s probably what happened to my model too, except I did not check the bias values that carefully, but I do notice the bias becoming dominant in scale with regard to the weights. The outputs are also indeed the mean.
I solved this by 1) normalizing the input (subtracting the mean and dividing by the std) and 2) using a smaller learning rate.
Thanks for the prompt reply!
|
st31503
|
This problem may be due to batch normalization. When you are evaluating your model, you should disable batch normalization. Use model.eval() when you want to evaluate the model (so batch normalization will be disabled) and use model.train() again when you want to train the model.
|
st31504
|
Does this mean that if we do validation during training with model.eval(), we should add model.train() in the main training loop before we use optimizer.step()?
|
st31505
|
You should call model.train() before the forward pass in your training loop.
If you call it before optimizer.step(), the forward pass will have been already executed in eval mode.
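For illustration, a minimal runnable sketch of this mode switching (the model, data, and loaders below are placeholders):
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 2)                         # placeholder model
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.02)
train_loader = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8)
val_loader = DataLoader(TensorDataset(torch.randn(16, 10), torch.randint(0, 2, (16,))), batch_size=8)

for epoch in range(2):
    model.train()                                # training mode before the forward pass
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    model.eval()                                 # eval mode only for validation
    with torch.no_grad():
        for x, y in val_loader:
            val_loss = criterion(model(x), y)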
|
st31506
|
Do you just set bias=False during initialization, or is there another way to set the bias equal to 0 for the linear layer?
|
st31507
|
If you set bias=False during the initialization of the layer, the internal .bias parameter will be set to None and will thus not be available, which would be different from setting the value of the bias to zero.
The latter case can be achieved by manipulating this parameter e.g. via:
with torch.no_grad():
    model.linear_layer.bias.fill_(0.)
|
st31508
|
Hi, I need to raise a matrix to the power of -1/2, but I believe doing so using just **(-1/2) makes the computation element-wise.
For example, if I do
m = torch.tensor([[.5,.5],[.7,.9]])
print(m**(-1/2))
I get
tensor([[1.4142, 1.4142],
[1.1952, 1.0541]])
But what I want is
tensor([[2.6978, -1.1091],
[-1.5527, 1.8105]])
which is what Matlab provides when I run m^(-1/2). Unfortunately torch.matrix_power() seems to only take integers, and torch.pow() also seems to do the computation element-wise.
Thanks!
|
st31509
|
Although it is not the best solution, it will work.
scipy will work without changing the torch tensor to numpy, e.g.:
import scipy.linalg
m = torch.tensor([[.5,.5],[.7,.9]])
print(scipy.linalg.fractional_matrix_power(m, (-1/2)))
array([[ 2.69776664, -1.10907208],
[-1.55270001, 1.81051025]])
|
st31510
|
Hi mmutic!
mmutic:
I need to raise a matrix to the power of -1/2, but I believe doing so using just **(-1/2) makes the computation element-wise.
A standard approach to exponentiating a matrix is to calculate its
eigendecomposition and then exponentiate the eigenvalues.
Here is a pytorch version 0.3.0 script that illustrates this:
import torch
print (torch.__version__)
m = torch.FloatTensor([[.5,.5],[.7,.9]]) # original matrix
# desired result
mres = torch.FloatTensor ([[ 2.69776664, -1.10907208], [-1.55270001, 1.81051025]])
evals, evecs = torch.eig (m, eigenvectors = True) # get eigendecomposition
evals = evals[:, 0] # get real part of (real) eigenvalues
# rebuild original matrix
mchk = torch.matmul (evecs, torch.matmul (torch.diag (evals), torch.inverse (evecs)))
mchk - m # check decomposition
evpow = evals**(-1/2) # raise eigenvalues to fractional power
# build exponentiated matrix from exponentiated eigenvalues
mpow = torch.matmul (evecs, torch.matmul (torch.diag (evpow), torch.inverse (evecs)))
mpow - mres # check result
Here is the output:
>>> import torch
>>> print (torch.__version__)
0.3.0b0+591e73e
>>>
>>> m = torch.FloatTensor([[.5,.5],[.7,.9]]) # original matrix
>>>
>>> # desired result
... mres = torch.FloatTensor ([[ 2.69776664, -1.10907208], [-1.55270001, 1.81051025]])
>>>
>>> evals, evecs = torch.eig (m, eigenvectors = True) # get eigendecomposition
>>> evals = evals[:, 0] # get real part of (real) eigenvalues
>>>
>>> # rebuild original matrix
... mchk = torch.matmul (evecs, torch.matmul (torch.diag (evals), torch.inverse (evecs)))
>>>
>>> mchk - m # check decomposition
1.00000e-07 *
-0.5960 0.0000
-0.5960 1.1921
[torch.FloatTensor of size 2x2]
>>>
>>> evpow = evals**(-1/2) # raise eigenvalues to fractional power
>>>
>>> # build exponentiated matrix from exponentiated eigenvalues
... mpow = torch.matmul (evecs, torch.matmul (torch.diag (evpow), torch.inverse (evecs)))
>>>
>>> mpow - mres # check result
1.00000e-07 *
4.7684 7.1526
-2.3842 -7.1526
[torch.FloatTensor of size 2x2]
You can see that this scheme recovers your Matlab result (actually
Mr. mathematics’s scipy result, because he printed it out with greater
precision).
This will work mathematically for positive semi-definite (square)
matrices (although numerically you will probably want your smallest
eigenvalue to be enough larger than zero that its computation doesn’t
yield a negative value due to numerical error).
mathematics:
scipy will work without changing torch tensor to numpy
However, if you want to use autograd to calculate gradients (for, e.g.,
backpropagation), performing the calculation in scipy won’t work
(unless you write your own .backward() function).
Because the torch.eig() approach works entirely with pytorch tensor
functions, you get autograd / gradients “for free.”
Good luck.
K. Frank
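(Editor’s note: torch.eig has since been deprecated in favor of torch.linalg.eig. A minimal sketch of the same eigendecomposition idea on a recent PyTorch, assuming version 1.9 or later; the printed values are the ones quoted earlier in this thread:)
import torch

m = torch.tensor([[0.5, 0.5], [0.7, 0.9]])
evals, evecs = torch.linalg.eig(m)                               # complex dtype in general
mpow = evecs @ torch.diag(evals ** (-0.5)) @ torch.linalg.inv(evecs)
print(mpow.real)    # tensor([[ 2.6978, -1.1091],
                    #         [-1.5527,  1.8105]])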
|
st31511
|
This is an old one, so sorry if my question is naive, but isn’t this valid for matrix calculation as well?
So, if I run this, I get the same result as with scipy:
X.inverse().sqrt()
I assume this also maintains the gradients?
|
st31512
|
Hi Alfred!
byteSamurai:
X.inverse().sqrt()
This is incorrect. torch.sqrt() 2 computes the square-roots of the
individual elements of the tensor (not the matrix square-root).
(torch.inverse(), however, does compute the matrix inverse,
rather than the reciprocals of the individual elements.)
Best.
K. Frank
|
st31513
|
Hey, I hope everyone is having a great day.
I have the following setup:
state = netA(input)
output1=netB(state)
output2=netC(state)
I can define target values for output1 and output2. How would you approach training netA?
I am trying to combine the gradients on the input of netB and netC (state) as the loss to train netA.
Or should I use them separately?
How can I get the gradient on the input of netB and netC?
Best,
hn
|
st31514
|
You could calculate the losses using output1 and output2, accumulate them, and call loss.backward() to calculate the gradients for all 3 models. If you don’t want to train netB and netC, you could set the .requires_grad attribute of their parameters to False.
In case you want to get the gradients in state, you could call state.retain_grad(), which would then allow you to access its .grad after the backward call.
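A minimal runnable sketch of this setup (the net sizes, losses, and targets below are placeholders; only netA is placed in the optimizer here):
import torch
import torch.nn as nn

netA, netB, netC = nn.Linear(4, 8), nn.Linear(8, 2), nn.Linear(8, 3)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(netA.parameters(), lr=0.01)   # update netA only

x = torch.randn(16, 4)
target1, target2 = torch.randn(16, 2), torch.randn(16, 3)

state = netA(x)
state.retain_grad()                      # keep the gradient of this non-leaf tensor

loss = criterion(netB(state), target1) + criterion(netC(state), target2)
optimizer.zero_grad()
loss.backward()                          # computes grads for all three nets
print(state.grad.shape)                  # gradient of the loss w.r.t. state
optimizer.step()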
|
st31515
|
Thank you ptrblck. I will try your suggestions and get back to you if I had more questions, I am a newbie, and things are slow on my side.
|
st31516
|
I have an input x of dimension 1 x 2 and batch size = 128, … hence the input gets passed in batches as 128 x 2. I have 2 parameters: L_p (dimension 1 x 1) and R_p (dimension 2 x 2). The operation x @ R_p works, but the operation L_p @ x throws an error that the matrix dimensions are not satisfied, since x is passed as 128 x 2. But actually x is 1 x 2, so how do I make it work? Please help!
|
st31517
|
x @ R_p will apply a matrix multiplication with the shapes:
[128, 2] @ [2, 2] = [128, 2]
which are the expected shapes.
However, L_p @ x tries to execute:
[1, 1] @ [128, 2]
which is invalid for a matmul.
I’m not sure what the expected output shape is, but assuming that you would like to broadcast L_p in the batch dimension, you could use:
out = L_p.expand(x.size(0), 1, -1) @ x.unsqueeze(1)
which would create an output of [128, 1, 2].
|
st31518
|
I am trying to add a pos embedding to the BERT transformer embedding, so the dimension of the pos embedding should be 768.
Please suggest.
|
st31519
|
Here is what I use in my projects (I removed the segment embedding).
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
N_MAX_POSITIONS = 512 # maximum input sequence length
def Embedding(num_embeddings, embedding_dim, padding_idx=None):
    m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
    nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
    if padding_idx is not None:
        nn.init.constant_(m.weight[padding_idx], 0)
    return m

def create_sinusoidal_embeddings(n_pos, dim, out):
    position_enc = np.array([
        [pos / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)]
        for pos in range(n_pos)
    ])
    out[:, 0::2] = torch.FloatTensor(np.sin(position_enc[:, 0::2]))
    out[:, 1::2] = torch.FloatTensor(np.cos(position_enc[:, 1::2]))
    out.detach_()
    out.requires_grad = False

class Embeddings(nn.Module):
    """token + position embedding"""
    def __init__(self, n_words, embed_dim, padding_idx=None, sinusoidal_embeddings=True, eps=1e-12, dropout=0.1):
        super().__init__()
        self.token_embeddings = Embedding(n_words, embed_dim, padding_idx=padding_idx)
        self.position_embeddings = Embedding(N_MAX_POSITIONS, embed_dim)
        if sinusoidal_embeddings:
            with torch.no_grad():  # RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
                create_sinusoidal_embeddings(N_MAX_POSITIONS, embed_dim, out=self.position_embeddings.weight)
        self.layer_norm_emb = nn.LayerNorm(embed_dim, eps=eps)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, positions=None):
        """
        Inputs:
            `x` LongTensor(bs, slen), containing word indices
            `positions` LongTensor(bs, slen), containing word positions
        """
        bs, slen = x.size()
        x = self.token_embeddings(x)  # bs x slen x embed_dim
        # Attention Is All You Need, section 3.4, Embeddings and Softmax: in the embedding layers, we multiply those weights by sqrt(d_model)
        # x = x * torch.sqrt(torch.tensor(self.embed_dim, dtype=torch.float32))
        # positions
        if positions is None:
            positions = x.new(slen).long()
            positions = torch.arange(slen, out=positions).unsqueeze(0)  # bs x slen
        else:
            assert positions.size() == (bs, slen)
        x = x + self.position_embeddings(positions).expand_as(x)  # bs x slen x embed_dim
        x = self.layer_norm_emb(x)  # bs x slen x embed_dim
        x = self.dropout(x)  # bs x slen x embed_dim
        return x
Which you can use as follows :
vocab_size = 10
embed_dim = 768
embedding = Embeddings(n_words = vocab_size, embed_dim = embed_dim, padding_idx = 0, sinusoidal_embeddings = True)
bs, slen = 2, 5
torch.manual_seed(0)
x = torch.empty(bs, slen, dtype=torch.long).random_(vocab_size-1)
embed = embedding(x)
#An example with traditional positional encoding
position = torch.arange(start=0, end = slen, step=1).expand_as(x) # tensor([0, 1, ..., slen - 1]) x bs
embed = embedding(x, position)
|
st31520
|
I am wondering if there is an existing function that allows us to add two nodes, sin and cos, in a net like an activation function?
|
st31521
|
Solved by ptrblck in post #2
I’m not sure I understand the use case correctly, but in case you want to apply torch.cos and torch.sin on an activation you could directly do it via:
def forward(self, x): # forward method of your model
...
x = self.layer(x)
...
x = torch.sin(x)
x = torch.cos(x)
...
ret…
|
st31522
|
I’m not sure I understand the use case correctly, but in case you want to apply torch.cos and torch.sin on an activation you could directly do it via:
def forward(self, x):  # forward method of your model
    ...
    x = self.layer(x)
    ...
    x = torch.sin(x)
    x = torch.cos(x)
    ...
    return x
or alternatively you could also create a custom nn.Module and use these operations as an “activation” function.
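For illustration, a minimal sketch of that custom-module alternative (the exact composition of sin and cos, and the layer sizes, are assumptions):
import torch
import torch.nn as nn

class SinCos(nn.Module):
    # wraps torch.sin / torch.cos so they can be used like an activation inside nn.Sequential
    def forward(self, x):
        return torch.cos(torch.sin(x))

model = nn.Sequential(nn.Linear(10, 10), SinCos(), nn.Linear(10, 2))
out = model(torch.randn(4, 10))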
|
st31523
|
Thank you ptrblck, let me explain my question more clearly. I am implementing a GAN, and in order to ensure that the generator output is periodic, I want to add a “layer” as the last layer of the generator that changes x into (sin(x), cos(x)). I want this “layer” to be part of backprop as well. Since the dimension is changed, I use torch.cat to solve the problem:
def forward(self, noise):
    angle = self.gen(noise)
    output = torch.cat((torch.sin(angle), torch.cos(angle)))
    return output
However there are two questions:
Is the torch.cat() function allowed to do backprop? I find the value of grad_fn becomes <CatBackward>.
I checked the output of the generator after adding the output line in the forward function, but didn’t see any change. I must be missing something about the forward function. Could you please give me some guidance (or a link) so I can figure it out?
Thank you very much !!
|
st31524
|
Yes, torch.cat has a valid grad_fn and will not break the backpropagation.
What did you compare the outputs against? Your forward looks good and the added operations should be used. You could add additional print statements to the forward to make sure it’s really called and to check intermediate outputs.
|
st31525
|
Hi, I was working on the VOC dataset. But when I tried to download the dataset I got the following error:
cannot import name 'VOCSegmentation' from 'torchvision.datasets'
I found that the version of torchvision that I use is 0.2.0, and the VOC dataset is available in torchvision from version 0.6.1.
Then I removed torchvision from site-packages and installed it using the following command:
conda install torchvision==0.6.1 cudatoolkit=10.2 -c pytorch
But when I install torchvision version 0.6.1, PyTorch is also automatically reinstalled with version 1.5.1, whereas before reinstalling torchvision I was working with an updated version of PyTorch.
Is there any way that I can install the newer version of torchvision without having to downgrade to an older version of PyTorch?
|
st31526
|
Yes, you could install the latest stable torchvision (and PyTorch) release by removing the version specification:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
|
st31527
|
Hello everyone,
what is a valid way to construct a hierarchical NN architecture out of subcomponents?
The goal is to accomplish something like this:
class FeedForwardLayer( nn.Module ):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear( ... )
        self.bn = nn.BatchNorm1d( ... )
        self.act = nn.ReLU( )
        self.do = nn.dropout( ... )

    def forward( self , x ):
        return self.do( self.act( self.bn( self.lin( x ) ) ) )

class MultiLayerNetwork( nn.Module ):
    def __init__( self ):
        super().__init__()
        self.layer_1 = FeedForwardLayer()
        ...
        self.layer_n = FeedForwardLayer()

    def forward(self, x ):
        hidden_1 = self.layer_1( x )
        hidden_2 = self.layer_2( hidden_1 )
        ...
        return hidden_n
For the purpose of simplicity there are some pseudo code elements,
no need to correct them.
My question is regarding the structure.
Thanks in advance
|
st31528
|
Solved by ptrblck in post #2
Your code looks alright, so I’m unsure if you are seeing any issues with it or just would like to have some feedback?
|
st31529
|
Your code looks alright, so I’m unsure if you are seeing any issues with it or just would like to have some feedback?
|
st31530
|
I use Netron to visualize my model, but I don’t know how to compute the W (weights) and B (bias). I used the formula, but it doesn’t work. As you can see, how do I compute the second Conv structure from the first Conv structure?
[Image: Netron graph screenshot]
|
st31531
|
The in_channels of the second convolution would be defined by the out_channels of the previous one, which defines the size of dim1 (32 in your example). Besides that you can pick other parameters as you wish.
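For illustration, a minimal sketch of stacked convolutions where the second layer’s in_channels matches the first layer’s out_channels (the channel numbers are illustrative):
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),
    nn.ReLU(),
    # in_channels of the second conv must equal out_channels of the first one
    nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1),
)
out = block(torch.randn(1, 3, 28, 28))   # -> [1, 64, 28, 28]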
|
st31532
|
I’m using torch 1.7.0 and when I run
prof.key_averages(group_by_stack_n=10).table(sort_by="cpu_time_total", row_limit=10)
It neither shows every level of the stack (it shows third-party library functions, but not which of my functions called them), nor does it show the full path or line number for the levels it does show; they are cut off around 50 characters.
How can I get a full stack trace?
|
st31533
|
Hi
I am getting the following error:
RuntimeError: index 2512 is out of bounds for dimension 0 with size 2512
Here is the full traceback:
[Screenshot: Screen Shot 2021-05-27 at 10.45.28 PM, full traceback]
Here is what I have in data:
[Screenshot: Screen Shot 2021-05-27 at 10.59.38 PM, contents of data]
I would appreciate your help. Thank you!
|
st31534
|
The index error is raised, since an index of 2512 is invalid for a shape of 2512, which would accept indices in [0, 2511], so you would have to check how this index is created and if you could clip it to the valid range.
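For illustration, a minimal sketch of clipping indices to the valid range (the values are illustrative):
import torch

idx = torch.tensor([0, 17, 2512])                 # 2512 is out of bounds for size 2512
idx = torch.clamp(idx, min=0, max=2511)           # now a valid index into dim 0
x = torch.randn(2512, 4)
print(x[idx].shape)                               # torch.Size([3, 4])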
|
st31535
|
Thank you for your reply. I am still getting this error and not sure how to fix it, here is my code:
if __name__ == "__main__":
#process & create the dataset files
parser = argparse.ArgumentParser()
# system
parser.add_argument("--feature", type=str, default="glove", help="glove | all")
#no use of user_type for now
parser.add_argument("--user_type", type=str, default="hate", help="hate | suspend")
parser.add_argument("--model_type", type=str, default="sage", help="sage | gat")
parser.add_argument("--epoch", type=int, default=201)
args = parser.parse_args()
assert(args.feature in ['glove', 'all'])
assert(args.user_type in ['hate', 'suspend'])
assert(args.model_type in ['sage', 'gat'])
print("====information of experiment====")
print("FEATURE: ", args.feature, "classification_type:", args.user_type, "MODEL:", args.model_type)
print("====end information of experiment====")
dataset = construct_dataset(args.feature)
model_type = args.model_type
index0, index1, index2,index3,index4,index5,index6,index7,index8,index9,index10,index11,index12,index13,index14,index15,index16,index17,index18,index19,index20,index21,index22,index23,index24,index25,index26,index27,index28 = get_labeled_index(feature_type=args.feature)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=123)
#y_all = [2] * len(hate_index)
y_all = [0] * len(index0)
#y_normal = [0] * len(normal_index)
y_1 = [1] * len(index1)
y_2 = [2] * len(index2)
y_3 = [3] * len(index3)
y_4 = [4] * len(index4)
y_5 = [5] * len(index5)
y_6 = [6] * len(index6)
y_7 = [7] * len(index7)
y_8 = [8] * len(index8)
y_9 = [9] * len(index9)
y_10 = [10] * len(index10)
y_11 = [11] * len(index11)
y_12 = [12] * len(index12)
y_13 = [13] * len(index13)
y_14 = [14] * len(index14)
y_15 = [15] * len(index15)
y_16 = [16] * len(index16)
y_17 = [17] * len(index17)
y_18 = [18] * len(index18)
y_19 = [19] * len(index19)
y_20 = [20] * len(index20)
y_21 = [21] * len(index21)
y_22 = [22] * len(index22)
y_23 = [23] * len(index23)
y_24 = [24] * len(index24)
y_25 = [25] * len(index25)
y_26 = [26] * len(index26)
y_27 = [27] * len(index27)
y_28 = [28] * len(index28)
#y_all.extend(y_normal)
y_all.extend(y_1)
y_all.extend(y_2)
y_all.extend(y_3)
y_all.extend(y_4)
y_all.extend(y_5)
y_all.extend(y_6)
y_all.extend(y_7)
y_all.extend(y_8)
y_all.extend(y_9)
y_all.extend(y_10)
y_all.extend(y_11)
y_all.extend(y_12)
y_all.extend(y_13)
y_all.extend(y_14)
y_all.extend(y_15)
y_all.extend(y_16)
y_all.extend(y_17)
y_all.extend(y_18)
y_all.extend(y_19)
y_all.extend(y_20)
y_all.extend(y_21)
y_all.extend(y_22)
y_all.extend(y_23)
y_all.extend(y_24)
y_all.extend(y_25)
y_all.extend(y_26)
y_all.extend(y_27)
y_all.extend(y_28)
all_index = []
#all_index.extend(hate_index)
all_index.extend(index0)
all_index.extend(index1)
all_index.extend(index2)
all_index.extend(index3)
all_index.extend(index4)
all_index.extend(index5)
all_index.extend(index6)
all_index.extend(index7)
all_index.extend(index8)
all_index.extend(index9)
all_index.extend(index10)
all_index.extend(index11)
all_index.extend(index12)
all_index.extend(index13)
all_index.extend(index14)
all_index.extend(index15)
all_index.extend(index16)
all_index.extend(index17)
all_index.extend(index18)
all_index.extend(index19)
all_index.extend(index20)
all_index.extend(index21)
all_index.extend(index22)
all_index.extend(index23)
all_index.extend(index24)
all_index.extend(index25)
all_index.extend(index26)
all_index.extend(index27)
all_index.extend(index28)
recall_test = []
accuracy_test = []
fscore_test = []
precision_test = []
all_index = np.array(all_index)
trail = 0
for train_i, test_i in skf.split(all_index, y_all):
    print("========begin trail {:01d}===========".format(trail))
    all_train_index = all_index[train_i]
    test_index = all_index[test_i]
    data = dataset[0]
    data.train_mask = torch.zeros(data.num_nodes, dtype=torch.long)
    data.train_mask[all_train_index] = 1
    data.test_mask = torch.zeros(data.num_nodes, dtype=torch.long)
    data.test_mask[test_index] = 1
    loader = NeighborSampler(data, size=[25], num_hops=1, batch_size=128, shuffle=True, add_self_loops=True)
Here is what is in the data exactly:
[Screenshot: Screen Shot 2021-05-29 at 12.23.49 AM, contents of data]
I would appreciate your help. Thank you.
|
st31536
|
I’m unsure how these indices are created, but based on the previous stack trace it seems that you are trying to index some “neighboring samples”? In case you cannot isolate and fix the issue properly (by making sure that these invalid indices are not used, or by increasing the size of the tensor which is being indexed), you could torch.clamp the indices to a max value of 2511.
|
st31537
|
Thank you so much for your reply. I figured this out and kept the data in the proper range.
|
st31538
|
I am new to PyTorch, so I am not sure how to use EfficientNet as a backbone CNN model for feature extraction, so that embeddings of images can be generated. Most examples on GitHub use a 4-layer ConvNet, so I cannot understand how to do the same thing with a large CNN model. There are implementations of EfficientNet for Torch, so what steps do I need to take to use them as a feature extractor?
I am using this EfficientNet code, which implements the network in PyTorch:
github.com
osmr/imgclsmob/blob/master/pytorch/pytorchcv/models/efficientnet.py 8
"""
EfficientNet for ImageNet-1K, implemented in PyTorch.
Original papers:
- 'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,' https://arxiv.org/abs/1905.11946,
- 'Adversarial Examples Improve Image Recognition,' https://arxiv.org/abs/1911.09665.
"""
__all__ = ['EfficientNet', 'calc_tf_padding', 'EffiInvResUnit', 'EffiInitBlock', 'efficientnet_b0', 'efficientnet_b1',
'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6',
'efficientnet_b7', 'efficientnet_b8', 'efficientnet_b0b', 'efficientnet_b1b', 'efficientnet_b2b',
'efficientnet_b3b', 'efficientnet_b4b', 'efficientnet_b5b', 'efficientnet_b6b', 'efficientnet_b7b',
'efficientnet_b0c', 'efficientnet_b1c', 'efficientnet_b2c', 'efficientnet_b3c', 'efficientnet_b4c',
'efficientnet_b5c', 'efficientnet_b6c', 'efficientnet_b7c', 'efficientnet_b8c']
import os
import math
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
from .common import round_channels, conv1x1_block, conv3x3_block, dwconv3x3_block, dwconv5x5_block, SEBlock
(This file excerpt has been truncated.)
|
st31539
|
I am using CTC in an LSTM-OCR setup and was previously using a CPU implementation (from here 86). I am now looking into using the CTCLoss function in PyTorch; however, I have some issues making it work properly. My test model is very simple and consists of a single BI-LSTM layer followed by a single linear layer.
def make_model(ninput=48, noutput=97):
    return nn.Sequential(
        # run 1D LSTM
        layer.Lstm1(100),
        # reorder for Linear layer
        layer.Reorder("BDW", "BWD"),
        # run single linear layer
        layer.Linear(noutput),
        # reorder to CTC convention
        layer.Reorder("BWD", "WBD"))
My inputs are image batches in the form “BDW” (batch, depth, width). My targets are of the form “BL”
target[0] = tensor([18, 50, 43, 61, 39, 52, 42, 43, 56])
with the numbers going from 1 to C, reserving 0 for “blank”.
I then train the model like this:
def train(model, input, target, input_lengths, target_lengths):
    assert input.size(0) == target.size(0)
    logits = model.forward(input)
    probs = nn.functional.log_softmax(logits, dim=2)
    optimizer.zero_grad()
    loss = ctc(probs, target, input_lengths, target_lengths)
    loss.backward()
    optimizer.step()
    return nn.functional.softmax(logits, dim=2)
For the optimizer I use SGD.
When training on my data set, it only predicts one letter in the beginning, but after a couple of epochs it only predicts blank for all inputs. If I only use a single sample for training and the one letter predicted in the beginning is part of the target, it will increase the probability of that output to 1 for any input, instead of predicting blank.
So far I am using a batch size of 1, because I have additional problems with how to provide the data for larger batches. If I provide the input as a "BDW" tensor, where "W" is the maximum input_length of all samples in the batch, zero-pad all other samples to the same length, and provide the correct input_lengths, the model produces NaN after a few epochs.
I had reasonable outputs using the CTC implementation mentioned in the beginning, although it was a lot slower, so I assume I am using it somehow incorrectly.
UPDATE: I at least figured out why there didn’t seem to be any training going on. I am not sure how PyTorch scales the CTC loss, but the updates were just so much smaller compared to the implementation I used previously that training stopped too early. After increasing the learning rate I noticed that training is happening.
|
st31540
|
Set reduction='none' on the loss, otherwise it is averaged across all time steps, resulting in really slow training.
|
st31541
|
reduction='none' provides the list of losses per sequence in my batch.
How do you propose I use those, if not by averaging them using reduction='mean'?
|
st31542
|
reduction='mean' will also average over lengths, so by using reduction='sum' or reduction='none' and taking the mean only over the batch dimension, you’ll get a higher gradient.
That said, for Adam, it should cancel with the implied gradient weighting, and for SGD you could use a higher learning rate, too.
Best regards
Thomas
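(For illustration, a minimal self-contained sketch of the “mean over the batch dimension only” variant; the shapes and the dummy data below are assumptions:)
import torch
import torch.nn as nn

T, N, C, S = 50, 4, 20, 10
log_probs = torch.randn(T, N, C).log_softmax(2).requires_grad_()
targets = torch.randint(1, C, (N, S), dtype=torch.long)          # class 0 reserved for blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.randint(5, S + 1, (N,), dtype=torch.long)

ctc = nn.CTCLoss(blank=0, reduction='none')
loss_per_sample = ctc(log_probs, targets, input_lengths, target_lengths)
loss = loss_per_sample.mean()     # average over the batch dimension only
loss.backward()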
|
st31543
|
I also encountered a similar problem (i.e. only predicting blank). Additionally, I found that a nearly perfect prediction has a higher loss than the predict-all-blank one. (I also pre-pad the ylabels with blank.) Here’s the setup for replication; I’m wondering if I’ve used it properly or whether there might be a bug? Please let me know if additional info is needed. Really appreciate it!
Here are the observations; note the loss from the perfect prediction is higher than that from all_blank, with/without reduction='none'. Additionally, see the bottom for the experiment setup:
# print loss
tloss = torch.nn.CTCLoss(blank=79, zero_infinity=False, reduction='none')
print('Perfect prediction:\n', tloss(pred_perf, batch_y_cat, inputls, outputls))
print('Model prediction:\n',tloss(pred_model, batch_y_cat, inputls, outputls))
# output
# Perfect prediction:
# tensor([110.0361, 109.6828, 107.2605], device='cuda:0')
# Model prediction:
# tensor([86.3294, 90.4917, 38.5629], device='cuda:0')
# print the predicted raw results
tloss = torch.nn.CTCLoss(blank=79, zero_infinity=False, reduction='mean')
for idx in range(3):
    print('=========================================')
    print('Prediction - perfect prediction')
    print(pred_perf.argmax(dim=2).permute((1,0))[idx])
    print("loss:", tloss(pred_perf[:,idx, :].unsqueeze(1), batch_y[idx].unsqueeze(0), inputls[idx], outputls[idx]))
    print('--------')
    print('Prediction - model')
    print(pred_model.argmax(dim=2).permute(1,0)[idx])
    print("loss:", tloss(pred_model[:,idx, :].unsqueeze(1), batch_y[idx].unsqueeze(0), inputls[idx], outputls[idx]))
    print('--------')
    print('Ground Truth')
    print(batch_y[idx])
    print('Unpadded ground truth')
    unpad_y = batch_y[idx][: outputls[idx]]
    print(unpad_y)
# output
# =========================================
# Prediction - perfect prediction
# tensor([55, 43, 40, 62, 41, 44, 53, 40, 62, 41, 50, 53, 62, 58, 36, 54, 43, 44,
# 49, 42, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79], device='cuda:0')
# loss: tensor(30.2797, device='cuda:0')
# --------
# Prediction - model
# tensor([79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79], device='cuda:0')
# loss: tensor(3.2668, device='cuda:0')
# --------
# Ground Truth
# tensor([55, 43, 40, 62, 41, 44, 53, 40, 62, 41, 50, 53, 62, 58, 36, 54, 43, 44,
# 49, 42, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79],
# device='cuda:0')
# Unpadded ground truth
# tensor([55, 43, 40, 62, 41, 44, 53, 40, 62, 41, 50, 53, 62, 58, 36, 54, 43, 44,
# 49, 42], device='cuda:0')
# =========================================
# Prediction - perfect prediction
# tensor([42, 50, 62, 48, 56, 38, 43, 62, 41, 56, 53, 55, 43, 40, 53, 62, 55, 43,
# 36, 49, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79], device='cuda:0')
# loss: tensor(30.3025, device='cuda:0')
# --------
# Prediction - model
# tensor([79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79], device='cuda:0')
# loss: tensor(3.4614, device='cuda:0')
# --------
# Ground Truth
# tensor([42, 50, 62, 48, 56, 38, 43, 62, 41, 56, 53, 55, 43, 40, 53, 62, 55, 43,
# 36, 49, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79],
# device='cuda:0')
# Unpadded ground truth
# tensor([42, 50, 62, 48, 56, 38, 43, 62, 41, 56, 53, 55, 43, 40, 53, 62, 55, 43,
# 36, 49], device='cuda:0')
# =========================================
# Prediction - perfect prediction
# tensor([36, 62, 38, 50, 48, 51, 47, 40, 55, 40, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79], device='cuda:0')
# loss: tensor(61.9394, device='cuda:0')
# --------
# Prediction - model
# tensor([79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79], device='cuda:0')
# loss: tensor(3.8248, device='cuda:0')
# --------
# Ground Truth
# tensor([36, 62, 38, 50, 48, 51, 47, 40, 55, 40, 79, 79, 79, 79, 79, 79, 79, 79,
# 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79, 79],
# device='cuda:0')
# Unpadded ground truth
# tensor([36, 62, 38, 50, 48, 51, 47, 40, 55, 40], device='cuda:0')
# create perfect prediction and model prediction
# key parameters
time_steps = 189
n_class = 80
blank_idx = 79
# construct the perfect prediction based on ground truth
eps = 0.0001
B, curr_t = batch_y.shape
batch_y = batch_y.cpu()
m_fit = torch.cat([torch.zeros(time_steps-curr_t, blank_idx), torch.ones(time_steps-curr_t, 1), torch.zeros(time_steps-curr_t, n_class-1-blank_idx)], dim=1)
pred_perf_prob = torch.stack([(torch.cat([torch.eye(n_class)[batch_y[i]], m_fit], dim=0)*(1-eps*n_class)+eps) for i in range(len(batch_y))], dim=0).to(device) # (B, T, n_class)
pred_perf = torch.nn.functional.log_softmax(pred_perf_prob, dim=2).permute((1,0,2)) # (T, B, n_class)
# get model prediction, which predicts all blank
with torch.no_grad():
pred_model, inputs = model.network(batch_x) # (T, B, n_class)
inputls = torch.full(size=(B,), fill_value=time_steps, dtype=torch.long).to(device)
outputls = (torch.sum(batch_y != 79, dim=1)).to(torch.long).to(device) #tensor([20, 20, 10])
Env: PyTorch 1.1, CUDA 9
|
st31544
|
A follow-up with an additional observation and minimal code for repro: I create a perfect prediction from a one-hot encoding of ylabel, and an all-blank prediction; both are a batch of a single datum. Weirdly, the loss of the perfect prediction is higher than the all-blank one when input_length is high, but lower when input_length is low. Is this expected?
import torch
import torch.nn.functional as F
T = 189
n_class = 80
y = torch.tensor([[55, 43, 40, 62, 41, 44, 53, 40, 62, 41, 50, 53, 62, 58, 36, 54, 43, 44, 49, 42]])
output_length = torch.tensor(y.shape[1])
pred_model_idx = 79*torch.ones(T, dtype=torch.long)
pred_perf_idx = torch.cat([y[0], (n_class-1) * torch.ones(T-y.shape[1], dtype=torch.long)]) # the first idx are perfect with y, then padded with blanks
pred_model = torch.eye(n_class)[pred_model_idx].unsqueeze(1) # one-hot encoding
pred_perf = torch.eye(n_class)[pred_perf_idx].unsqueeze(1) # one-hot encoding
for input_length in [torch.tensor(y.shape[1]), torch.tensor(T)]:
print("=============\ninput length:", input_length)
print("perfect loss:", F.ctc_loss(F.log_softmax(pred_perf, dim=2), y, input_length, output_length, n_class-1, 'none', True))
print("all_blank loss:", F.ctc_loss(F.log_softmax(pred_model, dim=2), y, input_length, output_length, n_class-1, 'none', True))
# OUTPUT
# =============
# input length: tensor(20)
# perfect loss: tensor([68.0656])
# all_blank loss: tensor([88.0655])
# =============
# input length: tensor(189)
# perfect loss: tensor([605.4802])
# all_blank loss: tensor([593.8168])
|
st31545
|
To be honest, neither calling it “perfect prediction” (note that the log_softmax result will assign probability mass (log) to non-target classes) nor changing the length in the way you do makes much sense to me. Could just be me, though.
Here is an observation: if your input is longer than your targets the “aligned paths” over which you take probabilities will necessarily include blanks. So if you need enough blanks, assigning a high probability to them will reduce the loss.
Best regards
Thomas
|
st31546
|
Thanks Thomas, I really appreciate your reply! I probably didn’t explain myself well: I assume perfect_prediction is more similar to ylabel than predict_blank, since perfect_prediction contains a prediction path that is exactly the same as ylabel, while predict_blank can only be decoded to an empty sequence. If that is right, why would perfect_prediction ever have a higher loss than predict_blank? (The former should always have the lower loss, no?) The experiment with the changing length is less relevant; I’m just surprised the behavior is not monotonic and depends on input_length.
BTW, would you mind explaining “So if you need enough blanks, assigning a high probability to them will reduce the loss”? Maybe I’ve missed something.
The context here is that I’m training an OCR model with CTCLoss, and during training the loss goes down but the model just keeps predicting all blanks (other people seem to have made a similar observation). I’m not sure where the bug is, which led to the loss comparison above.
|
st31547
|
Hi, have you solved the problem?
I’m having a similar one.
Some data give me an extremely high loss even when the inference result is perfect, so I am starting to wonder whether nn.CTCLoss is useful at all.
I would like to know how to avoid the problem if possible.
|
st31548
|
Note that in the above example the inference result is not “perfect” because you need to pass log-softmaxed activations into the ctc loss. This means that the ideal case would be an alignment where all targets except the true target have -inf and the true target has 0.
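As a quick illustration of this point (a hedged sketch, not taken from the original reports): with a plain one-hot prediction, the argmax class only gets a per-frame log-probability of about log(e / (e + 79)) ≈ -3.4 after log_softmax, which is why the “perfect” loss above is still large. Scaling the logits pushes the true class towards 0 and the rest towards -inf, and the CTC loss of the correctly aligned prediction then approaches 0:
import torch
import torch.nn.functional as F

T, n_class = 30, 80                                      # assumed toy sizes
y = torch.tensor([[5, 7, 9, 11]])                        # assumed toy target
target_length = torch.tensor([y.shape[1]])
input_length = torch.tensor([T])

# targets first, then padded with blanks (same construction as above)
idx = torch.cat([y[0], (n_class - 1) * torch.ones(T - y.shape[1], dtype=torch.long)])
one_hot = torch.eye(n_class)[idx].unsqueeze(1)           # (T, 1, n_class)

for scale in [1.0, 10.0, 100.0]:
    log_probs = F.log_softmax(scale * one_hot, dim=2)
    loss = F.ctc_loss(log_probs, y, input_length, target_length,
                      blank=n_class - 1, reduction='none', zero_infinity=True)
    print(f"scale={scale}: loss={loss.item():.4f}")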
|
st31549
|
Thanks, I got it.
It seems I got a big loss because the probability at each time step is relatively small, even though it is the largest one at that time step. I don’t know which works better in a real-world use case: a small loss (predicting something extremely confidently) or a relatively big loss (predicting the same thing, but less confidently).
|
st31550
|
@randinoo which CNN architecture do you have in mind as an example? And what do you mean by getting the result as an image?
|
st31551
|
randinoo:
classification
If the output is an image, the network you are using is not for classification; your case is a segmentation problem. Which image would you be looking for? The distance map D?
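If you mean a change map computed from two feature maps, a rough, hedged sketch could look like this (f1 and f2 are hypothetical feature maps, and the threshold is an arbitrary placeholder):
import torch

f1 = torch.randn(1, 64, 128, 128)                       # feature map of image 1
f2 = torch.randn(1, 64, 128, 128)                       # feature map of image 2

distance_map = torch.norm(f1 - f2, p=2, dim=1)          # per-pixel distance, (B, H, W)
change_map = (distance_map > distance_map.mean()).float()  # crude binary change map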
|
st31552
|
@eduardo4jesus yes, in this case it is segmentation, but how do I obtain the final change-map image?
The distance is f1 - f2; how do I get that distance as an image?
|
st31553
|
How can I feed images with 1024*1024 pixels into the pretrained VGG16 model?
How do I modify that in PyTorch code?
|
st31554
|
It depends on what you mean by feed images that are 1024*1024. If you are using a pretrained model without further finetuning, you likely want to simply resize the images to something around 256x256 and take a 224x224 center crop. If you decide not to resize or crop, the model should “work” in that there should not be any shape mismatches due to the use of average pooling with input images at 1024x1024, but the model accuracy will likely not be great due to the potential scale mismatch. Choosing the right resolution is actually a tricky problem (e.g., as studied in [1906.06423] Fixing the train-test resolution discrepancy).
Finally, you might need to consider that at 1024x1024 the model will likely use ~(1024/224)**2 or ~21 times the memory and computation due to the quadratic scaling of convolution activations.
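For reference, a minimal preprocessing sketch along these lines (the sizes and normalization constants follow the common ImageNet recipe; the image path is a placeholder):
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.vgg16(pretrained=True).eval()
img = Image.open("example.jpg")          # placeholder path, e.g. a 1024x1024 image
x = preprocess(img).unsqueeze(0)         # (1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)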
|
st31555
|
The images come in different sizes (>1000*1000), and I fed 224*224 inputs to the vgg16_bn model; the accuracy is 53%.
I think the accuracy will be higher if I change the input size, or not?
And in the code I can’t change the input sizes of those images.
How do I modify the model to receive the 1024*1024 images?
Thanks
|
st31556
|
The important thing is that the scale of objects (e.g., how many pixels a cat or dog occupies) is close to what was shown during training time. So if you just change the input size without finetuning or retraining, the accuracy will most likely go down.
You don’t need to modify the model for higher resolution, but some kind of finetuning or retraining is probably needed.
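A minimal finetuning sketch at the higher resolution (the dataset here is a random stand-in and num_classes=10 is an assumed value; torchvision's VGG uses adaptive average pooling, so 1024x1024 inputs run without architectural changes):
import torch
from torchvision import models
from torch.utils.data import DataLoader, TensorDataset

num_classes = 10                                         # assumption
model = models.vgg16_bn(pretrained=True)
model.classifier[-1] = torch.nn.Linear(4096, num_classes)

# random stand-in for a real 1024x1024 dataset
dummy = TensorDataset(torch.randn(4, 3, 1024, 1024), torch.randint(0, num_classes, (4,)))
loader = DataLoader(dummy, batch_size=2)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()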
|
st31557
|
Hello everyone,
over which dimension do we calculate the mean and std? Is it over the hidden dimensions of the NN layer, or over all the samples in the batch for every hidden dimension separately?
In the paper it says we normalize over the batch.
In torch.nn.BatchNorm1d, however, the input argument is “num_features”. Why would we calculate the mean and std over the different features instead of over the different samples?
Thanks in advance
|
st31559
|
You are correct that num_features corresponds to the “hidden dimension” rather than the batch size. However, if you think about this from the perspective of what statistics batchnorm needs to track, this makes sense. For example, for a hidden dimension of size 512, batchnorm needs to keep track of mean and variance for each of the 512 dimensions. Here, num_features is really just telling the module how much storage it needs to track its stats. Note that this size doesn’t depend on the batch size as taking the mean reduces across the batch dimension.
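A small sketch illustrating this (minimal example, shapes chosen arbitrarily):
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=512)
x = torch.randn(32, 512)            # (batch_size, hidden_dim)
out = bn(x)

print(bn.running_mean.shape)        # torch.Size([512]) -- one mean per feature
print(bn.running_var.shape)         # torch.Size([512]) -- one variance per feature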
|
st31560
|
@eqy
Thanks for the answer. Assuming I have sequential data of the shape (bs, dim, seq_len), does BatchNorm1d calculate the mean and std of the batch separately for every timestep, or are the timesteps somehow merged as well?
|
st31561
|
hello everyone,
So, I implemented the SSIM loss with PyTorch; it was working a month ago. Now, when I try to execute the code, it raises an error saying that the SSIM output is negative and should be at least 0.
the code is:
!pip install piqa
from piqa import ssim

class SSIMLoss(ssim.SSIM):
    def forward(self, x, y):
        return 1. - super().forward(x, y)

criterion_ssim = SSIMLoss().cuda()  # .cuda() if you need GPU support
|
st31562
|
Can you share the explicit SSIM calculation used and an example input pair that produces a negative result?
|
st31563
|
I am not able to install PyTorch; I am getting an error about conflicting packages, even though it is a new conda environment.
Created env using -
conda create --prefix ./ ipykernel -y
List of installed packages -
# packages in environment at /home/ayan/data/conda_envs/jovian_fcc:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
backcall 0.2.0 pyhd3eb1b0_0
ca-certificates 2021.4.13 h06a4308_1
certifi 2020.12.5 py39h06a4308_0
decorator 5.0.9 pyhd3eb1b0_0
ipykernel 5.3.4 py39hb070fc8_0
ipython 7.22.0 py39hb070fc8_0
ipython_genutils 0.2.0 pyhd3eb1b0_1
jedi 0.17.2 py39h06a4308_1
jupyter_client 6.1.12 pyhd3eb1b0_0
jupyter_core 4.7.1 py39h06a4308_0
ld_impl_linux-64 2.33.1 h53a641e_7
libffi 3.3 he6710b0_2
libgcc-ng 9.1.0 hdf63c60_0
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 9.1.0 hdf63c60_0
ncurses 6.2 he6710b0_1
openssl 1.1.1k h27cfd23_0
parso 0.7.0 py_0
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pip 21.1.1 py39h06a4308_0
prompt-toolkit 3.0.17 pyh06a4308_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pygments 2.9.0 pyhd3eb1b0_0
python 3.9.5 hdb3f193_3
python-dateutil 2.8.1 pyhd3eb1b0_0
pyzmq 20.0.0 py39h2531618_1
readline 8.1 h27cfd23_0
setuptools 52.0.0 py39h06a4308_0
six 1.15.0 py39h06a4308_0
sqlite 3.35.4 hdfb4753_0
tk 8.6.10 hbc83047_0
tornado 6.1 py39h27cfd23_0
traitlets 5.0.5 pyhd3eb1b0_0
tzdata 2020f h52ac0ba_0
wcwidth 0.2.5 py_0
wheel 0.36.2 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zeromq 4.3.4 h2531618_0
zlib 1.2.11 h7b6447c_3
All three of the following installation commands give the same error:
conda install pytorch torchvision torchaudio cudatoolkit -c pytorch -c nvidia
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
conda install pytorch torchvision torchaudio cudatoolkit=11.2 -c pytorch -c nvidia
I get the following error when I do not specify the cudatoolkit version:
➜ conda install pytorch torchvision torchaudio cudatoolkit -c pytorch -c nvidia [127]
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package libstdcxx-ng conflicts for:
cudatoolkit -> libstdcxx-ng[version='>=7.3.0|>=9.3.0']
pytorch -> cudatoolkit[version='>=11.1,<11.2'] -> libstdcxx-ng[version='>=7.2.0|>=9.3.0']
python=3.9 -> libstdcxx-ng[version='>=7.3.0']
torchvision -> libstdcxx-ng[version='>=5.4.0|>=7.3.0']
torchvision -> cudatoolkit[version='>=11.1,<11.2'] -> libstdcxx-ng[version='>=7.2.0|>=9.3.0']
pytorch -> libstdcxx-ng[version='>=5.4.0|>=7.3.0']
torchaudio -> python[version='>=3.9,<3.10.0a0'] -> libstdcxx-ng[version='>=5.4.0|>=7.2.0|>=7.3.0']
Package cudatoolkit conflicts for:
pytorch -> cudatoolkit[version='10.0.*|8.*|>=10.0,<10.1|>=10.1,<10.2|>=11.1,<11.2|>=10.2,<10.3|>=11.0,<11.1|>=9.2,<9.3|>=9.0,<9.1|>=8.0,<8.1|9.*|>=10.1.243,<10.2.0a0|>=9.2,<9.3.0a0|>=10.0.130,<10.1.0a0|9.2.*|>=9.0,<9.1.0a0|>=8.0,<8.1.0a0|9.0.*|8.0.*|7.5.*']
torchvision -> cudatoolkit[version='>=10.0,<10.1|>=10.1,<10.2|>=11.1,<11.2|>=10.2,<10.3|>=11.0,<11.1|>=9.2,<9.3|>=9.0,<9.1|>=10.0.130,<10.1.0a0|>=9.2,<9.3.0a0|>=9.0,<9.1.0a0']
torchvision -> pytorch==1.4.0 -> cudatoolkit[version='10.0.*|>=10.1.243,<10.2.0a0|9.2.*|>=8.0,<8.1|>=8.0,<8.1.0a0|8.*|9.*|9.0.*|8.0.*|7.5.*']
cudatoolkit
torchaudio -> pytorch==1.8.1 -> cudatoolkit[version='10.0.*|>=10.0,<10.1|>=10.1,<10.2|>=11.1,<11.2|>=10.2,<10.3|>=11.0,<11.1|>=9.2,<9.3|>=10.1.243,<10.2.0a0|>=9.2,<9.3.0a0|>=10.0.130,<10.1.0a0|9.2.*|>=9.0,<9.1|>=9.0,<9.1.0a0']
Package nccl conflicts for:
torchvision -> pytorch[version='>=0.4'] -> nccl[version='<2']
pytorch -> nccl[version='<2']
Package pytorch conflicts for:
pytorch
torchvision -> pytorch[version='1.1.*|1.2.0+cu92|1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|>=1.1.0|>=1.0.0|>=0.4|>=0.3|1.3.1.*|1.2.0.*']
torchaudio -> pytorch[version='1.2.0|1.3.0|1.3.1|1.4.0|1.5.0|1.5.1|1.6.0|1.7.0|1.7.1|1.8.0|1.8.1|>=1.1.0']
Package _libgcc_mutex conflicts for:
cudatoolkit -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
torchvision -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
python=3.9 -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]
pytorch -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex=[build=main]The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.33=0
- feature:|@/linux-64::__glibc==2.33=0
- pytorch -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17,<3.0.a0']
- torchvision -> cudatoolkit[version='>=11.1,<11.2'] -> __glibc[version='>=2.17,<3.0.a0']
Your installed version is: 2.33
My cuda version from nvidia-smi -
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.80 Driver Version: 460.80 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:01:00.0 Off | N/A |
| 0% 90C P5 29W / 250W | 131MiB / 11178MiB | 2% Default |
| | | N/A |
|
st31564
|
It seems that at least two PyTorch packages are conflicting (1.8.1 and 1.4.0). Did you also activate this new environment? If so, which Python version are you using?
|
st31565
|
Yes, I have activated the environment, and the Python version is 3.8.10; it is the default version that conda created.
|
st31566
|
I’m unable to reproduce the issue, as I can create a new env and install the packages there.
Could you also create a new env without the prefix usage via:
conda create -n tmp python=3.8
conda activate tmp
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c nvidia
python -c "import torch; print(torch.__version__)"
|
st31567
|
In NN training, when the epoch loss decreases monotonically, the score on the training and validation data should improve. This is not happening consistently: sometimes the score improves while the epoch loss decreases monotonically, and sometimes it does not improve or even gets worse. I am using binary cross-entropy. How can I rely on the loss rather than on the train/test score?
|