st46068
|
Thank you very much for your reply!
The input shapes are static
Data is pushed to the GPU not inside the Dataset, but just before the forward pass
By now I also ran a memory trace suggested in this thread: How to debug causes of GPU memory leaks? However, somehow the training runs smoothly (and extremely slowly) for several epochs in this case… Could it be that at some point GPU memory is not deallocated quickly enough and that some wait statements would solve it?
Also forgot to mention: I am using an older version of monodepth, namely 0.4.1
|
st46069
|
gebbissimo:
Could it be that at some point GPU memory is not deallocated quickly enough and that some wait statements would solve it?
This shouldn’t be the case, as PyTorch tries to recover from an OOM by clearing the cache for you.
Are you seeing these issues only in this old monodepth2 repository or also using plain PyTorch code?
|
st46070
|
Ok, thanks for the clarification.
What exactly do you mean by plain PyTorch code? What I can say is that the monodepth2 training runs with different parameter settings, e.g. different input size, batch size or network architectures. The current configuration is already pretty close to the GPU memory limit. But I will test today whether this memory usage spike in the last batch also happens for other configurations.
|
st46071
|
I have some code for a predict method (shown below) that I got to work after some trial and error. Is there some way to simplify this code? I am running the predict method in batches because if I don't, too much memory is used up. Still, I do not understand why I have to use the innermost loop. Instead of the innermost loop I tried batch_preds = self.model.forward(xb), but that fails. Why? Is there a better way to do this? Here is the code:
def predict(self, X):
    if self.gpuid is not None:
        device = torch.device(f"cuda:{self.gpuid}")
    else:
        device = torch.device("cuda")
    self.model.to(device)
    self.model.eval()
    with torch.no_grad():
        X = torch.tensor(X).float().to(device)
        predict_ds = TensorDataset(X)
        if self.predict_by_batch:
            predict_dl = DataLoader(predict_ds, batch_size=self.batch_size)
            preds = []
            for xb in predict_dl:
                for x in xb:
                    batch_preds = self.model.forward(x)
                    batch_preds = batch_preds.to('cpu')
                    preds.extend(list(batch_preds.numpy()))
            preds = np.asarray(preds)
        else:
            preds = self.model.forward(X).to('cpu').numpy()
    return np.squeeze(preds)
I should add that it seems to run slowly.
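(A possible simplification of the method above, as a hedged sketch rather than a confirmed fix: a DataLoader built on a TensorDataset yields a tuple of tensors per batch, which is likely why self.model.forward(xb) failed; unpacking the tuple removes the need for the innermost loop.)
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def predict(self, X):
    device = torch.device(f"cuda:{self.gpuid}" if self.gpuid is not None else "cuda")
    self.model.to(device)
    self.model.eval()
    with torch.no_grad():
        X = torch.tensor(X).float()
        if self.predict_by_batch:
            dl = DataLoader(TensorDataset(X), batch_size=self.batch_size)
            # each batch is a 1-tuple, hence the (xb,) unpacking
            preds = torch.cat([self.model(xb.to(device)).cpu() for (xb,) in dl]).numpy()
        else:
            preds = self.model(X.to(device)).cpu().numpy()
    return np.squeeze(preds)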
|
st46072
|
So I am really new at pytorch and did not really know where to start. For my problem all of the data fits into memory so just using simple indexing of the batches worked. That runs very fast.
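For reference, a minimal sketch of that batch-indexing idea (assuming X, model, device and batch_size are already defined; names are illustrative):
import torch

preds = []
with torch.no_grad():
    for start in range(0, len(X), batch_size):
        xb = X[start:start + batch_size].to(device)  # plain slicing, no DataLoader
        preds.append(model(xb).cpu())
preds = torch.cat(preds).numpy()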
|
st46073
|
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc1(x)
        return x
Here is my model; as you can see, I have defined fc1 just once and am using it twice sequentially in my forward function.
So ideally I should get separate weights for the 2 layers in my forward function, but I am getting weights for just 1 layer. Please help!!!
for name, param in model.named_parameters():
    print(name, param, param.shape)
Output:
fc1.weight Parameter containing:
tensor([[ 0.0108, -0.2179],
[ 0.5695, -0.1553]], requires_grad=True) torch.Size([2, 2])
fc1.bias Parameter containing:
tensor([ 0.4658, -0.6482], requires_grad=True) torch.Size([2])
|
st46074
|
Solved by ptrblck in post #3
Why should you get two different weights? In your code snippet you are reusing the layer, which only contains one set of parameters.
|
st46075
|
I don’t see why this is a problem.
Please have a look at the documentation if you haven’t already done so.
|
st46076
|
Why should you get two different weights? In your code snippet you are reusing the layer, which only contains one set of parameters.
|
st46077
|
I need to parallelize the training of an ANN using n cores of a CPU, not a GPU. Is it possible to achieve this in PyTorch? All the parallelization examples that I have seen here use GPUs…
|
st46078
|
Solved by ptrblck in post #2
You could use torch.set_num_threads(int) to define the number of threads used for intraop parallelism and torch.set_num_interop_threads(int) for interop parallelism (e.g. in the JIT interpreter) on the CPU. Also, the env vars OMP_NUM_THREADS and MKL_NUM_THREADS might be useful.
More information is…
|
st46079
|
You could use torch.set_num_threads(int) to define the number of threads used for intraop parallelism and torch.set_num_interop_threads(int) for interop parallelism (e.g. in the JIT interpreter) on the CPU. Also, the env vars OMP_NUM_THREADS and MKL_NUM_THREADS might be useful.
More information is given in the CPU threading docs.
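A minimal sketch of both options (the thread counts are arbitrary):
import torch

torch.set_num_threads(4)          # intraop parallelism
torch.set_num_interop_threads(4)  # interop parallelism; call early, before any parallel work

# Alternatively, set the env vars before launching Python:
#   OMP_NUM_THREADS=4 MKL_NUM_THREADS=4 python train.py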
|
st46080
|
Hi,
I am unable to match the standard results for A3C on Breakout in PyTorch 1.6.
Can someone please share a clean implementation of A3C-LSTM for PyTorch 1.0 onwards?
|
st46081
|
I want to insert a small tensor B into another larger tensor A, but the insert location changes over an index. (I’m not sure how to explain exactly…)
For example,
a = torch.zeros((2, 8, 10))
b = torch.rand((2, 8, 2))
for i in range(8):
    a[:, i, i:i+2] = b[:, i, :]
Suppose the first dimension is the batch dimension and the second dimension is time. Then I want to insert b into a, but the location in the third dimension changes over time. The location does not necessarily depend on the time.
Is there any tensor operation that can replace the loop which makes it faster in GPU?
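One possible loop-free version (a hedged sketch, assuming the offsets follow the i -> [i, i+2) pattern of the loop above): build the row and column index grids once and assign with advanced indexing, which broadcasts over the batch dimension.
import torch

a = torch.zeros((2, 8, 10))
b = torch.rand((2, 8, 2))

rows = torch.arange(8).unsqueeze(1)         # [8, 1]
cols = rows + torch.arange(2).unsqueeze(0)  # [8, 2]; row i holds [i, i+1]
a[:, rows, cols] = b                        # one assignment instead of the loop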
|
st46082
|
hi,
let’s say we have an image (single plane), and a corresponding matrix V. we want to smooth the image using a 2d gaussian kernel at each pixel (i, j). the gaussian kernel is parametrized with the mean that is the pixel value (i, j) of the image and standard deviation that is the corresponding value in the matrix V at the position (i, j).
the image could be single or minibatch.
is there an efficient way to do this?
can we use one single filter that re-parametrize itself dynamically with respect to V?
thanks
|
st46083
|
This might not be the most efficient way, but you could unfold the image to create image patches (same as the im2col operation), create the filters for all patches (since each patch would need a separate filter), and use a matrix multiplication to compute the output. Once this is done you could reshape the result back to the expected 4-dimensional shape.
Note that this approach would use a lot of memory, since you are explicitly creating the patches and filters.
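A rough sketch of that unfold approach for a single-plane image (the exact Gaussian parametrization here, a spatial kernel whose sigma comes from V, is an assumption for illustration):
import torch
import torch.nn.functional as F

N, C, H, W = 2, 1, 8, 8
k = 3
img = torch.randn(N, C, H, W)
V = torch.rand(N, C, H, W) + 0.1  # per-pixel standard deviations

patches = F.unfold(img, kernel_size=k, padding=k // 2)  # [N, k*k, H*W] patches

coords = torch.arange(k) - k // 2
dist2 = (coords.view(-1, 1) ** 2 + coords.view(1, -1) ** 2).float().reshape(1, -1, 1)
sigma = V.reshape(N, 1, H * W)

filt = torch.exp(-dist2 / (2 * sigma ** 2))      # one filter per pixel
filt = filt / filt.sum(dim=1, keepdim=True)      # normalize each filter

out = (patches * filt).sum(dim=1).reshape(N, C, H, W)  # per-pixel weighted sum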
|
st46084
|
WeightedRandomSampler
Can someone please explain how WeightedRandomSampler works?
It is confusing.
I have 5 imbalanced classes with counts, say, [100, 20, 167, 700, 500].
How shall I choose weights for it?
Could someone please explain it with a detailed example.
@ptrblck
|
st46085
|
The WeightedRandomSampler expects weights for each sample and uses them to draw the corresponding sample. Here is an example of how to use it.
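A minimal sketch for the counts mentioned above (the usual recipe: per-class weight = 1 / class count, then index it with each sample's target to get per-sample weights):
import torch
from torch.utils.data import WeightedRandomSampler

class_counts = torch.tensor([100., 20., 167., 700., 500.])
targets = torch.cat([torch.full((int(c),), i) for i, c in enumerate(class_counts)]).long()

class_weights = 1.0 / class_counts       # rarer class -> larger weight
sample_weights = class_weights[targets]  # one weight per sample

sampler = WeightedRandomSampler(sample_weights, num_samples=len(sample_weights), replacement=True)
# then: DataLoader(dataset, batch_size=..., sampler=sampler)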
|
st46086
|
@ptrblck
Thank you, but I still have one confusion:
for a batch_size of 50/100,
if the minority class has 100 and the majority class has 1000 samples respectively,
does it draw these 50/100 samples repeatedly from the minority class during batch processing? I mean,
does the data loader take the same 50/100 samples from the minority class after the first batch (as there is no sample point left now), while there are still a lot of sample points left for the majority class?
|
st46087
|
In the default setup (replacement = True), this would be the case and the sampler would oversample the minority class, i.e. draw the same samples multiple times (and augment them if a transformation is defined in your Dataset).
|
st46088
|
I want to find this feature —log mel filter bank energy for a speech data, and eventually feed it to the deep learning model.
That’s a different thing; as of now I am confused: is it the log of the MFCC as we compute in librosa, or is there another way?
any help will do
Regards
|
st46089
|
I am having trouble implementing this expression using indexing, and would like to know how to implement it without iterating over the mini-batch.
max_args has shape (batch_size, num_channels)
each chunk has shape (batch_size, x, y, y)
x.shape[2] == x.shape[3] == y
out = []
for i, chunk in enumerate(chunks):
    temp = []
    for j in range(max_args.shape[0]):
        temp.append(chunk[j, max_args[j, i], :, :].view(1, 1, x.shape[2], x.shape[3]))
    out.append(torch.cat(temp, 0))
out = torch.cat(out, 1)
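A possible way to drop the inner loop over the batch (a hedged sketch; shapes follow the description above, values are illustrative): index each chunk with an arange over the batch dimension plus the per-sample channel indices.
import torch

B, x, y, n_chunks = 4, 6, 5, 3
chunks = [torch.randn(B, x, y, y) for _ in range(n_chunks)]
max_args = torch.randint(x, (B, n_chunks))

batch_idx = torch.arange(B)
out = torch.stack(
    [chunk[batch_idx, max_args[:, i]] for i, chunk in enumerate(chunks)],  # [B, y, y] each
    dim=1,
)  # [B, n_chunks, y, y]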
|
st46090
|
Hello everyone,
I am training some deep learning models using PyTorch, which also includes the usage of numpy. Since the randomisation is not truly random and is pseudo-random, why aren’t the numbers, i.e. accuracy etc., the same across different runs?
I am doing torch.cuda.manual_seed(seed_val) to set the random seed.
I mean, even if I do not set some random seed, there should be some default random seed according to which my code must run and give same results across different runs. Is there something more to it?
Please do let me know if something is not clear.
Thanks,
Megh
|
st46091
|
Solved by ptrblck in post #2
By default the system PRNG is used, if I’m not mistaken, so your code will not be deterministic if you don’t properly seed it manually.
Since you are using 3rd party libraries, such as numpy, I would recommend to also seed them.
Have a look at the Reproducibility docs for more information.
|
st46092
|
Megh_Bhalerao:
I mean, even if I do not set some random seed, there should be some default random seed according to which my code must run and give same results across different runs. Is there something more to it?
By default the system PRNG is used, if I’m not mistaken, so your code will not be deterministic if you don’t properly seed it manually.
Since you are using 3rd party libraries, such as numpy, I would recommend to also seed them.
Have a look at the Reproducibility docs for more information.
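A minimal seeding sketch along those lines (whether the cuDNN flags are also needed depends on the model):
import random
import numpy as np
import torch

seed = 42
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)  # also seeds the CUDA PRNGs on current versions
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False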
|
st46093
|
Hi there, I am a 3-month freshman who is doing small NLP projects with PyTorch.
Recently I am trying to reproduce a GAN network introduced by a paper, using my own text data, to generate some specific kinds of question sentences.
Here is some background… If you have no time or interest in it, just reading the following question is OK.
As that paper says, the generator is first trained normally with normal question data, so that the output at least looks like a real question. Then, using an auxiliary classifier’s result (of classifying the outputs), the generator is trained again to generate just the specific (several unique categories) questions.
However, as the paper does not reveal its code, I have to write the code all myself. I have these three training thoughts, but I do not know their differences; could you kindly tell me about them?
If they have almost the same effect, could you tell me which is more idiomatic in PyTorch? Thank you very much!
Suppose the discriminator loss to the generator is loss_G_D, the classifier loss to the generator is loss_G_C, and loss_G_D and loss_G_C have the same shape, i.e. [batch_size, loss value]. Then what is the difference between:
1.
optimizer.zero_grad()
loss_G_D = loss_func1(discriminator(generated_data))
loss_G_C = loss_func2(classifier(generated_data))
loss = loss_G_D + loss_G_C
loss.backward()
optimizer.step()
2.
optimizer.zero_grad()
loss_G_D = loss_func1(discriminator(generated_data))
loss_G_D.backward()
loss_G_C = loss_func2(classifier(generated_data))
loss_G_C.backward()
optimizer.step()
3.
optimizer.zero_grad()
loss_G_D = loss_func1(discriminator(generated_data))
loss_G_D.backward()
optimizer.step()
optimizer.zero_grad()
loss_G_C = loss_func2(classifier(generated_data))
loss_G_C.backward()
optimizer.step()
Additional info: I observed that the classifier’s classification loss is always very big compared with the generator’s loss, like -300 vs 3. So maybe the third one is better?
|
st46094
|
hi,
I’m new to PyTorch. I have a network that trains and runs OK, except that Tensorboard doesn’t work fully. With the following lines-
image = torch.zeros((2, 3, args.image_size, args.image_size))
model(image)
writer.add_graph(model, image)
I get the error-
*** TypeError: forward() takes 2 positional arguments but 3 were given
which I don’t understand because training itself works.
Any thoughts?
Thanks!
|
st46095
|
I have kind of a similar problem. Training runs fine for me and I also have some add_scalar() methods in my code.
But as soon as I add:
tensorboard.add_graph(net, input_to_model=img, verbose=False)
in my training loop, I get the following error:
File "c:/SCNN_Pytorch/new_train.py", line 108, in train
tensorboard.add_graph(net, input_to_model=img, verbose=False)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\utils\tensorboard\writer.py", line 724, in add_graph
self._get_file_writer().add_graph(graph(model, input_to_model, verbose))
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\utils\tensorboard\_pytorch_graph.py", line 286, in graph
trace = torch.jit.trace(model, args)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\jit\_trace.py", line 733, in trace
return trace_module(
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\jit\_trace.py", line 934, in trace_module
module._c._create_method_from_trace(
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\modules\module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\modules\module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\parallel\data_parallel.py", line 149, in forward
return self.module(*inputs, **kwargs)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\modules\module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\modules\module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "c:\SCNN_Pytorch\model.py", line 52, in forward
loss_seg = self.ce_loss(seg_pred, seg_gt) #seg_gt.shape = [4, 288, 512]
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\modules\module.py", line 725, in _call_impl
result = self._slow_forward(*input, **kwargs)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\modules\module.py", line 709, in _slow_forward
result = self.forward(*input, **kwargs)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\modules\loss.py", line 961, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\functional.py", line 2468, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "C:\anaconda3\envs\SCNN\lib\site-packages\torch\nn\functional.py", line 2260, in nll_loss
if input.size(0) != target.size(0):
AttributeError: 'NoneType' object has no attribute 'size'
Which doesnt really make any sense. I checked input (which is img) and my target. Both have .size(0)=4 type int. If I print following statement:
print(img.size(0) != target.size(0))
the result is False. Can anyone help me with this?
Thank you in advance!
|
st46096
|
In this case I made some modifications to the parameters of a DataParallel wrapped model.
Such as
model = resnet50()
model = nn.DataParallel(model).cuda()
# several parameter reinitialization tricks here
model.module.conv1.weight.data = xxx
In my understanding there is a copy of the original model in each GPU device.
My question is: ‘will the aforementioned modifications to the nn.DataParallel wrapped model be broadcasted to all GPUs’?
And how about if I change the architecture of the model (such as replacing a conv3x3 in the original
model with dilated convolution)?
If it won’t be broadcasted to all other GPUs, how can I broadcast the modifications to all other devices
INSTANTLY?
|
st46097
|
Would it be possible to apply your manipulations to the model before wrapping it to DataParallel?
I think this would be the cleaner approach.
|
st46098
|
Because I will dynamically change the module to perform some kind of online network pruning, it’s impossible to apply the manipulations before wrapping in DataParallel.
|
st46099
|
Would re-wrapping into nn.DataParallel work in this case? Or are you manipulating the model during training?
|
st46100
|
Yes I’m manipulating the model during training (after each epoch).
So I think re-wrapping the model in nn.DataParallel would be okay.
model = nn.DataParallel(model)
# manipulate the model
model = model.module
model.conv1 = xxxxx
# re-wrap
model = nn.DataParallel(model)
Is this right?
|
st46101
|
Re-wrapping after the epoch should be alright.
However, I would recommend creating some dummy example and making sure the manipulation and re-wrapping are really working, e.g. set all parameters to zero and check the parameters in the next iteration for these values.
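A hedged sketch of such a dummy check (single machine, assuming at least one visible GPU):
import torch
import torch.nn as nn

model = nn.DataParallel(nn.Linear(4, 4).cuda())

model = model.module            # unwrap
with torch.no_grad():
    for p in model.parameters():
        p.zero_()               # stand-in for the real manipulation
model = nn.DataParallel(model)  # re-wrap

out = model(torch.randn(8, 4).cuda())
print(out.abs().sum())          # expect 0 if the replicas saw the new (zeroed) parameters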
|
st46102
|
Does the re-wrapping technique work well for you? I’d imagine it’s slow to use, as wrapping with DataParallel each time copies ALL its weights to the other GPUs, rather than copying just the ones needed when pruning the weights.
|
st46103
|
I have a similar question: I am changing the requires_grad of module parameters after wrapping with DataParallel. In DistributedDataParallel, which is now recommended instead of DataParallel, there is a warning that says “don’t do it!”. But I do not see the same warning for DataParallel. I agree that the proper way is to re-wrap it after the change, but that is a little more code.
|
st46104
|
Hi guys, I want my weight matrix to be block triangular matrix (and maintain that way during training), what will be the memory-efficient way to achieve this? Currently, I’m creating a mask using torch.triu (and Kronecker product with an all-one matrix of my block size), then register_hook using this mask matrix. However, my weight matrix is quite large so I would like to avoid storing the mask matrix.
There’s a straightforward way that I can implement the triangular matrix multiplication using a list, but are there any existing APIs I can use to avoid that path? Thanks.
|
st46105
|
I have what seems like a simple problem, but I cannot find answers for it anywhere. If I have two arrays, I want to multiply/combine the elements of one of them based on whether or not the elements in the other are sequential or repeated. For instance,
array_with_repeated_elements = tensor([1, 2, 0, 0, 2, 2, 2, 1, 0, 0])
# could just as well be [a, b, c, c, d, d, d, e, f, f]
array_to_be_multiplied = tensor([1., 3., 5., 2., 2., 7., 2., 4., 3., 4.])
desired_output = tensor([1, 3, 10, 28, 4, 12])
In numpy, this can be done easily:
first_index_of_each_sequence = np.hstack([0, np.where(array_with_repeated_elements[1:] != array_with_repeated_elements[0:-1])[0] + 1])
# this creates array([0, 1, 2, 4, 7, 8])
desired_output = np.multiply.reduceat(array_to_be_multiplied, first_index_of_each_sequence)
I can’t seem to do this in pytorch. The best guess I have is this monster:
first_index_of_each_sequence = torch.cat([torch.LongTensor((0,)), torch.where(array_with_repeated_elements[1:] != array_with_repeated_elements[0:-1])[0] + 1, torch.LongTensor((len(array_with_repeated_elements),))])
# makes tensor([0, 1, 2, 4, 7, 8, 10])
size_of_each_sequence = first_index_of_each_sequence[1:] - first_index_of_each_sequence[0:-1]
# makes tensor([1, 1, 2, 3, 1, 2])
full_length_array_of_ascending_index_elements = torch.arange(len(size_of_each_sequence)).repeat_interleave(size_of_each_sequence)
# makes tensor([0, 1, 2, 2, 3, 3, 3, 4, 5, 5])
desired_output_base = torch.zeros(len(size_of_each_sequence))
desired_output_base.index_add_(0, full_length_array_of_ascending_index_elements, torch.log(array_to_be_multiplied))
# does what I want in log space, but ew if I ever have a zero
desired_output = torch.exp(desired_output_base)
# duh
Does anyone have any ideas on how to do this nicely? The easy numpy implementation suggests I’ve missed something in pytorch…
|
st46106
|
That answer wouldn’t work for sequences with zeros or near-zeros in them (also really large numbers, but they’d render this whole thing a bit moot). I need a solution which won’t have numerical errors. That’s why I built the monstrosity up above.
In numpy, this is trivial. Is it not trivial in pytorch? Again, seems like I’ve missed some function.
|
st46107
|
from documentation here : http://pytorch.org/docs/master/nn.html#torch.nn.LogSoftmax, log-softmax is defined as:
f(x)=log(softmax(x))
as I know, using log on probabilities will change the high-low value, so the biggest value in softmax(x) will be the smallest value in log(softmax(x)).
Will it change the way Negative Log Likelihood compute loss when it implements it together in CrossEntropy?
|
st46108
|
Oh, I’ve just read from the NLLLoss documentation that NLLLoss is implemented differently than what I’ve known before:
loss(x, class) = -x[class]
It doesn’t use the log function. So, does it mean I can’t use Softmax and NLLLoss together?
|
st46109
|
so biggest value in softmax(x) will be the smallest value in log(softmax(x)).
The softmax function returns probabilities between [0, 1].
The log of these probabilities returns values between [-inf, 0], since log(0) = -inf and log(1) = 0.
That is why the order won’t change.
However, you should use the NLLLoss with a log_softmax output
or CrossEntropyLoss with logits if you prefer not to add an extra log_softmax layer into your model.
|
st46110
|
If the order doesn’t change, why do we have to use a ‘log-softmax’ with NLLLoss?
|
st46111
|
By “order” I meant the range of the outputs will still be in the same order.
I.e. if p0 < p1 created by softmax , then log(p0) < log(p1).
nn.NLLLoss expects the log probabilities, as the loss will be calculated as described here.
Note that nn.CrossEntropyLoss applies internally F.log_softmax and nn.NLLLoss afterwards, which is why it expects raw logits instead.
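A quick way to see that equivalence (a small check, not from the thread):
import torch
import torch.nn.functional as F

logits = torch.randn(3, 5)
target = torch.tensor([1, 0, 4])

loss_ce = F.cross_entropy(logits, target)                    # raw logits in
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), target)  # log-probs in
print(torch.allclose(loss_ce, loss_nll))  # True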
|
st46112
|
Yes, I do get that. Yet I have another question: if log-softmax gives values in the range [-infinity, 0], that means the values are negative, and the NLLLoss function is -log(y), where y = log_softmax(x), but the log of a negative value isn’t defined, so how does it work?
|
st46113
|
The formula in the docs is the negative log softmax written as:
-log(exp(x[class]) / sum(exp(x[j])))
x are the logits here while exp()/sum(exp()) is the softmax function.
|
st46114
|
Sorry but, I searched for it in the documentation, and I didn’t find it explicitly mentioned anywhere that the negative of log_softmax goes in as input to the NLLLoss function. I don’t know if I’m missing something important, but please check the docs once.
|
st46115
|
From the nn.NLLLoss docs:
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either […].
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
Examples:
m = nn.LogSoftmax(dim=1)
loss = nn.NLLLoss()
# input is of size N x C = 3 x 5
input = torch.randn(3, 5, requires_grad=True)
# each element in target has to have 0 <= value < C
target = torch.tensor([1, 0, 4])
output = loss(m(input), target)
output.backward()
|
st46116
|
But there isn’t any mention of negative log_softmax here! As you mentioned in this: Does NLLLoss handle Log-Softmax and Softmax in the same way?
|
st46117
|
Ah, sorry for the confusion, as I can see the misunderstanding now.
nn.NLLLoss expects log probabilities, so you should just apply F.log_softmax on your model output (not multiplying with -1!).
The formula posted in my previous post is how the loss can be calculated, but you shouldn’t worry about the minus sign, as it will be applied internally for you.
The posted example shows how to apply the criterion.
|
st46118
|
If the minus is applied internally, my doubt has been cleared, but they haven’t mentioned that in the documentation as per my findings.
|
st46119
|
The documentation mentions that log probabilities are expected and gives a code example.
What is missing and what statement do you think might have been helpful to solve your misunderstanding? The docs are far from perfect, so feedback is always welcome.
|
st46120
|
What needs to be given as input and what we get as output is fine; there is no problem in understanding that. But if a minus is introduced internally, I think there should be a mention of that, as the screenshot of the documentation that you provided above says:
ptrblck:
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either […].
If it was mentioned somewhere that “nn.NLLLoss internally multiplies the log-probabilities by a minus”, it would have cleared the confusion!
|
st46121
|
As you said, the biggest value in softmax(x) will be the smallest value in log(softmax(x)).
Does that mean that to calculate accuracy, the argmin of predictions will be used for classification in sentiment analysis? I am using NLLLoss to calculate the loss.
Thank you.
|
st46122
|
Hi, please could you clear my doubt: I have a multi-label classification problem, where each label can either be positive or negative, so each label on its own is binary in nature. The last layer in my network is LogSoftmax, so what loss function should I use? BCELoss? And how do I calculate the class weights? Please help.
|
st46123
|
When each label is either positive or negative it would be a binary classification or a (2 class) multi-class classification.
You can either return two logits (or log probabilities) from the model and use nn.CrossEntropyLoss (or nn.NLLLoss) or alternatively you can return a single logit and use nn.BCEWithLogitsLoss.
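A minimal sketch of the second option (one logit per label; shapes are illustrative):
import torch
import torch.nn as nn

num_labels = 5
logits = torch.randn(8, num_labels)                     # model output, no sigmoid
targets = torch.randint(0, 2, (8, num_labels)).float()  # each label independently 0/1

criterion = nn.BCEWithLogitsLoss()
loss = criterion(logits, targets)  # sigmoid is applied internally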
|
st46124
|
Hi guys, I was reading this DataParallel tutorial https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html and I have several questions. Suppose I have two GPUs, gpu A and gpu B:
1. How is the model allocated on A and B? Based on my experiments it seems that a copy of the model will be made on both A and B. That is, if my model has size 1G, then both A and B will use 1G memory. Is this correct?
2. How is the data allocated on A and B? From the tutorial, it seems you only need to use nn.DataParallel on the model, and not on the data?
3. How does batch size affect memory allocation? My GPU can allocate using a small batch size (say 10), but runs out of memory on a large batch size (say 30).
Appreciate any answers/references. Thanks.
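For reference, a minimal sketch of the usual setup (only the model is wrapped; each replica holds a full parameter copy, and the input batch is split along dim 0 across the GPUs):
import torch
import torch.nn as nn

model = nn.Linear(512, 10)             # stand-in for your model
model = nn.DataParallel(model).cuda()  # replicated onto all visible GPUs

x = torch.randn(30, 512).cuda()        # e.g. 15 samples per GPU with 2 GPUs
out = model(x)                         # outputs gathered on the default device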
|
st46125
|
Hello everyone,
I am brand new to using PyTorch and just trying to implement some older computer vision white papers to get a hang of how this all works. I had two main questions which I will ask and then elaborate on below; any guidance would be much appreciated.
How to use the PyTorch Dataset class and DataLoader to train my network if the data set is a folder containing a few thousand images.
How do the images get passed to the model class I create?
I am implementing the semantic segmentation model known as SegNet. I have been able to figure out how to build my model, at least in terms of building the constructor and forward definition. I am now trying to build my train.py, but am really struggling with using the PyTorch data loader for image data sets contained in a folder directory and not a CSV setup. Can anyone point me to an example that shows me how this works? And possibly an explanation? All the examples I find use CSV files, whereas I just have a few thousand .png files for training and testing.
I am also confused about how the images are passed to the forward function in my class “class SegNet(nn.Module):”. I never see a direct pass of the images to the network model in most examples and just wondered how they get there. What I mean is most examples I see call the class in the train.py script and then load the data, but is there some method of passing the data directly to the forward definition? I never see a direct call to the forward function where some variable storing the dataset is passed. I’m pretty new to all this so any help would be much appreciated because I might be thinking about this all wrong.
|
st46126
|
Hi,
this tutorial https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html has an example of such a dataloader, which you are searching for.
Greets
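For a folder of images, a minimal sketch along those lines (assuming the common one-subfolder-per-class layout; paths and the model variable are illustrative):
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

for images, labels in loader:
    output = model(images)  # calling the module invokes forward() on the batch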
|
st46127
|
Hi,
I have a tensor and I want to calculate softmax along the rows of the tensor.
action_values = t.tensor([[-0.4001, -0.2948, 0.1288]])
as I understand, cutting the tensor row-wise means we need to specify dim as 1.
However I got an unexpected result.
can someone please help me in understanding how softmax and dim in softmax works.
Below is what I tried, but none gave me successful results.
F.softmax(action_values - max(action_values), dim = 0)
Out[15]: tensor([[1., 1., 1.]])
F.softmax(action_values - max(action_values), dim = 1)
Out[16]: tensor([[0.3333, 0.3333, 0.3333]])
F.softmax(action_values - max(action_values), dim = -1)
Out[17]: tensor([[0.3333, 0.3333, 0.3333]])
|
st46128
|
Solved by KFrank in post #2
Hi Granth!
The short answer is that you are calling python’s max() function,
rather than pytorch’s torch.max() tensor function. This is causing
you to calculate softmax() for a tensor that is all zeros.
You have two issues:
First is the use of pytorch’s max(). max() doesn’t understand
tensor…
|
st46129
|
Hi Granth!
granth_jain:
I want to calculate softmax along the rows of the tensor.
…
However I got an unexpected result.
The short answer is that you are calling python’s max() function,
rather than pytorch’s torch.max() tensor function. This is causing
you to calculate softmax() for a tensor that is all zeros.
You have two issues:
First is the use of python’s max(). max() doesn’t understand
tensors, and for reasons that have to do with the details of max()'s
implementation, this simply returns action_values again (with the
singleton dimension removed).
The second is that there is no need to subtract a scalar from your
tensor before calling softmax(). Any such scalar drops out anyway
in the softmax() calculation.
This script illustrates what is going on:
import torch
torch.__version__
action_values = torch.tensor([[-0.4001, -0.2948, 0.1288]])
action_values
max (action_values) # this is python's max, not pytorch's
torch.max (action_values) # pytorch's tensor-version of max
action_values - max (action_values)
action_values - torch.max (action_values)
tzeros = torch.zeros ((1, 3))
tzeros
torch.nn.functional.softmax (tzeros, dim = 0)
torch.nn.functional.softmax (tzeros, dim = 1)
torch.nn.functional.softmax (action_values, dim = 1) # what you want
torch.nn.functional.softmax (action_values - 2.3, dim = 1) # shift drops out
Here is its output:
>>> import torch
>>> torch.__version__
'1.6.0'
>>> action_values = torch.tensor([[-0.4001, -0.2948, 0.1288]])
>>> action_values
tensor([[-0.4001, -0.2948, 0.1288]])
>>> max (action_values) # this is python's max, not pytorch's
tensor([-0.4001, -0.2948, 0.1288])
>>> torch.max (action_values) # pytorch's tensor-version of max
tensor(0.1288)
>>> action_values - max (action_values)
tensor([[0., 0., 0.]])
>>> action_values - torch.max (action_values)
tensor([[-0.5289, -0.4236, 0.0000]])
>>> tzeros = torch.zeros ((1, 3))
>>> tzeros
tensor([[0., 0., 0.]])
>>> torch.nn.functional.softmax (tzeros, dim = 0)
tensor([[1., 1., 1.]])
>>> torch.nn.functional.softmax (tzeros, dim = 1)
tensor([[0.3333, 0.3333, 0.3333]])
>>> torch.nn.functional.softmax (action_values, dim = 1) # what you want
tensor([[0.2626, 0.2918, 0.4456]])
>>> torch.nn.functional.softmax (action_values - 2.3, dim = 1) # shift drops out
tensor([[0.2626, 0.2918, 0.4456]])
Best.
K. Frank
|
st46130
|
I’m failing to pass a simple tensor into a simple deep nn.
What am I doing wrong?
ERROR MESSAGE:
“RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x2 and 1x128)”
import numpy as np
import torch as t
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

class DNN(nn.Module):
    def __init__(self, input=1, h1=128, output=1):
        super().__init__()
        self.input = input
        self.h1 = h1
        self.output = output
        self.fc1 = nn.Linear(input, h1)
        self.fc2 = nn.Linear(h1, output)

    def forward(self, kiwi):
        out = F.relu(self.fc1(kiwi))
        out = self.fc2(out)
        return out

dnn = DNN()
data = t.tensor([1, 2])
dnn.forward(data)
|
st46131
|
Try sending a 1x1 tensor instead of 1x2
data = t.tensor([1]) instead of t.tensor([1, 2])
|
st46132
|
I did it, but another error has emerged:
“RuntimeError: expected scalar type Long but found Float”
|
st46133
|
@Eric_Cartman
This appears to be an issue with data types.
Try sending t.tensor([1.0])
Usually to test a network, I send random data using t.randn(shape)
In this case, shape = (1, )
|
st46134
|
I made these modifications and everything went fine:
input = 2
data = t.tensor([1,2],dtype = t.float)
Thanks for your attention, @torch_bearer !!!
|
st46135
|
I filled out the form but hitting the Submit Project button does not give any response about the submission being successful.
Is this intended or a glitch?
Bests,
Benedek
|
st46136
|
Solved by jspisak in post #9
I ran a test and the form seemed to work fine (i.e. I received an email with the info). Yours did not come through, though. Feel free to send me an email directly and I can take a look. [email protected]. So sorry for this!
|
st46137
|
Yes, it is still unresolved. I wanted to submit this project:
GitHub: benedekrozemberczki/pytorch_geometric_temporal
A Temporal Extension Library for PyTorch Geometric
|
st46138
|
No, not aware of any issues. Let me try a test and see. If there is still an issue, we can just connect directly and get this figured out…
|
st46139
|
I ran a quick test and things look fine, and I received a submission email. Please try again to submit and I’ll take a look when it comes in: https://pytorch.org/ecosystem/join
thanks and apologies for any issues!
|
st46140
|
I ran a test and the form seemed to work fine (i.e. I received an email with the info). Yours did not come through, though. Feel free to send me an email directly and I can take a look. [email protected]. So sorry for this!
|
st46141
|
@jspisak I have sent you an e-mail with the answers to the questions from the form.
|
st46142
|
Hi everyone, I’ve implemented gradient accumulation using PyTorch and trained on the CIFAR-100 dataset; here is my code snippet:
net.train()
loss = 0
acc_batches = args.acc_b
for batch_index, (images, labels) in enumerate(cifar100_training_loader):
    if epoch <= args.warm:
        warmup_scheduler.step()
    if args.gpu:
        labels = labels.cuda()
        images = images.cuda()
    outputs = net(images)
    scaled_loss = loss_function(outputs, labels) / args.acc_b
    scaled_loss.backward()
    loss += scaled_loss
    print(1)
    if acc_batches > 1:
        acc_batches -= 1
        continue
    optimizer.step()
    optimizer.zero_grad()
    n_iter = (epoch - 1) * len(cifar100_training_loader) + batch_index + 1
    last_layer = list(net.children())[-1]
    for name, para in last_layer.named_parameters():
        if 'weight' in name:
            writer.add_scalar('LastLayerGradients/grad_norm2_weights', para.grad.norm(), n_iter)
        if 'bias' in name:
            writer.add_scalar('LastLayerGradients/grad_norm2_bias', para.grad.norm(), n_iter)
    print('Training Epoch: {epoch} [{trained_samples}/{total_samples}]\tLoss: {:0.4f}\tLR: {:0.6f}'.format(
        loss.item(),
        optimizer.param_groups[0]['lr'],
        epoch=epoch,
        trained_samples=batch_index * args.b + len(images),
        total_samples=len(cifar100_training_loader.dataset)
    ))
    # update training loss for each iteration
    writer.add_scalar('Train/loss', loss.item(), n_iter)
    if acc_batches == 1:
        acc_batches = args.acc_b
        loss = 0

for name, param in net.named_parameters():
    layer, attr = os.path.splitext(name)
    attr = attr[1:]
    writer.add_histogram("{}/{}".format(layer, attr), param, epoch)
then I run the following command to train my resnet50:
python train.py -net resnet50 -gpu -b 256
python train.py -net resnet50 -gpu -b 256 -acc_b 2
python train.py -net resnet50 -gpu -b 256 -acc_b 4
python train.py -net resnet50 -gpu -b 256 -acc_b 8
I’ve plotted the test acc using tensorboard, and here is the result:
[TensorBoard screenshot: test accuracy curves for the four runs]
orange line: args.acc_b = 1
dark blue line: args.acc_b = 2 (loss becomes nan at 37th epoch)
Training Epoch: 37 [3072/50000] Loss: 1.2027 LR: 0.100000
1
1
Training Epoch: 37 [3584/50000] Loss: 5.3613 LR: 0.100000
1
1
Training Epoch: 37 [4096/50000] Loss: 5.7180 LR: 0.100000
1
1
Training Epoch: 37 [4608/50000] Loss: nan LR: 0.100000
1
1
Training Epoch: 37 [5120/50000] Loss: nan LR: 0.100000
red line: args.acc_b = 4 (loss increases suddenly from 0.7734 to 6.5302 at 38 epoch)
Training Epoch: 38 [43008/50000] Loss: 0.7734 LR: 0.100000
1
1
1
1
Training Epoch: 38 [44032/50000] Loss: 6.5302 LR: 0.100000
1
1
1
1
Training Epoch: 38 [45056/50000] Loss: 5.4722 LR: 0.100000
1
1
1
1
light blue line: args.acc_b = 8 (loss increases suddenly from 0.0202 to 6.8125 at 105 epoch):
Training Epoch: 105 [6144/50000] Loss: 0.0202 LR: 0.020000
1
1
1
1
1
1
1
1
Training Epoch: 105 [8192/50000] Loss: 6.8125 LR: 0.020000
The whole project is at https://github.com/weiaicunzai/pytorch-cifar100/tree/feat/accumulate_grad_batches
I just do not understand why my loss keeps exploding when I set args.acc_b > 1.
Thanks in advance.
|
st46143
|
I am experiencing unexpected changes to the results of my network when I switch between model.eval() and model.train(). The model gives good results while model.training=True (~88% accuracy on test data) but when I switch to model.eval() the accuracy is reduced to ~55% (the same test data).
The network aims to predict one of two outputs A or B. Model architecture is: 278 inputs, 2 hidden layers of 90 neurons, 2 outputs. Optimiser: SGD. Activation: Tanh. Dropout is used with default arguments.
Here is the breakdown by outcome, A and B:
with model.train():
percentage correct predicting A: 91%
percentage correct predicting B: 79%
with model.eval():
percentage correct predicting A: 42%
percentage correct predicting B: 98%
There is a large swing towards predicting B rather than A when eval() mode is turned on.
There are a number of inputs to the model that are either 0 or 1, one of the inputs will be 1 at a time, alongside other inputs of varying value - 34 of 278 inputs are of this binary nature. Could using inputs that are 0 a lot of the time with dropout be the cause of these changes in predictions? When dropout is not used this does not occur; the results are the same in eval mode as they are in training mode.
Any help would be appreciated.
|
st46144
|
I am currently working on my mini-project, where I predict movie genres based on their posters. So in the dataset that I have, each movie can have from 1 to 3 genres, therefore each instance can belong to multiple classes. I have a total of 15 classes (15 genres). I use a mini-batch of 4. When I train my classifier, my labels are lists of 3 elements and look like this:
tensor([[ 2., 10., 5.],
[ 2., 5., 0.],
[14., 0., 0.],
[ 1., 0., 0.]]) , where 0 means there is no genre for that position available
and my output at the last stage is
tensor([[-0.0968, -0.0381, -0.0629, -0.0519, 0.1343, -0.0395, 0.0480, -0.0035,
0.0559, -0.0791, 0.0652, 0.0573, -0.0751, 0.0459, -0.0035],
[-0.0978, -0.0385, -0.0551, -0.0518, 0.1312, -0.0432, 0.0539, 0.0017,
0.0460, -0.0868, 0.0627, 0.0534, -0.0666, 0.0420, 0.0013],
[-0.0939, -0.0549, -0.0444, -0.0664, 0.1229, -0.0561, 0.0458, 0.0021,
0.0328, -0.0869, 0.0710, 0.0462, -0.0734, 0.0459, 0.0065],
[-0.0916, -0.0274, -0.0734, -0.0436, 0.1443, -0.0329, 0.0525, -0.0043,
0.0679, -0.0738, 0.0639, 0.0557, -0.0754, 0.0459, -0.0087]],
my total number of genres is 15, therefore my last fully connected layer gives me an output of 15 weights. But now, the problem is, I don’t know which loss function to choose here so that it would properly calculate the loss for my problem. I tried CrossEntropy, but it does not work since it does not support the multilabel problem. (multi-target not supported at c:\new-builder_3\win-wheel\pytorch\aten\src\thnn\generic/ClassNLLCriterion.c:21)
I also tried nn.MultiLabelSoftMarginLoss(),
but here the problem is that the numbers of elements in my output and target do not match, 60 != 12…
So I am wondering what would be the best loss function in this case and how to implement it…
Currently trainin part in my code looks like that:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 10, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(10, 20, 5)
        self.fc1 = nn.Linear(20 * 22 * 39, 100)
        self.fc2 = nn.Linear(100, 50)
        self.fc3 = nn.Linear(50, 10)
        self.fc4 = nn.Linear(10, 3)

    def forward(self, x):
        x = x.view(-1, 3, 100, 170)
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 20 * 22 * 39)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = self.fc4(x)
        # print(x)
        return x

net = Net()
import torch.optim as optim
criterion = nn.MultiLabelSoftMarginLoss()  # nn.CrossEntropyLoss() # BCEWithLogitsLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

########################################################################
# 4. Train the network
# ^^^^^^^^^^^^^^^^^^^^
for epoch in range(4):  # loop over the dataset multiple times
    losses = []
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        inputs = inputs.float()
        labels = labels.float()
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        # print statistics
        running_loss += loss.item()
        if i % 200 == 199:  # print every 200 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 200))
            running_loss = 0.0
print('Finished Training')
|
st46145
|
You could try to transform your target to a multi-hot encoded tensor, i.e. each active class has a 1 while inactive classes have a 0, and use nn.BCEWithLogitsLoss as your criterion.
Your target would thus have the same shape as your model output.
This worked pretty well in the past for me.
|
st46146
|
Hi @ptrblck, thanks for taking a look at my problem. Could you provide an example of how to do this transformation?
I also resized the number of my labels since the last question due to high imbalance in my target data. So now, each label has 6 genres and looks as follows:
tensor([1, 4, 1, 0, 5, 2])
and the output of my model looks like this:
tensor([[-0.0372, -0.0156, -0.0152, 0.0168, -0.0080, 0.0074],
[-0.0337, -0.0016, -0.0026, -0.0089, -0.0027, 0.0187]],
I am not sure how to make some classes active while others are inactive; could you give a hint or provide an example? I know that nn.BCEWithLogitsLoss has to be followed by a sigmoid as the activation function, but I am not sure what’s the best way to use it in my case.
|
st46147
|
Sure!
Given your example target, you could use scatter to create the multi-hot target:
labels = torch.tensor([1, 4, 1, 0, 5, 2])
labels = labels.unsqueeze(0)
target = torch.zeros(labels.size(0), 15).scatter_(1, labels, 1.)
print(target)
> tensor([[1., 1., 1., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
nn.BCEWithLogitsLoss takes the raw logits of your model (without any non-linearity) and applies the sigmoid internally.
If you would like to add the sigmoid activation to your model, you should use nn.BCELoss instead.
|
st46148
|
Could you explain this line of your code in a little more detail:
target = torch.zeros(labels.size(0), 15).scatter_(1, labels, 1.)
Why did you choose a dimension of 1 in the 1st argument of the scatter function, and where did the number 15 come from?
Also, I don’t need to change the format of my data outputs, right?
And how do I measure loss during the loop of my iterations? Will the following metric be good for that?
losses = []
loss = loss_fn(prediction, to_variable(target))  # Compute losses
loss.backward()  # Backpropagate the gradients
losses.append(loss.data.cpu().numpy())
optim.step()  # Update the network
print("Epoch {} Loss: {:.4f}".format(epoch, np.asscalar(np.mean(losses))))
|
st46149
|
I used 15 for dim1 since you are dealing with 15 classes (genres) as far as I’ve understood it.
Sure, let’s see what this line of code is actually doing.
torch.zeros(labels.size(0), 15) initializes a new tensor with all zeros in the shape of [batch_size, 15]. This should be the same shape as your model output. For dim1 I’m using the number of classes, so let’s call this dimension “class dimension”.
.scatter_ is an inplace method, which uses an index tensor (labels in this case) to fill the indices given by labels with a certain value in a specified dimension.
I’m using dim=1, since I would like to use the passed indices in labels ([1, 4, 1, 0, 5, 2]) to index dim1 (the “class dimension”).
Then I’m setting src=1. to fill all specified indices with the values of 1.
Does this explanation make sense to you? Let me know, if you need some further examples or explanations.
Yes, the loop looks alright. You should use item() instead of .data, but besides that it looks good!
|
st46150
|
Oh ok, I was a little confused by the 15 because I have now reduced my classes to 6, but I think I get this function now. I ran your code for the case when classes = 6, and I get the output as tensor([[1., 1., 1., 0., 1., 1.]]); however, in my case 0 still means a class. Do I need to change it in the dictionary of my labels?
If I do, I will then get ([[1., 1., 1., 1., 1., 1.]])
for labels = torch.tensor([2, 5, 2, 1, 6, 3])
and it will be true for all of my labels, since each of them has 6 different values and thus they all will be activated. Will the loss function be accurate in this case then?
My batch size is 4, so according to your example my code is as follows:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 22 * 39, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 15)

    def forward(self, x):
        # my inputs have size [170, 100, 3], so I am swapping dimensions here so it would comply with the model requirements
        x = x.view(-1, 3, 100, 170)
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 22 * 39)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
net.to(device)
import torch.optim as optim
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
print(len(trainloader))

for epoch in range(4):  # loop over the dataset multiple times
    losses = []
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data
        labels = labels.unsqueeze(-1)
        targets = torch.zeros(labels.size(0), 15).scatter_(1, labels, 1.)
        targets = targets.squeeze(0)
        targets = targets.float()
        inputs, targets = inputs.to(device), targets.to(device)
        # zero the parameter gradients
        optimizer.zero_grad()
        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
        losses.append(loss.data.cpu().numpy())
    print("Epoch {} Loss: {:.4f}".format(epoch, np.asscalar(np.mean(losses))))
print('Finished Training')

dataiter = iter(testloader)
images, labels = dataiter.next()
net.to(device)
images, labels = images.to(device), labels.to(device)
outputs = net(images)

with torch.no_grad():
    for data in testloader:
        images, labels = data
        images, labels = images.to(device), labels.to(device)
        y = net(images)
        print('Y (logits): {}'.format(y.data.cpu().numpy()))
        print('Y (argmax): {}'.format(y.data.cpu().numpy() > 0))
but the output that I am getting is:
Y (logits): [[-1.631788 -1.1816276 -2.5093026 -3.4875336 -2.4541166 -3.1875174
-2.6211216 -3.6854322 -2.8854806 -3.81516 -3.0067604 -3.414101
-3.5943007 -3.71672 -3.5693698]
[-1.7025979 -1.2310716 -2.612976 -3.6319766 -2.5535479 -3.3250632
-2.7306297 -3.8352551 -3.0070634 -3.9727454 -3.1343684 -3.5583117
-3.7449632 -3.8727129 -3.7125916]
[-1.6599652 -1.1989014 -2.5494416 -3.540475 -2.487864 -3.2383528
-2.6703634 -3.7370138 -2.932359 -3.877006 -3.0570838 -3.470197
-3.6498802 -3.7754254 -3.6216202]
[-1.4784788 -1.0684851 -2.273665 -3.1516986 -2.2127426 -2.8756993
-2.3788373 -3.3367095 -2.598027 -3.450463 -2.7082787 -3.08899
-3.2494898 -3.3505046 -3.230426 ]]
Y (argmax): [[False False False False False False False False False False False False
False False False]
[False False False False False False False False False False False False
False False False]
[False False False False False False False False False False False False
False False False]
[False False False False False False False False False False False False
False False False]]
even though my loss is on average 0.2213
so I am not sure what is wrong in this case… I feel like there are 3 possible reasons for that:
1. I don’t measure accuracy right
2. Something is wrong with my training part (I implemented my loss criterion wrong)
3. Something is wrong with my data (imbalanced?)
|
st46151
|
If you reduced the number of classes to 6, your model should also output the logits for these 6 classes.
Currently it seems like your model has an output of shape [batch_size, 15].
I’m not sure to understand your labels properly, as I thought the indices point to the classes in the current sample, e.g. a tensor of [2, 5] would indicate class2 and class5 are present for the current sample, while all others are not.
Could you explain your labels a bit more, since I think I misunderstood them?
|
st46152
|
Sorry for the confusion, I tried my code with 15 classes, so let’s better stick with that assumption for this discussion. My labels are genres that were vectorized into numbers: for example [Action, Comedy, Romance, Horror] would have [1, 3, 5, 8] in the current label. So you think something is wrong with that part?
|
st46153
|
No, that’s alright.
I was just wondering, about the other example, since there were some repetitions in the target:
labels = torch.tensor([2, 5, 2, 1, 6, 3])
This sample would have class2 “twice”. Is this a typo?
Also, you should use 0-based indices, i.e. your targets should be in the range [0, nb_classes-1].
I’m not sure, what this means:
ViniLL:
however in my case 0 still means a class.
class0 is still a valid class and will be set to 1, if the labels tensor indicates it:
labels = torch.tensor([1, 0, 5])
labels = labels.unsqueeze(0)
target = torch.zeros(labels.size(0), 6).scatter_(1, labels, 1.)
print(target)
> tensor([[1., 1., 0., 0., 0., 1.]])
|
st46154
|
Oh ok, I think I was misinterpreting the output… so e.g.
labels = torch.tensor([1, 4, 1, 0, 5, 2])
labels = labels.unsqueeze(0)
target = torch.zeros(labels.size(0), 6).scatter_(1, labels, 1.)
print(target)
will give output
tensor([[1., 1., 1., 0., 1., 1.]]), meaning no class 3 was found, right?
Regarding the labels, the “6” was a typo, since I was just giving an example with random numbers, but I do indeed use 0-based indices in my code. And the “2” is not a typo, since some training batches may contain more genres from one class than another.
|
st46155
|
Yes, exactly. Your model should output a high probability of all classes but class3 for this sample.
Sure, a batch may contain multiple samples with the same class, but my code snippet currently works on a single sample. What would the two 2s mean in that case?
|
st46156
|
Ok, thank you for making it clear.
That means that in this particular case I have [Adventure, Action, Adventure, Romance, Horror, Documentary] in my label tensor, where the Adventure genre appears twice as a ground truth label.
I think I see what is going on… My batch label should not have duplicates, right?
|
st46157
|
Well, you might have duplicates in a certain batch, but it is strange to have them for a single sample.
Let’s say you have a batch of two samples with the following labels:
batch[
sample0: [Adventure, Action]
sample1: [Adventure, Action, Romance]
]
This example is perfectly fine. The corresponding target tensors could look like this (depending on the mapping between the genres and the class index):
[[2, 1],
[2, 1, 5]]
It’s still a bit strange for me to see the same label (Adventure) for the same sample.
This would mean the example from above could look like this:
sample0: [Adventure, Adventure, Action] - [2, 2, 1]
Would you like to ignore duplicates for these samples or does it have any meaning?
|
st46158
|
Alright, let me try to make it less confusing by printing out the actual intermediate outputs from my code:
tensor([1, 0, 1, 1]) - original label from trainloader,
torch.Size([4]) - size of this label
torch.Size([4, 1]) - size after unsqueezing
tensor([0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) - this is what I have after using scatter function and squeezing it back
tensor([ 4, 2, 6, 10])
torch.Size([4])
tensor([[ 4],
[ 2],
[ 6],
[10]])
torch.Size([4, 1])
tensor([0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) - same output in the next iteration
And no, duplicates don’t mean anything in my problem; I think they appeared as a result of data preprocessing…
Initially, my labels is a list of lists, which is as such, e.g.
[ [Adventure, Action, Romance], [Horror, Thriller], [ Documentary], … ,[ Adventure, Action]]
I vectorized the data according to a dictionary and then
Since it was a list of lists, I decided to flatten it to make sure it would be easier to work with when it comes to batches… Since the number of my labels after flattening the list was not equal to the number of instances given, I decided to cut it like
y_train = y_train[0 : len(x_train)]
so it would be easier for Dataloader to split it into batches. And I think that is why I ended up with some duplicates. Do you have any suggestion of how to avoid this problem?
|
st46159
|
Ah OK, thanks for clarification.
Well, I think if we can just ignore duplicates, your code should be fine, since scatter_ will just do its job.
|
st46160
|
But why then do I get so many Falses and why do my outputs all come out negative? What am I doing wrong? Or is something wrong with the metric?
It also seems like I am losing some tensors when I use the squeeze/unsqueeze method. E.g. in your example,
labels = torch.tensor([1, 4, 1, 0, 5, 2])
tensor([[1., 1., 1., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
so, all 5 values are being converted to 1s
while in my case
tensor([ 4, 2, 6, 10])
tensor([[ 4],
[ 2],
[ 6],
[10]])
torch.Size([4, 1])
tensor([0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
Only one is converted… I feel like the problem is here
|
st46161
|
It looks like you are unsqueezing dim1 instead of dim0.
Could you check that and see how the target tensor looks?
|
st46162
|
Sure. If I do unsqueeze (0 ), I am having the following output:
before unsqueezing
tensor([13, 14, 9, 4])
torch.Size([4])
after unsqueezing
tensor([[13, 14, 9, 4]])
torch.Size([1, 4])
after scattering
tensor(0.)
and a following error
raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
ValueError: Target size (torch.Size([15])) must be the same as input size (torch.Size([4, 15]))
Actually that is why I changed my dimension from 0 to 1
|
st46163
|
Could you post the code where you are transforming your class indices to this multi-hot encoded format?
I would recommend to apply it somewhere beforehand on each target sample or in the __getitem__ of your Dataset, so that your training code will get the already processed targets.
Your criterion (nn.BCEWithLogitsLoss) expects the model output and target to have the same shape, so I guess the error is thrown somewhere using this criterion.
The scatter_ should work using my code. I’m not sure why you get a scalar output.
|
st46164
|
Hello everybody,
I am sorry if the post is uncategorized but this is my first post in the forum and I am learning how it works.
I need to convert a PyTorch-based CNN into a Keras-based one. In particular I need to convert the PyTorch code
conv2 = torch.nn.Conv2d(32, 64, 3, 2, 1)
x = torch.nn.functional.relu(self.conv2(x))
into a Keras one. I tried
conv2 = keras.layers.Conv2D(64, 3, strides=(2, 2), padding='same', activation='relu', name='conv2')
but when I call the Keras layer and the PyTorch one, starting from the same input data and with the same initialization for weights and biases, I obtain two tensors (one in PyTorch and one in Keras) with the same size but different values. Everything works if I set stride=1 in both cases.
Does the stride work differently in keras and in pytorch?
Does anyone have some suggestions for my issue?
Thank you.
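(A hedged guess at the cause, with a sketch: with stride 2, Keras' padding='same' pads asymmetrically, e.g. only bottom/right for even input sizes, while PyTorch's padding=1 pads both sides. Padding explicitly and using 'valid' should reproduce the PyTorch layer:)
from tensorflow import keras

conv2 = keras.Sequential([
    keras.layers.ZeroPadding2D(padding=((1, 1), (1, 1))),  # match PyTorch's padding=1
    keras.layers.Conv2D(64, 3, strides=(2, 2), padding='valid', activation='relu', name='conv2'),
])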
|
st46165
|
Hi,
I created a CNN that uses PyTorch to learn multi-class multi-label problems.
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.ConvLayer1 = nn.Sequential(
            nn.Conv2d(3, 16, 5),
            nn.MaxPool2d(2),
            nn.ReLU(),
        )
        self.ConvLayer2 = nn.Sequential(
            nn.Conv2d(16, 32, 5),
            nn.MaxPool2d(2),
            nn.ReLU(),
        )
        self.ConvLayer3 = nn.Sequential(
            nn.Conv2d(32, 64, 5),
            nn.MaxPool2d(2),
            nn.ReLU(),
        )
        self.ConvLayer4 = nn.Sequential(
            nn.Conv2d(64, 32, 5),
            nn.MaxPool2d(2),
            nn.ReLU(),
            # nn.Dropout(0.2, inplace=True),
        )
        self.Linear1 = nn.Linear(32 * 10 * 10, 2048)
        self.Linear2 = nn.Linear(2048, 1024)
        self.Linear3 = nn.Linear(1024, 512)
        self.Linear4 = nn.Linear(512, 5)

    def forward(self, x):
        x = self.ConvLayer1(x)
        x = self.ConvLayer2(x)
        x = self.ConvLayer3(x)
        x = self.ConvLayer4(x)
        # print(x.shape)
        x = x.view(-1, 32 * 10 * 10)
        # print(x.shape)
        x = self.Linear1(x)
        x = self.Linear2(x)
        x = self.Linear3(x)
        x = self.Linear4(x)
        return nn.functional.sigmoid(x)

labels = ["desert", "mountains", "sea", "sunset", "trees"]
net = Net()
# criterion = torch.nn.BCEWithLogitsLoss()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=0.0001)
threshold = 0.7
n_epochs = 3
history = {"train_loss_mean": [], "train_acc_mean": [], "test_loss_mean": [], "test_acc_mean": []}

for epoch in range(n_epochs):
    """ train mode """
    net.train()
    train_loss = 0.0
    for inputs, labels in train_dataloader:
        optimizer.zero_grad()
        outputs = net(inputs)
        print(outputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    train_loss_mean = train_loss / len(train_dataloader)
    print("Epoch: {}; Loss(mean): {}".format(epoch, train_loss_mean))
    history["train_loss_mean"].append(train_loss_mean)
There are 5 labels (“desert”, “mountains”, “sea”, “sunset”, “trees”), which are landscape images. There are 2000 images. The image data was obtained from Kaggle: miml_image_data (multi-instance multi-label classification under natural scene).
Each image is resized to 224 * 224, converted to a tensor and normalized.
self.transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
The output result is the probability (0-1) for each label.
(ex: desert: 0.52, mountains: 0.83, sea: 0, sunset: 0.12, trees: 0.26)
Therefore, the output layer applies the sigmoid function.
return nn.functional.sigmoid(x)
It processes each batch, gives data (image data, label) to the created network, and displays the result.
for inputs, labels in train_dataloader:
optimizer.zero_grad()
outputs = net(inputs)
print (outputs)
The content of outputs is the probability of each label.
I hope that this output will improve as the processing of 1 batch and 1 epoch progresses.
ex)
1 epoch
tensor ([0.52, 0.55, 0.23, 0.11, 0.32])
100 epoch
tensor ([0.98, 0.83, 0, 0, 0.12])
However, in reality, it is getting worse.
1 loop
tensor ([[0.4911, 0.5088, 0.4933, 0.4954, 0.5039],
[0.4914, 0.5085, 0.4938, 0.4958, 0.5038],
[0.4908, 0.5082, 0.4936, 0.4962, 0.5036],
30 loop
tensor ([[7.1020e-01, 1.0376e-01, 2.8710e-01, 7.2361e-02, 6.9248e-02],
[5.2053e-02, 1.5246e-01, 3.2112e-01, 5.2325e-01, 1.5300e-01],
[4.0268e-02, 4.2312e-01, 3.4056e-01, 3.7324e-02, 2.2200e-01],
Am I making a big mistake?
Thank you!
|
st46166
|
What do you mean by “it is getting worse”?
The loss doesn’t decrease? The accuracy doesn’t increase?
You only show the output of the model, which does not reflect its performance.
If you cannot make it learn, try to train your model on a very small amount of data, to see if it can at least overfit.
|
st46167
|
What do you mean by “it is getting worse”?
“print (outputs)” is not the expected value.
outputs is a value of 0-1 output by the sigmoid function and represents the probability.
If the learning is good, the output will be as follows.
1 loop
tensor ([[0.4911, 0.5088, 0.4933, 0.4954, 0.5039],
[0.4914, 0.5085, 0.4938, 0.4958, 0.5038],
[0.4908, 0.5082, 0.4936, 0.4962, 0.5036],
30 loop
tensor ([[0.7911, 0.8088, 0.1933, 0.1954, 0.0139],
[0.8914, 0.8085, 0.2938, 0.1958, 0.2038],
[0.1908, 0.082, 0.036, 0.2962, 0.8036],
Actually, it is as follows.
30 loop
tensor ([[7.1020e-01, 1.0376e-01, 2.8710e-01, 7.2361e-02, 6.9248e-02],
[5.2053e-02, 1.5246e-01, 3.2112e-01, 5.2325e-01, 1.5300e-01],
[4.0268e-02, 4.2312e-01, 3.4056e-01, 3.7324e-02, 2.2200e-01],
It’s getting lower …
Perhaps this understanding is wrong…
loss is decreasing.
acc does not change as learning progresses.
Most of this data has only one label. (mean 1.24 labels)
In the current learning model, if you convert output to a 0/1 bool type with a certain threshold (> 0.7), almost everything will be 0.
Therefore, acc is about 0.8 (4/5 match).
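For illustration, a small sketch of that accuracy computation (values made up), showing why all-zero predictions already score about 0.8 per label:
import torch

probs = torch.tensor([[0.71, 0.10, 0.29, 0.07, 0.07]])  # sigmoid outputs
targets = torch.tensor([[1., 0., 0., 0., 0.]])          # multi-hot labels

preds = (probs > 0.7).float()
acc = (preds == targets).float().mean()
print(acc)  # 1.0 here; an all-zero prediction would still give 0.8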
|