st81568 | 1: The optimizer does not include the parameters of the CNN layer.
2: Set the grad to 0 before updating the model parameters, or set requires_grad=False for the parameters of the CNN before model training. |
st81569 | @DoubtWang
Thank you for the help.
But can you show me some example code…?
I'm a beginner^^ |
st81570 | Hi @wonchulSon ,
Please find two threads about it here:
How the pytorch freeze network in some layers, only the rest of the training?
Setting .requires_grad = False should work for convolution and FC layers. But how about networks that have instanceNormalization? Is setting .requires_grad = False enough for normalization layers too?
Correct way to freeze layers
I would like to do it the following way -
# we want to freeze the fc2 layer this time: only train fc1 and fc3
net.fc2.weight.requires_grad = False
net.fc2.bias.requires_grad = False
# passing only those parameters that explicitly requires grad
optimizer = optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=0.1)
# then do the normal execution of loss calculation and backward propagation
# unfreezing the fc2 layer for extra tuning if needed
net.fc2.weight.requires_grad = True
n…
The second one is doing exactly what @DoubtWang is suggesting above.
Let us know if you need more details about it! |
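As an editorial aside, a minimal self-contained sketch (stand-in model, not code from the thread) that combines both suggestions: requires_grad=False stops the weight updates, and calling .eval() on the frozen block additionally stops the running-statistics updates of its normalization layers.
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 8, 3), nn.InstanceNorm2d(8, affine=True, track_running_stats=True), nn.ReLU()),
    nn.Conv2d(8, 2, 3),
)

# freeze the first block: no more gradient updates for its parameters ...
for param in model[0].parameters():
    param.requires_grad = False
# ... and no more running-stat updates for its norm layer
model[0].eval()

# only pass the still-trainable parameters to the optimizer
optimizer = optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)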
st81571 | Hi, @wonchulSon ,
I believe you can find some examples and code in the threads given by @spanev. |
st81572 | If I have
class A(nn.Module)
and
class B(nn.Module):
    def __init__(self):
        ....
        self.a_network = A()
Will B.eval() also set a_network to evaluation mode?
Thanks! |
st81573 | Sets the module in evaluation mode.
I think this function also applies recursively to sub-modules. In my previous research, there was a similar operation. Why do you doubt it? |
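A quick toy check (my own sketch, not from the thread) that eval() propagates to sub-modules via the .training flag:
import torch.nn as nn

class A(nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm1d(4)

class B(nn.Module):
    def __init__(self):
        super().__init__()
        self.a_network = A()

b = B()
b.eval()
print(b.training, b.a_network.training, b.a_network.bn.training)  # False False False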
st81574 | if i have A=torch.rand(2,3,5,6) and B = torch.rand(2,3,6,7) how I can do multiplication in a way that my output C has 2x3x5x7 dimension. in other words, i do matrix multiplication in a way that
C[0,0,:,:] = A[0,0,:,:]xB[0,0,:,:]
C[0,1,:,:] = A[0,1,:,:]xB[0,1,:,:]
C[0,2,:,:] = A[0,0,:,:]xB[0,2,:,:]
C[1,0,:,:] = A[1,0,:,:]xB[1,0,:,:]
C[1,1,:,:] = A[1,1,:,:]xB[1,1,:,:]
C[1,2,:,:] = A[1,2,:,:]xB[1,2,:,:]
.... |
st81575 | Solved by vainaijr in post #2
I think einsum could be used here
x = torch.randn(2, 3, 5, 6)
y = torch.randn(2, 3, 6, 7)
z = torch.einsum('abcd, abde -> abce', x, y)
z.shape |
st81576 | I think einsum could be used here
x = torch.randn(2, 3, 5, 6)
y = torch.randn(2, 3, 6, 7)
z = torch.einsum('abcd, abde -> abce', x, y)
z.shape |
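As an aside (not part of the original answer): torch.matmul broadcasts over the leading batch dimensions, so it gives the same result here:
import torch

x = torch.randn(2, 3, 5, 6)
y = torch.randn(2, 3, 6, 7)
z = torch.matmul(x, y)  # batched matmul over the leading (2, 3) dims
print(z.shape)          # torch.Size([2, 3, 5, 7])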
st81577 | Hi everyone,
I am trying to slice a batch tensor (batch, channels, size, size) with a batch of coordinates (batch, 2, 2). This is similar to this topic, which does not seem to have a proper solution.
Here is a toy example:
batch = 6
channels = 5
board_size = 4
agent_size = 2
iterations = 1000
B = torch.FloatTensor(batch, channels, board_size, board_size).uniform_(-100, 100)
pos = torch.LongTensor([[0, 1], [1, 3], [2, 0], [2, 2], [2, 1], [2, 2]])
# pos = torch.randint(0, board_size - agent_size, (batch, 2))
In this example, the resulting tensor would have shape (batch, channel, agent_size, agent_size) = (6, 5, 2, 2) and would be formed by blocks (5, 2, 2) that are not aligned in the original tensor.
The problem is that while indexing accepts a multi-element tensor, slicing does not. Therefore, the solution using numpy slicing notation is not valid and triggers "TypeError: only integer tensors of a single element can be converted to an index".
M = B[:, :, pos[:, 0]: pos[:, 0] + agent_size, pos[:, 1]: pos[:, 1] + agent_size]
One alternative would be to pass all the indices of the sliced tensor axis by axis, but that would require building one tensor per axis with the total number of elements in the final slice (see this post). While this might be simple for lower dimensions and small slices, in my toy example it would require building 4 tensors, each with 6 x 5 x 2 x 2 = 120 indices and each following a different logic.
My current solutions are:
use a loop to traverse the batches
Use the tensors for indexing. I do it twice to get the sliced “frame”.
# method 1: loop along the batch dimension
def multiSlice1(B, pos, size):
    s = B.shape
    M = torch.zeros(s[0], s[1], size, size)
    for i in range(B.shape[0]):
        M[i] = B[i, :, pos[i, 0]: pos[i, 0] + size, pos[i, 1]: pos[i, 1] + size]
    return M

# method 2
def multiSlice2(B, pos, size):
    pos_row = pos[:, 0]
    pos_row = pos_row.view(-1, 1) + torch.arange(size)
    pos_row = pos_row.view(pos_row.shape[0], 1, pos_row.shape[1], 1)
    expanse = list(B.shape)
    expanse[0] = -1
    expanse[2] = -1
    pos_row = pos_row.expand(expanse)
    M1 = torch.gather(B, 2, pos_row)

    pos_col = pos[:, 1]
    pos_col = pos_col.view(-1, 1) + torch.arange(size)
    pos_col = pos_col.view(pos_col.shape[0], 1, 1, pos_col.shape[1])
    expanse = list(M1.shape)
    expanse[0] = -1
    expanse[3] = -1
    pos_col = pos_col.expand(expanse)
    M2 = torch.gather(M1, 3, pos_col)
    return M2
Is there any simpler solution that is more efficient? |
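One more option (an editorial sketch, not from the thread, assuming pos + size stays inside the board): Tensor.unfold builds a view of all size x size patches, and advanced indexing then picks one patch per batch element:
import torch

def multi_slice_unfold(B, pos, size):
    # B: (batch, channels, H, W); pos: (batch, 2) with pos + size <= H, W
    patches = B.unfold(2, size, 1).unfold(3, size, 1)
    # patches: (batch, channels, H - size + 1, W - size + 1, size, size), a view (no copy)
    idx = torch.arange(B.size(0))
    return patches[idx, :, pos[:, 0], pos[:, 1]]  # (batch, channels, size, size)

batch, channels, board_size, agent_size = 6, 5, 4, 2
B = torch.rand(batch, channels, board_size, board_size)
pos = torch.randint(0, board_size - agent_size + 1, (batch, 2))
print(multi_slice_unfold(B, pos, agent_size).shape)  # torch.Size([6, 5, 2, 2])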
st81578 | Is there a fast and efficient way in PyTorch to sample a vector (of potentially large dimension), given only its pdf in closed-form? In my case, it is intractable to analytically compute the inverse of the cdf. |
st81579 | Hi Subho!
Subho:
Is there a fast and efficient way in PyTorch to sample a vector (of potentially large dimension), given only its pdf in closed-form? In my case, it is intractable to analytically compute the inverse of the cdf.
Well, it depends, of course, on your probability density function …
You say nothing about it other than that you have a closed-form
expression for it.
Given that, pre-compute the following:
Analytically or numerically compute its integral to get the
cumulative distribution function. You say that you cannot invert
this analytically, so invert it numerically, storing the (pre-computed)
inverse as a look-up / interpolation table. (The granularity and
interpolation scheme for the interpolation table will depend on
the accuracy required of your inverted cumulative distribution
function.)
Then on a tensor (vector) basis, generate a bunch of uniform
deviates and pump them through (on a tensor basis) your inverse
cumulative distribution function. You now have a tensor (vector)
of samples from your original probability density function.
Good luck!
K. Frank |
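A rough sketch of this recipe (my own code, assuming a reasonably recent PyTorch with torch.searchsorted; the example pdf is arbitrary):
import torch

def make_inverse_cdf(pdf, lo, hi, n_grid=10000):
    # tabulate the pdf and integrate numerically (trapezoid rule)
    x = torch.linspace(lo, hi, n_grid)
    p = pdf(x)
    dx = x[1] - x[0]
    cdf = torch.cumsum((p[:-1] + p[1:]) * 0.5 * dx, dim=0)
    cdf = torch.cat([torch.zeros(1), cdf])
    cdf = cdf / cdf[-1]  # normalize to [0, 1]

    def inverse_cdf(u):
        # locate each uniform deviate in the tabulated cdf and interpolate linearly
        idx = torch.searchsorted(cdf, u).clamp(1, n_grid - 1)
        c0, c1 = cdf[idx - 1], cdf[idx]
        x0, x1 = x[idx - 1], x[idx]
        w = (u - c0) / (c1 - c0).clamp_min(1e-12)
        return x0 + w * (x1 - x0)

    return inverse_cdf

# example: sample from an (unnormalized) pdf proportional to exp(-x**4)
inv_cdf = make_inverse_cdf(lambda x: torch.exp(-x**4), -3.0, 3.0)
samples = inv_cdf(torch.rand(100000))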
st81580 | Hello Subho!
Subho:
Turns out that I can use simple rejection sampling in my case. Thanks anyways.
Rejection sampling doesn’t parallelize naturally. You would
therefore typically run the rejection-sampling loop individually
for each element of the vector you want to populate with random
deviates, thereby forgoing the benefits of using pytorch’s tensor
operations.
Best.
K. Frank |
st81581 | I am new to PyTorch and stuck with probably a basic problem. I have two images. One of them is my input, which must be trained with respect to the other one, which is the ground truth. I built 250x250 pixelwise data for these two images. I wrote a simple dataset class and dataloader for them. When I check the size of my image, it gives me 1x250 instead of 250x250. So it basically considers every row in my matrix as a separate sample, but the whole 250x250 matrix is my single sample. What am I doing wrong? Or is it just right?
If I do this correctly, I will load hundreds of input and target images (250x250 matrices) for my study. For now I am just trying to learn how to do it.
I am posting my code below. Thank you in advance.
FOLDER_DATASET = "./ames03/"

class CustomData(Dataset):
    def __init__(self, mat_path):
        data_DF = scipy.io.loadmat(mat_path + "ames03.mat")
        self.images = torch.from_numpy(data_DF['MD'])
        data_targets = scipy.io.loadmat(mat_path + "ames03_AOA=0.mat")
        self.targets = torch.from_numpy(data_targets['z1'])

    def __getitem__(self, index):
        x = self.images[index]
        y = self.targets[index]
        return x, y

    def __len__(self):
        return len(self.images)

dataset = CustomData(FOLDER_DATASET)
loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=1)
dataiter = iter(loader)
image, target = dataiter.next()
print(image.size()) |
st81582 | It looks like you’ve just loaded a single image using ames03.mat or does this .mat file contain multiple images?
Usually you would pass a list of files to __init__ and load them lazily in __getitem__.
Would that be possible? |
st81583 | Thank you for your reply.
Yes, I have loaded one image from the "MD" variable in ames03.mat in the form of a 250x250 matrix and converted it to a tensor as my trainable image. Also, I've loaded another image from the "z1" variable in ames03_AOA=0.mat as my ground truth image. So far, if I have made a mistake, please tell me.
Since I loaded only one image each for "images" and "targets", I expect only 0 as the index in __getitem__. But when I call my dataset, there seem to be 250 samples of shape 1x250 instead of 1 sample of shape 250x250.
You asked whether this .mat file contains multiple images. It contains only one image. This file is the output of my MATLAB script. It stores just a 250x250 matrix. My code somehow thinks that this file has 250 samples of shape 1x250 (the rows of this matrix). |
st81584 | If you only have a single image (and target), you don’t necessarily need to wrap it in a Dataset.
Usually you would like to load more than a single sample, so that the index in __getitem__ is used to select the current sample.
This is done via lazy loading:
You would pass e.g. a list of paths to __init__ and load each image/file in __getitem__ by selecting the path to the current image from the passed list. Here is a small dummy example (without loading a target)
class MyLazyDataset(Dataset):
    def __init__(self, image_paths, transform=None):
        self.image_paths = image_paths
        self.transform = transform  # e.g. transforms.ToTensor()

    def __getitem__(self, index):
        # Get current path
        current_image_path = self.image_paths[index]
        # Load sample
        x = Image.open(current_image_path)
        # Apply transformation, if defined
        if self.transform is not None:
            x = self.transform(x)
        return x

    def __len__(self):
        # return number of images
        return len(self.image_paths)

Alternatively, you could also preload the data, if it's small:

class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data  # data could have the shape [nb_samples, 3, 224, 224]

    def __getitem__(self, index):
        x = self.data[index]
        # x will have the shape [3, 224, 224]
        return x

    def __len__(self):
        # return number of images
        return len(self.data)
Since your loaded tensor has the shape [250, 250], indexing it in the first dimension will return a tensor with the shape [250]. |
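As a side note (my own sketch, not part of the reply above): if you really do want a Dataset holding a single 250x250 pair, adding a leading sample dimension makes __len__ equal to 1 and keeps each sample at 250x250:
import torch

img = torch.rand(250, 250)            # stand-in for torch.from_numpy(data_DF['MD'])
images = img.unsqueeze(0)             # shape [1, 250, 250]: one sample of 250x250
print(images.shape, images[0].shape)  # torch.Size([1, 250, 250]) torch.Size([250, 250])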
st81585 | Actually I don't have a single image. I have hundreds of images. I loaded just a single one because I am trying to learn how the Dataset class and DataLoader work.
I thought loading my values at each pixel (let's say pressure or velocity) instead of loading images would be more practical, but as I understand it, that can't be done with the shape [n, n]. In this matrix a nondimensional physical quantity over a domain is stored, so maybe I wouldn't have to deal with normalization later, instead of working with pixel values between 0-255.
Can you give me an example of the data format that I should pass to the Dataset, if it isn't too much for you? As I said before, I am new to PyTorch. Thank you in advance. |
st81586 | I’m not sure I understand your use case completely.
Rakuen:
I thought loading my values in each pixel (let’s say pressure or velocity) instead of loading image would be more practical but as i understand correctly it can’t be done with the shape of [n, n].
Would you like to load each pixel value individually and pass it as a single value to your model?
That would be possible, so let me know, if that’s really your use case.
Rakuen:
Can you give me an example of data format that i should pass dataset if it isn’t too much for you?
Sure! I would suggest to use my first example and pass a list of paths to your .mat files as the arguments to __init__:
class MyMatDataset(Dataset):
    def __init__(self, image_paths, target_paths):
        self.image_paths = image_paths
        self.target_paths = target_paths

    def __getitem__(self, index):
        x = scipy.io.loadmat(self.image_paths[index])
        x = torch.from_numpy(x['MD'])
        y = scipy.io.loadmat(self.target_paths[index])
        y = torch.from_numpy(y['z1'])
        return x, y

    def __len__(self):
        return len(self.image_paths)

# Create your data and target paths and make sure the order is right
image_paths = ['./ames01.mat', './ames02.mat', ...]
target_paths = ['./ames01_target.mat', './ames02_target.mat', ...]

dataset = MyMatDataset(image_paths, target_paths)
loader = DataLoader(dataset) |
st81587 | Consider I have a geometry and some pressure distribution around it. For each case the geometry changes, and so does the pressure distribution. I have solutions of the pressure distribution for each geometry. What I want to do is build an encoder-decoder model which predicts the pressure distribution for a new geometry when I want to. To do that I need two images in my dataset: one showing the geometry and the other one showing the pressure distribution around it, which acts as the ground truth.
As I said, I have solutions of the pressure for each geometry. I can plot them and generate images, but I already have the raw pressure data, which has the shape [n, n]. I can also have the geometry data as [n, n]. So I thought that I do not need to generate images if I have this raw data.
I am saying again, I am new to PyTorch:) If you have any suggestions about my study, or a tutorial or any other document, please share them with me. |
st81588 | In that case my example code should work.
Have a look at the Training a Classifier tutorial as it might give you a good starting point.
PS: Also, these tutorials are useful to get started with PyTorch |
st81589 | Yes, I was about to reply. It works fine. Thank you:)
I tried it by giving paths for two samples. It doesn't print the second one. It says index 1 is out of bounds, but I will deal with it:) Thanks again. |
st81590 | Solved by Oli in post #2
You don't have to, it's already in PyTorch! docs |
st81591 | My model does a pretty good job making inferences on images provided in the dataset that I used to train and test the model, but I was wondering if it is possible to get a prediction using an outside image. If so, I was hoping someone could get me started in the right direction on how to load and properly prepare the image. I am currently working with the MNIST dataset, by the way.
My notebook if interested:
https://colab.research.google.com/drive/16RFi_fERIyHMpiNXlGJb01faUsJWrkot |
st81592 | I want to build a model like
Model(
(rnn): LSTM(25088, 5, num_layers=2, batch_first=True)
(output_layer): Linear(in_features=5, out_features=2, bias=True)
)
and I know that when training an RNN-like model, we can use functions like
torch.nn.utils.rnn.pad_sequence()
torch.nn.utils.rnn.pack_padded_sequence()
torch.nn.utils.rnn.pad_packed_sequence()
My question is about forward() in the Model class:
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.rnn = nn.LSTM(25088, 5, 2, batch_first=True)
        self.output_layer = nn.Linear(5, 2)

    def forward(self, inputs, hidden_0=None):
        output, (hidden, cell) = self.rnn(inputs, hidden_0)
        output = self.output_layer(output)  # AttributeError: 'PackedSequence' object has no attribute 'dim'
I know I can get a packed sequence with pack_padded_sequence() and feed it to the first part of the model, self.rnn. But when it is fed to the next part, self.output_layer, it raises the error above. The point is that the output of self.rnn(inputs, hidden_0) is a PackedSequence carrying batch information, while self.output_layer, which is an nn.Linear with only (in_features, out_features, bias=True), expects a plain tensor. How can I train the model properly?
Thank you for your help! |
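One possible way around this (an editorial sketch with smaller dimensions, not an answer from the thread) is to unpack the LSTM output with pad_packed_sequence before the linear layer, since nn.Linear operates on the last dimension:
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

rnn = nn.LSTM(input_size=8, hidden_size=5, num_layers=2, batch_first=True)
output_layer = nn.Linear(5, 2)

x = torch.randn(3, 10, 8)           # (batch, max_len, features), already padded
lengths = torch.tensor([10, 7, 4])  # sorted in decreasing order
packed = pack_padded_sequence(x, lengths, batch_first=True)

packed_out, (h, c) = rnn(packed)    # packed_out is a PackedSequence
padded_out, out_lengths = pad_packed_sequence(packed_out, batch_first=True)
logits = output_layer(padded_out)   # (batch, max_len, 2)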
st81593 | I am attempting to create a model for halfway fusion using visual and thermal data. The following is the model:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 53 * 53, 157)
        self.fc2 = nn.Linear(157 + 157, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x1, x2):
        x1 = self.pool(F.relu(self.conv1(x1)))
        x1 = self.pool(F.relu(self.conv2(x1)))
        x1 = x1.view(x1.size(0), -1)
        x1 = F.relu(self.fc1(x1))
        x2 = self.pool(F.relu(self.conv1(x2)))
        x2 = self.pool(F.relu(self.conv2(x2)))
        x2 = x2.view(x2.size(0), -1)
        x2 = F.relu(self.fc1(x2))
        #print(x1.shape, x2.shape)
        x3 = torch.cat((x1, x2), dim=1)
        #print(x3.shape)
        x3 = F.relu(self.fc2(x3))
        x3 = self.fc3(x3)
        return x3
net = Net()

for epoch in range(1):
    running_loss = 0.0
    for i, vs_data in enumerate(vs_trainloader, 0):
        vs_images, vs_labels, vs_bbox = vs_data
        vs_images = Variable(vs_images).to(device)
        vs_labels = Variable(vs_labels).to(device)
        for i, th_data in enumerate(th_trainloader, 0):
            th_images, th_labels, th_bbox = th_data
            th_images = Variable(th_images).to(device)
            th_labels = Variable(th_labels).to(device)

            optimizer.zero_grad()
            outputs = net(vs_images.to(device), th_images.to(device))
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
            if i % 30 == 29:  # print every 30 mini-batches
                print('[%d, %5d] loss: %.3f' %
                      (epoch + 1, i + 1, running_loss / 30))
                running_loss = 0.0

print('Finished Training Trainset')
The issue I am having is with the loss = criterion(outputs, labels) line. How can I make sure that the loss is calculated properly for the two different datasets? Each image in both datasets has its own corresponding labels.
Any advice would be greatly appreciated. Thank you in advance. |
st81594 | If each dataset has its own label tensor the concatenation seems to be wrong.
Currently you are fusing the activations of both images so that your model has only a single output layer. Could you explain your use case a bit?
Maybe processing both images separately without the fusion would be the right approach? |
st81595 | I am trying to design a halfway fusion technique, where I fuse the features from the two pictures (visual and thermal) to build a classifier. The technique requires that the feature extraction takes place within the network, roughly like the architecture diagram attached to the original post.
I’m sorry if this is a bit confusing. I am new to Deep Learning and still learning to use PyTorch. |
st81596 | It seems the output should be bounding boxes for pedestrians using both images as inputs.
In that case you would only have the bbox output and a single loss function.
The gradients in the shared layers will be accumulated.
I think your approach should be right. |
st81597 | Sorry, I do not fully understand your reply. Could you, if possible, give me a basic example of what you mean? Or do you know of any existing examples that I could look at? |
st81598 | Sorry for not being clear enough.
What I meant is that your model architecture should generally work as you would expect.
As far as I understand your use case, you would like to predict bounding boxes using two different image modalities. I assume you only have one set of bounding boxes for each image pair, i.e. the boxes have the same location in the visible and IR image. Is that correct?
If so, all shared layers (Multispectral Pedestrian Detection layer, Multispectral Feature Fusion layer) will get gradients, which will be calculated using a single loss based on both image inputs.
The Visible and Infrared Feature Extraction layers will also get valid gradients corresponding to the different image modalities.
What confuses me is the statement “each image for both datasets will have its own corresponding labels”. Could you explain it a bit? Are the bounding box locations different for the visible and IR image?
Also, let me know if I completely misunderstood your use case. |
st81599 | Sorry, I must not have been clear when I stated my use case initially. I do have bounding boxes for each colour and thermal images in different csv files. So I have a separate csv file, one for the colour images and one for the thermal images. Each csv file contains the name and location of the image, the label and bounding box values.
So for example, if I have colour image1, the csv file will have its location and label. The thermal image1 will also have its own csv file with location and label information.
However, for what I'm attempting to design, I was going to use only the images and their corresponding labels, and add the bounding box values later.
So just to clarify, the information of the colour and thermal images have separate csv files. I hope that makes sense, and I hope I haven’t over-explained it. If it doesn’t make sense, perhaps I can post a snippet of the csv files? |
st81600 | Thanks for the explanation!
In that case I’m not sure the fusion layer would be the best approach.
If your use case would just involve a single target, I think the fused layer approach could be a good idea (at least in my opinion).
Both feature extraction blocks would work separately on the two image modalities, and the fusion layers would use these features to learn the label and later the bounding boxes. Basically the “classifier” part (fused layers) would be able to select the necessary features of the feature extractors.
However, you are apparently dealing with separate labels and bounding box coordinates.
If you feed the features from the feature extractors to the fused block, you would have to separate them afterwards to use two (or four) different output layers (label + bbox for both modalities). Alternatively you could try to use a multi-label approach and use a single large output layer, which should learn to predict both labels, but I’m not sure if that’s the best way here.
Does it make sense to you or did I misunderstand something? |
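To make the two-output-head idea concrete, here is a rough sketch (my own naming, mirroring the Net posted earlier and assuming 224x224 inputs), with one classification head per modality and one loss term per label set:
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusionNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # feature extractor layers shared by both modalities, as in the original Net
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 53 * 53, 157)
        self.fc2 = nn.Linear(157 + 157, 84)          # fusion layer
        self.head_vis = nn.Linear(84, num_classes)   # label head for the visible image
        self.head_th = nn.Linear(84, num_classes)    # label head for the thermal image

    def extract(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        return F.relu(self.fc1(x.view(x.size(0), -1)))

    def forward(self, x_vis, x_th):
        fused = F.relu(self.fc2(torch.cat((self.extract(x_vis), self.extract(x_th)), dim=1)))
        return self.head_vis(fused), self.head_th(fused)

# out_vis, out_th = net(vs_images, th_images)
# loss = criterion(out_vis, vs_labels) + criterion(out_th, th_labels)  # one loss per label set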
st81601 | Thanks so much for taking the time to understand my problem and for the advice you have given me. I will try your suggestion of the multi-label approach, as that makes more sense for what I am wanting to do. I will let you know how I get on.
Again, thank you for all your advice. |
st81602 | I know this problem was posted a while back and I am currently working on sensor/feature fusion. I used torch.cat to fuse the output features of certain layers of my network. However, I have recently seen an example where the fusion was achieved by simply adding the two output features. Something like this:
def forward(self, input):
    conv_1 = self.conv_1(input)
    conv_2 = self.conv_2(conv_1)
    res = conv_1 + conv_2
    conv_3 = self.conv_3(res)
    return conv_3
Could someone confirm which method is better for my problem? Thanks in advance. |
st81603 | I cannot iterate over train_dl when the dataset has been moved to the GPU.
train_dl = DataLoader(dataset, batch_size, sampler=train_sampler, pin_memory=True)
Working with MNIST data:
for images, labels in train_dl:
    outputs = model(images)
    loss = F.cross_entropy(outputs, labels)
    print("loss", loss.item())
    break
print(outputs.shape)
print(outputs[:10].data)
RuntimeError: cannot pin 'torch.cuda.ByteTensor' only dense CPU tensors can be pinned |
st81604 | Since your tensors are already CUDATensors, you cannot use pin_memory=True (this will use pinned host memory for fast data transfer).
The error message points rather to CPU tensors (with the additional requirement that they should be dense, not sparse). |
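For reference, a minimal sketch of the usual pattern (stand-in model and data): keep the dataset on the CPU, let the DataLoader pin the batches, and move each batch to the GPU inside the loop:
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(784, 10).to(device)  # stand-in model

# keep the dataset tensors on the CPU; pin_memory pins the CPU-side batches
cpu_dataset = TensorDataset(torch.rand(1000, 784), torch.randint(0, 10, (1000,)))
train_dl = DataLoader(cpu_dataset, batch_size=64, pin_memory=True)

for images, labels in train_dl:
    # asynchronous host-to-device copy from the pinned memory
    images = images.to(device, non_blocking=True)
    labels = labels.to(device, non_blocking=True)
    loss = F.cross_entropy(model(images), labels)
    break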
st81605 | How do I get the input noise vector to a generator to train, while freezing the generator weights? I've been trying to set requires_grad=True for the input, freezing the model weights, and training. However, the input as I print it out does not change over the course of training (nor does the model), so I'm clearly missing something.
Do I have to use the input as a parameter into the optimizer when I initialize the optimizer (like in neural style transfer), and if so, how do I do that with MSELoss? Or is there a simpler method?
# Get the noise vector
z = util.get_input_noise()  # vector of size 100 with values [0,1)
z = z.detach().requires_grad_()

# Freeze the model
for param in model.parameters():
    param.requires_grad = False
model.eval()  # Can I use the model in eval mode for this or does this do something weird with backprop?

# Training loop (simplified -- assume certain vars already initialized)
for i in range(n_iters):
    probs = model.forward(z)
    loss = torch.zeros(1, requires_grad=True).to(device)
    loss = loss_fn(probs, target)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
Thanks so much!
Related posts (did not work for me):
How can i train input (not weight)?
In GAN, i want to find latent vector z corresponding to the real image. One way to do this is to train z to minimize the error between sampled image and real image.
However, when i ran the code below, Error message appeared: “ValueError: can’t optimize a non-leaf Variable”.
targets # target images of shape (batch_size, 3, 64, 64)
z = Variable(torch.randn(batch_size, 100), requires_grad=True).cuda()
optim = torch.optim.Adam([z], lr=0.01) # This causes an error. (if i del…
Trainable parameter as input autograd
I can’t understand the difference:
prior = Variable(torch.zeros(24), requires_grad=True)
prior_param = [nn.Parameter(prior.data)]
h_test = torch.autograd.grad(outputs=prior.mean(), inputs=prior_param) #does not work
h_test = torch.autograd.grad(outputs=prior.mean(), inputs=prior) #works
I want to make a trainable parameter as an input to the computation graph.
The error is
RuntimeError: One of the differentiated Variables appears to not have been used in the graph
Neural Style transfer: https://pytorch.org/tutorials/advanced/neural_style_tutorial.html#sphx-glr-advanced-neural-style-tutorial-py |
st81606 | Solved by JuanFMontesinos in post #2
I'd say you have to pass the input to the optimizer. Are you doing so? |
st81607 | Yes, that’s what I was doing incorrectly - the optimizer needs to take in the input. |
st81608 | Would it be possible to use nn.CrossEntropyLoss to update the generator of a GAN? For example, if I have 5 classes and I want to update the generator based only on the results for a specific class I get from the discriminator. |
st81609 | I think it's similar to what Auxiliary Classifier GANs do.
arXiv.org
Conditional Image Synthesis With Auxiliary Classifier GANs
Synthesizing high resolution photorealistic images has been a long-standing
challenge in machine learning. In this paper we introduce new methods for the
improved training of generative adversarial networks (GANs) for image
synthesis. We construct a... |
st81610 | Hello. I want my output to be the same as the TensorFlow output, but I'm unable to figure out how the array behaves. Why does np.array behave this way, and what is the difference between the output of TensorFlow and PyTorch, despite both being tensors with dim = 10? Is there anything else I'm doing wrong?
tensorflow output = [1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
pytorch output = [1., 2., 3., 4., 5., 6., 7., 8., 9., 10.] |
st81611 | Solved by spanev in post #4
Correct, that’s what I meant. |
st81612 | Hi @muhibkhan,
They are both the same, of shape (10). The difference you see is just an aesthetic difference in the printing.
If you print the shape and type of tensor1 or tensor2 in PyTorch you will have:
torch.Size([10]), torch.float32
which is strictly equivalent to TensorFlow's TensorShape([Dimension(10)]) that you see. |
st81613 | So logically both outputs, like:
tensorflow output = [1. 2. 3. 4. 5. 6. 7. 8. 9. 10.]
pytorch output = [1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]
are the same. The comma separation (,) doesn't have any effect. |
st81614 | muhibkhan:
are the same. The comma separation (,) doesn't have any effect.
Correct, that's what I meant. |
st81615 | Hello everyone.
I made a simple network and tried to access its modules (just printing them for now).
This is the network I’m talking about:
def convlayer(input_dim, output_dim, kernel_size=3, stride=1, padding=1, batchnorm=False):
    layers = []
    conv = nn.Conv2d(input_dim, output_dim, kernel_size, stride, padding)
    layers.append(conv)
    if batchnorm:
        layers.append(nn.BatchNorm2d(output_dim))
    layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class sequential_net3_2(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            convlayer(3, 6, 3, stride=2),
            convlayer(6, 12, 3, stride=2, batchnorm=True),
            convlayer(12, 12, 3, stride=2, batchnorm=True)
        )
        self.classifer = nn.Linear(12, 2)

    def forward(self, x):
        output = self.features(x)
        output = output.view(x.size(0), -1)
        output = self.classifer(output)
        return output

sequential_net3_2 = sequential_net3_2()
for i, m in enumerate(sequential_net3_2.modules()):
    print(f'{i}, {m}')
I expected to see the modules I see when printing the model which is :
sequential_net3_2(
(features): Sequential(
(0): Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(1): Sequential(
(0): Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(2): Sequential(
(0): Conv2d(12, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
)
(classifer): Linear(in_features=12, out_features=2, bias=True)
)
But instead I got:
0, sequential_net3_2(
(features): Sequential(
(0): Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(1): Sequential(
(0): Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(2): Sequential(
(0): Conv2d(12, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
)
(classifer): Linear(in_features=12, out_features=2, bias=True)
)
1, Sequential(
(0): Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(1): Sequential(
(0): Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(2): Sequential(
(0): Conv2d(12, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
)
2, Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
3, Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
4, ReLU()
5, Sequential(
(0): Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
6, Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
7, BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
8, ReLU()
9, Sequential(
(0): Conv2d(12, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
10, Conv2d(12, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
11, BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
12, ReLU()
13, Linear(in_features=12, out_features=2, bias=True)
I was only expecting :
0, sequential_net3_2(
(features): Sequential(
(0): Sequential(
(0): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): ReLU()
)
(1): Sequential(
(0): Conv2d(6, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
(2): Sequential(
(0): Conv2d(12, 12, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
(1): BatchNorm2d(12, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU()
)
)
(classifer): Linear(in_features=12, out_features=2, bias=True)
)
and not all the combinations of the modules!
Is this a bug or is it just expected behavior? If it is expected behavior, what is the use of such spurious information?
Thank you all very much |
st81616 | Solved by Shisho_Sama in post #2
The answer is no. modules() was created for this very reason that is to recursively return all modules.
named_children() and children() are what you (me) want . they only return the top module as the name suggests.
for altering networks, nearly 99% of the times, one should use children() or named… |
st81617 | The answer is no. modules() was created for this very reason, that is, to recursively return all modules.
named_children() and children() are what you (me) want. They only return the top-level modules, as the name suggests.
For altering networks, nearly 99% of the time, one should use children() or named_children(). |
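A tiny illustration of the difference (my own toy model):
import torch.nn as nn

net = nn.Sequential(nn.Sequential(nn.Conv2d(3, 6, 3), nn.ReLU()), nn.Linear(6, 2))

print(len(list(net.modules())))   # 5: net itself + inner Sequential + Conv2d + ReLU + Linear
print(len(list(net.children())))  # 2: only the inner Sequential and the Linear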
st81618 | Hi! I recently found a strange problem when training a network with PyTorch.
When the model is trained, the loss does not decrease.
I'm not sure what is causing this.
Please allow me to show my code below.
This is my train.py where I run the training code.
import torch
import torch.nn as nn
import torch.optim as optim
from make_data import train_dataloader, test_dataloader
from make_net import Net
import time

num_epochs = 900
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
criterion = nn.MSELoss()

def train_model(model, my_criterion):
    since = time.time()
    train_loader = train_dataloader
    criterion = my_criterion
    net = model()
    opt_Adam = optim.Adam(net.parameters(), lr=0.2, betas=(0.9, 0.99))
    if torch.cuda.is_available():
        print("Let's use", torch.cuda.device_count(), "GPUs")
        net = nn.DataParallel(net)
    net.to(device)
    for epoch in range(num_epochs):
        running_loss = 0.0
        for i, sample in enumerate(train_loader, 0):
            image, pressure = sample['image'], sample['pressure']
            image = image.float()
            image = image.to(device)
            # image.shape torch.Size([256, 403, 640])
            # print("image.shape", image.shape)
            pressure = pressure.float()
            pressure = pressure.to(device)
            opt_Adam.zero_grad()
            output = net(pressure)
            # output.shape torch.Size([256, 403, 640])
            # print("output.shape", output.shape)
            loss = criterion(output, image)
            loss.backward()
            opt_Adam.step()
            running_loss += loss.item()
            if i % 10 == 9:
                # print every 10 mini-batches
                print("[%d, %5d], loss: %.3f" % (epoch + 1, i + 1, running_loss / 10))
                running_loss = 0.0
    print("Finished Training!")

train_model(Net, criterion)
And this is my net.py where I define my network
import torch
import torch.nn as nn
import torch.nn.functional as F
from make_ops import conv_out_size_same
from make_data import batch_size
s_h, s_w = 403, 640
# 403,640
s_h2, s_w2 = conv_out_size_same(s_h, 2), conv_out_size_same(s_w, 2)
# 202, 320
s_h4, s_w4 = conv_out_size_same(s_h2, 2), conv_out_size_same(s_w2, 2)
# 101, 160
s_h8, s_w8 = conv_out_size_same(s_h4, 2), conv_out_size_same(s_w4, 2)
# 51, 80
s_h16, s_w16 = conv_out_size_same(s_h8, 2), conv_out_size_same(s_w8, 2)
# 25, 40
s_h32, s_w32 = conv_out_size_same(s_h16, 2), conv_out_size_same(s_w16, 2)
# 12, 20
s_h64, s_w64 = conv_out_size_same(s_h32, 2), conv_out_size_same(s_w32, 2)
# 6, 10
s_h128, s_w128 = conv_out_size_same(s_h64, 2), conv_out_size_same(s_w64, 2)
# 3,5
s_h256, s_w256 = conv_out_size_same(s_h128, 2), conv_out_size_same(s_w128, 2)
# 2, 3
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.CONV1_DEPTH = 2
self.CONV2_DEPTH = 4
self.CONV3_DEPTH = 8
self.CONV4_DEPTH = 16
self.CONV5_DEPTH = 32
self.CONV6_DEPTH = 64
self.CONV7_DEPTH = 128
self.CONV8_DEPTH = 256
self.f_dim = 32
self.channel_dim = 1
self.FC_NODE = 512
self.IMG_HEIGHT = 403
self.IMG_WIDTH = 640
self.batch_size = batch_size
self.fc1 = nn.Linear(in_features=10, out_features=self.f_dim*8)
self.fc2 = nn.Linear(in_features=self.f_dim*8, out_features=self.f_dim*8*s_w256*s_h256)
self.fc3 = nn.Linear(in_features=self.CONV8_DEPTH*s_h256*s_w256, out_features=self.FC_NODE)
self.fc4 = nn.Linear(in_features=self.FC_NODE, out_features=self.IMG_HEIGHT * self.IMG_WIDTH)
self.avg_pool = nn.AdaptiveAvgPool2d((s_h256, s_w256))
self.deconv1 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim*8, out_channels=self.f_dim*4,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.f_dim*4),
nn.ELU()
)
self.deconv2 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim * 4, out_channels=self.f_dim * 2,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.f_dim * 2),
nn.ELU()
)
self.deconv3 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim * 2, out_channels=self.f_dim,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.f_dim),
nn.ELU()
)
self.deconv4 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim, out_channels=self.f_dim//2,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.f_dim//2),
nn.ELU()
)
self.deconv5 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim//2, out_channels=self.f_dim//4,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.f_dim//4),
nn.ELU()
)
self.deconv6 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim//4, out_channels=self.f_dim//8,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.f_dim//8),
nn.ELU()
)
self.deconv7 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim//8, out_channels=self.f_dim//16,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.f_dim//16),
nn.ELU()
)
self.deconv8 = nn.Sequential(
nn.ConvTranspose2d(in_channels=self.f_dim//16, out_channels=self.channel_dim,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.channel_dim),
nn.Tanh()
)
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels=self.channel_dim, out_channels=self.CONV1_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV1_DEPTH),
nn.ELU()
)
self.conv2 = nn.Sequential(
nn.Conv2d(in_channels=self.CONV1_DEPTH, out_channels=self.CONV2_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV2_DEPTH),
nn.ELU()
)
self.conv3 = nn.Sequential(
nn.Conv2d(in_channels=self.CONV2_DEPTH, out_channels=self.CONV3_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV3_DEPTH),
nn.ELU()
)
self.conv4 = nn.Sequential(
nn.Conv2d(in_channels=self.CONV3_DEPTH, out_channels=self.CONV4_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV4_DEPTH),
nn.ELU()
)
self.conv5 = nn.Sequential(
nn.Conv2d(in_channels=self.CONV4_DEPTH, out_channels=self.CONV5_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV5_DEPTH),
nn.ELU()
)
self.conv6 = nn.Sequential(
nn.Conv2d(in_channels=self.CONV5_DEPTH, out_channels=self.CONV6_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV6_DEPTH),
nn.ELU()
)
self.conv7 = nn.Sequential(
nn.Conv2d(in_channels=self.CONV6_DEPTH, out_channels=self.CONV7_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV7_DEPTH),
nn.ELU()
)
self.conv8 = nn.Sequential(
nn.Conv2d(in_channels=self.CONV7_DEPTH, out_channels=self.CONV8_DEPTH,
kernel_size=2, stride=2),
nn.BatchNorm2d(num_features=self.CONV8_DEPTH),
nn.ELU()
)
self.layer = nn.Sequential(
nn.BatchNorm2d(num_features=self.f_dim*8),
nn.ELU()
)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
x = x.view(-1, self.f_dim*8, s_h256, s_w256)
x = self.layer(x)
x = self.deconv1(x)
x = self.deconv2(x)
x = self.deconv3(x)
x = self.deconv4(x)
x = self.deconv5(x)
x = self.deconv6(x)
x = self.deconv7(x)
x = self.deconv8(x)
x = self.conv1(x)
x = self.conv2(x)
x = self.conv3(x)
x = self.conv4(x)
x = self.conv5(x)
x = self.conv6(x)
x = self.conv7(x)
x = self.conv8(x)
x = self.avg_pool(x)
x = x.view(-1, self.num_flat_features(x))
x = F.elu(self.fc3(x))
x = F.elu(self.fc4(x))
x = x.view(-1, self.IMG_HEIGHT, self.IMG_WIDTH)
return x
def num_flat_features(self, x):
size = x.size()[1:]
num_features = 1
for s in size:
num_features *= s
return num_features
When I run the training code, I got the loss values as shown below.
[1, 10], loss: 27806.338
[1, 20], loss: 1088497.401
[1, 30], loss: 2364.557
[1, 40], loss: 2366.722
[1, 50], loss: 2368.215
[1, 60], loss: 2370.851
[1, 70], loss: 2365.583
[1, 80], loss: 2366.041
[2, 10], loss: 2368.178
[2, 20], loss: 2363.056
[2, 30], loss: 2374.572
[2, 40], loss: 2361.862
[2, 50], loss: 2364.390
[2, 60], loss: 2366.633
[2, 70], loss: 2372.771
[2, 80], loss: 2362.416
[3, 10], loss: 2369.942
[3, 20], loss: 2367.277
Sooo confused !!!
I would appreciate it if you could put forward some suggestions on this question.
Thank you very much!!!
Wish you a happy life!! |
st81619 | Solved by ptrblck in post #2
Try to play around with some hyperparameters, e.g. lowering the learning rate. |
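For instance, Adam's default learning rate (1e-3) is far below the lr=0.2 used above; a quick thing to try (a sketch, not a guaranteed fix) is a drop-in change inside train_model():
opt_Adam = optim.Adam(net.parameters(), lr=1e-3, betas=(0.9, 0.99))  # much lower than lr=0.2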
st81620 | Hi, I want to read .mat files in Jupyter and convert them to tensors. I use scipy.io, but I can't convert to a tensor.
import scipy.io as spio
import torch
import numpy as np

data = spio.loadmat('train.mat')
np_data = np.array(data)
tensor_data = torch.Tensor(np_data)
but the result is
TypeError Traceback (most recent call last)
in
----> 1 tensor_data = torch.Tensor(np_data)
TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, int64, int32, int16, int8, uint8, and bool.
What should I do? |
st81621 | What is currently stored in np_data?
It looks like it’s stored as a numpy.object_, which happens if not all elements have the same dtype.
Dummy example:
x = np.array([1.0, np.sum])
print(x)
> array([1.0, <function sum at 0x7fb77bcdbd08>], dtype=object) |
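For reference, scipy.io.loadmat returns a dict of variables, so converting the dict itself produces an object array. A minimal sketch of the usual fix (the key 'X' is hypothetical; use the variable name stored in your file):
import scipy.io as spio
import numpy as np
import torch

data = spio.loadmat('train.mat')  # a dict mapping variable names to arrays
print(data.keys())                # find the variable name, e.g. 'X'
np_data = np.asarray(data['X'], dtype=np.float32)  # pick the array, not the dict
tensor_data = torch.from_numpy(np_data)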
st81622 | Hi, I am going to use torch 0.3.1 but I don't know how to install torchvision 0.2.0 |
st81623 | Hello, did you solve the problem? I just want to use torch 0.4.1, but when I use "pip install torchvision", it automatically installs torchvision 0.4.0 and updates my PyTorch to the latest version (1.2.0). |
st81624 | Hi everyone, I built a simple linear NN but the output of this network is not the same as the desired result.
I got this result from my network
this is my loss graph
this is the desired result
My code is:
class Module(nn.Module):
    def __init__(self, D_in, H1, H2, D_out):
        super().__init__()
        self.linear1 = nn.Linear(D_in, H1)
        self.linear2 = nn.Linear(H1, H2)
        self.linear3 = nn.Linear(H2, D_out)

    def forward(self, x):
        x = F.relu(self.linear1(x))
        x = F.relu(self.linear2(x))
        x = self.linear3(x)
        return x

train_dataset = TensorDataset(train_x, train_y)
train_generator = DataLoader(train_dataset, batch_size=32, shuffle=False)
valid_dataset = TensorDataset(val_x, val_y)
valid_generator = DataLoader(valid_dataset, batch_size=32)

model = Module(3, 27, 11, 1)
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

for e in range(epochs):
    running_loss = 0.0
    running_corrects = 0.0
    val_running_loss = 0.0
    val_running_corrects = 0.0
    for inputs, out in train_generator:
        print(out.size())
        inputs = inputs.to(device)
        out = out.to(device)
        output = model(inputs)
        new_output = torch.squeeze(output)
        print("input size: ", inputs.size())
        loss = criterion(new_output, out)
        print("output size: ", output.size())
        print("out size: ", out.size())
        preds, _ = torch.max(new_output, 1)
        outputss.append(preds.max().detach().numpy())
        losses.append(loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        #outputss.append(outputs.detach().numpy())
        #print(loss.item())
    else:
        with torch.no_grad():
            for val_inputs, val_labels in valid_generator:
                #val_inputs = val_inputs.view(val_inputs.shape[0], -1)
                val_inputs = val_inputs.to(device)
                val_labels = val_labels.to(device)
                val_outputs = model(val_inputs)
                val_loss = criterion(val_outputs, val_labels)
                val_preds, _ = torch.max(val_outputs, 1)
                val_running_loss_history.append(val_loss)
                val_running_corrects_history.append(val_preds.max().detach().numpy())
If you can help me, I will be very thankful. |
st81625 | What kind of input data are you using?
Based on the loss curve it looks like your model is still training and the loss goes down.
Have you seen a plateau after a while? |
st81626 | The loss curve is normal; the result curve is abnormal.
I think that the distribution of the train and valid datasets is different. |
st81627 | Hi, if we say that the dataset is 100x, then the train set is 85x and the valid set is 15x, so actually they are not so different |
st81628 | Thanks for the information. I’m still unsure, why you’ve stopped the training when the loss was still going down. Did you try to train it a bit longer and check the results? |
st81629 | Of course, the number of samples in the train and valid sets is different.
My point is whether the distribution of the two is the same.
You can randomly split the dataset into train and valid sets, then train your model, and observe the results. |
st81630 | I thought the validation set size should be smaller than the training set? If I am wrong, can you explain briefly?
Regards, |
st81631 | Yeah, the train set size should be bigger than the validation set size.
What I mean is that you can randomly split the dataset into train and valid sets according to a ratio, e.g., 8:2.
Then, train your model and observe the results. |
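A small sketch of such a random split (stand-in data) using torch.utils.data.random_split:
import torch
from torch.utils.data import TensorDataset, random_split

dataset = TensorDataset(torch.randn(100, 3), torch.randn(100, 1))  # stand-in data
n_train = int(0.8 * len(dataset))
train_set, valid_set = random_split(dataset, [n_train, len(dataset) - n_train])
print(len(train_set), len(valid_set))  # 80 20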
st81632 | I am using transforms.Normalize to normalize chest x-rays before feeding them into a DenseNet. I've noticed that some of these images have a large white square present somewhere in the image, and when this happens the normalized image ends up very dark. Aside from these random white squares, the images should have a pretty normal distribution… so I'm assuming histograms of these have a normal distribution and then a very high, very narrow peak around 255. Is there a way with transforms.Normalize to ignore these abnormal peak values? |
st81633 | Hi @bkonk ,
When using transforms.Normalize, you are the one specifying the mean and std for each image. How do you compute it per sample? |
st81634 | Thanks for the reply @spanev,
This is branching off topic a bit, but I’m a little unclear on how to set the means and stds. I initially went with those used in Chexnet (densenet for a huge x-ray database) as I thought these would translate well, but the images I was viewing (with matplotlib.pyplot…maybe this distorts things a bit?) were looking very dark or very saturated. My images are 0 to 255 value .png files. I tried loading them, converting to a -1 to 1 range, and then grabbing the means and stds across all images, but this gave even worse looking results. I similarly opened the pngs as numpy arrays with cv2, and tried changing those white-box values (usually 1 or 255 depending on range) to whatever the mean value was of the other pixels and then re-saving. This seemed to help a bit when it came to previewing those white-box images (i.e., they weren’t super dark except for the box) but doesn’t seem like a great solution. |
st81635 | I read the source code of PyTorch, and I understand the autograd of functions like relu, sigmoid, and so on. All those functions have a forward and a backward function. But I can't find the backward function of conv2d. I want to know how PyTorch does the backward pass of conv2d. |
st81636 | gpu thnn path: https://github.com/pytorch/pytorch/blob/bc7a41af7d541e64f8b8f7318a7a2248c0119632/aten/src/THCUNN/generic/SpatialConvolutionMM.cu#L211-L486 (first method is grad_input, and second is grad_weight)
cpu thnn path is in a similar place.
cudnn path: https://github.com/pytorch/pytorch/blob/bc7a41af7d541e64f8b8f7318a7a2248c0119632/aten/src/ATen/native/cudnn/Conv.cpp#L1048 (less informative as it just calls cudnn) |
st81637 | Thanks for your response! I also have a question: is there any helpful documentation for reading the source code of PyTorch? I ask because I want to quantize the weights or feature maps of a neural
network, such as BWN, BNN, and so on. |
st81638 | I doubt this code will be helpful in achieving that. Can you do it directly from the Python side using conv_layer.weight etc.? |
st81639 | I found an example BNN project in PyTorch, but I am confused about the autograd process. I just want to learn the BNN algorithm in PyTorch. I have used the darknet framework to learn the BNN algorithm; it uses temporary binarized weights for the forward pass, then backpropagates to update the float weights.
github.com
itayhubara/BinaryNet.pytorch/blob/master/models/binarized_modules.py
import torch
import pdb
import torch.nn as nn
import math
from torch.autograd import Variable
from torch.autograd import Function
import numpy as np

def Binarize(tensor, quant_mode='det'):
    if quant_mode == 'det':
        return tensor.sign()
    else:
        return tensor.add_(1).div_(2).add_(torch.rand(tensor.size())).clamp_(0, 1).mul_(2).add_(-1)

class HingeLoss(nn.Module):
This file has been truncated. |
st81640 | You can think of autograd as reversely traversing a graph and calculating gradients for the leaf nodes, or even simpler as a blackbox algorithm for calculating gradients of a function. |
st81641 | I think this is what you’re looking for:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Function
from torch.nn.modules.utils import _pair
from torch.nn import init
import math
import numpy as np
class Conv2dShiftFunction(Function):
# Note that both forward and backward are @staticmethods
@staticmethod
# bias is an optional argument
def forward(ctx, input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1):
output = F.conv2d(input, weight, bias, stride, padding, dilation, groups)
ctx.save_for_backward(input, weight, bias)
ctx.stride = stride
ctx.padding = padding
ctx.dilation = dilation
ctx.groups = groups
return output
# This function has only a single output, so it gets only one gradient
@staticmethod
def backward(ctx, grad_output):
# This is a pattern that is very convenient - at the top of backward
# unpack saved_tensors and initialize all gradients w.r.t. inputs to
# None. Thanks to the fact that additional trailing Nones are
# ignored, the return statement is simple even when the function has
# optional inputs.
input, weight, bias = ctx.saved_tensors
stride = ctx.stride
padding = ctx.padding
dilation = ctx.dilation
groups = ctx.groups
grad_input = grad_weight = grad_bias = None
# These needs_input_grad checks are optional and there only to
# improve efficiency. If you want to make your code simpler, you can
# skip them. Returning gradients for inputs that don't require it is
# not an error.
if ctx.needs_input_grad[0]:
grad_input = torch.nn.grad.conv2d_input(input.shape, weight, grad_output, stride, padding, dilation, groups)
if ctx.needs_input_grad[1]:
grad_weight = torch.nn.grad.conv2d_weight(input, weight.shape, grad_output, stride, padding, dilation, groups)
if bias is not None and ctx.needs_input_grad[2]:
grad_bias = grad_output.sum((0,2,3)).squeeze(0)
return grad_input, grad_weight, grad_bias, None, None, None, None
class _ConvNdShift(nn.Module):
__constants__ = ['stride', 'padding', 'dilation', 'groups', 'bias', 'padding_mode']
def __init__(self, in_channels, out_channels, kernel_size, stride,
padding, dilation, transposed, output_padding,
groups, bias, padding_mode, check_grad=False):
super(_ConvNdShift, self).__init__()
if in_channels % groups != 0:
raise ValueError('in_channels must be divisible by groups')
if out_channels % groups != 0:
raise ValueError('out_channels must be divisible by groups')
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.stride = stride
self.padding = padding
self.dilation = dilation
self.transposed = transposed
self.output_padding = output_padding
self.groups = groups
self.padding_mode = padding_mode
if check_grad:
tensor_constructor = torch.DoubleTensor # double precision required to check grad
else:
tensor_constructor = torch.Tensor # In PyTorch torch.Tensor is alias torch.FloatTensor
if transposed:
self.weight = nn.Parameter(tensor_constructor(
in_channels, out_channels // groups, *kernel_size))
else:
self.weight = nn.Parameter(tensor_constructor(
out_channels, in_channels // groups, *kernel_size))
if bias:
self.bias = nn.Parameter(tensor_constructor(out_channels))
else:
self.register_parameter('bias', None)
self.reset_parameters()
def reset_parameters(self):
init.kaiming_uniform_(self.weight, a=math.sqrt(5))
self.shift.data, self.sign.data = get_shift_and_sign(self.weight)
if self.bias is not None:
fan_in, _ = init._calculate_fan_in_and_fan_out(self.shift)
bound = 1 / math.sqrt(fan_in)
init.uniform_(self.bias, -bound, bound)
def extra_repr(self):
s = ('{in_channels}, {out_channels}, kernel_size={kernel_size}'
', stride={stride}')
if self.padding != (0,) * len(self.padding):
s += ', padding={padding}'
if self.dilation != (1,) * len(self.dilation):
s += ', dilation={dilation}'
if self.output_padding != (0,) * len(self.output_padding):
s += ', output_padding={output_padding}'
if self.groups != 1:
s += ', groups={groups}'
if self.bias is None:
s += ', bias=False'
return s.format(**self.__dict__)
class Conv2dShift(_ConvNdShift):
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
padding=0, dilation=1, groups=1,
bias=True, padding_mode='zeros', check_grad=False):
kernel_size = _pair(kernel_size)
stride = _pair(stride)
padding = _pair(padding)
dilation = _pair(dilation)
super(Conv2dShift, self).__init__(
in_channels, out_channels, kernel_size, stride, padding, dilation,
False, _pair(0), groups, bias, padding_mode)
#@weak_script_method
def forward(self, input):
if self.padding_mode == 'circular':
expanded_padding = ((self.padding[1] + 1) // 2, self.padding[1] // 2,
(self.padding[0] + 1) // 2, self.padding[0] // 2)
return Conv2dShiftFunction.apply(F.pad(input, expanded_padding, mode='circular'),
self.weight, self.bias, self.stride,
_pair(0), self.dilation, self.groups)
else:
return Conv2dShiftFunction.apply(input, self.weight, self.bias, self.stride,
self.padding, self.dilation, self.groups) |
st81642 | My imshow function accepts a tensor of shape (1, 28, 28), but my dataloader is returning a tensor of shape (1000, 1, 28, 28). The "1000" seems to be the value of the batch_size that I specify in the loader. Is there any way to fix this? |
st81643 | Solved by ptrblck in post #2
You could index the specific image tensor you would like to visualize:
my_imshow(data[0]) |
st81644 | You could index the specific image tensor you would like to visualize:
my_imshow(data[0]) |
st81645 | I have a network like
x --> fc1(x) --> outputx
y --> fc2(y) --> outputy
So, when I define the network, can I use the same name for fc1 and fc2, like
class network(nn.Module):
    def __init__(self):
        self.fc = nn.Linear(1,2)
    def forward(x,y)
        output1 = self.fc(x)
        output2 = self.fc(y)
Or do I have to separate it into two layers, such as
class network(nn.Module):
    def __init__(self):
        self.fc1 = nn.Linear(1,2)
        self.fc2 = nn.Linear(1,2)
    def forward(x,y)
        output1 = self.fc1(x)
        output2 = self.fc2(y)
Thanks |
st81646 | Solved by ptrblck in post #2
Both models define valid forward methods (besides the missing return statement) and it depends on your use case, which one is correct.
If you would like to apply the same parameters for both inputs (weight sharing), the first definition is correct, while the second model will use two different laye… |
st81647 | Both models define valid forward methods (besides the missing return statement) and it depends on your use case, which one is correct.
If you would like to apply the same parameters for both inputs (weight sharing), the first definition is correct, while the second model will use two different layers and parameter sets.
Based on the first pseudo code, it seems you would like to use two separate layers, so the second model definition should be right. |
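A tiny illustration of the difference (my own sketch): the shared version holds one parameter set, the separate version two:
import torch.nn as nn

class SharedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(1, 2)         # one layer, applied to both inputs

    def forward(self, x, y):
        return self.fc(x), self.fc(y)     # weight sharing

class SeparateNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(1, 2)
        self.fc2 = nn.Linear(1, 2)

    def forward(self, x, y):
        return self.fc1(x), self.fc2(y)

print(sum(p.numel() for p in SharedNet().parameters()))    # 4
print(sum(p.numel() for p in SeparateNet().parameters()))  # 8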
st81648 | Hello!
Is there a way to sample efficiently from a large collection of indexes (1M~) without repetition? I was using the function multinomial(torch.ones(length), num_samples, replacement=False) but it is taking too much time to generate the samples.
Thanks! |
st81649 | Solved by LeviViana in post #2
This PR implements torch.choice, it accelerates sampling from multinomial distributions.
As a second option, this repo does basically the same thing, but it is older than the PR. |
st81650 | This PR 2 implements torch.choice, it accelerates sampling from multinomial distributions.
As a second option, this repo does basically the same thing, but it is older than the PR. |
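As an aside (not from the thread): for the uniform-weights case in the question (torch.ones), torch.randperm also samples without replacement and is usually fast:
import torch

n, k = 1000000, 10000
idx = torch.randperm(n)[:k]  # k distinct indices drawn uniformly from range(n)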
st81651 | (pytorch) quoniammm@quoniammm:~/version-control/DrQA$ python
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:51:32)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
>>>
Why can't I import torch like this?
I can use torch in a notebook.
What's the reason for it? |
st81652 | Are you using Anaconda? In that case you’ve probably forgotten to activate the environment where pytorch is installed. It can also be the library missing in your PYTHONPATH variable. |
st81653 | Yes,I use it.The pytorch is the name of env.When I use pytorch in notebook it’s ok.However,when it is in the terminal.The problem occured.I do not know why. |
st81654 | Then it seems that pytorch is not installed in that environment.
How did you install it? |
st81655 | While running conda install -c peterjc123 pytorch=0.1.12 I got an error saying PackagesNotFoundError: The following packages are not available from current channels:
pytorch=0.1.12
Current channels:
https://conda.anaconda.org/peterjc123/win-64
https://conda.anaconda.org/peterjc123/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/win-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
After that I tried the commands below
conda create --name pytorch
activate pytorch
conda install pytorch cuda92 -c pytorch
and got the following error
active environment : pytorch
active env location : C:\Users\Vineeth\Anaconda3\envs\pytorch
shell level : 2
user config file : C:\Users\Vineeth\.condarc
populated config files : C:\Users\Vineeth\.condarc
conda version : 4.6.2
conda-build version : not installed
python version : 3.6.8.final.0
base environment : C:\Users\Vineeth\Anaconda3 (writable)
channel URLs : https://conda.anaconda.org/pytorch/win-64
https://conda.anaconda.org/pytorch/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/free/win-64
https://repo.anaconda.com/pkgs/free/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : C:\Users\Vineeth\Anaconda3\pkgs
C:\Users\Vineeth\.conda\pkgs
C:\Users\Vineeth\AppData\Local\conda\conda\pkgs
envs directories : C:\Users\Vineeth\Anaconda3\envs
C:\Users\Vineeth\.conda\envs
C:\Users\Vineeth\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.6.2 requests/2.21.0 CPython/3.6.8 Windows/10 Windows/10.0.16299
administrator : False
netrc file : None
offline mode : False
How do I install torch in Anaconda? Is it different from pytorch?
What is the command to install torch? |
st81656 | PyTorch 0.1.12 is really old by now and I would recommend installing the current stable release (1.0).
Have a look at the website for all install commands (including for Windows). |
st81657 | Could you update conda using conda update conda and try the install steps again? |
st81658 | I updated conda and got this result:
(base) C:\Users\Vineeth>conda update conda
Collecting package metadata: done
Solving environment: done
All requested packages already installed.
After that I ran the command below
conda install pytorch torchvision -c pytorch
Got the same error as before |
st81659 | That’s strange.
Based on the logs you’ve provided it looks like you are using:
Windows 10 - 64bit
Python 3.6
Is that correct?
I don’t have a Windows machine here, so I can’t test it right now. |
st81660 | You might have missed setting the environment option -n pytorch.
conda install -n pytorch pytorch cuda92 -c pytorch |
st81661 | As written in Bug Fix Release, cudatoolkit=9.0 must be used instead of cuda92.
conda install -n pytorch pytorch torchvision cudatoolkit=9.0 -c pytorch |
st81662 | I have also tried this command:
conda install pytorch cudatoolkit=10.0 -c pytorch |
st81663 | You can check installed packages in the environment pytorch:
conda list -n pytorch
Post the package list. |
st81664 | (base) C:\Users\Vineeth>conda list -n pytorch
packages in environment at C:\Users\Vineeth\Anaconda3\envs\pytorch:
Name Version Build Channel
(base) C:\Users\Vineeth>conda list
packages in environment at C:\Users\Vineeth\Anaconda3:
Name Version Build Channel
_license 1.1 py36_1
alabaster 0.7.10 py36_0
anaconda custom py36h363777c_0
anaconda-client 1.6.3 py36_0
anaconda-navigator 1.6.2 py36_0
anaconda-project 0.6.0 py36_0
asn1crypto 0.22.0 py36_0
astroid 1.4.9 py36_0
astropy 1.3.2 np112py36_0
babel 2.4.0 py36_0
backports 1.0 py36_0
beautifulsoup4 4.6.0 py36_0
bitarray 0.8.1 py36_1
blas 1.0 mkl
bleach 1.5.0 py36_0
boto 2.46.1 py36_0
boto3 1.6.18 py36_0
botocore 1.9.18 py36_0
bottleneck 1.2.1 np112py36_0
bz2file 0.98 py36_0
bzip2 1.0.6 hfa6e2cd_5
ca-certificates 2018.12.5 0
certifi 2018.11.29 py36_0
cffi 1.10.0 py36_0
chardet 3.0.3 py36_0
click 6.7 py36_0
cloudpickle 0.2.2 py36_0
clyent 1.2.2 py36_0
colorama 0.3.9 py36_0
comtypes 1.1.2 py36_0
conda 4.6.2 py36_0
conda-env 2.6.0 h36134e3_1
console_shortcut 0.1.1 py36_1
contextlib2 0.5.5 py36_0
cryptography 2.4.2 py36h7a1dbc1_0
curl 7.63.0 h2a8f88b_1000
cycler 0.10.0 py36_0
cython 0.25.2 py36_0
cytoolz 0.8.2 py36_0
decorator 4.0.11 py36_0
docutils 0.13.1 py36_0
entrypoints 0.2.2 py36_1
et_xmlfile 1.0.1 py36_0
fastcache 1.0.2 py36_1
flask 0.12.2 py36_0
flask-cors 3.0.2 py36_0
flask-wtf 0.14.2 py36_0
freetype 2.9.1 ha9979f8_1
get_terminal_size 1.0.0 py36_0
gevent 1.2.1 py36_0
greenlet 0.4.12 py36_0
h5py 2.7.0 np112py36_0
hdf5 1.10.4 h7ebc959_0
heapdict 1.0.0 py36_1
html5lib 0.999 py36_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
idna 2.5 py36_0
imagesize 0.7.1 py36_0
intel-openmp 2019.1 144
ipykernel 4.6.1 py36_0
ipython 5.3.0 py36_0
ipython_genutils 0.2.0 py36_0
ipywidgets 6.0.0 py36_0
isort 4.2.5 py36_0
itsdangerous 0.24 py36_0
jdcal 1.3 py36_0
jedi 0.10.2 py36_2
jinja2 2.9.6 py36_0
jmespath 0.9.3 py36h0745840_0
jpeg 9b hb83a4c4_2
jsonschema 2.6.0 py36_0
jupyter 1.0.0 py36_3
jupyter_client 5.0.1 py36_0
jupyter_console 5.1.0 py36_0
jupyter_core 4.3.0 py36_0
krb5 1.16.1 hc04afaa_7
lazy-object-proxy 1.2.2 py36_0
libcurl 7.63.0 h2a8f88b_1000
libpng 1.6.36 h2a8f88b_0
libssh2 1.8.0 h7a1dbc1_4
libtiff 4.0.10 h2929a5b_1001
llvmlite 0.18.0 py36_0
locket 0.2.0 py36_1
lxml 3.7.3 py36_0
markupsafe 0.23 py36_2
matplotlib 2.0.2 np112py36_0
menuinst 1.4.14 py36hfa6e2cd_0
mistune 0.7.4 py36_0
mkl 2019.1 144
mkl-service 1.1.2 py36_3
mkl_fft 1.0.10 py36h14836fe_0 anaconda
mkl_random 1.0.2 py36h343c172_0
mpmath 0.19 py36_1
msgpack-python 0.4.8 py36_0
multipledispatch 0.4.9 py36_0
navigator-updater 0.1.0 py36_0
nbconvert 5.1.1 py36_0
nbformat 4.3.0 py36_0
networkx 1.11 py36_0
ninja 1.8.2 py36he980bc4_1
nltk 3.2.3 py36_0
nose 1.3.7 py36_1
notebook 5.0.0 py36_0
numba 0.33.0 np112py36_0
numexpr 2.6.2 np112py36_0
numpy 1.15.4 py36h19fb1c0_0 anaconda
numpy-base 1.15.4 py36hc3f5095_0
numpydoc 0.6.0 py36_0
olefile 0.44 py36_0
openpyxl 2.4.7 py36_0
openssl 1.1.1a he774522_0
packaging 16.8 py36_0
pandas 0.20.1 np112py36_0
pandocfilters 1.4.1 py36_0
partd 0.3.8 py36_0
path.py 10.3.1 py36_0
pathlib2 2.2.1 py36_0
pep8 1.7.0 py36_0
pickleshare 0.7.4 py36_0
pillow 5.4.1 py36hdc69c19_0
pip 9.0.1 py36_1
ply 3.10 py36_0
prompt_toolkit 1.0.14 py36_0
psutil 5.2.2 py36_0
py 1.4.33 py36_0
pycodestyle 2.3.1 py36h7cc55cd_0
pycosat 0.6.3 py36h413d8a4_0
pycparser 2.17 py36_0
pycrypto 2.6.1 py36_6
pycurl 7.43.0.2 py36h7a1dbc1_0
pyflakes 1.5.0 py36_0
pygments 2.2.0 py36_0
pylint 1.6.4 py36_1
pyodbc 4.0.16 py36_0
pyopenssl 17.0.0 py36_0
pyparsing 2.1.4 py36_0
pyqt 5.9.2 py36h6538335_2
pysocks 1.6.8 py36_0
pytables 3.2.2 np112py36_4
pytest 3.0.7 py36_0
python 3.6.8 h9f7ef89_0
python-dateutil 2.6.0 py36_0
pytz 2017.2 py36_0
pywavelets 0.5.2 np112py36_0
pywin32 220 py36_2
pyyaml 3.12 py36_0
pyzmq 16.0.2 py36_0
qt 5.9.7 vc14h73c81de_0
qtawesome 0.4.4 py36_0
qtconsole 4.3.0 py36_0
qtpy 1.2.1 py36_0
requests 2.21.0 py36_0
rope 0.9.4 py36_1
ruamel_yaml 0.11.14 py36_1
s3transfer 0.1.13 py36_0
setuptools 40.6.3 py36_0
simplegeneric 0.8.1 py36_1
singledispatch 3.4.0.3 py36_0
sip 4.19.8 py36h6538335_0
six 1.10.0 py36_0
smart_open 1.5.7 py36_0
snowballstemmer 1.2.1 py36_0
sortedcollections 0.5.3 py36_0
sortedcontainers 1.5.7 py36_0
sphinx 1.5.6 pypi_0 pypi
spyder 3.2.8 py36_0
sqlalchemy 1.1.9 py36_0
sqlite 3.26.0 he774522_0
sympy 1.0 py36_0
tblib 1.3.2 py36_0
testpath 0.3 py36_0
tk 8.6.8 hfa6e2cd_0
toolz 0.8.2 py36_0
tornado 4.5.1 py36_0
traitlets 4.3.2 py36_0
unicodecsv 0.14.1 py36_0
urllib3 1.24.1 py36_0
vc 14.1 h21ff451_3 anaconda
vs2015_runtime 15.5.2 3 anaconda
wcwidth 0.1.7 py36_0
werkzeug 0.12.2 py36_0
wheel 0.29.0 py36_0
widgetsnbextension 2.0.0 py36_0
win_inet_pton 1.0.1 py36_1
win_unicode_console 0.5 py36_0
wincertstore 0.2 py36h7fe50ca_0
wrapt 1.10.10 py36_0
wtforms 2.1 py36_0
xlrd 1.0.0 py36_0
xlsxwriter 0.9.6 py36_0
xlwings 0.10.4 py36_0
xlwt 1.2.0 py36_0
zict 0.1.2 py36_0
zlib 1.2.11 h62dcd97_3 |
st81665 | Hi,
I have a model called 'Net', which is a bunch of convolutional layers.
As my loss function I am using soft-DTW, which has a work-around to make DTW differentiable. The gradient of soft-DTW is a matrix with the size of my input data (x).
x: input data
Net: CNN
loss: soft-DTW
derivative of loss w.r.t. y = G (a matrix)
y = Net(x) --> output data
Normally, when we calculate the loss and call loss.backward(), the gradient of the loss at the last layer w.r.t. the model parameters is calculated and back-propagated through the model parameters. However, here my loss function is not one of the PyTorch loss functions; it's a self-defined function (soft-DTW). To calculate the gradient of the loss w.r.t. Net.parameters(), I use the chain rule:
derivative of loss w.r.t. y * derivative of y w.r.t. the model's parameters
If I define a loss class that inherits from autograd and call loss.backward(), is there a way to pass G to loss.backward() so that it uses G as the gradient of the last layer and multiplies it by the gradient of y w.r.t. the model parameters?
y = Net(x)
loss = SDTWLoss(x, y)
# gradient of loss w.r.t. y is G, a matrix where G.shape == x.shape
loss.backward()
optimizer.step()
# right now I have implemented the loss as follows:
class SDTWLoss():
    def __init__(self, y_pred, y):
        self.y_pred = y_pred
        self.y = y

    def forward(self):
        _dtw = SoftDTW(self.y_pred, self.y, device)
        dtw_value, dtw_grad, dtw_E_mat = _dtw.compute()
        self.dtw_grad = dtw_grad
        dtw_loss = torch.mean(dtw_value)
        return dtw_loss

    def backward(self):
        batch_size, _, n = self.y_pred.shape
        G = jacobian_product_with_expected_sdtw_grad(self.y_pred, self.y, self.dtw_grad, device)
        # accumulate the G-weighted parameter gradients, one output element at a time
        param_grad = [torch.zeros_like(param) for param in Net.parameters()]
        for k in range(batch_size):
            for i in range(n):
                Net.zero_grad()
                self.y_pred[k, 0, i].backward(retain_graph=True)
                for j, param in enumerate(Net.parameters()):
                    param_grad[j] = param_grad[j] + G[k, 0, i] * param.grad
        for j, param in enumerate(Net.parameters()):
            param.grad = param_grad[j]
This code is extremely slow!
I think the fastest way would be to have the option of passing G as an input to loss.backward():
loss.backward(G)
Any suggestion is appreciated! |
st81666 | Have you tried this?
https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html |
st81667 | My understanding is that this example implements the ReLU activation function. I need to implement a loss function for a specific case where the last layer's derivative is calculated manually and then back-propagated through the network using the chain rule. |
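For what it's worth, PyTorch does support passing an upstream gradient to backward(): calling y_pred.backward(G) treats G as dL/dy_pred and back-propagates it through Net in a single vector-Jacobian pass, which avoids the per-element loops above. A minimal sketch, assuming G has already been computed with the same shape as y_pred and that compute_soft_dtw is a stand-in helper (not an actual PyTorch API):
y_pred = Net(x)                              # forward pass through the CNN
loss_value, G = compute_soft_dtw(y_pred, y)  # hypothetical helper: returns the sDTW value and dL/dy_pred
Net.zero_grad()
y_pred.backward(G)                           # vector-Jacobian product with the manually computed gradient G
optimizer.step()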