st97568
|
I modified the code but I get this error:
"ValueError: Expected input batch_size (8) to match target batch_size (32)."
I changed the output size of ConvTranspose2d to 32, but then I get the error
"ValueError: Expected target size (32, 76), got torch.Size([32])"
The dims of target, data and out are:
target shape torch.Size([32])
data shape torch.Size([32, 3, 64, 64])
out shape torch.Size([32, 32, 39, 39])
|
st97569
|
Assuming you are trying to classify your data into 4 classes, your model output should have the shape [batch_size, 4].
I’m not sure, how nn.ConvTranspose2d should deal with your activation volumes, as it’ll increase the spatial size in your setup.
Could you explain your architecture a bit?
I’m familiar with an adaptive pooling layer before the linear layer, or with global pooling for fully convolutional models. However, I’m currently not sure how your model should work.
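For reference, a minimal sketch (the layer sizes are just placeholders) of the adaptive-pooling-before-linear setup I mean, producing a [batch_size, 4] output:
import torch
import torch.nn as nn

features = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((1, 1)),   # -> [batch_size, 32, 1, 1]
)
classifier = nn.Linear(32, 4)

x = torch.randn(8, 3, 64, 64)
pooled = features(x)
out = classifier(pooled.view(pooled.size(0), -1))
print(out.shape)  # torch.Size([8, 4])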
|
st97570
|
I have data with 4 classes and the size of each image is 64.
But I am trying to use a conv layer instead of a fully connected layer.
I used an adaptive pooling layer and it solves the problem of training the network with images of any size.
How can we make ConvTranspose2d do the job of
nn.AdaptiveAvgPool2d((1,1))
then
self.fc2 = nn.Linear(32, num_classes)?
|
st97571
|
In the paper Dilated Residual Networks:
arxiv.org/abs/1705.09914
If you look at Figure 2 in the paper, that's what I want to do.
|
st97572
|
Thanks for the paper!
Skimming through it, I think their method uses dilated convolutions to get a spatially bigger activation map in Group5 of a ResNet. However, for classification they still seem to use global average pooling.
From section 2:
The output of G5 in a DRN is 28 × 28. Global average pooling therefore takes in 2**4 times more values, which can help the classifier recognize objects that cover a smaller number of pixels in the input image and take such objects into account in its prediction.
Figure 2 is related to a localization use case, i.e. if you have some kind of localization/segmentation target.
Also, I couldn’t find any transposed convolutions in the paper, just dilated convolutions.
Is this a misunderstanding or how would you like to use transposed convolutions for this method?
|
st97573
|
Hi,
I’m trying to get intermediate outputs using forward hooks. However, with multiple GPUs this does not work: each GPU receives a fraction of the input, so we need to aggregate the results coming from the different GPUs. I found that data_parallel meets my need and I wrote a simple test program below. But weird things happened.
I give 3 inputs to a vgg model in turn. The first is all -1, the second is all zeros and the third is all 1.
For the first input, the output is None (I tested: whatever the first input is, it outputs None without any change).
For the second input, the output is not None but not all zeros (in fact, when you feed an all-zeros tensor to a net with ReLU, the intermediate activation is naturally all zeros). However, looking closely, the output corresponds to the first input.
For the third input, the output corresponds to the second input and is all zeros.
Am I missing something important or writing problematic code? Please let me know.
import torch
from torchvision.models.vgg import vgg19

class Wrapper(torch.nn.Module):
    def __init__(self, model):
        super(Wrapper, self).__init__()
        self.model = model
        self.target_outputs = None

        def forward_hook(_, __, output):
            self.target_outputs = output.detach()

        self.model.features[2].register_forward_hook(forward_hook)

    def forward(self, input):
        self.model(input)
        return self.target_outputs

model = vgg19()
model = model.cuda(4)
wrapper = Wrapper(model)
devices = [4, 5]

input1 = torch.randn(60, 3, 224, 224).fill_(-1).cuda(4)
out1 = torch.nn.parallel.data_parallel(wrapper, input1, devices)
print(out1)

input2 = torch.randn(60, 3, 224, 224).fill_(0).cuda(4)
out2 = torch.nn.parallel.data_parallel(wrapper, input2, devices)
print(out2.shape)

input3 = torch.randn(60, 3, 224, 224).fill_(1).cuda(4)
out3 = torch.nn.parallel.data_parallel(wrapper, input3, devices)
print(out3.shape)
output:
None
(60, 64, 224, 224)
(60, 64, 224, 224)
|
st97574
|
I have tried several ways to implement this in multi-GPU mode. One of them is not so elegant but works and returns the right intermediate feature map; the other one doesn't work and returns None and a wrong result.
I guess the issue is related to the assignment self.target_outputs = output.detach() in the hook function. Which variable the forward output is assigned to matters.
# the one that works
class Wrapper(torch.nn.Module):
    def __init__(self, model):
        super(Wrapper, self).__init__()
        self.model = model

        def f_hook(module, __, output):
            module.register_buffer('target_outputs', output)

        self.model.features[2].register_forward_hook(f_hook)

    def forward(self, input):
        self.model(input)
        return self.model.features[2].target_outputs

    def __repr__(self):
        return "Wrapper"

# the one that does not work. The implementation and the result are the same as stated in the question above.
class Wrapper1(torch.nn.Module):
    def __init__(self, model):
        super(Wrapper1, self).__init__()
        self.model = model
        self.target_outputs = None

        def f_hook(_, __, output):
            self.target_outputs = output.detach()

        self.model.features[2].register_forward_hook(f_hook)

    def forward(self, input):
        self.model(input)
        return self.target_outputs

    def __repr__(self):
        return "Wrapper1"
# test code
if __name__ == '__main__':
    devices = [4, 5]

    model = vgg19().cuda(4)
    model = model.cuda(4)
    wrapper = Wrapper(model)
    wrapper = wrapper.cuda(4)

    input1 = torch.randn(60, 3, 224, 224).fill_(0).cuda(4)
    out1 = torch.nn.parallel.data_parallel(wrapper, input1, devices)
    print(out1) if out1 is not None else None
    # print a right feature map

    input2 = torch.randn(60, 3, 224, 224).fill_(1).cuda(4)
    out2 = torch.nn.parallel.data_parallel(wrapper, input2, devices)
    print(out2) if out2 is not None else None
    # print a right feature map

    model = vgg19().cuda(4)
    model = model.cuda(4)
    wrapper = Wrapper1(model)
    wrapper = wrapper.cuda(4)

    input1 = torch.randn(60, 3, 224, 224).fill_(0).cuda(4)
    out1 = torch.nn.parallel.data_parallel(wrapper, input1, devices)
    print(out1) if out1 is not None else None
    # print None

    input2 = torch.randn(60, 3, 224, 224).fill_(1).cuda(4)
    out2 = torch.nn.parallel.data_parallel(wrapper, input2, devices)
    print(out2) if out2 is not None else None
    # print a false feature map, which corresponds to output of input1 rather than output of input2.
    # This is what confuses me all the time.
Need help! Could someone explain why this happens? Thanks in advance.
|
st97575
|
Solved: I made a mistake in other code.
Error from
conv1.weight=nn.Parameter(a.float().unsqueeze(0).unsqueeze(0))
# RuntimeError: Given groups=1, weight of size [1, 1, 3, 3], expected input[2, 3, 250, 250] to have 1 channels, but got 3 channels instead
can be resolved by
conv1.weight=nn.Parameter(a.float().unsqueeze(0).unsqueeze(0).repeat(1,3,1,1))
Hi, I have an image of torch.Size([2, 3, 250, 250]),
and I want to use the following 2 custom Sobel filters:
a=torch.Tensor(
[[1,0,-1],
[2,0,-2],
[1,0,-1]]).cuda()
b=torch.Tensor(
[[1,2,1],
[0,0,0],
[-1,-2,-1]]).cuda()
print("ten",ten.shape)
# one_b_gt_imgs torch.Size([2, 3, 250, 250])
conv1=nn.Conv2d(3,3,kernel_size=3,stride=1,padding=1,bias=False)
# But I got following errors
# It looks like it's because kernel depth is not 3
conv1.weight=nn.Parameter(a.float().unsqueeze(0).unsqueeze(0))
# RuntimeError: Given groups=1, weight of size [1, 1, 3, 3], expected input[2, 3, 250, 250] to have 1 channels, but got 3 channels instead
# So, I used repeat()
# And I checked that a.float().unsqueeze(0).unsqueeze(0).repeat(1,3,1,1) converts the tensor shape into (1, 3, 3, 3)
conv1.weight=nn.Parameter(a.float().unsqueeze(0).unsqueeze(0).repeat(1,3,1,1))
# But I still have error and weight size is (1,1,3,3)
# RuntimeError: Given groups=1, weight of size [1, 1, 3, 3], expected input[2, 3, 250, 250] to have 1 channels, but got 3 channels instead
G_x=conv1(ten)
How to fix this?
|
st97576
|
https://pytorch.org/docs/master/search.html?q=Tensor gets stuck at SEARCHING.
By the way, the search engine in the stable docs works.
|
st97577
|
Solved by albanD in post #2
I can reproduce that, we’ll look into this. Thanks for the report !
|
st97578
|
Hello everyone,
I am pretty new to PyTorch (and Python) and therefore this issue confuses me a lot.
I have an LSTM and I use it to process batches of sentences. The task is not important for my question. Basically, I use the final hidden state of the LSTM and multiply it with a 2-dimensional linear weight matrix. Here is a snippet to illustrate what I’ve described:
>>>BatchSize=2
>>>EmbeddingSize=5
>>>VocabularySize=3
>>> HiddenSize = 4
>>>instance1 = [0,0,1,2,1,0] # a sentence contains 6 words
>>>instance2 = [0,1,2,1] # another sentence containing 4 words
>>>batch = [instance1,instance2]
>>>lstm = nn.LSTM(EmbeddingSize,HiddenSize,num_layers=1,bidirectional=False)
>>>emb = nn.Embedding(VocabularySize,EmbeddingSize)
>>>embedded = [emb(torch.tensor(s)) for s in batch]
>>>packed = nn.utils.rnn.pack_sequence(embedded)
>>>output,(h,c) = lstm(packed)
>>>print(h.shape," ",c.shape)
torch.Size([1, 2, 4]) torch.Size([1, 2, 4])
Now, I want to reshape it to a BatchSize x HiddenSize dimensional tensor. I can do it as follows:
h = torch.reshape(h,(BatchSize,HiddenSize))
But I wonder, does such an operation cause values from different instances to be mixed?
Please let me know if my question is not clear.
|
st97579
|
Hi! I installed PyTorch from conda:
conda install pytorch torchvision -c pytorch
When I import torch, I encounter this issue:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/user/anaconda3/envs/python2/lib/python2.7/site-packages/torch/__init__.py", line 80, in <module>
from torch._C import *
ImportError: libcurand.so.9.0: cannot open shared object file: No such file or directory
how to solve it? thanks!
|
st97580
|
I used git clone to get the whole PyTorch repo recursively,
and I wonder how to compile the docs, particularly Caffe2's doxygen API doc.
I am new to this aspect. Can anyone kindly help me?
Thank you very much!
|
st97581
|
I installed CUDA 9.2, cudnn 7.1 for CUDA 9.2, and nccl for CUDA 9.2.
I got pytorch source code from github, build was successful. (commit efba555a389100d391764678e953dcaae227dd43)
But, when I tried to import torch, I got the error as follows:
python -c "import torch"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/polphit/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 78, in <module>
from torch._C import *
ImportError: libcurand.so.9.0: cannot open shared object file: No such file or directory
Thank you.
|
st97582
|
Oh, I found it was due to caffe2 that I installed several months ago.
I don’t know the sequence of import, but I guess _C.cpython-36m-x86_64-linux-gnu.so tries to load libcaffe2_gpu.so if caffe2 is installed.
|
st97583
|
Hello, I met the same problem. I don't know how to solve it, and I never installed caffe2.
|
st97584
|
Hi! I installed it from conda and I encountered the same issue. I used to install caffe2 from source, but now I have uninstalled it.
>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/anaconda3/lib/python3.6/site-packages/torch/__init__.py", line 80, in <module>
from torch._C import *
ImportError: libcurand.so.9.0: cannot open shared object file: No such file or directory
how to solve this issue? thanks~
|
st97585
|
Hi,
I have trained a model using the Adam optimizer for 30 epochs. Now I want to load the saved model and train using the SGD optimizer. Will it decrease my loss value? Is it okay?
|
st97586
|
It’s fine. In fact it’s a “typical” way of training. Notice the optimal learning rate is different for both optimizers and you have no prior knowledge about which one will be adequate for training with SGD. Therefore you will have to do some trials until you find the proper one.
|
st97587
|
Is it possible to read data on the fly in a DataLoader? I have a huge dataset and can't just load all the data at once. That's my first question. And my second question: how do I unpack it into two variables:
for input, target in dataset:
|
st97588
|
Have a look at the Data Loading tutorial.
Basically your data can be lazily loaded in the Dataset's __getitem__ method, if you for example provide the image paths.
In __getitem__ you should return two tensors, i.e. the data and the corresponding target.
By wrapping the Dataset in a DataLoader, multiple workers can be used to load and preprocess the data.
Your for loop will automatically work if you use for input, target in loader:.
Here is a better explanation.
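A minimal lazily-loading Dataset could look like this sketch (image_paths and targets are assumed lists you provide; not your exact data):
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image

class LazyImageDataset(Dataset):
    def __init__(self, image_paths, targets, transform=None):
        self.image_paths = image_paths
        self.targets = targets
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, index):
        # the image is only read from disk here, i.e. lazily per sample
        img = Image.open(self.image_paths[index]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img, self.targets[index]

# loader = DataLoader(LazyImageDataset(paths, targets, transform), batch_size=32, num_workers=4)
# for input, target in loader:
#     ...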
|
st97589
|
@ptrblck you are absolutely right. DataLoader handles all the headache. Thank you very much for your response.
|
st97590
|
I implemented a network using ROI pooling, but I have a problem: GPU memory usage grows during training, and after iteration 50 an OOM occurs.
I use resnet34 as the base net and insert the ROI pooling layer between layer3 and layer4, with image size 3*1024*2048.
At the first iteration it uses less than 5GB of memory, but as training goes on, the usage grows.
How could I debug this issue?
|
st97591
|
Hi - I’m brand new to PyTorch, and have been attempting to port a simple reinforcement learning sample from TensorFlow to PyTorch to help get me up to speed with PyTorch. I haven’t been able to get my PyTorch version to come close to the performance of the TensorFlow network, and I’m struggling to understand why.
I believe my PyTorch network is effectively equivalent to the TensorFlow network, but I suspect there is just something slightly off.
Here’s the TensorFlow version:
import gym
import numpy as np
import random
import tensorflow as tf

env = gym.make("FrozenLake-v0")

tf.reset_default_graph()
inputs1 = tf.placeholder(shape=[1, 16], dtype=tf.float32)
W = tf.Variable(tf.random_uniform([16, 4], 0, 0.01))
Qout = tf.matmul(inputs1, W)
predict = tf.argmax(Qout, 1)
nextQ = tf.placeholder(shape=[1, 4], dtype=tf.float32)
loss = tf.reduce_sum(tf.square(nextQ - Qout))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
updateModel = optimizer.minimize(loss)

y = 0.99
e = 0.1
num_episodes = 2000
jList = []
rList = []
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(num_episodes):
        # Reset environment and get first new observation
        s = env.reset()
        rAll = 0
        d = False
        j = 0
        # Q-network
        while j < 99:
            j += 1
            # Choose action greedily (with chance of random action) from the Q-network
            a, allQ = sess.run([predict, Qout], feed_dict={inputs1: np.identity(16)[s:s+1]})
            if np.random.rand(1) < e:
                a[0] = env.action_space.sample()
            # Get new state and reward from environment
            s1, r, d, _ = env.step(a[0])
            # Obtain the Q' values by feeding new state through our network
            Q1 = sess.run(Qout, feed_dict={inputs1: np.identity(16)[s1:s1+1]})
            # Obtain maxQ' and set target value for chosen action
            maxQ1 = np.max(Q1)
            targetQ = allQ
            targetQ[0, a[0]] = r + y*maxQ1
            # Train network using target and predicted Q values
            _, W1 = sess.run([updateModel, W], feed_dict={inputs1: np.identity(16)[s:s+1], nextQ: targetQ})
            rAll += r
            s = s1
            if d == True:
                # Reduce chance of random action as we train the model
                e = 1./((i/50) + 10)
                break
        jList.append(j)
        rList.append(rAll)

print("Percent of successful episodes: %s%%" % str((sum(rList) / num_episodes) * 100))
And here’s my PyTorch version:
from __future__ import print_function
import gym
import numpy as np
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

env = gym.make("FrozenLake-v0")

def predict(x):
    values, indices = torch.max(x, 1)
    return indices

class DQN(nn.Module):
    def __init__(self):
        super(DQN, self).__init__()
        self.fc1 = nn.Linear(16, 4, bias=False)

    def forward(self, x):
        x = self.fc1(x)
        return x

y = 0.99
e = 0.1
num_episodes = 2000
num_q_iterations = 99
jList = []
rList = []

dqn = DQN()
criterion = nn.MSELoss()
optimizer = optim.SGD(dqn.parameters(), lr=0.1)

for i in range(num_episodes):
    # Reset environment and get first new observation
    s = env.reset()
    rAll = 0
    d = False
    j = 0
    # Q-network
    while j < num_q_iterations:
        j += 1
        # Choose action greedily (with chance of random action) from the Q-network
        input = Variable(torch.from_numpy(np.identity(16)[s:s+1]).type(torch.FloatTensor))
        allQ = dqn.forward(input)
        a = predict(allQ).data[0]
        if np.random.rand(1) < e:
            a = env.action_space.sample()
        # Get new state and reward from environment
        s1, r, d, _ = env.step(a)
        # Obtain the Q' values by feeding new state through our network
        next_state_input = Variable(torch.from_numpy(np.identity(16)[s1:s1+1]).type(torch.FloatTensor))
        Q1 = dqn.forward(next_state_input)
        # Obtain maxQ' and set target value for chosen action
        maxQ1 = np.max(Q1.data.numpy())
        targetQ = allQ.data.numpy()
        targetQ[0, a] = r + y*maxQ1
        # Train network using target and predicted Q values
        first_state_input = Variable(torch.from_numpy(np.identity(16)[s:s+1]).type(torch.FloatTensor))
        target = Variable(torch.from_numpy(targetQ).type(torch.FloatTensor))
        # Backprop & update weights
        dqn.zero_grad()
        optimizer.zero_grad()
        output = dqn(first_state_input)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        rAll += r
        s = s1
        if d == True:
            # Reduce chance of random action as we train the model
            e = 1./((i/50) + 10)
            break
    jList.append(j)
    rList.append(rAll)

print("Percent of successful episodes: %s%%" % str((sum(rList) / num_episodes) * 100))
The TensorFlow network regularly hits ~45%, while the PyTorch one struggles to get past 5%. After extensive debugging, I suspect the problem might lie in one of these three areas:
The loss function. I suspect that PyTorch’s MSELoss does not work the same way as tf.reduce_sum(tf.square(nextQ - Qout)). As far as I can tell, it’s doing what I think is the same thing, but I’m not 100% sure.
The way I’m updating the weights of my network. I’m not 100% sure that I’m using zero_grad() and the PyTorch SGD optimizer properly.
I’m also not entirely sure if PyTorch’s SGD optimizer works all that similarly to TensorFlow’s GradientDescentOptimizer.
Weight initialization of my Linear layer - I’ve tried nn.init.uniform(self.fc1.weight, 0, 0.01) to try to match tf.random_uniform([16,4], 0, 0.01), but that does not improve performance of my PyTorch network. It looks like Linear might initialize weights by sampling from a uniform distribution anyway, so I’m only somewhat convinced this is a problem area.
Those are the areas I’m actively investigating, but I’m hoping there’s something blatantly obvious about how PyTorch works that I’m just missing. Would love any thoughts!
Thanks!
|
st97592
|
oh man, it’s hard to tell what’s different.
I wanted to add some quick comments:
dqn.forward(next_state_input) # wrong
dqn(next_state_input) # correct
It doesn’t affect things in your particular code sample though.
Also, have a look at our DQN training for any gotchas you might find: http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html#training-loop 24
|
st97593
|
Thank you for taking a look, it’s much appreciated!
Definitely will dig into the tutorial. Going to play around with different optimizers & loss functions. I suspect that’s where the problem lies.
I’ve also read somewhere that there is something you need to do in PyTorch if you have batches with single training examples (as I do). I’m trying to track that down and see if that is (1) something real, and not something I’m imagining I read and (2) actually helpful in this situation.
Thanks!
|
st97594
|
It looks like the fundamental mistake I was making was failing to unsqueeze() the mini-batch I was training the network with. In this example, each mini-batch only had 1 example in it; adding a fake batch dimension (per http://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html 10) seems to do the trick!
# Train network using target and predicted Q values
final_input_tensor = torch.from_numpy(np.identity(16)[s:s+1]).type(torch.FloatTensor)
final_input = Variable(final_input_tensor)
final_input = final_input.unsqueeze(0)
target_tensor = torch.from_numpy(targetQ).type(torch.FloatTensor)
target = Variable(target_tensor)
# Backprop
optimizer.zero_grad()
output = dqn(final_input)
criterion = nn.MSELoss()
loss = criterion(output, target)
loss.backward()
optimizer.step()
|
st97595
|
I have a similar problem where I try to get PyTorch to work on Pong-v0: while TensorFlow can get past 20, PyTorch merely gets past -20, which means it learns nothing! The same PyTorch code runs pretty well on another machine with an older version of PyTorch and does not work on a newer version, so I thought I should downgrade my PyTorch version. After I downgraded, the problem was solved. So I'd suggest staying with the original PyTorch version 0.1.x and waiting for 0.2.x until you are sure it will work.
|
st97596
|
It’s nice that your solution works, but I would not suggest to downgrade to version 0.1.x.
You are missing a lot of nice features. See Pytorch Release Notes 3
One thing you should try when porting your code, is to enable warnings highlighting incompatible code.
Quote from the release notes:
Here is a code snippet that you can add to the top of your scripts.
Adding this code will generate warnings highlighting incompatible code.
Fix your code to no longer generate warnings.
# insert this at the top of your scripts (usually main.py)
import sys, warnings, traceback, torch

def warn_with_traceback(message, category, filename, lineno, file=None, line=None):
    sys.stderr.write(warnings.formatwarning(message, category, filename, lineno, line))
    traceback.print_stack(sys._getframe(2))

warnings.showwarning = warn_with_traceback; warnings.simplefilter('always', UserWarning);
torch.utils.backcompat.broadcast_warning.enabled = True
torch.utils.backcompat.keepdim_warning.enabled = True
Once all warnings disappear, you can remove the code snippet.
|
st97597
|
Well, actually I was wrong… after all, even downgrading the PyTorch version does not work. The same code works on another machine with an old PyTorch version and non-Anaconda Python 2.7, but it didn't work with the old PyTorch and Anaconda Python 2.7. I just can't find the reason why it doesn't work…
|
st97598
|
optimizer = optim.Adam(dqn.parameters(), lr=0.01)
I change from SGD to Adam and lr to 0.01.
the result is “Percent of successful episodes: 37.1% !!”
|
st97599
|
Hello,
I am very new to PyTorch. I want to know whether there are any pretrained sentence embeddings that I can use in PyTorch, and secondly, is it feasible to feed a model sentence embeddings + word embeddings together, and how can I achieve that?
Any suggestion will be appreciated.
Thanks in advance.
|
st97600
|
Question is exactly as the title says. Suppose I have a piece of code. I want to ensure that the back prop updates are all happening as I expect them to be. I am looking for something along the lines of unit testing or a principled approach to it.
|
st97601
|
You could create unit tests by storing the values of your parameters and checking for updates after a training iteration. The concept is explained here. It's for TensorFlow, but you can easily adapt it to PyTorch.
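A minimal PyTorch adaptation could look like this sketch (the model, loss, and data here are just placeholders):
import torch
import torch.nn as nn

def assert_params_update(model, loss_fn, optimizer, data, target):
    # snapshot parameters, run one training step, then check that every
    # trainable parameter actually changed
    before = [p.clone() for p in model.parameters()]
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()
    for p0, p1 in zip(before, model.parameters()):
        assert not torch.equal(p0, p1), "a parameter did not update"

model = nn.Linear(10, 2)
assert_params_update(model, nn.CrossEntropyLoss(),
                     torch.optim.SGD(model.parameters(), lr=0.1),
                     torch.randn(4, 10), torch.randint(0, 2, (4,)))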
|
st97602
|
That was an amazing post. Thank you for the answer. Time to apply it to my own code. Thanks a lot.
|
st97603
|
@ptrblck In the blog you mentioned, I found the code for testing in tensorflow (https://github.com/Thenerdstation/mltest/blob/master/mltest/mltest.py 198) and am trying to convert it for pytorch.
Is there any simple way to run a line of code defined elsewhere like sess.run does in tensorflow? In particular I want to replicate what is happening in L83 (https://github.com/Thenerdstation/mltest/blob/master/mltest/mltest.py#L83 35) but I can’t think of easing the process in pytorch.
|
st97604
|
In PyTorch you don’t need to work with session like in Tensorflow. Just try to .clone() your values.
|
st97605
|
@ptrblck Thanks for the suggestion. Is there a similar trick which can be used for L84 (https://github.com/Thenerdstation/mltest/blob/master/mltest/mltest.py#L84 36). I was planning on taking the exact operation into the function, but I am not sure if that is correct practice.
|
st97606
|
It seems op is the “training operation”, e.g. op = tf.train.AdamOptimizer().minimize(var).
Using PyTorch you can just perform a training step:
...
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
Does this help?
|
st97607
|
I’m using gradchecks to unittest my models. It’s helping a lot…
from torch.autograd.gradcheck import gradcheck

def test_sanity(self):
    input = (Variable(torch.randn(20, 20).double(), requires_grad=True), )
    model = nn.Linear(20, 1).double()
    test = gradcheck(model, input, eps=1e-6, atol=1e-4)
    print(test)
tip: make sure you use .double() Gradcheck will fail with only 32 bit floats.
Also… Interpreting gradcheck errors 102
|
st97608
|
I’ve ported the tests from mltest to pytorch. It’s available at suriyadeepan/torchtest 515.
You can install it via pip
pip install torchtest
|
st97609
|
The test code is attached; it seems that the layers inside self.features are not registered with nn.Module. What should I do to correct it?
Thanks for the help!
import torch
class errNet(torch.nn.Module):
    def __init__(self, groups):
        ''' input: [batch, 10] output: [batch, 5] '''
        super(errNet, self).__init__()
        self.groups = groups
        self.features = [torch.nn.Linear(2, 1) for i in range(self.groups)]

    def forward(self, x):
        res = [self.features[i](x[:, i*2:i*2+2]) for i in range(self.groups)]
        return torch.cat(res, 1)

model = errNet(5)
x = torch.randn(3, 10)
print('Test 1:', model(x).shape)  # this is okay

device = torch.device("cuda:0")
model = model.to(device)
x = x.to(device)
print('Test 2:', model(x).shape)  # error: model.features[i] is not cuda
|
st97610
|
Hi, I’m tracking down a memory leak, and one of the leaks that valgrind could find was in the minimum working example with the stacktrace pasted below.
I’m using PyTorch 0.4.1 on Ubuntu 18.04, and I experienced the same issue when I tried with master a7eee0a1e.
Is this just a benign problem wasting only 1 megabyte one time? My other leaks stem from multi-threaded mkl_lapack_strtri, and I’m trying to tell whether these are two separate issues.
import torch

x = torch.randn(256, 256)
y = 0
for i in range(10):
    y += x.inverse().sum()
print(y)
==8901== 1,048,576 bytes in 1 blocks are definitely lost in loss record 2,138 of 2,138
==8901== at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==8901== by 0xD2B0441: _INTERNAL_23_______src_kmp_alloc_cpp_fbba096b::bget(kmp_info*, long) (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0xD2B016D: ___kmp_fast_allocate (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0xD34BC56: __kmp_task_alloc (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0xD34BB96: __kmpc_omp_task_alloc (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0x130CDEA9: mkl_lapack_strtri (in /home/jongwook/anaconda3/lib/libmkl_intel_thread.so)
==8901== by 0xD357A42: __kmp_invoke_microtask (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0xD31ACD9: __kmp_invoke_task_func (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0xD31C5B5: __kmp_fork_call (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0xD2DABAF: __kmpc_fork_call (in /home/jongwook/anaconda3/lib/libiomp5.so)
==8901== by 0x130CD4A6: mkl_lapack_strtri (in /home/jongwook/anaconda3/lib/libmkl_intel_thread.so)
==8901== by 0xEBD5023: mkl_lapack_sgetri (in /home/jongwook/anaconda3/lib/libmkl_core.so)
==8901== by 0x14AFAFB6: SGETRI (in /home/jongwook/anaconda3/lib/libmkl_intel_lp64.so)
==8901== by 0x22804C48: THFloatLapack_getri (in /home/jongwook/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
==8901== by 0x227CAC7E: THFloatTensor_getri (in /home/jongwook/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
==8901== by 0x221A2DE1: at::CPUFloatType::_getri_out(at::Tensor&, at::Tensor const&) const (in /home/jongwook/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
==8901== by 0x2205FA04: at::native::inverse_out(at::Tensor&, at::Tensor const&) (in /home/jongwook/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
==8901== by 0x2205FE0B: at::native::inverse(at::Tensor const&) (in /home/jongwook/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
==8901== by 0x2225B420: at::Type::inverse(at::Tensor const&) const (in /home/jongwook/anaconda3/lib/python3.6/site-packages/torch/lib/libcaffe2.so)
==8901== by 0x1CE80ED9: torch::autograd::VariableType::inverse(at::Tensor const&) const (VariableType.cpp:24790)
==8901== by 0x1D115671: inverse (TensorMethods.h:973)
==8901== by 0x1D115671: dispatch_inverse (python_variable_methods_dispatch.h:1210)
==8901== by 0x1D115671: torch::autograd::THPVariable_inverse(_object*, _object*) (python_variable_methods.cpp:2937)
==8901== by 0x4F0805A: _PyCFunction_FastCallDict (methodobject.c:192)
==8901== by 0x4FA1499: call_function (ceval.c:4830)
==8901== by 0x4FA571B: _PyEval_EvalFrameDefault (ceval.c:3328)
==8901== by 0x4FA109D: _PyEval_EvalCodeWithName (ceval.c:4159)
==8901== by 0x4FA16CC: PyEval_EvalCodeEx (ceval.c:4180)
==8901== by 0x4FA171A: PyEval_EvalCode (ceval.c:731)
==8901== by 0x4FDD0A1: run_mod (pythonrun.c:1025)
==8901== by 0x4FDD0A1: PyRun_FileExFlags (pythonrun.c:978)
==8901== by 0x4FDD206: PyRun_SimpleFileExFlags (pythonrun.c:420)
==8901== by 0x4FF96FC: run_file (main.c:340)
==8901== by 0x4FF96FC: Py_Main (main.c:810)
==8901== by 0x400BBB: main (python.c:69)
|
st97611
|
I have a custom layer that has a Tensor which should never be modified. It is used together with the input to produce a result and so it should allow gradients to pass backward. Again, it should never be changed.
My original solution was to make this a class member. This was working fine until I needed to switch to a multi-GPU setup. Unless it was a nn.Parameter type, it would not be set to the correct GPU device when model.cuda() is called.
I tried setting requires_grad=False, but the network fails to train:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I suspect that I am misunderstanding something fundamental about the Parameter class.
Another attempt that I tried was to override torch.nn.Module.cuda() in my subclass. This would allow me to easily assign my layer’s data to the correct device. It did not work.
Summary
I have a custom model with a class member used in forward/backward calls.
The class member should NEVER be modified.
I set the class member as nn.Parameter so that it would be assigned to the correct device when model.cuda() is called.
It changes during backward propagation when requires_grad=True.
Training crashes when requires_grad=False.
Is there a way to create a Tensor that will be copied to the correct GPU when model.cuda() is called?
|
st97612
|
Unless there is a better way to fix this, I implemented a hacky fix by copying the class member data that I need every time the forward pass is called. This guarantees that the class data will match the device and type of the input.
|
st97613
|
You may need register_buffer().
Use and Abuse of .register_buffer( )
Hi,
I have some trouble understanding the use of register_buffer().
I found just a little bit of explanation in the docs, mentioning “running_mean” in BatchNorm.
My questions are:
When should I register a buffer? For what sort of Variables and for which not?
Could someone provide me with a simple example and code snippet of using register_buffer()?
[3.] At the moment, I’m running some tests on an implementation of a custom gradient, which I subsequently modify. record the gradients before …
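A minimal sketch of such a buffer that moves with the module via .cuda()/.to() but is never trained (the layer and the 0.5 scale value are made up purely for illustration):
import torch
import torch.nn as nn

class ScaledLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super(ScaledLinear, self).__init__()
        self.fc = nn.Linear(in_features, out_features)
        # fixed tensor: not a Parameter, but registered so it follows the module across devices
        self.register_buffer('scale', torch.full((out_features,), 0.5))

    def forward(self, x):
        # gradients still flow back through x and self.fc; 'scale' itself is never updated
        return self.fc(x) * self.scale

m = ScaledLinear(8, 4)
# m = m.cuda()  # 'scale' would be moved to the GPU as well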
|
st97614
|
I am trying to create some plots of loss and accuracy values. I installed VisualDL via pip install --upgrade visualdl, but when I tried to run a demo, I got the error 'vdl_create_scratch_log' is not recognized as an internal or external command, operable program or batch file.
Before that, I installed my PyTorch version by running these commands:
pip3 install http://download.pytorch.org/whl/cu80/torch-0.4.1-cp36-cp36m-win_amd64.whl
pip3 install torchvision
|
st97615
|
In the implementation of DistributedDataParallelCPU, it looks like we set up the all-reduce hook for every layer of the model, but we all-reduce the whole model's grads every time allreduce_params() gets triggered. My understanding is that we should do the all-reduce once per iteration; it seems we are doing it multiple times in DistributedDataParallelCPU? Did I miss anything?
|
st97616
|
First: sigmoid -> BCE loss.
Second: softmax -> multi-label classification cross-entropy.
But it is a parameter-sharing situation.
Sorry, I am not good at writing English.
The multi-label part learns very well, but the binary part does not learn well.
loss = loss1 + loss2
Should I give a weight to the losses?
I would appreciate your reply.
|
st97617
|
I assume you have two different outputs in your model, i.e. one using the nn.BCELoss and the other for the nn.CrossEntropyLoss?
Now one part of your model learns quite well, while the other gets stuck?
A weighting of these losses might be a good idea.
Could you compare the ranges of both losses and try to rescale them to a similar range?
Also, as a small side note, if you are using nn.CrossEntropyLoss for classification, you should pass the logits to this criterion, not the probabilities using nn.Softmax.
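Something like this rough sketch illustrates the weighting (the shapes and the 0.5 weight are just placeholders, not values for your model):
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()   # expects logits; applies sigmoid internally
ce = nn.CrossEntropyLoss()     # expects logits; applies log-softmax internally

# stand-ins for the two heads' outputs and targets
logits_bin = torch.randn(16, 1, requires_grad=True)
bin_target = torch.randint(0, 2, (16, 1)).float()
logits_multi = torch.randn(16, 5, requires_grad=True)
multi_target = torch.randint(0, 5, (16,))

weight = 0.5  # tune so both losses end up in a similar range
loss = weight * bce(logits_bin, bin_target) + ce(logits_multi, multi_target)
loss.backward()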
|
st97618
|
Hi, i want to know if the pytorch team is planning to do something for genomic data analysis like tensorflow’s nucleus. I would really love to bring in genomic data, preprocess it and then train a deep learning CNN model, all in pytorch. Please share. Thanks
|
st97619
|
I am trying to use the Adam optimizer, and after building the model and defining the optimizer as follows, I am getting the error ValueError: optimizer got an empty parameter list, which I don't know how to deal with. Any comment would be appreciated in advance.
class NeuralNet(nn.Module):
    def __int__(self):
        super(NeuralNet, self).__init__()
        self.conv = nn.conv2d(1, 28, kernel_size=(3, 3))
        self.pool = nn.maxpoo2d(2, 2)
        self.hidden = nn.Linear(28*13*13, 128)
        self.drop = nn.Dropout(0.2)
        self.out = nn.linear(128, 10)  # fully connected layer
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.act(self.conv(x))
        print(x.size())
        x = self.pool(x)
        x = x.view(x.size(0), -1)
        x = self.act(self.hidden())
        x = self.drop(x)
        x = self.out()
        return x
net = NeuralNet()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
|
st97620
|
Solved by ptrblck in post #2
You have a few typos in your model definition, e.g. __int__ instead of __init__, which is why your layers won’t be initialized and your parameters are empty.
Also, some layers have typos, e.g. nn.maxpoo2d instead of nn.MaxPool2d.
After fixing these issues, you’ll get other errors, since self.hidden …
|
st97621
|
You have a few typos in your model definition, e.g. __int__ instead of __init__, which is why your layers won’t be initialized and your parameters are empty.
Also, some layers have typos, e.g. nn.maxpoo2d instead of nn.MaxPool2d.
After fixing these issues, you’ll get other errors, since self.hidden and self.out currently don’t get any input.
Here is a fixed version:
class NeuralNet(nn.Module):
    def __init__(self):
        super(NeuralNet, self).__init__()
        self.conv = nn.Conv2d(1, 28, kernel_size=(3, 3))
        self.pool = nn.MaxPool2d(2, 2)
        self.hidden = nn.Linear(28*13*13, 128)
        self.drop = nn.Dropout(0.2)
        self.out = nn.Linear(128, 10)  # fully connected layer
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.act(self.conv(x))
        print(x.size())
        x = self.pool(x)
        x = x.view(x.size(0), -1)
        x = self.act(self.hidden(x))
        x = self.drop(x)
        x = self.out(x)
        return x
|
st97622
|
Hello,
My name is Prasanthi. I'm glad to be a part of this forum. I am interested in technology and what's upcoming, and I look forward to gaining more knowledge through this forum and sharing my views in discussions.
Thank you.
|
st97623
|
Hi everyone,
After two hours of debugging, I still can’t find the reason for the error I’m getting, ValueError: Expected target size (128, 10000), got torch.Size([128, 1])
I thought the code was pretty straightforward, and even resembles one of the tutorials, but still, an error.
The code is
class FeedForwardLM(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_size):
        super(FeedForwardLM, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        self.fc1 = nn.Linear(2 * embedding_dim, hidden_size)
        self.fc2 = nn.Linear(hidden_size, vocab_size)
        self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, w1, w2):
        mw1 = self.embedding(w1)
        mw2 = self.embedding(w2)
        m = torch.cat([mw1, mw2], dim=2)
        out = torch.tanh(self.fc1(m))
        out = self.fc2(out)
        out = self.softmax(out)
        return out
dataset = IndexLMDataset(train)
data_loader = data.DataLoader(dataset, batch_size=128, shuffle=True)

losses = []
loss_fn = nn.NLLLoss()
model = FeedForwardLM(10000, 300, 512).to(device)
optimizer = optim.Adam(model.parameters())

for _ in range(100):
    total_loss = 0
    for batch in data_loader:
        (w1, w2), y = batch
        model.zero_grad()
        yhat = model(w1, w2)
        loss = loss_fn(yhat, y)
        loss.backward()
        optimizer.step()
        total_loss += loss
    losses.append(total_loss)

torch.save(model.state_dict(), "feedforward.model")
plt.plot(losses);
and the code for the dataset is:
class IndexLMDataset(data.Dataset):
    def __init__(self, train):
        self._trigrams = list(nltk.trigrams(train))[:1000]

    def __len__(self):
        return len(self._trigrams)

    def __getitem__(self, index):
        w1, w2, w3 = self._trigrams[index]
        w1 = torch.tensor([word2idx.get(w1, word2idx['UNK'])], dtype=torch.long).to(device)
        w2 = torch.tensor([word2idx.get(w2, word2idx['UNK'])], dtype=torch.long).to(device)
        w3 = torch.tensor([word2idx.get(w3, word2idx['UNK'])], dtype=torch.long).to(device)
        return (w1, w2), w3
where train is just a list of words, and nltk.trigrams(train) returns triplets of words, so nltk.trigrams(['The', 'quick', 'brown', 'fox']) returns [('The', 'quick', 'brown'), ('quick', 'brown', 'fox')].
Any help is appreciated, as always!
|
st97624
|
The error message seems to be a bit strange.
It seems your target has the shape [batch_size, 1].
Could you remove dim1 using y = y.squeeze() and try it again?
Here is a simple dummy code where you can see the shapes:
vocab_size = 10000
model = nn.Sequential(
    nn.Linear(vocab_size, vocab_size),
    nn.LogSoftmax(dim=1)
)
criterion = nn.NLLLoss()

batch_size = 10
x = torch.randn(batch_size, vocab_size)
target = torch.randint(0, vocab_size, (batch_size, ))

print(x.shape)
> torch.Size([10, 10000])
print(target.shape)
> torch.Size([10])

output = model(x)
loss = criterion(output, target)
loss.backward()
|
st97625
|
unsqueeze didn’t work…Doing y.view(-1) worked in a similar model, where the only difference was using one-hot vectors instead of embedding… Weird :\ It works if I’m not using batches, but that’s no way to work…
|
st97626
|
Oh I have a typo. I meant y.squeeze().
Could you try that again?
Sorry for the confusion. I’ll edit my post!
|
st97627
|
This time the error is Expected target size (128, 10000), got torch.Size([128]).
|
st97628
|
Could you compare your code and shapes to this:
x = torch.randn(128, 10000, requires_grad=True)
output = F.log_softmax(x, dim=1)
target = torch.randint(0, 10000, (128, ))
criterion = nn.NLLLoss()
loss = criterion(output, target)
output is torch.Size([128, 10000]), while target is torch.Size([128]).
|
st97629
|
I think I got it! The shape of yhat was torch.Size([128, 1, 10000]) instead of torch.Size([128, 10000])! Squeezing yhat did the trick! Or at least I'm not getting an error anymore.
|
st97630
|
Hello!
There was a transforms field in DataLoader. I checked the manual and it's gone.
Where do I feed transforms?
How do I use it? It only takes mean and stddev.
|
st97631
|
from torchvision import transforms

mean = [0.5071, 0.4867, 0.4408]
stdv = [0.2675, 0.2565, 0.2761]
train_transforms = transforms.Compose([
    transforms.Normalize(mean=mean, std=stdv),
])
|
st97632
|
Solution is worker_init_fn= in DataLoader:
data = torch.utils.data.DataLoader((X_train, labels),worker_init_fn=transforms,batch_size=500)
|
st97633
|
Usually you would pass the transformation to a Dataset, not the DataLoader.
You could use e.g. torchvision.datasets.ImageFolder 1 and pass it as transform=your_transform.
Alternatively, you could write your own Dataset and apply the transformation in the __getitem__ method.
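For the first option, a minimal sketch (the folder path and the stats are placeholders):
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

mean = [0.5071, 0.4867, 0.4408]
stdv = [0.2675, 0.2565, 0.2761]
train_transforms = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=mean, std=stdv),
])

# the transform goes to the Dataset, not the DataLoader
dataset = datasets.ImageFolder('path/to/train', transform=train_transforms)
loader = DataLoader(dataset, batch_size=500, shuffle=True)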
|
st97634
|
The usage for pack sequence here 8 seems simple enough. Sort the tensors by their length then feed it to the function. However, I needed to ask if there are any additional steps that I must be aware of when using pack_sequence to train my RNN on a dataset whose sequences have varying length. I looked through the forums but can’t find a definitive answer.
>>> from torch.nn.utils.rnn import pack_sequence
>>> a = torch.tensor([1,2,3])
>>> b = torch.tensor([4,5])
>>> c = torch.tensor([6])
>>> pack_sequence([a, b, c])
PackedSequence(data=tensor([ 1, 4, 6, 2, 5, 3]), batch_sizes=tensor([ 3, 2, 1]))
|
st97635
|
If their lengths vary, the padding will pad them to the same size as the longest input.
For instance, when you are dealing with a seq2seq model where you encode and decode the data, such as a language translation model, you pad the data.
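As a minimal sketch (toy sizes; assuming the sequences are already sorted by decreasing length, as in your example), a PackedSequence can be fed straight to the RNN without manual padding:
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_sequence, pad_packed_sequence

rnn = nn.LSTM(input_size=5, hidden_size=7)
seqs = [torch.randn(4, 5), torch.randn(3, 5), torch.randn(1, 5)]

packed = pack_sequence(seqs)        # no manual padding needed
packed_out, (h, c) = rnn(packed)    # the RNN skips the positions beyond each length
out, lengths = pad_packed_sequence(packed_out)
print(out.shape, lengths)           # torch.Size([4, 3, 7]) tensor([4, 3, 1])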
|
st97636
|
I did the CIFAR10 tutorial on pytorch.org.
I was expecting to get the same accuracy for each class as the tutorial. My network accuracy on the 10000 test images is 54%, while it differs for some classes compared with the tutorial. Is there any specific reason for this?
here is accuracy for each class:
Accuracy of plane : 54 %
Accuracy of car : 44 %
Accuracy of bird : 40 %
Accuracy of cat : 50 %
Accuracy of deer : 37 %
Accuracy of dog : 24 %
Accuracy of frog : 74 %
Accuracy of horse : 61 %
Accuracy of ship : 74 %
Accuracy of truck : 83 %
|
st97637
|
It looks like the code doesn’t use a seed, so these differences are expected.
CC @chsasank what do you think about adding a seed at the beginning?
If it’s useful, I could create the PR for it.
|
st97638
|
Has anyone tried cross compilation for Jetson TX2, with CUDA enabled. Also the setup.py takes in the compilers and CUDA libraries automatically, I want to know how to pass these as arguments. I tried setting environment variables (CC, CUDA_HOME, LD_LIBRARY_PATH), that didn’t help.
|
st97639
|
Solved by albanD in post #2
Hi,
torch.cuda.synchronize() just wait for all the work to be done on the GPU.
If the synchronize takes different time for different models, I guess that means that one requires more things to be done on the GPU. There is not much you can do here.
Also, unless you are doing time meansurement, you…
|
st97640
|
Hi,
torch.cuda.synchronize() just waits for all the work on the GPU to be done.
If the synchronize takes different time for different models, I guess that means one model requires more work to be done on the GPU. There is not much you can do here.
Also, unless you are doing time measurement, you don't need to explicitly synchronize when using PyTorch. It will be done automatically when needed!
|
st97641
|
Is there any function the same as numpy.take?
a = np.random.rand(10,2)
a
array([[0.14009622, 0.51205327],
[0.79369912, 0.72729038],
[0.7041849 , 0.38336232],
[0.24226114, 0.82128616],
[0.79437292, 0.80704659],
[0.49525265, 0.94109091],
[0.68034488, 0.17979233],
[0.49363128, 0.1746095 ],
[0.01659873, 0.0146231 ],
[0.00280727, 0.04646634]])
#if we want to take the [1,2,3] rows
np.take(a,[1,2,3],axis=0)
array([[0.79369912, 0.72729038],
[0.7041849 , 0.38336232],
[0.24226114, 0.82128616]])
|
st97642
|
Solved by Deepali in post #2
Do you need something like - Select rows from a 2D tensor
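For example, index_select (or plain fancy indexing) covers the axis-based case; a minimal sketch:
import torch

a = torch.rand(10, 2)

# equivalent of np.take(a, [1, 2, 3], axis=0)
rows = a.index_select(0, torch.tensor([1, 2, 3]))
# or simply
rows = a[[1, 2, 3]]
# torch.take also exists, but it indexes into the flattened tensor (no axis argument)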
|
st97643
|
Hi! I have a doubt about how torch.cuda.is_available() works.
While training my network, I usually use the code:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
network.to(device)
data.to(device)
...
But I found that torch.cuda.is_available() is still True when the network is being trained.
I am not sure why this happens. Does this mean that the code isn’t running on a GPU?
|
st97644
|
Hi,
torch.cuda.is_available() is just telling you whether or not cuda is available. If it is available, that flag will remain True throughout your program.
|
st97645
|
Thanks a lot! I thought it was a way to find out whether I can use the GPU.
Is there any way I can find out if my program is really using the GPU?
|
st97646
|
You can use nvidia-smi on the command line to check that your program is using some resources on your GPU.
|
st97647
|
Thanks. I know this command could work.
By the way, do you think that I can train two networks with the same GPU at the same time?
I just found that no error is returned while doing this, but I thought that the GPU couldn’t handle two tasks at the same time.
|
st97648
|
You can do it, that's no problem.
Now whether it's going to be faster than running the two one after the other is very hard to guess. It depends on how much each task uses the GPU. You will need to check.
|
st97649
|
Hi,
I have a question regarding allocation of RAM/virtual memory (Not GPU memory) when torch.cuda.init() is called
If i use the code
import torch
torch.cuda.init()
The virtual memory usage goes up to about 10GB, and 135M in RAM (from almost non-existing).
If I then run
torch.rand((256, 256)).cuda()
The virtual memory used is increased to 15.5GB, and 2GB in RAM.
What is the reason behind this?
Is there any way to prevent it?
If needed:
torch version: 1.0.0
cuda: 8.0
Python: 3.6.5
|
st97650
|
Yes, this seems to be the same issue!
After doing more testing it seems that the amount of memory allocated after calling torch.cuda.init() depends on the numbers of GPUs visible. (With 1xP100 about 10GB is allocated. With 2xP100 about 18GB is allocated…)
Did you find any workarounds for this issue?
|
st97651
|
I'm afraid not.
The fact that it uses a lot of virtual memory is expected, because from what I remember it uses that for GPU memory management, and you should have plenty of virtual memory to spare anyway.
The fact that it uses 2GB of RAM is a bit surprising, but I'm not sure what the root reason for it is.
|
st97652
|
I am following this tutorial to create a C++ extension for PyTorch. My C++ code gives the following error:
test.cpp:3:10: fatal error: torch/torch.h: No such file or directory
#include <torch/torch.h>
How do I get the torch.h header file?
|
st97653
|
Hi,
Which method do you use to make your extension? Creating a module or the jit version?
|
st97654
|
I stored a list on a GPU server into a pkl file using pickle.dump(…).
Now I want to pickle.load the pkl file on a CPU-only machine, but I got the following error message:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
I tried to load the pkl using torch.load with mapping to CPU, but it doesn't work.
|
st97655
|
Hi,
You can use torch.load() and ask for your tensors to be on the CPU by setting map_location="cpu".
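A minimal sketch of that suggestion (the filename is a placeholder, and it assumes the list was saved with torch.save rather than a plain pickle.dump):
import torch

# load GPU-saved tensors on a CPU-only machine
data = torch.load('my_list.pkl', map_location='cpu')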
|
st97656
|
Hello,
I’ve been trying for a couple of months now to train a network to recognise facial expressions.
When I discovered PyTorch I spent time reading all the tutorials, blogs on CNN, and so on.
I’ve spent a good 5-6 years on ML and yet I’m still dumbfounded by how complex training a CNN appears to be.
I’ve got a very large (half a million) dataset of facial expressions, and a Titan Xp to my disposal.
At first I tried copying simple CNN architectures from various papers. Tried small input (64x64) tried single channel grayscale, etc.
I then tried using existing models (VGG11, VGG19) on grayscale, and recently I tried with RGB.
All my attempts have failed so far. Most won’t even converge, and the few that do (using face valence and not labels/classes) when evaluating are very very poor (less than 35% accuracy).
I want to ask:
can I fine tune an existing pre-trained network such as ResNet150 or inception for my task? Do I simply freeze the feature extractors, change the last layer/classifier and re-train? I’ve searched current answers, posts and github repositories but it’s not very clear.
does normalisation play an important role? I can calculate means and standard deviation for the dataset
should I even bother training a large network from scratch? E.g, VGG19, ResNet18, etc?
what kind of accuracy during evaluation should I realistically expect?
I’ve tried using data augmentation, but it appears to have made matters worse (e.g., CE never drops below a certain value)
I can probably throw a second Titan Xp to the task, but this is becoming very stressing. Any help, advise, feedback or criticism is more than welcomed!
|
st97657
|
I assume your facial expression use case is a form of an image classification task, i.e. each image has a single label.
Your first approach sounds fine. Fine tuning a model is often easier than training the model from scratch.
If you try to fine tune a model, you should try to stick to the preprocessing of the pre-trained model as much as possible. E.g. your pre-trained model was most likely trained with normalized images. Your images should therefore perform the same normalization.
Generally, using the ImageNet mean and std works reasonably well on “natural” images, i.e. color images from approx. the same domain. Medical images from a CT scanner might need other values, but that’s not the case in your task.
I would recommend to start with a really small dataset, e.g. one single image or maybe 10 images. If your model can’t overfit this tiny dataset, you might have a bug somewhere in your code.
If it works, you could try to scale up a bit.
I wouldn’t start with all images at first, as this might make debugging hard.
If your model performs well, you could continue with adding data augmentation.
Generally I would try to focus on small and simple use cases and make sure there are no obvious bugs.
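For example, a minimal fine-tuning sketch (assuming 7 expression classes and a ResNet18 backbone, which is not necessarily your exact setup): freeze the pretrained feature extractor and train only a new final classifier.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 7
model = models.resnet18(pretrained=True)

# freeze the pretrained feature extractor
for param in model.parameters():
    param.requires_grad = False

# replace the classifier; the new layer's parameters require grad by default
model.fc = nn.Linear(model.fc.in_features, num_classes)

# only optimize the new classifier
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)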
|
st97658
|
Thanks a ton @ptrblck, this makes a lot of sense. Throwing half a million images at it is probably the problem here. I've removed the normalization and have been using half precision, so I'll just try AlexNet for now and use a small subset of the full dataset.
|
st97659
|
I am back with an update:
I am using AlexNet pretrained. It fails to converge at all (CE remains higher than 1.0).
When I unfreeze the convolution layers it starts to learn, but only slightly.
I am using a small dataset (4000 images); during training it achieves about 0.1 to 0.01 CE, however upon evaluation it is less than 30% accurate (correct out of total evaluated images).
I’ve played with the hyperparameters quite a bit, learning rate, momentum, L2, epocs, etc.
I am using RGB and not grayscale, and the default 224x224 input. I normalise and resize to 224x224
I am wondering:
Why am I getting better results when I try to classify valence instead of expression labels (3 classes instead of 7)
Why am I stuck at similar accuracy regardless of hyperparameter changes (about 25% to 30%)
Should I be increasing the training size?
Is my image preprocessing enough? E.g., resize and normalise?
Does using less output classes make such a big difference?
Why does the CE yo-yo up and down in most cases, is this an indication of failing to converge?
@ptrblck I appreciate any help or advise here, I know my questions are not so much about pytorch.
Regards,
Alex
|
st97660
|
How is your training accuracy? Could you overfit your data using 4000 images?
If not, could you first try to get a nearly perfect accuracy using very few samples, e.g. 10?
If that’s not possible, you might have a bug somewhere, e.g. forgetting to zero out the gradients.
How are you normalizing your images? Are you using the ImageNet mean and std or are you calculating both using your data?
What kind of data are you using, i.e. did you generate it yourself or are you using a dataset from someone else? Can you estimate how clean the data is? Could it be that expression labels are mixed sometimes?
How large is your batch size? A small batch size will look noisier than a bigger one.
If your loss does not have a decreasing trend, the training is stuck.
Could you post your code so that we can have a look for obvious bugs?
|
st97661
|
Hi @ptrblck
Let me try and answer with a list:
Data-set is AffectNet: http://mohammadmahoor.com/affectnet 58
I’ve removed certain labels as they seem to confuse the networks (uncertain, non-face, etc)
My top-1 with 7 labels is always below 30%, if I use only valence score (2 output classes) it goes up to 62%
I’ve calculted the mean and std of the entire data-set and I’m using it for normalisation. Using Imagenet mean and std was 1% to 2% worse only.
Input is RGB 224x224
The data seems to be from internet sources, I have no opinion on how “clean” or accurate it is, but the paper says that it has been human-annotated/labeled for the part I am using. I am open to using other datasets if you know a better one to suggest
I have 3 python scripts really, one which loads the custom dataset (I’ve written unit tests) and basically does the following:
load image
normalise and tensorify
A script of models which wraps around the output layers of AlexNet, SqueezeNet, VGG11, VGG19 (Only experimenting with AlexNet so far)
And the train and evaluate script with all the hyperparameters.
I have tuned it down to using:
0.001 learning rate
0.9 momentum and SGD optimiser in combination with CE loss
0.0005 L2
100 epochs
64 or 128 batch size
4000 images for training (out of about half a million) which are randomly picked. I am filtering uncertain, unknown and non-faces
I am using a MultuStepLR which seems to slightly improve accuracy
I can post the code if needed, but my most striking observation is that changing the output from 7 labels to 2 labels makes a huge impact as already mentioned. Trying with the original 11 labels would produce at best 20% top-1 accuracy.
I’ve been logging all training attempts in a mongo DB since this morning as I am trying to take a systematic and methodological approach. One observation I’ve made is that CE was exploding to a NaN when I used half instead of float and that with larger training data-sets it would go beyond 1.0 easily.
Overfitting seems to happen with the smaller training set (I got CE down to 0.000X) but it is always a bit noisy.
I’ll commit the code and then copy-paste the training and model scripts.
|
st97662
|
The training, evaluation and hyperparameter script is this
import torch
import torch.nn as nn
from affectnet_cpu import affectnet_cpu
from affectnet_gpu import affectnet_gpu
from evaluate import evaluate
import logger as logger
import models as models

num_epochs = 200
batch_size = 256
learning_rate = 0.001
momentum = 0.9
l2_reg = 0.0005
datasize = 5000

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
net = models.alexnet()

train_data = affectnet_gpu('../data/affectnet_images',
                           '../data/affectnet_labels/training.csv',
                           datasize)
train_loader = torch.utils.data.DataLoader(dataset=train_data,
                                           batch_size=batch_size,
                                           shuffle=True)

criterion = nn.CrossEntropyLoss().cuda()
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, net.parameters()),
                            lr=learning_rate,
                            momentum=momentum,
                            dampening=0,
                            weight_decay=l2_reg,  # try with zero!
                            nesterov=False)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[80, 120, 180],
                                                 gamma=0.1)

CE = []
net.train()
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        inputs = images.cuda().float()
        labels = labels.cuda().long()
        optimizer.zero_grad()
        outputs = net(inputs)
        ideal = labels.argmax(1)
        loss = criterion(outputs, ideal)
        CE.append(loss.item())
        # Backward and optimize
        loss.backward()
        optimizer.step()
        if (i+1) % 20 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'
                  .format(epoch+1, num_epochs, i+1, total_step, loss.item()))
    scheduler.step()

net.eval()
test_data = affectnet_gpu('../data/affectnet_images',
                          '../data/affectnet_labels/validation.csv')
test_loader = torch.utils.data.DataLoader(dataset=test_data,
                                          batch_size=12,
                                          shuffle=False)
correct, total = evaluate(net, test_data, test_loader)
print(correct, total)
accuracy = float(float(correct) / float(total))
print("accuracy", accuracy)
The models.py file is the following. Please note I adjust accordingly, to either 8 labels or 2 depending on if I’m using valence or expression labels.
import torch
import torch.nn as nn
import torchvision.models as models
"""
Custom VGG11
"""
class VGG11(nn.Module):
def __init__(self):
super(VGG11, self).__init__()
self.fc = nn.Linear(1000, 8)
self.net = models.vgg11(pretrained=True)
for p in self.net.parameters():
p.requires_grad=False
self.net.cuda().half()
self.fc.cuda().half()
def forward(self, x):
f = self.net(x)
y = self.fc(f)
return y
"""
Custom VGG19 with BN
"""
class VGG19BN(nn.Module):
def __init__(self):
super(VGG19BN,self).__init__()
self.layer1 = nn.Linear(1000,8)
self.net = models.vgg19_bn(pretrained=True)
for p in self.net.parameters():
p.requires_grad=False
self.net.cuda().half()
        self.layer1.cuda().half()
def forward(self,x):
f = self.net(x)
        y = self.layer1(f)
return y
"""
Custom AlexNet 2 label output
"""
class alexnet(nn.Module):
def __init__(self):
super(alexnet, self).__init__()
self.fc = nn.Linear(1000, 2)
self.net = models.alexnet(pretrained=True)
self.net.cuda().float()
self.fc.cuda().float()
def forward(self, x):
f = self.net(x)
y = self.fc(f)
return y
"""
Custom SqueezeNet 2 label output
"""
class squeezenet(nn.Module):
def __init__(self):
super(squeezenet, self).__init__()
self.fc = nn.Linear(1000, 2)
self.net = models.squeezenet1_1(pretrained=True)
self.net.cuda().float()
self.fc.cuda().float()
def forward(self, x):
f = self.net(x)
y = self.fc(f)
return y
I know I can do transfer learning, which I've tried with ImageNet-pretrained networks, and they all seem to produce worse accuracy, but I am willing to try again if you think it can work.
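For reference, the more conventional fine-tuning setup I could retry would replace AlexNet's last classifier layer instead of appending a new Linear after the 1000-way output; a minimal sketch (the layer index and the freezing choice are assumptions, not the code above):

import torch.nn as nn
import torchvision.models as models

# Fine-tuning sketch: freeze the pretrained backbone and swap the final
# 4096 -> 1000 classifier layer for a trainable 4096 -> 2 head.
def alexnet_finetune(num_classes=2):
    net = models.alexnet(pretrained=True)
    for p in net.parameters():
        p.requires_grad = False                       # freeze the backbone
    net.classifier[6] = nn.Linear(4096, num_classes)  # new, trainable head
    return net.cuda().float()

With everything but the new head frozen, only the head's roughly 8k parameters get trained, which should make overfitting on 4000 images less likely.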
The actual affectnet script has two classes, one for CPU and one for GPU.
Just adding it here for clarity:
import torch
from torchvision import transforms
import pandas as pd
import os
import stat
from PIL import Image
from labels import labels
from random import shuffle
from torch.utils.data.dataset import Dataset
import threading
MAX_GPU_MB = 10980000000  # value is in bytes (~10.98 GB), matching torch.cuda.memory_allocated()
class affectnet_gpu(Dataset):
def __init__(self, img_path, csv_path, limit=414798):
"""
Args:
image_path (string) is where the annotated images are
csv_path (string) is where the CSV files (training and testing) are
"""
self.img_path = img_path
self.labels = labels(pd.read_csv(csv_path), img_path, limit)
# *NOTE* the means and stds are on RGB 3 channel 224x224 images
self.means = [0.54019716, 0.43742642, 0.38931704]
self.stds = [0.24726599, 0.2232768, 0.21396481]
normalize = transforms.Normalize(self.means, self.stds)
self.preprocess = transforms.Compose([transforms.Resize(size=(224,224)),
transforms.ToTensor(),
normalize])
self.data = []
print("Pre-processing and allocating data")
for idx in range(len(self.labels.rows)):
if torch.cuda.memory_allocated() < MAX_GPU_MB:
self.upload_pair(idx)
else:
break
print("using affectnet set: ", len(self.data))
#
    # upload an input tensor and its corresponding output label to CUDA/GPU (as float32 in this version)
#
def upload_pair(self, idx):
"""
Args:
@param idx (unsigned int) is the item index in the dataset
"""
pair = self.process_row(idx)
in_tensor = pair[0].cuda(non_blocking=True).float()
out_tensor = pair[1].cuda(non_blocking=True).float()
self.data.append([in_tensor, out_tensor])
#
# pre-process a row by opening the image, creating an output/label tensor
# and setting it correctly, and then returning the pair, to be uploaded on the GPU
#
def process_row(self, index):
"""
Args:
@param idx (unsigned int) is the item index in the dataset
"""
item = self.labels[index]
file = self.img_path + "/" + item["file"]
img = Image.open(file)
array = self.valence(index)
#array = self.classes(index)
return self.preprocess(img).pin_memory(), array.pin_memory()
#
# access an item in the dataset using @param index
# @return a tuple of **input** tensor, **output** tensor
#
def __getitem__(self, index):
"""
Args:
@param index (unsigned int) is the item index in the dataset
@return a pair already pre-processed and allocated on the GPU
"""
return self.data[index]
#
# get dataset length (size)
#
def __len__(self):
return len(self.data)
#
# calculate classes output
#
def classes(self, index):
item = self.labels[index]
array = torch.zeros((8,), dtype=torch.long)
array[item["expression"]] = 1
return array
#
# calculate valence output
#
def valence(self, index):
"""
Args:
@param pass the `label` and create the correct output
@return a vector of [x,y,z] where:
- `x` is positive
- `y` is neutral
- `z` is negative
"""
array = torch.zeros((2,))
item = self.labels[index]
score = item["valence"]
if score > 0.0:
array = torch.tensor([1, 0], dtype=torch.long)
elif score < 0.0:
array = torch.tensor([0, 1], dtype=torch.long)
return array
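As a small aside on the labels: nn.CrossEntropyLoss expects class indices, so the one-hot vectors plus the argmax in the training loop could be collapsed. A sketch that reproduces the current behaviour (note that argmax of the all-zero neutral vector lands in class 0):

import torch

# Hypothetical simplification of valence(): return the class index directly,
# matching what labels.argmax(1) produces in the training loop.
# 0 = positive (and neutral, since argmax([0, 0]) == 0), 1 = negative.
def valence_index(score):
    return torch.tensor(1 if score < 0.0 else 0, dtype=torch.long)

With that, loss = criterion(outputs, labels) would work without the argmax.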
In general I’ve tried to follow all tutorials on PyTorch, searched on Stackoverflow and the forums here, did all examples, etc. I am suprised that the 8 label classification fails so miserably with AlexNet and I guess the 2/3 label classification when using valence, at top-1 accuracy 62% is to be expected?
PS: I haven’t added the labels function, and the evaluate is:
import torch
import torch.nn as nn
from affectnet_cpu import affectnet_cpu
from affectnet_gpu import affectnet_gpu
def evaluate(model, test_data, test_loader):
with torch.no_grad():
correct = 0
total = 0
for images, labels in test_loader:
images = images.cuda(non_blocking=True).float()
labels = labels.cuda(non_blocking=True).long()
ideal = labels.argmax(1)
# compute output
outputs = model(images)
_, predicted = torch.max(outputs.data, 1)
total += ideal.size(0)
correct += (predicted == ideal).sum().item()
return correct, total
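To dig into why the 8-label case fails, I may also log a confusion matrix during evaluation; a sketch along the lines of the loop above (the num_classes argument is an assumption matching the model's output width):

import torch

# Per-class diagnostic: accumulate a confusion matrix over the test loader.
# Rows are true classes, columns are predicted classes.
def confusion_matrix(model, test_loader, num_classes):
    cm = torch.zeros(num_classes, num_classes, dtype=torch.long)
    with torch.no_grad():
        for images, labels in test_loader:
            images = images.cuda(non_blocking=True).float()
            ideal = labels.cuda(non_blocking=True).long().argmax(1)
            predicted = model(images).argmax(1)
            for t, p in zip(ideal, predicted):
                cm[t, p] += 1
    return cm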
|
st97663
|
If I can offer a suggestion … This probably isn’t what you’re looking for (I’m actually having a similar problem getting Inception nets and things to recognise pyramids vs cubes – generated on the fly … Which I thought would be 1,000x easier … Same models that get 98% on MNIST in seconds can barely get above 50:50 … Fully batch normalised conv nets and everything).
However, if the task is principally what you’re trying to achieve, a route I believe human expression detection uses would be to first identify facial landmarks, then train another network to identify expressions on that data.
And for that you can use this to extract features:
from PIL import Image, ImageDraw
import face_recognition
image = face_recognition.load_image_file("obama.jpg")
face_landmarks_list = face_recognition.face_landmarks(image)
print("I found {} face(s) in this photograph.".format(len(face_landmarks_list)))
for face_landmarks in face_landmarks_list:
for facial_feature in face_landmarks.keys():
print("The {} in this face has the following points: {}".format(facial_feature, face_landmarks[facial_feature]))
pil_image = Image.fromarray(image)
d = ImageDraw.Draw(pil_image)
for facial_feature in face_landmarks.keys():
d.line(face_landmarks[facial_feature], width=5)
pil_image.show()
And that will give you the facial landmarks drawn onto the image relatively quickly.
And you could use a very basic neural net on the vector data that generates – only a few dozen numbers; neatly categorised by facial landmark … Certainly how I’d do it – simplify the problem as much as possible … Partly because I’m on a Macbook – first thought is always: how can I avoid having to throw 10 billion numbers at a problem?
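Something like this tiny MLP would be my starting point on the landmark vectors; the 144-dimensional input is an assumption (roughly 72 (x, y) points flattened), as is the 8-way output:

import torch.nn as nn

# Small MLP over flattened landmark coordinates instead of raw pixels.
landmark_net = nn.Sequential(
    nn.Linear(144, 64),
    nn.ReLU(inplace=True),
    nn.Linear(64, 32),
    nn.ReLU(inplace=True),
    nn.Linear(32, 8),   # one output per expression class
)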
|
st97664
|
@Swift2046 Thank you very much for the suggestion, I will definitely look into it, since AffectNet already comes with the landmarks detected. It also makes sense if it improves both processing speed and accuracy.
I’m guessing you are using DLib for the landmark detection? I’m already using MT-CNN for face detection which seems to be more accurate than OpenCV, so all I’m looking at really is extracting the landmarks.
|
st97665
|
Just this most recently:
https://pypi.org/project/facedetection/ 32
Which seems to be a sort of wrapper for Google Vision Face Detection API, Microsoft Projectoxford Detection API and Akamai Image Converter API? I think I had slightly less consistent results with OpenCV.
I took a detour after struggling to train conv nets on these more abstract tasks, and that's what led me to look at Recursive Convolutional Networks – so I might have a block that's a 7x7 conv net, then flatten it using a really long kernel, like 1x128 with 128 layers, feed that into a 2-layer LSTM, and go back to a linear layer … It seems to be a noticeable improvement on standard datasets (rough sketch below).
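Very roughly, in code the idea looks something like this; every shape here (the 7x7 stem, the 64/128 widths, the pooling used to collapse the height) is an assumption, I'm just sketching the structure:

import torch
import torch.nn as nn

# Conv stem -> collapse height so each column is a time step -> 2-layer LSTM -> linear head.
class ConvLSTMSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        # keep the width, squash the height to 1 so columns become a sequence
        self.collapse = nn.AdaptiveAvgPool2d((1, None))
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=2, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):
        f = self.features(x)             # (B, 64, H, W)
        f = self.collapse(f).squeeze(2)  # (B, 64, W)
        seq = f.permute(0, 2, 1)         # (B, W, 64): one step per column
        out, _ = self.lstm(seq)
        return self.fc(out[:, -1])       # classify from the last time step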
|
st97666
|
@Swift2046 How would you go about normalising features for ConvNets?
AFAIK, it is similar to RGB normalisation, where a value represents a coordinate in the matrix?
I’m still trying with AlexNet and managed to get 66% top-1 for Valence (-1 to 1) score and I guess I can make it go a bit higher with more data and/or better networks.
Since AffectNet already has the face features included in the CSV data I’m very inclined to test it.
|
st97667
|
Well I should warn, I’m a hack – I can get things to work, but generally benefit from talking to people who understand what they’re doing and know the terminology better.
I use this structure, so every Conv layer is batch normalised:
import torch.nn as nn
import torch.nn.functional as F

class BasicConv2d(nn.Module):
def __init__(self, in_channels, out_channels, **kwargs):
super(BasicConv2d, self).__init__()
self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
return F.relu(x, inplace=True)
But with vector features, I’d just do a simple divide by np.max or the image size – but there might be something I’m overlooking there?
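Concretely, the divide-by-image-size version would be something like this (the function name and shapes are just illustrative):

import numpy as np

# Scale (x, y) landmark points into [0, 1] before feeding them to a small net.
def normalise_landmarks(points, img_w, img_h):
    pts = np.array(points, dtype=np.float32)  # shape (N, 2) of pixel coordinates
    pts[:, 0] /= float(img_w)
    pts[:, 1] /= float(img_h)
    return pts.flatten()                      # 1-D feature vector in [0, 1]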
|