id | text
---|---
st100900 | output_padding is used in transposed conv layers. Have a look at the notes in the docs. |
st100901 | Thanks.
I see, the original input_size for a transposed conv layer is ambiguous because output_size = math.floor(xxx / stride). |
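To make the ambiguity concrete, here is a minimal sketch (not part of the original thread) showing that two different input sizes collapse to the same conv output size, and how output_padding in nn.ConvTranspose2d selects which one is restored:

import torch
import torch.nn as nn

# with stride 2, inputs of size 9 and 10 both map to size 5
conv = nn.Conv2d(1, 1, kernel_size=3, stride=2, padding=1)
print(conv(torch.randn(1, 1, 9, 9)).shape)    # torch.Size([1, 1, 5, 5])
print(conv(torch.randn(1, 1, 10, 10)).shape)  # torch.Size([1, 1, 5, 5])

# output_padding picks which of the two sizes the transposed conv produces
deconv = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, padding=1, output_padding=1)
print(deconv(torch.randn(1, 1, 5, 5)).shape)  # torch.Size([1, 1, 10, 10])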
st100902 | Is it possible to use a DataLoader to repeat the same batch with a different augmentation? For example, I would like to generate a batch with images from 1 to 10 four times, each with a different augmentation, and then for images from 11 to 20, etc. My problem is that I do not know how to keep the DataLoader from advancing the index.
Thanks. |
st100903 | Yes. I have created my own DataLoader that reads the images and masks and then applies the data augmentation, everything inside the __getitem__ function. |
st100904 | This is the loader code that I am using:
class my_loader(data.Dataset):
    def __init__(self, root, split='train', joint_transform=None,
                 transform=None, target_transform=LabelToLongTensor(),
                 download=False,
                 loader=default_loader, train=True, augm=True):
        self.root = root
        assert split in ('train', 'val', 'test', 'test_all')
        self.split = split
        self.transform = transform
        self.target_transform = target_transform
        self.joint_transform = joint_transform
        self.loader = loader
        self.train = train
        self.augm = augm
        if download:
            self.download()
        self.imgs = _make_dataset(os.path.join(self.root, self.split + '/images'))
        if self.augm:
            self.affine_seq = iaa.Sequential([
                iaa.Fliplr(0.5),
                iaa.Sometimes(0.33,
                    iaa.Affine(translate_percent={"x": (-0.1, 0.1), "y": (-0.1, 0.1)}, mode='symmetric')
                ),
            ], random_order=True)
            self.intensity_seq = iaa.Noop()
        else:
            self.affine_seq = iaa.Noop()
            self.intensity_seq = iaa.Noop()

    def __getitem__(self, index):
        path = self.imgs[index]
        img = self.loader(path)
        # make the affine augmentation deterministic so that the same
        # transform is applied to both the image and its mask
        affine_seq_deter = self.affine_seq.to_deterministic()
        if self.train:
            target = Image.open(path.replace(self.split + '/images', self.split + '/masks'))
            target = target.convert('L')
            target = from_pil(target).astype(np.uint8)
            target = target.reshape((target.shape[0], target.shape[1], 1))
            target = affine_seq_deter.augment_images([target])[0]
            target = target.reshape((target.shape[0], target.shape[1]))
            target = (target > 128).astype(np.uint8)
            target = to_pil(target)
        img = from_pil(img).astype(np.uint8)
        img = affine_seq_deter.augment_images([img])[0]
        img = img.astype(np.uint8)
        img = self.intensity_seq.augment_images([img])[0]
        img = img.astype(np.uint8)
        img = to_pil(img)
        if self.joint_transform is not None:
            if self.train:
                img, target = self.joint_transform([img, target])
            else:
                img = self.joint_transform([img])[0]
        if self.transform is not None:
            img = self.transform(img)
        if self.train:
            target = self.target_transform(target)
        else:
            target = []  # None is not accepted.
        import ntpath
        path = ntpath.basename(path)
        return img, target, path, index

    def __len__(self):
        return len(self.imgs)

    def download(self):
        raise NotImplementedError |
st100905 | If anyone is interested, I have done it by using:
sampler = torch.utils.data.SubsetRandomSampler(indices=index_you_want_to_use)
and then defining a new DataLoader with this sampler. I am sure there is a better way to do this, but it works. Personally I don’t like this method because it shuffles the dataset, so you have to be careful with the new indices. |
st100906 | I am using the CASIA-WebFace dataset for training, and the model is a 64-layer architecture with conv and PReLU layers, and finally 2 FC layers.
I am having a hard time tuning the model so it can learn.
First I went with a 22-layer model, which gives results if I train it with 100k images instead of 900k, but fails when I use 100k+ more images.
Learning rate and momentum are the only two hyperparameters. |
st100907 | Hello,
I have a network, say net1, which is a sub-network of another network, say net12 (net12 = net1 + net2, where net1 and net2 are connected together).
I have a pre-trained model for net12, so I would like to initialize net1 using the corresponding net1 weights from net12. Can anyone please help me with this?
I know this problem actually corresponds to transfer learning, but I am a newbie, so I do not have a lot of experience with either PyTorch or Deep Learning. Any help will be much appreciated! |
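A minimal sketch of one common way to do this (with toy stand-in modules, not the actual networks from the question): copy the entries of net12’s state_dict that belong to the net1 sub-module, strip the prefix, and load them into a standalone net1.

import torch
from torch import nn

class Net1(nn.Module):
    def __init__(self):
        super(Net1, self).__init__()
        self.fc = nn.Linear(10, 10)

class Net12(nn.Module):
    def __init__(self):
        super(Net12, self).__init__()
        self.net1 = Net1()          # sub-network
        self.net2 = nn.Linear(10, 2)

net12 = Net12()                     # pretend this is the pre-trained model
net1 = Net1()
# keep only the 'net1.' entries and strip that prefix
net1_state = {k.replace('net1.', '', 1): v
              for k, v in net12.state_dict().items()
              if k.startswith('net1.')}
net1.load_state_dict(net1_state)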
st100908 | Hello,
I am trying to train a network with the following very simple architecture:
class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.fc1 = nn.Linear(2048, 2048)

    def forward(self, f1, f2):
        res = torch.norm(self.fc1(F.relu(f1 - f2)), dim=1)
        return res
Then I do the following:
net = MyNet()
net.to(device)
net.eval()

n = 1000
m = 1000
dim = 2048
x = torch.randn(dim, n)
y = torch.randn(dim, m)
scores = torch.zeros(x.size()[1], y.size()[1])
x = x.to(device)
y = y.to(device)

for i in range(x.size()[1]):
    print('\r>>>> ' + str(i + 1) + '/' + str(x.size()[1]), end='')
    for j in range(y.size()[1]):
        scores[i, j] = net(x[:, i].unsqueeze(0), y[:, j].unsqueeze(0))
After I run this code I get RuntimeError: CUDA error: out of memory. Looking at the output of nvidia-smi, I observe that memory consumption grows during the computation. This seems strange to me since I set the model to eval() mode and I do not need any gradients or anything like that. All iterations are really independent, so I expect the whole code to need only as much memory as a single iteration.
Can you explain the reason for such large memory consumption and how to overcome it? I expect it is easy to overcome since all iterations are independent.
Thanks for the help! |
st100909 | Solved by JuanFMontesinos in post #2
use
with torch.no_grad():
before doing the computations; otherwise the memory is never freed, since you don't call backward. |
st100910 | use
with torch.no_grad():
before doing the computations; otherwise the memory is never freed, since you don't call backward. |
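Applied to the loop from the question, a sketch of the fix looks like this:

with torch.no_grad():  # no graph is built, so intermediate buffers are freed immediately
    for i in range(x.size()[1]):
        for j in range(y.size()[1]):
            scores[i, j] = net(x[:, i].unsqueeze(0), y[:, j].unsqueeze(0))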
st100911 | Is there any way to do real-time image classification from a webcam using a PyTorch-trained model? |
st100912 | Solved by ptrblck in post #11
I’ve checked another possibility and this is most likely the issue.
In your normalization, you have an additional zero for the third channel, which results in the inf values:
transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0,0.225])
Remove the zero and try it again. |
st100913 | You could just grab the webcam frame, preprocess it according to your pipeline, and feed it into your model.
I’ve created a small example some time ago using PyGame and OpenCV: link. |
st100914 | Do all the frames in the video need to be classified? For the classification process, I need to pre-process the image in many ways: resizing, PIL image, ToTensor, unsqueeze, and GPU tensor, so isn’t it going to affect the performance too much? I am using a ResNet-50 model for an ASL sign language classifier. |
st100915 | The frames won’t be buffered, so that you will lose some frames while your processing is performed.
In my example code there is also a FPS counter showing the current frame rate. |
st100916 | I did the same pre-processing for individual frames as I do for single-image classification, but it still keeps getting ‘nan’ for the result. I tried to classify one image per 4 frames. Here is the code!
import numpy as np
import torch
import torch.nn
import torchvision
from torch.autograd import Variable
from torchvision import transforms
import PIL
import cv2

# This is the Label
Labels = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F',
          6: 'G', 7: 'H', 8: 'I', 9: 'K', 10: 'L', 11: 'M',
          12: 'N', 13: 'O', 14: 'P', 15: 'Q', 16: 'R', 17: 'S',
          18: 'T', 19: 'U', 20: 'V', 21: 'W', 22: 'X', 23: 'Y'}

# Let's preprocess the inputted frame
data_transforms = transforms.Compose(
    [
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0, 0.225])
    ]
)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # Assigning the device which will do the calculation
model = torch.load("Resnet50_Left_Pretrained_ver1.1.pth")  # Load model to CPU
model = model.to(device)  # set where to run the model and matrix calculation
model.eval()              # set the model to eval() mode for testing

# Set the webcam
def Webcam_720p():
    cap.set(3, 1280)
    cap.set(4, 720)

def argmax(prediction):
    prediction = prediction.cpu()
    prediction = prediction.detach().numpy()
    top_1 = np.argmax(prediction, axis=1)
    score = np.amax(prediction)
    score = '{:6f}'.format(score)
    prediction = top_1[0]
    result = Labels[prediction]
    return result, score

def preprocess(image):
    image = PIL.Image.fromarray(image)  # Webcam frames are in numpy array format,
                                        # therefore transform back to a PIL image
    print(image)
    image = data_transforms(image)
    image = image.float()
    # image = Variable(image, requires_autograd=True)
    image = image.cuda()
    image = image.unsqueeze(0)  # I don't know for sure, but the ResNet-50 model seems to only
                                # accept a 4-D tensor, so we need to add another
    return image                # dimension to our 3-D tensor

# Let's start the real-time classification process!
cap = cv2.VideoCapture(0)  # Set the webcam
Webcam_720p()

fps = 0
show_score = 0
show_res = 'Nothing'
sequence = 0

while True:
    ret, frame = cap.read()  # Capture each frame
    if fps == 4:
        image = frame[100:450, 150:570]
        image_data = preprocess(image)
        print(image_data)
        prediction = model(image_data)
        result, score = argmax(prediction)
        fps = 0
        if result >= 0.5:
            show_res = result
            show_score = score
        else:
            show_res = "Nothing"
            show_score = score
    fps += 1
    cv2.putText(frame, '%s' % (show_res), (950, 250), cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 3)
    cv2.putText(frame, '(score = %.5f)' % (show_score), (950, 300), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    cv2.rectangle(frame, (400, 150), (900, 550), (250, 0, 0), 2)
    cv2.imshow("ASL SIGN DETECTER", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyWindow("ASL SIGN DETECTER") |
st100917 | Is your code working “offline”, i.e. without the webcam just using a single image? |
st100918 | Is it showing 'Nothing' or really 'nan'?
In your code, result seems to be the most likely character as a string.
However, you are comparing it to a float threshold:
if result >= 0.5:
...
This should yield an error like TypeError: '>=' not supported between instances of 'str' and 'float'.
Somehow your code still seems to run. Could you check this issue? |
st100919 | Sorry, it was my mistake. Actually, I was referring to the score being ‘nan’. The result is clearly a char (from the Labels). I checked the tensors output by data_transforms: the tensors have (-inf) values in them.
[screenshot: tensor output containing -inf values] |
st100920 | OK, I see.
Then try to debug why your image capture/pre-processing is returning NaNs.
I would check the frame returned by cap.read() first and then check all following steps for NaNs. |
st100921 | I’ve checked another possibility and this is most likely the issue.
In your normalization, you have an additional zero for the third channel, which results in the inf values:
transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0,0.225])
Remove the zero and try it again. |
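For reference, the corrected transform has exactly three means and three standard deviations, one per RGB channel:

from torchvision import transforms

normalize = transforms.Normalize([0.485, 0.456, 0.406],
                                 [0.229, 0.224, 0.225])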
st100922 | Thank you very much, Patrick! Yes, the extra normalization value was causing the ‘nan’. Since the tensor had ‘nan’ values, the scores also became ‘nan’, because the model can’t properly feed forward with NaN values, so the scores got messed up! Now the real-time ASL sign language classifier is working perfectly! |
st100923 | During the training stage, the output seems to be correct, in that different RoIs have different output scores.
tensor([[ 0.4181, -0.6164, -0.5613],
[ 0.1907, -0.3460, -0.6357]], device='cuda:0') tensor([ 1, 1], device='cuda:0')
However, when I use the trained model to validate the results, different RoIs have the same output, even though they represent completely different areas.
tensor([[ 57, 319, 360, 539],
[ 544, 94, 715, 132],
[ 57, 84, 360, 310]], dtype=torch.int32) tensor([[ 0.1655, 0.0858, -0.2437],
[ 0.1655, 0.0858, -0.2437],
[ 0.1655, 0.0858, -0.2437]], device='cuda:0')
By the way, the training loss stays flat during training and shows no trend of decreasing. I’ve tried smaller learning rates (from 0.001 to 0.0005 to 0.0002 to 0.0001), but it didn’t work.
If needed, I can post more code.
Update
Now I have found that if I use model.train() before validation, the results are still different, but if I use model.eval(), the model gives the same output; this is strange to me. |
st100924 | It sounds like your model has some Dropout layers, which would explain the behavior you’re experiencing. At test time, dropout doesn’t do anything, but during training it randomly drops some units. Take a look at the documentation for Dropout. |
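A small sketch of that behavior (independent of the model in the question):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()
print(drop(x))  # roughly half the entries are zeroed, the rest scaled by 1/(1-p)

drop.eval()
print(drop(x))  # identity at evaluation time, so repeated inputs give identical outputs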
st100925 | You are right. It turns out that the bug came from my RoI pooling implementation; all the RoIs turn out the same after RoI pooling. |
st100926 | import torch
import torch.nn as nn
import matplotlib.pyplot as plt
import numpy as np

Lr_G = 0.0001
Lr_D = 0.0001

G = nn.Sequential(
    nn.Linear(100, 128 * 6 * 6),
    nn.ReLU(),
    nn.UpsamplingBilinear2d(),
    nn.Conv2d(in_channels=1, out_channels=128, kernel_size=4),
    nn.ReLU(),
    nn.UpsamplingBilinear2d(),
    nn.Conv2d(in_channels=128, out_channels=64, kernel_size=4),
    nn.ReLU(),
    nn.Conv2d(in_channels=64, out_channels=3, kernel_size=4),
    nn.Tanh()
)

n = np.random.rand(100)
n = torch.tensor(n)
n = n.long()
print(n.type())
n = G(n)
print(n)
[screenshot of the error traceback omitted]
I just want to give it an input and see the output, but the code breaks down, and I want to know why, please. |
st100927 | In other words, I want to know the input shape and output shape. If you can give me an example, I would appreciate it. |
st100928 | You should keep the data in float instead of casting it to long.
Also, your current model won’t work, as your linear layer outputs [batch_size, 128 * 6 * 6] and you are trying to use nn.UpsamplingBilinear2d on it, which expects the input to have the shape [batch_size, channels, h, w].
Here is a small working example:
batch_size = 1

class View(nn.Module):
    def __init__(self, size):
        super(View, self).__init__()
        self.size = size

    def forward(self, x):
        x = x.view(self.size)
        return x

G = nn.Sequential(
    nn.Linear(100, 128 * 6 * 6),
    nn.ReLU(),
    View((batch_size, 128, 6, 6)),
    nn.Upsample(size=12),
    nn.Conv2d(in_channels=128, out_channels=128, kernel_size=4),
    nn.ReLU(),
    nn.Upsample(24),
    nn.Conv2d(in_channels=128, out_channels=64, kernel_size=4),
    nn.ReLU(),
    nn.Conv2d(in_channels=64, out_channels=3, kernel_size=4),
    nn.Tanh()
)

x = torch.randn(batch_size, 100)
output = G(x) |
st100929 | Thank you very much. I set the size of the second Upsample layer to 18 and the code runs, and I learned how to use many kinds of layers in PyTorch. Thanks! |
st100930 | Hi,
When allocating a tensor on cuda:1 device and using str() on this tensor, memory is allocated on cuda:0.
Why is this ? Is my tensor copied to cuda:0 ?
I am using version 0.4.1 on linux with python 3.6
The following code reproduces the problem:
import torch
t = torch.zeros((4,3), device='cuda:1') # allocation in done on cuda:1, as expected
s = str(t) # allocates on cuda:0
Screen capture from nvidia-smi after the call to str(t)
thanks ! |
st100931 | This issue is fixed in version 0.5.0
Moving model to specific also allocates memory on GPU 0
Oh ok. I understand now.
version 0.4.1: Maybe, pytorch uses cuda:0 always by default?
pytorch master 0.5.0a0+ab6afc2: I have verified that this issue is fixed.
Sorry for many edits of the answer |
st100932 | Thanks for your answer.
Just out of curiosity, what was the reason for the allocation on cuda:0 ?
Is a release planned any time soon ? |
st100933 | I would like to micromanage each batch so that it gets n1 features from folder 1, n2 features from folder 2, …
How can this be accomplished in PyTorch? |
st100934 | I tried to load two different models from different directories; however, it seems like PyTorch first searches for the file model.py in the current directory to choose the loaded model’s structure. As I want to load two models, I cannot put two different model.py files in the same directory. How can I handle this case? |
st100935 | You can save your models as model1.py and model2.py with the model names my_model1 and my_model2 respectively; then in main.py you can import these models as:
from model1 import my_model1
from model2 import my_model2
I guess this should work. |
st100936 | Or simply save the state_dict as shown here, because this way you won’t save an instance of torch.nn.Module.
Have a look at this post for details on saving instances of nn.Module. |
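A sketch of that pattern (MyModel1 here is a placeholder for whatever class defines the architecture):

import torch

model1 = MyModel1()                               # placeholder architecture class
torch.save(model1.state_dict(), 'model1.pth')     # save only the parameters

restored = MyModel1()                             # rebuild the architecture in code
restored.load_state_dict(torch.load('model1.pth'))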
st100937 | Hello everyone,
I was trying to use the PyTorch distributed package, however, I came across the following error
Traceback (most recent call last):
File "train_parallel_ch_classifier.py", line 385, in <module>
File "train_parallel_ch_classifier.py", line 385, in <module>
main(args)
File "train_parallel_ch_classifier.py", line 35, in main
main(args)
File "train_parallel_ch_classifier.py", line 35, in main
world_size = args.world_size)
File "/z/sw/packages/pytorch/0.2.0/lib/python2.7/site-packages/torch/distributed/__init__.py", line 46, in init_process_group
world_size = args.world_size)
File "/z/sw/packages/pytorch/0.2.0/lib/python2.7/site-packages/torch/distributed/__init__.py", line 46, in init_process_group
group_name, rank)
RuntimeErrorgroup_name, rank)
: world_size was not set in config at /z/tmp/build/pytorch-0.2.0/torch/lib/THD/process_group/General.cpp:17
RuntimeError: world_size was not set in config at /z/tmp/build/pytorch 0.2.0/torch/lib/THD/process_group/General.cpp:17
I am using python 2.7, with PyTorch 0.2 installed from source. Below is how I initialize
dist.init_process_group(backend = 'gloo',
init_method = '/z/home/mbanani/nonexistant',
world_size = args.world_size)
Any thoughts on what may be causing this or how I can fix it ?
Thank you |
st100938 | I’m not sure what the exact issue is, can you post a fuller example to run?
Alternatively, read our new distributed tutorial here: http://pytorch.org/tutorials/intermediate/dist_tuto.html to see if it helps. |
st100939 | For shared file initialization, you need to specify ‘file:///z/home/mbanani/nonexistant’ in init_method. |
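For example, a sketch of the shared-file initialization (the path is the one from the question; world_size and rank are example values that in practice come from your launcher or environment):

import os
import torch.distributed as dist

world_size = 2                          # example value
rank = int(os.environ.get('RANK', 0))   # example: read rank from the environment

dist.init_process_group(backend='gloo',
                        init_method='file:///z/home/mbanani/nonexistant',
                        world_size=world_size,
                        rank=rank)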
st100940 | I’m training an image segmentation network. When I give a single image as input and run the model on the CPU, it runs on only one core and takes more than 1 minute, but it scales on the GPU; and when I simply multiply two tensors, it runs on multiple cores. |
st100941 | Just starting deep learning, ask a simple question:
YOLOv3 network for target detection and classification;
The position on the training sample label is proportional to the original image.
For example, the size of the original picture is H x W;
The label is: category, x/W, y/H, w_object/W, h_object/W;
My original picture is quite large (3000 x 2500), and my detection target is very small (20 x 20).
Here are the questions:
Can I crop a part of the original picture to use as the training picture?
Is there a requirement for the aspect ratio of the cropped image? |
st100942 | What differences are there between using an optimizer (torch.optim.ChosenOptimizer) to vary learning rates and using an lr_scheduler (torch.optim.lr_scheduler.ChosenScheduler)?
What are some possible consequences of using both? |
st100943 | Solved by ptrblck in post #3
Additionally to what @kira explained, note that the lr_scheduler just adjusts the learning rate. It does not update the weights of your model. You would still have to create and call an optimizer.
Usually the scheduler is also called once in every epoch, while the optimizer is called for every mini… |
st100944 | Hi,
You can choose any optimizer, for example SGD or Adam, and create your own learning rate schedule. Otherwise, if you want an off-the-shelf scheduler, you have choices like ReduceLROnPlateau, ExponentialLR, etc.
I think the latter would be a better choice. |
st100945 | Additionally to what @kira explained, note that the lr_scheduler just adjusts the learning rate. It does not update the weights of your model. You would still have to create and call an optimizer.
Usually the scheduler is also called once in every epoch, while the optimizer is called for every mini-batch. |
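A minimal sketch of using both together (synthetic data; StepLR chosen arbitrarily): the optimizer updates the weights on every mini-batch, while the scheduler only adjusts the learning rate once per epoch.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

loader = DataLoader(TensorDataset(torch.randn(64, 10),
                                  torch.randint(0, 2, (64,))),
                    batch_size=16)

for epoch in range(30):
    for data, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()       # weight update: every mini-batch
    scheduler.step()           # learning rate update: once per epoch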
st100946 | I ran my LSTM code and it works.
This is my LSTM module:
[screenshot: lstmmodule.jpg]
This is my LSTM code:
[screenshot: lstmwork.jpg]
Then I wanted to transform my LSTM module into an LSTMCell module.
This is my LSTMCell module:
[screenshot: lstmcellmodule.jpg]
This is my LSTMCell code:
[screenshot: lstmcellwork.jpg]
It runs very slowly! And an error occurred when I set ‘retain_graph’ to False and j == 1:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
I want to make my LSTMCell module faster. How should I do that?
I noticed that the ‘grad_fn’ attribute of ‘prediction’ differs between the two cases: one is ThAddmmBackward, the other is StackBackward. This is the only difference I found.
Thank you! |
st100947 | I want to calculate the KL divergence between multivariate Gaussian mixtures (GMMs), with the parameter lists (weights, means, covariances) given as tensor arrays. Is there already an available implementation?
Thanks! |
st100948 | what about the KL divergence loss ?
criterion = nn.KLDivLoss()
see the documentation here. |
st100949 | Ranahanocka:
KLDivLoss
Thanks for your reply, but that is not for a mixture of Gaussians, I think. |
st100950 | I know that, I was just asking if someone has an implemented approximation for calculating this KL divergence. |
st100951 | Check if this could help you: https://github.com/yaqiz01/cs229-invoice-recognition/blob/e198cfc337003df1c1aa4aaa998f8082ce95dc51/bin/experiments#L477
Reference: http://cs229.stanford.edu/proj2016/report/LiuWanZhang-UnstructuredDocumentRecognitionOnBusinessInvoice-report.pdf |
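Since there is no closed form for the KL divergence between two Gaussian mixtures, one common approximation is a Monte-Carlo estimate. A sketch using torch.distributions (MixtureSameFamily is only available in newer PyTorch releases than the one discussed in this thread; the weights, means, and covariances below are made up):

import torch
from torch.distributions import Categorical, MultivariateNormal, MixtureSameFamily

def make_gmm(weights, means, covs):
    # mixture over Gaussian components with the given parameters
    return MixtureSameFamily(Categorical(probs=weights),
                             MultivariateNormal(means, covariance_matrix=covs))

p = make_gmm(torch.tensor([0.3, 0.7]), torch.randn(2, 4), torch.eye(4).repeat(2, 1, 1))
q = make_gmm(torch.tensor([0.5, 0.5]), torch.randn(2, 4), torch.eye(4).repeat(2, 1, 1))

x = p.sample((100000,))                               # draw samples from p
kl_estimate = (p.log_prob(x) - q.log_prob(x)).mean()  # E_p[log p(x) - log q(x)]
print(kl_estimate)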
st100952 | I ran into a weird bug, which I have worked around. Still, I’d like to report it here for your reference: torch.btrifact leads to a dim-transposed LU when the input tensor has size (num_batch, 1, 1) on the GPU.
Here is a code snippet
import torch
print('PyTorch version: {}'.format(torch.__version__))
print('---below is an incorrect case---')
aa = torch.rand((2,1,1))
aa_LU,_ = aa.btrifact()
bb = aa.cuda()
bb_LU,_ = bb.btrifact()
print(aa_LU.size())
print(bb_LU.size())
print('---below is a correct case---')
aa = torch.rand((2,3,3))
aa_LU,_ = aa.btrifact()
bb = aa.cuda()
bb_LU,_ = bb.btrifact()
print(aa_LU.size())
print(bb_LU.size())
I got
PyTorch version: 0.4.1
---below is an incorrect case---
torch.Size([2, 1, 1])
torch.Size([1, 2, 1])
---below is a correct case---
torch.Size([2, 3, 3])
torch.Size([2, 3, 3]) |
st100953 | It works for me on a master build. So I think that this has been fixed!
>>> import torch
>>>
>>> print('PyTorch version: {}'.format(torch.__version__))
PyTorch version: 0.5.0a0+7b905e4
>>> print('---below is an incorrect case---')
---below is an incorrect case---
>>> aa = torch.rand((2,1,1))
>>> aa_LU,_ = aa.btrifact()
>>> bb = aa.cuda()
>>> bb_LU,_ = bb.btrifact()
>>> print(aa_LU.size())
torch.Size([2, 1, 1])
>>> print(bb_LU.size())
torch.Size([2, 1, 1]) |
st100954 | We have several offline machines and want to deploy pytorch on them. To do this, We need to build a wheel package containing all dependencies (just like the official wheel package) in one machine and install the built .whl file in others. The result is:
(1) A .whl file is built on one machine and successfully installed on another one.
(2) On the second machine, it takes a long time (about 3 minutes) in the procedure “lambda t, t.to_cuda()”, the step that transforms the module’s parameters into CUDA tensors. We guess the reason is that the CUDA dependencies aren’t correctly included or used.
Our build script is similar to https://github.com/pytorch/builder/blob/master/manywheel/build.sh. We can’t directly use it since our system is Ubuntu and the machines are all offline. Our own script has the same build procedure: set the environment variables just like in build.sh (lines 5 to 11) -> use “python setup.py bdist_wheel” to build -> copy the dependencies into the wheel file (build_common.sh lines 77 to 190; build_common.sh is called at the end of build.sh). Our Python version is 3.6.2 and our CUDA version is 9.0.
We hope someone can help us, or just share your experience building a “manywheel” file. Thank you!
Update: the problem came from a CUDA version mismatch, just as ptrblck mentioned. We made a mistake when we wrote our own build script. Additionally, we have confirmed that the script https://github.com/pytorch/builder/blob/master/manywheel/build.sh can be used to build a wheel containing the dependencies. |
st100955 | Solved by ptrblck in post #2
What GPUs are you using?
Is only the first CUDA call taking a long time?
In the past this was due to a mismatch of your CUDA version which resulted in PyTorch being recompiled for your GPU. Could you try it with CUDA8 and see, if it still takes that long? |
st100956 | What GPUs are you using?
Is only the first CUDA call taking a long time?
In the past this was due to a mismatch of your CUDA version which resulted in PyTorch being recompiled for your GPU. Could you try it with CUDA8 and see, if it still takes that long? |
st100957 | (1) The GPUs are all nvidia P100, with driver 390.46.
(2) The problem only occurs in the initialization part, model = model.to_cuda(). The time taken for each training and evaluation iteration is normal.
(3) It seems that the latest version of PyTorch (v0.4.1) requires CUDA 9.0 or 9.2. I’m not sure if we can successfully compile it with CUDA 8. Maybe we can try it.
By the way, if you have experience building a PyTorch wheel containing all dependencies, just like the official whl, could you please share it with us, or tell us where to find a solution? It would be very helpful to us. |
st100958 | You could have a look at these scripts and see if you can adapt them to your platform, etc. |
st100959 | Thank you @ptrblck. You are right, the problem came from the CUDA version mismatch. We checked our script again and found that we hadn’t set the environment variable $TORCH_CUDA_ARCH_LIST correctly, which caused the CUDA version mismatch. The problem is solved now. |
st100960 | I want to include a header file from aten/src/ATen/native/test.cpp.
But I failed to include the header file.
Looking at the generated file “build/build.ninja”,
the INCLUDES variable is not sufficient for the target (test.cpp).
Is there any way to add a parameter to INCLUDES via setup.py or CMakeLists.txt?
INCLUDES = -I. -I../ -I../third_party/protobuf/src -isystem ../cmake/../third_party/eigen -isystem ../cmake/../third_party/pybind11/include -isystem ../cmake/../third_party/cub -I../third_party/onnx -Ithird_party/onnx -isystem /usr/local/cuda/include -Icaffe2/aten/src/TH -I../aten/src/TH -Icaffe2/aten/src/THC -I../aten/src/THC -I../aten/src/THCUNN -I../aten/src/ATen/cuda -I../aten/src -Icaffe2/aten/src -Iaten/src -I../aten/src/THNN -I../aten/../third_party/catch/single_include -Icaffe2/aten/src/ATen -I../aten/src/ATen/.. |
st100961 | As a workaround, I added the include directory to CMAKE_CXX_FLAGS in CMakeLists.txt.
Looking at the generated build/build.ninja, the directory parameter is added to FLAGS (not INCLUDES). Is there a better way to solve this? |
st100962 | The convolutional layers (e.g. nn.Conv2d) require groups to divide both in_channels and out_channels. The functional convolutions (e.g. nn.functional.conv2d) only require groups to divide in_channels.
This leads to confusing behavior:
import numpy as np
import torch
from torch import nn
from torch.nn import functional as F
# testing
batch_size = 1
w_img = 1
h_img = 1
c_in = 6
c_out = 9
filter_len = 1
groups = 3
image = np.arange(6, dtype=np.float32).reshape(batch_size, c_in, h_img, w_img)
filters = np.empty((c_out, c_in // groups, filter_len, filter_len), dtype=np.float32)
filters.fill(0.5)
image = torch.tensor(image)
filters = torch.tensor(filters)
features_functional = F.conv2d(image, filters, padding=filter_len // 2, groups=groups)
print(features_functional.shape[1]) # 9
layer = nn.Conv2d(c_in, c_out, filter_len, padding=filter_len // 2, groups=groups)
print(layer.out_channels) # 9
Here both forms have 9 out_channels. Changing groups to 2, however, results in 8 out_channels from the functional form and an exception thrown from the other.
I see two problems with this:
Inconsistency (despite the fact that both are documented correctly).
The functional form opaquely rounds out_channels down to the nearest integer that is divisible by groups. This is a non-obvious process for the user.
Is there a reason for this difference? If not, it seems like the functional form should throw a similar error. |
st100963 | Solved by ptrblck in post #2
It seems to be fixed in the current master, although the error message is a bit unclear:
RuntimeError: std::exception
EDIT: I’ve created an issue here. Thanks for reporting it! |
st100964 | It seems to be fixed in the current master, although the error message is a bit unclear:
RuntimeError: std::exception
EDIT: I’ve created an issue here. Thanks for reporting it! |
st100965 | Hi all,
I am a little confused about the advanced indexing in a 4d-tensor,
Take an example, suppose I have a 4d-tensor x = torch.randn(10, 3, 5, 5) (10 RGB images in a mini-batch, the size is 5*5).
And I would like to sample 3 points on each 5*5 image in this mini-batch, the axis of the points are stored in two long tensors, which are row and col. Both row and col are 10 * 3 tensors. Can I index these points without using any loop like[x[i, :, a[i], b[i]] for i in range(10)]? |
st100966 | Solved by SimonW in post #2
N = 10
C = 3
H = 5
W = 5
x = torch.randn(N, C, H, W)
indices = torch.randint(H * W, (N, C, 1), dtype=torch.long)
torch.gather(x.view(N, C, H * W), 2, indices).view(N, C) |
st100967 | N = 10
C = 3
H = 5
W = 5
x = torch.randn(N, C, H, W)
indices = torch.randint(H * W, (N, C, 1), dtype=torch.long)
torch.gather(x.view(N, C, H * W), 2, indices).view(N, C) |
st100968 | Simple question: should I build PyTorch from source? I know that for other frameworks (TensorFlow), the pre-compiled library might lack support for some CPU vector instructions (AVX, FMA, …) in order to broaden compatibility across devices/generations.
How might the PyTorch pre-compiled library differ from one built from source? |
st100969 | Building from source does support broader architectures, e.g., older GPUs, CUDA on macos, etc. But for CPU vector instructions, I think the binaries include them as well. That said, we are actively developing on this front, so building from source very possibly will give you newer and better optimized kernels. |
st100970 | In recent days, I have tried customizing the functional conv2d. I have two points of confusion.
(1) PyTorch regards F.conv2d as a function (operation), so it must have forward and backward methods.
But I cannot find them in the documentation. Where can I find them?
(2) The gradient of a vector-valued function is a Jacobian matrix, and the output of F.conv2d is a high-dimensional tensor.
What does its gradient with respect to the weight and bias look like (it is beyond my imagination)? |
st100971 | What the backward really computes is a vector-Jacobian product, rather than the full Jacobian, for efficiency reasons, and also because most people only care about the gradients of the parameters w.r.t. a single scalar metric.
The current conv2d CPU backward is here: https://github.com/pytorch/pytorch/blob/22e3b2c9c369c5fb44476eb538fa0a308df94eff/aten/src/THNN/generic/SpatialConvolutionMM.c#L247-L414 |
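A small sketch illustrating the point: you never materialize the Jacobian yourself; calling backward on a scalar produces gradients with the same shapes as the weight and bias.

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3, requires_grad=True)
b = torch.randn(4, requires_grad=True)

out = F.conv2d(x, w, b, padding=1)
out.sum().backward()        # computes a vector-Jacobian product under the hood
print(w.grad.shape)         # torch.Size([4, 3, 3, 3]), same shape as w
print(b.grad.shape)         # torch.Size([4])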
st100972 | In my training dataset I have many more negative instances (label 0) than positives (label 1).
So it seems my model is being heavily biased towards predicting low probabilities. How do I rescale the weights?
As output I have an array of probabilities, e.g. [0.1, 0.3, 0.8], and an array of labels [0, 1, 0].
I would like to try different loss functions, such as MSE or BCE, and maybe some others.
What would be the most appropriate loss function here that allows giving bigger weight to positive instances?
And how would I do it? I see that some loss functions have a weight argument, but I am not sure how to set it properly in my case.
Or, for example, BCEWithLogitsLoss has a pos_weight argument. It says “Must be a vector with length equal to the number of classes.”
So if my input to the loss function is [0.1, 0.3, 0.8], [0, 1, 0], what should my pos_weight be? Something like [0.1, 0.9]?
Thanks |
st100973 | The list [0, 1, 0] is not a list of classes but just a list of values for a single class. If you have only one class, pos_weight should contain only one element. For example, if you have 300 positive samples and 200 negative ones, you should set pos_weight to 200 / 300. It will virtually “reduce” the set of positive samples from 300 to 200.
An alternative is to sample from the negative subset with higher frequency. |
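A sketch of the single-class case with the made-up class counts from the reply (300 positive vs. 200 negative):

import torch
from torch import nn

# down-weight the positive term by 200/300, as described above
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([200.0 / 300.0]))

logits = torch.randn(8, 1)                     # raw model outputs (no sigmoid applied)
labels = torch.randint(0, 2, (8, 1)).float()   # 0/1 targets
loss = criterion(logits, labels)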
st100974 | Hmm, in the latest version of PyTorch I got errors that pos_weight is not present. So I used the weight argument, where label-0 instances have a weight of 1 and positive ones have a weight of pos/neg. I think this should be correct… |
st100975 | pos_weight first appeared only in 0.4.1. Try to update your PyTorch installation.
Note that weight is a way to say “negative samples are more important”, whereas pos_weight is a way to say “negative errors must be larger”. These approaches are different and are not equivalent in the general case. |
st100976 | Could you please explain how they are different mathematically? The difference is not clear… |
st100977 | I have a model class like this, and I tried to run it on gpu:
class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv = nn.Conv2d(3, 64, kernel_size=3)
        self.avgpool = nn.AvgPool2d(3, 1)
        self.denses = []
        self.denses.append(nn.Linear(64, 10))
        self.denses.append(nn.Linear(64, 20))

    def forward(self, x):
        x = self.conv(x)
        x = self.avgpool(x).view(-1, 64)
        out1 = self.denses[0](x)
        out2 = self.denses[1](x)
        return out1, out2

m = Model()
in_tensor = torch.randn(1, 3, 128, 128).cuda()
m.cuda()
out = m(in_tensor)
Then I have the error message:
Traceback (most recent call last):
File "try.py", line 56, in <module>
out = m(in_tensor)
File "/home/zhangzy/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "try.py", line 48, in forward
out1 = self.denses[0](x)
File "/home/zhangzy/.local/lib/python3.5/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "/home/zhangzy/.local/lib/python3.5/site-packages/torch/nn/modules/linear.py", line 55, in forward
return F.linear(input, self.weight, self.bias)
File "/home/zhangzy/.local/lib/python3.5/site-packages/torch/nn/functional.py", line 992, in linear
return torch.addmm(bias, input, weight.t())
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #4 'mat1'
Can’t I use a Python list to hold these dense branches? How should I use a structure such as a list or tuple to manage these layers properly? |
st100978 | You might want to look into nn.Sequential(), whose purpose is to chain layers together, as you are trying to do with your list.
Here is an example of a convolutional layer, followed by an activation and then pooling, using Sequential. I hope it makes sense.
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv1d(
                in_channels=8,
                out_channels=16,
                kernel_size=5,
                stride=1,
                padding=2,
            ),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
        )

    def forward(self, x):
        x = self.conv1(x.double())  # inputs (1, 3, batch_size) |
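For the original problem of keeping several branches in a list, another option (a sketch of the model from the question, not the original author’s code) is nn.ModuleList, which registers the layers so that .cuda() and .parameters() see them:

import torch
from torch import nn

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv = nn.Conv2d(3, 64, kernel_size=3)
        self.avgpool = nn.AvgPool2d(3, 1)
        # ModuleList registers each branch as a proper submodule
        self.denses = nn.ModuleList([nn.Linear(64, 10), nn.Linear(64, 20)])

    def forward(self, x):
        x = self.conv(x)
        x = self.avgpool(x).view(-1, 64)
        return self.denses[0](x), self.denses[1](x)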
st100979 | I am building a neural network for text input that has embedding, CNN, pooling, and some linear layers. However, I am having trouble making it work with variable-length input. I saw that there is something called padding_idx in the embedding layer. Should I use that? If so, what if the max length of a batch is different from the max length of the whole corpus? Thanks! |
st100980 | I’ve created some reconstructed embeddings of size [batch_size, embedding_dimension, sequence_length], and I want to be able to index_select those embeddings based on a lookup matrix of larger dimensions.
For example, I have an embedding matrix of shape [12, 4, 13], where each batch has its own embeddings which have been calculated:
c= Variable containing:
(0 ,.,.) =
Columns 0 to 8
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(1 ,.,.) =
Columns 0 to 8
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(2 ,.,.) =
Columns 0 to 8
0.4401 0.4401 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.2005 0.2005 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.1747 -0.1747 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.6075 -0.6075 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(3 ,.,.) =
Columns 0 to 8
0.3171 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.4901 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.1445 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.4877 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(4 ,.,.) =
Columns 0 to 8
-0.5359 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.4196 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.1970 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0173 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(5 ,.,.) =
Columns 0 to 8
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(6 ,.,.) =
Columns 0 to 8
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(7 ,.,.) =
Columns 0 to 8
0.3171 -0.0451 0.3721 0.3721 -0.0451 0.4610 0.2726 0.4610 0.2726
-0.4901 -0.4598 -0.4402 -0.4402 -0.4598 -0.6407 -0.3215 -0.6407 -0.3215
-0.1445 0.9977 0.2344 0.2344 0.9977 -0.0442 -0.0341 -0.0442 -0.0341
-0.4877 0.0229 0.0757 0.0757 0.0229 0.1043 0.2794 0.1043 0.2794
Columns 9 to 12
0.4610 0.4610 0.0000 0.0000
-0.6407 -0.6407 0.0000 0.0000
-0.0442 -0.0442 0.0000 0.0000
0.1043 0.1043 0.0000 0.0000
(8 ,.,.) =
Columns 0 to 8
0.3171 0.2738 0.2684 0.2222 0.7377 0.0002 0.0002 0.0000 0.0000
-0.4901 0.0548 -0.3797 -0.8453 -0.8063 -0.6694 -0.6694 0.0000 0.0000
-0.1445 0.0201 0.5294 1.0955 0.1553 0.1219 0.1219 0.0000 0.0000
-0.4877 0.5877 0.0506 0.5659 -0.6621 -0.0036 -0.0036 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(9 ,.,.) =
Columns 0 to 8
0.3171 0.4198 0.4198 -0.6232 0.6900 0.8220 -0.5885 0.3171 0.0000
-0.4901 0.1644 0.1644 0.0501 -0.0060 -0.1977 0.3468 -0.4901 0.0000
-0.1445 -0.2152 -0.2152 -0.0906 0.0158 -0.2144 0.5921 -0.1445 0.0000
-0.4877 -0.2854 -0.2854 0.2168 -0.2466 -0.1935 0.1861 -0.4877 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(10,.,.) =
Columns 0 to 8
0.3395 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.3312 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.2439 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
0.2298 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
(11,.,.) =
Columns 0 to 8
0.3171 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.4901 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.1445 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
-0.4877 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000 0.0000
Columns 9 to 12
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
[torch.FloatTensor of size 12x4x13]
I now need to re-align these embeddings based on a lookup matrix of indices of shape [12,50,3] (where I would like a row-stacked 3 embeddings per row per batch based on some indices, such as:
icd_positions = Variable containing:
(0 ,.,.) =
11 11 11
11 11 11
11 11 11
⋮
11 11 11
11 11 11
11 11 11
(1 ,.,.) =
11 11 11
11 11 11
11 11 11
⋮
11 11 11
11 11 11
11 11 11
(2 ,.,.) =
11 11 11
11 11 11
11 11 11
⋮
11 11 11
11 11 11
11 11 11
...
(9 ,.,.) =
11 11 11
11 11 11
11 11 11
⋮
11 11 11
11 11 11
11 11 11
(10,.,.) =
11 11 11
11 11 11
11 11 11
⋮
11 11 11
11 11 11
11 11 11
(11,.,.) =
11 11 11
11 11 11
11 11 11
⋮
11 11 11
11 11 11
11 11 11
[torch.LongTensor of size 12x50x3]
To get an output of [12, 50, 3, 4]
Note that each batch has its own set of indices corresponding to its own slice of the embedding matrix (e.g. indices 0-13 in icd_positions[0:1,:,:] correspond to the embeddings in c[0:1,:,:], and the same indices in icd_positions[1:2,:,:] correspond to c[1:2,:,:], etc.), so they are not unique.
How can I do a lookup? I’ve tried something along the lines of
torch.cat([ torch.index_select(a, 1, i).unsqueeze(0) for a, i in zip(c, icd_positions) ])
which I think should be close but it’s not working. |
st100981 | Solved by sarahw in post #2
Nevermind, it’s resolved! You can concatenate across dimensions using more nested list comprehensions, e.g. torch.cat([torch.cat([torch.index_select(d, 1, i).unsqueeze(0) for i in tst]).unsqueeze(0) for d,tst in zip(c,icd_positions)]) |
st100982 | Nevermind, it’s resolved! You can concatenate across dimensions using more nested list comprehensions, e.g. torch.cat([torch.cat([torch.index_select(d, 1, i).unsqueeze(0) for i in tst]).unsqueeze(0) for d,tst in zip(c,icd_positions)]) |
st100983 | Hi community.
I have some doubts on the steps required to make sure I am doing the training using the GPUs avalible.
After some checks on the available resources:
cuda = torch.cuda.is_available()
n_workers = multiprocessing.cpu_count()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Cuda: ', str(cuda))
print('Device: ', str(device))
print('Cores: ', str(n_workers))
print('GPUs available', str(torch.cuda.device_count()))
Output:
Cuda: True
Device: cuda
Cores: 24
GPUs available 8
Now, I can move both the data and the model to the avaible GPUs:
Before training, allocate the model:
model.train()
model.to(device)
During training, allocate the tensors:
images = images.to(device)
labels = labels.to(device)
My question now is: when do I need nn.DataParallel, and what functionality does it add beyond what I have already applied with device?
Thanks in advance,
Pablo |
st100984 | Since you have multiple GPUs, you could use nn.DataParallel to utilize all or some of them.
Have a look at this tutorial to apply it.
Basically your batch will be split into chunks in the batch dimension and pushed to all specified devices.
Also to speed up the data loading you should use multiprocessing in your DataLoader by setting num_workers>0. |
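A minimal sketch of the wrapping order (with a stand-in nn.Linear in place of the real model):

import torch
from torch import nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(64, 10)                 # stand-in for the real model
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)        # splits each input batch across the GPUs
model.to(device)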
st100985 | Thanks a lot!
From the tutorial, I understand that I need to parallelize my model before moving it to the device,
since if I move it with model.to(device) it will by default be copied to just one GPU, regardless of how many GPUs torch.cuda.device_count() reports. Is this right?
I am using the dataloader as follows. Is the implementation correct?
train_loader = DataLoader(dataset = train_set.dataset,
sampler=SubsetRandomSampler(train_set.indices),
batch_size = batch_size, num_workers=n_workers)
valid_loader = DataLoader(dataset = valid_set.dataset,
sampler=SubsetRandomSampler(valid_set.indices),
batch_size = batch_size, num_workers=n_workers)
test_loader = DataLoader(dataset = test_set, batch_size = 1,
shuffle = False, num_workers=n_workers)
Thanks in advance.
Regards,
Pablo |
st100986 | The gradients will be reduced to the GPU you are specifying, so you might see a slightly increased memory usage on this device.
The DataLoaders look good. Since you are using GPUs, you should also set pin_memory=True to use the pinned host memory as the GPU cannot access data directly from pageable host memory. |
st100987 | Currently when I use
torch.save(lstm_net.state_dict())
it does not save custom parameters I have added to the module. It only saves the weights and biases of the lstm and dense layers.
How do I fix it so it does? |
st100988 | are your custom parameters registered in the constructor with self.param_name = nn.Parameter(...)?
The key is to wrap them in nn.Parameter, or else they won’t be under the purview of state_dict or model.parameters(). |
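A small sketch of the difference (LSTMNet and the attribute names are made up):

import torch
from torch import nn

class LSTMNet(nn.Module):
    def __init__(self):
        super(LSTMNet, self).__init__()
        self.lstm = nn.LSTM(10, 20)
        self.scale = nn.Parameter(torch.ones(20))  # registered: appears in state_dict()
        self.offset = torch.zeros(20)              # plain tensor: silently skipped

net = LSTMNet()
print(net.state_dict().keys())  # contains 'scale' but not 'offset'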
st100989 | Hello, does anyone have a solution for this issue?
Traceback (most recent call last):
File “”, line 1, in
File “/usr/local/lib/python3.5/dist-packages/torch/init.py”, line 80, in
from torch._C import *
ImportError: /usr/local/lib/python3.5/dist-packages/torch/lib/libshm.so: undefined symbol: _ZTI24THRefcountedMapAllocator
I have tried to reinstall pytorch several times, but it doesn’t work. |
st100990 | Could you run
pip uninstall torch
a few times?
Probably there is an older version of libshm.so somewhere. |
st100991 | I have tried that, but on the second try it said torch isn’t installed, so there is no need to uninstall. |
st100992 | Are you using virtual environments?
Could there be somehow another PyTorch installation? |
st100993 | ptrblck:
libshm.so
maybe try looking for any places that this may exist:
sudo find / -name "libshm.so"
and delete any folders with torch.
Another option is to create a virtual env with conda. It works well. Here is a good guide. But basically, it’s three commands (after installing conda, of course), assuming you want Python 3.6 and CUDA 8:
conda create -n pytorch python=3.6
source activate pytorch
conda install pytorch torchvision cuda80 -c pytorch
and b/c it’s a virtual env it should not care about previous installations (in theory of course ) |
st100994 | For your information, I use default python3 from ubuntu and I try to avoid install anaconda because it will crash my other program (library installation and environment, need more time to adjust the library in anaconda python)
So based on your answer, the error is caused by last installation pytorch ? |
st100995 | Hi,
I am currently trying to figure out how to correctly initialize GRU/GRUCell weight matrices, and noticed that the shape of those matrices is the concatenation of the reset/update/new gates, resulting in a shape of 3 * hidden_size for both the input-to-hidden and hidden-to-hidden weights.
I took a look at the reset_parameters() method in the GRUCell code and saw that the variance of the initializer is computed over the hidden size, thus returning coherent results.
Then, when trying to apply an orthogonal and/or Xavier init to those matrices, I was wondering whether they should be chunked to allow PyTorch to correctly compute the fan_in/out.
Here is a snippet of what I am currently thinking; I would like to confirm this is the right way to do it:
@staticmethod
def weights_init(x):
    if isinstance(x, GRU):
        for n, p in x.named_parameters():
            if 'weight_ih' in n:
                for ih in p.chunk(3, 0):
                    torch.nn.init.xavier_uniform_(ih)
            elif 'weight_hh' in n:
                for hh in p.chunk(3, 0):
                    torch.nn.init.orthogonal_(hh)
            elif 'bias_ih' in n:
                torch.nn.init.zeros_(p)
            # elif 'bias_hh' in n:
            #     torch.nn.init.ones_(p)
    elif isinstance(x, GRUCell):
        for hh, ih in zip(x.weight_hh.chunk(3, 0), x.weight_ih.chunk(3, 0)):
            torch.nn.init.orthogonal_(hh)
            torch.nn.init.xavier_uniform_(ih)
        torch.nn.init.zeros_(x.bias_ih) |
st100996 | Hi,
I have seen there have been attempts to add Wolfe line search for the lbfgs optimizer. However, I have not seen any implementations of line search (using the wolfe conditions) for SGD. I was wondering if there is an implementation available?
Best |
st100997 | The current version of LBFGS does not support line search, so simple box constraints are not available.
If someone is looking for L-BFGS-B or a line-search-assisted L-BFGS, the following modified lbfgs.py code can be useful.
I hope a better version will come in the next release.
The ‘backtracking’, ‘goldstein’, and ‘weak_wolfe’ inexact line search methods are available,
and box constraints for params are supported.
import torch
from functools import reduce
from .optimizer import Optimizer
from math import isinf
class LBFGSB(Optimizer):
"""Implements L-BFGS algorithm.
.. warning::
This optimizer doesn't support per-parameter options and parameter
groups (there can be only one).
.. warning::
Right now all parameters have to be on a single device. This will be
improved in the future.
.. note::
This is a very memory intensive optimizer (it requires additional
``param_bytes * (history_size + 1)`` bytes). If it doesn't fit in memory
try reducing the history size, or use a different algorithm.
Arguments:
lr (float): learning rate (default: 1)
max_iter (int): maximal number of iterations per optimization step
(default: 20)
max_eval (int): maximal number of function evaluations per optimization
step (default: max_iter * 1.25).
tolerance_grad (float): termination tolerance on first order optimality
(default: 1e-5).
tolerance_change (float): termination tolerance on function value/parameter
changes (default: 1e-9).
line_search_fn (str): line search methods, currently available
['backtracking', 'goldstein', 'weak_wolfe']
bounds (list of tuples of tensor): bounds[i][0], bounds[i][1] are elementwise
lowerbound and upperbound of param[i], respectively
history_size (int): update history size (default: 100).
"""
def __init__(self, params, lr=1, max_iter=20, max_eval=None,
tolerance_grad=1e-5, tolerance_change=1e-9, history_size=100,
line_search_fn=None, bounds=None):
if max_eval is None:
max_eval = max_iter * 5 // 4
defaults = dict(lr=lr, max_iter=max_iter, max_eval=max_eval,
tolerance_grad=tolerance_grad, tolerance_change=tolerance_change,
history_size=history_size, line_search_fn=line_search_fn, bounds=bounds)
super(LBFGSB, self).__init__(params, defaults)
if len(self.param_groups) != 1:
raise ValueError("LBFGS doesn't support per-parameter options "
"(parameter groups)")
self._params = self.param_groups[0]['params']
self._bounds = [(None, None)] * len(self._params) if bounds is None else bounds
self._numel_cache = None
def _numel(self):
if self._numel_cache is None:
self._numel_cache = reduce(lambda total, p: total + p.numel(), self._params, 0)
return self._numel_cache
def _gather_flat_grad(self):
return torch.cat(
tuple(param.grad.data.view(-1) for param in self._params), 0)
def _add_grad(self, step_size, update):
offset = 0
for p in self._params:
numel = p.numel()
p.data.add_(step_size, update[offset:offset + numel].resize_(p.size()))
offset += numel
assert offset == self._numel()
def step(self, closure):
"""Performs a single optimization step.
Arguments:
closure (callable): A closure that reevaluates the model
and returns the loss.
"""
assert len(self.param_groups) == 1
group = self.param_groups[0]
lr = group['lr']
max_iter = group['max_iter']
max_eval = group['max_eval']
tolerance_grad = group['tolerance_grad']
tolerance_change = group['tolerance_change']
line_search_fn = group['line_search_fn']
history_size = group['history_size']
state = self.state['global_state']
state.setdefault('func_evals', 0)
state.setdefault('n_iter', 0)
# evaluate initial f(x) and df/dx
orig_loss = closure()
loss = orig_loss.data[0]
current_evals = 1
state['func_evals'] += 1
flat_grad = self._gather_flat_grad()
abs_grad_sum = flat_grad.abs().sum()
if abs_grad_sum <= tolerance_grad:
return loss
# variables cached in state (for tracing)
d = state.get('d')
t = state.get('t')
old_dirs = state.get('old_dirs')
old_stps = state.get('old_stps')
H_diag = state.get('H_diag')
prev_flat_grad = state.get('prev_flat_grad')
prev_loss = state.get('prev_loss')
n_iter = 0
# optimize for a max of max_iter iterations
while n_iter < max_iter:
# keep track of nb of iterations
n_iter += 1
state['n_iter'] += 1
############################################################
# compute gradient descent direction
############################################################
if state['n_iter'] == 1:
d = flat_grad.neg()
old_dirs = []
old_stps = []
H_diag = 1
else:
# do lbfgs update (update memory)
y = flat_grad.sub(prev_flat_grad)
s = d.mul(t)
ys = y.dot(s) # y*s
if ys > 1e-10:
# updating memory
if len(old_dirs) == history_size:
# shift history by one (limited-memory)
old_dirs.pop(0)
old_stps.pop(0)
# store new direction/step
old_dirs.append(s)
old_stps.append(y)
# update scale of initial Hessian approximation
H_diag = ys / y.dot(y) # (y*y)
# compute the approximate (L-BFGS) inverse Hessian
# multiplied by the gradient
num_old = len(old_dirs)
if 'ro' not in state:
state['ro'] = [None] * history_size
state['al'] = [None] * history_size
ro = state['ro']
al = state['al']
for i in range(num_old):
ro[i] = 1. / old_stps[i].dot(old_dirs[i])
# iteration in L-BFGS loop collapsed to use just one buffer
q = flat_grad.neg()
for i in range(num_old - 1, -1, -1):
al[i] = old_dirs[i].dot(q) * ro[i]
q.add_(-al[i], old_stps[i])
# multiply by initial Hessian
# r/d is the final direction
d = r = torch.mul(q, H_diag)
for i in range(num_old):
be_i = old_stps[i].dot(r) * ro[i]
r.add_(al[i] - be_i, old_dirs[i])
if prev_flat_grad is None:
prev_flat_grad = flat_grad.clone()
else:
prev_flat_grad.copy_(flat_grad)
prev_loss = loss
############################################################
# compute step length
############################################################
# directional derivative
gtd = flat_grad.dot(d) # g * d
# check that progress can be made along that direction
if gtd > -tolerance_change:
break
# reset initial guess for step size
if state['n_iter'] == 1:
t = min(1., 1. / abs_grad_sum) * lr
else:
t = lr
# optional line search: user function
ls_func_evals = 0
if line_search_fn is not None:
# perform line search, using user function
# raise RuntimeError("line search function is not supported yet")
if line_search_fn == 'weak_wolfe':
t = self._weak_wolfe(closure, d)
elif line_search_fn == 'goldstein':
t = self._goldstein(closure, d)
elif line_search_fn == 'backtracking':
t = self._backtracking(closure, d)
self._add_grad(t, d)
else:
# no line search, simply move with fixed-step
self._add_grad(t, d)
if n_iter != max_iter:
# re-evaluate function only if not in last iteration
# the reason we do this: in a stochastic setting,
# no use to re-evaluate that function here
loss = closure().data[0]
flat_grad = self._gather_flat_grad()
abs_grad_sum = flat_grad.abs().sum()
ls_func_evals = 1
# update func eval
current_evals += ls_func_evals
state['func_evals'] += ls_func_evals
############################################################
# check conditions
############################################################
if n_iter == max_iter:
break
if current_evals >= max_eval:
break
if abs_grad_sum <= tolerance_grad:
break
if d.mul(t).abs_().sum() <= tolerance_change:
break
if abs(loss - prev_loss) < tolerance_change:
break
state['d'] = d
state['t'] = t
state['old_dirs'] = old_dirs
state['old_stps'] = old_stps
state['H_diag'] = H_diag
state['prev_flat_grad'] = prev_flat_grad
state['prev_loss'] = prev_loss
return orig_loss
def _copy_param(self):
original_param_data_list = []
for p in self._params:
param_data = p.data.new(p.size())
param_data.copy_(p.data)
original_param_data_list.append(param_data)
return original_param_data_list
def _set_param(self, param_data_list):
for i in range(len(param_data_list)):
self._params[i].data.copy_(param_data_list[i])
def _set_param_incremental(self, alpha, d):
offset = 0
for p in self._params:
numel = p.numel()
p.data.copy_(p.data + alpha*d[offset:offset + numel].resize_(p.size()))
offset += numel
assert offset == self._numel()
def _directional_derivative(self, d):
deriv = 0.0
offset = 0
for p in self._params:
numel = p.numel()
deriv += torch.sum(p.grad.data * d[offset:offset + numel].resize_(p.size()))
offset += numel
assert offset == self._numel()
return deriv
def _max_alpha(self, d):
offset = 0
max_alpha = float('inf')
for p, bnd in zip(self._params, self._bounds):
numel = p.numel()
l_bnd, u_bnd = bnd
p_grad = d[offset:offset + numel].resize_(p.size())
if l_bnd is not None:
from_l_bnd = ((l_bnd-p.data)/p_grad)[p_grad<0]
min_l_bnd = torch.min(from_l_bnd) if from_l_bnd.numel() > 0 else max_alpha
if u_bnd is not None:
from_u_bnd = ((u_bnd-p.data)/p_grad)[p_grad>0]
min_u_bnd = torch.min(from_u_bnd) if from_u_bnd.numel() > 0 else max_alpha
max_alpha = min(max_alpha, min_l_bnd, min_u_bnd)
return max_alpha
def _backtracking(self, closure, d):
# 0 < rho < 0.5 and 0 < w < 1
rho = 1e-4
w = 0.5
original_param_data_list = self._copy_param()
phi_0 = closure().data[0]
phi_0_prime = self._directional_derivative(d)
alpha_k = 1.0
while True:
self._set_param_incremental(alpha_k, d)
phi_k = closure().data[0]
self._set_param(original_param_data_list)
if phi_k <= phi_0 + rho * alpha_k * phi_0_prime:
break
else:
alpha_k *= w
return alpha_k
def _goldstein(self, closure, d):
# 0 < rho < 0.5 and t > 1
rho = 1e-4
t = 2.0
original_param_data_list = self._copy_param()
phi_0 = closure().data[0]
phi_0_prime = self._directional_derivative(d)
a_k = 0.0
b_k = self._max_alpha(d)
alpha_k = min(1e4, (a_k + b_k) / 2.0)
while True:
self._set_param_incremental(alpha_k, d)
phi_k = closure().data[0]
self._set_param(original_param_data_list)
if phi_k <= phi_0 + rho*alpha_k*phi_0_prime:
if phi_k >= phi_0 + (1-rho)*alpha_k*phi_0_prime:
break
else:
a_k = alpha_k
alpha_k = t*alpha_k if isinf(b_k) else (a_k + b_k) / 2.0
else:
b_k = alpha_k
alpha_k = (a_k + b_k)/2.0
if torch.sum(torch.abs(alpha_k * d)) < self.param_groups[0]['tolerance_grad']:
break
if abs(b_k-a_k) < 1e-6:
break
return alpha_k
def _weak_wolfe(self, closure, d):
# 0 < rho < 0.5 and rho < sigma < 1
rho = 1e-4
sigma = 0.9
original_param_data_list = self._copy_param()
phi_0 = closure().data[0]
phi_0_prime = self._directional_derivative(d)
a_k = 0.0
b_k = self._max_alpha(d)
alpha_k = min(1e4, (a_k + b_k) / 2.0)
while True:
self._set_param_incremental(alpha_k, d)
phi_k = closure().data[0]
phi_k_prime = self._directional_derivative(d)
self._set_param(original_param_data_list)
if phi_k <= phi_0 + rho*alpha_k*phi_0_prime:
if phi_k_prime >= sigma*phi_0_prime:
break
else:
alpha_hat = alpha_k + (alpha_k - a_k) * phi_k_prime / (phi_0_prime - phi_k_prime)
a_k = alpha_k
phi_0 = phi_k
phi_0_prime = phi_k_prime
alpha_k = alpha_hat
else:
alpha_hat = a_k + 0.5*(alpha_k-a_k)/(1+(phi_0-phi_k)/((alpha_k-a_k)*phi_0_prime))
b_k = alpha_k
alpha_k = alpha_hat
if torch.sum(torch.abs(alpha_k * d)) < self.param_groups[0]['tolerance_grad']:
break
if abs(b_k-a_k) < 1e-6:
break
return alpha_k |
st100998 | I’ve been trying to use this code. The _backtracking linesearch method doesn’t take bounds. Both _weak_wolfe and _goldstein take bounds, but don’t obey them. Further, the _max_alpha method breaks unless you are working with both upper and lower bounds. I’m not convinced that it calculates the max_alpha parameter correctly either.
I think that altering lbfgs.py so that the step method can take a linesearch_function parameter (rather than throwing an error if it is not None), and then having the new __init__ and linesearch methods be the only new pieces of code in lbfgsb.py, would be a cleaner implementation.
Even more ideally, have the linesearch functions live in separate files within torch.optim, agnostic to LBFGS. Then the new __init__ would be the only piece of code within lbfgsb.py, which would allow the LBFGS step method to call the new linesearch methods with bounds.
I’m trying to work on a fix on my own, but I’m hoping that the author has already fixed these issues and could offer a new version to play around with. |
st100999 | @rapidsnow I am also interested in this topic, but I cannot follow you as there is no lbfgsb.py file. Could you be a bit more specific? |