id | text
---|---
st103400 | Solved by Chun_Li in post #3
OK, I finally solved the above problem.
Usually I update the pytorch source via the command “git pull”,
which is an incremental download method (except for the first full download).
Somehow in this process the two files onnxifi_loader.c and onnxifi_loader.h
went missing! This time I did a full download via the command … |
st103401 | Seems that no one has faced the above problem? Just bumping it up to see if there is any hint or help. |
st103402 | OK, I finally solved the above problem.
Usually I update the pytorch source via the command “git pull”,
which is an incremental download method (except for the first full download).
Somehow in this process the two files onnxifi_loader.c and onnxifi_loader.h
went missing! This time I did a full download via the command
git clone --recursive https://github.com/pytorch/pytorch
Then the above two files onnxifi_loader.c and onnxifi_loader.h
appeared, and compiling pytorch completed successfully. |
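A possibly lighter-weight alternative (an assumption, not something verified in this thread): after “git pull”, the submodules that live under third_party usually have to be refreshed as well, e.g.
git pull
git submodule update --init --recursive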
st103403 | Hello, I built an FNN for classification.
However, it gets stuck (the kernel dies) during the training stage.
What can be wrong with the network?
#convert to tensors
X_train_tens = torch.from_numpy(X_np)
X_train_tens = X_train_tens.type(torch.FloatTensor)
y_train_tens = torch.from_numpy(y_np)
y_train_tens = y_train_tens.type(torch.LongTensor)
device = torch.device('cpu' if torch.cuda.is_available() else 'cpu')
batch_size = 10
input_size = 154
hiden_size = 462
num_classes = 4
learning_rate = 0.001
num_epochs = 100
input_train = autograd.Variable(X_train_tens)
target_train = autograd.Variable(y_train_tens)
#define FNN
class Net(nn.Module):
    def __init__(self, input_size, hiden_size, num_classes):
        super().__init__()
        self.fc1 = nn.Linear(input_size, hiden_size)
        self.fc2 = nn.Linear(hiden_size, num_classes)
    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        return x
model = Net(input_size=input_size, hiden_size=hiden_size, num_classes=num_classes).to(device)
opt = torch.optim.Adam(params=model.parameters(), lr=learning_rate)
loss_func = torch.nn.CrossEntropyLoss()
#run training stage
correct = 0
total = 0
counter = 0
for epoch in range(num_epochs):
    out = model(input_train).to(device)
    _, pred = out.max(1)
    total += target_train.size(0)
    correct += (pred == target_train).sum().item()
    print(input_train)
    print(pred)
    loss = loss_func(out, target_train)
    counter += 1
    print('loss train', "Epoch N", counter, loss.data[0])
    model.zero_grad()
    loss.backward()
    opt.step()
print('Accuracy of the network on train dataset: {} %'.format(100 * correct / total)) |
st103404 | Your code looks fine besides your device, which always assigns the CPU, but that shouldn’t be your issue.
Could you run your script in a terminal and see, if you get any errors?
PS: You can add code using three backticks (`). I’ve formatted your code so that it’s easier to read. |
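For reference, a one-line sketch of what the device line was presumably meant to be:
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')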
st103405 | I figured out what happened.
The FNN started working after changing:
target_train = autograd.Variable(y_train_tens)
to
target_train = y_train_tens.squeeze(1)
I calculated a confusion matrix (using “sklearn metrics”).
However, I always get the same result for 4-class classification:
[[530783 0 0 0]
[ 8097 0 0 0]
[ 20079 0 0 0]
[ 16682 0 0 0]]
Where could the error be? |
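As background (a minimal sketch, not from the original thread): nn.CrossEntropyLoss expects class-index targets of shape (N,), which is why the extra dimension had to be squeezed away:
import torch
import torch.nn as nn
loss_func = nn.CrossEntropyLoss()
out = torch.randn(10, 4)                # (N, num_classes) logits
target = torch.randint(0, 4, (10, 1))   # (N, 1), e.g. as loaded from numpy
loss = loss_func(out, target.squeeze(1).long())  # target must be an (N,) LongTensor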
st103406 | Hello everyone,
I am having some fun comparing linear algebra in TensorFlow and PyTorch. One strange thing I notice for Cholesky is that when I use GPU mode in PyTorch, the CPU (all cores) is still utilized heavily alongside 99% of the GPU, whereas TensorFlow’s Cholesky doesn’t use the CPU much. It is a bit of a concern, because the project I implemented runs significantly faster in TensorFlow. It is too early to say that it actually is faster, but the first stage of debugging, using the PyTorch bottleneck utility, showed that torch.potrf takes a noticeable share of the time. What does PyTorch do that overloads all CPU cores? How can I alleviate this problem?
Here are some results using the script from here: https://gist.github.com/awav/5511f7fdf2a92d7b417ddb4269cb9127
GPU: Nvidia 1080 Ti, 11Gb.
CPU: Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz, 4 cores.
RAM: 32GB
Pytorch: 0.5.0a0+41c08fe
Python: 3.6.6
CUDA: 9.0
Magma: 2.3.0
> python bench_cholesky.py XXX 10000 [10|100]
| | cholesky (avg. sec, 100 times) | cholesky + grads (avg. sec, 10 times) |
|--------------|-------------------------------:|--------------------------------------:|
|torch | 1.31742e+00 | 2.25207e+01 |
|tensorflow | 1.13558e+00 | 2.00329e+01 |
CPU & GPU for TensorFlow and Pytorch runs:
[image: tensorflow-vs-torch.png]
Actually, PyTorch used ~99% of the GPU all the time with some fluctuation, whereas TensorFlow stayed at 100% till the end of the test.
Thanks! |
st103407 | Alright, nobody answered, but I found a workaround: restrict the number of used threads by setting torch.set_num_threads(1). It doesn’t affect performance. The question remains why pytorch spawns the other threads. |
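A minimal sketch of the workaround described above (illustrative; it assumes a CUDA build with MAGMA and the 0.4/0.5-era API where the factorization is torch.potrf):
import torch
torch.set_num_threads(1)          # limit the CPU threads PyTorch may use
a = torch.randn(1000, 1000, device='cuda')
m = a.mm(a.t()) + 1000 * torch.eye(1000, device='cuda')   # positive-definite matrix
u = torch.potrf(m)                # Cholesky factorization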
st103408 | I am using a VGG16 pretrained network, and the GPU memory usage (seen via nvidia-smi) increases every mini-batch (even when I delete all variables or call torch.cuda.empty_cache() at the end of every iteration). It seems like some variables are stored in the GPU memory and cause the “out of memory” error. I couldn’t solve the problem by using any of the other related posts in this forum.
Will you please help me understand how I can free all possible GPU memory after each mini-batch? If possible, will you please explain to me why some variables are stored in the GPU memory and are deleted from the memory when using the “del” command?
Attached below is a minimal example that reproduces the “out of memory” error I get
Thanks a lot
transform = transforms.Compose([transforms.RandomResizedCrop(224),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = dset.ImageFolder(root="......", transform=transform )
model = models.vgg16(pretrained=True)
num_features = model.classifier[6].in_features
features = list(model.classifier.children())[:-1] # Remove last layer
features.extend([nn.Linear(num_features, 2)]) # Add our layer with 2 outputs
model.classifier = nn.Sequential(*features) # Replace the model classifier
optimizer = optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0005)
criterion = nn.CrossEntropyLoss()
model = model.cuda()
criterion = criterion.cuda()
train_loader = DataLoader(trainset, batch_size=4, shuffle=True, drop_last=True)
train_iterator = iter(train_loader)
for i in range(num_of_mini_Batches):
    img, label = next(train_iterator)
    img = Variable(img.cuda(), requires_grad=True)
    label = Variable(label.cuda())
    optimizer.zero_grad()
    outputs = model(img)
    loss = criterion(outputs, label)
    loss.backward()
    # del loss, model, outputs, optimizer, img, label, train_loader, train_iterator
    # torch.cuda.empty_cache()
    optimizer.step() |
st103409 | Solved by ptrblck in post #14
You could try to see the memory usage with the script posted in this thread.
Do you still run out of memory for batch_size=1 or are you currently testing batch_size=4?
Could you temporarily switch to an optimizer without tracking stats, e.g. optim.SGD? |
st103410 | Is that the complete code or do you create any logs etc. after the optimization?
Usually you will run out of memory, if you store a tensor with its computation graph. E.g. if you use total_loss += loss instead of total_loss += loss.item(). |
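A small sketch of that difference (illustrative, not code from the original post):
import torch
import torch.nn as nn
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
total_loss = 0.0
for _ in range(5):
    x, y = torch.randn(4, 10), torch.randint(0, 2, (4,)).long()
    loss = criterion(model(x), y)
    loss.backward()
    # total_loss += loss         # keeps every graph alive and grows memory
    total_loss += loss.item()    # stores a plain Python float; the graph can be freed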
st103411 | This is the complete code… In each successive iteration, the memory usage increases, until it runs out of memory (after two iterations).
Will you please help me understand how I can fix it?
Thanks! |
st103412 | That’s strange, as I cannot see any obvious reason for the memory growth.
Also, I used your code with a fake dataset just to make sure I’m not overlooking something, and your code runs just fine. The memory stays at the same level through 1000 iterations. |
st103413 | Before starting the “for” loop, the memory usage is as follows:
[image: SMI_Before_For.PNG]
Stopping the first iteration before entering the line “loss.backward()” results in the following memory usage:
[image: SMI_Before_backward.PNG]
After the line “loss.backward()”, the known “out of memory” error is shown.
Decreasing the batch size to “2” allows me to run through the first iteration, but the “out of memory” error is then shown during the second iteration (the same holds for a batch size of 1…).
Is it possible that the 4GB RAM available cannot handle such a small batch size (with RGB pictures of 224X224)?
Thanks! |
st103414 | Hi iariav - I am using the ants and bees dataset from the transfer learning tutorial.
Got any idea? Thanks a lot! |
st103415 | If the first iteration was successful, the second should also work. You can remove the requires_grad=True argument from your input and try it again. This would give you some more memory. |
st103416 | ptrblck - isn’t the default for requires_grad “True”? (so that even after I delete this part, the variable would still have requires_grad=True?)
After removing the “requires_grad = True”, and with a batch size of “1”, the code runs with a GPU memory usage of 3.4 GB. Increasing the batch size to “2” results in an “out of memory” error in the second iteration… Is it possible that 4GB of RAM are not enough for a batch size of “2” in this case? (as mentioned in a previous comment - it is the ants and bees dataset from the transfer learning tutorial)
Looking forward for your answer. Thanks a lot! |
st103417 | It might be, even though I’m wondering why it’s running out of memory in the second iteration.
I’ll have a look at the memory usage a bit later.
No, the default for Variables was requires_grad=False.
You could also update to PyTorch 0.4.0, where Variables and tensors were merged besides some other bug fixes and new features. |
st103418 | Great. Thank you. I will be glad to get your insights about the memory usage on your computer later (BTW - will you please tell me how much memory usage your computer shows when the batch size is 4?).
I am using Pytorch 0.4, but still use “Variable” just out of habit. I guess the only thing that would change in the Pytorch 0.4 syntax is that I would have to delete the name “Variable” and just leave “img = img.cuda()”.
Thanks a lot again |
st103419 | The training takes ~3777MB on my system for a batch size of 4 (GTX 1070, CUDA9, cuDNN7, compiled from master). |
st103420 | Well, you obviously have a much better GPU than mine
In any case, even after updating the GPU driver and CUDNN version, the program still gets stuck in the second iteration. Is there any kind of debugging that I can perform that will allow a better understanding of the problem?
Will be glad to receive your guidance.
EDIT: it seems like the program now gets stuck before the “optimizer.step()” in the second iteration. The reason for the error is: “denom = exp_avg_sq.sqrt().add_(group[‘eps’])” in the Adam optimizer routine.
Thanks a lot again. |
st103421 | You could try to see the memory usage with the script posted in this thread.
Do you still run out of memory for batch_size=1 or are you currently testing batch_size=4?
Could you temporarily switch to an optimizer without tracking stats, e.g. optim.SGD? |
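A minimal sketch of per-iteration memory tracking (assuming PyTorch 0.4+, where these counters exist; not the exact script referenced above):
import torch
def report(prefix=''):
    # memory currently held by tensors vs. by the caching allocator
    print(prefix,
          'allocated: %.1f MB' % (torch.cuda.memory_allocated() / 1024**2),
          'cached: %.1f MB' % (torch.cuda.memory_cached() / 1024**2))
# call report('before step') / report('after step') around optimizer.step()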
st103422 | Thanks a lot for the reference to the memory and cpu usage methods.
After deleting the “requires_grad = True”, the program now runs for batch_size=1 (with 2 GB of 4 GB RAM used) but gets stuck in the second iteration when I use batch_size = 2. Using the method cpuStats() before and after the line optimizer.step() shows that it still uses 2 GB of GPU RAM, but I get “out of memory” during the optimizer.step() call in the second iteration, with the error reported as:
denom = exp_avg_sq.sqrt().add_(group['eps'])
RuntimeError: cuda runtime error (2) : out of memory at c:\programdata\miniconda3\conda-bld\pytorch_1524546371102\work\aten\src\thc\generic/THCStorage.cu:58
Changing the optimizer to optim.SGD (i.e. defining optimizer = optim.SGD(model.parameters(), lr=0.001)) indeed allows me to run through the program with a batch size of 10! It also seems like the memory usage stays around 2 GB, even when I increase the batch size further…
Is there any way to use the Adam algorithm without getting “out of memory” in my case?
One more question - when I am not specifying “requires_grad = True”, how does the line loss.backward() work (where loss = criterion(outputs, label))? Shouldn’t I put requires_grad=True for at least one of the variables outputs or label? (or use “with torch.set_grad_enabled”)
Thanks a lot! |
st103423 | You could have a look at checkpointing your model, which trades memory for compute.
I think you don’t need the gradients for the input, or are you trying to manipulate the input itself?
The model parameters automatically require gradients. In a standard classification setup, you don’t need to specify requires_grad=True for either the input or the label. |
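A minimal sketch of checkpointing (assuming PyTorch 0.4+, which provides torch.utils.checkpoint; the module sizes here are illustrative, not the VGG16 model from the thread):
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint
features = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
classifier = nn.Linear(16 * 224 * 224, 2)
x = torch.randn(2, 3, 224, 224, requires_grad=True)
out = checkpoint(features, x)     # activations inside `features` are recomputed in backward
out = classifier(out.view(out.size(0), -1))
out.sum().backward()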
st103424 | Thanks a lot ptrblck! If you have any reference for an example that uses checkpointing - it will be great. Thanks a lot for all of your help ! |
st103425 | When I installed pytorch on my Ubuntu machine, I found this command:
conda install pytorch=0.3.1 -c soumith
I don’t know what “-c soumith” means, so could anyone explain what it does, or give me some direction to learn about it? |
st103426 | It was the conda channel created by Soumith.
The current PyTorch release is hosted on the pytorch channel (conda install pytorch -c pytorch). |
st103427 | Hello,
I’m doing classification using a recurrent network, and I remember using a method in sklearn with random forests that gives the classifier’s probability for each class. For example, with 3 classes it gives me the probability the classifier assigns to each class for an object. Is there a built-in function in Pytorch that does this?
Thank you |
st103428 | Solved by ptrblck in post #2
If your model returns logits, you could call F.softmax on them, which will yield the class probabilities.
Example:
import torch.nn.functional as F
model = nn.Linear(10, 5)
x = torch.randn(1, 10)
output = model(x)
prob = F.softmax(output, dim=1) |
st103429 | If your model returns logits, you could call F.softmax on them, which will yield the class probabilities.
Example:
import torch.nn.functional as F
model = nn.Linear(10, 5)
x = torch.randn(1, 10)
output = model(x)
prob = F.softmax(output, dim=1) |
st103430 | Hello,
I wanted to file an issue on github, but I found this link, so I am asking my question here. As the caffe2 library now lives in the pytorch repository on github, I thought you might have answers for me even if this is a pytorch forum.
I am currently working with caffe2 in C++ and I get an error when loading different pre-trained models: Blob data not in the workspace. I searched on the internet and found a fix. It consists of creating a blob called “data” and initializing it with random numbers before running the models. However, I can’t find a way to do it in C++. Indeed, there is a Python function called “FeedBlob” on the workspace class that does not exist in C++ (I think). I looked into the caffe2 files (workspace.h, blob.h…) but I did not find anything to modify a blob in a workspace (I can create one or delete one, but not modify it).
Should I post this on github as it seems to be only a pytorch forum? Does someone know how to fix the original error in c++? Is there a way to modify this blob in c++ that I did not find? |
st103431 | You need to use GetBlob, then copy data to it or share a pointer to another buffer.
// load network
CAFFE_ENFORCE(workSpace.RunNetOnce(initNet));
CAFFE_ENFORCE(workSpace.CreateNet(predictNet));
// load image from file, then convert it to float array.
float imgArray[3 * 32 * 32];
loadImage(FLAGS_file, imgArray);
// define a Tensor which is used to store input data
TensorCPU input;
input.Resize(std::vector<TIndex>({1, 3, 32, 32}));
input.ShareExternalPointer(imgArray);
// get "data" blob
#ifdef USE_GPU
auto data = workSpace.GetBlob("data")->GetMutable<TensorCUDA>();
#else
auto data = workSpace.GetBlob("data")->GetMutable<TensorCPU>();
#endif
// copy from input data
data->CopyFrom(input);
see 03_cpp_forward/main.cpp and Caffe2_Demo for details. |
st103432 | Hi everyone. I’m experiencing an issue that I can’t explain: I have two models which produce output tensors of the same size (8, 8192), yet calling F.mse_loss on the output of one takes less time than calling it on the other.
See this gist to run the comparison with synthetic data:
https://gist.github.com/iacolippo/9a06449fb4b819083dc61dc020dfed63 (model_comparison)
from time import time
import torch
import torch.nn as nn
import torch.nn.functional as F
class ConvNetV0(nn.Module):
def __init__(self):
super(ConvNetV0, self).__init__()
(The file preview above is truncated; see the gist for the full code.)
V0 Total: 1.3554 Loss forward: 0.0042 Loss backward: 0.9177
V0 Total: 0.0344 Loss forward: 0.0132 Loss backward: 0.0091
V0 Total: 0.1393 Loss forward: 0.1089 Loss backward: 0.0085
V0 Total: 0.1405 Loss forward: 0.1095 Loss backward: 0.0085
V0 Total: 0.1380 Loss forward: 0.1079 Loss backward: 0.0083
V0 Total: 0.1379 Loss forward: 0.1077 Loss backward: 0.0083
V0 Total: 0.1380 Loss forward: 0.1077 Loss backward: 0.0084
V0 Total: 0.1388 Loss forward: 0.1085 Loss backward: 0.0084
V1 Total: 0.7927 Loss forward: 0.0001 Loss backward: 0.5020
V1 Total: 0.0289 Loss forward: 0.0066 Loss backward: 0.0195
V1 Total: 0.0267 Loss forward: 0.0211 Loss backward: 0.0026
V1 Total: 0.0337 Loss forward: 0.0283 Loss backward: 0.0026
V1 Total: 0.0341 Loss forward: 0.0287 Loss backward: 0.0030
V1 Total: 0.0336 Loss forward: 0.0285 Loss backward: 0.0030
V1 Total: 0.0335 Loss forward: 0.0287 Loss backward: 0.0030
V1 Total: 0.0335 Loss forward: 0.0289 Loss backward: 0.0030
Where does this difference come from? |
st103433 | Hi,
You should keep in mind that the cuda api is asynchronous, so to get proper measurement, you should always call torch.cuda.synchronize() just before calling time.time().
The difference in timing here is most certainly because one model uses the gpu more, so the queuing of the operations corresponding to the loss is slightly longer. |
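A minimal sketch of the measurement pattern described above (illustrative, not the exact code from the gist):
import time
import torch
import torch.nn.functional as F
model = torch.nn.Linear(8192, 8192).cuda()
x = torch.randn(8, 8192, device='cuda')
target = torch.randn(8, 8192, device='cuda')
torch.cuda.synchronize()                 # wait for all pending kernels first
t0 = time.time()
loss = F.mse_loss(model(x), target)
torch.cuda.synchronize()                 # wait for the forward kernels to finish
print('loss forward: %.4f s' % (time.time() - t0))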
st103434 | Thanks for your answer! And thanks for the tip on torch.cuda.synchronize(); after adding it, the situation becomes:
V0 Total: 2.5034 Loss forward: 0.0111 Loss backward: 1.7371
V0 Total: 0.2608 Loss forward: 0.0004 Loss backward: 0.2297
V0 Total: 0.2657 Loss forward: 0.0061 Loss backward: 0.2203
V0 Total: 0.2759 Loss forward: 0.0068 Loss backward: 0.2181
V0 Total: 0.2736 Loss forward: 0.0067 Loss backward: 0.2147
V0 Total: 0.2715 Loss forward: 0.0047 Loss backward: 0.2148
V0 Total: 0.2697 Loss forward: 0.0040 Loss backward: 0.2150
V0 Total: 0.2614 Loss forward: 0.0001 Loss backward: 0.2159
V1 Total: 1.3935 Loss forward: 0.0006 Loss backward: 0.8544
V1 Total: 0.0790 Loss forward: 0.0041 Loss backward: 0.0590
V1 Total: 0.0854 Loss forward: 0.0068 Loss backward: 0.0594
V1 Total: 0.0631 Loss forward: 0.0059 Loss backward: 0.0390
V1 Total: 0.0669 Loss forward: 0.0048 Loss backward: 0.0536
V1 Total: 0.0844 Loss forward: 0.0068 Loss backward: 0.0584
V1 Total: 0.0596 Loss forward: 0.0040 Loss backward: 0.0368
V1 Total: 0.0669 Loss forward: 0.0026 Loss backward: 0.0560
The difference in the forward has disappeared. So do I understand correctly that by calling synchronize we cancel any influence of the queuing?
The difference in the backward is still a bit puzzling, since the V0 model has 2.3x more operations (back of the envelope calculation), but the ratio here is more like 1/4. |
st103435 | I am not sure how to explain that. There are a lot of factors that can impact the runtime, especially for such a small batch size, where the time to launch jobs on the gpu is not negligible. |
st103436 | Well, in the end what I care about is the timing that takes queuing into account; that’s what I need to know which model is faster to train on my system. Thank you, it was very instructive. |
st103437 | Hi all,
I am trying to install v0.4.0 for python-2.7.14 on linux.
My machine does not have external network access, so I have to build from source.
Does anybody know how to install v0.4.0 for python-2.7.14?
Thanks,
KMLee |
st103438 | Follow the instructions here: https://github.com/pytorch/pytorch#from-source
You’ll need to download some additional things, though, in addition to the pytorch source code. |
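A rough sketch of the offline workflow (an assumption about your setup, based on the linked instructions):
# on a machine with internet access
git clone --recursive https://github.com/pytorch/pytorch
# copy the whole directory (including third_party) to the offline machine, then:
export CMAKE_PREFIX_PATH="$(dirname $(which python))/../"
python setup.py install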
st103439 | richard:
“…though, in addition to the pytorch source code.”
I installed pytorch-master.
Thank you. |
st103440 | The code runs on the CPU, and I want to convert it to run on the GPU.
def train():
    net = C3D()
    net.cuda()
    net.load_state_dict(torch.load('a.pickle'))
    …
    criterion = torch.nn.CrossEntropyLoss().cuda()
    optimizer = optim.SGD(net.parameters(), lr=0.0001, momentum=0.9, weight_decay=0.0005)
    for i in range(1, 30001):
        img1, first, label1 = getitem(random.randint(1, 7507))
        X = get_sport_clip(img1, first)
        X = Variable(X)
        #X.cpu()
        X.cuda()
        label1 = Variable(label1)
        #label1.cpu()
        label1.cuda()
        output1 = net(X)
        loss_contrastive = criterion(output1, label1)
        optimizer.zero_grad()
        loss_contrastive.backward()
        optimizer.step()
        prediction = torch.max(F.softmax(output1), 1)[1].cuda()
        pred_y = prediction.data.numpy().squeeze()

def get_sport_clip(clip_name, first, verbose=True):
    …
    clip = np.float32(clip)
    return torch.from_numpy(clip)
RuntimeError: Expected object of type torch.FloatTensor but found type torch.cuda.FloatTensor for argument #2 ‘weight’ |
st103441 | Solved by albanD in post #2
Hi,
Keep in mind that the .cuda() function on Tensors is not inplace, you should do t = t.cuda(). |
st103442 | Hi,
Keep in mind that the .cuda() function on Tensors is not inplace, you should do t = t.cuda(). |
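A tiny illustration of the difference (a sketch; module .cuda() works in place, tensor .cuda() does not):
import torch
import torch.nn as nn
net = nn.Linear(3, 2)
net.cuda()          # modules are moved in place
t = torch.randn(4, 3)
t.cuda()            # has no effect on t itself
t = t.cuda()        # the returned copy must be assigned back
out = net(t)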
st103443 | Torch version: 0.5.0a0+e8536c0 (master branch as of 10-Jul-2018, 1.46 AM IST)
The function calls below exhibit a segmentation fault:
In CPU:
import torch
x = torch.randn(32, 100) # works
print(torch.svd(x))
x = torch.randn(33, 100) # segmentation fault
print(torch.svd(x))
x = torch.randn(32,3500); # works
print(torch.svd(x))
x = torch.randn(33, 3500) # segmentation fault
print(torch.svd(x))
##--------------------------------------------
x = torch.randn(100, 32) # works
print(torch.svd(x))
x = torch.randn(100, 33) # segmentation fault
print(torch.svd(x))
x = torch.randn(1000000,32); # works
print(torch.svd(x))
x = torch.randn(1000000,33); # segmentation fault
print(torch.svd(x))
In GPU (GeForce GTX Titan X):
import torch
x = torch.randn(16, 100).cuda(); # works
print(torch.svd(x))
x = torch.randn(17, 100).cuda(); # segmentation fault
print(torch.svd(x))
x = torch.randn(16, 3500).cuda(); # works
print(torch.svd(x))
x = torch.randn(17, 3500).cuda(); # segmentation fault
print(torch.svd(x))
x = torch.randn(16, 2500).cuda(); # works
print(torch.svd(x))
x = torch.randn(17, 2500).cuda(); # segmentation fault
print(torch.svd(x))
##-----------------------------------------------------------
x = torch.randn(100, 10).cuda(); # works
print(torch.svd(x))
x = torch.randn(100, 11).cuda(); # segmentation fault
print(torch.svd(x))
x = torch.randn(500, 10).cuda(); # works
print(torch.svd(x))
x = torch.randn(500, 11).cuda(); # segmentation fault
print(torch.svd(x))
Seems to be a bug?
I think there are probably some corner cases not handled in the code. I am not sure.
Related post: Segmentation fault for SVD implementation in GPU for large matrices |
st103444 | Solved by InnovArul in post #6
I do not know what dependencies are wrong.
I have collected pip wheels from here: torchvision, torch 0.4.0
and installed them. They do not give such segmentation faults. |
st103445 | They all seem to work for me (after adding the missing ) in the third cpu case).
I’m on a recent master with (0.5.0a0+08daed4) as well. Maybe something with your dependencies and PyTorch being compiled with incompatible compilers? I use Debian’s gcc 5 and it seems to work well.
Best regards
Thomas |
st103446 | @tom, @ptrblck I am using GCC 4.8.5, CUDA 9.0. Am I using old GCC?
I am not sure how to find out the wrong dependencies, as it did not give any error during build.
Is it possible for you to look at the build log 2 and see if anything is fishy? |
st103447 | I do not know what dependencies are wrong.
I have collected pip wheels from here: torchvision, torch 0.4.0
and installed them. They do not give such segmentation faults. |
st103448 | Hello,
I would like to represent my model in a way similar to summary in Keras.
I know this is not a new question, but the thing is that I am using GRU layers, and using summary from
from torchsummary import summary
gives me this error
TypeError: forward() missing 1 required positional argument: 'hidden'
even though I am sending it (1, hidden, number of features) as the input?
Please advise
Abeer |
st103449 | Hi!
I have a trained CNN, and while I don’t want to provide its actual weights, I want to provide some API /webpage UI to my net (Imagine a user submitting some image file or whatever)
The actual answer might be long so to make it short:
Which topics do I need to learn in order to do that,preferably on some free server hosting service(such as heroku)? |
st103450 | Just learn how to use Flask: http://flask.pocoo.org/
Then serve your PyTorch model over a REST API with Flask. |
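A minimal sketch of that idea (illustrative only; the model, preprocessing, and route name are assumptions, not from the post):
import io
import torch
from flask import Flask, request, jsonify
from PIL import Image
from torchvision import models, transforms

app = Flask(__name__)
model = models.resnet18(pretrained=True)
model.eval()
preprocess = transforms.Compose([transforms.Resize(256),
                                 transforms.CenterCrop(224),
                                 transforms.ToTensor()])

@app.route('/predict', methods=['POST'])
def predict():
    # read the uploaded image, run it through the model, return the top class
    img = Image.open(io.BytesIO(request.files['file'].read())).convert('RGB')
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return jsonify({'class_id': int(logits.argmax(1))})

if __name__ == '__main__':
    app.run()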
st103451 | I think these links also would be helpful:
Deep-Learning-in-Production
WebDNN |
st103452 | Hi,
I am currently using PyTorch to build an image search engine, and I am using Flask to serve the model. Currently I have one instance of the model, and when a user sends a request the server will use the model as a global variable.
I am just wondering if a pytorch model is thread safe or would it be necessary to use a Mutex when I run the model since another thread might be using it at the same time? |
st103453 | Are you planning to run your model(s) on a GPU?
In general, a CPU model should be thread safe (there are some exceptions though; some people report that functions that use MKL together with multiprocessing cause hanging). If you’re running CUDA models on one GPU, you will get better performance by not running multiple models at the same time, so it would be good to use a mutex here. If you’re running CUDA models on multiple GPUs, that will probably deadlock due to the nccl backend. |
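A small sketch of guarding a shared CPU model with a lock (illustrative; whether you need it depends on your setup, as discussed above):
import threading
import torch
import torch.nn as nn

model = nn.Linear(128, 10)
model.eval()
model_lock = threading.Lock()

def predict(features):
    # serialize access to the shared model across request-handler threads
    with model_lock:
        with torch.no_grad():
            return model(features)

print(predict(torch.randn(1, 128)))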
st103454 | Hi @richard, no, the server has no GPU so it will be running on CPU only. I will remove the mutex for now and see over time if users have any issues. Thanks for your help. |
st103455 | I think these links also would be helpful:
Deep-Learning-in-Production
WebDNN
Serve Models on Web |
st103456 | I describe how to serve pytorch models (ResNet-18 pre-trained on ImageNet in this example) via AWS Lambda. It’s simple, easy and no hacks or complicated workarounds required!
This post also may be helpful if you are having trouble building pytorch from source (especially if you aren’t using conda or you are on Amazon Linux or a similar dist).
Waya.ai, Inc – 12 Dec 17
Serverless deep/machine learning in production — the pythonic 🐍 way ☯
In this post we will serve a pyt🔥rch deep learning model with AWS lambda. The simplicity and effectiveness of this approach is pretty… |
st103457 | Tensorflow has Tensorflow Serving. I know pytorch is a framework in its early stages, but how do people serve models trained with pytorch. Must it be from Python? I’m specifically looking to serve from C++. |
st103458 | We don’t have a way to serve models from C++ right now, and it’s not a priority for us at this stage. There are many things like distributed training and double backward that we’ll be implementing first. Sorry! |
st103459 | Would you say that pytorch was built with serving in mind, e.g. for an API, or more for research purposes? |
st103460 | We’re more research oriented. We’re rather thinking of creating tools to export models to frameworks that are more focused on production usage like Caffe2 and TensorFlow. |
st103461 | Also you mentioned double backward. This is the first I’ve heard of it. I found a paper by Yann LeCun on double backpropagation, but was wondering whether it’s common to use such a method. |
st103462 | Hi, I’m playing with a possible solution for serving from C based on TH and THNN. It’ll be limited to statically compilable graphs of course. I should have something to share in the not so distant future. |
st103463 | @lantiga Awesome! Let us know if you need any help! I can answer any questions about the structure of our graphs and how can you export them. We still consider these things internal and they will have to change in the near future to support multiple backward and lazy execution. |
st103464 | Thank you @apaszke! I’m aware of the fact that the graph structure is going to change considerably in the future, but delving into it now while things are simpler sounds like a good idea to me.
My plan is to focus solely on inference and implement a first graph2c “transpiler”, which will generate C code directly, without exporting to an intermediate format. It may sound hacky but it could actually be enough for us for the moment and it would avoid having to struggle with polymorphic C.
Eventually, this could become a basis for a more refined solution in which we export the graph and have a C runtime execute it.
This is driven by our need for slim deploys and our determination to use pytorch in production. |
st103465 | Sure that sounds cool. It doesn’t seem hacky, it’s just a graph compiler. It’s a very good start, and will likely be capable of producing small binaries. Let us know when there’s going to be any progress or in case you have any trouble. We’ll definitely showcase your solution somewhere. |
st103466 | Let us know, we are also interested.
For now we will create a python script to export to a Torch7 model, and then use https://github.com/mvitez/thnets in production code. |
st103467 | Making progress. As soon as I get the first MNIST example to compile I’ll share what I have. |
st103468 | We need to deploy pytorch models to e.g. Android, so we need a method to export a model. This is my starting point. Can you please tell me if I am on the right track or if I am doing something totally stupid?
import sys
import torch
from torch import nn
from torchvision import models
from torch.utils.serialization import load_lua
def dump(f):
    s = str(f.__class__)
    sys.stdout.write(s[s.rfind('.')+1:-2]+'(')
    for fa in f.previous_functions:
        if isinstance(fa[0], torch.autograd.Function):
            dump(fa[0])
            sys.stdout.write(',')
        if isinstance(fa[0], torch.nn.parameter.Parameter):
            sys.stdout.write('param,')
        elif isinstance(fa[0], torch.autograd.Variable):
            sys.stdout.write('input,')
    sys.stdout.write(')')

class MyNet(nn.Module):
    def __init__(self):
        super(MyNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        self.conv2 = nn.Conv2d(3, 16, kernel_size=1, bias=True)
    def forward(self, x):
        return self.bn1(self.conv1(x)) + self.conv2(x)

#net = models.alexnet()
#net = load_lua('model.net')  # Legacy networks won't work (no support for Variables)
net = MyNet()
input = torch.autograd.Variable(torch.zeros(1, 3, 128, 128))
output = net.forward(input)
dump(output.creator)
print('')
The output for the simple MyNet will be
Add(BatchNorm(ConvNd(input,param,),param,param,),ConvNd(input,param,param,),)
Thanks |
st103469 | This will work for now, but may break in the future. We’re still actively working on autograd internals, and there are two possible ways we can take now, but we’re still thinking which one is the best. The only caveat right now is that instead of BatchNorm you may find BatchNormBackward in the graph. Are you on slack? I can keep you posted about the currently used data structures if you want. |
st103470 | So, if you’re interested, this is what I have so far: https://github.com/lantiga/pytorch2c. (I think) I’m close; I’m working on serializing THStorage right now and there are probably a number of other issues, but you can start to take a peek.
I’m not sure how profoundly things will have to be reworked with the upcoming changes in autograd, but it’s fun anyway. |
st103471 | Quick update: as of commit 9d0fd21, both the feedforward and MNIST tests pass (they verify that the output of the compiled code matches the output from PyTorch for the same input). I also added a few scripts to get up and running quickly, so things are kind of starting to shape up. /cc @apaszke @Eugenio_Culurciello |
st103472 | Since there are some people hacking with autograd internals, I’ve created a slack channel #autograd-internals. I’ll be sending @channel messages every time we make a breaking change to our representation so you can be up to date.
@lantiga Awesome! |
st103473 | Via @mvitez:
For your information, I have created a PyTorch exporter that dumps the execution graph to a pymodel.net file that thnets will be able to read. All the models in torchvision work. |
st103474 | With learnable parameters:
m = nn.BatchNorm2d(100)
Without learnable parameters:
m = nn.BatchNorm2d(100, affine=False)
I get an error with input = autograd.Variable(torch.randn(20, 100, 35, 45)):
input = Variable(torch.randn(20, 100, 35, 45))
output = m(input)
Why does it raise the error when I run it with autograd? It probably works when running without it. |
st103475 | I am a beginner, I tried the tutorial and got this error:
# Uniform, bernoulli, multinomial, normal distribution
# 2x2: A uniform distributed random matrix with range [0, 1]
r = torch.Tensor(2, 2).uniform_(0, 1)
# bernoulli
r = torch.bernoulli(r) # Size: 2x2. Bernoulli with probability p stored in elements of r
# Multinomial
w = torch.Tensor([0, 4, 8, 2]) # Create a tensor of weights
r = torch.multinomial(w, 4, replacement=True) # Size 4: 3, 2, 1, 2
# Normal distribution
# From 10 means and SD
r = torch.normal(means=torch.arange(1, 11), std=torch.arange(1, 0.1, -0.1)) # Size 10
TypeError Traceback (most recent call last)
in ()
14 # Normal distribution
15 # From 10 means and SD
—> 16 r = torch.normal(means=torch.arange(1, 11), std=torch.arange(1, 0.1, -0.1)) # Size 10
TypeError: normal() received an invalid combination of arguments - got (std=Tensor, means=Tensor, ), but expected one of:
(Tensor mean, Tensor std, torch.Generator generator, Tensor out)
(Tensor mean, float std, torch.Generator generator, Tensor out)
(float mean, Tensor std, torch.Generator generator, Tensor out)
Does someone know why? |
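Judging from the signatures listed in the error message, this PyTorch version expects the keyword mean rather than means, so something like the following should be closer (a sketch, untested; the mean and std tensors still have to have matching sizes):
r = torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0.1, -0.1))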
st103476 | Hi, all
I have a model which contains two parts. The first part “model1” takes one image and outputs a feature ‘model1_feat’. The second part “model2” takes ‘model1_feat’ and another feature ‘input_feat’ as input, and generates the final output. I want to train this model on multiple GPUs. I have written the following code:
model1 = nn.DataParallel(model1).cuda()
model1_feat = model1(input_image)
model2 = nn.DataParallel(model2).cuda()
model2_feat = model2(model1_feat, input_feat)
But it does not work. The whole thread is blocked and the model cannot generate any output. Can you help me?
BTW, the whole model works fine on a single card.
Thanks. |
st103477 | Instead of creating two models, you can create just one model like this. Then you can simply wrap the model with nn.DataParallel. |
st103478 | The .cuda() accepts a device id. So you could assign the GPUs as:
model1 = nn.DataParallel(model1).cuda(device=0)
model1_feat = model1(input_image)
model2 = nn.DataParallel(model2).cuda(device=1)
model2_feat = model2(model1_feat, input_feat)
Your current setup is replicating both of your models on all devices and spliiting the data across them. |
st103479 | Thanks, my code can run on multiple GPUs without modification, but the GPU memory is extremely unbalanced. |
st103480 | Honestly, your code raises an error: “RuntimeError: all tensors must be on devices[0]”. I think we cannot place the second part on device_id = 1. |
st103481 | Your model is like a conditional GAN; I’m also doing some experiments like yours.
I think you should put both models on multiple GPUs first, and in the training procedure pass model1_feat and input_feat to model2, like this:
model1 = nn.DataParallel(model1).cuda()
model2 = nn.DataParallel(model2).cuda()
# in training procedure
model1_feat = model1(input_image)
model2_feat = model2(model1_feat, input_feat)
and you can set the GPUs on the command line with CUDA_VISIBLE_DEVICES=0,1.
As far as I know, you cannot pass tensors between different GPUs during the running procedure. |
st103482 | Thanks for your reply. I think you are right. But the problem is that the GPU memory is extremely unbalanced. The first GPU consumes a lot of memory while the others only use a little. For example:
| 0 22043 C /usr/bin/python 11138MiB |
| 1 22043 C /usr/bin/python 5724MiB |
| 2 22043 C /usr/bin/python 5548MiB |
| 3 22043 C /usr/bin/python 5613MiB
Any ideas? |
st103483 | @WERush
I’m also confused about your problem. Can you provide some parameters of your code, like the batch size? |
st103484 | I’m also confused; maybe someone else can provide some help.
Maybe you could provide more details of your code if convenient. |
st103485 | WERush:
DataParallel
Hi, can I train or test one model using multiple GPUs?
Currently I find that when I train the model, just one GPU runs, and it is slow.
Thank you! |
st103486 | First, change your model to nn.DataParallel(model)
Then, use the command line: CUDA_VISIBLE_DEVICES=0,1 python train.py |
st103487 | gpu_ids = [2, 3, 4]
torch.cuda.set_device(gpu_ids[0]) #fix the bug for " RuntimeError: all tensors must be on devices[0] "
for use multigpu in dataset loader use: pin_memory=True
model = torch.nn.DataParallel(model, device_ids=gpu_ids)
model.cuda()
for vars in train use:
target_var = torch.autograd.Variable(target.cuda(async=True))
input_var = torch.autograd.Variable(input.cuda(async=True), requires_grad=True, volatile=False)
for vars in test stage:
target_var = torch.autograd.Variable(target.cuda(async=True))
input_var = torch.autograd.Variable(input.cuda(async=True), volatile=True) |
st103488 | Hi,
I find you could just implement one model class and use torch.nn.DataParallel to simply to train a model in parallel.
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)
https://pytorch.org/docs/stable/nn.html
for your reference. |
st103489 | Hi, but when I try this I get an error in my loss function; maybe the targets remain on gpu 1 and the model outputs on gpu 0.
This is the error I get in my loss function:
buffer[torch.eq(target, -1.)] = 0
RuntimeError: invalid argument 2: sizes do not match at /opt/conda/conda-bld/pytorch_1512946747676/work/torch/lib/THC/generated/…/generic/THCTensorMasked.cu:13
This is not an error in my code but an error popping up after using data parallelism (I tried to run my less intensive code both with and without data parallelism, and it throws the same error only when using data parallelism).
My model is memory-intensive and I have 2 GPUs with 12206MiB each. I just need to split my model to use both GPUs while training as well as testing.
thanks
BTW, my model is an FCN and its batch size is 1. |
st103490 | hi!
I tried to do what you stated but get the following error:
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1512946747676/work/torch/lib/THC/generic/THCStorage.cu line=58 error=2 : out of memory
*** Error in `python’: free(): invalid pointer: 0x00007f2a53091780 *
My gpu isn’t out of memory though. |
st103491 | I am trying to convert a pytorch model for deployment (iOS), so I’m focused on converting the model to either Caffe or ONNX (TypeError: forward() missing 2 required positional arguments: 'cap_lens' and 'hidden'). My initial error was only having “one layer” inside an LSTM, yet I encountered another problem.
I tried implementing two layers (nlayers=2) by instantiating a new rnn object called text_encoder
[image: text_encoder.png]
Yet I’m given an error that some key(s) inside the state_dict are missing. This error doesn’t occur with one layer, but I do get an error during the conversion (TypeError: forward() missing 2 required positional arguments: 'cap_lens' and 'hidden').
I’m not sure if this is happening because the model I loaded only had one layer, or if it is another problem. How can I recover missing keys while adding a new layer? Or is that impossible?
[image: loading.png] |
st103492 | If I understand your issue correctly, you are creating a two-layer RNN while loading a single layer state dict?
Do you want to initialize the second layer randomly while loading the parameters for the first one? |
st103493 | Yes that is correct. I forgot to mention this, but my main error is the following: “TypeError: forward() missing 2 required positional arguments: 'cap_lens' and 'hidden'”.
There are two cases I’m looking at as the root of the problem:
(1st Case) Diego from the other post (TypeError: forward() missing 2 required positional arguments: 'cap_lens' and 'hidden') said “I think the problem is, you are using dropout with only one layer. You need at least 2 layers to apply dropout if you are using the LSTM class (https://pytorch.org/docs/stable/nn.html#lstm).”
[image: loading.png]
I wasn’t sure if this was a big deal, because I was only given a warning when I created one layer, and the error doesn’t specify any complaints about it. After adding a second layer I started missing key(s) inside the state_dict. Your solution to initialize a second layer randomly while loading the parameters for the first sounds awesome, but I want your opinion on (Case 2) and whether we really need to solve (Case 1). I’m mostly concerned with the root of the problem. Sorry for not mentioning (Case 2) until now.
(2nd Case): If I stick with one layer I successfully load the state_dict with no loss of key(s), and am only given a warning for creating one layer for an LSTM. However, I think the main root of the problem is a lack of arguments passed into text_encoder: it is not given cap_lens and hidden for the forward function. This case is a lot more extreme, since I don’t know the true origins of the two variables. I’m using this git repo (https://github.com/taoxugit/AttnGAN) for cap_lens and the hidden variable. They’re located inside AttnGAN/code/pretrain_DAMSM.py @line_65;
however, they were generated data (prepare_data) from the class AttnGAN/code/datasets.py @line_28.
[image: prepare_data.png]
I tried to replicate prepare_data to create a new cap_lens, but I keep ending up with empty content for the data. |
st103494 | It looks like the cap_length are created in the TextDataset’s get_caption method.
I think it’s worth trying to fix this problem first. |
st103495 | Just wanted to make sure. You’re saying cap_length is the same as cap_lens, correct? |
st103496 | Based on the code, it looks like in get_caption x_len is calculated, then returned to __getitem__ as cap_len.
prepare_data gets a new sample from TextDataset (so from its __getitem__), and returns sorted_cap_lens, which is finally renamed to cap_lens.
I see your confusion and think the naming in the repo could be a bit more consistent, but maybe there is a good reason to rename the same variables. |
st103497 | Link to my conversion (https://github.com/rchavezj/ConvertML_Models/blob/master/convert.ipynb)
So I tried using the prepare_data function, and it looks like my cap_lens is getting a new matrix of data from the dataloader. I’m having trouble wrapping my head around why the hidden matrix keeps returning nothing but zeros. One of two cases comes to mind:
The pre-trained loaded model doesn’t have hidden content
The way I’m loading it, the hidden decisions disappear.
[image: jupyterNotebook.png]
At least now I’m getting an error that looks reasonable. When I try to create a fake inputDimension and feed it into text_encoder to perform coreML conversion, I get an error with argument 1 not having proper data.
[image: indicies.png] |
st103498 | The initial hidden state might be all zeros, so I don’t think it’s a bug.
I haven’t compared your code to the other code base, but this line of code seems to confirm my assumption.
The error message states that indices should be provided as torch.long.
Could you try to cast x using x = x.long()? |
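For illustration (a sketch, not from the thread): embedding layers index with long tensors, so a float input triggers exactly this kind of error:
import torch
import torch.nn as nn
emb = nn.Embedding(num_embeddings=100, embedding_dim=8)
x = torch.randint(0, 100, (2, 5)).float()   # wrong dtype for indices
x = x.long()                                # cast as suggested above
print(emb(x).shape)                         # torch.Size([2, 5, 8])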
st103499 | It looks like there’s something wrong with the input dimensions since I’m getting a message that I’m out of bound based on this forum post (Embeddings index out of range error 2)
outOfBound.png994×680 84.2 KB
I honestly thought the first layer from the bottom picture was the required dimension. Unless I need to make my random torch input into some sort of embedding format that I’m not self aware of. |