st98768
|
So .item() can be used only for the condition that the tensor has only 1 element right? If the tensor has multiple elements, then .item() is not applicable?
Just to be clear, for the following case, which is a line copied from some outdated implementation, are we using detach() instead?
batch.trg.data[i, 0]
|
st98769
|
You can change that to batch.trg.detach()[i, 0] unless you do funny things with it afterwards. (But note that keeping it around will keep the storage of batch.trg around, not just the first column, same with .data.)
There also is .tolist() if you want to convert a tensor into a list (of lists).
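For example, a quick sketch of the three options (values made up):
import torch
t = torch.arange(6.0).view(2, 3)
v = t[0, 0].item()     # a Python float; only works for one-element tensors
d = t.detach()[0, 0]   # a tensor cut from the graph (shares storage with t)
l = t.tolist()         # [[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]]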
Best regards
Thomas
|
st98770
|
Hi
I’m trying to train a network by my own loss function.
I can train a network with the loss functions included in PyTorch, but I run into trouble when I try to define my own loss function.
Indeed, I need a correct, detailed example of training a network with a custom loss function.
My loss function is:
|
st98771
|
Isn’t this the max margin loss?
That said, the beauty of PyTorch is that you aren’t forced to designate something as a loss function; to define your own, just write out the computation and call backward on the result.
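For example, a minimal sketch of a max margin loss written inline (the margin value and the score names are placeholders):
import torch

def max_margin_loss(pos_score, neg_score, margin=1.0):
    # hinge: max(0, margin - pos + neg), averaged over the batch
    return torch.clamp(margin - pos_score + neg_score, min=0).mean()

# assuming pos_score and neg_score come out of your network:
loss = max_margin_loss(pos_score, neg_score)
loss.backward()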
Best regards
Thomas
|
st98772
|
I am wondering whether using multiple optimizers can change the performance of the network compared to having one optimizer.
Say, I have Encoder-Decoder network but it has three encoders and three decoders in it.
In this case, should I define each architecture and optimize separately as below?
encoder1_optimizer = optim.Adam(encoder1.parameters(), lr=learning_rate)
encoder2_optimizer = optim.Adam(encoder2.parameters(), lr=learning_rate)
encoder3_optimizer = optim.Adam(encoder3.parameters(), lr=learning_rate)
decoder1_optimizer = optim.Adam(decoder1.parameters(), lr=learning_rate)
decoder2_optimizer = optim.Adam(decoder2.parameters(), lr=learning_rate)
decoder3_optimizer = optim.Adam(decoder3.parameters(), lr=learning_rate)
Or should I define the entire architecture in one class and use one optimizer?
AllEncoder_Decoder_optimizer = optim.Adam(AllEncoder_Decoder.parameters(), lr=learning_rate)
|
st98773
|
Solved by tom in post #2
|
st98774
|
There isn’t much point in having several optimizers of the same class, as the per-Parameter-computations will be the same.
If you want different learning rates or other hyperparameters, you can define parameter groups in one optimizer (you can see this e.g. in fast.ai, which advocates using different learning rates for different layers a lot).
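For example, a sketch with made-up learning rates:
optimizer = optim.Adam([
    {'params': encoder1.parameters(), 'lr': 1e-3},  # per-group learning rate
    {'params': decoder1.parameters(), 'lr': 1e-4},
], lr=learning_rate)  # default for groups that don't set their own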
Best regards
Thomas
|
st98775
|
At first I trained a model with low GPU usage and low power consumption. I suspected the data pipeline could not feed the GPU fast enough, since there is some CPU-intensive pre-processing for the dataset. So I wrote a C++ extension for it, which also uses multiple threads to speed up the process.
The C++ extension did speed up the throughput of the pre-processing, and the GPU usage and power draw both went up considerably, but the total time consumption also went up.
My code is based on this repo: thstkdgus35/EDSR-PyTorch. I am using Windows, and changed multiprocessing to multithreading since multiprocessing is not available there.
If I train some networks, like RCAN, the GPU usage is very low, and if I train SRCNN (7 layers of CNN, around 0.7M parameters) there is almost no load.
I would like to know what is happening, and why higher GPU usage and higher power draw come with longer total time.
|
st98776
|
Guys, is this setup good for computer vision tasks?
K80 GPU
4x vCPU
61GB RAM
K80 GPU: 12GB integrated RAM; 5.6 TFLOPS
I don't have much of an idea about speed. Currently I use about 8 GB of the 12 GB on an NVIDIA GPU. I am running U-Net for medical imaging. Would like some views. I am getting this for $0.36 an hour.
|
st98777
|
I am trying to implement a matrix factorization algorithm in pytorch. Specifically, I have a class with matrices A,B,C and want to train A and B such that AB = C. My first try was to write the training as
for i in range(self.max_iter):
    self.optimizer.zero_grad()
    loss = ((torch.mm(self.A, self.B) - self.C)**2).mean()
    loss.backward()
    self.optimizer.step()
The variables are kept as
class nmf_solver:
    def __init__(self, A, B, C, step_size=STEP_SIZE):
        """
        solves | AB - C |^2
        """
        self.A = Variable(A.cuda(), requires_grad=True)
        self.B = Variable(B.cuda(), requires_grad=True)
        self.C = C.cuda()
        self.step_size = step_size
        self.optimizer = optim.SGD([self.A, self.B], lr=self.step_size, momentum=0.9)
But I’m met with a “RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed.”. Since there isn’t any input to the matrix factorization algorithm, I don’t really know if nn.Module is appropriate to subclass.
Is there any way to make this work in pytorch?
|
st98778
|
The key here is how you keep A, B, and C. Apparently they aren’t leaf variables in your graph, but your snippet doesn’t show what they are.
It is completely legitimate to have modules without input. Their main function is to hold the trainable Parameters, in your case A and B, along with the calculation.
Best regards
Thomas
|
st98779
|
Dear Thomas,
Thanks for your reply. I’ve edited the post to show how I keep the variables. How should I add A and B, assuming that I did subclass nn.Module? Thanks!!
best,
Johan
|
st98780
|
Variable isn’t a thing anymore.
The mistake in your code is that you do the .cuda() after requiring the gradient. .cuda() counts as a calculation for autograd purposes, so the result is not a leaf variable anymore.
I’d probably subclass nn.Module and set self.A = nn.Parameter(A) (requires_grad is automatic for nn.Parameter).
You don’t want to include your optimizer in the module itself, if only so you can do the following in the right order: a) instantiate your class b) .cuda() your instance c) set up the optimizer.
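A minimal sketch of that ordering (the class name is made up; A, B, C are your tensors from above):
import torch.nn as nn
import torch.optim as optim

class NMFSolver(nn.Module):
    def __init__(self, A, B, C):
        super(NMFSolver, self).__init__()
        self.A = nn.Parameter(A)      # requires_grad is automatic
        self.B = nn.Parameter(B)
        self.register_buffer('C', C)  # fixed target, moved along by .cuda()

solver = NMFSolver(A, B, C)  # a) instantiate
solver.cuda()                # b) move to GPU
optimizer = optim.SGD(solver.parameters(), lr=0.01, momentum=0.9)  # c) optimizer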
Best regards
Thomas
|
st98781
|
Hello.
I’m a newbie using PyTorch and trying to implement my idea using this toolkit.
I am trying to create two backpropagation streams with the outputs from the middle and the end of the network.
For example, there is a network like
input -> conv1 -> conv2 -> conv3 -> fc1 -> fc2
Most classifiers use only the output of the fc2 layer.
But I want to use fc1's output and fc2's output simultaneously: the output of fc1 is for my custom loss function, and fc2's for an ordinary cross-entropy loss function.
I’ve implemented a network that returns the outputs of fc1 and fc2 below:
class TestNet(nn.Module):
    def __init__(self, num_classes=10):
        super(TestNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=5),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.classifier1 = nn.Linear(256, 128)
        self.classifier2 = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)
        x1 = self.classifier1(x)
        x2 = self.classifier2(x1)
        return x1, x2
and in the training section, my code uses x1, x2 like below:
(criterion_c is cross entropy loss in nn, and criterion_g is my custom loss function.)
x1, x2 = net(inputs)
loss_c = criterion_c(x2, targets)
loss_g = criterion_g(x1, vectors)
loss = weight_g_loss * loss_g + loss_c
loss.backward()
But during training, there is a problem.
Only the gradients of loss_g are backpropagated, not loss_c's.
Only loss_g is decreasing.
When I tested deleting loss_g, then loss_c decreased.
So my question is:
Is there any way to make two streams of backpropagation run well simultaneously?
I think the two-way setup is the problem…
Thanks.
|
st98782
|
Your code looks fine. Could you check the values of both losses to see if maybe loss_c is a lot smaller than loss_g?
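For instance, something like this inside your training loop:
loss_c = criterion_c(x2, targets)
loss_g = criterion_g(x1, vectors)
print('loss_c: {:.4f}, loss_g: {:.4f}'.format(loss_c.item(), loss_g.item()))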
|
st98783
|
import torch
import torch.nn as nn

class SomeModel(nn.Module):
    def __init__(self, gpu_ids=[]):
        super(SomeModel, self).__init__()
        self.gpu_ids = gpu_ids
        mean = torch.autograd.Variable(torch.Tensor([0.5, 0.5, 0.5]).view(1, 3, 1, 1)).cuda(gpu_ids[0])
        std = torch.autograd.Variable(torch.Tensor([0.5, 0.5, 0.5]).view(1, 3, 1, 1)).cuda(gpu_ids[0])
        self.register_buffer('mean', mean)
        self.register_buffer('std', std)

    def forward(self, input):
        input = (input - self.mean) / self.std
        if len(self.gpu_ids) and isinstance(input.data, torch.cuda.FloatTensor):
            return nn.parallel.data_parallel(self.net, input, self.gpu_ids)
        else:
            return self.net(input)
When I run the following commands in the Python interpreter, a strange error occurs:
>>> from a import SomeModel
>>> model = SomeModel(gpu_ids=[1,2])
>>> m = model.state_dict()
>>> model.load_state_dict(m)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/data/image_server/extra/user/pangwong/anaconda2/envs/pytorch0.3.1/lib/python2.7/site-packages/torch/nn/modules/module.py", line 519, in load_state_dict
.format(name, own_state[name].size(), param.size()))
RuntimeError: While copying the parameter named mean, whose dimensions in the model are torch.Size([1, 3, 1, 1]) and whose dimensions in the checkpoint are torch.Size([1, 3, 1, 1]).
Environment:
python2.7
pytorch0.3.1
|
st98784
|
Solved by SimonW in post #2
0.3.1 is really old. Could you try a newer version?
|
st98785
|
I am using the model from https://github.com/gpleiss/efficient_densenet_pytorch. It is similar to the torchvision densenet but it includes bottlenecking, compression, and a demo script which runs training, including over multiple GPUs. I am training on a p2.xlarge with 8 K80s. CUDA visible devices are 0-7 and it loads the model on all the GPUs, but when I do a small profile myself the backward pass takes 100x the time of a forward pass.
Here is a small timing profile of an iteration:
Epoch: [1/300] Iter: [2/162] Time 93.462 (110.256) Loss 25298.0527 (25176.3350)
Time to create batch on GPU: 0.0001
Forward pass through model: 0.4165
Loss Calculation: 0.0031
Accuracy Calculation: 0.0001
Loss update Calculation: 0.0000
Zero grads: 0.0016
Loss backward: 92.5070
Optim step: 0.0167
My only changes to the network have been to use my own data, moving it to 1D x 2 channels. And I changed the dimensions of the last linear layer to work with l1_loss (because I am running PyTorch 0.4.0 and I understand that mse_loss is bugged). A quick view of the end of my network is below. Any ideas on where to start with this?
      (denselayer16): _DenseLayer(
        (norm1): BatchNorm1d(372, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu1): ReLU(inplace)
        (conv1): Conv1d(372, 48, kernel_size=(1,), stride=(1,), bias=False)
        (norm2): BatchNorm1d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu2): ReLU(inplace)
        (conv2): Conv1d(48, 12, kernel_size=(3,), stride=(1,), padding=(1,), bias=False)
      )
    )
    (norm_final): BatchNorm1d(384, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  )
  (classifier): Linear(in_features=384, out_features=1, bias=True)
)
Here is also my forward pass function:
def forward(self, x):
    features = self.features(x)
    out = F.relu(features, inplace=True)
    out = F.avg_pool1d(out, kernel_size=out.size(2)).view(out.size(0), -1)
    out = self.classifier(out)
    return out
|
st98786
|
What's the difference between nn.ReLU() and nn.functional.relu()?
And if there is no difference, why such duplication…?
|
st98787
|
Solved by ptrblck in post #2
|
st98788
|
nn.ReLU() creates an nn.Module which you can add e.g. to an nn.Sequential model.
nn.functional.relu on the other hand is just the functional API call to the relu function, so that you can add it e.g. in your forward method yourself.
Generally speaking it might depend on your coding style whether you prefer modules for the activations or the functional calls. Personally I prefer the module approach if the activation has an internal state, e.g. PReLU.
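For illustration, both styles side by side (the layer sizes are made up):
import torch.nn as nn
import torch.nn.functional as F

# module style: the activation is part of the model definition
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU())

# functional style: the activation is called explicitly in forward
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc = nn.Linear(10, 10)
    def forward(self, x):
        return F.relu(self.fc(x))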
|
st98789
|
One example could be the availability of hook registration as below.
How to register hook function for functional form?
I’m now trying to implement guided backpropagation from https://arxiv.org/abs/1412.6806, which alters the gradient calculation in the ReLU layers.
When a given model is implemented in a way that contains nn.ReLU() instances as submodules, I can easily register a backward hook function on every submodule, as below.
model = resnet101(pretrained=True)

def hook(module, grad_input, grad_output):
    if isinstance(module, nn.ReLU):
        # do something
        return changed_grad_input

for i, v in mod…
|
st98790
|
Looking at the code of the export function, I don’t see anything being done that distinguishes the model as something for mobile (as in Android/iOS). Is there any advantage to using this over Caffe2Backend.onnx_graph_to_caffe2_net to get the init_net and predict_net if one wants to run it on mobile?
|
st98791
|
My code makes use of 2 networks - net-1 and net-2
When I move net-1 to cuda, there is no problem. However, within the same script when I move net-2 to cuda I get an error :
RuntimeError: cuDNN version mismatch: PyTorch was compiled against 7005 but linked against 5105
Even stranger is that when I try to move net-2 to cuda in a python terminal, I don’t get any error.
It is worth noting that I am using net-1 which was written by someone else, and am combining it with my net-2. The whole code works fine on my local machine, but I am getting this error on a server machine.
Any advice would be greatly appreciated.
Thanks
|
st98792
|
The only module I have loaded is cudnn 7.0; however, it does have the option of cudnn 5 (which I have not loaded). I am not sure how to check which version is being used currently.
|
st98793
|
I found a workaround, which doesn’t seem ideal. I disabled cudnn here:
/home/gtb85/pyENV/lib/python3.5/site-packages/torch/backends/cudnn/__init__.py
and I don’t get the same error.
|
st98794
|
Hi, I currently have cuDNN7 installed on my machine. I presumed that I installed PyTorch based on that. When I try to run a piece of code, however, I get the following error:
File "/home/sam/.virtualenvs/cv/lib/python2.7/site-packages/torch/nn/functional.py", line 1011, in affine_grid
return AffineGridGenerator.apply(theta, size)
File "/home/sam/.virtualenvs/cv/lib/python2.7/site-packages/torch/nn/_functions/vision.py", line 75, in forward
AffineGridGenerator._enforce_cudnn(theta)
File "/home/sam/.virtualenvs/cv/lib/python2.7/site-packages/torch/nn/_functions/vision.py", line 66, in _enforce_cudnn
assert cudnn.is_acceptable(input)
File "/home/sam/.virtualenvs/cv/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py", line 49, in is_acceptable
if _libcudnn() is None:
File "/home/sam/.virtualenvs/cv/lib/python2.7/site-packages/torch/backends/cudnn/__init__.py", line 25, in _libcudnn
'but linked against {}'.format(compile_version, __cudnn_version))
RuntimeError: cuDNN version mismatch: PyTorch was compiled against 6021 but linked against 7001
It does sound like it needs cuDNN6, but isn’t there any option that I could use PyTorch together with cuDNN7?
|
st98795
|
@smth @smtak
"RuntimeError: cuDNN version mismatch: PyTorch was compiled against 6021 but linked against 5110"
In my case this is the error message I get. What should I do?
How do I remove the path to cuDNN5 from my $LD_LIBRARY_PATH?
|
st98796
|
It sounds like you have cudnn 5 installed. If you’re sure you have cudnn6 on your system:
In a terminal:
echo $LD_LIBRARY_PATH
# should print something like /public/apps/NCCL/2.0.5/lib:/public/apps/cudnn/v6.0/cuda/lib64:/public/apps/cuda/8.0/lib64
export LD_LIBRARY_PATH=<the string above but without the cudnn5 path>
|
st98797
|
@richard
under /usr/local/cuda/lib64 I do have libcudnn.so.5 and libcudnn.so.6.
Then I typed cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2 in a shell, and it printed out that my current cudnn version is 6021.
Moreover, I removed export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64\${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} from .bashrc, and replaced it with export LD_LIBRARY_PATH=/home/adnan/ML_packages/cudnn6/cuda/lib64. It is not working.
|
st98798
|
Hello I have the same problem, but my system is windows 7 64 bit, CUDA 8, anaconda, python 3.6, with pytorch 0.4.0.
RuntimeError: cuDNN version mismatch: PyTorch was compiled against 7005 but linked against 6021
How to solve the same problem in windows?
Thanks
|
st98799
|
Hi, I have a cudnn version mismatch error when I try to run my PyTorch code.
RuntimeError: cuDNN version mismatch: PyTorch was compiled against 6021 but linked against 5110.
I have checked my cuDNN version, which is 6.
Why am I getting this error? What should I do?
If I have to remove the path to cuDNN5 from my $LD_LIBRARY_PATH, how can I do that?
|
st98800
|
At the end of the first epoch my program suddenly crashes when num_workers > 0.
It’s a Jupyter notebook and I don’t see any logs in the server log output.
|
st98801
|
I’m getting an error RuntimeError: ONNX export failed: Couldn't export operator aten::gru when trying to export to the ONNX format. I compiled from master (the latest commit in git log is from Saturday). Googling returns results suggesting that this error means some functionality I’m using isn’t supported (for exporting). I’m just using a very basic one-layer GRU though…
I did come across this, which seems to be the solution to my problem. But do you think it’s feasible for someone not that skilled in C++ to accomplish this?
|
st98802
|
Solved by yxw in post #2
|
st98803
|
Did you set the parameter batch_first=True on the GRU unit when training your net? It should be set to False; ONNX does not support it when it is True.
|
st98804
|
Yes I did have that set to True! Thank you.
But I think I will try out the new JIT stuff anyway.
|
st98805
|
Hello, how is your progress in trying to modify the JIT stuff?
I don’t think just modifying the batch_first flag is a long-term solution to the conversion problem, because the original PyTorch model was not trained by me.
|
st98806
|
The new C++ frontend works quite nicely (though I couldn’t manage to compile it for Android, my target platform); just follow the libtorch tutorial.
|
st98807
|
I apologize if this is not pytorch-specific enough.
I have a decently imbalanced data set for a time series classification problem (several classes, with the smallest class being ~1/8 as common as the most common, and the most common being about 50% of the total). Unfortunately, due to the continuous nature of the data, and my desire to use the entire time series of each sample as the input (it is expected that contextual information is very important in classification), I cannot over- or under-sample at all.
Therefore I set the weights to be the inverse proportion of the class prevalence in the training set (i.e. largest weight 8, smallest weight 1). I will say that currently I am still adding capacity, because I can’t seem to really overfit the data yet. (although huge models just underfit).
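For concreteness, the weighting I describe looks something like this (the exact values here are illustrative, not my real ones):
import torch
import torch.nn as nn

# inverse-prevalence weights: 1 for the most common class, up to 8 for the rarest
class_weights = torch.tensor([1.0, 2.0, 4.0, 8.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)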
I was noticing that the most common class was getting a lower accuracy than the other classes. So, in a lets-try-it-and-see test, I created a separate model to just classify that one class vs other, which achieved ~80% accuracy. I then fed this prediction into the first model as a single feature for each chunk of time that has to be classified. So, obviously the model should have a lot more information about how to classify that particular class. However, the loss is stopping at pretty much exactly the same value, with basically the same confusion matrix.
I’m flummoxed as to where to go from here. The class weights themselves seem to be limiting me, but when I’ve tried unweighted, there is too much of a hit on all of the other classes except the most common. I’ve thought about building separate models for each class, but even if each individual model does well, there still has to be a “decider” model that takes their output, which will again have to deal with the imbalance.
|
st98808
|
Your idea of using “specialists models” sounds interesting and I would suggest to dig a bit deeper into this approach.
As far as I understand you tried to train a separate model to classify the worst performing class against all others (stage0 model).
After this is done you are feeding this prediction into your base model (stage1 model) and try to classify all samples.
How are you feeding the prediction of your stage0 into stage1? It might for example be a scaling issue, so that this particularly useful feature is difficult to learn because your other features are in a completely different range, thus masking the prediction. Could you check it and rescale the features if necessary?
I’m not sure I understand the limitations of your dataset correctly. Could you provide some sample data with random values, e.g.:
nb_samples = 100
nb_featues = 10
seq_len = 45
nb_classes = 5
data = torch.randn(...)
target = torch.empty(..., dtype=torch.long).random_(nb_classes)
I would like to take a look to see whether a weighted sampling approach really is not possible.
|
st98809
|
Yes, you are correct in the order of what I did. Train the “stage0” model to predict the worst performing, but also most prevalent class (probably worst performing due to the class weights used in crossentropyloss). And after that, I fed just the softmax output for the class (not the “all others”) output, into the stage1 network.
As far as the scaling. First, my “stage1” (or “base”/“original”) network has several different inputs that don’t all come in at the same level. So when I added this new feature I tried 3 different models where I put it in 1) at the bottom of the “feature extraction block” (couple dense layers with relus and dropouts), 2) at the top of the “feature extraction block” so it was basically an additional feature, and 3) at the bottom of the “classification block” (couple of dense layers with relus and dropouts at the top of the network). There are also batchnorms in between those sections. And I feel like the network made immediate use of this new feature, as the initial training epochs started off with a substantially lower loss, but then trended down to the same loss I was achieving before.
I describe the data here (Per-class and per-sample weighting). However, I’ll summarize again:
The nature of the problem is to classify every segment of a time series recording into 1 of X classes. I have thousands of recordings, and each recording is approximately 1000 segments long (but varies considerably, which I handle with padding). My inputs are the raw data for the recording for each segment (about 1x6000), a transform of this (7x200), and a second transform (1x100). (I have tried eliminating one or more of these inputs, but the performance suffers.) It is known that it is not possible to classify the segments in isolation, so context is necessary. How much context is unknown, but probably more than 10 segments on either side. Because I don’t want to deal with playing with the context size, I instead just train on the entire recording (for the batch) at the same time. I am using a relatively new idea (temporal convolutional networks) instead of LSTMs to deal with the time aspect, and they seem to be working really well.
Let me know if you have further questions about the nature of the problem.
Work since last post
So, I was thinking about the fact that cross-entropy loss as the target function may not be the best for this problem (even though it is “the” loss for multi-class classification). So I wrote my own, which makes use of the confidences in the predictions and mirrors the actual accuracy function (Cohen’s kappa) that I’m using (but inverted, so that the loss decreases as the “accuracy” increases). This eked out some additional gains, but at the expense of the network completely ignoring the smallest class.
My next attempt is to modify the loss in the following way: instead of using one kappa for the entire confusion matrix, calculate each individual kappa (one for each class-vs-others) and take the product of them. The thinking is that if any one kappa suffers, then the overall loss will grow. And only if all of the kappas are doing well will the loss decrease.
I’m still not 100% convinced of this method. But it seems like imbalanced multi-class classification is still just a tough problem (especially when you can’t over or under sample).
|
st98810
|
Running a model I trained with pytorch in caffe2. The error is:
256
starting
terminate called after throwing an instance of 'at::Error'
what(): [enforce fail at slice_op.h:28] data.ndim() >= starts.size(). 2 vs 3. Error from operator:
input: "1" output: "OC2_DUMMY_80/initial_h" name: "" type: "Slice" arg { name: "ends" ints: 1 ints: -1 ints: -1 } arg { name: "starts" ints: 0 ints: 0 ints: 0 } device_option { device_type: 0 device_id: 0 }
Aborted (core dumped)
EDIT: My [1:] and [:-1] slicing does not seem to be the problem
I’ve got a slice operation in my forward step. I do not understand where this error is coming from because none of the sliced tensors have 2 dimensions (they have 3). (the input vector, see below, is used to index into an embedding so it should change from 2 to 3 dimensional)
The fact that it says input: "1" and initial_h makes me think the problem is possibly with my hidden state, but that is never sliced, and making changes to it has no effect at all (increasing the length of the input vector has no effect; changes to the number of dimensions cause it to crash during the rnn call, as one would expect).
Here is the inference code:
int main(int argc, const char* argv[]) {
    caffe2::NetDef init_net, predict_net;
    ReadProtoFromFile("/work/case/init_net.pb", &init_net);
    ReadProtoFromFile("/work/case/predict_net.pb", &predict_net);
    caffe2::Predictor* pred;
    pred = new caffe2::Predictor(init_net, predict_net);
    std::vector<long int> data = {1, 1};
    caffe2::CPUContext cpu_context;
    caffe2::TensorCPU input({2, 1}, data, &cpu_context);
    std::vector<float> init_state;
    readInitState(init_state);
    std::cout << init_state.size() << std::endl;
    caffe2::TensorCPU hidden_input({1, 1, 256}, init_state, &cpu_context);
    caffe2::Predictor::TensorList input_vec = {input, hidden_input};
    caffe2::Predictor::TensorList output_vec;
    std::cout << "starting" << std::endl;
    (*pred)(input_vec, &output_vec);
    std::cout << "finished" << std::endl;
    for (int i = 0; i < output_vec[0].size(); i++) {
        std::cout << (output_vec[0].template data<float>())[i] << std::endl;
    }
}
And here is the model in PyTorch:
class CRNN(nn.Module):
    def __init__(self, num_inp, num_hid, num_ff, num_layers, num_out, inv_priors):
        super(CRNN, self).__init__()
        self.embed = nn.Embedding(num_inp, num_hid)
        self.rnn = nn.GRU(num_hid, num_hid, num_layers)
        self.fc_emb_skip = nn.Linear(num_hid, num_ff)
        self.fc1 = nn.Linear(num_hid + num_ff, num_ff)
        self.fc2 = nn.Linear(num_ff, 3)
        self.hidden_init = nn.Parameter(t.randn(num_layers, 1, num_hid).type(t.FloatTensor), requires_grad=True)
        self.num_inp = num_inp
        self.num_hid = num_hid
        self.num_out = num_out
        self.num_layers = num_layers

    def forward(self, x, hidden):
        emb = self.embed(x)
        output, hidden = self.rnn(emb, hidden)
        y_skip = F.elu(self.fc_emb_skip(emb[:-1]))
        joined_out = t.cat((y_skip, output[1:],), dim=2)
        outview = joined_out.contiguous().view(joined_out.size(0) * joined_out.size(1), joined_out.size(2))
        y = F.elu(self.fc1(outview))
        probs = F.log_softmax(self.fc2(y), 1)
        return probs, hidden
|
st98811
|
For https://github.com/promach/pytorch-pruning/blob/master/prune.py#L97, how could I solve the following error?
Note: this line of code does not use divide operations, unlike other code.
[phung@archlinux pytorch-pruning]$ python finetune.py --prune
/usr/lib/python3.7/site-packages/torchvision/transforms/transforms.py:187: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
/usr/lib/python3.7/site-packages/torchvision/transforms/transforms.py:562: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
warnings.warn("The use of the transforms.RandomSizedCrop transform is deprecated, " +
Accuracy: 0.6483
Number of prunning iterations to reduce 67% filters 5
Ranking filters…
Layers that will be prunned {10: 25, 26: 58, 21: 52, 24: 51, 28: 122, 17: 77, 12: 25, 14: 19, 19: 61, 7: 13, 5: 4, 0: 1, 2: 4}
Prunning filters…
Traceback (most recent call last):
  File "finetune.py", line 270, in <module>
    fine_tuner.prune()
  File "finetune.py", line 228, in prune
    model = prune_vgg16_conv_layer(model, layer_index, filter_index)
  File "/home/phung/Documents/Grive/Personal/Coursera/Machine_Learning/pruning/pytorch-pruning/prune.py", line 97, in prune_vgg16_conv_layer
    old_linear_layer.out_features)
  File "/usr/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 48, in __init__
    self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: 'float' object cannot be interpreted as an integer
[phung@archlinux pytorch-pruning]$
|
st98812
|
A quick fix is to do
int(old_linear_layer.out_features)
I think there is an issue during model initialization.
In https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/linear.py#L47 we directly assign out_features as it is instead of typecasting it to int, and that creates this problem.
Will follow up further with this issue.
|
st98813
|
It seems that explicit typecast to int does not work as shown below:
[phung@archlinux pytorch-pruning]$ python finetune.py --prune
/usr/lib/python3.7/site-packages/torchvision/transforms/transforms.py:187: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
warnings.warn("The use of the transforms.Scale transform is deprecated, " +
/usr/lib/python3.7/site-packages/torchvision/transforms/transforms.py:562: UserWarning: The use of the transforms.RandomSizedCrop transform is deprecated, please use transforms.RandomResizedCrop instead.
warnings.warn("The use of the transforms.RandomSizedCrop transform is deprecated, " +
Accuracy: 0.6483
Number of prunning iterations to reduce 67% filters 5
Ranking filters…
Layers that will be prunned {28: 130, 17: 64, 10: 27, 24: 67, 26: 63, 19: 64, 21: 51, 5: 4, 12: 17, 14: 17, 0: 2, 7: 3, 2: 3}
Prunning filters…
Traceback (most recent call last):
  File "finetune.py", line 270, in <module>
    fine_tuner.prune()
  File "finetune.py", line 228, in prune
    model = prune_vgg16_conv_layer(model, layer_index, filter_index)
  File "/home/phung/Documents/Grive/Personal/Coursera/Machine_Learning/pruning/pytorch-pruning/prune.py", line 97, in prune_vgg16_conv_layer
    int(old_linear_layer.out_features))
  File "/usr/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 48, in __init__
    self.weight = Parameter(torch.Tensor(out_features, in_features))
TypeError: 'float' object cannot be interpreted as an integer
[phung@archlinux pytorch-pruning]$
|
st98814
|
Did you type cast both the inputs to the nn.Linear() module?
github.com/promach/pytorch-pruning/blob/master/prune.py#L96-L97:
torch.nn.Linear(old_linear_layer.in_features - params_per_input_channel,
                old_linear_layer.out_features)
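i.e. casting both arguments (a sketch using the variable names from the linked prune.py):
torch.nn.Linear(int(old_linear_layer.in_features - params_per_input_channel),
                int(old_linear_layer.out_features))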
|
st98815
|
I noticed the torch.load source code as below:
def persistent_load(saved_id):
    assert isinstance(saved_id, tuple)
    typename = saved_id[0]
    data = saved_id[1:]
    if typename == 'module':
        # Ignore containers that don't have any sources saved
        if all(data[1:]):
            _check_container_source(*data)
        return data[0]
    elif typename == 'storage':
        data_type, root_key, location, size, view_metadata = data
        if root_key not in deserialized_objects:
            deserialized_objects[root_key] = restore_location(
                data_type(size), location)
        storage = deserialized_objects[root_key]
        if view_metadata is not None:
            view_key, offset, view_size = view_metadata
            if view_key not in deserialized_objects:
                deserialized_objects[view_key] = storage[offset:offset + view_size]
            return deserialized_objects[view_key]
        else:
            return storage
    else:
        raise RuntimeError("Unknown saved id type: %s" % saved_id[0])
Can anyone tell me what saved_id is?
|
st98816
|
The above question is not important to me. The following is what I suspect is an IO bottleneck in torch.load when loading a pickle file:
if f_should_read_directly and f.tell() == 0:
    # legacy_load requires that f has fileno()
    # only if offset is zero we can attempt the legacy tar file loader
    try:
        return legacy_load(f)
    except tarfile.TarError:
        # if not a tarfile, reset file offset and proceed
        f.seek(0)
legacy_load opens the file with taropen, but when the file is not a tar file, this path seems costly, and some unnecessary IO operations are triggered.
Adding a condition check seems helpful:
if f_should_read_directly and f.tell() == 0 and tarfile.is_tarfile(fn):
but using is_tarfile requires _load to accept a new parameter, the filename fn.
Check my benchmark: https://github.com/NERSC/pyprob/blob/distributed/torch_load_bench.py
|
st98817
|
When I load my data, I meet a problem:
Traceback (most recent call last):
File "trainer.py", line 41, in <module>
for i, (rmap, label) in enumerate(trainloader):
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 174, in __next__
return self._process_next_batch(batch)
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 198, in _process_next_batch
raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 32, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 81, in default_collate
return [default_collate(samples) for samples in transposed]
File "/usr/local/lib/python2.7/dist-packages/torch/utils/data/dataloader.py", line 68, in default_collate
return torch.stack(batch, 0)
File "/usr/local/lib/python2.7/dist-packages/torch/functional.py", line 56, in stack
return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim)
File "/usr/local/lib/python2.7/dist-packages/torch/functional.py", line 56, in <genexpr>
return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim)
RuntimeError: cannot unsqueeze empty tensor at /data/users/soumith/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensor.c:530
|
st98818
|
I also came across the same issue when trying to build a custom dataset.
Here is the __getitem__ method:
def __getitem__(self, index):
    fname = self.train_filenames[index]
    image = Image.open(os.path.join(celeba_imgpath, fname))
    label = self.train_labels[index]
    return self.transform(image), torch.FloatTensor(label)
I think it was caused by the label given as a scalar value. I converted the label to a list, and then the problem was solved. Here is the corrected code:
def __getitem__(self, index):
    fname = self.train_filenames[index]
    image = Image.open(os.path.join(celeba_imgpath, fname))
    label = [self.train_labels[index]]
    return self.transform(image), torch.FloatTensor(label)
|
st98819
|
I also had the same issue and it was resolved with your solution. This saved a lot of time! Thanks.
|
st98820
|
UPDATE: I figured it out. This link explains how to incorporate scipy functions into the computational graph, so you can use scipy.fftpack.dct: http://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html
Hi everyone,
So I want to backprop the error gradient through the inverse discrete cosine transform.
I tried implementing a naive version of the DCT but it was really slow.
Does anyone have any suggestions for writing a fast, backpropable implementation of the inverse discrete cosine transform?
I can paste my code here. The only reason I didn’t is because it is very messy and probably would not help clarify the situation.
|
st98821
|
Solved by zh217 in post #2
|
st98822
|
For anyone coming here from Google search: I have implemented DCT for pytorch in terms of the built-in FFT, so that it works on CPU and GPU, through back propagation:
GitHub: zh217/torch-dct
DCT (discrete cosine transform) functions for pytorch
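A usage sketch, assuming the dct/idct functions the repo exposes:
import torch
import torch_dct as dct  # pip install torch-dct

x = torch.randn(8, 64)
X = dct.dct(x)    # DCT-II along the last dimension
y = dct.idct(X)   # inverse DCT; both are differentiable and run on CPU or GPU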
|
st98823
|
I noticed deepcopying a module causes its parameters() to be tensors rather than nn.Parameters.
import torch.nn
import copy
l = torch.nn.Linear(3,1)
c = copy.deepcopy(l)
print([type(p) for p in l.parameters()])
print([type(p) for p in c.parameters()])
[<class 'torch.nn.parameter.Parameter'>, <class 'torch.nn.parameter.Parameter'>]
[<class 'torch.Tensor'>, <class 'torch.Tensor'>]
Why does this happen and can this cause any problems when working with the copy of the module later on?
|
st98824
|
Solved by smth in post #6
|
st98825
|
I don’t have the system at hand, but I’m quite sure it’s the 0.4.1 stable build.
|
st98826
|
This looks like a regression in 0.4.0 / 0.4.1, We reopened the issue and an engineer is working on issuing a fix.
|
st98827
|
Hi guys. I am a newbie with PyTorch.
My Ubuntu 16.04.02 LTS machine has Python 2.7 and CUDA 9.0 installed (torch.cuda.is_available() returns True). However, when I try to run the following, my Python just freezes and I can only type ctrl+z to force an exit.
import torch
torch.randn(10).cuda()
Then freeze until ctrl+z.
Did I do something wrong? Could anyone help? Thanks a lot.
|
st98828
|
Hi, I see a similar issue when using CUDA.
I tried checking the GPU memory; it goes up to ~229MB, and then Python can work normally.
I am not sure why. If you find a way to fix it, please tell me. Thanks.
|
st98829
|
I’m facing the same issue.
I’m running on jupyter notebook, the first call to torch.randn(10).cuda() takes a long loooong time to complete (~7 min), but subsequent calls behave normally.
I’m posting a screenshot of the stack when interrupting the kernel.
[screenshot: cuda_slow.png]
|
st98830
|
What is the output of your nvidia-smi when this is happening?
Could you try building PyTorch from source and seeing if the long cuda() calls still occur?
|
st98831
|
This is the output of nvidia-smi (fastai/bin/python is the process related to the notebook):
I will try to build from source.
|
st98832
|
I couldn’t manage to build from source. First I was getting this error; I followed the workaround but then got an error regarding the cuDNN version. My cuDNN installation is a mess because of TensorFlow, so I didn’t want to mess with that, afraid that it could break my TensorFlow installation. So I switched to Docker, and the first run of torch.randn(10).cuda() is taking ~2 sec, which I think is expected.
|
st98833
|
@lgvaz you have a 1070 GPU, but you must’ve installed pytorch for CUDA 7.5. What it is doing is recompiling pytorch for your GPU (a cuda8-supported GPU).
If you reinstall the CUDA8 version of pytorch, the freezing won’t happen. Select the right option in our version selector on http://pytorch.org
|
st98834
|
@Spencer_ML It’s possible that there’s a mismatch between the CUDA version your pytorch was compiled with, and the CUDA version you’re running your computer on. How did you install pytorch?
|
st98835
|
Thanks all for the discussion.
I have successfully tried cuda(), but on the first run of .cuda() I found that I have to wait for 5 to 10 minutes. Is this normal, or should I do some configuration first?
|
st98836
|
Can you run the following and report the output?
github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day02-PyTORCH-and-PyCUDA/PyCUDA/01 PyCUDA verify CUDA 8.0.ipynb
github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day02-PyTORCH-and-PyCUDA/PyTorch/01 PyTorch GPU support test.ipynb
|
st98837
|
@QuantScientist Sure. Here are my outputs:
##############################################
##########01 PyCUDA verify CUDA 8.0.ipynb########
1 device(s) found.
Device #0: GeForce GTX 1080 Ti
Compute Capability: 6.1
Total Memory: 11440512 KB
ASYNC_ENGINE_COUNT: 2
CAN_MAP_HOST_MEMORY: 1
CLOCK_RATE: 1582000
COMPUTE_CAPABILITY_MAJOR: 6
COMPUTE_CAPABILITY_MINOR: 1
COMPUTE_MODE: DEFAULT
CONCURRENT_KERNELS: 1
ECC_ENABLED: 0
GLOBAL_L1_CACHE_SUPPORTED: 1
GLOBAL_MEMORY_BUS_WIDTH: 352
GPU_OVERLAP: 1
INTEGRATED: 0
KERNEL_EXEC_TIMEOUT: 1
L2_CACHE_SIZE: 2883584
LOCAL_L1_CACHE_SUPPORTED: 1
MANAGED_MEMORY: 1
MAXIMUM_SURFACE1D_LAYERED_LAYERS: 2048
MAXIMUM_SURFACE1D_LAYERED_WIDTH: 32768
MAXIMUM_SURFACE1D_WIDTH: 32768
MAXIMUM_SURFACE2D_HEIGHT: 65536
MAXIMUM_SURFACE2D_LAYERED_HEIGHT: 32768
MAXIMUM_SURFACE2D_LAYERED_LAYERS: 2048
MAXIMUM_SURFACE2D_LAYERED_WIDTH: 32768
MAXIMUM_SURFACE2D_WIDTH: 131072
MAXIMUM_SURFACE3D_DEPTH: 16384
MAXIMUM_SURFACE3D_HEIGHT: 16384
MAXIMUM_SURFACE3D_WIDTH: 16384
MAXIMUM_SURFACECUBEMAP_LAYERED_LAYERS: 2046
MAXIMUM_SURFACECUBEMAP_LAYERED_WIDTH: 32768
MAXIMUM_SURFACECUBEMAP_WIDTH: 32768
MAXIMUM_TEXTURE1D_LAYERED_LAYERS: 2048
MAXIMUM_TEXTURE1D_LAYERED_WIDTH: 32768
MAXIMUM_TEXTURE1D_LINEAR_WIDTH: 134217728
MAXIMUM_TEXTURE1D_MIPMAPPED_WIDTH: 16384
MAXIMUM_TEXTURE1D_WIDTH: 131072
MAXIMUM_TEXTURE2D_ARRAY_HEIGHT: 32768
MAXIMUM_TEXTURE2D_ARRAY_NUMSLICES: 2048
MAXIMUM_TEXTURE2D_ARRAY_WIDTH: 32768
MAXIMUM_TEXTURE2D_GATHER_HEIGHT: 32768
MAXIMUM_TEXTURE2D_GATHER_WIDTH: 32768
MAXIMUM_TEXTURE2D_HEIGHT: 65536
MAXIMUM_TEXTURE2D_LINEAR_HEIGHT: 65000
MAXIMUM_TEXTURE2D_LINEAR_PITCH: 2097120
MAXIMUM_TEXTURE2D_LINEAR_WIDTH: 131072
MAXIMUM_TEXTURE2D_MIPMAPPED_HEIGHT: 32768
MAXIMUM_TEXTURE2D_MIPMAPPED_WIDTH: 32768
MAXIMUM_TEXTURE2D_WIDTH: 131072
MAXIMUM_TEXTURE3D_DEPTH: 16384
MAXIMUM_TEXTURE3D_DEPTH_ALTERNATE: 32768
MAXIMUM_TEXTURE3D_HEIGHT: 16384
MAXIMUM_TEXTURE3D_HEIGHT_ALTERNATE: 8192
MAXIMUM_TEXTURE3D_WIDTH: 16384
MAXIMUM_TEXTURE3D_WIDTH_ALTERNATE: 8192
MAXIMUM_TEXTURECUBEMAP_LAYERED_LAYERS: 2046
MAXIMUM_TEXTURECUBEMAP_LAYERED_WIDTH: 32768
MAXIMUM_TEXTURECUBEMAP_WIDTH: 32768
MAX_BLOCK_DIM_X: 1024
MAX_BLOCK_DIM_Y: 1024
MAX_BLOCK_DIM_Z: 64
MAX_GRID_DIM_X: 2147483647
MAX_GRID_DIM_Y: 65535
MAX_GRID_DIM_Z: 65535
MAX_PITCH: 2147483647
MAX_REGISTERS_PER_BLOCK: 65536
MAX_REGISTERS_PER_MULTIPROCESSOR: 65536
MAX_SHARED_MEMORY_PER_BLOCK: 49152
MAX_SHARED_MEMORY_PER_MULTIPROCESSOR: 98304
MAX_THREADS_PER_BLOCK: 1024
MAX_THREADS_PER_MULTIPROCESSOR: 2048
MEMORY_CLOCK_RATE: 5505000
MULTIPROCESSOR_COUNT: 28
MULTI_GPU_BOARD: 0
MULTI_GPU_BOARD_GROUP_ID: 0
PCI_BUS_ID: 1
PCI_DEVICE_ID: 0
PCI_DOMAIN_ID: 0
STREAM_PRIORITIES_SUPPORTED: 1
SURFACE_ALIGNMENT: 512
TCC_DRIVER: 0
TEXTURE_ALIGNMENT: 512
TEXTURE_PITCH_ALIGNMENT: 32
TOTAL_CONSTANT_MEMORY: 65536
UNIFIED_ADDRESSING: 1
WARP_SIZE: 32
##############################################
Global memory occupancy:98.000000% free
===Attributes for device 0
MAX_THREADS_PER_BLOCK:1024
MAX_BLOCK_DIM_X:1024
MAX_BLOCK_DIM_Y:1024
MAX_BLOCK_DIM_Z:64
MAX_GRID_DIM_X:2147483647
MAX_GRID_DIM_Y:65535
MAX_GRID_DIM_Z:65535
MAX_SHARED_MEMORY_PER_BLOCK:49152
TOTAL_CONSTANT_MEMORY:65536
WARP_SIZE:32
MAX_PITCH:2147483647
MAX_REGISTERS_PER_BLOCK:65536
CLOCK_RATE:1582000
TEXTURE_ALIGNMENT:512
GPU_OVERLAP:1
MULTIPROCESSOR_COUNT:28
KERNEL_EXEC_TIMEOUT:1
INTEGRATED:0
CAN_MAP_HOST_MEMORY:1
COMPUTE_MODE:DEFAULT
MAXIMUM_TEXTURE1D_WIDTH:131072
MAXIMUM_TEXTURE2D_WIDTH:131072
MAXIMUM_TEXTURE2D_HEIGHT:65536
MAXIMUM_TEXTURE3D_WIDTH:16384
MAXIMUM_TEXTURE3D_HEIGHT:16384
MAXIMUM_TEXTURE3D_DEPTH:16384
MAXIMUM_TEXTURE2D_ARRAY_WIDTH:32768
MAXIMUM_TEXTURE2D_ARRAY_HEIGHT:32768
MAXIMUM_TEXTURE2D_ARRAY_NUMSLICES:2048
SURFACE_ALIGNMENT:512
CONCURRENT_KERNELS:1
ECC_ENABLED:0
PCI_BUS_ID:1
PCI_DEVICE_ID:0
TCC_DRIVER:0
MEMORY_CLOCK_RATE:5505000
GLOBAL_MEMORY_BUS_WIDTH:352
L2_CACHE_SIZE:2883584
MAX_THREADS_PER_MULTIPROCESSOR:2048
ASYNC_ENGINE_COUNT:2
UNIFIED_ADDRESSING:1
MAXIMUM_TEXTURE1D_LAYERED_WIDTH:32768
MAXIMUM_TEXTURE1D_LAYERED_LAYERS:2048
MAXIMUM_TEXTURE2D_GATHER_WIDTH:32768
MAXIMUM_TEXTURE2D_GATHER_HEIGHT:32768
MAXIMUM_TEXTURE3D_WIDTH_ALTERNATE:8192
MAXIMUM_TEXTURE3D_HEIGHT_ALTERNATE:8192
MAXIMUM_TEXTURE3D_DEPTH_ALTERNATE:32768
PCI_DOMAIN_ID:0
TEXTURE_PITCH_ALIGNMENT:32
MAXIMUM_TEXTURECUBEMAP_WIDTH:32768
MAXIMUM_TEXTURECUBEMAP_LAYERED_WIDTH:32768
MAXIMUM_TEXTURECUBEMAP_LAYERED_LAYERS:2046
MAXIMUM_SURFACE1D_WIDTH:32768
MAXIMUM_SURFACE2D_WIDTH:131072
MAXIMUM_SURFACE2D_HEIGHT:65536
MAXIMUM_SURFACE3D_WIDTH:16384
MAXIMUM_SURFACE3D_HEIGHT:16384
MAXIMUM_SURFACE3D_DEPTH:16384
MAXIMUM_SURFACE1D_LAYERED_WIDTH:32768
MAXIMUM_SURFACE1D_LAYERED_LAYERS:2048
MAXIMUM_SURFACE2D_LAYERED_WIDTH:32768
MAXIMUM_SURFACE2D_LAYERED_HEIGHT:32768
MAXIMUM_SURFACE2D_LAYERED_LAYERS:2048
MAXIMUM_SURFACECUBEMAP_WIDTH:32768
MAXIMUM_SURFACECUBEMAP_LAYERED_WIDTH:32768
MAXIMUM_SURFACECUBEMAP_LAYERED_LAYERS:2046
MAXIMUM_TEXTURE1D_LINEAR_WIDTH:134217728
MAXIMUM_TEXTURE2D_LINEAR_WIDTH:131072
MAXIMUM_TEXTURE2D_LINEAR_HEIGHT:65000
MAXIMUM_TEXTURE2D_LINEAR_PITCH:2097120
MAXIMUM_TEXTURE2D_MIPMAPPED_WIDTH:32768
MAXIMUM_TEXTURE2D_MIPMAPPED_HEIGHT:32768
COMPUTE_CAPABILITY_MAJOR:6
COMPUTE_CAPABILITY_MINOR:1
MAXIMUM_TEXTURE1D_MIPMAPPED_WIDTH:16384
STREAM_PRIORITIES_SUPPORTED:1
GLOBAL_L1_CACHE_SUPPORTED:1
LOCAL_L1_CACHE_SUPPORTED:1
MAX_SHARED_MEMORY_PER_MULTIPROCESSOR:98304
MAX_REGISTERS_PER_MULTIPROCESSOR:65536
MANAGED_MEMORY:1
MULTI_GPU_BOARD:0
MULTI_GPU_BOARD_GROUP_ID:0
##############################################
##############################################
#########01 PyTorch GPU support test.ipynb#########
('__Python VERSION:', '2.7.12 (default, Nov 19 2016, 06:48:10) \n[GCC 5.4.0 20160609]')
('__pyTorch VERSION:', '0.1.12_2')
__CUDA VERSION
nvcc: NVIDIA ® Cuda compiler driver
Copyright © 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
('__CUDNN VERSION:', 6021)
('__Number CUDA Devices:', 1L)
('Active CUDA Device: GPU', 0L)
('Available devices ', 1L)
('Current cuda device ', 0L)
##############################################
1 device(s) found.
(0, ‘GeForce GTX 1080 Ti’)
##############################################
True
<class 'pycuda.gpuarray.GPUArray'>
##############################################
|
st98838
|
@bmkor please make sure you install the CUDA8 version of pytorch from the version selector. If you install the version for CUDA 7.5, then you will have to wait a minute for startup.
|
st98839
|
@smth Thanks. Should I downgrade my CUDA from 9 to 8 in this case? I am not sure if my CUDA is in fact 9. Executing the command nvcc --version gives:
nvcc: NVIDIA ® Cuda compiler driver
Copyright © 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
|
st98840
|
@bmkor cuda >= 8 should be fine. So if you have cuda9, then you don’t need to downgrade.
|
st98841
|
I see a similar issue when running mytensor.to(torch.device('cuda')).
Here’s the output of several calls to nvidia-smi; notice how the memory usage goes up until ~340MiB and then the call returns.
Shiphero-API on feature/tomas/OP-11-mergeable-order-large-accounts [$?]
➜ nvidia-smi
Thu Oct 18 19:45:13 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.57 Driver Version: 410.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 940M Off | 00000000:04:00.0 Off | N/A |
| N/A 43C P8 N/A / N/A | 0MiB / 983MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Shiphero-API on feature/tomas/OP-11-mergeable-order-large-accounts [$?]
➜ nvidia-smi
Thu Oct 18 19:45:15 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.57 Driver Version: 410.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 940M Off | 00000000:04:00.0 Off | N/A |
| N/A 44C P0 N/A / N/A | 33MiB / 983MiB | 1% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 15774 C ...ymology/.virtualenvs/pytorch/bin/python 22MiB |
+-----------------------------------------------------------------------------+
Shiphero-API on feature/tomas/OP-11-mergeable-order-large-accounts [$?]
➜ nvidia-smi
Thu Oct 18 19:45:16 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.57 Driver Version: 410.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 940M Off | 00000000:04:00.0 Off | N/A |
| N/A 44C P0 N/A / N/A | 94MiB / 983MiB | 15% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 15774 C ...ymology/.virtualenvs/pytorch/bin/python 83MiB |
+-----------------------------------------------------------------------------+
Shiphero-API on feature/tomas/OP-11-mergeable-order-large-accounts [$?]
➜ nvidia-smi
Thu Oct 18 19:45:17 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.57 Driver Version: 410.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 940M Off | 00000000:04:00.0 Off | N/A |
| N/A 44C P0 N/A / N/A | 140MiB / 983MiB | 7% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 15774 C ...ymology/.virtualenvs/pytorch/bin/python 130MiB |
+-----------------------------------------------------------------------------+
Shiphero-API on feature/tomas/OP-11-mergeable-order-large-accounts [$?]
➜ nvidia-smi
Thu Oct 18 19:45:18 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.57 Driver Version: 410.57 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce 940M Off | 00000000:04:00.0 Off | N/A |
| N/A 44C P0 N/A / N/A | 173MiB / 983MiB | 5% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 15774 C ...ymology/.virtualenvs/pytorch/bin/python 162MiB |
+-----------------------------------------------------------------------------+
Re-running nvidia-smi every second or so (19:45:19 through 19:45:25) shows the same picture: the python process (PID 15774) climbs steadily from 162MiB to 342MiB of GPU memory while GPU-Util stays between 0% and 5%.
|
st98842
|
I am doing a tutorial 8 and cannot run the caffe2 model because of an error:
Tensor type mismatch, caller expects elements to be float, while tensor contains double.
I found an issue 11 where an answer is suggested (saying, given the error OP had, to use HalfToFloat) but no example is given, and googling returns no relevant results (the code that one finds is completely out of date).
So I assume there is a DoubleToFloat method somewhere in the API. Where is it, and how do I call it?
edit: As grepping through the pytorch repo returns zero results for DoubleToFloat, I’m thinking the error is because of something else. My model is a RNN, and I am not inputting a hidden state, so maybe the error is because of that?
|
st98843
|
Still got the error when inputting the hidden state; it turns out I had created it as float64 (I thought I had checked that it was float, but it seems I hadn't).
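For anyone who lands here with the same error, a minimal sketch of the check and fix; h stands in for whatever tensor was accidentally created as float64 (a hypothetical name):

print(h.dtype)  # torch.float64 -> the mismatch caffe2 complains about
h = h.float()   # cast to float32 before feeding it to the model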
|
st98844
|
Hello. I encountered a confusing problem when writing a simple GRU demo: the GPU memory keeps going up every iteration. I don’t know if there are some mistakes in my code.
my code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from torch.optim import Adam
from torch.utils.data import DataLoader, Dataset

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

class tdModel(nn.Module):
    def __init__(self, input_dim=101, num_filters=196, win_size=15, stride=4, gru_hidden=128, batch_size=5):
        super(tdModel, self).__init__()
        self.gru_hidden_size = gru_hidden
        self.batch_size = batch_size
        # model architecture
        self.conv2d = nn.Conv2d(in_channels=1, out_channels=num_filters, kernel_size=(win_size, input_dim), stride=(stride, 1))
        self.gru1 = nn.GRU(input_size=num_filters, hidden_size=gru_hidden, dropout=0.2, batch_first=True)
        self.gru_hidden1 = self.init_gru_hidden(batch_size)
        self.gru2 = nn.GRU(input_size=gru_hidden, hidden_size=gru_hidden, dropout=0.2, batch_first=True)
        self.gru_hidden2 = self.init_gru_hidden(batch_size)
        self.dense = nn.Linear(gru_hidden, 1)

    def init_gru_hidden(self, batch_size):
        return torch.zeros(1, batch_size, self.gru_hidden_size).cuda()

    def forward(self, X):
        X = X.unsqueeze(1)
        X = self.conv2d(X).squeeze()
        X = X.transpose(1, 2)
        X, self.gru_hidden1 = self.gru1(X, self.gru_hidden1)
        X, self.gru_hidden2 = self.gru2(X, self.gru_hidden2)
        X = F.sigmoid(self.dense(X))
        return X

class audioDataset(Dataset):
    def __init__(self):
        super(audioDataset, self).__init__()
        self.X = np.load('./XY_train/X.npy')
        self.Y = np.load('./XY_train/Y.npy')
        self.X = torch.FloatTensor(self.X)[:25, :, :].to(device)
        self.Y = torch.FloatTensor(self.Y)[:25, :, :].to(device)

    def __len__(self):
        return self.X.shape[0]

    def __getitem__(self, idx):
        X = self.X[idx]
        Y = self.Y[idx]
        return (X, Y)

if __name__ == '__main__':
    model = tdModel()
    model.to(device)
    opt = Adam(lr=0.0001, weight_decay=0.01, params=model.parameters())
    criterion = nn.BCELoss()
    epochs = 50
    dataset = audioDataset()
    dataloader = DataLoader(dataset, batch_size=5, shuffle=True)
    for ep in range(epochs):
        for batch_sample in dataloader:
            X, Y = batch_sample
            opt.zero_grad()
            ret = model(X)
            loss = criterion(ret, Y)
            print("Epoch {}: {}".format(ep, loss))
            loss.backward(retain_graph=True)
            opt.step()
|
st98845
|
Solved by ptrblck in post #2
|
st98846
|
I’m no expert in RNN, but it seems you are keeping the complete history of your hidden states, which uses more memory for each epoch and also slows down the code after a while.
If you re-initialize your hidden states in each epoch with:
model.gru_hidden1 = model.init_gru_hidden(batch_size)
model.gru_hidden2 = model.init_gru_hidden(batch_size)
the memory usage and speed stay constant.
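For context, a minimal sketch of where that re-initialization would sit in the training loop from the question (assuming batch_size=5 to match the DataLoader):

batch_size = 5  # must match the DataLoader's batch_size
for ep in range(epochs):
    # fresh hidden states -> a fresh computation graph for each epoch
    model.gru_hidden1 = model.init_gru_hidden(batch_size)
    model.gru_hidden2 = model.init_gru_hidden(batch_size)
    for batch_sample in dataloader:
        ...  # forward / backward / step as before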
|
st98847
|
It works! Thank you very much. But I’m still confused about the running details: why does the model allocate new memory rather than rewriting the hidden states? (Actually I’m a rookie with PyTorch… OTL)
|
st98848
|
I’m glad it’s working!
The computation graph keeps growing as you never detach or reset the hidden states.
If I’m not mistaken your original implementation would treat the sequential epochs as one long time series.
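If the intent really is to carry state across batches, a minimal alternative sketch is to detach the hidden states every iteration, so the values are kept but the autograd history is cut at the batch boundary (this would also remove the need for retain_graph=True):

# keep the values, drop the autograd history
model.gru_hidden1 = model.gru_hidden1.detach()
model.gru_hidden2 = model.gru_hidden2.detach()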
|
st98849
|
Hi there,
Is it at all possible to use a different conv algorithm than the one selected by pytorch/cudnn by default?
Say we have this piece of code:
import torch
import torch.nn.functional as F

def run_conv2d_memtest_pytorch():
    with torch.no_grad():
        i = torch.rand((1, 192, 512, 512))
        w = torch.rand((64, 192, 7, 7))
        i = i.cuda()
        w = w.cuda()
        print(torch.cuda.max_memory_allocated() // 1024 // 1024)
        while True:
            o = F.conv2d(i, w, stride=1, padding=3)
            print(torch.cuda.max_memory_allocated() // 1024 // 1024)
            torch.cuda.empty_cache()
The problem is that this conv2d requires over 9 GB of memory even for a batch size of 1. Using a 6x6 kernel instead results in a much more reasonable requirement of about 320 MB, which is close to what you would expect when computing the conv naively. My guess is that both cudnn and PyTorch choose something like Winograd, which replicates the data for faster computation.
My issue is that a model was trained on a GPU with sufficient memory, but now I’d like to run it on GPUs with 6-8 GB for inference. I’m fine with it being slower due to suboptimal algorithm, but as it is, I can’t figure a way to run even a single example due to the model having this convolution layer inside of it.
Do I have any options outside of compiling a naive cuda conv2d implementation, creating a wrapper for Python and calling it explicitly?
|
st98850
|
OK, one way to do this is to run two convolutions with stride e.g. (2, 1), adjusting padding accordingly, and combining the resulting tensors, cutting the memory requirement roughly in half.
Interestingly, with stride (2, 2) the heuristic reverts to a memory-cheap algorithm, so you can run 4 convs and combine their outputs.
In any case, this is really dumb and there should be a way in torch to do this painlessly.
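To make the workaround concrete, here is a sketch of the row-splitting version, assuming "same" padding as in the snippet from the question (kernel 7, padding 3); this illustrates the idea rather than being a tested drop-in:

import torch
import torch.nn.functional as F

def conv2d_split_rows(i, w, padding=3):
    # pad once by hand, then compute even and odd output rows with two
    # stride-(2, 1) convs and interleave them back together
    ip = F.pad(i, (padding, padding, padding, padding))
    even = F.conv2d(ip, w, stride=(2, 1))               # output rows 0, 2, 4, ...
    odd = F.conv2d(ip[:, :, 1:, :], w, stride=(2, 1))   # output rows 1, 3, 5, ...
    out = torch.empty(i.shape[0], w.shape[0], i.shape[2], i.shape[3],
                      device=i.device, dtype=i.dtype)
    out[:, :, 0::2] = even
    out[:, :, 1::2] = odd
    return out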
|
st98851
|
Hi all,
I have a dataset where each sample has 7 different channels. Currently I build the datasets for each of my 4 classes separately and then use a concatdataset to put them together. I need to perform a z-score normalization on the whole training set, separately for each channel - it looks like I want to use transforms.Normalize to do this, but I’m having trouble figuring out how.
Would the best practice be to subclass the concatenated dataset in some way to add this normalization? Also, is there an efficient way to get the mean/stddev for each channel once I have built the whole training set? Lastly, if I need to build my own Dataset (i.e., just subclass the generic Dataset), is there a way I can draw examples from the datasets I’ve already made?
Thanks so much for any help!
|
st98852
|
I’ve created a small code sample to calculate the mean and std of your dataset on the fly, in case all images do not fit into your memory, here 30.
After you’ve calculated the mean and std you can create a Dataset and use transform.Normalize to normalize the images:
from torch.utils.data import Dataset
from torchvision import transforms

transform = transforms.Normalize(mean=mean, std=std)

class MyDataset(Dataset):
    def __init__(self, data, transform=None):
        self.data = data
        self.transform = transform

    def __getitem__(self, index):
        x = self.data[index]
        if self.transform:
            x = self.transform(x)
        return x

    def __len__(self):
        return len(self.data)

dataset = MyDataset(data, transform=transform)
Let me know if that works for you!
|
st98853
|
Thanks so much for the help! One follow-up: I’m not sure how I can modify your code to calculate the mean and stddev separately for each of my channels (i.e., every one of my batches will have shape [batchsz, 7, 20] and I’d like to normalize each of the 7 channels separately). Any thoughts on this? Thanks again
|
st98854
|
In the linked code snippet the mean and std are calculated for each channel, such that both estimates will contain 7 values.
If you want to calculate it separately for each channel, you could split the data in each channel and run the code.
Is there a reason you don’t want to calculate the mean and std for every channel in a single run?
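For completeness, a minimal version-agnostic sketch of the per-channel computation, assuming the whole training set fits in a single tensor data of shape [N, 7, 20] (an assumption on my part, not code from the linked snippet):

# fold samples and time steps together, then reduce per channel
flat = data.transpose(0, 1).contiguous().view(7, -1)
mean = flat.mean(dim=1)  # shape [7], one value per channel
std = flat.std(dim=1)    # shape [7], one value per channel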
|
st98855
|
No sorry, what you said should work then! I thought they were being calculated as single scalar values, not as vectors with a value for each channel - I must have changed something without realizing. Thanks so much for all the help!
|
st98856
|
No worries! Let me know if you get stuck somewhere or my code doesn’t produce the right results.
|
st98857
|
Problem: to predict the remaining life of a bearing in industrial assets such as pumps, compressors, gearboxes, etc.
[screenshot of the paper in question]
|
st98858
|
What kind of question do you have? Are you trying to re-implement this paper and run into some PyTorch-related problems?
|
st98859
|
Hi, I am trying to find a solution for this paper. Please let me know whether it can be solved with PyTorch or with any other deep learning techniques.
|
st98860
|
I’m not familiar with this paper, but the authors mention they are using a neural network, so based on that statement it should be possible to reimplement it using PyTorch. Do you have a specific dataset you are working with?
|
st98861
|
I am trying to do a (3D sparse × 2D dense) multiplication.
However, I can’t reshape the 3D sparse tensor to a 2D sparse one, e.g.:
t1 = to_sparse(torch.randn((A,B,C)))
t2 = torch.randn((C,D))
# (A,B,C)x(C,D) -> (A,B,D)
t3 = torch.mm(t1.view(-1,C),t2).view(A,B,D)
Gives the error:
RuntimeError: view is not implemented for type torch.sparse.FloatTensor
einsum for my particular case would have been ideal, unfortunately though, it is not yet implemented for sparse tensors.
Is there any other way to resize a sparse tensor?
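One possible workaround sketch, assuming t1 is a COO sparse tensor and a PyTorch version where torch.sparse_coo_tensor is available: fuse the first two dimensions by rewriting the indices directly, since view() is not implemented for sparse tensors.

# (A, B, C) sparse -> (A*B, C) sparse by re-encoding the row index
t1 = t1.coalesce()
idx, vals = t1._indices(), t1._values()
flat_idx = torch.stack([idx[0] * B + idx[1], idx[2]])
t1_2d = torch.sparse_coo_tensor(flat_idx, vals, (A * B, C))
t3 = torch.mm(t1_2d, t2).view(A, B, D)  # sparse x dense -> dense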
|
st98862
|
I was wondering if the PyTorch community has considered making a mobile app for the discussion forum. This site is becoming one of my favorite sites.
|
st98863
|
The mobile website works pretty well for me. Some buttons are placed a bit strangely, but I think I’m just too clumsy to use a mobile.
What functionality are you missing? Push notifications would be nice, but the email notifications work pretty well too.
|
st98864
|
I found that even if I set all the seeds and cudnn flags, I still cannot get deterministic results. The seed and cudnn settings are:
import random
import numpy as np
import torch

random.seed(args.seed)  # args.seed comes from the script's argparse arguments
np.random.seed(args.seed)
torch.manual_seed(args.seed)
if torch.cuda.is_available():
    torch.cuda.manual_seed(args.seed)
    torch.cuda.manual_seed_all(args.seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
I use the following settings:
FloatTensor as default Tensor.
CUDA version is 9.0.176.
cudnn version is 7102.
Pytorch version is 0.4.1.post2.
GPU is GTX1080.
Python version is 3.5.6.
I ran an NLP task, and here is the loss between two runs:
The losses of the two runs are identical between iterations 0–17; however, at iteration 18, one is 0.8873478174 and the other is 0.8873476982.
Since this is an NLP task, I believe there is no randomness in the data preprocessing process. I have checked the dataloader and left ‘num_workers’ at its default setting. I also checked the data of the two runs, and it is the same.
I really want to know why this happens. I’ve been dealing with this problem for a few weeks; it is driving me crazy.
I also searched some related discussions and found a topic saying that this issue is caused by FloatTensor and that one needs to switch to DoubleTensor. If that is true, is there any way to get deterministic results using FloatTensor?
I would really appreciate it if someone could discuss this with me.
|
st98865
|
This might be a result of indexAdd. See https://github.com/pytorch/pytorch/blob/master/docs/source/notes/randomness.rst 126 (it’s becoming part of the docs soon)
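For illustration, a small repro sketch of that kind of non-determinism (my own example, not from the linked note; it assumes a CUDA device is available): index_add_ with duplicate indices uses atomicAdd on the GPU, and the summation order is not fixed, so the last float32 bits can differ between runs.

import torch

x = torch.zeros(4, device='cuda')
idx = torch.randint(0, 4, (100000,), dtype=torch.long, device='cuda')
src = torch.randn(100000, device='cuda')
a = x.clone().index_add_(0, idx, src)
b = x.clone().index_add_(0, idx, src)
print((a - b).abs().max())  # often non-zero in the last bits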
|
st98866
|
I read the doc and found the illustration below. Could you tell me which functions contain non-determinism? Thank you so much.
[screenshot of the non-determinism note from the linked docs]
|
st98867
|
Hi, I want to train a mask rcnn model on a custom dataset.
I will be using the Detectron repo for this.
Some objects in the dataset are very small. By upscaling all images and using the following RPN settings,
RPN_MAX_LEVEL: 6
RPN_MIN_LEVEL: 2
RPN_ASPECT_RATIOS: (0.25, 0.5, 1.0, 2.0, 4.0)
RPN_ANCHOR_START_SIZE: 8
the small objects can hopefully be detected.
The question is, should I edit the params for ROI pooling?
I would be referring to
ROI_CANONICAL_SCALE
ROI_CANONICAL_LEVEL
ROI_MAX_LEVEL
ROI_MIN_LEVEL
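Not advice from this thread, just an illustrative sketch of what touching those keys looks like; the values are assumptions (the CANONICAL ones are the Detectron defaults as far as I know), and the exact import path may differ across Detectron versions:

from detectron.core.config import cfg  # Detectron (caffe2) config module

cfg.FPN.ROI_CANONICAL_SCALE = 224  # default: object size mapped to the canonical level
cfg.FPN.ROI_CANONICAL_LEVEL = 4    # default
cfg.FPN.ROI_MAX_LEVEL = 6          # assumption: widen to match RPN_MAX_LEVEL above
cfg.FPN.ROI_MIN_LEVEL = 2          # assumption: match RPN_MIN_LEVEL above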
Thanks for your advice!
|