st115568
|
If you create a new tensor inside the model function, it will be on cuda if the model is on cuda.
|
st115569
|
Hello,
I was monitoring my training and I realized something. The load alternates between 100% CPU, 0% GPU and 0% CPU, 100% GPU, leading to a huge waste of hardware resources. Would it be possible to start computing the next batch while the GPU is working on the current one? I tried to look at the documentation and the code itself, but it seems it is not possible to do it. I think it would be a feature that benefits a lot of users. Or is it already possible to do it?
Thank you
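For context, here is a minimal sketch of the kind of overlap I mean, using DataLoader worker processes so the next batches are prepared on the CPU while the GPU runs the current one (the dataset, model and sizes below are placeholders, not my actual code):
import torch
from torch.utils.data import DataLoader, TensorDataset

# placeholder dataset, just to illustrate the loader settings
dataset = TensorDataset(torch.randn(1000, 3, 32, 32), torch.zeros(1000).long())
loader = DataLoader(dataset, batch_size=64,
                    num_workers=4,     # batches are prepared in background worker processes
                    pin_memory=True)   # pinned host memory speeds up host-to-GPU copies

for inputs, targets in loader:
    inputs, targets = inputs.cuda(), targets.cuda()
    # ... forward/backward on the GPU here, while the workers load the next batch ...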
|
st115570
|
Just out of curiosity, why did the main developers of PyTorch choose not to use Cython for autograd and the integration between Python and the torch lib?
|
st115571
|
I need to implement a highway network and run it on cifar-10. So far, the highway block looks like this:
class HwNetBasicblock(nn.Module):
    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(HwNetBasicblock, self).__init__()
        self.conv_a = nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
        self.bn_a = nn.BatchNorm2d(planes)
        self.conv_b = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn_b = nn.BatchNorm2d(planes)
        self.gate = nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride, padding=1, bias=True)
        self.downsample = downsample
        BIAS_INIT = -13
        self.gate.bias.data.fill_(BIAS_INIT)

    def forward(self, x):
        residual = x
        basicblock = self.conv_a(x)
        basicblock = self.bn_a(basicblock)
        basicblock = F.relu(basicblock, inplace=True)
        basicblock = self.conv_b(basicblock)
        basicblock = self.bn_b(basicblock)
        t_activation = self.gate(residual)
        t_value = F.sigmoid(torch.mean(t_activation))
        if self.downsample is not None:
            residual = self.downsample(x)
        return F.relu(residual * (1 - t_value) + basicblock * t_value, inplace=True)
I remember from the highway network paper that if you go deep (50-100 layers), you need to initialize the bias of the transform gate to a fairly large negative number (-7, -15, ...). I am trying to do this using
BIAS_INIT = -13
self.gate.bias.data.fill_(BIAS_INIT)
but this is obviously not working. On CIFAR-10, the 50-layer network goes well above 93% accuracy, while the 100-layer net stops at around 86%.
How can I correctly initialize the biases?
EDIT:
Is doing this outside the network gonna work?
# initialize the bias
def bias_init(m):
    if isinstance(m, nn.Conv2d):
        BIAS_INIT = -13
        m.bias.data.fill_(BIAS_INIT)

model = CifarResNet(HwNetBasicblock, depth=110, num_classes=10)
model = model.apply(bias_init)
net = model.cuda()
I am using the fact that my normal convolutions have bias=False, and only the gate has bias=True.
|
st115572
|
# initialize the bias
def bias_init(m):
    if isinstance(m, nn.Conv2d):
        BIAS_INIT = -13
        m.bias.data.fill_(BIAS_INIT)

model = CifarResNet(HwNetBasicblock, depth=110, num_classes=10)
model = model.apply(bias_init)
net = model.cuda()
The option you suggest should work and I am doing something similar when initializing my networks. If you take a look at https://github.com/pytorch/pytorch/blob/master/torch/nn/init.py 10 it also seems to be the preferred approach of initializing parameters. (Although one of the devs will probably have to tell you if this is actually true)
If only certain layers actually have a bias value you should also additionally be able to do something like:
if not isinstance(m.bias, type(None)):
    m.bias.data.fill_(value)
This way you wouldn’t have to worry about bias=True or False explicitly.
EDIT:
When I further think about it you could actually probably do this thing directly in the init of your network similar to the example given in https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py#L112-L118 2
for m in self.modules():
    if isinstance(m, nn.Conv2d):
        if not isinstance(m.bias, type(None)):
            m.bias.data.fill_(-13)
I intuitively prefer the option of defining the function to initialize/change parameters outside of the model init though as you can just call it on the model whenever you want, instead of just in the constructor.
|
st115573
|
I am trying to do something like this:
def forward(self, x):
    x = self.conv1(x)
    x = self.R1(x)
    x = self.M1(x)
    layer1_mean = self.mean(x)      ## calculate mean
    layer1_var = self.variance(x)   ## calculate variance
    new_layer = ((torch.zeros(2*batch, 36, 1, 1).cuda()).float)  ## Create new tensor
    ## copy to new tensor
    new_layer[0:batch] = layer1_mean
    new_layer[batch:2*batch] = layer1_var
However, I am getting the error : ‘method’ object does not support item assignment
What’s going wrong and how can it be resolved?
|
st115574
|
You forgot the parentheses for .float(), which means new_layer is a bound method in your code sample, and new_layer[0:batch] tries to index a method, which is not possible.
|
st115575
|
Thanks @albanD. But it's now throwing this error:
RuntimeError: copy from Variable to torch.cuda.FloatTensor isn't implemented
even though new_layer and layer1_mean are of the same type.
|
st115576
|
What is given as the input of a module is a Variable, not a Tensor.
In your case, as the error message says, layer1_mean is a Variable while new_layer is a Tensor.
You should add a new_layer = Variable(new_layer).
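For completeness, a minimal sketch of that fix, assuming batch, layer1_mean and layer1_var as in the snippet above (their shapes are assumed to match the slices):
from torch.autograd import Variable

new_layer = Variable(torch.zeros(2 * batch, 36, 1, 1).cuda())  # note the (): zeros already returns a float tensor
new_layer[0:batch] = layer1_mean
new_layer[batch:2 * batch] = layer1_var

# or, without pre-allocating at all:
# new_layer = torch.cat((layer1_mean, layer1_var), 0)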
|
st115577
|
Why does PyTorch have worse performance than Chainer: https://github.com/ilkarman/DeepLearningFrameworks/ 29
and why is it also slower than Chainer?
(Not intending to "blame" PyTorch, simply curious how the (rather big) performance differences can be explained)
|
st115578
|
Hi everyone, I'm trying to port this Torch code (https://github.com/anokland/dfa-torch 47) to PyTorch. It's an implementation of Direct Feedback Alignment (https://arxiv.org/abs/1609.01596 20).
The logic of the implementation is this: we need to inject the error signal at the output of each layer, so we create a custom SequentialSG model that, every time it sees a special module called ErrorFeedback, copies the error signal into the grad_output of that module.
Now, in each ErrorFeedback module, during the forward pass we simply copy the input to the output (i.e. we do nothing); during the backward pass instead we perform the random projection of the error onto the dimension of the hidden layer. To do this, we have to define a new module but also extend torch.autograd. I followed http://pytorch.org/docs/master/notes/extending.html 56
So we have to define a class ErrorFeedbackFunction which inherits from torch.autograd.Function, where we define the two static methods forward and backward, and a class which inherits from torch.nn.Module.
Now we just have to define a new nn.Module, which uses our new function in the forward pass.
Now if I try to use it, I get this error:
TypeError: 'ErrorFeedbackFunction' object does not support indexing
What am I doing wrong? Did I miss some steps in my implementation?
|
st115579
|
I was reading again http://pytorch.org/docs/0.2.0/notes/extending.html 141 and I realized I didn’t alias the apply method. The problem is that even when I try with the Linear example provided in the docs, I get:
type object 'Linear' has no attribute 'apply'
I looked through the torch.nn source code and I saw that in the original Linear implementation apply is not used, in fact it is defined in pytorch/torch/nn/functionals.py.
Instead for example Bilinear is using it, even though an apply method is not defined anywhere in the Bilinear class in pytorch/torch/nn/_functions/linear.py.
Can someone explain and walk me through this?
|
st115580
|
Are you using at least PyTorch 0.2.0? The apply method is auto-generated via a class transformation. You just need to define forward and backward.
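As a minimal sketch of the new-style (0.2+) Function (the toy function here is hypothetical, just to show where apply comes from):
import torch
from torch.autograd import Variable, Function

class ScaleByTwo(Function):          # hypothetical toy function
    @staticmethod
    def forward(ctx, input):
        return input * 2

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * 2

x = Variable(torch.randn(3), requires_grad=True)
y = ScaleByTwo.apply(x)              # apply is generated for you; don't instantiate the class
y.sum().backward()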
|
st115581
|
Now I am. I was still using version 0.1. Now I don’t get strange errors anymore and the code runs until the end.
Still I’m suspicious that my custom backward isn’t being executed. If I put a print statement inside the backward @staticmethod I don’t see anything on my jupyter notebook.
|
st115582
|
I inserted a print statement in the code sample in the link below. It prints things fine.
https://gist.github.com/anonymous/866455c6bdb88403d23d8c862b3b9233 274
foo.py (excerpt):
import torch
from torch.autograd import Variable, Function

class Linear(Function):
    # Note that both forward and backward are @staticmethods
    @staticmethod
    # bias is an optional argument
    def forward(ctx, input, weight, bias=None):
        ctx.save_for_backward(input, weight, bias)
|
st115583
|
Ok, my ErrorFeedback module is working properly. Now I need to subclass nn.Sequential and override accUpdateGradParameters, accGradParameters and backward methods.
But if I do something very simple to test, like copying the original accGradParameters method and adding a print statement, nothing is shown on the console. Why is that?
|
st115584
|
iacolippo:
accUpdateGradParameters
accUpdateGradParameters is not pytorch, it’s torch(lua)
|
st115585
|
I was looking at torch/legacy/nn/Sequential, but subclassing torch/nn/Sequential. I think it's easier to port my code to legacy nn first, and then I will think about another implementation.
|
st115586
|
Hello,
I'm trying to implement convolution with a spatially dependent kernel using matrix multiplication or some other good approach.
I have a spatially dependent kernel
K dim=(H, W, S*S), e.g. S=5 (5x5 convolution)
and an input
T dim=(H, W, C).
After the convolution, as a result, I want to get
R dim=(H, W, C).
Currently, I use a matrix multiplication at each point, like this in numpy for testing:
for y in range(h):
    for x in range(w):
        patch = get_patch(x, y)  # return the S*S patch centered at (x, y)
        R[y, x] = np.matmul(T[y, x], K[y, x])
but this approach runs on the CPU.
I want to execute this on the GPU using PyTorch.
Is there any way to implement this?
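There is no answer in this thread, but one GPU-friendly way to express a per-pixel contraction like this is to unfold all patches and contract them with the per-pixel kernel via einsum. This is only a sketch on a recent PyTorch (F.unfold, torch.einsum), and it assumes the intended operation is contracting each SxS patch of T with the matching kernel slice of K, which is what the numpy snippet seems to aim at:
import torch
import torch.nn.functional as F

H, W, C, S = 8, 10, 3, 5
T = torch.randn(H, W, C)        # input
K = torch.randn(H, W, S * S)    # spatially dependent kernel, one S*S filter per pixel

# extract all SxS patches: (1, C*S*S, H*W) -> (C, S*S, H*W)
patches = F.unfold(T.permute(2, 0, 1).unsqueeze(0), kernel_size=S, padding=S // 2)
patches = patches.view(C, S * S, H * W)

k = K.view(H * W, S * S).t()                 # (S*S, H*W)
R = torch.einsum('csp,sp->cp', patches, k)   # contract the patch dimension per pixel
R = R.view(C, H, W).permute(1, 2, 0)         # back to (H, W, C)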
|
st115587
|
Hello,
I have been trying to incorporate my own CUDA kernel for a Highway LSTM into a PyTorch layer, mostly following the suggestions here: Compiling an Extension with CUDA files
Like what was suggested in that thread, I am reading the data from the tensors and running the kernel like this:
#include <THC/THC.h>
#include "highway_lstm_kernel.h"

extern THCState *state;

int highway_lstm_forward_cuda(int inputSize, int hiddenSize, int miniBatch,
                              int numLayers, int seqLength,
                              THCudaTensor *x,
                              THCudaTensor *h_data,
                              THCudaTensor *c_data,
                              THCudaTensor *tmp_i,
                              THCudaTensor *tmp_h,
                              THCudaTensor *T,
                              THCudaTensor *bias,
                              THCudaTensor *dropout,
                              THCudaTensor *gates,
                              int isTraining) {

    float * x_ptr = THCudaTensor_data(state, x);
    float * h_data_ptr = THCudaTensor_data(state, h_data);
    float * c_data_ptr = THCudaTensor_data(state, c_data);
    float * tmp_i_ptr = THCudaTensor_data(state, tmp_i);
    float * tmp_h_ptr = THCudaTensor_data(state, tmp_h);
    float * T_ptr = THCudaTensor_data(state, T);
    float * bias_ptr = THCudaTensor_data(state, bias);
    float * dropout_ptr = THCudaTensor_data(state, dropout);
    float * gates_ptr = THCudaTensor_data(state, gates);

    cudaStream_t stream = THCState_getCurrentStream(state);
    cublasHandle_t handle = THCState_getCurrentBlasHandle(state);

    highway_lstm_ongpu(inputSize, hiddenSize, miniBatch, numLayers, seqLength,
        x_ptr, h_data_ptr, c_data_ptr, tmp_i_ptr, tmp_h_ptr, T_ptr, bias_ptr,
        dropout_ptr, gates_ptr, isTraining, stream, handle);

    return 1;
}
And then I call this from within Python like so:
highway_lstm_layer.highway_lstm_forward_cuda(
    self.input_size, self.hidden_size, self.mini_batch, self.num_layers,
    self.seq_length, input, hy, cy, tmp_i, tmp_h, weight, bias, dropout,
    gates, 1 if self.train else 0)
However, I get the following error:
Traceback (most recent call last):
File "highway_lstm_layer.py", line 112, in <module>
print lstm(input)
File "/home/nfitz/miniconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 224, in __call__
result = self.forward(*input, **kwargs)
File "highway_lstm_layer.py", line 96, in forward
output, hidden = HighwayLSTMFunction(self.input_size, self.hidden_size, num_layers=self.num_layers, dropout=self.dropout, train=self.train)(input, self.weight, self.bias)
File "/home/nfitz/miniconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 284, in _do_forward
flat_output = super(NestedIOFunction, self)._do_forward(*flat_input)
File "/home/nfitz/miniconda2/lib/python2.7/site-packages/torch/autograd/function.py", line 306, in forward
result = self.forward_extended(*nested_tensors)
File "highway_lstm_layer.py", line 34, in forward_extended
gates, 1 if self.train else 0)
File "/home/nfitz/miniconda2/lib/python2.7/site-packages/torch/utils/ffi/__init__.py", line 177, in safe_call
result = torch._C._safe_call(*args, **kwargs)
TypeError: 'struct THCudaTensor' is opaque
Any hints on what would be causing this?
|
st115588
|
Hi,
I create training, validation and testing data loaders for MNIST as follows:
train_set = datasets.MNIST(root=data_root, train=True, transform=transform_train, download=True)
valid_set = datasets.MNIST(root=data_root, train=True, transform=transform_test, download=False)
test_set = datasets.MNIST(root=data_root, train=False, transform=transform_test, download=False)

# Split training into train and validation
train_size = 600
valid_size = 59400
indices = torch.randperm(len(train_set))
train_indices = indices[:len(indices) - valid_size][:train_size or None]
valid_indices = indices[len(indices) - valid_size:] if valid_size else None

train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
                                           sampler=SubsetRandomSampler(train_indices), **kwargs)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size, **kwargs)
if valid_size:
    valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=batch_size,
                                               sampler=SubsetRandomSampler(valid_indices), **kwargs)
else:
    valid_loader = None
Now what I would like to do is to transform the training data and then add the transformed data to the existing training data to form a new training set, somehow like this:
# Now transform the training data and add the new transformed data to the existing training data
for data, target in train_loader:
    t_ims = ut.transform_ims(data.numpy(), [parameters])
    t_data = torch.from_numpy(t_ims)
    # Concatenate
    data = [data, t_data]
    target = [target, target]
# Set new training data
train_loader.data = data
train_loader.target = target
Could you please tell me how to do that? I find the structure of dataloader in PyTorch is really difficult to understand
Thank you very much for your help!!
|
st115589
|
If you do something like this:
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])
Then the transformed data will be automatically used in training. Is that what you wanted?
See full example here with train / test transforms.
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb 83
|
st115590
|
Thanks, @QuantScientist. Is it possible to apply a more sophisticated composition of transformations (e.g. blurring)? The transformation I used is something like this:
t_ims = ut.transform_ims(data.numpy(), [zoom_level, rot_angle, tx, ty, blur_sigma])
In addition, is it possible to add two or more transformations to the training data instead of one?
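One way to get a more elaborate augmentation into the pipeline is to wrap your own callable and compose it with the built-in transforms; transforms.Compose only requires that each entry is callable. A minimal sketch (the blur transform and its sigma are illustrative, not from any official API):
from PIL import ImageFilter
from torchvision import transforms

class Blur(object):
    """Custom transform: Gaussian-blur a PIL image with a fixed sigma."""
    def __init__(self, sigma):
        self.sigma = sigma

    def __call__(self, img):
        return img.filter(ImageFilter.GaussianBlur(radius=self.sigma))

transform_train = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    Blur(sigma=1.0),          # applied while the sample is still a PIL image
    transforms.ToTensor(),
])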
|
st115591
|
Hi,
I am trying to write a fully working LR example, and I encountered two different issues.
The NN is very simple, with a torch.nn.Linear with two targets as output and a torch.nn.CrossEntropyLoss() as the loss function.
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/17%20%20PyTorch%20Logistic%20Regression.ipynb 187
1. The first issue is related to the ROC_AUC during training, which seems to be skewed instead of being curvilinear. This probably happens because of the way I am calculating the probabilities, but I can't seem to get my hands on where the error is.
2. The second issue is during testing, with the dimensions of the results of running the inference and the dimensions of the actual expected target (e.g. 0 or 1). I can't seem to fix this, see the picture below.
[screenshot: 17_PyTorch_Logistic_Regression.jpg]
Many thanks,
|
st115592
|
EDIT:
First issue resolved by:
- Changing the output dimension to 1 and returning F.sigmoid(x)
- Changing the loss function to torch.nn.BCELoss()
- During training, using prediction = (net(X).data).float() # probabilities
[screenshot: 17_PyTorch_Logistic_Regression.jpg]
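A minimal sketch of the resulting setup (the names, feature count and batch size here are placeholders, not the notebook's exact code):
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable

class LogisticRegression(nn.Module):
    def __init__(self, n_features):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(n_features, 1)    # single output unit

    def forward(self, x):
        return F.sigmoid(self.linear(x))          # probability of the positive class

net = LogisticRegression(n_features=20)
criterion = torch.nn.BCELoss()

X = Variable(torch.randn(8, 20))
y = Variable(torch.rand(8, 1).round())            # 0/1 targets

loss = criterion(net(X), y)
prediction = (net(X).data).float()                # probabilities, as in the note above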
|
st115593
|
Hi,
I am training my model with a batch size, BATCH_SIZE=2 for example.
In each batch there is an operation: a matrix multiplied by a vector.
So the batch of matrices can have size (2, 3, 4)
and the batch of vectors can have size (2, 1, 3).
For each batch element, the matrix multiplication (1, 3) x (3, 4) should be executed, which gives a vector (1, 4).
So with the batch size, how can I get a final tensor of size (2, 1, 4)? torch.mul does not work.
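For reference, torch.bmm does exactly this batched matrix product; a minimal sketch with the sizes above:
import torch

vec = torch.randn(2, 1, 3)   # batch of row vectors
mat = torch.randn(2, 3, 4)   # batch of matrices

out = torch.bmm(vec, mat)    # (2, 1, 3) x (2, 3, 4) -> (2, 1, 4)
print(out.size())            # torch.Size([2, 1, 4])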
|
st115594
|
Right now, the model structure needs to match exactly the saved state when using load_state_dict().
It would be useful to introduce an optional argument, say allow_missing_keys so that the function doesn’t throw when unexpected keys are present.
The use case is models that gets extended: as training takes a long time it’s useful (and sometimes necessary for convergence) to train a subset of the model, then add some new components that are randomly initialized, and resume training from there (either of the whole model or only of the additional components).
Right now to achieve this result it’s necessary to call state_dict() on the extended model, merge, and then use load_state_dict(). This is not particularly elegant nor memory friendly for big models.
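For reference, the merge-based workaround described above looks roughly like this (the model class and checkpoint path are placeholders):
import torch

new_model = ExtendedModel()                   # placeholder: the model with extra components
state = new_model.state_dict()                # randomly initialized parameters
old_state = torch.load('old_checkpoint.pth')  # placeholder checkpoint of the smaller model

state.update(old_state)                       # overwrite the keys that exist in the old model
new_model.load_state_dict(state)              # keys and shapes now match exactly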
|
st115595
|
You can still do so.
Create a new model with a subpart having the same structure. As long as the subpart's structure matches the old model, you can load the old state dict for that part. Easily doable.
|
st115596
|
CFFI 38 has been a great success so far in Python. Did you consider using it for interfacing C code?
There are many benefits of CFFI, just to mention few:
Abstract the Python version (CPython2, CPython3, PyPy).
Better control over when and why the C compilation occurs, and more standard ways to write setuptools-based setup.py 3 files.
Keep all the Python-related logic in Python so that you don’t need to write much C code.
|
st115597
|
Are you asking about our library (TH, THC, etc.) wrappers or about the extensions?
Our FFI extension utils actually depend on cffi 37. You can check extension-ffi 48 repo for examples.
|
st115598
|
Thanks for response.
I was asking about wrappers at PyTorch (TH, THC…) - looking at the sources I saw that you are using CPython C API.
With CFFI you could get support for PyPy for free (and probably some other runtimes as well).
|
st115599
|
we could not use CFFI for that part because we wanted a lot of additional stuff for our core bindings.
For example, multiple dispatch – with CFFI, we have to build a multiple dispatch in python, which we did not want.
Also, a few other things like autogeneration of good errors etc.
See this code pointer for an example: https://github.com/pytorch/pytorch/blob/master/setup.py#L92-L99 85
|
st115600
|
A very quick question.
I am trying to write a small cffi extension for pytorch (it will only be a small util function not an entire layer).
My question is: the arguments of the C function in the example use THFloatTensor*.
Does that mean it also supports Variables?
|
st115601
|
Hi,
I want to set different weight decay for each parameter.
I think per-parameter options 80 is related.
But, in my understanding, it can only be applied per layer, or separately for biases and weights.
I want to know how to set a different weight decay for each weight within a layer.
For example, the top row uses 1e-3, the second row 1e-4, and the bottom row 1e-3 for the weight parameters of a 3x3 convolutional filter.
Can I implement such things in PyTorch?
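There is no built-in per-element option, but since weight decay is equivalent to an L2 penalty for plain SGD, one way to sketch it is to add an element-wise penalty with a mask of decay coefficients (shapes and values here are illustrative, and this only matches built-in weight decay for SGD without momentum tricks):
import torch
from torch.autograd import Variable

conv_weight = Variable(torch.randn(1, 1, 3, 3), requires_grad=True)

# per-element decay coefficients: the rows of the 3x3 filter get 1e-3, 1e-4, 1e-3
wd = Variable(torch.Tensor([[1e-3] * 3, [1e-4] * 3, [1e-3] * 3]).view(1, 1, 3, 3))

data_loss = (conv_weight.sum() - 1) ** 2               # placeholder for the real loss
loss = data_loss + 0.5 * (wd * conv_weight ** 2).sum()
loss.backward()                                         # gradient now contains an extra wd * conv_weight term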
|
st115602
|
Hi, I’ve met the following error many times, could anyone tell me any possible reasons? Thanks so much.
Traceback (most recent call last):
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/process.py”, line 249, in _bootstrap
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/process.py”, line 93, in run
File “/home/zhou_rui/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py”, line 44, in _worker_loop
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/queues.py”, line 349, in put
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/reduction.py”, line 51, in dumps
File “/home/zhou_rui/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py”, line 113, in reduce_storage
RuntimeError: unable to open shared memory object </torch_3500_2599739126> in read-write mode at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/TH/THAllocator.c:230
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/util.py”, line 254, in _run_finalizers
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/util.py”, line 186, in call
File “/home/zhou_rui/anaconda3/lib/python3.6/shutil.py”, line 476, in rmtree
File “/home/zhou_rui/anaconda3/lib/python3.6/shutil.py”, line 474, in rmtree
OSError: [Errno 24] Too many open files: '/tmp/pymp-6ll9wgxr’
Process Process-11:
Traceback (most recent call last):
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/process.py”, line 249, in _bootstrap
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/process.py”, line 93, in run
File “/home/zhou_rui/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py”, line 44, in _worker_loop
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/queues.py”, line 349, in put
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/reduction.py”, line 51, in dumps
File “/home/zhou_rui/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py”, line 113, in reduce_storage
RuntimeError: unable to open shared memory object </torch_3500_2599739126> in read-write mode at /opt/conda/conda-bld/pytorch_1502009910772/work/torch/lib/TH/THAllocator.c:230
[]
iter: 2905 Time 0.896 Data 0.013 Loss 4.7818 RPN 3.1798 2.1379 0.1042 ODN 1.6020 1.5693 0.0072 0.0256
[‘bottle’ ‘bottle’ ‘bottle’ ‘bottle’]
T
iter: 2906 Time 0.876 Data 0.002 Loss 2.4086 RPN 1.4914 0.2360 0.1255 ODN 0.9172 0.8927 0.0000 0.0244
[]
iter: 2907 Time 0.816 Data 0.003 Loss 3.5380 RPN 2.0609 0.3948 0.1666 ODN 1.4772 1.4434 0.0126 0.0212
[‘vase’]
T
iter: 2908 Time 0.887 Data 0.005 Loss 1.9561 RPN 1.2698 0.3745 0.0895 ODN 0.6863 0.6636 0.0004 0.0223
Traceback (most recent call last):
File “main.py”, line 522, in
main()
File “main.py”, line 294, in main
i, args.eval_freq)
File “main.py”, line 346, in train
for i, (inputs, anns,paths) in enumerate(train_loader):
File “/home/zhou_rui/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py”, line 195, in next
idx, batch = self.data_queue.get()
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/queues.py”, line 345, in get
return _ForkingPickler.loads(res)
File “/home/zhou_rui/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py”, line 70, in rebuild_storage_fd
fd = df.detach()
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py”, line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py”, line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/connection.py”, line 487, in Client
c = SocketClient(address)
File “/home/zhou_rui/anaconda3/lib/python3.6/multiprocessing/connection.py”, line 614, in SocketClient
s.connect(address)
FileNotFoundError: [Errno 2] No such file or directory
|
st115603
|
ruizhou:
OSError: [Errno 24] Too many open files: '/tmp/pymp-6ll9wgxr’
Run:
ulimit -a
and then:
sudo sysctl -w fs.file-max=100000
|
st115604
|
Hi thanks for your reply. That didn’t work for me. I suspect the limit on the number of open files per process could be the cause. What would be a reasonable number for that limit? Thank you!
|
st115605
|
Hi~
I am new to PyTorch, and want to generate mini-batch data sets.
My data format is below: three fields and no target.
feature1, feature2, feature3
But the DataLoader's input is two fields:
feature, target
So how can I generate batch-sized data as above?
|
st115606
|
feature is usually an array, target is usually the label.
You can ignore the target if you don't want it.
See an example for a custom data loader here:
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/09%20PyTorch%20Kaggle%20Image%20Data-set%20loading%20with%20CNN.ipynb 125
https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day%2002%20PyTORCH%20and%20PyCUDA/PyTorch/21-PyTorch-CIFAR-10-Custom-data-loader-from-scratch.ipynb 96
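A minimal sketch of a Dataset that returns only the three features, with no target, which DataLoader will happily batch (names and sizes are placeholders):
import torch
from torch.utils.data import Dataset, DataLoader

class FeatureOnlyDataset(Dataset):
    def __init__(self, features):
        self.features = features          # e.g. a FloatTensor of shape (N, 3)

    def __len__(self):
        return self.features.size(0)

    def __getitem__(self, idx):
        return self.features[idx]         # no target returned

data = torch.randn(100, 3)                # feature1, feature2, feature3
loader = DataLoader(FeatureOnlyDataset(data), batch_size=16, shuffle=True)
for batch in loader:
    print(batch.size())                   # torch.Size([16, 3]); the last batch may be smaller
    break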
|
st115607
|
I want to fine-tune a VGG16 model on medical images, and I preprocessed the images by normalization:
def norm(x):
    x = x.clamp(max=1500)
    x = x / x.max()
    for t, m, s in zip(x, [each.mean() for each in x], [each.std() for each in x]):
        t.sub_(m).div_(s)
    return x
But when I load the parameters and run the net, I found the feature maps become all zeros after the second conv layer. Can anyone help me find the error in my operation?
Thank you !
|
st115608
|
I find that the example from PyTorch can reach 99% (I forget exactly), or maybe 100%, on MNIST.
It seems a simple NN model can also achieve a 100% result.
How do you compare models when they all achieve 100%?
Can 100% be achieved in real practice?
|
st115609
|
I wrote some LSTM based code for language modeling:
def forward(self, input, hidden):
    emb = self.encoder(input)
    h, c = hidden
    h.data.squeeze_(0)
    c.data.squeeze_(0)
    seq_len = input.size(0)
    batch_size = input.size(1)
    output_dim = h.size(1)
    output = []
    for i in range(seq_len):
        h, c = self.rnncell(emb[i], (h, c))
        # self.hiddens: time * batch * nhid
        if i == 0:
            self.hiddens = h.unsqueeze(0)
        else:
            self.hiddens = torch.cat([self.hiddens, h.unsqueeze(0)])
        # h: batch * nhid
        #self.att = h.unsqueeze(0).expand_as(self.hiddens)
        self.hiddens = self.hiddens.view(-1, self.nhid)
        b = torch.mm(self.hiddens, self.U).view(-1, batch_size, 1)
        a = torch.mm(h, self.W).unsqueeze(0).expand_as(b)
        att = torch.tanh(a + b).view(-1, batch_size)
        att = self.softmax(att.t()).t()
        self.hiddens = self.hiddens.view(-1, batch_size, self.nhid)
        att = att.unsqueeze(2).expand_as(self.hiddens)
        output.append(torch.sum(att * self.hiddens, 0))  # hidden.data
    output = torch.cat(output)
    decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
    decoded = self.logsoftmax(decoded)
    output = decoded.view(output.size(0), output.size(1), decoded.size(1))
    return output, (h, c)
And I got error in backward():
RuntimeError: cuda runtime error (2) : out of memory at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.9_1487343590888/work/torch/lib/THC/generic/THCStorage.cu:66
Any ideas why it might happen?
The memory goes to 5800MB very quickly in the first 10 batches, and then it keeps running with this much memory occupied for another several hundred batches, and then it runs out of memory.
|
st115610
|
ZeweiChu:
iddens = self.hidde
No, I don’t have to keep it. Is it a bad thing to keep unnecessary variables in the model?
|
st115611
|
if you keep Variables around, the corresponding graph that created these Variables is kept around. Hence the elevated memory usage…
|
st115612
|
@ZeweiChu yes, it’s good practice to make your model stateless. It’s best if you only keep references to parameters, and all intermediate values generated in forward are not saved anywhere for extended periods of time.
|
st115613
|
The main part of my code looks like this.
def repackage_variable(v, volatile=False):
    return [Variable(torch.from_numpy(h), volatile=volatile).unsqueeze(1) for h in v]

for k in range(len(minbatches)):
    minbatch = minbatches[perm[k]]
    x_padded = utils.make_mask(minbatch)
    x_padded = repackage_variable(x_padded, False)
    x_padded = torch.cat(x_padded, 1)
    T = x_padded.size(0)
    B = x_padded.size(1)
    inp = x_padded[:T-1, :].long()
    target = x_padded[1:, :].long().view(-1, 1)
    if use_cuda:
        inp = inp.cuda()
        target = target.cuda()
    mask = (inp != 0).float().view(-1, 1)
    hidden = model.init_hidden(batch)
    model.zero_grad()
    #print(inp.size())
    output, hidden = model(inp, hidden)
    output = output.view(-1, n_vocab)
    loss = output.gather(1, target) * mask
    loss = -torch.sum(loss) / torch.sum(mask)
    loss.backward()
    optimizer.step()
My question is, at each iteration, since all "Variable"s “inp” and “target” are overwritten, will the model state variables like “self.hiddens” also be overwritten? Does the old computation graph still exist in the next iteration?
nvidia-smi shows that about 6G of memory is used, but I am only testing on batch size of 50, and the length should be at most 200, why would it take up so much memory? And the memory size increases among iterations from time to time, but it could stay the same for a while. Any clues what might be the reason?
|
st115614
|
@ruotianluo how would they get cleaned up? It’s a reference. We’ll free most of the buffers, but I think there might still be some of them alive. This is going to change in the upcoming releases btw.
@ZeweiChu I can’t see anything wrong with your example. The only suggestion would be to convert the input into Variables as late as you can (e.g. do cat, type casts and copies on tensors not Variables). Maybe that’s how much memory your model requires. Are you sure it can even fit in memory?
|
st115615
|
So, the reference should be cleaned up after self.hiddens is overwritten by next forward? Is it correct?
|
st115616
|
Yes. It won’t be kept there indefinitely, but it still can postpone some frees and increase the overall memory usage.
|
st115617
|
Any progress on this one? I am facing a similar issue. I have implemented an LSTM and the memory remains constant for about 9000 iterations after which it runs out of memory. I am not keeping any references of the intermediate Variables.
I am running this on a 12GB Titan X GPU on a shared server.
|
st115618
|
Finally fixed it. There was a problem in my code. I was unaware that x = y[a:b] is not a deep copy of y. I was modifying x, and in turn modifying y, increasing the size of the data in every iteration. Using x = copy.deepcopy(y[a:b]) fixed it for me.
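A small sketch of that view behaviour (and note that .clone() also gives an independent copy, which avoids the deepcopy):
import torch

y = torch.zeros(5)
x = y[1:3]          # a view: shares memory with y
x += 1
print(y)            # y has changed along with x

z = y[1:3].clone()  # an independent copy
z += 1
print(y)            # y is unchanged this time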
|
st115619
|
So did you figure out why your memory usage keeps increasing? I had the exact same question as you did. Thanks.
|
st115620
|
How can I manually free the memory? For example, how would you clean up self.hidden here?
|
st115621
|
hmishfaq:
How can I manually free the memory? For example, how would you clean up self.hidden here?
Hi,
why would modifying y increase the size of the data? I have similar problems.
|
st115622
|
I would like to train A3C or distributed DQN on GPU with the new torch.distributed API. These algorithms boil down to writing GPU Hogwild training correctly. Papers like Elastic SGD 17 also need async GPU code to reproduce.
I used to work with Tensorflow distributed mode, which has a whole collection of abstractions and wrappers to implement async training. https://www.tensorflow.org/deploy/distributed 34
A decent implementation includes parameter servers and high-level managers to take care of gradient communication, parameter syncing, and shared Adam/Adagrad optimizers, for example.
Unfortunately, I cannot find any official tutorials or example code that show how to write a basic GPU Hogwild with parameter servers. The torch.distributed primitives are too low-level to use correctly. I looked at the source code of torch.nn.parallel.DistributedDataParallel, hoping to get some inspiration. It’s also too involved for me to understand and rewrite for my use case.
I understand that the distributed mode is very new. I’d really appreciate it if anyone can give some guidance on how to emulate TF’s distributed semantics in pytorch. For example, what are the main steps and which MPI primitives should be used in each step? Ideally, I’d love to see some skeleton code. I can figure out the rest of the details by myself, but I need something to start with. Thanks in advance!
|
st115623
|
I don't believe it is possible to do async hogwild training with a GPU. The point of that type of training is to exploit some of the benefits CPU use has over GPU. You could do batched A3C-like training where data collection is parallelized but the global model is updated synchronously, not asynchronously.
PS: there is no A3C GPU hogwild training implementation in TensorFlow or PyTorch on GitHub, to my knowledge.
|
st115624
|
Why isn’t it possible? Each worker can send its gradient to the central parameter server without first waiting for the other workers and then do all_reduce. The Tensorflow example code shows just that.
|
st115625
|
Ah, but see, what you just explained is a queue, and the parameters are updated synchronously, not asynchronously.
|
st115626
|
Hogwild is lock-free training.
I think what you want, if you want to use the GPU, is something like batch-A3C, for which there are good examples in PyTorch:
facebookresearch/ELF 60 (ELF - An End-To-End, Lightweight and Flexible Platform for Game Research)
And in TensorFlow:
ppwwyyxx/tensorpack 18 (tensorpack - A Neural Net Training Interface on TensorFlow)
|
st115627
|
Maybe Hogwild isn’t the right term, but here’s what I want to achieve:
Take distributed DQN as an example. Each worker does the following repeatedly:
1. Keep a local copy of the global parameters.
2. Interact with the Atari simulator, sample experience from the replay memory, and compute the gradient on the GPU. The policy network can potentially be a big convnet, so the GPU accelerates it a lot.
3. Send the gradient to the parameter server. The PS uses shared Adagrad or whatever to update the central parameter copy.
4. Pull from the PS to update the local parameter copy.
This is the async GPU training I want to implement. There are many use cases outside RL too, like elastic SGD.
|
st115628
|
I think the repos you mentioned solve a different problem. I don’t want to batch the experience collected from the game simulators and do computation on only one GPU. I want to send the gradients over to parameter servers asynchronously, like what Tensorflow’s code 16 is doing conceptually.
The project I’m working on is not exactly A3C or DQN, so I need a more general async GPU skeleton code to work with. But thank you all the same for the links!
|
st115629
|
I believe ELF does this with the GPU as well, with the same kind of model (the same people made both). But when updating a global model shared on the GPU, I believe there need to be locks so it is updated synchronously, because only atomic operations can be done without locks on a GPU.
|
st115630
|
Thanks for the links. ELF is not written in pytorch and is quite heavy-weight. It’d be much more illuminating to see minimal example code with torch.distributed to reproduce at least part of the TF parameter server + async training logic. Furthermore, the async GPU skeleton code can be reused over and over again in many cases.
|
st115631
|
There is a PyTorch model in the RL PyTorch folder. The reason I say you want batch-A3C is that if you are updating parameters individually on the GPU, the lock acquiring and releasing will slow it down so that it's no faster than doing it on the CPU. Only if you do a whole bunch of updates together, and get the benefit of the much faster matrix computation on the GPU, will it be beneficial at all.
Doing individual updates on the GPU will in most cases be slower, except if your model is extremely large.
|
st115632
|
For DQN it’s not the case, because it’s already batched. GPU makes a big speed difference over CPU on a single thread in my experiments.
I’m just looking for a way to reproduce TF’s async distributed semantics in torch.distributed rather than coming up with workarounds. It’s good to understand how to put those primitives together in a correct way, so that I have more control over the communications. There isn’t a single tutorial code on how to use torch.distributed primitives in real settings. The ImageNet example (released with v0.2) uses the nn.parallel.DistributedDataParallel wrapper whose internals are quite obscure.
|
st115633
|
But when training these models the bottleneck is data. The model needs to perform an action to get the next values and then update. That is a small amount of computation, so the speedup from doing it on the GPU is lost to the slower sharing of updates on the GPU compared to the CPU. Only by collecting a bunch of these updates and applying them all at once, avoiding all the slow individual shared updates, will it be beneficial. So yes, you can do that, but why, if it will be no faster, is what I'm trying to express.
|
st115634
|
I think if each DQN learner’s batch is big enough, the overall speed will be much faster even if they have to lose some cycles sending the gradient and downloading updates from the parameter server. That’s what Deepmind did in their GORILA paper 7. “Algorithm 1” on page 5.
I can imagine how the code would be written in Tensorflow’s async framework. But it’s not obvious how to translate into PyTorch’s distributed primitives.
|
st115635
|
Hey, did you make the distributed PyTorch code? I am also trying with the same and except the imagenet example, I am not able to find an example for distributed machine learning in PyTorch.
|
st115636
|
Not yet. Even if I did, my code would very likely be suboptimal since I’m unfamiliar with distributed computing. That’s why an official tutorial/repo of examples would really help.
|
st115637
|
Not sure if this is what you want to do, but there is a torch.distributed example here:
https://ptorch.com/news/40.html 211
|
st115638
|
In [327]: myt = torch.LongTensor([i for i in range(100)])
In [328]: torch.dist(myt,myt)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-328-6ecb2e5af07e> in <module>()
----> 1 torch.dist(myt,myt)
TypeError: Type torch.LongTensor doesn't implement stateless method dist
|
st115639
|
Long is typically used for storing targets. Try converting it to a float type using tensor.type('torch.FloatTensor') if you need to use the dist function!
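For example, a quick sketch:
myt = torch.LongTensor([i for i in range(100)])
myf = myt.float()            # or myt.type('torch.FloatTensor')
print(torch.dist(myf, myf))  # 0.0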
|
st115640
|
Hi!
I would like to add channels to a tensor, so that from [N,C,H,W] it becomes [N, C+rpad, H, W], potentially filled with zeros for the new channels, all that in a forward() function of a nn.Module.
From what I gathered, padding in the width and height dimensions is implemented in F.pad(), which calls functions from nn._functions.padding, such as the ones of ConstantPad2d. Unfortunately, padding channels is missing here.
I also see that a legacy module implement some of those things, but it does not seem to create new Variables, and I am not sure if I should use something like this.
Anyways, I am open to suggestions as to clean&concise ways to implement zero-padding in the channels to use in a nn.Module.forward() function.
Thanks!
|
st115641
|
Hi,
If your original tensor is of the size you gave above, your can use the following code to pad inp:
padding = Variable(torch.zeros(N, pad, H, W))
padded_inp = torch.cat((inp, padding), 1)
|
st115642
|
Quite nice, works perfectly. Thank you! I don’t know why I was afraid of creating a Variable in forward().
|
st115643
|
Most deep learning frameworks can forward and backward tensor data or forward objects that can be serialized into a tensor. If I want to forward (and possibly backward) some complicated custom data structure that is written in c/c++ and that cannot be easily serialized into a flat memory, how can I do that?
In Caffe this is easy, because I can put the custom data into a member variable of a Layer class and then forward the address of the object. Every layer instance is (sequentially) called only once in a forward pass of a graph, so every thing is fine. In backward pass I can access the same object, and modify it to accumulate non-tensor gradient if needed.
|
st115644
|
Hi!
So to my understanding, if I want to change the mode of operation of the dropout units, all I need to do is call net.eval() when testing/validating and net.train(True) when training. Is that true?
If so, I am super confused: after adding dropout layers, the loss on the training set is consistently HIGHER than the loss on the validation set (per example), and it seems to be by a factor of 2 (I've used p=0.5 at the dropout layers).
Before adding the dropout layers my net overfitted the data, but in the first epochs the loss was more or less the same.
(If that makes any difference, the criterion is cross-entropy, and I didn't specify a softmax layer since to my understanding it is not needed, as CrossEntropyLoss expects the raw scores.)
An example output:
[14, 6000] loss: 0.084
valid loss is 0.05202618276541386 and percents 0.9190970103721782
The first line is the training loss (per sample) and the second line is the validation loss...
I'm super confused, help will be appreciated!
|
st115645
|
If so, I am super confused: after adding dropout layers, the loss on the training is consistently HIGHER than the loss on the validation set
If you’re randomly dropping out units, wouldn’t you expect the loss to be higher, due to uncertainty that the network is operating under?
|
st115646
|
Well, honestly it's the first time I'm using dropout, but there should be a correcting factor during evaluation (when we use all the neurons' outputs) so that the actual output is on the same scale as the output during training.
Is the correcting factor missing, or should I review dropout again?!
Also, regarding the API, was I correct? (Is this how it was meant to be used?)
|
st115647
|
The correcting factor is applied at training time, and it does exist. The correction is a scaling correction; it is not always linearly correlated with the loss (especially not after going through a Softmax).
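A quick sketch of that scaling (on a recent PyTorch): in training mode the surviving activations are scaled by 1/(1-p), while in eval mode dropout is the identity:
import torch
import torch.nn.functional as F

x = torch.ones(1, 10)
print(F.dropout(x, p=0.5, training=True))   # kept entries show up as 2.0, the rest are 0
print(F.dropout(x, p=0.5, training=False))  # all ones: dropout disabled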
|
st115648
|
I installed PyTorch from source using the latest github repo. When I tried to import torch, I got the following:
lex@lex:~/Documents/NNs/pytorch$ python -c "import torch"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "torch/__init__.py", line 53, in <module>
from torch._C import *
ImportError: No module named _C
There is a _C.so module in the /usr/local/lib/python2.7/dist-packages/torch/ path. Does anyone know why this is giving me this error?
|
st115649
|
I never did. It always gives that error when I install from source. I have noticed that conda installs don’t do this. I wonder why this is so.
lex@lex:~/Documents$ /usr/bin/python -c "import torch"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/torch/__init__.py", line 53, in <module>
from torch._C import *
ImportError: /usr/local/lib/python2.7/dist-packages/torch/lib/libgomp.so.1: version `GOMP_4.0' not found (required by /usr/local/lib/python2.7/dist-packages/torch/lib/libTH.so.1)
lex@lex:~/Documents$
|
st115650
|
Here is a working solution for this problem when you install from source from varunagrawal 83:
"
I recently encountered this exact problem when compiling from source. A clean install also didn’t help. What worked was removing the libgomp.so.1 library from the torch directory and instead sym-linking it to the one in /usr/lib.
ln -s /usr/lib/libgomp.so.1 path/to/python/site-packages/torch/lib/libgomp.so.1
"
|
st115651
|
All the tensors are ByteTensors, so why does this error occur?
I get the error 'RuntimeError: expected Byte tensor (got Float tensor)', but when I print the input x it shows it is a ByteTensor.
How can I fix this?
Part of the code:
def forward(self, x1, x2, y):
print(x1.data)
x1 = self.conv1(x1) <------- ERROR HERE
Traceback (most recent call last):
File “/Users/wzy/PycharmProjects/pytorch/resnet.py”, line 277, in
train(epoch)
File “/Users/wzy/PycharmProjects/pytorch/resnet.py”, line 241, in train
h_x1, h_x2, h_y = model(x1, x2, y)
File “/Users/wzy/anaconda/lib/python3.5/site-packages/torch/nn/modules/module.py”, line 206, in call
result = self.forward(*input, **kwargs)
File “/Users/wzy/PycharmProjects/pytorch/resnet.py”, line 166, in forward
x1 = self.conv1(x1)
File “/Users/wzy/anaconda/lib/python3.5/site-packages/torch/nn/modules/module.py”, line 206, in call
result = self.forward(*input, **kwargs)
File “/Users/wzy/anaconda/lib/python3.5/site-packages/torch/nn/modules/conv.py”, line 237, in forward
self.padding, self.dilation, self.groups)
File “/Users/wzy/anaconda/lib/python3.5/site-packages/torch/nn/functional.py”, line 40, in conv2d
return f(input, weight, bias)
RuntimeError: expected Byte tensor (got Float tensor)
(0 ,0 ,.,.) =
0 0 0 … 255 255 255
0 0 0 … 255 255 255
0 0 1 … 255 255 255
… ⋱ …
0 0 1 … 255 255 255
0 0 0 … 255 255 255
0 0 0 … 255 255 255
(0 ,1 ,.,.) =
0 0 0 … 255 255 255
0 0 0 … 255 255 255
0 0 1 … 255 255 255
… ⋱ …
0 0 1 … 255 255 255
0 0 0 … 255 255 255
0 0 0 … 255 255 255
(0 ,2 ,.,.) =
0 0 0 … 255 255 255
0 0 0 … 255 255 255
0 0 1 … 255 255 255
… ⋱ …
0 0 1 … 255 255 255
0 0 0 … 255 255 255
0 0 0 … 255 255 255
[torch.ByteTensor of size 1x3x28x28]
|
st115652
|
Check your input. I think your input is in the form of a FloatTensor. I always had the same problem at first. Sometimes converting from numpy gives a different tensor type. Anyway, just as a shortcut, if your model wants a Byte tensor, just cast your input tensor to a byte tensor. For example, if your input tensor's name is input, you can use input.type(torch.ByteTensor); it will automatically convert your float tensor to a byte tensor.
|
st115653
|
Thanks, however I wrote my custom dataset class with
imgx1 = torch.ByteTensor(imgx1).view(28, 28, 3).permute(2, 1, 0)
return imgx1
so I think the input is a ByteTensor when I get the input.
|
st115654
|
If all tensors should be ByteTensors, to help ensure this try adding this to your code:
torch.set_default_tensor_type('torch.ByteTensor')
|
st115655
|
Thanks, I got the following error. Is this a problem with the resnet model?
Traceback (most recent call last):
File "/Users/wzy/PycharmProjects/pytorch/resnet.py", line 211, in <module>
model = resnet101()
File "/Users/wzy/PycharmProjects/pytorch/resnet.py", line 205, in resnet101
model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
File "/Users/wzy/PycharmProjects/pytorch/resnet.py", line 127, in __init__
bias=False)
File "/Users/wzy/anaconda/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 233, in __init__
False, _pair(0), groups, bias)
File "/Users/wzy/anaconda/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 37, in __init__
self.reset_parameters()
File "/Users/wzy/anaconda/lib/python3.5/site-packages/torch/nn/modules/conv.py", line 44, in reset_parameters
self.weight.data.uniform_(-stdv, stdv)
AttributeError: 'torch.ByteTensor' object has no attribute 'uniform_'
|
st115656
|
herleeyandi:
input.type(torch.ByteTensor)
Did you get it to work? I got the same issue. Thanks
|
st115657
|
Hi, this is a confusing error message and it will be fixed in the new version. You can see the PyTorch GitHub issues for more details.
|
st115658
|
Hello @xiao1228, sorry for my late reply. I tried it here on PyTorch version 0.2, but I think it also works in version 0.1.
|
st115659
|
Is there any Spatial Transformer Layer kind of thing in PyTorch? I could find TransformerLayer in Lasagne, which is the STN layer implementation.
EDIT 1:
If there is any example of STN with affine_grid and grid_sample as mentioned below, it would be of great help.
|
st115660
|
See http://pytorch.org/docs/master/nn.html#torch.nn.functional.grid_sample 498 and http://pytorch.org/docs/master/nn.html#torch.nn.functional.affine_grid 305 .
|
st115661
|
Thanks for the links.
I would like to ask if there is any example depicting the combination of affine_grid and grid_sample along with the localization network (which is essentially a set of conv layers that ends up regressing the 6 parameters of the transformation)?
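A minimal sketch of such a combination, on a recent PyTorch (the layer sizes and the 28x28 single-channel input are illustrative, not from any particular example):
import torch
import torch.nn as nn
import torch.nn.functional as F

class STN(nn.Module):
    def __init__(self):
        super(STN, self).__init__()
        # localization network: a small convnet that regresses the 6 affine parameters
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
        )
        self.fc = nn.Linear(10 * 3 * 3, 6)
        # start from the identity transform
        self.fc.weight.data.zero_()
        self.fc.bias.data.copy_(torch.FloatTensor([1, 0, 0, 0, 1, 0]))

    def forward(self, x):                      # x: (N, 1, 28, 28)
        theta = self.fc(self.loc(x).view(x.size(0), -1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size())  # sampling grid from the predicted affine params
        return F.grid_sample(x, grid)          # warp the input with that grid

stn = STN()
out = stn(torch.randn(4, 1, 28, 28))           # (4, 1, 28, 28), spatially transformed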
|
st115662
|
I just tried an example without a localization network, using pre-defined transformation parameters instead. I think it's straightforward to modify:
YongyiTang92/pytorch-tutorial/blob/master/tutorials/05-mytest/STN_test.ipynb 234
The notebook notes: "Pytorch spatial transformer Example. See also https://github.com/fxia22/stn.pytorch. Note that torch.nn.functional.affine_grid and torch.nn.functional.grid_sample are only in Pytorch v0.2.0 or above."
|
st115663
|
@YongyiTang92 For PyTorch v0.2.0, is there a direct link where we can download the built wheel (whl) file, or do we have to build from source?
|
st115664
|
Hi, I am new to PyTorch.
In my training matrix, most of the features are continuous, but I have several symbolic features.
I want to use a neural network.
Do I need to convert these features to numbers, or is there a better way to do that?
Thanks for your time and the help.
|
st115665
|
By symbolic, you mean discrete features (instead of continuous). For all discrete features, make them into one-hot coded symbols, or you can use an nn.Embedding and learn an embedding for each discrete feature.
|
st115666
|
Hi there,
I know my question is not very related to Pytorch, but I was trying to use pytorch on these two GPUs, so I was wondering if anyone could help me out.
I have two GPUs in my machine, one is a Quadro K620 and one is a Quadro K2200. But for some strange reason, they have the same physical ID. See the outputs from lshw and nvidia-smi.
[screenshot: Screenshot from 2017-08-21 11-13-55.png]
[screenshot: Screenshot from 2017-08-21 11-14-22.png]
When I use torch.cuda.device_count(), it tells me that two devices are available. But I can only use the Quadro K2200, no matter how I set torch.cuda.device. I have also tested in TensorFlow, and I can only select gpu:0. If I try to select gpu:1, it reports that no device gpu:1 is available.
So does anyone know how I can make the two GPUs have different IDs?
Thanks a lot for your help.
Shuokai
|
st115667
|
The output from your lshw command shows the SAME physical ID for both of the GPUs. Did you see that?
Run this code:
import torch
import sys
print('__Python VERSION:', sys.version)
print('__pyTorch VERSION:', torch.__version__)
print('__CUDA VERSION')
from subprocess import call
# call(["nvcc", "--version"]) does not work
! nvcc --version
print('__CUDNN VERSION:', torch.backends.cudnn.version())
print('__Number CUDA Devices:', torch.cuda.device_count())
print('__Devices')
call(["nvidia-smi", "--format=csv", "--query-gpu=index,name,driver_version,memory.total,memory.used,memory.free"])
print('Active CUDA Device: GPU', torch.cuda.current_device())
print ('Available devices ', torch.cuda.device_count())
print ('Current cuda device ', torch.cuda.current_device())
|