st115268
Yeah, I agree with you. Have you tested that? Does it work? I think the LSTM may have too many parameters; a GRU may work better?
st115269
Hi! You have done great work. I am also interested in CLSTM and want to do something with it. I don't know how it runs on your machine, but I couldn't run your code directly, so I rewrote some parts and it runs well with these changes. I changed the loop in CLSTM.forward to:

```python
for idlayer in xrange(self.num_layers):
    hidden_c = hidden_state[idlayer]
    output_inner = []
    for t in xrange(seq_len):
        hidden_c = self.cell_list[idlayer](current_input[:, t, :, :, :], hidden_c)
        output_inner.append(hidden_c[0].unsqueeze(1))
    next_hidden.append(hidden_c)
    current_input = torch.cat(output_inner, 1)
```

Do these changes conflict with your original intention?
st115270
Hi alan, could you tell me what error you are having with the original code? I will check your changes to see if they do the same thing.
st115271
The most obvious error is that the feature map sizes are not compatible; for example, I can't use torch.cat to concatenate the input image and hidden states successfully.
st115272
Thanks @alan_ayu! There was indeed an error in the input format (batch, seq_len,…). It happened because I used the right format in my own code, and I put a wrong one in GitHub. Could you please check again? Let me know if you still have any issues.
st115273
There is also this model: https://github.com/Atcold/pytorch-CortexNet/blob/master/model/ConvLSTMCell.py
st115274
I'm using a convolutional network for classification. My dataset is a typical 2D matrix, say, 100 samples x 10 features: one row represents a sample (e.g. a certain person's information), one column represents an attribute/feature (e.g. gender, name, weight, etc.). nn.BatchNorm1d has a num_features argument, and the docs say that the input of nn.BatchNorm1d is (N, C) or (N, C, L). I guess C here means channel. Then here comes the problem: unlike image data, there is no natural channel in my dataset. So I can either treat my dataset as 100 samples with channels=10 and num_features=1, or with channels=1 and num_features=10. I wonder which one is proper for BatchNorm1d, and why? Furthermore, take one sample vector with 10 features, for example: after passing a Conv1d(in_channel=1, out_channel=3) layer, I'll get 3 vectors with 10 features. Then, should I treat them as channels=3 and num_features=10? I think this problem essentially comes down to the difference between channels and features for inputs like images. RGB are usually called channels, but I think they can also be treated as features of one pixel. I'm not sure if I'm right.
st115275
It's more natural to treat your dataset as 10 channels and 1 feature. This is because over the next few layers, convolutions will actually try to learn correlated transforms between data projections. For BatchNorm, normalizing channel-wise is much more natural, because you don't want to normalize from one feature to another (each feature can have different ranges, etc.).
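For illustration, a minimal sketch of that channel-wise reading; the toy shapes and the Variable wrapping are assumptions, not part of the original answer:

```python
import torch
from torch import nn
from torch.autograd import Variable

# 100 samples, 10 attributes, treated as 10 channels with one value each
x = Variable(torch.randn(100, 10))      # BatchNorm1d accepts (N, C)
bn = nn.BatchNorm1d(10)                 # num_features == number of channels
y = bn(x)                               # each attribute is normalized independently

# After a Conv1d(1, 3, ...), the 3 output maps are channels, so use BatchNorm1d(3)
x2 = Variable(torch.randn(100, 1, 10))              # (N, C=1, L=10)
conv = nn.Conv1d(1, 3, kernel_size=3, padding=1)
bn3 = nn.BatchNorm1d(3)
y2 = bn3(conv(x2))                                  # (N, 3, 10), normalized per channel
```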
st115276
Thanks for the reply @smth. So, to sum up:

For an m-dimensional 1-D vector input:
- if the input contains m attributes (e.g. a person's weight, name, gender, etc.), it should be treated as m channels and one feature, and batch-norm should be done channel-wise;
- if the input contains 1 attribute with m values (e.g. a piece of voice signal with m time steps), then it is better to treat it as one channel and m features.

For an m x n 2-D matrix input:
- if the input contains m x n features (say, re-arranging a 1-D "person information" vector mentioned above into a 2-D matrix), likewise, we should treat the input as m x n channels and one feature;
- if the input contains 1 attribute (e.g. the "red" channel of a 2-D image), it should be treated as 1 channel and m x n features.

Feel free to correct me if I was wrong.
st115277
Let's say I have an input tensor input = torch.tensor([1,2,3,4,5]), and I want an output tensor which looks like [[2,3],[4,2],[3,1]], or any other crazy dimensions. So far I see that this works:

```python
t = torch.Tensor([[1, 2], [3, 4]])
c = torch.gather(t, 0, torch.LongTensor([[0, 0], [1, 1]]))
# c = [[1, 4], [3, 4]]
```

But I am not able to understand why this is the output, or why the index has to be of the same dimension and size as the input. I have worked with tf.gather() and I know how it works. Can somebody explain the logic of torch.gather()?
st115278
Hi, how do I loop over a model and extract the weights of the BN layers? The only way I can see to get the weight of a layer is model.bn1.weight.data. However, in a loop I would have a variable holding the layer name, e.g. n, and model.n.weight.data does not work, since the model does not actually have an attribute called n. Thank you.
st115279
You can check the type of the layer, for example: if type(model.n) == torch.nn.BatchNorm2d:
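For illustration, a minimal sketch of iterating over submodules by type; `model` is assumed to be your `nn.Module`, and `named_modules` is assumed to be available in your PyTorch version:

```python
import torch.nn as nn

bn_weights = {}
for name, module in model.named_modules():            # walks all submodules recursively
    if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d)):
        bn_weights[name] = module.weight.data.clone()  # the learnable affine scale (gamma)

# If you only have the layer name as a string, getattr also works:
# getattr(model, 'bn1').weight.data
```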
st115280
Struggling away with PyTorch 0.2.0. I am trying to run a Udemy deep learning project on Ubuntu 16.04 x64. Surprisingly enough, the same project runs well on a Windows 10 laptop with an earlier version of PyTorch in a conda Python 3.5 env. So I created a conda Python 3.5 env on the Ubuntu laptop to run things like TensorFlow, PyTorch, etc. My problem is that after loading torch (import torch) at the top of my Python script, as soon as the script reaches the part where torch is used (torch.nn in this case) the script crashes, killing the kernel. I discovered that in a Python window I could "import torch", but when I do a torch.rand(4) as an example, I get an "illegal instruction, core dumped", so this is what is killing my script. I have a screenshot of this but am unclear as to how to attach it to my message. I hope this makes sense to any of the developers who are involved in the PyTorch 0.2.0 project.
st115281
Clive, are you running this in a Virtual Machine? What is the output of cat /proc/cpuinfo? I am looking to see if there is at least SSE4 support. PyTorch binaries ship with SSE, SSE2, SSE3, SSE4 assembly instructions. SSE4 itself is quite an old standard, with processors dating back many years supporting it, so we didn't think it was a problem. However, I wonder if your machine supports it.
st115282
Hi there. I am actually running this on a desktop Ubuntu 16.04 x64 machine running Python 3.6, but up to a day ago I ran my script in a py35 env on the same computer and the results were the same. Here is the output of cat /proc/cpuinfo:

```
clived@UbuntuGnome2:~$ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel® Pentium® Dual CPU E2140 @ 1.60GHz
stepping        : 13
microcode       : 0xa1
cpu MHz         : 1200.000
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dtherm
bugs            :
bogomips        : 3189.63
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel® Pentium® Dual CPU E2140 @ 1.60GHz
stepping        : 13
microcode       : 0xa1
cpu MHz         : 1200.000
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm lahf_lm dtherm
bugs            :
bogomips        : 3189.63
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
```

This computer does seem to support SSE, SSE2, and SSE3 but not SSE4. Both are older machines running i686 CPUs. I hope this helps, and thanks for your response. Clive
st115283
I might just add that I also ran the same code on another Ubuntu 16.04 x64 box with the same results.
st115284
Can you build it from source like so? https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/day02-PyTORCH-and-PyCUDA/PyTorch/build_torch.sh

```sh
# PyTorch GPU and CPU
# If you dont have CUDA installed, run this first:
# https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/docker/deps_nvidia_docker.sh

# GPU version
export PATH=/usr/local/cuda-8.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
export CUDA_BIN_PATH=/usr/local/cuda
export CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0

# Build PyTorch from source
git clone https://github.com/pytorch/pytorch.git
cd pytorch
git submodule update --init
#git checkout 4eb448a051a1421de1dda9bd2ddfb34396eb7287
export TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1+PTX"
# (script continues; see the full file at the link above)
```
st115285
@clived2 your only option (if you don't have SSE4) is to build from source. Unfortunately our binaries don't work for machines with processors this far back. See @QuantScientist's advice or follow the instructions here: https://github.com/pytorch/pytorch#from-source
st115286
Thanks, I'll try to build it from source. My graphics card is one of those internal Intel ones; are such graphics cards acceptable? I'll follow QuantScientist's email and see what happens.
st115287
Thanks, I’ll remove the version of torch that I installed and build it as you suggested here
st115288
WOW, guys, it worked. I don't have an NVIDIA graphics card, just one of those Intel things on the motherboard, so I ignored any references to it in QuantScientist's notes. While my two Linux boxes are sort of old, I'm running the latest Ubuntu distribution; it took about 30 minutes to compile and it seems to be working just fine. I tested it operationally on some of the Python scripts from my Udemy courses and they all seem to be working. This has been quite the week for me: yesterday tweaking a TensorFlow install to run on Python 3.6, and today building my own version of PyTorch. Thanks guys. Clive
st115289
Will do, QuantScientist. Your notes on the subject did the trick for me Thanks a million
st115290
Hi, sorry, newbie here. I'm trying to understand how to do gradient checking with PyTorch. I'm using the MNIST example as a reference (https://github.com/pytorch/examples/blob/master/mnist/main.py). So first I wanted to get the analytic gradients of the input, which I think can be achieved by changing the following lines like this:

Line 81:

```python
data, target = Variable(data), Variable(target)
# --> becomes
data, target = Variable(data, requires_grad=True), Variable(target)
```

Insert somewhere between lines 81-84:

```python
data.register_hook(print)
```

Then to get the numerical gradients I created a function like so:

```python
def numerical_grad(input_, target, row_idx, col_idx):
    model.eval()
    input_shp = input_.size()
    E = torch.zeros(input_shp)
    if args.cuda:
        E = E.cuda()
    eps = 0.001
    E[0][0][row_idx][col_idx] = eps
    E = Variable(E)
    M1 = input_ + E
    M2 = input_ - E
    out1 = model(M1)
    out2 = model(M2)
    l1 = F.nll_loss(out1, target)
    l2 = F.nll_loss(out2, target)
    grad = (l1 - l2) / (2 * eps)
    return grad
```

I assumed this would give me the numerical gradient of the input at the specified row and column index, but it's way, way off. Am I doing something wrong? Thanks
st115291
Hi, is there any problem with the current source, conda, or the Docker setup? I'm asking because I have been trying to build a new Docker image using the latest source, but after building (i.e. on import torch) I get the following error (I didn't have this problem before):

```
from torch._C import *
ImportError: /opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/_C.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZN3MPI8Datatype4FreeEv
```

And here is my Dockerfile:

```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cmake \
        git \
        curl \
        vim \
        ca-certificates \
        libjpeg-dev \
        libpng-dev && \
    rm -rf /var/lib/apt/lists/*

RUN curl -o ~/miniconda.sh -O https://repo.continuum.io/miniconda/Miniconda3-4.2.12-Linux-x86_64.sh && \
    chmod +x ~/miniconda.sh && \
    ~/miniconda.sh -b -p /opt/conda && \
    rm ~/miniconda.sh && \
    /opt/conda/bin/conda install conda-build && \
    /opt/conda/bin/conda create -y --name pytorch-py35 python=3.5.2 numpy pyyaml scipy ipython mkl && \
    /opt/conda/bin/conda clean -ya

ENV PATH /opt/conda/envs/pytorch-py35/bin:$PATH

RUN conda install --name pytorch-py35 -c soumith magma-cuda80

RUN git clone --recursive https://github.com/pytorch/pytorch /opt/pytorch
WORKDIR /opt/pytorch
RUN git submodule update --init

RUN TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1+PTX" TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \
    CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" \
    pip install -v .

RUN git clone https://github.com/pytorch/vision.git && cd vision && pip install -v .

WORKDIR /workspace
RUN chmod -R a+w /workspace
```
st115292
If you are willing to go with Python 2.7, my Docker image has everything you need: https://github.com/QuantScientist/Deep-Learning-Boot-Camp/blob/master/docker/Dockerfile.gpu3
st115293
I'm fixing this problem today. You can track my progress with the issue on github.com/pytorch/pytorch: "Fix libTHD's dependies to correctly link against _C.so" (opened by Pavel-Akapian on 2017-09-09, closed by soumith on 2017-09-13).
st115294
I've just checked, and the upstream Dockerfile builds and I can import torch without issues. The base image does not have MPI, nor is MPI installed later, which means that THD is compiled without support for the MPI backend, but it also means that you don't have import problems.
st115295
My Docker image contains OpenMPI 1.10.3, but I had not had this problem before (even without building the Docker image and just installing from source, I get this error…). The error is the same as in the link posted above by @smth.
st115296
I've installed PyTorch from source by following the instructions in the GitHub repo. Everything works fine and the installation is a success, but the problem arises when I try to import torch. I get the following import error:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/kevinlu/miniconda3/envs/fastai/lib/python3.6/site-packages/torch/__init__.py", line 53, in <module>
    from torch._C import *
ImportError: dlopen(/Users/kevinlu/miniconda3/envs/fastai/lib/python3.6/site-packages/torch/_C.cpython-36m-darwin.so, 10): Symbol not found: _ompi_mpi_char
  Referenced from: /Users/kevinlu/miniconda3/envs/fastai/lib/python3.6/site-packages/torch/_C.cpython-36m-darwin.so
  Expected in: flat namespace
 in /Users/kevinlu/miniconda3/envs/fastai/lib/python3.6/site-packages/torch/_C.cpython-36m-darwin.so
```
st115297
This is the issue linked below. There are workarounds in the discussion. Best regards, Thomas

github.com/pytorch/pytorch: "Fix libTHD's dependies to correctly link against _C.so" (opened by Pavel-Akapian on 2017-09-09, closed by soumith on 2017-09-13).
st115298
You probably have a different Python interpreter in Jupyter. Can you print the paths on the command line and in Jupyter?

```python
import sys
print('__Python VERSION:', sys.version)
print(sys.path)
```

Also run this in Jupyter:

```
! which python
```
st115299
I'm just running it from my command line. This is what I'm doing:

```
(fastai) kevinlu@Kevins-MBP:~/Documents/Move37/JeanApp (master *) $ which python
python is /Users/kevinlu/miniconda3/envs/fastai/bin/python
```

Then I run python:

```
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:14:59)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.path)
['', '/Users/kevinlu/miniconda3/envs/fastai/lib/python36.zip', '/Users/kevinlu/miniconda3/envs/fastai/lib/python3.6', '/Users/kevinlu/miniconda3/envs/fastai/lib/python3.6/lib-dynload', '/Users/kevinlu/miniconda3/envs/fastai/lib/python3.6/site-packages']
```
st115300
QuantScientist:

> ! which python

Run this in Jupyter please and then report back.
st115301
This is what I am getting:

```
Python 3.6.2 |Continuum Analytics, Inc.| (default, Jul 20 2017, 13:14:59)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.0.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: ! which python
/Users/kevinlu/miniconda3/envs/fastai/bin/python
```
st115302
I'd recommend editing setup.py, doing a setup.py clean, and rebuilding. Best regards, Thomas
st115303
Tried the fix by changing `main_libraries += ['cudart', 'nvToolsExt', 'nvrtc', 'cuda', 'mpi_cxx']`, and now I'm having trouble building it; here is the output in a pastebin.
st115304
I'll be fixing this error once and for all today. Give me 8 more hours. The GitHub issue will be updated.
st115305
I would like to use a weighted MSELoss function for image-to-image training. I want to specify a weight for each pixel in the target. Is there a quick/hacky way to do this, or do I need to write my own MSE loss function from scratch?
st115306
You can do this:

```python
def weighted_mse_loss(input, target, weights):
    out = input - target
    out = out * weights.expand_as(out)  # expand_as because weights are prob not defined for mini-batch
    loss = out.sum(0)  # or sum over whatever dimensions
    return loss
```
st115307
Oh, I see. There’s no magic to the loss functions. You just calculate whatever loss you want using predefined Torch functions and then call backward on the loss. That’s super easy. Thanks!
st115308
What if I want an L2 loss? Just do it like this?

```python
def weighted_mse_loss(input, target, weights):
    out = (input - target) ** 2
    out = out * weights.expand_as(out)
    loss = out.sum(0)  # or sum over whatever dimensions
    return loss
```

Right?
st115309
So long as all the computations are done on Variables, which will ensure the gradients can be computed.
st115310
Should the “weights” also be wrapped as a Variable in order for auto-grad to work?
st115311
Thanks. Actually, wrapping them in Variable must be done if we need them to operate on GPU(s).
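To put the pieces of this discussion together, a small usage sketch; the shapes and this particular weighted_mse_loss variant are illustrative assumptions, not the exact code from above:

```python
import torch
from torch.autograd import Variable

def weighted_mse_loss(input, target, weights):
    out = (input - target) ** 2
    return (out * weights.expand_as(out)).sum()

use_cuda = torch.cuda.is_available()

# hypothetical shapes: a batch of 4 single-channel 8x8 "images"
pred_t = torch.randn(4, 1, 8, 8)
target_t = torch.randn(4, 1, 8, 8)
weight_t = torch.rand(1, 1, 8, 8)            # one weight per pixel
if use_cuda:
    pred_t, target_t, weight_t = pred_t.cuda(), target_t.cuda(), weight_t.cuda()

pred = Variable(pred_t, requires_grad=True)  # in practice: your model's output
target = Variable(target_t)
weights = Variable(weight_t)                 # no gradient needed for the weights

loss = weighted_mse_loss(pred, target, weights)
loss.backward()                              # gradients flow back to `pred`
```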
st115312
Pushing tensors to the GPU hangs. For example:

```python
import torch
a = torch.zeros(10)
a.cuda()  # hangs indefinitely
```

I have tried reinstalling CUDA 8.0 and cuDNN 5.1, but this has not resolved the issue. Some quick specs that may be handy:
- OS: Ubuntu 16.04
- Python distro: Python 2.7 (conda)
- GPU: NVIDIA GTX 1080 Ti
- NVIDIA driver: 375
st115313
Here is the resulting traceback from a keyboard interrupt (to stop the hanging):

```
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-6-504fb99d952c> in <module>()
----> 1 a.cuda()

/home/jkarimi91/Apps/anaconda2/envs/torch/lib/python2.7/site-packages/torch/_utils.pyc in _cuda(self, device, async)
     64         else:
     65             new_type = getattr(torch.cuda, self.__class__.__name__)
---> 66             return new_type(self.size()).copy_(self, async)
     67
     68

/home/jkarimi91/Apps/anaconda2/envs/torch/lib/python2.7/site-packages/torch/cuda/__init__.pyc in _lazy_new(cls, *args, **kwargs)
    267     # We need this method only for lazy init, so we can remove it
    268     del _CudaBase.__new__
--> 269     return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
    270
    271
```
st115314
Turned out I did not have cuda80 installed in my conda env. I must have accidentally deleted it or installed pytorch for cuda 7.5 originally.
st115315
We already have a ToTensor class that transforms a numpy-style image into a torch tensor. It seems that a ToVariable class could also be added to boost data-loading performance via multiprocessing at the data-loading step. Does this idea make sense? Thanks.

```python
class ToVariable(object):
    """Convert Tensors in sample to Variable."""

    def __call__(self, sample):
        return Variable(sample)
```
st115316
Converting a tensor to a Variable doesn't incur any noticeable time penalty, so I don't see why it would make things faster. I think the best approach is just to convert the tensors right after they are returned by the DataLoader, so that we only have a single tensor to convert to a Variable.
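For illustration, a tiny sketch of that pattern; `train_loader` and `model` are placeholder names:

```python
from torch.autograd import Variable

for images, labels in train_loader:                       # DataLoader yields plain tensors
    images, labels = Variable(images), Variable(labels)   # wrap right before the forward pass
    output = model(images)
```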
st115317
Thanks. It doesn't work either, since the data loader doesn't recognize a Variable:

```
TypeError: batch must contain tensors, numbers, dicts or lists; found <class 'torch.autograd.variable.Variable'>
```
st115318
I think you wouldn't want this because the transforms occur before the collate function in DataLoader. Instead, you might want to create a custom collate function. Below is a quick example:

```python
import torch
import torch.utils.data as data
from torch.autograd import Variable

def variable_collate(batch):
    """Puts batch of inputs, labels each into a Variable.

    Args:
        batch: (list) [inputs, labels]. In this simple example, I'm just
            assuming the input and labels are already Tensor types.
    Output:
        minibatch: (Variable)
        targets: (Variable)
    """
    minibatch, targets = zip(*[(a, b) for (a, b) in batch])
    minibatch, targets = torch.stack(minibatch, dim=0), torch.stack(targets, dim=0)
    minibatch, targets = Variable(minibatch), Variable(targets)
    return minibatch, targets

X = torch.arange(0, 10).view(-1, 2)
Y = torch.zeros(5).view(-1, 1)
ds = data.TensorDataset(X, Y)
dl = data.DataLoader(ds, batch_size=1, collate_fn=variable_collate)

for mb, tgts in dl:
    print(mb, tgts)
```
st115319
Why do you want to return Variables in the Dataset? I would avoid having that pattern actually. But if you really want to, then you can provide your own collate_fn as pointed out by @dhpollack.
st115320
@dhpollack @fmassa Thank you so much. I just thought it would be more efficient to leave the "Variable" step to the multiple worker processes; it would not be necessary if it takes almost no time. And now I have another question: does it make sense to copy the data to the GPU at the data-loading step?

```python
class ToTensor(object):
    """Convert ndarrays in sample to Tensors."""

    def __init__(self, phase_cuda=False):
        self.phase_cuda = phase_cuda

    def __call__(self, sample):
        image, landmarks = sample['image'], sample['landmarks']
        # swap color axis because
        # numpy image: H x W x C
        # torch image: C X H X W
        image = image.transpose((2, 0, 1))
        return {'image': torch.from_numpy(image).cuda() if self.phase_cuda else torch.from_numpy(image),
                'landmarks': torch.from_numpy(landmarks).cuda() if self.phase_cuda else torch.from_numpy(landmarks)}
```
st115321
If you are using multiple threads for data loading, that might not be necessary, but it depends on several factors
st115322
Thank you. It didn't work either; CUDA operations like cuda() fail in a multiprocessing worker.
st115323
Hey, I've finished training a GRU model for time-series regression, and I was trying to visualize the activations of the neurons for different inputs. Although it is simple to access the weights of the model, I couldn't find a way to get the activation outputs. I wonder if it is possible, and if someone has an idea of how to achieve it? Thanks
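One possible approach, sketched here under assumptions (this is not from the thread; `model.gru` and the input shape are hypothetical), is a forward hook on the recurrent module:

```python
import torch
from torch.autograd import Variable

activations = []

def save_activation(module, input, output):
    # nn.GRU's forward returns (output, h_n); keep the per-timestep outputs
    activations.append(output[0].data.clone())

hook = model.gru.register_forward_hook(save_activation)   # `model.gru` is a placeholder name
_ = model(Variable(torch.randn(1, 10, 5)))                 # hypothetical (batch, seq, feature) input
hook.remove()
```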
st115324
I need to frequently use permute(0,2,1,3,4) in my network. Will it hurt the performance a lot? If yes, is there a better way to do it?
st115325
No, it won't hurt performance because of the permute function itself; permute just does some stride manipulation, so it is practically a free operation.
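A small sketch showing that permute only changes strides; the copy is only paid if you later call contiguous() (the shapes here are arbitrary):

```python
import torch

x = torch.randn(2, 3, 4, 5, 6)
y = x.permute(0, 2, 1, 3, 4)              # no data copied, only the strides change
print(x.data_ptr() == y.data_ptr())       # True: same underlying storage
print(y.is_contiguous())                  # False

z = y.contiguous()                        # the actual copy happens here, if you need it
print(z.data_ptr() == x.data_ptr())       # False
```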
st115326
```python
# train the model
for epoch in range(2):
    for i, (images, labels) in enumerate(train_loader):
        print(type(images))
        images = Variable(images)
        labels = Variable(labels)
        print(type(images))

        # Forward + Backward + Optimize
        optimizer.zero_grad()
        outputs = cnn(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        if (i+1) % 100 == 0:
            print(loss.data)
            print('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f'
                  % (epoch+1, 2, i+1, len(train_dataset)//BATCH_SIZE, loss.data[0]))
```

ERROR:

```
<class 'torch.LongTensor'>
<class 'torch.autograd.variable.Variable'>
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-26-5427cb169c61> in <module>()
      8         # Forward + Backward + Optimize
      9         optimizer.zero_grad()
---> 10         outputs = cnn(images)
     11         loss = criterion(outputs, labels)
     12         loss.backward()

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

<ipython-input-19-8341c87faa62> in forward(self, x)
     14
     15     def forward(self, x):
---> 16         x = F.relu(self.conv1(x))
     17         x = F.max_pool2d(F.relu(self.conv2(x)), 2)
     18

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    222         for hook in self._forward_pre_hooks.values():
    223             hook(self, input)
--> 224         result = self.forward(*input, **kwargs)
    225         for hook in self._forward_hooks.values():
    226             hook_result = hook(self, input, result)

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    252     def forward(self, input):
    253         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 254                         self.padding, self.dilation, self.groups)
    255
    256

/home/quoniammm/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in conv2d(input, weight, bias, stride, padding, dilation, groups)
     50     f = ConvNd(_pair(stride), _pair(padding), _pair(dilation), False,
     51                _pair(0), groups, torch.backends.cudnn.benchmark, torch.backends.cudnn.enabled)
---> 52     return f(input, weight, bias)
     53
     54

RuntimeError: expected Long tensor (got Float tensor)
```

I have a question about types: the prints show torch.LongTensor and then Variable. Why does the type of images change when I wrap it in Variable()? I can't understand it. Can someone tell me why?
st115327
Maybe one of the layers in your CNN does the conversion. Can you include the code for the CNN? In the general case you can convert like so:

```python
if use_cuda:
    lgr.info("Using the GPU")
    Y = Variable(torch.from_numpy(y_data_np).type(torch.LongTensor).cuda())
else:
    lgr.info("Using the CPU")
    Y = Variable(torch.squeeze(torch.from_numpy(y_data_np).type(torch.LongTensor)))

# Also, BCELoss requires Floats in Y (e.g. targets), so maybe this is the case in your cost function too.
```
st115328
The problem is solved, but I am still a little confused. The way I used before was:

```python
# convert to pytorch tensor
train_data = torch.from_numpy(train_data)
train_label = torch.from_numpy(train_label)
val_data = torch.from_numpy(valid_data)
val_label = torch.from_numpy(valid_label)
```

After reading your words, I changed it to this:

```python
# convert to pytorch tensor
train_data = torch.from_numpy(train_data).type(torch.FloatTensor)
train_label = torch.from_numpy(train_label).type(torch.LongTensor)
val_data = torch.from_numpy(valid_data).type(torch.FloatTensor)
val_label = torch.from_numpy(valid_label).type(torch.LongTensor)
```

The problem is solved, but the error message was RuntimeError: expected Long tensor (got Float tensor). Shouldn't it be RuntimeError: expected Float tensor (got Long tensor)? This is so weird that I cannot understand it. What is the reason for it?
st115329
Please upload a full example to Git so that I can run it locally and understand what the problem is.
st115330
I fixed your issue; see the new notebook here: https://github.com/QuantScientist/quoniammm/blob/master/CNN_minist.ipynb

You needed:

```python
train_data = np.array(train_data, dtype=np.float32)
valid_data = np.array(valid_data, dtype=np.float32)
```

You now have a new error, but that would be easy to fix. Best,
st115331
Hi, I'm trying to train FCN-32s in PyTorch. I followed this PyTorch implementation [pytorch-fcn] to write my code and tried to train FCN-32s with my wrapped API. However, the results are not satisfying: pytorch-fcn reports 63.13 IU after 90K iterations, but in my implementation the results are still very bad even after 100K iterations.

Results in TensorBoard (after 100K iterations): [screenshot 屏幕快照 2017-09-11 19.05.58.png, 1392×504, omitted]

My implementation code:

```python
# models.py
def get_upsampling_weight(in_channels, out_channels, kernel_size):
    """Make a 2D bilinear kernel suitable for upsampling"""
    factor = (kernel_size + 1) // 2
    if kernel_size % 2 == 1:
        center = factor - 1
    else:
        center = factor - 0.5
    og = np.ogrid[:kernel_size, :kernel_size]
    filt = (1 - abs(og[0] - center) / factor) * \
           (1 - abs(og[1] - center) / factor)
    weight = np.zeros((in_channels, out_channels, kernel_size, kernel_size),
                      dtype=np.float64)
    weight[range(in_channels), range(out_channels), :, :] = filt
    return torch.from_numpy(weight).float()


class FCN32s(nn.Module):
    def __init__(self, pretrained=False, num_classes=21):
        super(FCN32s, self).__init__()
        # vgg16 = VGG16(pretrained=True)
        vgg16 = VGG16(pretrained=False)
        if pretrained:
            state_dict = torch.load('./vgg16_from_caffe.pth')
            vgg16.load_state_dict(state_dict)
        self.features = vgg16.features
        self.features._modules['0'].padding = (100, 100)
        for module in self.features.modules():
            if isinstance(module, nn.MaxPool2d):
                module.ceil_mode = True
        # Fully Connected 6 -> Fully Convolution
        self.fc6 = nn.Conv2d(512, 4096, 7)
        self.relu6 = nn.ReLU(inplace=True)
        self.drop6 = nn.Dropout2d()
        # FC 7
        self.fc7 = nn.Conv2d(4096, 4096, 1)
        self.relu7 = nn.ReLU(inplace=True)
        self.drop7 = nn.Dropout2d()
        self.score = nn.Conv2d(4096, num_classes, 1)
        self.upsample = nn.ConvTranspose2d(
            num_classes, num_classes, 64, 32, bias=False)
        # Init ConvTranspose2d
        init_weights = get_upsampling_weight(num_classes, num_classes, 64)
        self.upsample.weight.data.copy_(init_weights)
        # Init FC6 and FC7
        classifier = vgg16.classifier
        for idx, l in zip((0, 3), ('fc6', 'fc7')):
            layer = getattr(self, l)
            vgg16_layer = classifier[idx]
            layer.weight.data = vgg16_layer.weight.data.view(layer.weight.size())
            layer.bias.data = vgg16_layer.bias.data.view(layer.bias.size())

    def forward(self, x):
        w, h = x.size()[2:]
        x = self.features(x)
        x = self.drop6(self.relu6(self.fc6(x)))
        x = self.drop7(self.relu7(self.fc7(x)))
        x = self.score(x)
        x = self.upsample(x)
        x = x[:, :, 19:19+w, 19:19+h].contiguous()
        return x
```

```python
# datasets.py
def gen_voc_dataset(phase, path='/share/datasets/VOCdevkit/VOC2012'):
    # VOC Dataset
    voc_input_trans = T.Compose([
        ToTensor(rescale=False),   # Just ToTensor with no [0, 255] to [0, 1]
        IndexSwap(0, [2, 1, 0]),   # RGB --> BGR
        T.Normalize(VOCClassSegmentation.mean_bgr, (1, 1, 1)),
    ])
    voc_target_trans = ToArray()
    dataset = VOCClassSegmentation(
        path, phase, input_trans=voc_input_trans, target_trans=voc_target_trans)
    return dataset


def gen_sbd_dataset(phase, path='/share/datasets/SBD/dataset'):
    sbd_input_trans = T.Compose([
        ToTensor(rescale=False),
        IndexSwap(0, [2, 1, 0]),
        T.Normalize(SBDClassSegmentation.mean_bgr, (1, 1, 1)),
    ])
    sbd_target_trans = ToArray()
    dataset = SBDClassSegmentation(
        path, phase, input_trans=sbd_input_trans, target_trans=sbd_target_trans)
    return dataset


def make_dataset(phase, ignores=None):
    datasets = []
    if ignores is None:
        ignores = []
    ignores = set(ignores)
    for key, val in globals().items():
        if key.startswith('gen_'):
            dataset_name = key.split('_')[1]
            if dataset_name not in ignores:
                print('Use dataset {} for phase {}'.format(dataset_name, phase))
                d = val(phase)
                datasets.append(d)
    dataset = ConcatDataset(datasets)
    return dataset
```

```python
# utils.py
def get_params(model, bias=False):
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            if bias:
                yield m.bias
            else:
                yield m.weight
```

```python
# loss.py
class CrossEntropyLoss2d(nn.Module):
    def __init__(self, weight=None, size_average=True, ignore_index=255):
        super(CrossEntropyLoss2d, self).__init__()
        self.nll_loss = nn.NLLLoss2d(weight, size_average, ignore_index)

    def forward(self, inputs, targets):
        return self.nll_loss(F.log_softmax(inputs), targets)
```

```python
# train.py
import argparse

import torch.optim as optim
import torch.nn as nn
from torch.utils.data import DataLoader

from torchtools.trainer import ModelTrainer
from torchtools.callbacks import ModelCheckPoint
from torchtools.callbacks import TensorBoardLogger
from torchtools.meters import FixSizeLossMeter, EpochLossMeter
from torchtools.meters import EpochIoUMeter, BatchIoUMeter, FixSizeIoUMeter
from torchtools.meters import SemSegVisualizer
from torchtools.loss import CrossEntropyLoss2d

from datasets import make_dataset
from model import FCN32s as Model
from utils import get_params

parser = argparse.ArgumentParser()
parser.add_argument('--EPOCHS', type=int, default=200)
parser.add_argument('--BATCH_SIZE', type=int, default=1)
parser.add_argument('--LR_RATE', type=float, default=1e-10)
parser.add_argument('--MOMENTUM', type=float, default=0.99)
parser.add_argument('--WEIGHT_DECAY', type=float, default=5e-4)
parser.add_argument('--NUM_WORKERS', type=int, default=4)
parser.add_argument('--OUTPUT_PATH', type=str, default='./outputs')
parser.add_argument('--PIN_MEMORY', type=bool, default=True)
parser.add_argument('--SHUFFLE', type=bool, default=True)
parser.add_argument('--DEVICE_ID', type=int, default=0)
parser.add_argument('--USE_CUDA', type=bool, default=True)
parser.add_argument('--DATA_PARALLEL', type=bool, default=False)
args = parser.parse_args()

train_set = make_dataset('train', ignores=['voc'])
val_set = make_dataset('val', ignores=['sbd'])
train_loader = DataLoader(train_set, args.BATCH_SIZE, shuffle=args.SHUFFLE,
                          num_workers=args.NUM_WORKERS, pin_memory=args.PIN_MEMORY)
val_loader = DataLoader(val_set, args.BATCH_SIZE, shuffle=args.SHUFFLE,
                        num_workers=args.NUM_WORKERS, pin_memory=args.PIN_MEMORY)

model = Model(pretrained=True)
criterion = CrossEntropyLoss2d()
if args.USE_CUDA:
    model = model.cuda(args.DEVICE_ID)
    criterion = criterion.cuda(args.DEVICE_ID)
if args.DATA_PARALLEL:
    model = nn.DataParallel(model)

optimizer = optim.SGD([
    {'params': get_params(model, bias=False)},
    {'params': get_params(model, bias=True),
     'lr': args.LR_RATE * 2, 'weight_decay': 0},
], lr=args.LR_RATE, momentum=args.MOMENTUM, weight_decay=args.WEIGHT_DECAY)

trainer = ModelTrainer(model, train_loader, criterion, optimizer, val_loader,
                       use_cuda=args.USE_CUDA, device_id=args.DEVICE_ID)

checkpoint = ModelCheckPoint(args.OUTPUT_PATH, 'val_loss', save_best_only=True)
train_loss_meter = FixSizeLossMeter('loss', 'train', 20)
val_loss_meter = EpochLossMeter('val_loss', 'validate')
val_iou_meter = EpochIoUMeter('val_IoU', 'validate', num_classes=21)
train_iou_meter = FixSizeIoUMeter('train_IoU', 'train', 20, num_classes=21)
train_epoch_iou_meter = EpochIoUMeter('train_IoU_epoch', 'train', num_classes=21)
ss_visualizer = SemSegVisualizer('Prediction', 'train', 'voc', 300 // args.BATCH_SIZE)
tb_logger = TensorBoardLogger(args.OUTPUT_PATH)

trainer.register_hooks([train_loss_meter, val_loss_meter, ss_visualizer,
                        checkpoint, val_iou_meter, train_iou_meter,
                        train_epoch_iou_meter, tb_logger])
trainer.train(args.EPOCHS)
```

Could someone give me some hints about my errors? Thanks.
st115332
This might be because of our VGG model (I heard some reports that finetuning it gives lower accuracy). Try https://github.com/jcjohnson/pytorch-vgg. It's the Caffe model converted directly into PyTorch format. These models expect different preprocessing than the other models in the PyTorch model zoo: images should be in BGR format in the range [0, 255], and the following BGR values should then be subtracted from each pixel: [103.939, 116.779, 123.68].
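For illustration, a minimal preprocessing sketch of what is described above; the function name is made up, and only the BGR ordering, [0, 255] range, and mean values come from the post:

```python
import numpy as np
import torch

def caffe_vgg_preprocess(img):
    """img: RGB uint8 array of shape (H, W, 3) in [0, 255]."""
    bgr = img[:, :, ::-1].astype(np.float32)                        # RGB -> BGR, keep [0, 255]
    bgr -= np.array([103.939, 116.779, 123.68], dtype=np.float32)   # subtract per-channel BGR mean
    return torch.from_numpy(bgr.transpose(2, 0, 1).copy())          # HWC -> CHW tensor
```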
st115333
Currently I'm using what https://github.com/wkentaro/pytorch-fcn uses: a pretrained VGG16 model in PyTorch format.

```python
vgg16 = VGG16(pretrained=False)
if pretrained:
    state_dict = torch.load('./vgg16_from_caffe.pth')
    vgg16.load_state_dict(state_dict)
```

But the performance is much worse than pytorch-fcn's implementation.
st115334
Maybe I did not make this clear: BGR in [0, 255] format with the BGR mean subtracted and a pretrained Caffe-converted VGG16 model is exactly what I'm using. I also tried RGB in [0, 1] with the pretrained VGG16 model from torchvision.models; neither of them worked well.
st115335
Maybe this repo comes at a good time: https://github.com/fyu/drn (Dilated Residual Networks for semantic segmentation, with good results).
st115336
Would PyTorch support something like this? How does one go about implementing a simple autoencoder?

```python
class Encoder(nn.Module):
    def __init__(self):
        super(Encoder, self).__init__()
        self.fc1 = nn.Linear(784, 32)

    def forward(self, x):
        return F.sigmoid(self.fc1(x))


class Decoder(nn.Module):
    def __init__(self):
        super(Decoder, self).__init__()
        self.fc1 = nn.Linear(32, 784)

    def forward(self, x):
        return F.sigmoid(self.fc1(x))


class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.fc1 = Encoder()
        self.fc2 = Decoder()

    def forward(self, x):
        return self.fc2(self.fc1(x))


model = AutoEncoder()
optimizer = optim.Adam(model.parameters(), lr=0.5)
for epoch in range(1, 201):
    train(epoch)
    test(epoch, validation)
```
st115337
If you really want to do the simplest thing, I would suggest:

```python
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.fc1 = nn.Linear(784, 32)
        self.fc2 = nn.Linear(32, 784)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.fc1(x))
        x = self.sigmoid(self.fc2(x))
        return x
```
st115338
@alexis-jacq I need to access the intermediate data… Can I do that in your implementation? @apaszke I thought it would work too, but it says:

```
matrices expected, got 4D, 2D tensors at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/TH/generic/THTensorMath.c:857
```
st115339
dmadeka1:

> @alexis-jacq I need to access the intermediate data… Can I do that in your implementation?

In that case your approach seems simpler. You can even do:

```python
encoder = nn.Sequential(nn.Linear(784, 32), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)
```
st115340
@alexis-jacq I want an autoencoder with tied weights, i.e. the encoder weight equal to the decoder weight. How do I implement it?
st115341
So you want a kind of balanced autoencoder, where Encoder = Transpose(Decoder)? In that case, I would do something like this:

```python
class BalancedAE(nn.Module):
    def __init__(self):
        super(BalancedAE, self).__init__()
        self.encoder = nn.Parameter(torch.rand(size_input, size_output))

    def forward(self, x):
        x = torch.sigmoid(torch.mm(x, self.encoder))
        x = torch.sigmoid(torch.mm(x, torch.transpose(self.encoder, 0, 1)))
        return x
```
st115342
I'm writing some code to implement neural Turing machines, which need a memory module, and I don't know how to handle variable-length sequences. I found that the RNN code uses _backend.rnn and passes a batch_sizes parameter. If I use padded sequences, what should I do in the loss and optimizer? My RNN is like the example provided by the docs and is different from the standard RNN:

```python
class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
        self.i2o = nn.Linear(input_size + hidden_size, output_size)
        self.softmax = nn.LogSoftmax()

    def forward(self, input, hidden):
        combined = torch.cat((input, hidden), 1)
        hidden = self.i2h(combined)
        output = self.i2o(combined)
        output = self.softmax(output)
        return output, hidden

    def initHidden(self):
        return Variable(torch.zeros(1, self.hidden_size))
```
st115343
Just use a for-loop to iterate over your variable-length sequences. But for the sake of efficiency, I would recommend you use padded sequences for mini-batching. You can use the output of the RNN to calculate the loss and do the backward pass.
st115344
Could it work to just use padded sequences? If I understand it correctly, I first pad the sequences and then use the corresponding output (e.g. the input is [1,2,0,0] and the output is [0,1,2,2]; I would use the second output, "1", to calculate the loss), and I don't need any other operations in the RNN layer?
st115345
That's right. You could also use the final output, which is efficient but will probably hurt performance.
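For illustration, a sketch of picking each sequence's last valid output from a padded batch; the names, shapes, and CPU-only index handling are assumptions:

```python
import torch
from torch.autograd import Variable

def last_relevant_output(outputs, lengths):
    """outputs: Variable of shape (batch, max_len, hidden); lengths: list of true lengths."""
    idx = (torch.LongTensor(lengths) - 1).view(-1, 1, 1)    # last valid timestep per sequence
    idx = idx.expand(idx.size(0), 1, outputs.size(2))
    return outputs.gather(1, Variable(idx)).squeeze(1)      # (batch, hidden)
```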
st115346
Thanks very much! I used to think I needed to write some code in my RNN layer; I have been stuck here for a long time.
st115347
I am actually stuck on a similar problem. I am trying to do speech recognition using an attention mechanism. I have built the boilerplate code (model) for it using the seq-to-seq tutorial and have preprocessed my speech data. Now the problem is that for each item x in my dataset I have the following pair: <frames of x of size (anything, 13)>, <transcription of x>. The number of frames differs per item, some are (256, 13), some (134, 13), you get the idea. So how do I pad them to the same length so I can train on the GPU? Also, where should I pad the sequences: in my DataLoader class, or before I create the DataLoader? Thanks
st115348
I read PyTorch's RNN code and found there are two implementations, one for CPU and one for GPU. In pytorch/nn/_functions/rnn.py, the batch_sizes parameter is used in VariableRecurrent, which runs on the CPU, and there is no batch_sizes parameter in CudnnRNN, which runs on the GPU, so maybe dynamic batching is not supported on the GPU. I'm not 100% sure about it. I don't really understand VariableRecurrent's logic flow; I think it uses the corresponding output to calculate the loss. Now I'm going to pad the sequences to the max length and use the right output (for some shorter sequences the output to use is not the last one) to compute the loss. Please tell me if you have any new answers! Thanks!
st115349
I want to build a model in PyTorch where I have text as input. Each batch consists of multiple text passages, which consist of words, which in turn consist of characters. In addition to using word embeddings, I would like to process character embeddings through a bidirectional RNN to generate one character-based embedding per word, to help with out-of-vocabulary words. How do I process the character embeddings of size [batch_size, num_words, num_chars, char_embedding_size] through an RNN? There is one dimension too many, and when I do char_embeddings.view([batch_size*num_words, num_chars, char_embedding_size]) I cannot create a packed sequence, as the lengths of the words are not ordered. How can I still process these character embeddings through a bidirectional RNN while correctly handling the different number of characters in each word?
st115350
I have defined a convolutional neural network as follows:

```python
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5, padding=2),
            nn.BatchNorm2d(20),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(20, 20, kernel_size=5, padding=2),
            nn.BatchNorm2d(20),
            nn.ReLU(),
            nn.MaxPool2d(2))
        self.fc = nn.Linear(M*N*20, 10)   # M x N x 20

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc(out)
        return out

cnn = CNN()
```

In the self.fc layer I have used M and N to define the weight matrix size. M and N can take values like 5x5, 7x7, 10x10 and so on, and I cannot define M and N in terms of self.layer2. Is there any way to predefine the size of the weight matrix of the self.fc layer?
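One common workaround, sketched here under the assumption of a 1x28x28 input (adapt the shape to your data): run a dummy tensor through the conv layers once in __init__ and read off the flattened size.

```python
import torch
import torch.nn as nn
from torch.autograd import Variable

class CNN(nn.Module):
    def __init__(self, input_size=(1, 28, 28)):    # hypothetical input shape
        super(CNN, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=5, padding=2),
            nn.BatchNorm2d(20), nn.ReLU(), nn.MaxPool2d(2))
        self.layer2 = nn.Sequential(
            nn.Conv2d(20, 20, kernel_size=5, padding=2),
            nn.BatchNorm2d(20), nn.ReLU(), nn.MaxPool2d(2))
        # dummy forward pass to discover the flattened feature size (M*N*20)
        dummy = Variable(torch.zeros(1, *input_size))
        flat_features = self.layer2(self.layer1(dummy)).view(1, -1).size(1)
        self.fc = nn.Linear(flat_features, 10)

    def forward(self, x):
        out = self.layer2(self.layer1(x))
        out = out.view(out.size(0), -1)
        return self.fc(out)
```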
st115351
Hey folks, Currently I’m an intern at a company to do some research into the application of machine and deep learning. I’m in talks with my supervisor to get a more powerful machine at my desk, but I fear that it may not come through and I would be resigned to using whatever I have available right now. As such I’m trying to figure out my options in case I can’t get a proper rig going. The problem is as follows: the company has given me a standalone PC with a 2GB NVIDIA Quadro 4000 graphics card. However, this card has compute capacity 2.0, which means that I am able to use CUDA but not cuDNN. I get that my code will be slower without cuDNN, but I can’t quite determine if I would run into problems when I would try to implement a more complex network without cuDNN (e.g. incompatibilities with network layout or certain functions). Would I be able to do everything using PyTorch without cuDNN, or do I really require cuDNN for certain functions? Thanks in advance!
st115352
Hi Alex, with a graphics card of compute capability 2.0, PyTorch won't run with CUDA, so training on big data will be very slow. I think only Theano runs with the CUDA library but without the cuDNN library (the library for deep neural networks). I also have an NVIDIA 635M with compute capability 2.0, and running the training examples from GitHub is very slow. Can I make friends with you?
st115353
Thanks đàm! For those of you finding this thread in the future: I found the following blog post on GPU selection. TL;DR: don't cheap out like I wanted to; invest in a card that's suitable for the job. Convolution operators are performed using cuDNN, so it makes sense to require compute capability 3+, and since Kepler cards are allegedly quite slow, you'll be looking at Maxwell or more recent cards. And of course you can be my friend, đàm!
st115354
What is the best way to select a subset of data from a batch during the forward pass based on a condition? For example, the following seems to take twice the amount of time:

```python
ids = torch.squeeze(torch.nonzero(torch.ones(batch_size))).cuda()  # some condition
subset_x = x[ids, ]
subset_x = conv(subset_x)
```

compared to processing the whole batch of data:

```python
x = conv(x)
```
st115355
The way you are doing it seems fine. Can you send a small snippet illustrating the 2x slowdown?
st115356
Here is the modified MNIST example to illustrate this: https://gist.github.com/psattige/eff61cdc0a70824eb0378cb5a04d749a (mnist_test.py)

```python
# Without slicing/indexing (0.8s)
#   python mnist_test.py
# With slicing/indexing (1.2s)
#   python mnist_test.py --slice
```

My current guess is that the cost of indexing adds up to a considerable amount if there are several such operations.
st115357
Hi, thank you for PyTorch. Would you have a hint on how to approach ever-increasing memory use? I use PyTorch to train a network (CNN), and as epochs go on I notice that the RAM (but not GPU memory) usage increases from one epoch to the next. After the number of epochs reaches 1000, the RAM is full and training the CNN cannot continue. Can you give me some suggestions? Thank you so much.
st115358
You are likely keeping references to Variables somewhere. Very likely you are doing:

```python
total_loss = total_loss + current_loss
```

instead of:

```python
total_loss = total_loss + current_loss.data[0]
```
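For illustration, the pattern inside a training loop (a sketch; `model`, `criterion`, `optimizer`, and `train_loader` are assumed to exist):

```python
from torch.autograd import Variable

total_loss = 0.0
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(Variable(images)), Variable(labels))
    loss.backward()
    optimizer.step()
    total_loss += loss.data[0]   # a plain Python float, so no graph is kept alive
```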
st115359
@smth Thank you so much for your reply, but that did not seem to be it. I use multiple subprocesses to load data (num_workers=8), and as epochs go on I notice that the RAM (but not GPU memory) usage increases. I thought maybe I could kill the subprocesses after a few epochs and then start new subprocesses to continue training the network, but I don't know how to kill the subprocesses from the main process. Can you give me some suggestions? Thank you so much.
st115360
And when I set num_workers = 0, the RAM (but not GPU) memory does not increase much as epochs go on… Can you give me some suggestions or instructions about the problem? Thank you so much.
st115361
What are you loading here, images or some other format? Someone recently reported that when loading TIFF files, the Python library they were using to load them had memory leaks. Also, are you using a custom Dataset class or one from torchvision / torchtext?
st115362
Thank you for your reply. Loading images (.jpg), as shown below:

```python
kwargs = {'num_workers': 4, 'pin_memory': True} if args.cuda else {}
train_loader = torch.utils.data.DataLoader(
    TripletImageLoader('../data', trainImage, trainTriplet,
                       transform=transforms.Compose([
                           transforms.Scale(args.imageSize),
                           transforms.ToTensor(),
                           transforms.Normalize((,), (,))
                       ])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
```
st115363
this should definitely not go out of memory. Can you share the code for your data loader?
st115364
Hi, thank you for your reply. The code of the data loader is shown below:

```python
def default_image_loader(path):
    return Image.open(path)


class TripletMNIST(torch.utils.data.Dataset):
    def __init__(self, base_path, filenames_filename, triplets_file_name,
                 transform=None, loader=default_image_loader):
        self.base_path = base_path
        self.filenamelist = []
        for line in open(filenames_filename):
            self.filenamelist.append(line.rstrip('\n'))
        triplets = []
        for line in open(triplets_file_name):
            triplets.append((line.split()[0], line.split()[1], line.split()[2]))  # anchor, far, close
        self.triplets = triplets
        self.transform = transform
        self.loader = loader

    def __getitem__(self, index):
        path1, path2, path3 = self.triplets[index]
        img1 = self.loader(os.path.join(self.base_path, self.filenamelist[int(path1)]))
        img2 = self.loader(os.path.join(self.base_path, self.filenamelist[int(path2)]))
        img3 = self.loader(os.path.join(self.base_path, self.filenamelist[int(path3)]))
        if self.transform is not None:
            img1 = self.transform(img1)
            img2 = self.transform(img2)
            img3 = self.transform(img3)
        return img1, img2, img3

    def __len__(self):
        return len(self.triplets)
```
st115365
Grad of grads seems to fail on multiple GPUs with the following error:

```
RuntimeError: arguments are located on different GPUs at /pytorch/torch/lib/THC/generated/…/generic/THCTensorMathPointwise.cu
```

Small snippet:

```python
interp_points = Variable(some_tensor, requires_grad=True)
errD_interp_vec = netD(interp_points)
errD_gradient, = torch.autograd.grad(errD_interp_vec.sum(), interp_points, create_graph=True)
lip_est = (errD_gradient).view(batch_size, -1).sum(1)
lip_loss = penalty_weight * ((1.0 - lip_est) ** 2).mean(0).view(1)
lip_loss.backward()
```

If the backward pass is computed directly on netD(interp_points), everything is fine. netD is wrapped in DataParallel. Does anyone have any idea? Thanks!
st115366
If you give me a small script that reproduces this error, I will investigate further.
st115367
Is there anyone who can use torchtext skillfully? I have been using torchtext for some time now, but I cannot understand many of its functions and parameters, so I cannot use it skillfully. Is there anyone who uses it skillfully or has documentation for it?