st119668
CUDA multiprocessing is quite complicated, and I wouldn't recommend it unless it gives you huge performance benefits. The two most important things to remember are:

- You can't use the default fork start method. You need to switch multiprocessing to either spawn (simpler) or forkserver (possibly more efficient, but still subtle - the server needs to be initialized before CUDA is initialized). The problem is that CUDA doesn't support forking the process once the context has been initialized. The child will be in an inconsistent state and it's UB.
- A shared CUDA storage/tensor from another process mustn't go out of scope in the process that created it, for at least as long as it's used in other processes. Otherwise, UB again.
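For example, here is a minimal sketch of driving everything from a spawn context (assuming torch.multiprocessing re-exports the standard library's set_start_method, which it is designed to do as a drop-in replacement):

import torch
import torch.multiprocessing as mp

def worker(q):
    # CUDA is initialized here, in the child process, never before the process is started
    x = torch.ones(10).cuda()
    q.put(float(x.sum()))

if __name__ == '__main__':
    mp.set_start_method('spawn')   # must run before anything touches CUDA
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    print(q.get())
    p.join()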
st119669
I'm not familiar with all the architecture in PyTorch, but I would like to share some thoughts if I may. Wouldn't multi-threaded multi-GPU work if: 1) the C++ part released the GIL systematically, and 2) memory (de)allocation weren't blocking (use a memory pool like TensorFlow instead of the caching allocator)? I guess moving all the C++ parts to cffi would solve 1) and would even make it possible to use PyPy (Lua Torch used a JIT, but in Python it's suddenly a bad idea?)
st119670
Thank you for the quick reply. For some context, I am trying to launch several agents inheriting from mp.Process, each playing in a separate environment and enqueuing frames in a shared Queue. A single prediction subprocess dequeues batches of frames, does an inference step through the neural network estimator and passes actions back to each agent. I am already seeing nice speed-ups compared to the naive asynchronous implementation, and after I changed the start method to spawn it seemed to run fine on my laptop with CUDA support (shaving about 20% off the execution time when using CUDA). However, when I tested it on our servers I got a RuntimeError: CUDNN_STATUS_NOT_INITIALIZED. I get now that it's not really trivial to set this up, and I will continue working on the implementation, leaving CUDA aside for now. I do plan to revisit the issue and I'll post here about any developments.
st119671
@apaszke also, how is multi-GPU support being implemented? I've seen the Hogwild! example but it's CPU only. edit: "The child will be in an inconsistent state and it's UB" - what is UB?
st119672
Yeah, UB is undefined behaviour - anything could happen. @kmichaelkills we do release the GIL in the C++ parts, and memory (de)allocation isn't blocking. You can think of the caching allocator as a self-expanding memory pool. Multi-threaded multi-GPU works, and is used to dispatch kernels by our DataParallel module and data_parallel function. Still, even though we carefully release the GIL as soon as we enter C++, the contention on the lock that serializes execution makes it nearly impossible to saturate 8 modern GPUs. That's why we've started exploring CUDA multiprocessing. Regarding cffi, we didn't want to add more dependencies, and having more fine-grained control over the GIL and some other aspects makes it easier for us to develop the C code. I'd love to support PyPy, but there are very few people using it, so it's not really a top priority for us. @florin well, the frames are on the CPU already, right? Can't you cat them there and ship the whole batch to CUDA before giving it to the network? Why do you need CUDA multiprocessing there? I don't really understand the thing about multi-GPU. You can switch between GPUs using torch.cuda.device, and that's all you need to execute code on another device. Alternatively, if you want purely asynchronous training, you could start up some threads or processes, have them run on separate GPUs, and sync parameters periodically. This is a use case where CUDA multiprocessing can come in handy, because the shared memory never goes out of scope.
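For illustration, switching devices with the context manager looks roughly like this (the device indices are just placeholders):

import torch
from torch.autograd import Variable

with torch.cuda.device(0):
    a = Variable(torch.randn(4, 4).cuda())   # allocated on GPU 0

with torch.cuda.device(1):
    b = Variable(torch.randn(4, 4).cuda())   # allocated on GPU 1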
st119673
As shown in the title: is there a clip function, like clip(p, 0, 1), that constrains p to the range (0, 1)? I could not find a clip among the torch tensor functions.
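For reference, clamp appears to be the equivalent operation:

import torch

p = torch.randn(5)
q = torch.clamp(p, 0, 1)   # out-of-place: values limited to [0, 1]
p.clamp_(0, 1)             # in-place variant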
st119674
In the RNN implementation, there are two biases, b_ih and b_hh. Why is this? Is it different from just use one bias? Will it affect performance or efficiency?
st119675
You mean in general? For the same reason that it needs two sets of weights: one for the input and one for the previous state.
st119676
Taking an RNN with tanh activation as an example, it follows h_t = tanh(w_{ih} * x_t + b_{ih} + w_{hh} * h_{t-1} + b_{hh}). Here b_{ih} and b_{hh} are just biases (trainable constants); it does not matter whether they belong to the input or the previous state. Actually, letting b = b_{ih} + b_{hh}, we get h_t = tanh(w_{ih} * x_t + w_{hh} * h_{t-1} + b), which should be the same.
st119677
As you pointed out it doesn’t really change the definition of the model, but this is what cuDNN does, so we’ve made our RNNs consistent with this behaviour.
st119678
I want to make sure that I do not break my installation on my Linux machine with GPU (CUDA 8.0) when attempting to update PyTorch, so I am asking here to be sure. The context is that I had installed PyTorch on my Linux machine as per the old instructions. Specifically, I have a virtualenv environment, and within that, I pip installed from a wheel. I would like to now upgrade my PyTorch version to whatever is on master, and the website has this to say about Linux with CUDA 8: conda install pytorch torchvision cuda80 -c soumith. However, I do not have conda, and anyway I installed via a wheel with pip. Is there anything I should do (or not do) in particular here to make sure I do not break my system? I want to basically just update PyTorch on my Linux machine. Thanks!
st119679
git clone https://github.com/pytorch/pytorch
cd pytorch
python setup.py install

then later:

cd pytorch
git pull origin master
python setup.py install
st119680
Thanks! I am a little confused though - since I operate out of my pip virtual env, how does Python "know" which version of PyTorch to use when I do import torch, for example?
st119681
@Veril I am now also confused: how do I know what version of PyTorch is currently being run? The pip one (the old one) is showing here, but how do I load the new version once it's installed? How do I know what version I have imported? And lastly, how can I control which version I want to load? Thanks!!
st119682
Hope there is something like "luarocks install nn" in Torch7, so that we can update to bleeding-edge versions quickly…
st119683
There's a list of directories where Python will look for packages - sys.path. If the package is found in any of them, Python stops looking further and loads that one. When you run in a virtualenv, sys.path is extended by putting the local package directory before the system-wide one. The next version of PyTorch is going to have a torch.__version__ attribute, so you'll be able to check which one has been loaded. You can use pip uninstall torch to get rid of the old install. All remaining questions are about managing virtualenvs, and there are probably lots of articles that will explain it better than I can in this short response.
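If it helps, a quick way to check which install is actually being imported (the version attribute is commented out since it may not exist on older wheels, as noted above):

import sys
import torch

print(torch.__file__)          # path of the package that actually got imported
print(sys.path)                # the directories Python searches, in order
# print(torch.__version__)     # once the newer version ships this attribute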
st119684
@apaszke I'm in a real rut here. I would like to uninstall the NEW PyTorch that I installed via @Veril's answer, and go back to the old 1.6 version (that I still have here via pip). How do I do that? I looked at sys.modules but it's just a huge amount of information dumped onto the screen. I currently do not know how to control what version is being loaded… Help!
st119685
It’s really hard for me to debug this remotely. Just run pip uninstall torch until it tells you that there’s no such package installed. Then install the version you want.
st119686
Hi All, can someone share a snippet demonstrating how to use mark_non_differentiable from autograd.Function?
st119687
For example, comparison is non-differentiable, so you could implement a Function for it like that:

class Gt(Function):
    def __init__(self, scalar=None):
        super(Gt, self).__init__()
        self.scalar = scalar

    def forward(self, tensor1, tensor2=None):
        other = tensor2 if tensor2 is not None else self.scalar
        mask = tensor1.gt(other)
        self.mark_non_differentiable(mask)
        return mask
st119688
I have made a CNN:

class convNet(nn.Module):
    # constructor
    def __init__(self):
        super(convNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 96, kernel_size=7, stride=1, padding=3)
        self.conv2 = nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2)
        self.conv3 = nn.Conv2d(256, 384, kernel_size=3, stride=1, padding=1)
        self.conv4 = nn.Conv2d(384, 512, kernel_size=3, stride=1, padding=1)
        self.conv5 = nn.Conv2d(512, 1024, kernel_size=3, stride=1, padding=1)
        self.fc1 = nn.Linear(4*4*1024, 300)
        self.fc2 = nn.Linear(300, 100)
        self.fc3 = nn.Linear(100, 10)

    def forward(self, x):
        conv1_relu = nnFunctions.relu(self.conv1(x))
        conv2_relu = nnFunctions.relu(self.conv2(conv1_relu))
        conv3_relu = nnFunctions.max_pool2d(nnFunctions.relu(self.conv3(conv2_relu)), 2)
        conv4_relu = nnFunctions.max_pool2d(nnFunctions.relu(self.conv4(conv3_relu)), 2)
        conv5_relu = nnFunctions.max_pool2d(nnFunctions.relu(self.conv5(conv4_relu)), 2)
        x = conv5_relu.view(-1, 4*4*1024)
        x = nnFunctions.relu(self.fc1(x))
        x = nnFunctions.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = convNet()
net.cuda()

But when I write torch.save('net.txt', net) it gives me the following error:

AttributeError                            Traceback (most recent call last)
<ipython-input-13-a8f33f24049b> in <module>()
----> 1 torch.save('net.t7', net)

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in save(obj, f, pickle_module, pickle_protocol)
    121         f = open(f, "wb")
    122     try:
--> 123         return _save(obj, f, pickle_module, pickle_protocol)
    124     finally:
    125         if new_fd:

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in _save(obj, f, pickle_module, pickle_protocol)
    212     pickle_module.dump(sys_info, f, protocol=pickle_protocol)
    213
--> 214     with closing(tarfile.open(fileobj=f, mode='w:', format=tarfile.PAX_FORMAT)) as tar:
    215         _add_to_tar(save_sys_info, tar, 'sys_info')
    216         _add_to_tar(pickle_objects, tar, 'pickle')

/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in open(cls, name, mode, fileobj, bufsize, **kwargs)
   1691             else:
   1692                 raise CompressionError("unknown compression type %r" % comptype)
-> 1693             return func(name, filemode, fileobj, **kwargs)
   1694
   1695         elif "|" in mode:

/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in taropen(cls, name, mode, fileobj, **kwargs)
   1721         if mode not in ("r", "a", "w"):
   1722             raise ValueError("mode must be 'r', 'a' or 'w'")
-> 1723         return cls(name, mode, fileobj, **kwargs)
   1724
   1725     @classmethod

/home/sarthak/anaconda2/lib/python2.7/tarfile.pyc in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel)
   1577         self.members = []       # list of members as TarInfo objects
   1578         self._loaded = False    # flag if all members have been read
-> 1579         self.offset = self.fileobj.tell()
   1580         # current position in the archive file
   1581         self.inodes = {}        # dictionary caching the inodes of

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.pyc in __getattr__(self, name)
    241         if name in modules:
    242             return modules[name]
--> 243         return object.__getattribute__(self, name)
    244
    245     def __setattr__(self, name, value):

AttributeError: 'convNet' object has no attribute 'tell'
st119689
You have an obvious mistake. It's not:

torch.save('net.txt', net)

It is:

torch.save(net, 'net.txt')
st119690
While loading my network with net = torch.load('net.txt') I get the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
in ()
----> 1 net = torch.load('net.txt')

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in load(f, map_location, pickle_module)
    246     f = open(f, 'rb')
    247     try:
--> 248         return _load(f, map_location, pickle_module)
    249     finally:
    250         if new_fd:

/home/sarthak/anaconda2/lib/python2.7/site-packages/torch/serialization.pyc in _load(f, map_location, pickle_module)
    344     unpickler = pickle_module.Unpickler(pickle_file)
    345     unpickler.persistent_load = persistent_load
--> 346     result = unpickler.load()
    347     return result

AttributeError: 'module' object has no attribute 'convNet'
st119691
It’s hard to debug this without seeing the code, but my guess would be that pickle can’t find the definition of the convNet class. You’re probably trying to import it from a different directory, or you have moved the file.
st119692
I have saved my net in one Jupyter notebook and I am trying to load the saved network in another Jupyter notebook. Is there a way to load the net in a different notebook?
st119693
Or better, put the class definition in a file in the same directory as your notebooks and import it from both.
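For example (a sketch - the file name here is made up, and the class body is just a placeholder for the convNet defined earlier in the thread):

# convnet_def.py, sitting next to both notebooks
import torch.nn as nn

class convNet(nn.Module):
    def __init__(self):
        super(convNet, self).__init__()
        self.fc = nn.Linear(10, 10)   # the real layers go here

    def forward(self, x):
        return self.fc(x)

Then, in both the saving and the loading notebook:

import torch
from convnet_def import convNet   # pickle needs to be able to find this class

net = torch.load('net.txt')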
st119694
I know that when you run the first batch, it can be very slow. However, I ran into the following situation:

Read data: 1.64876580238
iter 60021 (epoch 8), train_loss = 2.436, time/batch = 0.396
Read data: 0.0616140365601
iter 60022 (epoch 8), train_loss = 2.617, time/batch = 0.098

When reading the data takes a long time, the network time is also higher. The time is measured after synchronization. My data loader uses multi-threaded loading. Does anyone have any idea?
st119695
That’s a tough one. There are many reasons why this can happen. You can have a background process that’s sleeping most of the time, but wakes up and consumes a lot of resources for some time. It might be Python’s garbage collection. It can be some kind of cleanup in the OS. It can be because of the hardware. Does this happen to you very often?
st119696
It may be solved if I change my data loading strategy. (The read-data time is occasionally long because I'm periodically dumping tasks into the pool.) But every time the read-data time is long, the network also becomes slow, so I'm curious what would cause the network to slow down. Could high IO slow down the network computation? I thought the network computation time was mostly determined by the GPU?
st119697
It is, but you still need to have your CPU share, so it can queue the kernels for the GPU. If the Python process is suspended, then it can’t give the GPU any work to do.
st119698
Hi, I wrote a simple script here, utilizing the new nn.ModuleList:

module = [nn.Conv2d(1, 64, 3, stride=(1,1), padding=(1,1)), nn.BatchNorm2d(
self.module_list = nn.ModuleList(module)
self.module_list += module

Each "module" is composed of a Conv2d and a BatchNorm. What I would like self.module_list to be is: Conv2d -> BatchNorm -> Conv2d -> BatchNorm. Is this the right way to go about it? I worry that I am not really composing a function, but re-using the same set of weights twice… which I don't want, of course. I want each Conv2d, BN to be unique. Is this the right way to go about it? Thanks!!
st119699
No, you're adding the same modules, with the same weights, twice. You need to create a new list, and append that after construction:

def make_sequence():
    return [nn.Conv2d(1, 64, 3, stride=(1,1), padding=(1,1)), ...]

self.module_list = nn.ModuleList()
for i in range(5):
    self.module_list += make_sequence()
st119700
I was implementing low-rank approximation for reducing the number of filters (https://arxiv.org/pdf/1405.3866.pdf). Below is simple code to test the performance.

Part 1:

import torch
import time
import numpy as np

img = torch.autograd.Variable(torch.rand(1, 3, 416, 416))
filters = torch.autograd.Variable(torch.rand(64, 3, 5, 5))
avg = 0
for x in xrange(10):
    num_ops = 0
    st = time.clock()
    conv = torch.nn.functional.conv2d(img, filters, padding=1)
    num_ops = np.prod(conv.size()) * np.prod(filters.size())
    # print result.size()
    avg += time.clock() - st
print avg/10.0, 'Average conv operation time'
print 'Number of operation', num_ops

Part 2:

print "================================================================="
filters = torch.autograd.Variable(torch.rand(8, 3, 5, 5))
A = torch.autograd.Variable(torch.rand(64, 8))
avg = 0
core_avg = 0
for x in xrange(10):
    num_ops = 0
    st = time.clock()
    core_st = time.clock()
    conv = torch.nn.functional.conv2d(img, filters, padding=1)
    num_ops = np.prod(conv.size()) * np.prod(filters.size())
    core_avg += time.clock() - core_st
    conv = conv.view(8, -1)
    core_st = time.clock()
    num_ops += np.prod(conv.size()) * A.size()[0]
    conv = torch.mm(A, conv)
    core_avg += time.clock() - core_st
    # print conv.view(result.size()).size()
    avg += time.clock() - st
print 'Number of reduced operation', num_ops
print avg/10.0, 'Average reduced operation time'
print core_avg/10.0, 'Average reduced core operation time'

Output:

0.0859299 Average conv operation time
Number of operation 52652851200
=================================================================
Number of reduced operation 910455552
0.0904023 Average reduced operation time
0.0897171 Average reduced core operation time

Part 1 does the simple convolution and Part 2 does the low-rank approximation. I see that the number of operations is lower for the low-rank approximation, but there is no reduction in the time taken. What is wrong in my implementation? Is conv2d better optimized than matrix multiplication?
st119701
The number of operations you computed is for the naive convolution algorithm, but there's been a lot of research in this area, and modern algorithms perform far fewer operations. Additionally, the actual speed depends not only on floating point operations, but also on memory bandwidth. Doing conv + mm requires reloading some intermediate values multiple times, and this takes time. A single conv kernel can reuse values already loaded into registers.
st119702
Hi guys, I've been digging through docs and this forum but I can't quite figure this seemingly basic thing out - if I want to reparameterize a filter as an indexed function of another filter (let's just say, in the most basic case, that I want to make a sparse 5x5 filter out of a 3x3 filter), how do I go about doing that so that this properly ends up in the graph? In Theano, I would just use set_subtensor, but I'm not entirely clear on how things get put into the graph with e.g. index_fill_ or index_copy_. The end goal is to end up parameterizing filters like this (MDC Blocks). I'm also struggling to figure out exactly how to replicate standard numpy advanced indexing/slicing. Ideally I would just go:

W_base = Parameter(torch.randn(num_out, num_in, 3, 3))
W_effective = Variable(torch.zeros(num_out, num_in, 5, 5))
W_effective[:,:,::2,::2] = W_base

And have a 5x5 filter which is effectively a dilated 3x3 filter, and where only the elements that I indexed in that call are parameters that can be updated. Note that I'm not just trying to dilate a filter (the convNd modules already look to have that on lock), but ideally I'd like to be able to arbitrarily parameterize a tensor by indexing. Any help is appreciated! Best, Andy
st119703
Hmm, this part might be hard, as we don't support strided indexing yet (it should be easy to add though). Apart from this, if you want to have a Parameter that's computed from another one, you should do that in the forward function of the model, and use our functional interface. Here's a simple example with Linear:

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.large_linear = nn.Linear(100, 100)

    def forward(self, x):
        x = F.sigmoid(self.large_linear(x))
        W_small = self.large_linear.weight[:10]
        b_small = self.large_linear.bias[:10]
        return F.linear(x, W_small, b_small)
st119704
Thanks! Hmm, so including the parametrization in forward() makes sense so that the new param is a part of the graph… might it instead be easier to initialize a Parameter, set the static values to zero, and then specifically index in the backprop to only update the desired indices? Best, Andy
st119705
No, that wouldn't work, because you'd have to backprop through the parametrization at each iteration, but it doesn't make sense to specify retain_variables and lose all the memory savings in other parts of the graph. I'd recommend recreating the parametrized Variable from scratch at every iteration.
st119706
Alright, so I've dug further into this and found some interesting things. TL;DR: I solved it for my use-case, but I might have stumbled onto some bugs. Don't use in-place operations; bad things happen.

I managed to achieve what I was going for by creating the main weight parameter, w, and two intermediate variables: W, a zeros-filled tensor of the desired final shape, and M, a mask of the same shape as W filled with 1s indicating which elements of W I want to fill with w. For consistency, the number of positive elements in M has to equal the number of elements of w, of course. I initialize both W and M in the init method of the module, then call W[M] = w during the forward() method, and convolve using the modified W. This works, but for some reason when I do things this way it results in training starting out fast, then progressively slowing down (starting out at around 5 batches/s and dropping to 0.2 batches/s over the course of the first 1000 batches). It also throws an error about non-leaf variables not yet being serializable when I try to use torch.save, presumably because I'm creating some naughty nodes that I shouldn't be. My initial suspicion was that I was creating additional subgraphs that weren't being deleted, or not freeing memory appropriately (more on that in a moment), but the memory usage in this case was constant.

Investigating further, I found that if I replaced W[M] = w with W.masked_copy_(M, w), I get an error after the first batch saying that I need to use "retain_variables=True" if I want to backpropagate a second time through the graph. The error message is a bit confusing here, as I am only calling backward() once. My intuition is that the above error occurs because I'm calling an in-place method in forward(), which seems to be against best PyTorch practice at the moment, so whatever variables autograd would need to do the backprop aren't getting saved. Calling backward(retain_variables=True) results in the same behavior as using W[M] = w; it works, but it progressively slows down throughout training. I'm still not sure what's causing the slowing - my best guess is that some part of the graph isn't getting appropriately freed, in such a way that rather than creating multiple subgraphs that take up memory, it's just propping through the same graph element an increasing number of times on each successive iteration.

I ran into another interesting issue while messing with this - while running Brendan Amos's DenseNet model inside my own training code, if I swapped out standard filters for dilated filters using the dilation keyword in conv2d, I would see a memory explosion that would overflow my GPU within ~50 batches. It turned out this was because I was using saved_loss += loss rather than += loss.data, so it was creating multiple copies of subgraphs and not freeing them appropriately. The user error isn't interesting, but the fact that when using undilated filters I do not observe this memory explosion, despite the bad += loss line, is interesting.

Anyhow, I did manage to get things working for my use-case - rather than trying to do a masked copy or in-place operation, I just instantiate W as a full-rank tensor and drop W*M into the F.conv2d call. Works great and is about twice as fast as using the dilation parameter (presumably because it allows for the use of the cuDNN backend). Here's a code snippet with which I'm currently getting ~80-100% speedup over using the dilation keyword for my use-case.
Note that this currently prevents you from saving with torch.save due to a "can't serialize non-leaf variables yet" dealio.

In init:

self.m = Variable(torch.cuda.FloatTensor([[(([(([1]+[0]*(dilation-1))*3)[:-(dilation-1)]] + [[0]*(3+(dilation-1)*2)]*(dilation-1))*3)[:-(dilation-1)]]*n_in]*n_out))
self.W = Parameter(torch.zeros(n_out, n_in, 3+2*(dilation-1), 3+2*(dilation-1)), requires_grad=True).cuda()  # requires_grad not necessarily necessary

In forward:

out = F.conv2d(input, weight=self.W*self.m, padding=dilation, bias=None)

Sorry for the wall of text, but hopefully this will prove enlightening and thorough if other people come along with similar issues with in-place ops. For the record, I'm using the build provided by conda install and am on Python 2.7 (my attempts at building from source crash, sigh). I tested these with CUDA 7.5 on a GTX 980 and CUDA 8.0 on a Titan X. Best, Andy
st119707
What's the full error you're seeing? It doesn't sound as if it has anything to do with in-place ops. The slowdown comes from the fact that you're modifying W and reusing it in every subsequent iteration, but it keeps its history, so the graph keeps growing. In-place operations are part of the history too! If you don't want to backprop through the old graph multiple times, I'd recommend repackaging W in a fresh Variable after every iteration (see e.g. repackage_hidden from the language modeling example). There should be a better way to solve this, but I'd need to know more about your particular use case. This also causes the serialization failure - you can only save leaf Variables, but your W keeps hold of the history, so it's not a leaf! That's probably because PyTorch has to pick a more memory-intensive backend for dilated convolutions. If you can send me a diff that I could test, I could confirm that it's because of this.
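The helper referenced above is roughly the following (paraphrased from the language modeling example, so treat it as a sketch):

from torch.autograd import Variable

def repackage_hidden(h):
    # wrap the data in a fresh Variable, detaching it from its history
    if isinstance(h, Variable):
        return Variable(h.data)
    else:
        return tuple(repackage_hidden(v) for v in h)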
st119708
I nominate Adam for king of speed replies, wow! So I tried two variations on repackage_hidden in the forward() method, both of which solve this issue:

1. Keep W and M as regular old tensors (not Variables), then do:

W[M] = w
out = F.conv2d(x, Variable(W))

2. Repackage W a la the example you linked:

W = V(self.W.data)
W[M] = w

#1 is ~15% faster than #2, and the original thing I was doing with W*M is ~7.5% faster than #2. This also makes it possible to save with torch.save. The exact error I was getting when not properly Variable-ing around was:

RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time.

I was only calling backward() a single time, hence my initial confusion; I mostly found this interesting because it only occurred when switching to masked_copy_ from a direct indexing call. I'll put together a benchmark script to compare and validate different dilation implementations - I did some experimentation on that front for Theano/Lasagne a few months back, so I'll be really curious to see how things roll with PyTorch. Thanks, Andy
st119709
Great! I think that if you can keep them as Tensors (i.e. you don't strictly need autograd for that part), then it's best to do this. The error was caused because the history was never freed; I don't think it has anything to do with masked_copy_. If you look into how assignment is implemented, it actually also uses an in-place function (it's just hidden behind that syntactic sugar). Cool! Once you put up your PyTorch implementation I can take a look to make sure it's running at full speed!
st119710
I have:

PRE EMBEDDING: input.size() is (2406L, 2L) -- (word_id, batch_size)

  99    99
 489  1171
 281  1317
   ⋮
   0   435
   0  2741
   0   517
[torch.LongTensor of size 2406x2]

POST EMBEDDING: context_output.size() is (2406L, 2L, 1L) -- (word_sequence, batch_size, attention_score)

Variable containing:
(0 ,.,.) =
 1.00000e-02 *
  2.0804
  1.6674
  2.9782
   ⋮
  4.8565
  4.8565
  4.8565
...
(1 ,.,.) =
 1.00000e-02 *
  0.2246
  1.4224
  4.1816
   ⋮
  4.4363
  3.0162
  3.3986
[torch.FloatTensor of size 2x2406x1]

I need to sum the values in the POST EMBEDDING tensor if they have the same word_id from the PRE EMBEDDING tensor. Example:

PRE EMBEDDING:

Variable containing:
 4  1
 4  2
 2  2
[torch.LongTensor of size 3x2]

POST EMBEDDING:

Variable containing:
(0 ,.,.) =
  0.35
  0.35
  0.65
...
(1 ,.,.) =
  0.25
  0.65
  0.65
[torch.FloatTensor of size 2x3x1]

DESIRED RESULT:

Variable containing:
(0 ,.,.) =
  4.0  0.7
  2.0  0.65
  1.0  0.0
...
(1 ,.,.) =
  4.0  0.0
  2.0  1.3
  1.0  0.25
[torch.LongTensor of size 2x3x2]
st119711
There's no built-in method that can do that. It seems weird to me that you want the desired result to have columns of (word_idx, sum), because the first column is integral while the other is floating point, and tensors are always homogeneous. I don't think there's a method for that in any of the numerical packages. You'll have to process the tensors using your own Python code. If you want to make it differentiable, you can create your own autograd Function.
st119712
Hi all, I'm trying to implement a simple recursive neural network. My cell looks something like this:

import torch
import torch.nn.functional as F
import torch.nn as nn

class ReNNCell(nn.Module):
    def __init__(self, dim):
        super(ReNNCell, self).__init__()
        self.dim = dim
        self.W = nn.Linear(dim*2, dim)
        self.W_score = nn.Linear(dim, 1)

    def forward(self, inputs):
        assert(inputs.size()[0] == 1), 'we expect batch size = 1'
        rep = F.relu(self.W(inputs))
        score = F.relu(self.W_score(rep))
        return rep, score

It will take two concatenated inputs and then return a new representation of these inputs and a corresponding score. Based on this score I would like to build up a tree as follows:

tensors = [Variable(torch.randn(1,3)), Variable(torch.randn(1,3)), Variable(torch.randn(1,3)), Variable(torch.randn(1,3))]
cats = []
for i in range(1,4,2):
    cats.append(torch.cat([tensors[i-1], tensors[i]], 1))

cell = ReNNCell(3,2)
outputs = [cell(c) for c in cats]
reps = [o[0] for o in outputs]
scores = [o[1] for o in outputs]
scores = torch.cat(scores, 1)
reps = torch.cat(reps, 0)
max_score, max_index = torch.max(scores, 1)
max_index = torch.squeeze(max_index, 1)
max_score = torch.squeeze(max_score, 1)

Based on this selection I will compute some dummy loss:

crit = torch.nn.MSELoss()
loss = crit(reps.index_select(0, max_index), Variable(torch.ones(1,3)))

If I call loss.backward() I get the following error:

.../pytorch/torch/autograd/variable.py in backward(self, gradient, retain_variables)
    156             'or with gradient w.r.t. the variable')
    157         gradient = self.data.new().resize_as_(self.data).fill_(1)
--> 158         self._execution_engine.run_backward((self,), (gradient,), retain_variables)
    159
    160     def register_hook(self, hook):

RuntimeError: could not compute gradients for some functions (Threshold, Threshold)

Could anybody point me in the right direction? Is this a bug, or do I perhaps have to somehow compute the gradients myself?
st119713
Hi, it may be related to this issue. Does replacing max_index = torch.squeeze(max_index, 1) with max_index = torch.squeeze(Variable(max_index.data), 1) solve your issue?
st119714
Yes it does! So index_select somehow discards the Variable state (or is it squeeze)?
st119715
No, it's a problem (on the PyTorch side, not in your code) in the way the backward engine computes dependencies. It's triggered by the fact that you use the max_index from the max operation, which is not differentiable. You can use the workaround until this issue is solved.
st119716
How do you load a model from the model zoo (say: https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth), and how do you forward with this model file?
st119717
To open that file (which is a collection of weights) you can use params = torch.load('resnet18-5c106cde.pth'). To get a ResNet-18, instead, you should use res18 = torchvision.models.resnet18(pretrained=True). To forward a random image, you can define a FloatTensor, encapsulate it into a Variable and send it to the network:

x = torch.rand(1, 3, 224, 224)
xVar = torch.autograd.Variable(x)
res18(xVar)
--> Variable containing:
    -0.4374 -0.3994 -0.5249  ...  -0.5333  1.4113  0.9452
    [torch.FloatTensor of size 1x1000]

You can access the output vector by appending .data to your output Variable. Let me know if it works.
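And if you would rather construct the architecture yourself and push the downloaded weights into it, something along these lines should work:

import torch
import torchvision

res18 = torchvision.models.resnet18()          # architecture only, random weights
params = torch.load('resnet18-5c106cde.pth')   # the downloaded state dict
res18.load_state_dict(params)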
st119718
A Linear layer requires a 2D tensor. Does that mean I have to use the 'view' function to reshape the previous output every time I call it? But for some experiments BN and ReLU are both needed afterwards, and they both require 4D input, for which I need to call the reshape function again and again… and I can't find a corresponding reshape module for nn.Sequential(). Is there any plan to add one?
st119719
What about BatchNorm1d? And of course, a Linear module is simply an affine transformation, hence it requires vectors (1D) or a stack of vectors, i.e. matrices (2D). ReLU() is a point-wise operator, so it should work with any input dimensionality.
st119720
No, we don't plan to add any modules for reshaping that could be put into a Sequential. You might want to take a look at how the torchvision models are implemented.
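Roughly, the pattern used there is to keep the convolutional stack in a Sequential and do the reshape by hand in forward - a sketch (not the actual torchvision code, and the sizes assume a 32x32 input):

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.size(0), -1)   # flatten to 2D right before the Linear layer
        return self.classifier(x)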
st119721
Thank you for replying! I'm so sorry for not noticing BatchNorm1d in the documentation. Now my confusion is perfectly solved~
st119722
For example, a Tensor with shape [2, 3, 4] -> [4, 2, 3]. In numpy, we could directly use np.transpose(Tensor, [2, 0, 1]). However, in PyTorch, I could not find an elegant way to do it. Thanks for your help.
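For later readers: permute seems to be the direct counterpart:

import torch

x = torch.randn(2, 3, 4)
y = x.permute(2, 0, 1)   # shape becomes (4, 2, 3), like np.transpose(x, [2, 0, 1])
print(y.size())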
st119723
What is the best way to initialize a new network from a sub-network of a larger network (saved using torch.save(net.state_dict()))?
st119724
You could use some things from How to extract features of an image from a trained model
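One common pattern (a sketch with hypothetical names; the linked thread has the details) is to filter the saved state dict down to the entries the new network shares:

import torch

pretrained = torch.load('big_net_state.pth')   # saved via torch.save(net.state_dict(), ...)
own_state = small_net.state_dict()

for name, param in pretrained.items():
    # copy only entries that exist in the smaller network and match in shape
    if name in own_state and own_state[name].size() == param.size():
        own_state[name].copy_(param)

small_net.load_state_dict(own_state)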
st119725
I would like to write a custom nn.Module including a max unpooling module. For example, say I declare MyModule1 and MyModule2, which output the pooling index beside the output signal and take the pooling index (pool_idx) as input, respectively, as below:

class MyModule1(nn.Module):
    def __init__(self):
        super(MyModule1, self).__init__()
        self.conv = nn.Conv2d(64, 3, 4)
        self.pool = nn.MaxPool2d(3, 3, return_indices=True)

    def forward(self, x):
        x = self.conv(x)
        x, pool_idx = self.pool(x)
        return x, pool_idx

class MyModule2(nn.Module):
    def __init__(self, pool_idx):
        super(MyModule2, self).__init__()
        self.pool_idx = pool_idx
        self.conv = nn.ConvTranspose2d(3, 64, 4)
        self.unpool = nn.MaxUnpool2d(3, 3)

    def forward(self, x):
        x = self.unpool(x, self.pool_idx)
        x = self.conv(x)
        return x

But I do not think the 2nd module accepts the pooling index as an input. Also, I do not know how to make a bigger module including these two custom modules. For example:

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.m1 = MyModule1()
        self.m2 = MyModule2()

    def forward(self, x):
        x, pool_idx1, pool_idx2 = self.m1(x)
        x = self.m2(x, pool_idx1, pool_idx2)
        return x

But it won't work. I would greatly appreciate it if you could help me to solve this problem.
st119726
You shouldn't save the pooling index in the constructor. This should work:

class MyModule1(nn.Module):
    def __init__(self):
        super(MyModule1, self).__init__()
        self.conv = nn.Conv2d(64, 3, 4)
        self.pool = nn.MaxPool2d(3, 3, return_indices=True)

    def forward(self, x):
        x = self.conv(x)
        x, pool_idx = self.pool(x)
        return x, pool_idx

class MyModule2(nn.Module):
    def __init__(self):
        super(MyModule2, self).__init__()
        self.conv = nn.ConvTranspose2d(3, 64, 4)
        self.unpool = nn.MaxUnpool2d(3, 3)

    def forward(self, x, pool_idx):
        x = self.unpool(x, pool_idx)
        x = self.conv(x)
        return x

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.m1 = MyModule1()
        self.m2 = MyModule2()

    def forward(self, x):
        x, pool_idx = self.m1(x)
        x = self.m2(x, pool_idx)
        return x
st119727
Hi all, I try to use PyTorch on the 2nd GPU:

a = torch.ones(1).cuda(1)
b = torch.ones(1).cuda(1)
c = torch.cat((a, b), 0)

Then an error comes out:

RuntimeError: cuda runtime error (77) : an illegal memory access was encountered at /data/users/soumith/miniconda2/conda-bld/pytorch-0.1.7_1485444530918/work/torch/lib/THC/generic/THCTensorCopy.c:65

How can I fix this?
st119728
In addition, how do I set learning rates for different layers? I think using

for param_group in optimizer.state_dict()['param_groups']:
    param_group['lr'] = lr

can only set the learning rate for the whole model.
st119729
Yes, we're aware of the bug in cat. It will be fixed during the weekend. About the second question, see the per-parameter options section of the optim docs.
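For reference, the per-parameter options look roughly like this (the module names below are placeholders):

import torch.optim as optim

optimizer = optim.SGD([
    {'params': model.features.parameters(), 'lr': 1e-3},
    {'params': model.classifier.parameters(), 'lr': 1e-2},
], lr=1e-3, momentum=0.9)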
st119730
I encountered the same problem. I have just updated to the latest version but the error still arises. Has it been fixed? If not, is there any workaround?
st119731
What's more, this error arises only when I use GPU 1, 2 or 3 on my PC. No error arises if I use GPU 0. Not sure whether this is related to the issue: torch.cat puts result on current GPU rather than GPU of inputs. For now, it seems that I can work around it by using GPU 0.
st119732
A temporary workaround is to wrap the torch.cat calls in with torch.cuda.device_of(tensor), where tensor can be e.g. the first element of the catted sequence. A fix is waiting in this PR.
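Concretely, the workaround would look something like this sketch:

import torch

a = torch.ones(1).cuda(1)
b = torch.ones(1).cuda(1)

with torch.cuda.device_of(a):   # make GPU 1 current so cat allocates its result there
    c = torch.cat((a, b), 0)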
st119733
Hello everyone! This is my first post in the forums. I am a new user, starting my Torch and PyTorch experience and I am very excited to do so! I am still going over documentation, tutorials and such and considering further options. One thing I thought I would ask the more experienced people in the forum concerns packages. There are quite a few packages in PyTorch (this is one of my favourite aspects of Torch actually!) and I was wondering if people would like to 'pass the torch', in a sense, by sharing their experiences with them:

- Which package was the best for loading data from a csv file?
- Which packages do you use most often when dealing with images?
- Which packages do you use most often when dealing with text and NLP?
- Which packages do you use for visualization of model performance and understanding of your results?
- Which packages do you think are missing and could be developed to further enhance PyTorch?

Thanks in advance for any comments that might come! See you down the Torch road! Kind regards, Theodore.
st119734
Hi all, after I load a PyTorch model, Python gives me this message:

~/anaconda2/lib/python2.7/site-packages/torch/serialization.py:304: SourceChangeWarning: source code of class 'torch.nn.modules.batchnorm.BatchNorm2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.

Does anybody know what this means?
st119735
Since the model structure is actually defined by the code, we've implemented a simple mechanism that saves the source code of each Module that you save, and when it's loaded it compares it with the source that's used to reinstantiate it. It's meant to warn you that the model might now work differently if you change your model code in the meantime. I didn't realize it also worked for built-in modules; we'll have to disable that. You can safely ignore the warning, and overwriting the old checkpoint with a new one will make it disappear. Also, we recommend serializing only the state dict.
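That recommendation in code form - a minimal sketch, reusing the convNet class from earlier in the thread as a placeholder:

# saving
torch.save(net.state_dict(), 'net_state.pth')

# loading: construct the model first, then fill in the weights
net = convNet()
net.load_state_dict(torch.load('net_state.pth'))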
st119736
I was wondering why conda is preferred over pip. pip was used to compile from source, but now it installs binary wheels too. Any major difference? For Torch we went through 5 (?) package managers at least… Edit: OK, this quote - "Pip is a package manager, and Virtualenv is an environment manager. Conda is both." - may explain it.
st119737
This is because conda manages dependencies that are not just python dependencies. For example with conda, we get tighter control over which BLAS is installed.
st119738
Hi, I ran into a problem as follows: BLAS : Program is Terminated. Because you tried to allocate too many memory regions. After setting the thread counts as follows:

export OPENBLAS_NUM_THREADS=1
export GOTO_NUM_THREADS=1
export OMP_NUM_THREADS=1

the problem was solved. Therefore, the problem arises from CPU multi-threading. I just used 50 or 100 images as a batch, where each image is 3x224x224. The machine has 2 GPUs - Tesla M40 - and 2 CPUs - Intel Xeon E5 v4. Because I cannot connect with conda, I downloaded the PyTorch and torchvision sources and compiled them. The CUDA version is 7.5, the cuDNN version is 5.1. How can I deal with this problem? Thank you beforehand.
st119739
I guess the problem was caused by an incompatibility of OpenBLAS with PyTorch, because the problem was totally solved by reinstalling PyTorch with the command 'conda install pytorch torchvision -c soumith'. I have heard that PyTorch compiled from the GitHub sources cannot control OpenBLAS well.
st119740
I haven’t heard about problems with OpenBLAS before, but I don’t think it’s really a PyTorch bug, since we’re only calling its functions. If it can’t manage its threads properly, there’s nothing we can do about it. I’d recommend using MKL.
st119741
Yes, I do agree with you. It is not a problem in PyTorch. I just recorded the problem and its solution to tell other people about the underlying incompatibility with OpenBLAS, and to suggest following the simplest installation path.
st119742
https://arxiv.org/abs/1702.03192
If these results are to be believed it's perhaps worth looking into.

Abstract - Fully connected network has been widely used in deep learning, and its computation efficiency is highly benefited from the matrix multiplication algorithm with cuBLAS on GPU. However, we found that there exist some drawbacks of cuBLAS in calculating matrix A multiplies the transpose of matrix B (i.e., NT operation). To reduce the impact of NT operation by cuBLAS, we exploit the out-of-place transpose of matrix B to avoid using NT operation, and then we apply our method to Caffe, which is a popular deep learning tool. Our contribution is two-fold. First, we propose a naive method (TNN) and model-based method (MTNN) to increase the performance in calculating A × B^T, and it achieves about 4.7 times performance enhancement in our tested cases on a GTX1080 card. Second, we integrate the MTNN method into Caffe to enhance the efficiency in training fully connected networks, which achieves about 70% speedup compared to the original Caffe in our configured fully connected networks on a GTX1080 card.

Conclusion and Future Work: Our method to multiply matrix A and the transpose of matrix B is much better than that of the cuBLAS API. The original results achieve about 4.7x speedup compared to using cuBLAS directly on a GTX1080 card. Furthermore, the method is applied to Caffe, and the optimized Caffe performs about 70% faster on a GTX1080 card. The transposition algorithm we used is an out-of-place method, which results in double the memory of a matrix and cannot run normally if there is not enough memory. Therefore, we plan to exploit an in-place matrix transposition algorithm and to find a good trade-off between memory overhead and throughput.
st119743
Test script:

import torch
import torch.nn as nn
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self, **config):
        super(Net, self).__init__()
        self.config = config
        self.embedding = nn.Embedding(config['vocab_size'], config['embedding_size'])
        self.rnn = nn.LSTM(
            input_size = config['code_size'] + config['embedding_size'],
            hidden_size = config['hidden_size'],
            num_layers = config['num_layers'],
            dropout = config['dropout_ratio'],
        )
        self.linear = nn.Linear(config['hidden_size'], config['vocab_size'])
        self.softmax = nn.Softmax()

    def forward(self, code, step):
        batch_size = code.size()[0]
        prev_index = Variable(torch.LongTensor(batch_size).fill_(self.config['beg_index']))
        prev_h = prev_c = Variable(torch.zeros(self.config['num_layers'], batch_size, self.config['hidden_size']))
        logits = []
        for i in range(step):
            prev_vector = self.embedding(prev_index)
            curr_input = torch.cat((code, prev_vector), 1)
            curr_input = curr_input.view(1, *curr_input.size())
            curr_output, (curr_h, curr_c) = self.rnn(curr_input)
            prev_h, prev_c = curr_h, curr_c
            logit = self.linear(curr_output.squeeze())
            prev_index = torch.max(logit, 1)[1].squeeze()
            logits.append(logit)
        shape = (len(logits),) + logits[0].size()
        logit = torch.cat(logits, 0)
        prob = self.softmax(logit).view(*shape)
        return prob

net = Net(
    code_size = 100,
    hidden_size = 50,
    num_layers = 2,
    dropout_ratio = 0,
    vocab_size = 1000,
    embedding_size = 10,
    beg_index = 1,
)
code = Variable(torch.FloatTensor(14, 100))
prob = net(code, 10)
prob.backward(torch.ones(prob.size()))

Execution result (note the last line):

Traceback (most recent call last):
  File "test.py", line 50, in <module>
    prob.backward(torch.ones(prob.size()))
  File "/Users/warbean/anaconda3/envs/py35/lib/python3.5/site-packages/torch/autograd/variable.py", line 158, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: could not compute gradients for some functions (Linear, Linear, Linear, Linear, Linear, Linear, Linear, Linear, Linear)

Then I found that the error message "could not compute gradients for some functions" is located at https://github.com/pytorch/pytorch/blob/master/torch/csrc/autograd/engine.cpp#L276, caused by an assertion failure:

THPUtils_assert(not_ready.empty(), "could not compute gradients for some functions (%s)", names.c_str());

Is there something wrong in PyTorch's dependency engine or in my usage? (I'm not sure whether there is a "dependency engine" concept in PyTorch; I just borrow the term from MXNet.)
st119744
Yeah, we call that the backward engine. It's a bug indeed. Use this workaround for now, and I'll fix it today:

prev_index = Variable(torch.max(logit.data, 1)[1].squeeze())
st119745
I now see that the problem arises because the Linear layer will get gradients only from the last iteration of the loop (it is followed by non-differentiable argmax in all other cases). Not sure if that’s desired, just wanted to give you a heads up.
st119746
I've verified that the bug can only raise errors unnecessarily - it doesn't affect correctness. There's a big PR that touches the code I'd need to change to fix it, and since the workaround is quite simple, I'm putting this on hold until it's merged. I've opened an issue.
st119747
Thank you for the workaround! It runs without error. However, I don't understand what "get gradients only from the last iteration of the loop" means. What I want is to propagate gradients into each output step, throughout the whole sequence back to the beginning. Should I propagate into each output step individually? Like this:

for prob in probs:
    prob.backward(gradient, retain_variables = True)
st119748
Sorry, nevermind my comment. I only visualized the graph for a single iteration. It’s all fine.
st119749
I'm using ResNet to do feature extraction. I'm assuming the current ResNet provided in the model zoo is converted from fb.resnet.torch. Are you planning to convert the Caffe model into a PyTorch version? (From my own experience, it seems the Caffe one is better.)
st119750
The current model is trained from scratch and matches the accuracy of the fb.resnet.torch model.
st119751
I realize that. The features just don't work as well as the features extracted from the Caffe model. Anyway, I wrote some code which can convert the Caffe ResNet to a PyTorch model: https://github.com/ruotianluo/pytorch-resnet (it converts a ResNet trained in Caffe to a PyTorch model).
st119752
I know that in TensorFlow I can do it with tf.slice(my_tensor, begin, size). Many thanks.
st119753
In Torch / PyTorch you can use the narrow(dimension, start, length) method, which allows you to get a smaller range (length) of a given dimension starting at a specific point (so you'll end up with the same number of dimensions). Use the select(dimension, index) method instead to choose a specific index of one dimension (you'll end up with one fewer dimension).
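Small illustrations of the two calls:

import torch

x = torch.randn(4, 5)
a = x.narrow(1, 2, 2)   # columns 2 and 3; still a 2D tensor of size 4x2
b = x.select(0, 1)      # row 1; a 1D tensor of size 5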
st119754
Or better just use the regular indexing syntax, as if you were operating on numpy arrays: arr[0, 2:4] will select first slice along the first dimension and slices 2 and 3 from the second dimension.
st119755
Hi, I am building a network containing a linear layer at the top of my architecture. I notice that the number of input units needs to be explicitly specified, like: self.fc1 = nn.Linear(5033, 30). I know I can compute the number of outputs from my previous pooling layer. I was wondering if there is a more convenient way to define a network without computing the number of units for each layer. Thanks in advance, Sen
st119756
No, we require providing the exact numbers of features at construction time. I think it’s safer and you gain more insight into your network structure, because you know exactly what kind of dimensionalities are used. If it’s really unacceptable for you, you might create the layers lazily in the forward function, but that’s not recommended.
st119757
Here is an example of torch.nn.PReLU(num_parameters) acting on a 5D tensor:

out = nn.PReLU(8)(Variable(torch.rand(2,8,16,16,16)))

The error looks like:

RuntimeError: wrong number of input planes at /data/users/soumith/miniconda2/conda-bld/pytorch-cuda80-0.1.8_1486040640754/work/torch/lib/THNN/generic/PReLU.c:49

Here is an example of torch.nn.Conv3d(bias=False) acting on a 5D tensor:

out = nn.Conv3d(8, 16, kernel_size=3, padding=1, bias=False)(Variable(torch.rand(2,8,16,16,16)))

The error looks like:

TypeError: FloatVolumetricConvolutionMM_updateOutput received an invalid combination of arguments - got (int, torch.FloatTensor, torch.FloatTensor, torch.FloatTensor, NoneType, torch.FloatTensor, int, int, int, int, int, int, int, int, int), but expected (int state, torch.FloatTensor input, torch.FloatTensor output, torch.FloatTensor weight, torch.FloatTensor bias, torch.FloatTensor finput, int kT, int kW, int kH, int dT, int dW, int dH, int pT, int pW, int pH)

Lua Torch had similar problems with 5D tensors, so maybe this is a backend issue.
st119758
Conv3d(bias=False) has a PR fixing it. I'll merge it this week and push it into the next release on Wednesday.
st119759
Also, it seems that PReLU doesn't support 5D tensors at the moment. I've opened an issue.
st119760
import torch
import torch.nn.functional as F

def softmax(input, axis=1):
    """Apply softmax to input along a certain axis.

    Parameters:
        input: Tensor (N*L or rank > 2)
        axis: the axis to apply softmax along

    Returns:
        Tensor with softmax applied along that dimension.
    """
    input_size = input.size()
    trans_input = input.transpose(axis, len(input_size)-1)
    trans_size = trans_input.size()
    input_2d = trans_input.view(-1, trans_size[-1])
    soft_max_2d = F.softmax(input_2d)
    soft_max_nd = soft_max_2d.view(*trans_size)
    return soft_max_nd.transpose(axis, len(input_size)-1)

aa = torch.randn(3,4,4)
print aa
soft_1 = softmax(aa, axis=1)
print soft_1

gives the following error:

File "/local/anaconda2/lib/python2.7/site-packages/torch/tensor.py", line 214, in view
    raise ValueError("input should be contiguous")
ValueError: input should be contiguous
st119761
Transpose followed by view will fail, because view requires the tensor to be contiguous. @longcw’s code should do it.
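One common fix (a sketch - the referenced code may do it differently) is to force a contiguous copy before the view:

input_2d = trans_input.contiguous().view(-1, trans_size[-1])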
st119762
It seems that Module.zero_grad() does not like parameters with no grad (the source below crashes), but what is the proper way to have model parameters which should not be touched by backprop, but should still benefit from Module's comforts (cuda() etc.)?

import torch
from torch import Tensor
from torch.nn.parameter import Parameter
from torch.nn import Module

class Blah(Module):
    def __init__(self, dim):
        super(Blah, self).__init__()
        self.s = Parameter(torch.rand(1, dim), requires_grad = False)
        self.t = Parameter(torch.rand(1, dim))

blah = Blah(10)
blah.zero_grad()
st119763
Maybe you want to register a buffer instead? Parameters are meant to be optimised. If you don't want a parameter, simply use a Variable instead, which is roughly equivalent to your grad-less Parameter. This should do the trick:

import torch
from torch import Tensor
from torch.nn.parameter import Parameter
from torch.autograd import Variable
from torch.nn import Module

class Blah(Module):
    def __init__(self, dim):
        super(Blah, self).__init__()
        self.s = Variable(torch.rand(1, dim))
        self.t = Parameter(torch.rand(1, dim))

blah = Blah(10)
blah.zero_grad()
st119764
Using a Variable was my first choice, but then Module.cuda() does not propagate to it. With the use of Variable you suggest, blah.cuda() will convert blah.t.data to torch.cuda.FloatTensor as expected, but will leave blah.s.data unchanged. Wouldn't it be more consistent for Module.zero_grad() to deal with requires_grad=False? Edit: that would be, in modules.py:

def zero_grad(self):
    """Sets gradients of all model parameters to zero."""
    for p in self.parameters():
        if hasattr(p, 'grad'):
            p.grad.data.zero_()
st119765
You can overload the cuda function and call the default cuda method inside it:

def cuda(self):
    super(Blah, self).cuda()
    self.s = self.s.cuda()
st119766
But aren't there other functionalities that I will have to fix as well? Persistence in particular?
st119767
I really think you want to register a buffer… When you call cuda(), your buffer will be turned into a torch.cuda.FloatTensor (see cuda() and the buffer-aware apply()). Then, in your forward() method you can generate a Variable from the buffer's content.
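A sketch of that approach applied to the example above:

import torch
from torch.nn import Module, Parameter
from torch.autograd import Variable

class Blah(Module):
    def __init__(self, dim):
        super(Blah, self).__init__()
        self.register_buffer('s', torch.rand(1, dim))   # follows .cuda() and ends up in the state dict
        self.t = Parameter(torch.rand(1, dim))           # optimised as usual

    def forward(self, x):
        # wrap the buffer in a Variable at use time
        return x * Variable(self.s) + self.t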