st101600
I'm not sure, but you should update to the latest release. Variables and tensors were already merged in 0.4.0. You can find the install instructions on the website.
st101601
Hi everyone, I was wondering if it's possible to wrap scikit-learn's functions in something like, let's say, Variable, and at run time make them execute on GPU instead of CPU? Does anyone know of anything similar?
st101602
Hi, scikit-learn functions are implemented for cpu, so it is not possible to run it on gpu. You would need to use gpu-implemented functions.
st101603
Thanks! I know that scikit-learn is CPU-only, but I thought maybe there was some dirty hack or trick to make it work on GPU without too much hassle. There's h2o4gpu, which is trying to translate scikit-learn to GPU-based algorithms, but it is still very early and hasn't adopted most of the algorithms yet.
st101604
Ho, I’m afraid I’m not aware of anything like that done with pytorch Tensors/on gpu.
st101605
Thanks! Apparently there is https://github.com/h2oai/h2o4gpu but it still needs a lot of work to be done.
st101606
Apologies for the late reply. Apparently not yet. I haven’t had the time to look into it properly.
st101607
Hi, currently I'm using BinaryNet provided here. Particularly, I'm working on the vgg_cifar10_binary model on the CIFAR10 dataset. I modified the code to have two instances of the model, one for training, the other for testing, like this:

model = models.__dict__[args.model]
model_val = models.__dict__[args.model]
model_config = {'input_size': None, 'dataset': 'cifar10'}
model = model(**model_config)
model_val = model_val(**model_config)

In each epoch, I copied the parameters of the training model to the testing model:

for epoch in range(args.start_epoch, args.epochs):
    optimizer = adjust_optimizer(optimizer, epoch, regime)
    train_loss, train_prec1, train_prec5 = train(
        train_loader, model, criterion, epoch, optimizer)
    torch.save(model.state_dict(), path)
    model_val.load_state_dict(torch.load(path))
    with torch.no_grad():
        val_loss, val_prec1, val_prec5 = validate(val_loader, model_val, criterion, epoch)

Other parts of the code remain unchanged. I expected it to perform the same as using one model. But it's not. The validation actually gave correct results at the first epoch, and dropped drastically starting from the 2nd epoch (changing model_val back to model in the validate function would make the network function properly). Did I do anything wrong with the parameter saving & loading? Thanks!
st101608
I'm training a Convolutional Kernel Network (https://arxiv.org/pdf/1406.3332.pdf, https://arxiv.org/pdf/1605.06265.pdf) and I need to sample random image patches after the last convolution layer. My current approach is the following, where x_in is the image of shape [batch_size, channels, height, width], fs is the length of the side of the square I want to sample, and n is the number of samples I want to extract from the current batch.

def sample_patches(self, x_in, fs, n):
    all_patches = x_in.unfold(2, fs, 1).unfold(3, fs, 1).transpose(1, 3).contiguous().view(-1, x_in.size(1) * fs * fs)
    # print(all_patches.size())
    n_sampling_patches = min(all_patches.size(0), n)
    indices = torch.randperm(all_patches.size(0))[:n_sampling_patches]
    indices = indices.cuda()
    patches = all_patches[indices]
    return patches

This code works but is quite slow. I think because the "contiguous" does a lot of copying. Is there a more efficient way to achieve this? Thanks, Matteo Ronchetti
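P.S. One idea I have been toying with (a rough sketch, not benchmarked against the code above) is to pick the random patch locations first and index the unfolded view directly, so only the sampled patches get copied instead of making the whole unfolded tensor contiguous:

import torch

def sample_patches_sparse(x_in, fs, n):
    # draw n random (batch, row, col) patch positions first
    b, c, h, w = x_in.shape
    nh, nw = h - fs + 1, w - fs + 1
    n = min(n, b * nh * nw)
    idx = torch.randperm(b * nh * nw, device=x_in.device)[:n]
    bi = idx // (nh * nw)
    hi = (idx % (nh * nw)) // nw
    wi = idx % nw
    # (b, c, nh, nw, fs, fs) strided view; indexing it copies only the n patches
    patches_view = x_in.unfold(2, fs, 1).unfold(3, fs, 1)
    patches = patches_view[bi, :, hi, wi]      # (n, c, fs, fs)
    return patches.reshape(n, -1)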
st101609
LSTM, from the official documents: num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1. So, if I want to stack a number of layers, say 2, I could just specify 2 as num_layers, right? But does it work the same way as the following code? nn.Sequential(LSTM1, LSTM2)
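(I realize nn.Sequential itself wouldn't run as-is, since each LSTM returns an (output, (h_n, c_n)) tuple rather than a plain tensor, so manual stacking would look more like the sketch below. Is num_layers=2 equivalent to that, shape-wise?)

import torch
import torch.nn as nn

x = torch.randn(5, 3, 10)                       # (seq_len, batch, input_size)

# built-in stacking: two layers inside one module
stacked = nn.LSTM(input_size=10, hidden_size=20, num_layers=2)
out_a, _ = stacked(x)                           # out_a: (5, 3, 20)

# manual stacking: feed the output of the first LSTM into the second
lstm1 = nn.LSTM(input_size=10, hidden_size=20)
lstm2 = nn.LSTM(input_size=20, hidden_size=20)
out1, _ = lstm1(x)
out_b, _ = lstm2(out1)                          # out_b: (5, 3, 20)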
st101610
I learned some feature maps from a network and fed them to a softmax to normalize the values to [0, 1], but I found that all the resulting values are small; the maximum value is only 0.1. I want to perform min-max normalization on the resulting features, i.e., map the maximum of the resulting feature map to 1 and the minimum to 0. But I am not sure whether such min-max normalization will break the gradient backward pass, because it computes maximum and minimum values, which are not differentiable everywhere. Could anybody give me some suggestions?
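Edit: a tiny check (sketch below) suggests that min/max themselves do not break backward; autograd routes the gradient to the elements that attain the min/max, similar to max pooling. I would still appreciate confirmation that this is a sound thing to do:

import torch

x = torch.randn(4, 8, requires_grad=True)        # a small stand-in feature map

x_min = x.min()
x_max = x.max()
x_scaled = (x - x_min) / (x_max - x_min + 1e-8)  # values now span [0, 1]

x_scaled.sum().backward()
print(x.grad)                                    # gradients are populated, no error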
st101611
While writing code I noticed that when I use just one GPU, the single-GPU setting model = model.cuda() works fine. Also, the multi-GPU setting model = nn.DataParallel(model, device_ids=[0]).cuda() works fine too. So what's the difference between using the multi-GPU setting and the single-GPU setting, e.g. memory usage, computation performance…?
st101612
Solved by deepmo in post #2.
st101613
Maybe I found the answer, after looking at the source code of nn.DataParallel() (version 0.4.1):

def __init__(self, module, device_ids=None, output_device=None, dim=0):
    super(DataParallel, self).__init__()
    if not torch.cuda.is_available():
        self.module = module
        self.device_ids = []
        return
    if device_ids is None:
        device_ids = list(range(torch.cuda.device_count()))
    if output_device is None:
        output_device = device_ids[0]
    self.dim = dim
    self.module = module
    self.device_ids = device_ids
    self.output_device = output_device
    _check_balance(self.device_ids)
    if len(self.device_ids) == 1:
        self.module.cuda(device_ids[0])

So there is no difference in performance whether we use the multi-GPU setting or the single-GPU setting, because the multi-GPU setting falls back to the single-GPU setting when len(device_ids) == 1.
st101614
(PyTorch 0.4) How does one apply a manual dropout layer to a packed sequence (specifically in an LSTM on a GPU)? Passing the packed sequence (which comes from the lstm layer) directly does not work, as the dropout layer doesn't know quite what to do with it and returns something that is not a packed sequence. Passing the data of the packed sequence seems like it should work, but results in the attribute error shown below the code sample. Perversely, I can make this an inplace operation (again, on the data directly, not the full packed sequence) and it technically works (i.e., it runs) on the CPU, but gives a warning on the GPU that the inplace operation is modifying a needed gradient. So: Are the different behaviors between CPU and GPU expected? What is the overall correct way to do this on a GPU? What is the overall correct way to do this on a CPU?

def __init__(self, ...):
    super(Model1, self).__init__()
    ...
    self.drop = torch.nn.Dropout(p=0.5, inplace=False)

def forward(self, inputs, lengths):
    pack1 = nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=True)
    out1, self.hidden1 = self.lstm1(pack1, (self.hidden1[0].detach(), self.hidden1[1].detach()))
    out1.data = self.drop(out1.data)

AttributeError: can't set attribute
st101615
Does anyone use dropout with packed sequences? I have a tentative workaround for this, but I am very curious to know what the standard PyTorch way of doing this is, and what is going on with the different behaviors on GPU and CPU. That's the sort of thing that can really make you question the validity of your results.
st101616
I usually create a new packed sequence when I apply an op (reusing the old batchsizes). The docs tell you not to, but it works just fine. Note, however, that dropout for sequences is something where there are several options which - depending on whom you ask - work to varying degree (e.g. “variational dropout”) Best regards Thomas
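P.S. The pattern I mean looks roughly like this (a minimal sketch):

import torch
import torch.nn as nn
from torch.nn.utils.rnn import PackedSequence, pack_sequence

drop = nn.Dropout(p=0.5)
packed = pack_sequence([torch.randn(3, 4), torch.randn(2, 4)])

# build a new PackedSequence from the dropped-out data,
# reusing the old batch_sizes
packed_dropped = PackedSequence(drop(packed.data), packed.batch_sizes)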
st101617
Thomas, Thanks for the response, and your point about dropout styles is well taken. Can I prevail on you for a code snippet? All I am trying to do is add dropout in the simplest possible way, between the layers after the activations, and I had already hit on the idea of packing, running the LSTM, padding, running the dropout, etc. This seems similar to your idea of creating new packed sequences. However, I have a three-test benchmark, which aims to learn an identity function of complex multi-coefficient data:

1. Run a single-layer LSTM network (no dropout layer)
2. Run a two-layer LSTM network (no dropout layer)
3. Run a two-layer LSTM network (dropout layer between L1 and L2, dropout set to 0, i.e., deactivated)

What I see in cases 1 and 2 is the network quickly learning to output what it gets in, while in case 3 I get substantially degraded performance. It never learns to mimic the input data at all. What I would expect, though, is effectively identical performance between cases 2 and 3, up to the shuffling of minibatches in my standard implementations. My best guess is that I've somehow broken the gradient flow, but I can't see how, or where, or how to fix it.
st101618
I'm implementing it as follows, where h1_kernel, c1_kernel, etc. are hidden and cell state kernels, for learning initial hidden states. (I prefer to learn kernels so that I can easily change batch sizes later; the full hidden and cell states are just repetitions of the learned kernels.)

def forward(self, inputs, lengths, batch_size):
    self.h1, self.c1 = self.init_hidden(batch_size, self.h1_kernel, self.c1_kernel)
    self.h2, self.c2 = self.init_hidden(batch_size, self.h2_kernel, self.c2_kernel)
    pack1 = nn.utils.rnn.pack_padded_sequence(inputs, lengths, batch_first=True)
    out1, _ = self.lstm1(pack1, (self.h1, self.c1))
    pad1 = nn.utils.rnn.pad_packed_sequence(out1, batch_first=True)[0]
    drop1 = self.drop(pad1.data)
    pack2 = nn.utils.rnn.pack_padded_sequence(drop1, lengths, batch_first=True)
    out2, _ = self.lstm2(pack2, (self.h2, self.c2))
    pad2 = nn.utils.rnn.pad_packed_sequence(out2, batch_first=True)
    dense_out = self.dense(pad2[0])
    pack_dense = nn.utils.rnn.pack_padded_sequence(dense_out, lengths, batch_first=True)
    pad_dense = nn.utils.rnn.pad_packed_sequence(pack_dense, batch_first=True)

In this case, the dropout is set elsewhere to be 0, i.e., present but deactivated. If I remove the dropout altogether (and adjust the rest of the forward accordingly), the observed behavior in training is significantly (and repeatably) different. Absent dropout works better than deactivated dropout. Can anyone please shed some light on this? Why does this happen? What is the correct way to do this if I want actual non-zero dropout?
st101619
The solution appears to be, “Define the dropout layer as in_place,” which I leave here for posterity.
st101620
In case of ambiguity for how to use inplace:

a_packed_seq = torch.nn.utils.rnn.pack_sequence([torch.randn(3, 1), torch.randn(2, 1), torch.randn(1, 1)])
print(a_packed_seq)
dropout_layer = torch.nn.Dropout(p=0.999, inplace=True)
dropout_layer(a_packed_seq[0])
print(a_packed_seq)
st101621
Hi, now I have a data matrix X, say X = [1,2,3; 4,5,6; 7,8,9], and I also have a binary index matrix Y, say Y = [0,1,0; 1,0,1; 0,0,0]. I want to know how I can efficiently compute torch.sum(X*Y) by exploiting the fact that Y is a binary matrix.
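To make the question concrete, here are the two variants I can think of (a small sketch); is there anything smarter than masked_select here?

import torch

X = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
Y = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 0., 0.]])

s1 = torch.sum(X * Y)                       # the straightforward version
s2 = X.masked_select(Y > 0).sum()           # only touch the selected entries
print(s1, s2)                               # both give the same result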
st101622
According to the following link it seems there is a commit made as recently as 20 days back. Not sure if it is official: github.com/pytorch/pytorch, "Bicubic interpolation for nn.functional.interpolate" by driazati, 25 Jul 18 (5 commits, 29 files changed, 827 additions, 114 deletions).
st101623
I'm implementing DDPG, where the target is computed from a target network. I define the mean squared loss as below: loss = F.mse_loss(self.critic_main(states, actions), target). However, I don't know whether loss.backward() will compute gradients of the loss with respect to the parameters that produced target. Should I call detach on target in advance to avoid redundant computation?
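To check my own understanding I put together this toy sketch (the Linear layers are just stand-ins for the real critics); with .detach() the target network receives no gradients at all. Is that the right way to think about it?

import torch
import torch.nn as nn
import torch.nn.functional as F

critic_main = nn.Linear(4, 1)     # stand-ins for the real critic networks
critic_target = nn.Linear(4, 1)

x = torch.randn(8, 4)             # stand-in for the (states, actions) features
target = critic_target(x)         # comes out of the target network's graph

loss = F.mse_loss(critic_main(x), target.detach())   # cut the target branch
loss.backward()

print(critic_main.weight.grad is None)     # False: main critic got gradients
print(critic_target.weight.grad is None)   # True: target network untouched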
st101624
I'm new to PyTorch. I see some code which always calls .data to indirectly update a tensor; an example is the moving average for the target network in DQN:

target_param.data.copy_(tau*local_param.data + (1.0-tau)*target_param.data)

but I also notice that some code directly updates tensors inside with torch.no_grad(), such as updating weights in a network:

with torch.no_grad():
    w -= w.grad

What's the difference between these two update methods for tensors?
st101625
If you use .data autograd doesn’t track these changes and you might end up with wrong gradients, as you are modifying the underlying data. If you want to update a parameter like weight, you should use the new with torch.no_grad() op, as it’s generally not advised to use .data.
st101626
Based on what you said, it seems that .data is the same as the tensor itself. I want to know whether it is still necessary to use .data at all, since PyTorch 0.4 has deprecated Variable?
st101627
I wouldn’t use .data unless it’s really necessary and I know exactly, what I’m doing. Most use cases will work just fine using torch.no_grad().
st101628
Thanks, I just want to make sure. Does it mean that I could use torch.no_grad() instead in my first example? i.e.

with torch.no_grad():
    target_param = tau*local_param + (1-tau)*target_param
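Concretely, I mean something like this (a sketch with stand-in Linear layers), using the in-place copy_ so the registered parameters themselves get updated instead of just rebinding the Python name:

import torch
import torch.nn as nn

local_net = nn.Linear(4, 2)       # stand-ins for the real networks
target_net = nn.Linear(4, 2)
tau = 0.01

with torch.no_grad():
    for target_param, local_param in zip(target_net.parameters(),
                                         local_net.parameters()):
        # copy_ keeps the same Parameter object registered in the module
        target_param.copy_(tau * local_param + (1.0 - tau) * target_param)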
st101629
I have been trying to install the fastai library using pip install git+https://github.com/fastai/fastai.git. The installation process gave the following error messages. Even when I tried to install torch first and then ran the above git command again, the same error messages still appeared. How can I solve this problem? Thanks.

Collecting torch<0.4 (from fastai)
  Using cached https://files.pythonhosted.org/packages/5f/e9/bac4204fe9cb1a002ec6140b47f51affda1655379fe302a1caef421f9846/torch-0.1.2.post1.tar.gz
  Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Users\shuxi\AppData\Local\Temp\pip-install-oe2ba4vx\torch\setup.py", line 11, in <module>
        raise RuntimeError(README)
    RuntimeError: PyTorch does not currently provide packages for PyPI (see status at https://github.com/pytorch/pytorch/issues/566).
    Please follow the instructions at http://pytorch.org/ to install with miniconda instead.
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in C:\Users\shuxi\AppData\Local\Temp\pip-install-oe2ba4vx\torch\
st101630
I’m not familiar with the fastai installation, but somehow PyTorch 0.1.2 is cached and being used. Could you try to ignore the cache with: pip install git+https://github.com/fastai/fastai.git --no-cache-dir
st101631
I am also encountering the same issue. --no-cache-dir did not solve my problem. Do you have any other suggestions?
st101632
As I’m really not familiar with the fastai wrapper, maybe @jphoward might give you some advice.
st101633
Hi, I'm building my own workstation. Recently I got a 1080 Ti cheap (I know I should wait till the end of the month, but the deal was so good). Being a person who has used servers all his life, I don't have a workstation to use this thing in. I'm mainly looking at the Xeon line of CPUs, and a set of CPU, motherboard and RAM for DDR4 costs significantly more than an equivalent one with just DDR3 (which is also an earlier version of the same Xeon CPU). I'd mainly be using the workstation to train detection and siamese networks. Is a saving of $400 USD worth it?
st101634
I am trying to use nn.utils.rnn.pack_sequence to provide input to an LSTM.

a = torch.Tensor([1, 2, 3])
b = torch.Tensor([4, 5])
c = torch.Tensor([6])
packed = rnn_utils.pack_sequence([a, b, c])

The above (pulled straight from the docs) seems to run just fine and returns the following:

PackedSequence(data=tensor([1., 4., 6., 2., 5., 3.]), batch_sizes=tensor([3, 2, 1]))

The problem is that despite the documentation stating you can provide an rnn/lstm/gru a PackedSequence and linking to the pack_sequence util, I cannot get it to work. Creating a simple rnn layer -- nn.RNN(3, 3) -- and supplying the above packed sequence yields the following error:

    124                 raise RuntimeError(
    125                     'input must have {} dimensions, got {}'.format(
--> 126                         expected_input_dim, input.dim()))
    127             if self.input_size != input.size(-1):
    128                 raise RuntimeError(

RuntimeError: input must have 2 dimensions, got 1

What am I doing wrong here?
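Edit: giving each sequence an explicit feature dimension (and matching input_size) seems to make it run; a sketch:

import torch
import torch.nn as nn
import torch.nn.utils.rnn as rnn_utils

# each time step has to be a feature vector, so add a feature dim of size 1
a = torch.Tensor([1, 2, 3]).unsqueeze(1)   # shape (3, 1)
b = torch.Tensor([4, 5]).unsqueeze(1)      # shape (2, 1)
c = torch.Tensor([6]).unsqueeze(1)         # shape (1, 1)

packed = rnn_utils.pack_sequence([a, b, c])
rnn = nn.RNN(input_size=1, hidden_size=3)  # input_size matches the feature dim
output, h_n = rnn(packed)                  # runs; output is a PackedSequence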
st101635
By “map reduce”, let’s say I have three parameters A={set of tensors}, B={set of tensors}, and C={set of tensors}. There may be tens of thousands or more combinations of these. Now I want to perform some custom computation function (i.e. “map”) between specific combinations of these parameters, and some input data, say { (x1, y1, z1), (x2, y2, z3), … }. The results of these computations, will be aggregated/mapped/associated back (i.e. “reduce”) based on the parameters A, B, C. The straightforward way is to use a nested for loop. However I’d like to be able to perform this efficiently, using GPU acceleration where possible, since each particular computation is independent from others. Thank you in advance.
st101636
Hi, I have a tensor X, which is of shape batch_size x number_of_sequence x hidden_size, e.g. 10 x 253 x 768. This tensor X is the output of an LSTM. There is another tensor Y, of shape batch_size x number_of_sequence x embedding_size, e.g. 10 x 253 x 300. This tensor Y is the output from an Embedding. I need to work with these two tensors X and Y and feed them to an attention network to match the sequence Y to each element in X. I need a bit of help as to which operations would be better to combine X and Y. I mean, would torch.cat((X, Y), dim=2) be a good idea?
st101637
To match Y to each element in X, they both should be having same dimension. Don’t you think so? i.e., either 768 or 300.
st101638
Thank you for your reply! The dimensions of these two tensors X and Y are different, and in my case I want the dimension of X. So what I can do is put Y through an LSTM and get it out with the same dimension as X. Then I can do a point-wise addition between X and Y to get a unified weighted tensor. Can you share whether point-wise addition would be a good operation in this case, or are there other, better operations?
st101639
I have different types of attention mechanisms (one-directional, bi-directional etc.), similar to arXiv:1502.03044. Mostly, they create an affinity matrix of size M x N (M = sequence length of X, N = sequence length of Y) by a dot (vector) product between X & Y and use this affinity matrix to summarise X and/or Y.
st101640
Hey there! I am a bit confused. How would you do a dot product between two tensors with different dimensions? Can you please share some code with me? For example:

x = torch.randn(2, 3, 10)  # batch_size X number_of_sequence X hidden_size
y = torch.randn(2, 3, 5)   # batch_size X number_of_sequence X embedding_size

and I need the output to also be of shape (2, 3, 10). How can I do this?
st101641
Please let me elaborate a bit more. I am working on a question-answering task. Instead of a CNN, the input comes from a word embedding. So, I have two tensors. An output from an LSTM (in the attached figure, this is the output from the LSTM in 'First attention' to x2_1) that holds the representations of: 1.1 the input words (X_1) and other features; 1.2 the question embedding on the input words. Let's say this tensor is hidden_docs. The shape of this tensor is batch_size X sequence_length X hidden_size, for example hidden_docs = tensor(10, 253, 768). X_1 is the input words representation. Let's say this tensor is x1_emb. The shape of this tensor is batch_size X sequence_length X embedding_size, for example x1_emb = tensor(10, 253, 300). I need to project x1_emb onto hidden_docs to get a unified representation of hidden_docs and x1_emb. This representation then needs to be fed into x2_1 (in the 'Attention again'). Currently, since the shapes are different, I am passing x1_emb through an LSTM so that I get the shape batch_size X sequence_length X hidden_size, and then I am doing a point-wise addition between hidden_docs and x1_emb before passing the output to x2_1. Now here I am a bit confused: is point-wise addition between these two tensors the right operation? Are there other, better operations to capture a better joint representation of hidden_docs and x1_emb?
st101642
I understand. I’m not sure if I could say what operations are right and wrong. In general, I have seen papers doing addition as well as dot product based attention mechanism. You can only evaluate it empirically I guess.
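If it helps, here is a rough sketch of both options (the sizes are from your example; the Linear projection and the softmax/scaling details are just placeholder choices):

import torch
import torch.nn as nn
import torch.nn.functional as F

B, T, Hx, Hy = 10, 253, 768, 300
X = torch.randn(B, T, Hx)               # LSTM output (hidden_docs)
Y = torch.randn(B, T, Hy)               # embedding output (x1_emb)

proj = nn.Linear(Hy, Hx)                # bring Y into X's dimension first
Yp = proj(Y)                            # (B, T, Hx)

# option 1: additive merge (point-wise sum after projection)
merged_add = X + Yp                     # (B, T, Hx)

# option 2: dot-product attention via an affinity matrix (B, T, T)
affinity = torch.bmm(X, Yp.transpose(1, 2)) / (Hx ** 0.5)
weights = F.softmax(affinity, dim=-1)
merged_dot = torch.bmm(weights, Yp)     # (B, T, Hx), Y summarised for each X step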
st101643
Hello: I am using PyTorch 0.3.1. My target tensor A has size (batch_size, sequence_len, node_dimension) = 512x40x5, and my index tensor has size (batch_size, 2, 20), i.e. two index rows (x, y) per batch element:

(0 ,.,.) =
   0   8  16  ...  21  29  37
   7  15  23  ...  22  30  38

(1 ,.,.) =
  16   8   0  ...  29  21  37
  23  15   7  ...  30  22  38

My question is: how can I gather between the two index values per batch element, as follows, to get the result B?

(0 ,.,.) = A[:, 0:7, :], A[:, 8:15, :], ..., A[:, 37:38, :]
(1 ,.,.) = ...

Some notes on the index and results: x < y (always) and y - x >= 1; the length of one gathered result (B) is variable and always satisfies 2 <= B <= sequence_len. I have tried a for-loop; the computation time is about ~3.5 s for batch size 512. I want to know if there is a better way to do this. Thanks!
st101644
After many operations, the results are below (I am trying to reproduce some published results). What I find is that FMA > parallel > serial in terms of computation precision. Does this mean that GPU mode is more precise than CPU mode? But judging from the results, CPU mode's accuracy is higher. What can I do to narrow the gap between GPU mode and CPU mode? In addition, what can I do to reduce precision loss?

# cpu mode, device = torch.device("cpu")
-104.9049, -2.0102, -56.8038,
# gpu mode, device = torch.device("cuda")
-109.6338, -16.2780, 44.1723,
st101645
Hey, I'm working with PyTorch v0.4.1 and was working with some toy example for Dropout and came across this behaviour which I could not explain:

>>> d = torch.nn.Dropout()
>>> inp = torch.tensor([1., 2., 3., 4., 5., 6.])
>>> d(inp)
tensor([ 2.,  4.,  0.,  0.,  0., 12.])

Shouldn't Dropout() simply (and only) zero out 50% of the tensor values? Or am I misreading something? Why are the remaining tensor values getting doubled? Thanks for the help!
st101646
Since dropout has different behavior during training and test, you have to scale the activations sometime. Imagine a very simple model with two linear layers of size 10 and 1, respectively. If you don’t use dropout, and all activations are approx. 1, your expected value in the output layer would be 10. Now using dropout with p=0.5, we will lose half of these activations, so that during training our expected value would be 5. As we deactivate dropout during test, the values will have the original expected value of 10 again and the model will most likely output a lot of garbage. One way to tackle this issue is to scale down the activations during test simply by multiplying with p. Since we prefer to have as little work as possible during test, we can also scale the activations during training with 1/p, which is exactly what you observe.
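You can see both behaviors directly in a quick sketch:

import torch

d = torch.nn.Dropout(p=0.5)
inp = torch.ones(6)

d.train()
print(d(inp))   # surviving entries are scaled by 1/p = 2, the rest are zeroed

d.eval()
print(d(inp))   # dropout is a no-op at evaluation time, values stay at 1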
st101647
Interesting. Thanks a lot for the reply. It does make sense. Is this behaviour documented somewhere in the docs? If not, should it be? Or is it the general behaviour of dropout itself? If needed, I can help with that.
st101648
It's the general behavior of dropout and mentioned in the paper. Maybe a short notice on the scaling might help with the understanding.
st101649
I want to use multiple conditions in the torch.where function, but it seems to have some problems with the logical operation.

x = torch.randn(3, 2)
y = torch.ones(3, 2)
torch.where(x > 0 or x < 0.1, x, y)
# Error: bool value of Tensor with more than one value is ambiguous
torch.where(x > 0 | x < 0.1, x, y)
# Error: unsupported operand type(s) for |: 'int' and 'Tensor'

There is no description of logical operations in the PyTorch docs, so I want to know if this can be implemented or if there is an alternative way to do it. Any help would be much appreciated.
st101650
Solved by John_Smith in post #2.
st101651
You need to put the conditions in parentheses. This is because '|' has higher precedence than the comparison operators. That's why you got the error message saying "unsupported operand type(s) for |: 'int' and 'Tensor'".

x = torch.randn(3, 2)
y = torch.ones(3, 2)
torch.where((x > 0) | (x < 0.1), x, y)
st101652
The problem is described in detail here. Please let me know if you think I should copy the description here, for a self-contained topic.
st101653
Hi! I've trained a pretty cool model I want to share, and I'll do that via a server (Flask). Maybe I'm overthinking it, but: let's say that my model takes about 1 GB of RAM (net = torch.load(...)). What happens if, say, 50 people are trying to use my model at the same time, and my server has only 8 GB of RAM? (So I can't actually load the model for each different thread, and let's say that making my users wait in a queue is really a last resort; I will do that only if nothing else works at all.) The actual input to the model is an image, so is it reasonable to assume that it will be very rare for two different users' POST requests to hit the model at exactly the same time? If that assumption is reasonable, how bad is it to share the model between the threads (say, as a global object of the entire process)? What's the best approach? Any other tips/further directions I should pursue?
st101654
If I train using k-fold, where should I put net.train()? only once for each model? in every fold? or in every epoch? does it give different performance? Thank You
st101655
Solved by GuillaumeLeclerc in post #2.
st101656
From what I know, train() enables some modules like dropout, and eval() does the opposite. So I would say: before you start training your model call train(), and then call eval() when you are done. The classic workflow would be:

1. call train()
2. epoch of training on the training set
3. call eval()
4. evaluate your model on the validation set
5. repeat
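In code, that workflow for k-fold would look roughly like this (a sketch; make_model, folds, num_epochs, train_one_epoch and evaluate are placeholders for your own code):

import torch

for train_loader, val_loader in folds:          # one (train, val) split per fold
    model = make_model()                        # fresh model for every fold
    for epoch in range(num_epochs):
        model.train()                           # enable dropout, batchnorm updates, ...
        train_one_epoch(model, train_loader)
        model.eval()                            # switch to evaluation behaviour
        with torch.no_grad():
            evaluate(model, val_loader)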
st101657
Hey all, I would like to ask how the hidden states produced by a Bidirectional RNN are concatenated. If I’m not mistaken, the output parameter of a PyTorch RNN is of shape (N, T, 2*H) given, that the ‘batch_first’ and ‘bidirectional’ parameters have been set to True [N: number of examples, T: number of time steps, H: cell size]. Are the two 3-D tensors concatenated on the last axis? Or is each pair of tensors computed on a given time step “put” next to each other? I would like to merge the hidden states of the Bidirectional RNN (to produce an output of shape (N, T, H)). If I’m correct there is a difference between torch.split() and torch.chunk() functions. Thank you for your help in advance! Bence
st101658
You're not mistaken – the output parameter of all the PyTorch recurrent units (assuming batch_first=True) is, when used bidirectionally: (num_examples, seq_len, 2 * hidden_dim). The two 3D tensors are actually concatenated on the last axis, so to merge them, we usually do something like this:

output = output[:, :, :self.hidden_dim] + output[:, :, self.hidden_dim:]

You might also try averaging them (by dividing the merged hidden state by 2). Alternately (and this is common) you can just use the concatenated hidden state as is (I assume you're using the hidden state as a context vector to condition a decoder?) – this is quite common, I think. This discussion is handy, and helped me out when I couldn't quite figure out the documentation on PyTorch's bidirectional RNNs. Lastly, if you want a fixed-length summary of the hidden state, you can apply L2 pooling to the whole thing. I believe this is the component-wise root-mean-square of all the hidden states – in other words, component-wise square each hidden state at each time step; average them all together (i.e., sum them and divide by the sequence length); take the component-wise square root of the result.
st101659
Hello, I read a similar topic on initializing the hidden layer in an RNN network. However, the answers are quite confusing to me. Right now I have the following code to initialize the hidden layer with zeros. Could you explain to me how to modify it so that it is initialized based on training? Thx, Matt

def initHidden(self):
    return torch.zeros((self.MiniBatchSize, self.HiddenNodes), dtype=torch.double)
st101660
Do you mean you want to treat your initial hidden state as a learnable parameter? Wrap it in nn.Parameter:

class RNN(nn.Module):
    def __init__(self, ...):
        ...
        h_0 = torch.zeros((self.MiniBatchSize, self.HiddenNodes), dtype=torch.double)
        self.h_0 = nn.Parameter(h_0)
        ...
st101661
You wouldn’t want to if you are using a stateful RNN at least, where you pass in the hidden state output from the previous minibatch.
st101662
How will I call the function? Previously I wrote:

hidden = model.initHidden()
outA, hidden = aModel(input, hidden)
st101663
Maybe I don’t call it anymore. Just use self.hidden internally and call outA=model(Input). I still need to do this detach call I think, between minibatches I would type model.hidden_0.detach()
st101664
Something like this perhaps?

def forward(self, data, hidden):
    if hidden is None:
        hidden = self.h_0
    ...
st101665
Could you review the entire code (pseudocode)?

class RNN(nn.Module):
    def __init__(self, ...):
        ...
        h_0 = torch.zeros((self.MiniBatchSize, self.HiddenNodes), dtype=torch.double)
        self.h_0 = nn.Parameter(h_0)
        self.Lin1 = nn.Linear(InNodes, NumNodes)
        self.Lin2 = nn.Linear(InNodes, NumNodes)

    def forward(self, data, hidden):
        if hidden is None:
            hidden = self.h_0
        aDat = torch.cat((data, hidden))
        kOut = self.Lin1(aDat)
        hidden = self.Lin2(aDat)
        return kOut, hidden

aModel = RNN(...)
hidden = None
while True:
    aData, aTruth = Minibatch
    for count in range(numSteps):
        kOut, hidden = aModel(aData[count], hidden)
    loss = criterion(kOut, aTruth)
    hidden = hidden.detach()
st101666
I’ve formatted your code. You can add code with three backticks before and after the code block (```).
st101667
I wrote some code to freeze part of my model:

for param in model.network.reasoner.parameters():
    param.requires_grad = False

Then for the optimizer, I wrote:

parameters = [p for p in self.network.parameters() if p.requires_grad]
self.optimizer = optim.SGD(parameters, lr=self.args.learning_rate,
                           momentum=self.args.momentum,
                           weight_decay=self.args.weight_decay)

However, when I call loss.backward() I get the following error:

RuntimeError: inconsistent range for TensorList output

When I do not freeze the model, the whole model works fine. Any idea why this error happens?
st101668
I have multiple datasets, each with a different number of images (and different image dimensions) in it. In the training loop I want to load a batch of images randomly from among all the datasets, but so that each batch only contains images from a single dataset. For example, I have datasets A, B, C, D and each has images 01.jpg, 02.jpg, ..., n.jpg (where n depends on the dataset), and let's say the batch size is 3. In the first loaded batch, for example, I may get images [B/02.jpg, B/06.jpg, B/12.jpg], in the next batch [D/01.jpg, D/05.jpg, D/12.jpg], etc. So far I have considered the following:

1. Use a different DataLoader for each dataset, e.g. dataloaderA, dataloaderB, etc., and then in each training loop randomly select one of the dataloaders and get a batch from it. However, this will require a for loop, and for a large number of datasets it would be very slow since it can't be split among workers to run in parallel.
2. Use a single DataLoader with all of the images from all datasets together, but with a custom collate_fn which will create a batch using only images from the same dataset. (I'm not sure how exactly to go about this.)

I have looked at the ConcatDataset class, but from its source code it looks like if I use it and try getting a new batch, the images in it will be mixed up from among different datasets, which I don't want. What would be the best way to do this? Thanks!
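P.S. One direction I am considering (a rough sketch, not sure it is the idiomatic way) is a custom batch_sampler over a ConcatDataset that only ever draws indices from one underlying dataset per batch:

import random
from torch.utils.data import ConcatDataset, DataLoader
from torch.utils.data.sampler import Sampler

class SingleDatasetBatchSampler(Sampler):
    """Yields index batches for a ConcatDataset so that every batch
    contains samples from exactly one of the underlying datasets."""

    def __init__(self, dataset_sizes, batch_size):
        self.batch_size = batch_size
        self.groups, start = [], 0
        for size in dataset_sizes:              # index range per dataset
            self.groups.append(list(range(start, start + size)))
            start += size

    def __iter__(self):
        batches = []
        for group in self.groups:
            random.shuffle(group)
            batches += [group[i:i + self.batch_size]
                        for i in range(0, len(group), self.batch_size)]
        random.shuffle(batches)                 # mix the datasets batch-wise
        return iter(batches)

    def __len__(self):
        return sum((len(g) + self.batch_size - 1) // self.batch_size
                   for g in self.groups)

# usage sketch (dataset_a .. dataset_d are placeholders for my datasets):
# datasets = [dataset_a, dataset_b, dataset_c, dataset_d]
# concat = ConcatDataset(datasets)
# sampler = SingleDatasetBatchSampler([len(d) for d in datasets], batch_size=3)
# loader = DataLoader(concat, batch_sampler=sampler)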
st101669
When getting a state dict using <module>.state_dict() the dictionary references the internal parameters of the model. Meaning, once the model changes, the dict will also change. Usually this doesn’t really impact things as most people will serialize the state to disk straight away. If you however keep copies of the state dict in memory you won’t be able to load from these as their state is always the same as the networks state. I ran into this while implementing early stopping and it took me a while to figure out. Loading the state_dict, using load_state_dict, (obviously) just had no effect. In the interest of making the solution more discoverable I figured I’d describe my troubles here. Do you think this would warrant a mention in the official documentation? If so, should I just create a Github issue?
st101670
Solved by ptrblck in post #4.
st101671
Since the state_dict keeps references to your model’s parameters, you should .clone it, if you need to restore it. As this will use more memory, it’s not the default behavior. Maybe it’s a good idea to mention it in the docs.
st101672
Seeing that what is returned is an OrderedDict (at least in 0.4.0), you can't just call .clone() on the dict but would either have to do it for each layer: {k: v.clone() for k, v in net.state_dict().items()} or use copy.deepcopy.
st101673
You are right. Sorry for not being clear enough. Here is another small code example:

old_state_dict = {}
for key in model.state_dict():
    old_state_dict[key] = model.state_dict()[key].clone()
st101674
I am looking to try different loss functions for a hierarchical multi-label classification problem. So far, I have been training different models or submodels (e.g., a simple MLP branch inside a bigger model) that either deal with different levels of classification, yielding a binary vector. I have been using BCEWithLogitsLoss and summing all the losses existing in the model before backpropagating. I am considering trying other losses like MultiLabelSoftMarginLoss and MultiLabelMarginLoss. What other loss functions are worth to try? hamming loss perhaps or a variation? Is it better to sum all the losses and backpropagate or do multiple backpropagations? Thanks in advance!
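P.S. On the second question, my current understanding (sketch below) is that summing the losses and calling backward() once accumulates the same gradients as backpropagating each loss separately, since gradients add linearly; the single call just saves work. I would appreciate a sanity check on that.

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss()

# head_outputs / head_targets stand in for the per-level predictions and labels
head_outputs = [torch.randn(8, 5, requires_grad=True),
                torch.randn(8, 3, requires_grad=True)]
head_targets = [torch.randint(0, 2, (8, 5)).float(),
                torch.randint(0, 2, (8, 3)).float()]

loss = sum(criterion(out, tgt) for out, tgt in zip(head_outputs, head_targets))
loss.backward()          # one backward pass, gradients from all heads accumulate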
st101675
How can I keep part of my model?

model = torchfcn.models.FCN32s(n_class=21)
checkpoint = torch.load("./logs/MODEL-fcn32s/checkpoint.pth.tar")
model.load_state_dict(checkpoint['model_state_dict'])
model = model[0:10]  # to keep up to the 10th layer

What about the forward pass? Do I have to redefine it?
st101676
It depends on the model. If you are slicing an nn.Sequential model, you can just keep your desired layers and wrap them in a new nn.Sequential instance:

model = nn.Sequential(
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU(),
    nn.Linear(10, 10),
    nn.ReLU(),
)
model = nn.Sequential(*list(model.children())[:2])
x = torch.randn(1, 10)
output = model(x)

In case of a custom model with its own forward, you would need to rewrite this method.
st101677
Hey, I’m new to Pytorch. I’m trying to implement a compressed network where I switch off a neuron after every few epochs throughout training based on some threshold criteria(something similar to dropout). I’m not sure how to implement this in forward if that’s where masking of a neuron should be done. Thanks in advance.
st101678
I think this should be done during the forward pass. Switching off means setting its value to zero? If so, you would need a mask matrix containing zeros and ones, and you would have to multiply it with the outputs. The only thing is that you would have to update this mask by hand, as you cannot compute suitable gradients for it.
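A minimal sketch of what I mean (the sizes and the threshold rule are just placeholders):

import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    def __init__(self, in_features, hidden, out_features):
        super(MaskedMLP, self).__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, out_features)
        # a buffer, not a Parameter: the optimizer must not touch the mask
        self.register_buffer('mask', torch.ones(hidden))

    def forward(self, x):
        h = torch.relu(self.fc1(x))
        h = h * self.mask            # switched-off neurons contribute zero
        return self.fc2(h)

model = MaskedMLP(10, 32, 2)
out = model(torch.randn(4, 10))

# every few epochs, switch off neurons by hand, e.g. with some threshold rule:
with torch.no_grad():
    model.mask[model.fc1.weight.abs().sum(dim=1) < 0.5] = 0.0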
st101679
I was wondering if there was some code out of the box for doing binary classification for MNIST or CIFAR10?
st101680
MNIST and CIFAR10 both have 10 categories so I’m not sure how a binary classifier is supposed to work.
st101681
It can work if you run your whole prediction algorithm 10 times, and each time you mark one of the categories as 1 and the rest as 0.
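For example, the relabelling for a single digit (say 3 vs. not-3) could look like this (a sketch using torchvision's target_transform):

import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# MNIST with binary targets: 1 if the digit is 3, else 0
binary_mnist = datasets.MNIST(
    './data', train=True, download=True,
    transform=transforms.ToTensor(),
    target_transform=lambda y: int(y == 3))

loader = DataLoader(binary_mnist, batch_size=64, shuffle=True)
images, targets = next(iter(loader))
print(targets[:10])        # a batch of 0/1 labels for the one-vs-rest classifier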
st101682
PCA consists of combinations of multiple linear operations, so the answer is YES. You can even backpropagate through PCA if you want.
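A quick sketch of that, using torch.svd (which, as far as I know, supports autograd):

import torch

X = torch.randn(100, 20, requires_grad=True)     # 100 samples, 20 features

Xc = X - X.mean(dim=0, keepdim=True)             # centre the data
U, S, V = torch.svd(Xc)                          # principal directions sit in V
k = 5
X_proj = Xc.matmul(V[:, :k])                     # project onto the top-k components

X_proj.sum().backward()                          # gradients flow back through the PCA
print(X.grad.shape)                              # torch.Size([100, 20])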
st101683
Hello all, I have successfully installed PyTorch using the conda command. Now I want to install PyTorch 0.4.1, CUDA 9.1 and cuDNN 7.0 on Ubuntu 14.04 with Docker. Has anyone done this successfully? Please give me some advice. Thanks.
st101684
Try checking this Docker file: https://hub.docker.com/r/floydhub/pytorch/tags/ [screenshot of the available tags]. You can upgrade some packages if you need to.
st101685
Yeah, after pulling the Docker image just update the version to 0.4.1 (hoping you don't have any update constraints).
st101686
Here is the output from torch.jit.trace on one of my models [screenshot of the trace]. Most of these are intuitive, but I couldn't understand the meaning of certain keywords. Do we have good documentation explaining what is in there? If not, could somebody help me understand what "Handle" and "uses" mean in this line?

%9 : Float(900, 100), %10 : Handle = ^Addmm(1, 1, False)(%3, %1, %7), uses = [[%11.i0], []];
st101687
Solved by Stanislav_Pidhorskyi in post #2.
st101688
You can do:

import torch.onnx
import torch.onnx.utils
...
trace, out = torch.jit.get_trace_graph(model, args=(input,))
trace = torch.onnx.utils._optimize_graph(trace.graph(), torch.onnx.OperatorExportTypes.ONNX)

This will give you a more human-readable trace with ONNX ops. Though, _optimize_graph is an internal function.
st101689
class DiceCoeff(Function):
    """Dice coeff for individual examples"""

    def forward(self, input, target):
        self.save_for_backward(input, target)
        self.inter = torch.dot(input.view(-1), target.view(-1)) + 0.0001
        self.union = torch.sum(input) + torch.sum(target) + 0.0001
        t = 2 * self.inter.float() / self.union.float()
        return t

    # This function has only a single output, so it gets only one gradient
    def backward(self, grad_output):
        input, target = self.saved_variables
        grad_input = grad_target = None
        if self.needs_input_grad[0]:
            grad_input = grad_output * 2 * (target * self.union + self.inter) \
                         / self.union * self.union
        if self.needs_input_grad[1]:
            grad_target = None
        return grad_input, grad_target


def dice_coeff(input, target):
    """Dice coeff for batches"""
    if input.is_cuda:
        s = torch.FloatTensor(1).cuda().zero_()
    else:
        s = torch.FloatTensor(1).zero_()

    for i, c in enumerate(zip(input, target)):
        s = s + DiceCoeff().forward(c[0], c[1])

    return s / (i + 1)

It is so much slower than when I use BCELoss that it is almost unusable.
st101690
Why do you define it as an autograd Function? The way you used it, you always create a new instance of your Function. Why not define it like this:

class DiceCoeff(torch.nn.Module):
    def __init__(self, eps=0.0001):
        super().__init__()
        self._eps = eps

    def forward(self, prediction, target):
        inter = torch.dot(prediction.view(-1), target.view(-1)) + self._eps
        union = prediction.sum() + target.sum() + self._eps
        return 2 * inter.float() / union.float()

and using it like this:

dice_crit = DiceCoeff()
total_dice_coeff = 0
single_dices = [dice_crit(_x, _y) for _x, _y in zip(prediction, target)]
for _dice in single_dices:
    total_dice_coeff += _dice
total_dice_coeff = total_dice_coeff / len(single_dices)

This way you only create the criterion once, autograd handles the gradient computation (in a very efficient way) and you don't have to create the s tensor yourself every time. From my experience custom loss functions are almost as fast as the native ones, if they are implemented as a torch.nn.Module. Can you test if this approach speeds up your training?

EDIT: Assuming prediction and target are minibatch tensors, you could do something like this:

class DiceCoeff(torch.nn.Module):
    def __init__(self, eps=0.0001):
        super().__init__()
        self._eps = eps

    def forward(self, prediction, target):
        flattened_pred = prediction.view(prediction.size(0), -1)
        flattened_target = target.view(target.size(0), -1)
        # torch.dot only works on 1-D tensors, so use an element-wise
        # product and a per-sample sum here
        inter = (flattened_pred * flattened_target).sum(-1) + self._eps
        union = flattened_pred.sum(-1) + flattened_target.sum(-1) + self._eps
        return (2 * inter.float() / union.float()).mean()

and simply pass the whole batches like this:

dice_crit = DiceCoeff()
total_dice_coeff = dice_crit(predictions, targets)

and you will directly get the mean dice for the current batch.
st101691
Thank you very, very much! It runs faster than before. Your guidance solved my problem perfectly!
st101692
I have a laptop with a 940 MX GPU. It’s not much but I use it for learning. I recently tried to install pytorch and it told me my GPU is too old. I read somewhere on this forum that installing it from source might still work, however I have no clue on how to do that. Could someone please explain the steps or link to some tutorial for doing so (I wasn’t able to find anything on this.).
st101693
You can find the build instructions here. Let me know if you encounter any problems.
st101694
I've been using PyTorch for a couple of months now and I'm really impressed with it! I would like to know if there is, or will be, something like https://github.com/pytorch/tnt/blob/master/torchnet/dataset/transformdataset.py natively in PyTorch. The problem I'm facing right now is that I want to split a Dataset using random_split and later decorate only the training set, like this:

dstrain, dsvalid = train_valid_split(fullset, valid_size=valid_size)
dstrain = TransformDataset(dstrain, transforms=my_transforms)

I think it would be really useful to add something like that class to torch.utils.data. What do you think? If you think it is a good idea, I would be glad to help. Thank you in advance!
st101695
Hi all, I've just installed: the latest version of Anaconda; PyTorch (based on the attached screenshot of the specs); and I'm running CUDA 9.2, Python 3.7, Windows 10. I've also checked in my "snowflakes" environment (via the Anaconda prompt) that pip3 & torchvision are on the list. Problem: when I open Jupyter and run from torchvision import transforms I get this error: ModuleNotFoundError: No module named 'torchvision'. Can someone please advise why this is the case and how to correct it, because I can't use torchvision for a project I'm working on. [screenshot of the PyTorch install specs] PS: When using Anaconda Navigator, I've gone to my environment and looked for torchvision (in the search packages box). In Anaconda there are no packages called torchvision. So why would PyTorch recommend installing it (see the screenshot above) if there is no PyTorch (or torchvision, etc.) package in Anaconda?
st101696
MattP: ModuleNotFoundError: No module named 'torchvision'

I understand your frustration; sometimes this happens when the conda environment does not get activated successfully. But here is one solution: install it from the Jupyter notebook itself. [screenshot of running the install inside the notebook]
st101697
Many thanks for your reply. I've run !pip3 install torchvision, though this is the error I got: [screenshot of the pip error].
st101698
jmandivarapu1: just try running only pip install torchvision just try running only pip install torchvision
st101699
Type "anaconda navigator" into your system search bar, then launch the "Anaconda command prompt" and run those commands. Before this, just type "conda" in the Windows cmd to double-check that conda is active by default; otherwise you need to add the path variable.