st116668
#4. Train the network
#This is when things start to get interesting. We simply have to loop over our data iterator,
#and feed the inputs to the network and optimize

```python
for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # wrap them in Variable
        inputs, labels = Variable(inputs), Variable(labels)

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.data[0]
        if i % 2000 == 1999:  # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
```

```
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
      7
      8 running_loss = 0.0
----> 9 for i, data in enumerate(trainloader, 0):
     10     # get the inputs
     11     inputs, labels = data

//anaconda/lib/python3.5/site-packages/torch/utils/data/dataloader.py in __next__(self)
    210                 self.reorder_dict[idx] = batch
    211                 continue
--> 212             return self._process_next_batch(batch)
    213
    214     next = __next__  # Python 2 compatibility

//anaconda/lib/python3.5/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
    237         self._put_indices()
    238         if isinstance(batch, ExceptionWrapper):
--> 239             raise batch.exc_type(batch.exc_msg)
    240         return batch
    241

RuntimeError: Traceback (most recent call last):
  File "//anaconda/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 41, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "//anaconda/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 110, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "//anaconda/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 110, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "//anaconda/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 90, in default_collate
    storage = batch[0].storage()._new_shared(numel)
  File "//anaconda/lib/python3.5/site-packages/torch/storage.py", line 111, in _new_shared
    return cls._new_using_filename(size)
RuntimeError: error executing torch_shm_manager at "//anaconda/lib/python3.5/site-packages/torch/lib/torch_shm_manager" at /Users/soumith/miniconda2/conda-bld/pytorch_1493757035034/work/torch/lib/libshm/core.cpp:125
```
st116669
Hello, I am trying to implement Dice loss function and I need to find indices where the model predicts target class (I have segmentation problem with 2 classes only and I want my loss to depend only on predicted class, not the background). When I add this line: input = torch.ge(input, 0.5).float() I get “RuntimeError: there are no graph nodes that require computing gradients”. When I remove this line, everything is great. Is it even possible to do this in PyTorch?
st116670
Thx for the advice. I have used torch.nn.Threshold and it looks like it works well.
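For reference, another common workaround is a "soft" Dice loss that skips the hard threshold entirely, so gradients can flow; a minimal sketch (assuming `input` holds predicted foreground probabilities and `target` is a binary mask of the same shape):

```python
import torch

def soft_dice_loss(input, target, eps=1e-6):
    # input: predicted foreground probabilities, target: {0, 1} mask, same shape
    input = input.contiguous().view(-1)
    target = target.contiguous().view(-1).float()
    intersection = (input * target).sum()
    return 1 - (2 * intersection + eps) / (input.sum() + target.sum() + eps)
```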
st116671
When will the next release come out? It has already been 2 months. Looking forward to learning some new features.
st116672
When the milestones are done: github.com/pytorch/pytorch (Tensors and Dynamic neural networks in Python with strong GPU acceleration).
st116673
such as:

```python
a = torch.rand(2,3,2)
b = torch.rand(1,3,2)
x = a + b
```

If I use numpy:

```python
a = torch.rand(2,3,2).numpy()
b = torch.rand(1,3,2).numpy()
x = a + b
```

numpy gets the correct result, but pytorch shows `RuntimeError: inconsistent tensor size at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/TH/generic/THTensorMath.c:831`. Sorry, I am new to pytorch. Thanks
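For what it's worth, a minimal workaround sketch for PyTorch versions without broadcasting is to expand the smaller tensor explicitly:

```python
import torch

a = torch.rand(2, 3, 2)
b = torch.rand(1, 3, 2)
x = a + b.expand_as(a)   # expand_as makes the shapes match without copying data
```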
st116674
Hello, this probably sounds quite vague, but I wonder if anyone has managed to train three nets using adversarial training? Here's the general algorithm:

[image: encoder_predictor_discriminator_training.jpg]

E, F and D are nets, with F and D being simple MLPs, and E is an encoder with an application-specific architecture. In the inner loop, E and F are trained co-operatively, and in the outer loop they are trained adversarially against D. The convergence/stability theory/proof is from a paper on a conditional adversarial architecture. The actual application is not so relevant, it's the solid proof that's interesting. I was simply wondering if anyone had managed to train such a three-headed beast? @smth, @tom, any thoughts on this one? You guys are expert adversarial training people who I thought I should ask first.
st116675
I haven't managed to train such a thing (but I didn't try). Seems very, very mildly interesting; let me know how it goes.
st116676
Awesome, thanks for the link. I had not seen it before, but I would not consider myself an expert (just yet). It certainly feels like a very different use of adversarial training: so you have a training set of (sample, source, label) and you want to produce an encoding e = e(sample) that keeps as much information about label as possible, but such that source is as independent from (encoding, label) as possible, and the idea is that this helps with classification when you have new sources. Interesting. Do you have a use case in mind? I wonder if it could be used for advanced style fingerprinting, too, when you switch source and label. Best regards, Thomas [edit: fixed the error Ajay pointed out below]
st116677
Hi Tom, Wow - that is a perfect summary - it took me nearly a full day's reading to come to the same level of understanding. Indeed in this case the purpose of the discriminator network D is to remove conditional dependencies on the sources (i.e. sleeping subject + measurement environment). Perhaps you meant e = e(sample) rather than e = e(source)? As the encoder is fed the sequence of spectrograms X = [x_1, x_2, …, x_t] in Omega_x, up to some time t, in the notation of the start of section 3, Model. At the moment my use/test case is pretty much the same as the original paper - but due to the generality of this domain adaptation/source conditional independence type of setup, I guess it could be applied to many use cases in general time-series classification and prediction? The minimum requirements would be, as you clearly summarised, (a time-series sample X, source ID s, instantaneous event labels y). So I guess there could be applications to claim modelling/event classification & prediction in insurance - the solid proof/performance guarantee of the paper makes it a good candidate? My guess is that due to the generality of this setup it could eventually be extended to extra modalities, by simply training more independent encoders, say a, b, c, on each of them, then concatenating the outputs of these encoders - and projecting this vector, e.g. E_canonical(X_a, X_b, X_c) = concatenate[E_a(X_a), E_b(X_b), E_c(X_c)], onto a latent embedding/manifold. The projection from this canonical space could then be fed into the predictor and discriminator networks as before. Some notion of how perhaps to attempt this is given here - Conditional generation of multi-modal data using constrained embedding space mapping. It's just an idea at the moment though. Perhaps it could again be very applicable in actuarial science, maybe as an alternative to classical Gaussian/Levy/Poisson process models - given enough data it should hopefully pick up notions of correlations both in time and across modalities? I'm not sure about the application to style fingerprinting, so maybe we could just try it and see? I have some test data which is similar to that used in this paper. The difference is that the available data uses accelerometer recordings (rather than radio frequency modulations) of sleep study participants, and it also has their "gold standard" labels from polysomnography (PSG). https://es.informatik.uni-freiburg.de/datasets/ichi2014 So converting this 3-channel accelerometer data to sequences of spectrograms is reasonably simple to do. After that the paper's encoder architecture could be applied, and its algorithm should be reproducible without any modifications - I hope. Hopefully in a few months' time I should have access to some SOTA clinical-grade wearable sensors, which record ECG, electrodermal activity, respiration rate/depth - generally multi-modal streaming bio-markers of sympathetic nervous system and cardio-respiratory activity - similar to the Verily Study Watch. I'm guessing something similar to that will be the new standard recording instrument in health insurance studies in a few years' time? Great to be chatting with you again. Best regards, Ajay
st116678
The other association that comes to my mind is that it seems like a generalised information bottleneck 4 - except that you are not necessarily trying to compress the sample, but lose mutual information between the sample and the source, so you want to minimize (I(Encoding, Source) - β I(Encoding, Target)) with β=1/λ or so. But the notation is too thick for me today…
st116679
Hi, @smth of course I’ll let you know how it goes - thanks a lot for the reply
st116680
Hi Thomas, thanks a lot for the insight - I hadn't thought about it in this way. It does sound right - we want to remove the effects of individual sources from a general representation of the samples (i.e. the encoder) and the predictor. The three-player game idea seems to be the easiest way to get a simple algorithm with an equilibrium/convergence proof. I don't know how to derive something like that from basic information theory - that's kind of why I like this paper. The only other references to adversarial training with three networks are in the domain adaptation literature, and also Triple Generative Adversarial Nets - it has code - but I don't understand the paper at the moment.
st116681
Hi Ajay, @AjayTalati looking at this some more, I wonder whether the guarantees are unusually strong in practice - if … have enough capacity and is trained to reach optimum seems like a pretty strong assumption (it would seem the WGAN equivalent would be not far away from “the discriminator has learnt a test function reproducing the real Wasserstein distance” and “the generator has been trained so that the Wasserstein distance is 0” - at which point you automatically have success). The code for the article accompanying the ichi2014 dataset you linked seems to be downloadable from this uni-siegen page 3. Maybe it is useful as a baseline. I’m not sure that I quite understand the spectrogram conversion from accelerometer data - this seems quite different from RF data. Best regards Thomas
st116682
Hi Thomas, thanks a lot for your insight - I never thought of comparing with the WGAN. So yes, I'm keen to do the experiment you suggest, i.e. comparing the three-player game setup in this paper with the WGAN. I'm very curious how the two compare! I will need to think a bit more carefully though about a reasonable architecture for the WGAN - I've got no experience of training sequence GANs, so I'm not sure how to do it yet. I've got a reasonable amount of the architecture and training code for the 3-player game paper done now. What I've got so far is that the Encoder E is basically just a standard image-captioning CNN-LSTM architecture with the 2D residual CNN replaced with a 1D residual CNN. The Predictor P can just be a simple MLP. So this architecture can be trained using the algorithm posted at the top of this thread, minus the discriminator loop, i.e. just ignoring the lines "repeat, Update discriminator D:, until". In the paper, in section 4.5 - Role of Our Adversarial Discriminator - the performance of this simple setup (of just the Encoder and Predictor) is referred to as the "baseline model", and it's compared with the full setup, i.e. including the Discriminator. It seems the performance of the baseline is not so bad on its own, but the addition of the discriminator has some important effects; in particular it allows learning the transitions between the labels, which is a hidden/latent category never presented to the predictor network - so that's empirically quite interesting. So hopefully we should be able to test this baseline model soon - i.e. in the next few days; I'll try it and get back to you. I'm guessing that it should train reasonably quickly compared to larger image/more complicated GAN models? As you pointed out, looking into the data pre-processing in more detail, it appears there are a few different reasonable ways to convert the accelerometer data into spectrograms. I believe the ichi2014 dataset has a 100 Hz sampling frequency, so slicing this up into 30-second windows should give 3000 data points - I'm hoping that this will be good enough to get a reasonable spectrogram, using scipy.signal.spectrogram, which should output a single 1D vector of the amount of energy in each frequency "bin" for each 30-second window. Alternatively I'll just have to experiment with different sized windows, as I've seen the spectrogram method applied to accelerometer sleep data before. I believe it's quite common for accelerometer activity recognition data - so perhaps I/we could have a look into that if this doesn't work? Alternatively, the authors of the MIT paper have said they will release their RF and polysomnography data, but I'm guessing if that does happen it won't be till mid August, after they present the paper at ICML17. Here's a nice picture of the time evolution of a spectrogram "window":

[image: spectrogram.jpg]

If I'm not going crazy - each single spectrogram is a slice through this 3D plot at a set time. This slice is then simply a vector of real numbers, where each number in the vector is the power in a particular frequency bin, i.e. the height plotted on the amplitude z-axis. Since there are 3 channels in the accelerometer data, I'm guessing I'm either going to train three separate Encoders, and perhaps share the weights? Or, alternatively and more simply, I could just add the spectrograms, to get the energies summed across all three channels for each given bin. I'm not too sure about these two design decisions, so I guess I just have to try them both?
I'll post all the code (loading/preprocessing/spectrogram, models and the training scripts) in a Github repo as soon as it's working/worthwhile to share. Really nice to be working with you again. Best regards, Ajay. PS - thank you very much for the link to the code on the uni-siegen page - somehow I missed that. PS2 - there's a nice video about the RF device implementation here - https://www.youtube.com/watch?v=BhSL7AILTzE - it doesn't talk about the three-player game or really any deep learning methods, but it's interesting background for this particular use case.
st116683
Hi I have a question about the THBlas.c file in the TH/generic folder. Originally I thought that these function references are for CBLAS functions, but the recent commit for sdot (lines 21 -30) suggests that CBLAS functions need to be specifically called. If they are not for CBLAS functions, then what are they calling? Thanks in advance. https://github.com/pytorch/pytorch/blob/master/torch/lib/TH/generic/THBlas.c#L12-L38 6
st116684
Hello, I profiled my code using cProfile, but the result seemed weird: one operation (sum) took most of the execution time. And when I added time.sleep(0.02) before the operation, the time attributed to the operation decreased. I suspect that is because the operation waits for all pending computations to be done. Is this one of the DCG properties?
st116685
Am I right in thinking the sum was a full reduction? It's because it implicitly copies the result host-side, causing a sync point. By the way, you are right that gpu operations are async by default, in the absence of any kind of sync point, such as reading data back to the host side.
st116686
Yes, the sum is a full reduction. Thanks for the clear answer. Then are cpu operations synchronous? EDIT: Why does a full reduction copy the result to the host?
st116687
Hi, The CPU operations are synchronous. Only the GPU operations are asynchronous. The full reduction returns a number, and to be able to return this number, it has to wait for the computation to be done.
st116688
There are a few possible readings of 'why?': what is the technical underlying reason? and why is it designed like this? The technical underlying reason is that anything that causes a 'read' of an actual concrete value from the gpu causes a sync point. Operations returning torch tensors don't necessarily force sync points. However, reduce-all, in its current implementation, returns a scalar float, rather than a tensor. This forces a sync point. Why does reduce-all return a scalar, rather than a tensor? I don't actually know, but I guess some combination of: maybe torch was written before gpus were widely available, and on the cpu, making reduce-all return a float seems not unreasonable; and torch is written by conv net guys, and for conv nets, reduce-all causing a sync point is almost unnoticeable in practice.
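A minimal timing sketch (assuming a CUDA-capable machine): call torch.cuda.synchronize() around the timed region, so queued kernels aren't attributed to whatever line happens to read a value next.

```python
import time
import torch

x = torch.rand(4096, 4096).cuda()

torch.cuda.synchronize()            # flush anything still queued from before
start = time.time()
y = torch.mm(x, x)                  # launched asynchronously, returns immediately
torch.cuda.synchronize()            # wait for the kernel to actually finish
print('elapsed: %.4f s' % (time.time() - start))
```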
st116689
hughperkins: However reduce all, in its current implementation, returns a scalar float, rather than a tensor. This forces a sync point. That’s exactly what I wondered. I didn’t even know sum returns float. Thanks!
st116690
For the 'ml align and translate' paper: I know they use a GRU, whereas I'm just using some generic non-gated RNN, for simplicity. But I'm just pondering how to integrate the context vector into a standard rnn. They write:

[image: Screen Shot 2017-07-16 at 11.05.39 AM.png]

I'm thinking of doing something like:

```python
class Decoder(object):
    def forward(self, input, context, state):
        return self.rnn(input + self.W @ context, state)
```

This way I can just reuse existing RNN classes. Thoughts?
st116691
How can I debug this?

```
File "training.py", line 48, in train
    loss.backward()  # compute the gradients
File "/home/user/anaconda3/lib/python3.6/site-packages/torch/autograd/variable.py", line 146, in backward
    self._execution_engine.run_backward((self,), (gradient,), retain_variables)
RuntimeError: sizes do not match at /py/conda-bld/pytorch_1493680494901/work/torch/lib/THC/generated/../generic/THCTensorMathPointwise.cu:216
```

My loss function is really complex and I think the problem might be when I do abs() or exp() operations, but I can't find out where, and the error doesn't say much. I have a conda-installed release pytorch.
st116692
It says you have two tensors whose sizes don't match. For example, you can't add two tensors if their sizes don't match. What I usually do in such cases is sprinkle print statements liberally just before the line in question, printing the .size() of the input tensors of the operation. If you have a debugger, that might be faster. I never figured out how to use a python debugger yet. Or perhaps I just find print sufficient/ok for me. Unclear…
st116693
pytorch seqtoseq, char-level, trains in 60 seconds. Overview of the concept of seq-to-seq; assumes you know RNNs already. Run the char-level training on a few English-French sentences, see it train ok-ish. Brief overview of the corresponding code. Go over the challenges I encountered trying to make a simple rnn model that trains in 30-60 seconds. Slides: https://docs.google.com/presentation/d/1z9INuS1VX2UigL3WqB60oJCMi_CJuaVRi-Qaun6fPl4/edit?usp=sharing Source code: https://github.com/hughperkins/pub-prototyping/blob/df9cf0c9fa473517956c55c33435924a289ddd36/papers/attention/seq2seq_noattention_trainbyparts.py Experiment log: https://github.com/hughperkins/pub-prototyping/blob/df9cf0c9fa473517956c55c33435924a289ddd36/papers/attention/mt_by_align_translate.log "Sequence to Sequence Learning with Neural Networks", by Sutskever, Vinyals, Quoc, 2014, https://arxiv.org/abs/1409.3215
st116694
A few videos that upgrade this code bit by bit:
1. Upgrade to use idiomatic pytorch Modules, rather than kind of inline function stuff: pytorch seqtoseq, revised v2, idiomatic pytorch modules
2. Upgrade to use timestep batching, in the encoder: pytorch seqtoseq, revised v2, batch timesteps
3. Upgrade to use minibatches, in both encoder and decoder: pytorch seqtoseq, revised v3, minibatches
Links to the relevant source code are included in the youtube descriptions, but they're basically different commits of: https://github.com/hughperkins/pub-prototyping/blob/master/papers/attention/seq2seq_noattention_trainbyparts.py
st116695
Hi, First of all, PyTorch is a very good library for deep learning researchers, and I would like to contribute to its improvement. I have a few questions:
1. I want to easily find the code path for a particular function or class. For example, torch.nn.CrossEntropyLoss is not defined in torch/nn but in torch/nn/modules/loss.py. Is there a way to easily find the code path?
2. We can refer to the CrossEntropyLoss class as torch.nn.CrossEntropyLoss or as torch.nn.modules.loss.CrossEntropyLoss. How does the former work?
3. In loss.py, torch.nn.L1Loss and torch.nn.MSELoss do not seem to be implemented. Where are these classes implemented?
Thanks
st116696
It's because they are using Python's intra-package references, and those __init__.py files are the key to following the mapping. For example, for your case, see: https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/__init__.py
st116697
The goal here, I would assume, is that they wanted people to be able to call all these numerous classes and functions simply from torch.nn, without having one massive file containing all of them.
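A small illustration of that re-export pattern (the package layout described in the comments is roughly how the repo is organised; the check at the end is something you can run yourself):

```python
# torch/nn/modules/__init__.py (roughly) does:
#     from .loss import CrossEntropyLoss, NLLLoss, ...
# and torch/nn/__init__.py then pulls those names in with:
#     from .modules import *
# so the class is reachable as torch.nn.CrossEntropyLoss even though it
# actually lives in torch/nn/modules/loss.py.
import torch.nn as nn

print(nn.CrossEntropyLoss.__module__)   # -> torch.nn.modules.loss
```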
st116698
@dgriff Thank you very much. Your answer covers question 2. Do you have any ideas about questions 1 and 3?
st116699
My bad, I see what you mean on question 3 - I missed it before, read too quickly. L1Loss seems to have its name remapped from AbsCriterion, so the code is under AbsCriterion. You will find that is the case for these others as well:

```python
name_remap = {
    'TemporalConvolution': 'Conv1d',
    'SpatialDilatedConvolution': 'DilatedConv2d',
    'SpatialMaxUnpooling': 'MaxUnpool2d',
    'SpatialReflectionPadding': 'ReflectionPad2d',
    'SpatialReplicationPadding': 'ReplicationPad2d',
    'VolumetricReplicationPadding': 'ReplicationPad3d',
    'VolumetricMaxUnpooling': 'MaxUnpool3d',
    'SoftMax': 'Softmax',
    'LogSoftMax': 'LogSoftmax',
    'HardTanh': 'Hardtanh',
    'HardShrink': 'Hardshrink',
    'SoftPlus': 'Softplus',
    'SoftShrink': 'Softshrink',
    'MSECriterion': 'MSELoss',
    'AbsCriterion': 'L1Loss',
    'BCECriterion': '_BCELoss',  # TODO: move the glue code into THNN
    'ClassNLLCriterion': 'NLLLoss',
    'DistKLDivCriterion': 'KLDivLoss',
    'SpatialClassNLLCriterion': 'NLLLoss2d',
    'MultiLabelMarginCriterion': 'MultiLabelMarginLoss',
    'MultiMarginCriterion': 'MultiMarginLoss',
    'SmoothL1Criterion': 'SmoothL1Loss',
    'SoftMarginCriterion': 'SoftMarginLoss',
}
```
st116700
I use the PyCharm IDE (https://www.jetbrains.com/pycharm/) and it provides a "follow to definition" feature. It's very convenient. EDIT: many IDEs provide this feature, like VS Code (although I haven't used that one).
st116701
I think it is a very interesting architecture w/ the proper CUDA kernel: faster than LSTMs + with customizable recurrent poolings/gates. Are there any plans of adding more recurrent layers, and more specifically QRNNs? Thanks!
st116702
I customized the PyTorch word language model (my code: https://github.com/ttpro1995/custom_word_language_model). I added a convolution layer before the LSTM (see the class ModelWrapper in https://github.com/ttpro1995/custom_word_language_model/blob/master/model/model_wrapper.py). In main.py, I freeze the encoder (embedding) and train the convolution, lstm and decoder:

```python
for p in model.conv_module.parameters():
    p.data.add_(-lr, p.grad.data)
for p in model.rnn.parameters():
    p.data.add_(-lr, p.grad.data)
for p in model.decoder.parameters():
    p.data.add_(-lr, p.grad.data)
```

I got a test ppl of 3.70, which is too small. Run command: python main.py --cuda --emsize 300 --nhid 168 --dropout 0.5 --epochs 40 --noglove. Log file: https://gist.github.com/e7644ad05836b6a147cb243e3764ff1f Please tell me if anything goes wrong.
st116703
Hi, I want to combine the model parallelism and data parallelism in PyTorch. For example, my model is too large, and has to be split into 2 GPUs with batch size = 1. In the same time, I want to fully utilize the 8 GPUs. I know how to do model parallelism, also know how to do data parallelism. But how to do them in the same time? Is there any example for my case?
st116704
Suppose I have a matrix X with m-by-n matrix training examples in a numpy array in memory. Should I convert it directly to X = Variable ( torch.FloatTensor(X) ) or convert it to X = torch.FloatTensor(X) and then convert to Variable as needed? What are the differences between these two approaches? What are their pros and cons?
st116705
I guess my question is whether you should be careful about when you wrap a Tensor in a Variable - for example, because you may still want to manipulate the data in it, and if it is in a Variable, it may keep track of these operations and do something unexpected during backward. Do I have to be more careful handling Variables than tensors? In the following example, memory was leaking with each call to train, but it stopped leaking when I wrapped tensors in Variables only when calling model and margin_loss, rather than once at the beginning. Is this a bug or am I overlooking something?

```python
def train(model, sents, tags, optimizer, kpred):
    margin_loss = nn.MultiMarginLoss(margin=1)
    model.train()
    for (s, t) in itertools.izip(sents, tags):
        len_t = t.size(0)
        t_pred = torch.LongTensor(len_t)
        t_pred[:kpred] = t[:kpred]
        s = Variable(s)
        t = Variable(t)
        t_pred = Variable(t_pred)
        for idx in xrange(kpred, len_t):
            optimizer.zero_grad()
            scores = model(s, t_pred, idx)
            loss = margin_loss(scores, t[idx:idx + 1])
            loss.backward()
            optimizer.step()
            _, ti_pred = torch.max(scores, 1)
            t_pred[idx] = ti_pred[0, 0]
            t_pred[idx] = t[idx]
```

Thanks
st116706
Variables hold a reference to the graph. I presume you were keeping around Variables across the boundary of the inner for-loop, and in that case the graph will be held on across iterations of the loop. That’s why (maybe) you thought memory was leaking, but actually the Variable was remembering what was done to it across all of for idx in xrange(kpred, len_t).
st116707
@smth as a side (but related) note, if we are selecting a random batch when doing SGD, do we also need to wrap things with Variables or torch tensors? E.g.:

```python
def get_batch2(X,Y,M,dtype):
    X,Y = X.data.numpy(), Y.data.numpy()
    N = len(Y)
    valid_indices = np.array( range(N) )
    batch_indices = np.random.choice(valid_indices,size=M,replace=False)
    batch_xs = torch.FloatTensor(X[batch_indices,:]).type(dtype)
    batch_ys = torch.FloatTensor(Y[batch_indices]).type(dtype)
    return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)
```
st116708
Hi all, I am new to framework with dynamic computation graph. I search everywhere but I couldn’t find a reference about how to implement mini-batch with RNN or even tree LSTM with varying length input. So I guess my general problem is how to do mini batch with dynamic computation graph. Thanks.
st116709
For RNNs, there's already a batched variable-length example (on the SNLI dataset, https://github.com/pytorch/examples/tree/master/snli); there all you have to do is sort the examples a little so that you can batch together sentences of similar lengths with minimal padding. TreeRNNs are harder. I'll add an example soon that does this, but the general idea for TreeRNNs is that batching is up to you as the user, and you should split and concatenate when you need to. So if you use a binary tree structure, you can represent it as a shift-reduce parser (see the SPINN paper from Bowman et al); that means you can process multiple trees in parallel by doing preprocessing like this:

```
input:
  tree1: ((ab)c)
  tree2: (d(ef))

preprocessed input:
         1         2         3         4         5
  tree1: SHIFT(a)  SHIFT(b)  REDUCE    SHIFT(c)  REDUCE
  tree2: SHIFT(d)  SHIFT(e)  SHIFT(f)  REDUCE    REDUCE
```

and then using advanced indexing to copy all the tokens for SHIFT at each timestep in parallel, while concatenating the stack representations for batched REDUCE. Sorry if this is confusing, I promise an example will be up soon. I would add that PyTorch is impressively fast on TreeRNNs even without batching.
st116710
Thank you for the pointer. So it's actually up to the user to design the batching mechanism. Thank you all for building pytorch with amazing flexibility and great tutorials. Just curious, is there any plan to release a technical report about the performance of pytorch compared to other frameworks that support dynamic computation graphs?
st116711
For now I think we just have to say that we're quite fast. We'd rather have someone independently benchmark the frameworks, or do a collaboration where maintainers of each implement the same script. Otherwise the benchmarks can end up being a bit biased, because we don't know other libraries nearly as well as we do PyTorch.
st116712
In this particular SNLI example, there is an import of the module torchtext in train.py. Does that module exist? I can't find it.

examples/snli/train.py:
```python
from torchtext import data
from torchtext import datasets
```
st116713
Using BucketIterator, which produces minibatches with minimized paddings, would work ok for SNLI since it’s a sentence classification task. However, for sequence tagging tasks, I think having padded inputs (even though # pads is minimized) without gradient masking won’t be a good idea, since we would need to get gradients from the targets for the padded inputs.
st116714
Yes, this kind of padding is a stopgap until PyTorch has full masked RNN support, which is on its way.
st116715
What if we are facing arbitrary trees rather than binary ones? This can correspond to the childsum treelstm where each node can have different number of children. Is it still possible to batch with the shift-reduce strategy?
st116716
I hear that tensorflow-fold is able to batch trees of arbitrary shapes. Are there similar implementations in pytorch? Why is nobody trying to make such a tool?
st116717
for what it’s worth, I could install torchtext using: pip install git+https://github.com/pytorch/text.git (in a virtualenv environment, otherwise try with --user)
st116718
What's an example for a feedforward NN or CNN? I try to index my torch arrays of data and it says I can't/shouldn't be using numpy to index things. As in:

```python
def get_batch(X,Y,M):
    N = len(Y)
    valid_indices = np.array( range(N) )
    batch_indices = np.random.choice(valid_indices,size=M,replace=False)
    batch_xs = X[batch_indices,:]
    batch_ys = Y[batch_indices]
    return batch_xs, batch_ys
```

where X and Y are torch tensors (or Variables).
st116719
I think my code runs now, but it seems there has to be a better way than doing:

```python
def get_batch2(X,Y,M,dtype):
    X,Y = X.data.numpy(), Y.data.numpy()
    N = len(Y)
    valid_indices = np.array( range(N) )
    batch_indices = np.random.choice(valid_indices,size=M,replace=False)
    batch_xs = torch.FloatTensor(X[batch_indices,:]).type(dtype)
    batch_ys = torch.FloatTensor(Y[batch_indices]).type(dtype)
    return Variable(batch_xs, requires_grad=False), Variable(batch_ys, requires_grad=False)
```

I tried a couple of torch methods like gather and index_select but with no luck. Some of the things I tried:

```python
#valid_indices = torch.arange(0,N).numpy()
#valid_indices = np.array( range(N) )
#batch_indices = np.random.choice(valid_indices,size=M,replace=False)
#indices = torch.LongTensor(batch_indices)
#batch_xs, batch_ys = torch.index_select(X_mdl, 0, indices), torch.index_select(y, 0, indices)
#batch_xs,batch_ys = torch.index_select(X_mdl, 0, indices), torch.index_select(y, 0, indices)
```

I wonder if this silly moving from numpy to torch is actually slowing my code down! I hope not. (As a side note, I put my question on SO: https://stackoverflow.com/questions/45113245/how-to-get-mini-batches-in-pytorch-in-a-clean-and-efficient-way - are questions with this level of detail welcome on the pytorch forum or not? Or should one stick with SO?)
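One way to stay in torch the whole time is torch.randperm plus index_select; a minimal sketch of the same function without the numpy round-trip (assuming X and Y are plain Tensors):

```python
import torch
from torch.autograd import Variable

def get_batch(X, Y, M):
    # X, Y: torch Tensors; draw M random rows without converting to numpy
    indices = torch.randperm(Y.size(0))[:M]
    batch_xs = X.index_select(0, indices)
    batch_ys = Y.index_select(0, indices)
    return (Variable(batch_xs, requires_grad=False),
            Variable(batch_ys, requires_grad=False))
```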
st116720
Do I have to keep converting between my torch tensors/variables to numpy for it to work? I got error: return array(a, dtype, copy=False, order=order) TypeError: float() argument must be a string or a number, not 'torch.FloatTensor' so yes?
st116721
Yes, you have to convert your tensor into numpy with Tensor.numpy() in order to use matplotlib. However, your error comes from the fact that you are trying to convert a whole tensor into a float, like:

```python
t = Tensor(5)
float(t)
```

Apparently, your tensor already contains floats, so the conversion is not necessary; but the right way to do it would be t.float(). Then, to convert to numpy and use matplotlib, just do:

```python
n = t.numpy()
plt.plot(n)
plt.show()
```
st116722
Hi, I am reading the doc for nn.GRU, but it is not very clear about the layout of the returned hidden states. E.g. if I build a bidirectional GRU with 2 layers, the hidden state will have two entries for the 1st layer and two for the 2nd layer, but the ordering is not specified.
st116723
For example, I want to expand a variable of size 2 * 1 * 50 to 2 * 4 * 50, which should equal stack([variable] * 4, dim=1).
st116724
Use variable.expand (2,4,50) to get something similar as with torch.cat in your example. If you really meant stack, throw in .unsqueeze and amend the dimensions in expand. With the broadcasting functionality in master / the next release, you often don’t need to use expand. Best regards Thomas
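A minimal sketch of what Thomas describes (the sizes are the ones from the question, assumed to be 2 x 1 x 50):

```python
import torch
from torch.autograd import Variable

x = Variable(torch.rand(2, 1, 50))
y = x.expand(2, 4, 50)             # broadcast along dim 1, no data copy
z = torch.cat([x] * 4, dim=1)      # same values, but materialized in memory
print(y.size(), z.size())          # both 2 x 4 x 50
# expand() returns a non-contiguous view; call .contiguous() if a later op needs it
```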
st116725
Thank you I tried expand() but the module says assert_error in hx.contiguous(). How should I fix that?
st116726
I am trying to move a big tensor from CPU to GPU with the .cuda() method, but I found the time varies a lot. For example, the first move takes a few ms, but the second move may take a few seconds. When I use pdb to debug the code, the TENSOR.cuda() operation is fast if I pause for a while before executing it; if I do not pause, the operation takes longer. When I use ctrl + c to stop the program, the code is at `return new_type(self.size()).copy_(self, async)`. Does anyone know what causes this?
st116727
I am not completely sure if I am right or not, but PyTorch does not seem to run everything eagerly in the Python code; the Python code is more like an instruction stream. Before we call print or request the result explicitly, the result is not really in memory. So every time I call print or .cuda(), it takes a while to fetch the data.
st116728
I haven't tested this out in pytorch itself, but generally speaking any time you ask the gpu to do something, it's fairly asynchronous. So you send a request, in the mail as it were, and at some point in the future, when the gpu feels like it, has finished its breakfast, read the morning paper etc, you get the results back. Now, ok, it doesn't really read the paper etc, but it does take a while to do stuff. If you send more stuff whilst the first stuff hasn't finished, obviously the second set of stuff will be delayed for a while. Copying data to the gpu counts as 'doing stuff'. It may look instantaneous, because the instruction returns immediately, but it'll take a while. Hunt for an instruction with a name like 'sync', or similar, and have a play with that. In fact there's an example I made earlier here: github.com/pytorch/pytorch, issue "Reductions returning scalars cause implicit sync-point" (opened Jul 6, 2017, closed Jul 7, 2017): "Scalar reduction results are currently being returned as host-side scalars, causing an implicit sync-point, and gpu->device copy."
st116729
Thanks hughperkins, I like your explanation - it is intuitive and straightforward. I reckon the network forward propagation seems asynchronous too. Do you have any idea about that? Or is all pytorch code executed as "return immediately" instead of "return after really executing"?
st116730
Correct. Pretty much anything going to the gpu - forward prop, or whatever - is in general asynchronous, unless something forces otherwise. Things that cause sync points:
- calling 'sync'
- memory allocation
- displaying values on the host
- anything that causes values to be copied to the host
st116731
Thanks for the reply :grinning: I think I now know why .cuda() has varying execution times - a "traffic jam" happens on the GPU :joy:
st116732
```
File "/home/h/anaconda3/lib/python3.5/site-packages/torch/autograd/variable.py", line 620, in gather
    return Gather(dim)(self, index)
RuntimeError: expected a Variable argument, but got LongTensor
```

The code is:

```python
def forward(self, sentence, lengths):
    embed = self.embedding(sentence)
    packed_seq = pack_padded_sequence(embed, lengths, batch_first=True)
    out, _ = self.lstm(packed_seq)
    unpacked, unpacked_len = pad_packed_sequence(out, batch_first=True)
    maske = torch.LongTensor(unpacked_len).view(-1,1,1).expand_as(unpacked)
    spaces = self.out(unpacked.gather(1, maske))
    print(spaces.shape)
    return spaces
```

The documentation says that the index should be a tensor, but it seems not. If I use Variable(maske), the error will be:

```
File "/home/h/anaconda3/lib/python3.5/site-packages/torch/autograd/_functions/tensor.py", line 542, in forward
    return input.gather(self.dim, index)
RuntimeError: Invalid index in gather at /py/conda-bld/pytorch_1490983232023/work/torch/lib/TH/generic/THTensorMath.c:441
```
st116733
should be a Variable(torch.LongTensor( ... )) ‘invalid index’ means you have a value in the LongTensor that is less than 0, or greater than or equal to the first dimension of the embedding matrix
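A minimal sketch of that indexing pattern for the "last valid timestep" case (the shapes here are made up for illustration; note the index values are lengths minus one, so they stay in range):

```python
import torch
from torch.autograd import Variable

unpacked = Variable(torch.rand(2, 5, 3))          # batch x max_len x hidden
lengths = torch.LongTensor([5, 3])                # true sequence lengths
idx = Variable((lengths - 1).view(-1, 1, 1).expand(2, 1, 3))
last_steps = unpacked.gather(1, idx)              # batch x 1 x hidden
```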
st116734
Hi, Looking for code review on ths following seq2seq implementation. char-level, with shared embedding, and with encoder added to loss function, for faster initial training. github.com hughperkins/pub-prototyping/blob/master/papers/attention/seq2seq_noattention_trainbyparts.py 53 """ pytorch seq2seq designed for rapid prototyping, so trains super quickly, on just a few examples """ import torch from torch import nn, autograd, optim import numpy as np import math import sys import time import encoding # import data_starredwords as data import data_anki as data # N => N # S => max_sentence_length # L => num_layers # H => hidden_size This file has been truncated. show original import torch from torch import nn, autograd, optim import numpy as np import math import sys import encoding # import data_starredwords as data import data_anki as data N = 100 # N = 8 N = 16 max_sentence_len = 10 N = 4 print_every = 2 hidden_size = 16 # hidden_size = 1024 hidden_size = 256 # num_epochs = 16 # N = 10 training = data.Data().get_training(N=N) training = [ {'input': ex['first'][:max_sentence_len], 'target': ex['second'][:max_sentence_len]} for ex in training ] for n in range(min(N, 16)): print(n, training[n]) for i, example in enumerate(training): example['input_encoded'] = encoding.encode_passage(example['input']) example['target_encoded'] = encoding.encode_passage(example['target']) V = len(encoding.char_by_idx) print('vocab size %s' % V) torch.manual_seed(123) np.random.seed(123) class Encoder(nn.Module): def __init__(self, embedding): super().__init__() self.input_size = embedding.weight.size()[0] self.hidden_size = embedding.weight.size()[1] self.embedding = embedding self.rnn_enc = nn.RNN( input_size=self.hidden_size, hidden_size=self.hidden_size, num_layers=1, nonlinearity='tanh' ) def forward(self, x, state): x = self.embedding(x) x, state = self.rnn_enc(x, state) return x, state class Decoder(nn.Module): def __init__(self, embedding): super().__init__() self.input_size = embedding.weight.size()[0] self.hidden_size = embedding.weight.size()[1] self.embedding = embedding self.rnn_dec = nn.RNN( input_size=self.hidden_size, hidden_size=self.hidden_size, num_layers=1, nonlinearity='tanh' ) def forward(self, x, state): x = self.embedding(x) x, state = self.rnn_dec(x, state) return x, state optimizer_fn = optim.Adam # optimizer_fn = optim.SGD embedding = nn.Embedding(V, hidden_size) encoder = Encoder(embedding=embedding) decoder = Decoder(embedding=embedding) embedding_matrix = embedding.weight parameters = ( set(encoder.parameters()) | set(decoder.parameters()) | set(embedding.parameters())) opt = optimizer_fn(parameters, lr=0.001) epoch = 0 while True: encoder_debug = '' decoder_debug = '' for n, ex in enumerate(training): input_encoded = ex['input_encoded'] target_encoded = ex['target_encoded'] input_len = len(input_encoded) target_len = len(target_encoded) teacher_forcing = (epoch % print_every) != 0 loss = 0 criterion = torch.nn.NLLLoss() # encode def encode(input_encoded, state): global encoder_debug enc_loss = 0 prev_c = encoding.start_code input_sentence_verify = '' sentence = '' # [1:] is to cut off the start token # [:-1] is to cut off end token too :-) for t, input_c in enumerate(input_encoded[1:]): input_c = input_c.item() input_sentence_verify += encoding.char_by_idx[input_c] pred_c_embedded, state = encoder(autograd.Variable(torch.LongTensor([[prev_c]])), state) pred_c = pred_c_embedded.view(-1, hidden_size) @ embedding_matrix.transpose(0, 1) _, v = pred_c.max(-1) v = v.data[0][0] sentence 
+= encoding.char_by_idx[v] # want to force encoder to build language model a bit faster than # if it has to wait only for gradient from decoder: enc_loss += criterion(pred_c, autograd.Variable( torch.LongTensor([input_c]))) prev_c = input_c if n <= 4 and epoch % print_every == 0: if n == 0: encoder_debug += 'epoch %s encoder:\n' % epoch encoder_debug += ' [%s] => [%s]\n' % (input_sentence_verify, sentence) return state, enc_loss state = autograd.Variable(torch.zeros(1, 1, hidden_size)) state, enc_loss = encode(input_encoded, state) loss += enc_loss # decode if True: prev_c = encoding.start_code output_sentence = '' for t, target_c in enumerate(target_encoded[1:]): target_c = target_c.item() pred_c_embedded, state = decoder( autograd.Variable(torch.LongTensor([[prev_c]])), state) pred_c = pred_c_embedded.view(-1, hidden_size) @ embedding_matrix.transpose(0, 1) _, v = pred_c.max(-1) v = v.data[0][0] output_sentence += encoding.char_by_idx[v] loss += criterion(pred_c, autograd.Variable(torch.LongTensor( [target_c]))) if teacher_forcing: prev_c = target_c else: # if we're already wrong, let's just abandon... if target_c != v: break prev_c = v if n <= 1 and epoch % print_every == 0: if n == 0: decoder_debug += 'epoch %s decoder:\n' % epoch if not teacher_forcing: decoder_debug += ' [%s] => [%s] [%s]\n' % ( ex['input'], ex['target'], output_sentence) embedding.zero_grad() encoder.zero_grad() decoder.zero_grad() loss.backward() torch.nn.utils.clip_grad_norm(parameters, 4.0) opt.step() if encoder_debug != '': print(encoder_debug) if decoder_debug != '': print(decoder_debug) epoch += 1
st116735
(since I’m using similar examples to create tutorial Youtube videos, seems like making sure it’s more or less recognizeable idiomatic pytorch is probably a good idea )
st116736
Hi everyone. I have taken a pre-trained resnet model. I created a new model with the first few layers (and I also transferred the weights of the common first layers), somewhere around half of the original pre-trained model. When I pass an image as input, the time the new custom model takes to process the image is more than the whole resnet takes. Is there something wrong? A similar thing is happening when I use a pre-trained alexnet: I created a new model which contains only the 'features' part of the original alexnet model, and the time my new model takes is much larger than what the full alexnet model takes to process. Please help me.
st116737
It's normal for the first few layers to dominate the time, since the image is really large at the start and hasn't gone through any maxpooling layers yet. If you can find a solution to this, I guess it's paper-worthy.
st116738
But even the last layers take more time than the full model itself if done separately…
st116739
Oh, I see. Well, that could be explainable by the time it takes to copy data to and from the gpu. But I have trouble visualizing how you’re rewriting the layers to cause such a copy? Perhaps you can paste a short representative working example of how such code looks?
st116740
When I try and compile pytorch on OSX I get this:

```
nvcc fatal : The version ('80100') of the host compiler ('Apple clang') is not supported
```

The solution here is to downgrade to 7.3; see github.com/arrayfire/arrayfire, issue "NVCC does not support Apple Clang version 8.x" (opened Apr 14, 2016): "Error message: nvcc fatal : The version ('80000') of the host compiler ('Apple clang') is not supported" - a known OSX issue.

That worked the last time I compiled pytorch on my MBP, but now pytorch fails to build because of TLS references:

```
Devel/pytorch/pytorch/torch/lib/THD/master_worker/master/State.hpp:23:3: error: thread-local storage is not supported for the current target
  thread_local static rank_type s_current_worker;
  ^
```

Any suggestions?
st116741
oh man, OSX + CUDA is such a mess. For now you can compile without distributed using: NO_DISTRIBUTED=1 python setup.py install
st116742
@smth Thanks! I’m actually just using the Torch libs so I can get away with just disabling building essentially all C++ code.
st116743
… places output in a different place from literally every other updateOutput function that auto.py calls in to.
st116744
I am trying to make my own customer layer with an RNN that takes the generated text, does some transformations to it, then converts back into a tensor. I am not sure if what Im thinking is doable. Is it possible to generate the output of the current batch inside an RNN model during a forward pass? I want to use that data to inform the loss function. I tried with this model definition: class TestLayer(nn.Module): def __init__(self): super(TestLayer,self).__init__() def forward(self,input): prime_str='A' predict_len=100 temperature=0.8 #hidden = decoder.init_hidden() prime_input = char_tensor(prime_str) predicted = prime_str # Use priming string to "build up" hidden state for p in range(len(prime_str) - 1): _, hidden = decoder(prime_input[p], hidden) inp = prime_input[-1] for p in range(predict_len): output, hidden = decoder(inp, hidden) # Sample from the network as a multinomial distribution output_dist = output.data.view(-1).div(temperature).exp() top_i = torch.multinomial(output_dist, 1)[0] # Add predicted character to string and use as next input predicted_char = all_characters[top_i] predicted += predicted_char inp = char_tensor(predicted_char) return predicted class RNN(nn.Module): def __init__(self, input_size, hidden_size, output_size, n_layers=1): super(RNN, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.output_size = output_size self.n_layers = n_layers self.test_layer = TestLayer() self.encoder = nn.Embedding(input_size, hidden_size) self.gru = nn.GRU(hidden_size, hidden_size, n_layers) self.decoder = nn.Linear(hidden_size, output_size) def forward(self, input, hidden): input = self.encoder(input.view(1, -1)) output, hidden = self.gru(input.view(1, 1, -1), hidden) output = self.decoder(output.view(1, -1)) output = self.test_layer(output) return output, hidden def init_hidden(self): return Variable(torch.zeros(self.n_layers, 1, self.hidden_size))
st116745
I am training a neural network, and I just need a subset of the data for training, e.g. labels 3, 5, 8 out of 10 classes. But for some reason, I don't want to change the label values in the input to the network. So in my Softmax loss layer, I want to change the label set (3, 5, 8) to (0, 1, 2), so the system can train without memory issues. So how can I change the label variable directly? Any good suggestions? Thanks!
st116746
If you don’t want to change it during preprocessing, maybe using a fixed Embedding layer will do?
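Alternatively, a minimal sketch of a plain lookup-table remap (the label values 3, 5, 8 come from the question; `labels` is assumed to be a LongTensor containing only those values):

```python
import torch

lut = torch.LongTensor(10).fill_(-1)     # -1 marks classes we never expect to see
for new_id, old_id in enumerate([3, 5, 8]):
    lut[old_id] = new_id

remapped = lut[labels]                   # e.g. LongTensor([3, 8, 5, 3]) -> [0, 2, 1, 0]
```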
st116747
I’ve seen that the GPU device can be picked by passing an argument to the cuda method. However, I was wondering if it would be possible to define this through environment variables.
st116748
Yes, it is possible. You can either run your script by setting CUDA_VISIBLE_DEVICES like CUDA_VISIBLE_DEVICES=1 python myscript.py Or change it in the code, as done in this script 1.7k
st116749
This is specific to CUDA, and not to pytorch. You can find some more information about it in this link 1.1k
st116750
One caveat for setting the CUDA_VISIBLE_DEVICES environment variable in the code is that that has to be done before any call to torch.cuda.
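A minimal sketch of the in-code variant, with the caveat above baked in (set the variable before torch.cuda is touched):

```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'   # must happen before the first torch.cuda call

import torch
x = torch.rand(3).cuda()                   # lands on physical GPU 1, seen as device 0
print(torch.cuda.current_device())         # prints 0
```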
st116751
I have two networks that share some layers. For example, F and F1 compose net1, F and F2 compose net2, and net1 and net2 compose net. I first trained F and F1, then I set F's and F1's weights to False (froze them) and began to train F2. But every epoch I find the accuracy of net1 changes a little, whereas when I set net.eval() while training F2, the accuracy does not change. However, every epoch when I test the accuracy of net1 I use net.eval(), so I do not know why this happens.
st116752
Might be just me, but I found this kind of hard to follow. Maybe you can make a really simple example, using just one linear layer or something, and random data (torch.rand), so it's easy to reproduce?
st116753
Is there batchnorm in either network? Batchnorm updates its running averages when run in training mode, even when you don’t use an optimizer.
st116754
Yes, there are BN layers, but the weights in the BN layers are also frozen - how do they change?
st116755
There’s the learned weight and bias, but also a separate running mean and running variance that it keeps track of outside of any optimizer. If you want these not to update you need to set the BatchNorm module’s momentum to 0.
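A minimal sketch of freezing those running statistics (the module/variable names are generic, not from the thread; adapt to your net1):

```python
import torch.nn as nn

for m in model.modules():
    if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
        m.momentum = 0      # running_mean / running_var stay fixed even in train() mode
        # or simply: m.eval()
```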
st116756
I need to use softmax function on variables containing 5D tensors (batch_size X nChannels X W X D X H ) that is common when working with 3D images of size W X D X H. Since softmax() doesn’t support 5D, I was changing the variable as below: var = var.view(batch_size, nChannels, -1) However running softmax(var) throws runtime error when W X D X H is bigger than a certain size. Please see here 11 for an example to produce this error. Is there another better way of making softmax() and other functions work on 3D images that end up being 5d tensors with the batch and channel dimension ?
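One possible reshaping sketch (an assumption-laden workaround, not necessarily the fastest for huge volumes): fold the spatial dimensions into the batch dimension so the softmax only ever runs over nChannels columns.

```python
import torch.nn.functional as F

def channel_softmax_5d(x):
    # x: batch x channels x W x D x H; softmax over the channel dimension
    n, c = x.size(0), x.size(1)
    spatial = x.size()[2:]
    flat = x.view(n, c, -1).transpose(1, 2).contiguous().view(-1, c)
    flat = F.softmax(flat)   # 2D input: softmax over the class dimension
    return flat.view(n, -1, c).transpose(1, 2).contiguous().view(n, c, *spatial)
```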
st116757
Is it intended behavior? Doesn’t it just have backportability? It’s possible to load cpu()'ed model’s state_dict using cuda()'ed model’s load_state_dict(), but backward is not, when cuda is not available (CUDA_VISIBLE_DEVICES=-1). I read torch.load() doc, and I understand that there may be some cuda-device-related information in state_dict. So I think that would be good nn.Module.state_dict() to have an option to convert all output tensors to cpu tensors when saving. EDIT: I tried cpu()-save()-cuda() way to resolve this problem, but I just found out that optimizer’s interface doesn’t have cpu() or cuda(). Then how can I resolve this? My problem is, I just want to check if my model is learning correctly, but I’m getting out of memory error. CUDA_VISIBLE_DEVICES=-1 resolves OOM error, but raised another error.
st116758
If you want to load a model trained using cuda to run on cpu you just load as: saved_state = torch.load("saved_model_dict_file", map_location=lambda storage, loc: storage) model.load_state_dict(saved_state)
st116759
Hi! I'm one of the maintainers of tiny-dnn and we'd like to know the position of this community regarding importing/exporting models from/to other frameworks. We are also in touch with the Khronos group to help with their initiative to define a proper Neural Network Exchange Format (NNEF), which could make things easier. More details in the linked post. Meanwhile, do you already have any internal data structure for reading model definitions?
st116760
We’re open to discussion about formats for sharing parameters and model definitions between libraries. If there’s going to be a larger initiative with some of the frameworks adopting a new format, we’ll likely join and try to help. Right now our serialization is very torch specific and we didn’t design it having in mind interoperability with other software. If you need to export model weights I encourage using state_dict and some more common formats like HDF5 or protobuf.
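A minimal export sketch along those lines (h5py and the `model` variable are assumptions here; `model` stands for whatever nn.Module you want to export, and any key-value format works the same way):

```python
import h5py

state = model.state_dict()              # model: any nn.Module
with h5py.File('weights.h5', 'w') as f:
    for name, tensor in state.items():
        f.create_dataset(name, data=tensor.cpu().numpy())
```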
st116761
Is there a suggested format to export from pytorch for use in other frameworks and custom compilers?
st116762
Chinese tutorial on creating autograd.Variable and running backward. Fairly basic. (Basically similar to https://www.youtube.com/watch?v=4F2LfiY8JLo , but in Chinese.) youtube: "A short introduction to pytorch's autograd.Variable" (给介绍pytorch的autograd.Variable一点) tudou:
st116763
With torchvision.transforms.RandomHorizontalFlip I can randomly flip the input image. Is there a way to retrieve whether the image was flipped or not? I need this to adapt my target.
st116764
Best way to deal with this is to write your own transform. Check out the code of torchvision.transforms.RandomHorizontalFlip 16
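A minimal sketch of such a paired transform (this assumes both the image and the target are PIL images, e.g. a segmentation mask):

```python
import random
from PIL import Image

class RandomHorizontalFlipPair(object):
    """Flip image and target together, so one random draw drives both."""
    def __call__(self, img, target):
        if random.random() < 0.5:
            img = img.transpose(Image.FLIP_LEFT_RIGHT)
            target = target.transpose(Image.FLIP_LEFT_RIGHT)
        return img, target
```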
st116765
Hey guys, I'm having this weird error with NLLLoss. The predicted and target outputs are the following:

Predicted
```
Variable containing:
-2.8742 -0.0581
-2.1152 -0.1285
-2.8361 -0.0604
-3.5815 -0.0282
-3.4316 -0.0329
-3.9429 -0.0196
-2.6402 -0.0740
-2.7773 -0.0642
-2.3009 -0.1056
-1.7565 -0.1895
[torch.FloatTensor of size 10x2]
```

Target
```
Variable containing:
 1
 0
 1
 1
 1
 1
 0
 0
 0
 1
[torch.LongTensor of size 10x1]
```

And I get the following error:

```
RuntimeError: multi-target not supported at /Users/soumith/code/builder/wheel/pytorch-src/torch/lib/THNN/generic/ClassNLLCriterion.c:20
```

I tried with toy examples before and the formats seem pretty much valid. I had a different version of the model running and it was working… Is this some kind of pytorch bug? Best regards, Miguel

EDIT: It only happens when I load the target output from numpy! I was doing:

```python
target = Variable(torch.from_numpy(np.random.randint(2, size=(10, 1))))
```

This is definitely something that should not happen…
st116766
the targets should be 1 dimension less than outputs for NLLLoss, as they are integer values from 0 to nClasses-1. So in your case, target should be 1D tensor of size 10
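Concretely, the numpy draw just needs to produce a 1D array; a sketch of the fix:

```python
import numpy as np
import torch
from torch.autograd import Variable

# shape (10,) rather than (10, 1), so NLLLoss sees one class index per example
target = Variable(torch.from_numpy(np.random.randint(2, size=10)))
```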
st116767
I guess that's exactly what I was doing… It may be related to the way numpy represents 1D vectors (either as a column or a row vector).