st46868
If you want to freeze the model and use a KNN implementation from e.g. sklearn, you could replace the last linear layer with nn.Identity to get the features from the penultimate layer and feed them to sklearn.neighbors.KNeighborsClassifier. Alternatively, you could also implement KNN manually in PyTorch (you should be able to find some implementations in this forum or probably on GitHub).
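A minimal sketch of the first approach, assuming a pretrained torchvision ResNet18 as the frozen backbone; train_images, train_labels, and test_images are placeholders for your own data:

import torch
import torch.nn as nn
from torchvision import models
from sklearn.neighbors import KNeighborsClassifier

model = models.resnet18(pretrained=True)
model.fc = nn.Identity()  # expose the 512-dim penultimate features
model.eval()

with torch.no_grad():
    train_feats = model(train_images)  # e.g. shape [N, 3, 224, 224] -> [N, 512]
    test_feats = model(test_images)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_feats.numpy(), train_labels.numpy())
preds = knn.predict(test_feats.numpy())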
st46869
Hello everyone, I want to build Pytorch with GLIBCXX_USE_CXX11_ABI=1. How should I go about setting this? I'm following the official guide, which only says:

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
python setup.py install

I couldn't find anything regarding this. It's not even indicated in the setup.py. Thanks a lot in advance.
st46871
I also found this, which says where we can change this in cmake:

if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
  set(TORCH_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=0")
endif()

For me though I had to change /cmake/TorchConfig.cmake.in, which is:

# When we build libtorch with the old GCC ABI, dependent libraries must too.
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
  set(TORCH_CXX_FLAGS "-D_GLIBCXX_USE_CXX11_ABI=@GLIBCXX_USE_CXX11_ABI@")
endif()

But is there another way to set the variable instead of hardcoding this into the cmake file?

Update: Will doing something like this suffice?

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
export TORCH_CXX_FLAGS="-D_GLIBCXX_USE_CXX11_ABI=1"
python setup.py install

OK, I did this and the build process finished successfully.

Update 2: However, upon importing torch now I get:

In [1]: import torch
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-1-eb42ca6e4af3> in <module>
----> 1 import torch

~/Desktop/Pytorch1_6/pytorch-1.6.0/torch/__init__.py in <module>
    333
    334 # If you edit these imports, please update torch/__init__.py.in as well
--> 335 from .random import set_rng_state, get_rng_state, manual_seed, initial_seed, seed
    336 from .serialization import save, load
    337 from ._tensor_str import set_printoptions

~/Desktop/Pytorch1_6/pytorch-1.6.0/torch/random.py in <module>
      2 import warnings
      3
----> 4 from torch._C import default_generator
      5
ImportError: cannot import name 'default_generator' from 'torch._C' (unknown location)

What's wrong? OK, found the cause: if you try to import torch while inside the top-level pytorch source tree, it will import the torch directory and not the installed torch module. Try changing your current working directory and see if the issue resolves. With this now fixed, mission accomplished:

In [1]: import torch
In [2]: torch._C._GLIBCXX_USE_CXX11_ABI
Out[2]: True

Final note: to create the whl file, simply replace python setup.py install with pip wheel ., or simply do:

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
export TORCH_CXX_FLAGS="-D_GLIBCXX_USE_CXX11_ABI=1"
pip wheel .
st46872
While debugging a program with a memory leak I discovered that the leak was bigger when I was using the pycharm debugger. I haven't compared this to other debuggers, but there was definitely a much larger gpu memory consumption. I tried a whole bunch of debugger settings, including "on demand", but none seem to make a difference. (Note: this post has been edited to add this clarification, as I originally blatantly blamed the pycharm debugger, but as @googlebot pointed out in his comments, it could just as well be the case with any other debugger.) The original post follows: Is anybody using the pycharm debugger with pytorch programs? It leaks gpu memory like there is no tomorrow. This sounds similar to the problem with the backtrace on OOM in ipython not freeing gpu memory. Unknowingly, I made the mistake of actually trying to debug memory leakage using pycharm, not realizing that the pycharm debugger itself is storing tensors and not freeing them, even with a forced gc.collect(). I suppose this is by design, since pycharm stores all the variables for the user to access and thus they can't be freed until the frames are exited. I discovered this while writing a script to reproduce a memory leak: only when I added gpu memory tracing to it did I notice that I was getting totally different measurements when running the same script under the debugger and not. I tried a whole bunch of settings, but none seem to help. If you have a way to tell pycharm not to store intermediary vars, please share, but somehow I doubt it's even possible. So if you're trying to debug a pytorch program under pycharm and you end up getting OOM, this is why.
st46873
Use “Variable loading policy” = on demand. IPython’s history may keep tensors alive (underscore variables like _, _10). In practice, I almost never have issues caused by that, but maybe your console usage patterns are different. So, disabling IPython may do something (I don’t know how to disable history only). If you don’t pause or use breakpoints, I don’t see how pycharm would allocate cuda memory.
st46874
Thank you for your follow up, @googlebot.

"Use "Variable loading policy" = on demand."

As I mentioned, I tried many different options, including this one, to no avail.

"IPython's history may keep tensors alive (underscore variables like _, _10)"

No, this has to do with ipython not releasing GPU ram on OOM, a huge problem for jupyter users. A fix was proposed almost 2 years ago, but it has never been integrated: github.com/ipython/ipython, "fix a memory leak on exception (caused by the stored traceback)" (PR stas00:leak-on-exc against ipython:master, opened Jan 22, 2019, +2 -0). I use a 1/0 cell-fix following the oom cell to work around it.

"If you don't pause or use breakpoints, I don't see how pycharm would allocate cuda memory."

Right, basically you're saying do not use the pycharm debugger. It's not allocating cuda memory; it prevents variables from being freed and gc.collect()ed, and thus memory from being freed.
st46875
Your message sounded like the debugger is somehow defective, which is not the case in my experience. IPython/jupyter's leaks would happen regardless of PyCharm or any other debugger IDE, wouldn't they?
st46876
You're correct that I made a broad statement without comparing to other debuggers. I appreciate you flagging that, @googlebot; I edited the first post to reflect it. While there is definitely a similarity, I don't think we can compare this to jupyter/ipython. In ipython you can control which variables remain in scope, whereas debuggers do their own tracking that you can't control. Unfortunately, I haven't saved that particular code that was showing drastically different gpu memory usage w/ and w/o the debugger. I tried a few approaches now, and the only correlation I found is with the number of breakpoints in frames that had huge vars on cuda: more breakpoints meant more extra memory used. I may get a chance to investigate it more, and if I do I will report back. But I'm surely going to be wary of watching memory usage under a debugger and will have to check the usage patterns outside of the debugger.
st46877
I am trying to train a network to output target values (between 0 and 1). I cannot batch my inputs, so I am using a batch size of 1. Since I don't want the sum of the loss gradients of each example, but the gradient of the average loss, I am adding item_loss/num_items for each item to end up with an average epoch_loss and optimize that. But it trains strangely and poorly. To illustrate, I will train with just 1 or 2 examples.

If I train with just 1 example (say y = .25) and the initial output is .15, for some reason the output values continue to increase over multiple epochs even after passing the target, e.g. [.15, .18, .21, .225, .24, .256, .275, .31]. Here are training logs from a real example:

Outputs: 0.525374174118042   Targets: 0.7524919509887695   Item loss: 0.05158248543739319
Epoch: 1   Batch: 0   Batch loss: 0.0515825
Outputs: 0.5907765030860901   Targets: 0.7524919509887695   Item loss: 0.026151886209845543
Epoch: 2   Batch: 1   Batch loss: 0.0261519
Outputs: 0.6628296971321106   Targets: 0.7524919509887695   Item loss: 0.008039319887757301
Epoch: 3   Batch: 2   Batch loss: 0.0080393
Outputs: 0.735643744468689   Targets: 0.7524919509887695   Item loss: 0.0002838620566762984
Epoch: 4   Batch: 3   Batch loss: 0.0002839
Outputs: 0.7974085807800293   Targets: 0.7524919509887695   Item loss: 0.0020175036042928696
Epoch: 5   Batch: 4   Batch loss: 0.0020175
Outputs: 0.8372832536697388   Targets: 0.7524919509887695   Item loss: 0.007189564872533083
Epoch: 6   Batch: 5   Batch loss: 0.0071896
Outputs: 0.8560269474983215   Targets: 0.7524919509887695   Item loss: 0.010719495825469494
Epoch: 7   Batch: 6   Batch loss: 0.0107195
Outputs: 0.8599795699119568   Targets: 0.7524919509887695   Item loss: 0.011553588323295116
Epoch: 8   Batch: 7   Batch loss: 0.0115536
Outputs: 0.8537989258766174   Targets: 0.7524919509887695   Item loss: 0.010263103060424328
Epoch: 9   Batch: 8   Batch loss: 0.0102631
Outputs: 0.8402236700057983   Targets: 0.7524919509887695   Item loss: 0.007696854416280985
Epoch: 10   Batch: 9   Batch loss: 0.0076969
Outputs: 0.8212108612060547   Targets: 0.7524919509887695   Item loss: 0.0047222888097167015
Epoch: 11   Batch: 10   Batch loss: 0.0047223
Outputs: 0.7986907958984375   Targets: 0.7524919509887695   Item loss: 0.0021343333646655083
Epoch: 12   Batch: 11   Batch loss: 0.0021343
Outputs: 0.7748352289199829   Targets: 0.7524919509887695   Item loss: 0.0004992220783606172
Epoch: 13   Batch: 12   Batch loss: 0.0004992
Outputs: 0.7519184350967407   Targets: 0.7524919509887695   Item loss: 3.289204641987453e-07
Epoch: 14   Batch: 13   Batch loss: 0.0000003

If I train with just 2 examples (say y_1 = .25, y_2 = .7), for some reason after 1 or 2 epochs the outputs for both examples will always be the same, e.g. [(.354, .332), (.36, .36), (.38, .38), (.352, .352)]. I have no idea why. It doesn't get close to either one of the targets, but I suspect that it is optimizing for their average, (y_1 + y_2)/2.
Here are training logs from a real example:

Outputs: 0.6534302234649658   Targets: 0.12747164070606232   Item loss: 0.13831622898578644
Outputs: 0.781857967376709   Targets: 0.6774895191192627   Item loss: 0.005446386523544788
Epoch: 1   Batch: 0   Batch loss: 0.1437626
Outputs: 0.49351614713668823   Targets: 0.12747164070606232   Item loss: 0.06699429452419281
Outputs: 0.49351614713668823   Targets: 0.6774895191192627   Item loss: 0.016923101618885994
Epoch: 2   Batch: 1   Batch loss: 0.0839174
Outputs: 0.4287058711051941   Targets: 0.6774895191192627   Item loss: 0.030946651473641396
Outputs: 0.4287058711051941   Targets: 0.12747164070606232   Item loss: 0.04537103697657585
Epoch: 3   Batch: 2   Batch loss: 0.0763177
Outputs: 0.37630677223205566   Targets: 0.12747164070606232   Item loss: 0.030959460884332657
Outputs: 0.37630677223205566   Targets: 0.6774895191192627   Item loss: 0.045355524867773056
Epoch: 4   Batch: 3   Batch loss: 0.0763150
Outputs: 0.3388264775276184   Targets: 0.6774895191192627   Item loss: 0.05734632909297943
Outputs: 0.3388264775276184   Targets: 0.12747164070606232   Item loss: 0.022335434332489967
Epoch: 5   Batch: 4   Batch loss: 0.0796818
Outputs: 0.31596532464027405   Targets: 0.6774895191192627   Item loss: 0.06534986943006516
Outputs: 0.31596532464027405   Targets: 0.12747164070606232   Item loss: 0.01776493526995182
Epoch: 6   Batch: 5   Batch loss: 0.0831148
Outputs: 0.3056109547615051   Targets: 0.12747164070606232   Item loss: 0.015866806730628014
Outputs: 0.3056109547615051   Targets: 0.6774895191192627   Item loss: 0.06914683431386948
Epoch: 7   Batch: 6   Batch loss: 0.0850136
Outputs: 4.601355976774357e-05   Targets: 0.6774895191192627   Item loss: 0.2294648438692093
Outputs: 0.013865623623132706   Targets: 0.12747164070606232   Item loss: 0.006453163921833038
Epoch: 8   Batch: 7   Batch loss: 0.2359180
Outputs: 0.3054547607898712   Targets: 0.6774895191192627   Item loss: 0.06920493394136429
Outputs: 0.3054547607898712   Targets: 0.12747164070606232   Item loss: 0.015838995575904846
Epoch: 9   Batch: 8   Batch loss: 0.0850439
Outputs: 0.31345269083976746   Targets: 0.12747164070606232   Item loss: 0.017294475808739662
Outputs: 0.31345269083976746   Targets: 0.6774895191192627   Item loss: 0.0662614032626152
Epoch: 10   Batch: 9   Batch loss: 0.0835559
Outputs: 0.32778313755989075   Targets: 0.12747164070606232   Item loss: 0.020062347874045372
Outputs: 0.32778313755989075   Targets: 0.6774895191192627   Item loss: 0.06114727631211281
Epoch: 11   Batch: 10   Batch loss: 0.0812096
Outputs: 0.34695807099342346   Targets: 0.6774895191192627   Item loss: 0.05462551862001419
Outputs: 0.34695807099342346   Targets: 0.12747164070606232   Item loss: 0.024087145924568176
Epoch: 12   Batch: 11   Batch loss: 0.0787127

How can I fix these problems? What could I be doing wrong?
Here is my training code:

def train(training_dataloader, model, loss_func, optimizer, epochs, log_interval, save_as):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    num_items = len(training_dataloader.sampler.indices)
    for epoch in range(epochs):
        epoch_loss = 0.0
        for i, batch in enumerate(training_dataloader):
            a = torch.tensor(batch["a"]).float().to(device)
            b = torch.tensor(batch["b"]).float().to(device)
            inputs = [a, b]
            output_values = model(inputs)
            true_values = batch["c"].float().to(device)
            item_loss = loss_func(output_values, true_values) / num_items
            epoch_loss += item_loss
            if i % log_interval == 0:
                print(f"Outputs: {output_values.item()} \t Targets: {true_values.item()} \t Item loss: {item_loss}")
        optimizer.zero_grad()
        epoch_loss.backward()
        optimizer.step()
        if i % log_interval == 0:
            print(f"Epoch: { epoch + 1 } \t Epoch: {epoch} \t Epoch loss: {epoch_loss:.7f}")
st46879
I thought it might be this (Outputs from a simple DNN are always the same whatever the input is), but model.state_dict() suggests the weights and biases are all on a similar scale. @ptrblck could you lend a hand?
st46880
agt: If I train with just 1 example (say y = .25) and the initial output is .15, for some reason the output values continue to increase after multiple epochs even after it passes the target, e.g. [.15, .18, .21, .225, .24, .256, .275, .31]. This doesn’t seem right, as a single sample should perfectly overfit, while your increasing output points towards a potential bug in the code. However, your output shows some convergence in the end and the loss is 3.289204641987453e-07, which seems to be good. Could you post your model definition, so that we can have a look?
st46881
You're right, it eventually converges, but I don't understand how the gradients for a single sample can push it in the wrong direction for multiple steps. My basic understanding of SGD says that with a single sample, the gradients should be guaranteed to be in the right direction. So how is this happening? Here is my model:

class Net(torch.nn.Module):
    def __init__(self, num_node_features):
        super().__init__()
        self.num_node_features = num_node_features
        self.alpha = torch.nn.Parameter(torch.randn(1))
        self.linear_1 = torch.nn.Linear(num_node_features, num_node_features * 2)
        self.linear_2 = torch.nn.Linear(self.linear_1.out_features, self.linear_1.out_features)
        self.linear_3 = torch.nn.Linear(self.linear_2.out_features, 1)
        self.linear_4 = torch.nn.Linear(self.linear_2.out_features * 2, self.linear_2.out_features)
        self.linear_5 = torch.nn.Linear(self.linear_2.out_features, self.linear_1.in_features)
        self.linear_6 = torch.nn.Linear(self.linear_1.in_features, 1)

    def forward(self, protein_ligand_pair):
        graph_embeddings = []
        for molecule in protein_ligand_pair:  # These are actually batches of matrices.
            node_matrix, adjacency_matrix = molecule
            propagations = 5
            for step in range(propagations):
                smoothed_node_matrix = torch.matmul(adjacency_matrix, node_matrix)
                node_matrix = self.alpha * node_matrix + (1 - self.alpha) * smoothed_node_matrix
                batch_size, num_nodes, num_input_features = node_matrix.shape
                new_node_matrix = torch.empty(batch_size, num_nodes, self.linear_2.in_features, device=self.alpha.device)
                for node_i in range(num_nodes):
                    linear_layer = self.linear_1 if step == 0 else self.linear_2
                    node_features = node_matrix[:, node_i]
                    new_node_matrix[:, node_i] = torch.nn.ReLU()(linear_layer(node_features))
                node_matrix = new_node_matrix
            num_nodes, num_features = node_matrix.shape[1:]
            aggregation_weights = torch.empty(batch_size, num_nodes, device=self.alpha.device)
            for node_i in range(num_nodes):
                node_features = node_matrix[:, node_i]
                aggregation_weights[:, node_i] = torch.nn.ReLU()(self.linear_3(node_features))
            aggregate = torch.matmul(aggregation_weights, node_matrix)
            graph_embeddings.append(aggregate)
        protein_ligand_concatenated = torch.cat(graph_embeddings, axis=2)
        last_hidden_output = torch.nn.Sequential(self.linear_4, torch.nn.ReLU(), self.linear_5, torch.nn.ReLU())(protein_ligand_concatenated)
        ligand_protein_affinity = torch.nn.Sigmoid()(self.linear_6(last_hidden_output)).flatten()
        return ligand_protein_affinity

Some context for the "for step in range(propagations)" loop: How does applying the same convolutional layer to its own output affect learning?
st46882
In my opinion, the main reason for such behavior is that you are doing one optimization step per epoch. This way you are optimizing for the best "epoch-level" loss. So the loss will converge, but it is not guaranteed to have optimal weights on a per-sample basis. Doing one optimization step per epoch is like doing "batch gradient descent" with all your data as one batch. The opposite would be doing an optimization step every sample, which in your case would be "online" or "stochastic" gradient descent. The third way is to try "mini-batch" gradient descent, which in your case means accumulating gradients for several samples (e.g. 10-20, you decide) and doing an optimization step after that. As you can see, this is in a way a means of regularizing your model: when optimizing once per epoch the model will have a hard time learning the concept of a single sample, but when optimizing after every sample it will be hard for the model to "see the forest behind the trees". If none of that seems to make sense to you, I would suggest familiarizing yourself with the concepts of batch, stochastic, and mini-batch gradient descent. You can find great explanations on the topic in the popular Andrew Ng Machine Learning or Deep Learning courses.
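For the third option, a minimal gradient-accumulation sketch (model, loss_func, optimizer, and training_dataloader are the ones from the training code above; the batch keys are placeholders):

accumulation_steps = 10  # hypothetical window size
optimizer.zero_grad()
for i, batch in enumerate(training_dataloader):
    output = model(batch["input"])
    loss = loss_func(output, batch["target"]) / accumulation_steps
    loss.backward()  # gradients accumulate in each param.grad
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()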
st46883
That doesn’t explain this situation where the entire dataset has exactly 1 sample. I’m more confident that something is wrong and it might be a bug in light of @ptrblck’s reply.
st46884
It could be a bug, sure. But if I understand correctly, the first log is where you train with only one sample, and it converges perfectly. Did you try a mini-batch strategy, training for more epochs, or maybe increasing the learning rate for the batch strategy?
st46885
In the logs, the output starts below the target value, then the optimizer pushes the parameters to make the output increase and eventually it passes the target. At this point, the optimizer should change the parameters so that the output will be in the opposite direction (decreasing it). Gradient descent should guarantee this decrease if the dataset has 1 sample only. But instead, the output just keeps increasing and increasing for multiple steps before eventually coming back down and converging. I don’t understand how this could happen – optimizer makes it continue increasing for multiple steps after passing target and loss goes up and up – with only 1 sample.
st46886
Solved the case with only 1 sample! My expectations were based on SGD, but I was using Adam, which oscillates before converging. But why with 2 samples in the dataset, does the network end up outputting the same strange value for both of them, instead of converging to each of their targets? I still don’t understand this.
st46887
I cleaned up and modularized my model and solved the case with multiple samples all having the same output. It had to do with me passing the original inputs into a linear layer instead of the convolved inputs. Since all the inputs contain similar sets of elements, except the elements are related in different ways, passing the inputs to a linear layer and aggregating each of their output elements just results in the same/similar output for all of them. When I apply a convolution to each input so that each element of the input is updated with information from the other elements it’s connected to, then those differences become apparent in the convolved inputs, the linear layers outputs diverge between the convolved inputs, and aggregating those outputs can lead to very different final outputs between the inputs depending on how the elements in the inputs were connected. At least, that’s what I think is going on and it’s alright now.
st46888
I'm looking for the most efficient way to perform this operation:

weight_range = []
for i in range(id_min.shape[0]):
    weight_range.append(torch.arange(id_min[i], id_max[i], device=device))
weight_range = torch.stack(weight_range).flatten()

where id_min and id_max are 1D torch tensors representing the minimum and maximum of the range (the size of the range id_min:id_max is always the same). This operation is performed in the forward step and is very time consuming. It's basically grabbing a small subset of weights for each batch, and it will be a different subset depending on the input.
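One possible vectorized sketch, relying on the stated fact that the range size (id_max - id_min) is the same for every element, so a single broadcasted add replaces the loop:

size = int(id_max[0] - id_min[0])                 # constant range length
offsets = torch.arange(size, device=device)       # shape [size]
weight_range = (id_min.unsqueeze(1) + offsets).flatten()  # shape [N * size]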
st46889
I have a complicated graph-neural-network-like pytorch model which is very hard for me to vectorize. I am looking forward to using the new vmap functionality and might even start trying it out from the master branch on github, but in the meantime, I'm trying to understand more about vectorization in general. I made the following short script to test how important batching is for speed in PyTorch. When I use the batched version, it's about 77 times faster. (I used input and output dimensions of 256 and number of vectors = 10000.)

import torch
import argparse
from time import time

parser = argparse.ArgumentParser(
    description="A short program to test how much batching helps in PyTorch"
)
parser.add_argument("i", type=int, help="input vector dimension")
parser.add_argument("o", type=int, help="output vector dimension")
parser.add_argument("n", type=int, help="number of vectors to multiply matrix by")
parser.add_argument("--batched", action="store_true")
args = parser.parse_args()

layer = torch.nn.Linear(args.i, args.o)
xs = torch.randn(args.n, args.i, requires_grad=False)

print("start")
t1 = time()
if args.batched:
    ys = layer(xs)
else:
    for x in xs:
        y = layer(x)
t2 = time()
print("end")
print(f"time elapsed: {t2-t1}")

Basically, my question comes down to the following: what are all of the factors that contribute to making batched code take 1/77th the time? (even on CPU)
st46890
For loops add the Python overhead in each iteration and call into the dispatcher for a small workload. A lot of operations use vectorization internally, which won't be fully utilized. Note that this is not PyTorch-specific and can also be seen in e.g. numpy:

import numpy as np

def fun1(x):
    out = []
    for x_ in x:
        out.append(np.sqrt(x_))
    out = np.stack(out)
    return out

def fun2(x):
    out = np.sqrt(x)
    return out

x = np.random.rand(1000, 100)
%timeit out1 = fun1(x)
%timeit out2 = fun2(x)
st46891
For loops add the Python overhead in each iteration and call into the dispatcher for a small workload. The for loops add very little overhead as far as I could tell from profiling using line_profiler. Is the pytorch dispatch really enough to explain it being 77 times slower? A lot of operations are using vectorization inside, which won’t be fully used. What does vectorization really mean at a low level? Do you mean that pytorch uses avx2 style SIMD parallelism on CPU?
st46892
Hi! Just curious as to how DataLoader creates batches in sequential mode. If I have a list of files in the order ["file1.txt", "file2.txt", "file3.txt", "file4.txt", "file5.txt", "file6.txt"], would the output order be something like this (with a batch size of 2): ["file1.txt", "file2.txt"], ["file3.txt", "file4.txt"], ["file5.txt", "file6.txt"]?
st46894
If you are storing the filenames in e.g. a list in the initially posted order, then you are correct about the batch outputs. If you are not using shuffle=True, Dataset.__getitem__ will get indices in sequential order (i.e. 0, 1, 2, 3, ...), and depending on how you are storing and loading the data, these indices will just be used in the __getitem__ method.
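For completeness, a small runnable sketch of this behavior (the Dataset and file list are illustrative):

from torch.utils.data import Dataset, DataLoader

class FileDataset(Dataset):
    def __init__(self, files):
        self.files = files

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # with shuffle=False, idx arrives as 0, 1, 2, ...
        return self.files[idx]

files = ["file1.txt", "file2.txt", "file3.txt", "file4.txt", "file5.txt", "file6.txt"]
loader = DataLoader(FileDataset(files), batch_size=2, shuffle=False)
for batch in loader:
    print(batch)
# ['file1.txt', 'file2.txt']
# ['file3.txt', 'file4.txt']
# ['file5.txt', 'file6.txt']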
st46895
On Windows, is there a way to step into C++ PyTorch extension code for debugging? Thanks.
st46896
Hi everyone. I have tried hard to solve my issue, but after 4 hours I still cannot get a result. I am trying to train an RNN model using GRU to classify three different languages in wav files. I still can't find the reason for the error.

loss_func = nn.NLLLoss()
learning_rate = 0.01
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
num_epochs = 25

model.train()
for epoch in range(num_epochs):
    train_loss = 0
    for x, y in train_dataloader:
        x = x.to(device)
        y = y.to(device)
        yhat = model(x)
        print(yhat.shape)
        loss = loss_func(yhat, y)
        model.zero_grad()
        loss.backward()
        optimizer.step()
        train_loss += loss
    if not (epoch % 1):
        print(f'Epoch: {epoch+1:02d}, ' +
              f'Loss: {train_loss / len(train_dataloader.dataset):.4f}')
print('Finished Training')

The shape of the output yhat is (60, 1000, 3), using the softmax activation. The shape of the target y is (60, 1000, 1). The error is:

ValueError: Expected target size (60, 3), got torch.Size([60, 1000, 1])

Is there something wrong with my labels?
st46897
I'm not familiar with your use case and thus don't know what the dimensions mean in your output. However, nn.NLLLoss (and nn.CrossEntropyLoss) expect a model output in the shape [batch_size, nb_classes, *] and a target in [batch_size, *] containing class indices in the range [0, nb_classes-1]. Note that the * stands for additional dimensions. In your case it seems that you are using [batch_size=60, nb_classes=1000, seq_len=3] in your output. If that's the case, the target should have the shape [60, 3] and contain values in [0, 999].

Bill_Ye: "and using the softmax activation"

nn.NLLLoss expects log probabilities, so use F.log_softmax instead.
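As a concrete shape sketch, using the alternative reading where the last dimension holds the three language classes (which seems to match the described use case); the class axis has to be moved to dim1 and the trailing target dim squeezed away:

import torch
import torch.nn.functional as F

batch_size, seq_len, nb_classes = 60, 1000, 3
yhat = torch.randn(batch_size, seq_len, nb_classes)          # model output
y = torch.randint(0, nb_classes, (batch_size, seq_len, 1))   # target with trailing dim

log_probs = F.log_softmax(yhat, dim=2)   # nn.NLLLoss wants log probabilities
log_probs = log_probs.permute(0, 2, 1)   # -> [batch_size, nb_classes, seq_len]
target = y.squeeze(-1)                   # -> [batch_size, seq_len]
loss = F.nll_loss(log_probs, target)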
st46898
ptrblck: "However, nn.NLLLoss (and nn.CrossEntropyLoss) expect a model output in the shape [batch_size, nb_classes, *] and a target in [batch_size, *] containing class indices in the range [0, nb_classes-1]."

Thank you for the response, it helps a lot.
st46899
This is in the context of model-based reinforcement learning. Say I have some reward at time T, and I want to do truncated backprop through the network rollout. What is the best way to do this? Are there any good examples out there? I haven't managed to find much. Any help would be appreciated!
st46900
# non-truncated
for t in range(T):
    out = model(out)
out.backward()

# truncated to the last K timesteps
for t in range(T):
    out = model(out)
    if T - t == K:
        out.detach()
out.backward()
st46901
Shouldn't the truncated example be

# truncated to the last K timesteps
for t in range(T):
    out = model(out)
    if T - t == K:
        out.backward()
        out.detach()
out.backward()
st46902
I guess that an alternative is to do a "head" truncation besides a "tail" truncation.

This is tail truncation (quoting smth):

for t in range(T):
    out = model(out)
    if T - t == K:
        out.detach()
out.backward()

This is head truncation:

modelparameter.requires_grad = False
for t in range(T):
    out = model(out)
    if T - t == K:
        modelparameter.requires_grad = True
out.backward()
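One detail worth noting for the snippets above: Tensor.detach() is not an in-place operation. It returns a new tensor (detach_() is the in-place variant), so the truncation only takes effect if the result is reassigned:

# tail truncation with the detach actually applied
for t in range(T):
    out = model(out)
    if T - t == K:
        out = out.detach()  # cut the graph K steps before the end
out.backward()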
st46903
I have a similar issue in this post. I followed the pseudocode for the non-truncated BPTT in this conversation; the network trains, but I have the feeling that the gradient is not flowing through time. I posted my training code for the network. Can someone give some tips?
st46904
Check out hooks. If you want to inspect a gradient, you can register a backward hook and drop the values into a print statement or tensorboard.

"Why can't I see .grad of an intermediate variable?" Hi Kalamaya, by default, gradients are only retained for leaf variables. Non-leaf variables' gradients are not retained to be inspected later. This was done by design, to save memory. However, you can inspect and extract the gradients of the intermediate variables via hooks. You can register a function on a Variable that will be called when the backward of the variable is being processed. More documentation on hooks is here: http://pytorch.org/docs/autograd.html#torch.autograd.Variable.regis…

E.g., in the code below I drop a hook to monitor the values passing through a softmax function. (Later I compute the entropy and pump it into tensorboard.)

def monitorAttention(self, input, output):
    if writer.global_step % 10 == 0:
        monitors.monitorSoftmax(self, input, output, ' input ', writer, dim=1)

self.softmax.register_forward_hook(monitorAttention)
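If the goal is specifically an intermediate gradient, a minimal sketch with Tensor.register_hook (all names illustrative):

import torch

x = torch.randn(3, requires_grad=True)
h = x * 2                          # intermediate (non-leaf) tensor
h.register_hook(lambda grad: print("grad flowing through h:", grad))
loss = h.sum()
loss.backward()                    # the hook fires during backward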
st46905
@riccardosamperna did you find out the correct way to solve this problem? I followed the 2 posts you mentioned but I found no solution
st46906
Hi everyone, I am using PyTorch 1.7 and cuda 10.2. I found a strange thing; please see the following code and corresponding outputs:

test = torch.randn(3, 3)
print(test.device)
test.to('cuda:0')
print(test.device)
test_2 = torch.randn(3, 3, device='cuda:0')
print(test_2.device)

which has outputs:

cpu
cpu
cuda:0

It seems strange to me, since I can set its location on the gpu when initializing the tensor, but cannot move it onto the gpu if it was initialized on the cpu. Can anyone help me with this? Thanks in advance!
st46907
Thanks Kirondem! I just realized I forgot to reassign the variable. (Actually a little silly of me.)
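For reference, a minimal version of the fix: Tensor.to is out-of-place and returns a new tensor, so the result has to be reassigned.

test = torch.randn(3, 3)
test = test.to('cuda:0')  # reassign; .to() does not move the tensor in place
print(test.device)        # cuda:0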
st46908
I want to profile my entire training and eval pytorch code. I am using custom dataloaders (e.g. the torchmeta library) and novel pytorch libraries (e.g. the higher library), and I see a very significant performance slowdown from what other libraries reported (despite me using better GPUs, e.g. a V100 vs a Titan Xp). They take 2.5 hours while mine takes 16h or more. Instead of sharing the code, I want to profile the two ENTIRE scripts and pinpoint what is slowing things down when I compare the profilers' output. Unfortunately, I see a lot of profilers and it is hard to choose, and what is worse is that most examples seem focused on profiling a specific model and do not include the dataloader. For me the dataloader and the entire code are crucial. These are the resources I've found:

PyTorch Profiler — PyTorch Tutorials 1.7.0 documentation (pytorch.org)

python -m cProfile -s cumtime meta_learning_experiments_submission.py > profile.txt

adityaiitb/pyprof2 (GitHub): PyProf2: PyTorch Profiling tool.

Performance and Bottleneck Profiler — PyTorch Lightning 1.0.4 documentation (pytorch-lightning.readthedocs.io)

"Understanding PyTorch Profiler" (forum thread on torch.autograd.profiler.profile() and torch.autograd.profiler.record_function(), and how to read the prof.key_averages().table() columns such as Self CPU total %, CPU total, CUDA total, and Number of Calls)

Which one is recommended for profiling the entire code so that it works even in the presence of GPUs? Is

python -m cProfile -s cumtime meta_learning_experiments_submission.py > profile.txt

the best way to do this? (Btw, profiling seems better than changing my code randomly until it speeds up.)

Cross-posted: How to profile ENTIRE pytorch code when GPUs are present? (Stack Overflow)
st46909
Hello, I am trying to use SSIM as the loss function for a 3D CycleGAN network, but I am getting negative SSIM loss values. Ideally SSIM is the higher the better, as it is a quality measure, and hence as a loss function we would need to minimize it, i.e. 1 - SSIM. Please correct me where I am going wrong.

(epoch: 46, iters: 570, time: 3.734, data: 0.044) D_A: 0.058 G_A: 0.592 cycle_A_SSIM: -8.898 idt_A: 0.060 D_B: 0.067 G_B: 0.353 cycle_B_SSIM: -8.668 idt_B: 0.013

Below is the code for SSIM:

import torch
import torch.nn.functional as F
from math import exp


def image_dim(img):
    return img.ndimension() - 2


def gaussian(window_size, sigma):
    gauss = torch.Tensor([exp(-(x - window_size//2)**2/float(2*sigma**2)) for x in range(window_size)])
    return gauss/gauss.sum()


def create_window(window_size, n_dim, channel=1):
    _1D_window = gaussian(window_size, 1.5).unsqueeze(1)
    _2D_window = _1D_window.mm(_1D_window.t()).float()
    _3D_window = torch.stack([_2D_window * x for x in _1D_window], dim=2).float().unsqueeze(0).unsqueeze(0)
    _2D_window = _2D_window.unsqueeze(0).unsqueeze(0)
    if n_dim == 3:
        return _3D_window.expand(channel, 1, window_size, window_size, window_size).contiguous()
    else:
        return _2D_window.expand(channel, 1, window_size, window_size).contiguous()


def ssim(img1, img2, window_size=11, window=None, size_average=True, full=False, val_range=None):
    # Value range can be different from 255. Other common ranges are 1 (sigmoid) and 2 (tanh).
    if val_range is None:
        if torch.max(img1) > 128:
            max_val = 255
        else:
            max_val = 1
        if torch.min(img1) < -0.5:
            min_val = -1
        else:
            min_val = 0
        L = max_val - min_val
    else:
        L = val_range

    padd = 0
    n_dim = image_dim(img1)
    if n_dim == 2:
        (_, channel, height, width) = img1.size()
        convFunction = F.conv2d
    if n_dim == 3:
        convFunction = F.conv3d
        (_, channel, height, width, depth) = img1.size()
    if window is None:
        real_size = min(window_size, height, width)
        window = create_window(real_size, n_dim, channel=channel).to(img1.device)

    mu1 = convFunction(img1, window, padding=padd, groups=channel)
    mu2 = convFunction(img2, window, padding=padd, groups=channel)

    mu1_sq = mu1.pow(2)
    mu2_sq = mu2.pow(2)
    mu1_mu2 = mu1 * mu2

    sigma1_sq = convFunction(img1 * img1, window, padding=padd, groups=channel) - mu1_sq
    sigma2_sq = convFunction(img2 * img2, window, padding=padd, groups=channel) - mu2_sq
    sigma12 = convFunction(img1 * img2, window, padding=padd, groups=channel) - mu1_mu2

    C1 = (0.01 * L) ** 2
    C2 = (0.03 * L) ** 2

    v1 = 2.0 * sigma12 + C2
    v2 = sigma1_sq + sigma2_sq + C2
    cs = torch.mean(v1 / v2)  # contrast sensitivity

    ssim_map = ((2 * mu1_mu2 + C1) * v1) / ((mu1_sq + mu2_sq + C1) * v2)

    if size_average:
        ret = ssim_map.mean()
    else:
        ret = ssim_map.mean(1).mean(1).mean(1)

    if full:
        return ret, cs
    return ret


def msssim(img1, img2, window_size=11, size_average=True, val_range=None, normalize=False):
    device = img1.device
    weights = torch.FloatTensor([0.0448, 0.2856, 0.3001, 0.2363, 0.1333]).to(device)
    levels = weights.size()[0]
    mssim = []
    mcs = []
    n_dim = image_dim(img1)
    pool_size = [2] * n_dim
    if n_dim == 2:
        pool_function = F.avg_pool2d
    if n_dim == 3:
        pool_function = F.avg_pool3d
    for _ in range(levels):
        sim, cs = ssim(img1, img2, window_size=window_size, size_average=size_average, full=True, val_range=val_range)
        mssim.append(sim)
        mcs.append(cs)

        img1 = pool_function(img1, pool_size)
        img2 = pool_function(img2, pool_size)

    mssim = torch.stack(mssim)
    mcs = torch.stack(mcs)

    # Normalize (to avoid NaNs during training unstable models, not compliant with original definition)
    if normalize:
        mssim = (mssim + 1) / 2
        mcs = (mcs + 1) / 2

    pow1 = mcs ** weights
    pow2 = mssim ** weights
    # From Matlab implementation https://ece.uwaterloo.ca/~z70wang/research/iwssim/
    output = torch.prod(pow1[:-1] * pow2[-1])
    return output


# Classes to re-use window
class SSIM(torch.nn.Module):
    def __init__(self, window_size=11, size_average=True, val_range=None):
        super(SSIM, self).__init__()
        self.window_size = window_size
        self.size_average = size_average
        self.val_range = val_range

        # Assume 1 channel for SSIM
        self.channel = 1
        self.window = None

    def forward(self, img1, img2):
        (_, channel, _, _) = img1.size()

        # Initialize window on first call
        if self.window is None:
            create_window(self.window_size, image_dim(img1)).to(img1.device).type(img1.dtype)

        if channel == self.channel and self.window.dtype == img1.dtype:
            window = self.window
        else:
            window = create_window(self.window_size, image_dim(img1), channel).to(img1.device).type(img1.dtype)
            self.window = window
            self.channel = channel

        return ssim(img1, img2, window=window, window_size=self.window_size, size_average=self.size_average)


class MSSSIM(torch.nn.Module):
    def __init__(self, window_size=11, size_average=True, channel=3):
        super(MSSSIM, self).__init__()
        self.window_size = window_size
        self.size_average = size_average
        self.channel = channel

    def forward(self, img1, img2):
        # TODO: store window between calls if possible
        return msssim(img1, img2, window_size=self.window_size, size_average=self.size_average)

The SSIM loss is calculated as:

self.criterionCycle_SSIM = pytorch_ssim.ssim

def backward_G(self):
    """Calculate the loss for generators G_A and G_B"""
    lambda_idt = self.opt.lambda_identity
    lambda_A = self.opt.lambda_A
    lambda_B = self.opt.lambda_B
    # Identity loss
    if lambda_idt > 0:
        # G_A should be identity if real_B is fed: ||G_A(B) - B||
        self.idt_A = self.netG_A(self.real_B)
        self.loss_idt_A = self.criterionIdt(self.idt_A, self.real_B) * lambda_B * lambda_idt
        # G_B should be identity if real_A is fed: ||G_B(A) - A||
        self.idt_B = self.netG_B(self.real_A)
        self.loss_idt_B = self.criterionIdt(self.idt_B, self.real_A) * lambda_A * lambda_idt
    else:
        self.loss_idt_A = 0
        self.loss_idt_B = 0

    # GAN loss D_A(G_A(A))
    self.loss_G_A = self.criterionGAN(self.netD_A(self.fake_B), True)
    # GAN loss D_B(G_B(B))
    self.loss_G_B = self.criterionGAN(self.netD_B(self.fake_A), True)
    # Forward cycle loss || G_B(G_A(A)) - A||
    # note: due to operator precedence this computes 1 - (SSIM * 100), not (1 - SSIM) * 100
    self.loss_cycle_A = 1 - self.criterionCycle_SSIM(self.rec_A, self.real_A) * 100
    # Backward cycle loss || G_A(G_B(B)) - B||
    self.loss_cycle_B = 1 - self.criterionCycle_SSIM(self.rec_B, self.real_B) * 100
    # combined loss and calculate gradients
    self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_cycle_A + self.loss_cycle_B + self.loss_idt_A + self.loss_idt_B
    self.loss_G.backward()
st46910
Here is the keras architecture:

model = Sequential()
model.add(Dense(units=255, activation='relu', input_shape=[21]))
model.add(Dense(units=128, activation='relu'))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=32, activation='relu'))
model.add(Dense(units=10, activation='softmax'))

Here is the pytorch architecture:

class nonseqV2(torch.nn.Module):
    def __init__(self):
        super(nonseqV2, self).__init__()
        self.fcLayer1 = torch.nn.Linear(in_features=21, out_features=256)
        self.fcLayer2 = torch.nn.Linear(in_features=256, out_features=128)
        self.fcLayer3 = torch.nn.Linear(in_features=128, out_features=64)
        self.fcLayer4 = torch.nn.Linear(in_features=64, out_features=10)
        self.relu = torch.nn.ReLU()
        self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, x):
        x = self.fcLayer1(x)
        x = self.relu(x)
        x = self.fcLayer2(x)
        x = self.relu(x)
        x = self.fcLayer3(x)
        x = self.relu(x)
        x = self.fcLayer4(x)
        x = self.softmax(x)
        return x

Keras compiling and training:

model.compile(optimizer=Adam(0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x=train_modeling, y=train_target_binary, batch_size=200, epochs=50, verbose=0, validation_data=(valid_modeling, valid_target_binary))

Pytorch training:

for ep in range(50):
    for xtb, ytb in train_loader:
        xtb, ytb = xtb.cuda(), ytb.cuda()
        tpred = model(xtb)
        loss = crossentropy(tpred, ytb.long())
        opt.zero_grad()
        loss.backward()
        opt.step()

Keras loss:

log_loss(valid_target_binary, y_pred)

Pytorch loss:

log_loss(val_labels, yprd.detach().cpu().numpy())

As pytorch accepts 1D labels, the variable names are different, but essentially they hold the same data. This is how the data fed to keras is converted to pytorch tensors:

val_features = torch.Tensor(valid_modeling)
val_labels = torch.Tensor(np.argmax(valid_target_binary, axis=1))

But the loss obtained from Keras is 1.4 and from PyTorch it is 10.43. I know that because of initialization the results should differ a bit, but they should not be this different.
st46911
Your Keras model seems to use 5 layers, while your PyTorch model only defines 4, so you might want to add the missing layer. Also, nn.CrossEntropyLoss expects the raw logits as the model output, so you should remove the softmax in your PyTorch model at the end of the forward.
st46912
I have changed the keras model and the pytorch model, and also incorporated the changes you suggested.

# create a keras model
model = Sequential()
model.add(Dense(units=256, activation='relu', input_shape=[21]))
model.add(Dense(units=128, activation='relu'))
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=32, activation='relu'))
model.add(Dense(units=10, activation='softmax'))

Here is the keras summary:

[Keras model summary screenshot]

Here is the pytorch model:

# we have created the same architecture in pytorch as we did in keras
class nonseqV2(torch.nn.Module):
    def __init__(self):
        super(nonseqV2, self).__init__()
        self.fcLayer1 = torch.nn.Linear(in_features=21, out_features=256)
        self.fcLayer2 = torch.nn.Linear(in_features=256, out_features=128)
        self.fcLayer3 = torch.nn.Linear(in_features=128, out_features=64)
        self.fcLayer4 = torch.nn.Linear(in_features=64, out_features=32)
        self.fcLayer5 = torch.nn.Linear(in_features=32, out_features=10)
        self.relu = torch.nn.ReLU()
        self.softmax = torch.nn.Softmax(dim=1)

    def forward(self, x):
        x = self.fcLayer1(x)
        x = self.relu(x)
        x = self.fcLayer2(x)
        x = self.relu(x)
        x = self.fcLayer3(x)
        x = self.relu(x)
        x = self.fcLayer4(x)
        x = self.relu(x)
        x = self.fcLayer5(x)
        # x = self.softmax(x)
        return x

The pytorch summary is in the attached image:

[PyTorch model summary screenshot]

The loss obtained using Keras is 1.4 and in PyTorch it is 4.0. I am expecting the PyTorch output to be closer to the keras output.
st46913
I have made the changes, but the difference is still significant. Please check the notebook here.
st46914
Once the model architecture is fixed, I would recommend checking the weight initializations, as they might be different between both frameworks. Some PyTorch init methods were directly taken from Torch7 (Lua), which might not be the state of the art anymore. I don't know if Keras changes its default inits regularly.
st46915
Any updates? Thanks. Recently I also found that pytorch gets a higher loss than keras even in the simplest setting. I initialized the networks in the same way, but the problem is still there…
st46916
I am trying to create an LSTM encoder-decoder. The following code has LSTM layers. How can I add more to it?

class Encoder(nn.Module):
    def __init__(self, seq_len, n_features, embedding_dim=128):
        super(Encoder, self).__init__()
        self.seq_len, self.n_features = seq_len, n_features
        self.embedding_dim, self.hidden_dim = embedding_dim, 2 * embedding_dim
        self.rnn1 = nn.LSTM(
            input_size=n_features,
            hidden_size=self.hidden_dim,
            num_layers=1,
            batch_first=True)
        self.rnn2 = nn.LSTM(
            input_size=self.hidden_dim,
            hidden_size=embedding_dim,
            num_layers=1,
            batch_first=True)

    def forward(self, x):
        x = x.reshape((1, self.seq_len, self.n_features))
        x, (_, _) = self.rnn1(x)
        x, (hidden_n, _) = self.rnn2(x)
        return hidden_n.reshape((self.n_features, self.embedding_dim))


class Decoder(nn.Module):
    def __init__(self, seq_len, input_dim=64, n_features=1):
        super(Decoder, self).__init__()
        self.seq_len, self.input_dim = seq_len, input_dim
        self.hidden_dim, self.n_features = 2 * input_dim, n_features
        self.rnn1 = nn.LSTM(
            input_size=input_dim,
            hidden_size=input_dim,
            num_layers=1,
            batch_first=True)
        self.rnn2 = nn.LSTM(
            input_size=input_dim,
            hidden_size=self.hidden_dim,
            num_layers=1,
            batch_first=True)
        self.output_layer = nn.Linear(self.hidden_dim, n_features)

    def forward(self, x):
        x = x.repeat(self.seq_len, self.n_features)
        x = x.reshape((self.n_features, self.seq_len, self.input_dim))
        x, (hidden_n, cell_n) = self.rnn1(x)
        x, (hidden_n, cell_n) = self.rnn2(x)
        x = x.reshape((self.seq_len, self.hidden_dim))
        return self.output_layer(x)
st46917
You have 3 ways of approaching this.

1. nn.LSTM(input_size, hidden_size, num_layers=2): num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM.

2. Wrapping the layers in an nn.Sequential:

from collections import OrderedDict
self.rnn = nn.Sequential(OrderedDict([
    ('LSTM1', nn.LSTM(n_features, self.hidden_dim, 1)),
    ('LSTM2', nn.LSTM(self.hidden_dim, embedding_dim, 1)),
]))

3. Using an nn.ModuleList:

self.rnns = nn.ModuleList()
for i in range(nlayers):
    input_size = input_size if i == 0 else hidden_size
    self.rnns.append(nn.LSTM(input_size, hidden_size, 1))

Limitation of the first 2 approaches: you can't get the hidden states of each individual layer.
st46918
In model training, should torch.zero_grad() be put after or before loss.backward()? And why are loss values sometimes fixed?
st46919
It should be called before fitting the model with some training input data, like this:

model.zero_grad()
pred_y = model(X)
loss = criterion(pred_y, y)
loss.backward()
optimizer.step()

As to why some loss values are fixed: it's a normal thing as long as the loss is not fixed throughout training. You should probably change your model architecture or check whether your dataset is scaled and normalized properly. Also, this is something I'd like to say from the experience I've gotten over the past year: if you are working with structured data, then neural networks might not be the way forward, so I suggest you use tree-based models such as gradient boosting or random forests. Neural networks are only better than other machine learning models when it comes to heavy lifting like image data modeling or NLP and language modeling.
st46920
Hello! I have a 2-channel image, but the 2 channels come in different files, so I have 2 tensors of size 64 x 64 each. How can I combine them into a single tensor of size 2 x 64 x 64? I found some ways with view, but I am not totally sure the resizing is done the way I want (it goes from 128 x 64 to 2 x 64 x 64).
st46921
You could call torch.stack on these tensors:

x1 = torch.randn(64, 64)
x2 = torch.randn(64, 64)
x = torch.stack((x1, x2))
st46922
Hi @ptrblck, I would be glad for your approach to stacking one-hot tensors to have the same dimension. For example, I have

x1 = torch.randint(0, 2, size=(1, 3, 3))
-> tensor([[[0, 1, 1],
            [0, 1, 0],
            [0, 1, 1]]])

and

x2 = tensor([[[1, 0, 0],
              [1, 0, 1],
              [1, 0, 0]]])

I want to have

x3 = tensor([[[1, 1, 1],
              [1, 1, 1],
              [1, 1, 1]]])

And how can I get x2 using a pytorch function?
st46923
I'm not sure if I understand the use case correctly, but you could use OR instead of any stacking operation:

x1 = torch.tensor([[[0, 1, 1],
                    [0, 1, 0],
                    [0, 1, 1]]])
x2 = torch.tensor([[[1, 0, 0],
                    [1, 0, 1],
                    [1, 0, 0]]])
x3 = x1 | x2
print(x3)
> tensor([[[1, 1, 1],
           [1, 1, 1],
           [1, 1, 1]]])
st46924
Thanks for the suggestion, and that's what I need for x3. For x2, torch.where(x1 == 0, 1, x1) worked.
st46925
I got this error: RuntimeError: "bitwise_or_cpu" not implemented for 'Float'. How can I fix this?
st46926
Which PyTorch version are you using? You might need to update it, if you are using an older version.
st46927
I cannot reproduce this issue on 1.7.0:

>>> import torch
>>> x1 = torch.tensor([[[0, 1, 1],
...                     [0, 1, 0],
...                     [0, 1, 1]]])
>>> x2 = torch.tensor([[[1, 0, 0],
...                     [1, 0, 1],
...                     [1, 0, 0]]])
>>> x3 = x1 | x2
>>> print(x3)
tensor([[[1, 1, 1],
         [1, 1, 1],
         [1, 1, 1]]])
>>> print(torch.__version__)
1.7.0
st46928
It is because the dtype is torch.int64. If you set the dtype to torch.float, the error is reproduced.
st46929
You are right. I didn't run into this error, as I was using your originally posted tensors. For these bool operations, I would transform the tensors via .long() or .byte() to get the right result, or use torch.logical_or(x1, x2) alternatively.
st46930
Thanks so much. The two approaches worked although torch.logical_or gives boolean.
st46931
I found the following error that I am getting on Github, but it doesn't seem to have been solved. This is the traceback:

Traceback (most recent call last):
  File "/path/to/run.py", line 380, in <module>
    loss = train()
  File "/path/to/run.py", line 64, in train
    out = model(data).view(-1)
  File "/n/scratch3/users/v/vym1/nn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/path/to/WLGNN.py", line 114, in forward
    xs += [torch.tanh(conv(xs[-1], edge_index, edge_weight))]
  File "/n/scratch3/users/v/vym1/nn/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/n/scratch3/users/v/vym1/nn/lib/python3.7/site-packages/torch_geometric/nn/conv/gcn_conv.py", line 169, in forward
    size=None)
  File "/n/scratch3/users/v/vym1/nn/lib/python3.7/site-packages/torch_geometric/nn/conv/message_passing.py", line 236, in propagate
    out = self.message(**msg_kwargs)
  File "/n/scratch3/users/v/vym1/nn/lib/python3.7/site-packages/torch_geometric/nn/conv/gcn_conv.py", line 177, in message
    return edge_weight.view(-1, 1) * x_j
RuntimeError: CUDA out of memory. Tried to allocate 3.37 GiB (GPU 0; 11.17 GiB total capacity; 5.35 GiB already allocated; 1.46 GiB free; 9.29 GiB reserved in total by PyTorch)

I have 100GB of memory allocated, and it isn't clear to me why PyTorch can't allocate it when it has only allocated a small fraction of the memory in total.
st46932
I’m working on a cluster and allocated 100 GB of memory. Is this not available for the GPU?
st46933
If the GPU is full, there’s no other way but to get a bigger one. However, if you expect this to work normally, check if you have the tensors in their correct dimension. Sometimes broadcasting can give you a super big matrix if you are not careful.
st46934
You can have 100GB allocated, but it is spread out across some number of GPUs. You have to make your batch size fit per GPU.
st46935
vymao: I’m working on a cluster and allocated 100 GB of memory. Is this not available for the GPU? Your current GPU has a capacity of 11.17GiB as given in the error message. Are you allocating 100GB on your system RAM or are you combining the memory of all GPUs? As explained by others you would have to either reduce the batch size (or model) or trade compute for memory via torch.utils.checkpoint.
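For the checkpointing option, a minimal sketch with torch.utils.checkpoint, assuming the model can be split into two hypothetical segments model_part1 and model_part2:

import torch
from torch.utils.checkpoint import checkpoint

x = torch.randn(8, 128, requires_grad=True)  # at least one input should require grad

# model_part1 / model_part2 stand for nn.Module segments of the full model
out = checkpoint(model_part1, x)    # intermediate activations inside are freed
out = checkpoint(model_part2, out)
loss = out.sum()
loss.backward()                     # the segments are re-run here to recompute activations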
st46936
I am trying to use the native amp support and realized that under my model the sum() operation produces float32 from float16 inputs, whereas all the other operations run in float16. I'd like to create an issue, but I am not sure if this is expected behavior.
st46937
How can we print out the GLOG info-level log when running Python code in PyTorch? For example, from pytorch/pytorch, torch/lib/c10d/ProcessGroupNCCL.cpp#L236:

// exception to users, after this, ncclCommWatchdog can detect nccl
// communicators are aborted and clean up devNCCLCommMap_ accordingly.
// if throwing timed out excepiton without aborting nccl communicators
// here, it was observed that CUDA GPU will have 100% utilization and
// can not run new events successfully.
for (const auto& ncclComm : ncclComms_) {
  ncclComm->ncclCommAbort();
  const auto& storeKey = getNcclAbortedCommStoreKey(
      buildNcclUniqueIdStr(ncclComm->getNcclId()));
  store_->set(storeKey, {});
  LOG(INFO) << "Wrote aborted communicator id to store: " << storeKey;
}
throw std::runtime_error("Operation timed out!");

Checking https://github.com/pytorch/pytorch/blob/master/c10/util/Logging.cpp it appears that we can set "caffe2_log_level" in the c10 log. Basically:

from caffe2.python import workspace
workspace.GlobalInit(['caffe2', '--caffe2_log_level=2'])

However, this only works for Caffe2 code but not c10 code in the PyTorch backend.
st46938
Bad implementation… the logging is in c10, but it relies on FLAGS_caffe2_log_level? Anyway, I found that it can print LOG(INFO) in the Python frontend when building pytorch with USE_GLOG=1 (default GOOGLE_STRIP_LOG = 0), for example:

export CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname $(which conda))/../"}
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ MAX_JOBS=32 BUILD_CAFFE2=0 BUILD_CAFFE2_OPS=0 USE_GLOG=1 USE_DISTRIBUTED=1 USE_MKLDNN=0 USE_CUDA=0 USE_FBGEMM=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 python setup.py develop
st46939
Suppose each of my batches is (data, target, modified_variable), where modified_variable has shape (batch_size, *) (each datapoint has an associated modified_variable). At a given epoch, I'd like to modify this extra variable and save it to be accessed at the following epoch, when the same batch is loaded. How can this be done? As an example: modified_variable could be a perturbed version of data. Thanks!
st46940
Assuming modified_variable is stored inside the Dataset, you could directly manipulate it after the epoch (if persistent_workers=False in your DataLoader) via loader.dataset.modified_variable = ....
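A sketch of how that could look in the training loop (perturb is a placeholder for whatever per-epoch update you want to apply):

loader = DataLoader(dataset, batch_size=32, shuffle=True)  # persistent_workers=False (the default)
for epoch in range(num_epochs):
    for data, target, modified_variable in loader:
        ...  # training step using modified_variable
    # mutate the copy stored on the Dataset; the next epoch's batches will reflect it
    loader.dataset.modified_variable = perturb(loader.dataset.modified_variable)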
st46941
Following is my code:

from torchvision import datasets, models, transforms
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import torch

data_transforms = transforms.Compose([
    transforms.RandomResizedCrop(256),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

img = mpimg.imread('dataset/train/cats/cat.1.jpg')
print(type(img))
print(len(img))
# img = img.astype(float)
# img = img.float()
print(img)
img = data_transforms(img)

Error message:

Traceback (most recent call last):
  File "Image_Preprocessing.py", line 47, in <module>
    img = data_transforms(img)
  File "/home/debayon/anaconda3/envs/ENV2/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 61, in __call__
    img = t(img)
  File "/home/debayon/anaconda3/envs/ENV2/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 676, in __call__
    i, j, h, w = self.get_params(img, self.scale, self.ratio)
  File "/home/debayon/anaconda3/envs/ENV2/lib/python3.7/site-packages/torchvision/transforms/transforms.py", line 638, in get_params
    area = img.size[0] * img.size[1]
TypeError: 'int' object is not subscriptable

I am unable to understand why this is occurring. Please tell me how I can solve it. Thank you.
st46942
Your image seems to be a numpy array. torchvision transformations work on PIL.Images, so either load the image directly via Image.open or convert it to a PIL.Image before passing it to the transformations.
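A sketch of both options (the path matches the original snippet; np_img stands for an already-loaded uint8 HxWxC numpy array):

from PIL import Image

# load directly as a PIL.Image
img = Image.open('dataset/train/cats/cat.1.jpg')
img = data_transforms(img)  # RandomResizedCrop now works

# or convert an existing numpy array first:
# img = Image.fromarray(np_img)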
st46943
Oh, thank you so much. Lesson learnt: "torchvision transformations work on PIL.Images".
st46944
To be a bit more specific: "a lot of" torchvision's transformations use PIL, while e.g. Normalize will work on tensors.
st46945
I think the major advantage is to reuse a common library for loading and processing images. PIL is one of these libs, but I don't know which requirements were used to pick it. Note however that as of torchvision 0.8, transformations are now supported on tensors and torchvision ships with a native image loading utility. More details can be found in the release notes.
st46946
Thank you very much for the quick responses; the background info is very helpful. Still, not directly supporting NumPy as an input data format is a little bit strange for the transform functions. For example, in a project that deals a lot with both images and videos, we use OpenCV for preprocessing.
st46947
I am trying to create a CRNN that will feed the output of the CNN to an LSTM layer. However, I am running into an error that there is an inplace operation somewhere and I don't know where. Here is my error message and model:

"one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256, 64]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient."

class CNN_RNN(nn.Module):
    def __init__(self):
        super(CNN_RNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, 3)
        self.conv2 = nn.Conv2d(16, 32, 3)
        self.conv3 = nn.Conv2d(32, 64, 3)
        self.conv4 = nn.Conv2d(64, 128, 3)
        self.conv5 = nn.Conv2d(128, 64, 3)
        self.fc = nn.Linear(128, 16)  # (hidden_size, input_size)
        self.pool1 = nn.MaxPool2d(2, 2)
        self.pool2 = nn.MaxPool2d(4, 4)
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, bidirectional=True, batch_first=True)

    def forward(self, x, hidden):
        cnn = self.pool1(F.relu(self.conv1(x), inplace=False))
        cnn = self.pool1(F.relu(self.conv2(cnn), inplace=False))
        cnn = self.pool1(F.relu(self.conv3(cnn), inplace=False))
        cnn = self.pool2(F.relu(self.conv4(cnn), inplace=False))
        cnn = self.pool2(F.relu(self.conv5(cnn), inplace=False))
        # outputs (batch, filters, width, height)
        cnn = torch.squeeze(cnn)
        cnn = torch.unsqueeze(cnn, 1)
        lstm, hidden = self.lstm(cnn, hidden)
        lstm = torch.squeeze(lstm)
        out = self.fc(lstm)
        out = F.softmax(out).clone()
        return out, hidden

model = CNN_RNN()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(max_epochs):
    for i, data in enumerate(dataloaders["train"], 0):
        inputs, labels = data
        optimizer.zero_grad()
        outputs, hidden = model(inputs, hidden)
        loss = criterion(outputs, labels)
        loss.backward(retain_graph=True)
        optimizer.step()
st46948
I think the backward call might be raising this issue, since you are using retain_graph=True while also updating the parameters, which could be a similar issue to this one 9.
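To make that concrete, one common fix for this kind of CRNN training loop (a sketch, not necessarily the exact solution here, and assuming hidden is the LSTM's (h, c) tuple initialized before the loop as in the question) is to drop retain_graph=True and detach the recurrent hidden state between iterations, so each backward pass only covers the current graph:

for epoch in range(max_epochs):
    for inputs, labels in dataloaders["train"]:
        optimizer.zero_grad()
        # detach so gradients cannot flow into graphs from previous iterations
        hidden = tuple(h.detach() for h in hidden)
        outputs, hidden = model(inputs, hidden)
        loss = criterion(outputs, labels)
        loss.backward()  # retain_graph is no longer needed
        optimizer.step()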
st46949
For example, we create a net named ‘net’ and compute y = net(x). What is the meaning of y.max(1)? (Especially a, b = y.max(1): what does it mean?) I also saw that the range of the dimension argument for max is (-2, 1). What does that mean?
st46950
https://www.tutorialspoint.com/python/list_max.htm
I think you can relate with this.
st46951
Let’s check the docs: http://pytorch.org/docs/0.3.1/tensors.html#torch.Tensor.max

y.max(1) takes the max over dimension 1 and returns two values. An example would help. Suppose we have a tensor y of shape (3, 4) containing

0.6857  0.1098  0.4687  0.7822
0.4170  0.2476  0.1339  0.5563
0.9425  0.8433  0.1335  0.3169

y.max(1) returns two tensors…

# 1. the max value in each row of y
0.7822
0.5563
0.9425

# 2. the column index at which the max value is found
3
3
0
st46952
forcefulowl:

y.max(1)

Why does y.max(1) give the max element in each row? (Dimensions in NumPy are 0-indexed, so 0 for rows and 1 for columns.) Why is it different in PyTorch?
st46953
max(1) will return the maximal value (and, in PyTorch, also the index) in this particular dimension. Note that the dimension you pass is the one that gets reduced, not the one that is kept: with dim=1 you reduce across the columns, which yields one maximum per row. Both numpy and PyTorch return the same values (PyTorch additionally with the indices):

x = np.random.randn(10, 10)
print(x.max(1))
print(torch.from_numpy(x).max(1)[0])
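Since the original question also asked about a, b = y.max(1) and the (-2, 1) range, a short sketch may help: the call returns a (values, indices) pair that can be unpacked, and for a 2-D tensor the valid dim values run from -2 to 1, because negative dims count from the end:

import torch

y = torch.randn(3, 4)
values, indices = y.max(1)  # values: shape (3,), indices: shape (3,)
# a, b = y.max(1) is the same thing: a holds the row maxima, b their column positions
assert torch.equal(y.max(-1)[0], values)  # dim=-1 is the last dim; for 2-D it equals dim=1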
st46954
Here are two graphs of training my model with only 1 data sample. SGD is perfectly monotonic, while Adam has these weird oscillations. Why is this? After creating the model, I saved model.state_dict(), trained with SGD, then reloaded the original state dict, and finally trained with Adam.

[plot: SGD training curve, 1 sample]

[plot: Adam training curve, 1 sample]

For reference, here’s 100 samples (batch_size = 1):

[plot: SGD training curve, 100 samples]

[plot: Adam training curve, 100 samples]

Prompted by this: Training with batch_size = 1, all outputs are the same and trains poorly
st46955
Problem

This need may seem a little weird, but I need the PDF document because of network instability and frequent interruptions. At the same time, the only PDF version of the doc I could find is 0.11.5, which is outdated. There is a doc folder in the source code directory on GitHub, and there is a Makefile available. Therefore, I downloaded the entire source repo and entered doc to generate the PDF doc. However, when I run make latexpdf in that folder, a series of errors occurs. After working around these error messages for hours, there is no luck. I also tried to first export the documentation to .html and then use pandoc to make the conversion. However, it did not work either. I am wondering if anyone has ever tried to do this and succeeded? Thank you in advance!
st46956
I finally solved this problem and I will share the exported documents with everyone in this forum. Here is the link (Google Drive).
st46957
Hi Guanqun, Thanks for the awesome PDF. I was wondering, would you also like to share the PDF for the latest PyTorch documentation? Many thanks!
st46958
Given an input symmetric matrix W (called A in the text below) with zero diagonal, I want to compute matrix C as below. Can I increase the efficiency by removing the loops? Also, would the following break backpropagation?

W = torch.tensor([[0,1,0,0,0,0,0,0,0],
                  [1,0,1,0,0,1,0,0,0],
                  [0,1,0,3,0,0,0,0,0],
                  [0,0,3,0,1,0,0,0,0],
                  [0,0,0,1,0,1,1,0,0],
                  [0,1,0,0,1,0,0,0,0],
                  [0,0,0,0,1,0,0,1,0],
                  [0,0,0,0,0,0,1,0,1],
                  [0,0,0,0,0,0,0,1,0]])

n = len(W)
C = torch.empty(n, n)
I = torch.eye(n)
for i in range(n):
    for j in range(n):
        B = W.clone()
        B[i, j] = 0
        B[j, i] = 0
        tmp = torch.inverse(n * I - B)
        C[i, j] = tmp[i, j]
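One possible loop-free sketch (an editor's suggestion, not from the thread): build all n² perturbed matrices as a single batch and call the batched torch.inverse once. torch.inverse itself is differentiable, so this keeps the graph intact as far as the inverse is concerned:

ii = torch.arange(n).repeat_interleave(n)       # i index of each (i, j) pair
jj = torch.arange(n).repeat(n)                  # j index of each (i, j) pair
batch = torch.arange(n * n)
B = W.float().unsqueeze(0).repeat(n * n, 1, 1)  # one copy of W per (i, j) pair
B[batch, ii, jj] = 0
B[batch, jj, ii] = 0
M = torch.inverse(n * torch.eye(n) - B)         # batched inverse: (n*n, n, n)
C = M[batch, ii, jj].view(n, n)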
st46959
[images: mask, segmentation result, and input image (Image_01L, Image_01L_2ndHO)]

These are the mask, the segmentation result, and the input image. Is this overfitting or something else? My dice coefficient was stable at 0.9685 for about 7 epochs. My model consists of two U-Nets, which I think is not complicated. The conv block looks like this:

class BasicResBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(BasicResBlock, self).__init__()
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels)
        )
        self.cov1x1 = nn.Sequential(  # adjust the channel count
            nn.Conv2d(in_channels, out_channels, kernel_size=1, padding=0),
            nn.BatchNorm2d(out_channels),
            nn.ReLU())

    def forward(self, x):
        res = self.cov1x1(x)
        x = self.double_conv(x)
        return F.relu(res + x)

My model outputs 2 results, coarse and refined, and I set the loss like this:

coarse_mask, refine = net(imgs)
loss1 = criterion1(coarse_mask, true_masks)
loss2 = criterion1(refine, true_masks)
loss = 0.8 * loss1 + loss2

I really need some advice!!! Thank you!!!
st46960
My neural net was working perfectly, and now I’m not sure why I am getting this error upon this line:

y_pred = model(train_categorical, train_numerical)

Any idea how this might be fixed, or what it indicates is wrong? Thanks!

Here is the offending stack trace:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-30-9aa7542c12d0> in <module>
     17 for i in range(epochs_per_training):
     18     i += 1
---> 19     y_pred = model(train_categorical, train_numerical)
     20     single_loss = loss_function(y_pred, train_outputs)
     21

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724             _global_forward_hooks.values(),

<ipython-input-25-fd94d1dedadd> in forward(self, x_categorical, x_numerical)
     38         x = self.embedding_dropout(x)
     39
---> 40         x_numerical = self.batch_norm_num(x_numerical)
     41         x = torch.cat([x, x_numerical], 1)
     42         x = self.layers(x)

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    720             result = self._slow_forward(*input, **kwargs)
    721         else:
--> 722             result = self.forward(*input, **kwargs)
    723         for hook in itertools.chain(
    724             _global_forward_hooks.values(),

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/modules/batchnorm.py in forward(self, input)
    134             self.running_mean if not self.training or self.track_running_stats else None,
    135             self.running_var if not self.training or self.track_running_stats else None,
--> 136             self.weight, self.bias, bn_training, exponential_average_factor, self.eps)
    137
    138

~/new_anaconda3/lib/python3.7/site-packages/torch/nn/functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   2014     return torch.batch_norm(
   2015         input, weight, bias, running_mean, running_var,
-> 2016         training, momentum, eps, torch.backends.cudnn.enabled
   2017     )
   2018

RuntimeError: "batch_norm" not implemented for 'Half'
st46961
Solved by ekofman in post #2: Ok, basically you can’t use float16 here; it has to just be float (i.e. float32). Good to know.
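A minimal sketch of the fix implied here, using the variable names from the question: convert the numerical inputs (and, to be safe, the model parameters) to float32 before the forward pass, since this batch_norm kernel has no Half (float16) implementation in this setting:

train_numerical = train_numerical.float()  # float16 ('Half') -> float32
model = model.float()                      # make sure the parameters match the input dtype
y_pred = model(train_categorical, train_numerical)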
st46962
Dear PyTorch users,

I’d like to present a package of mine that automates monitoring the training process: https://github.com/dizcza/pytorch-mighty

Initially developed as a tool to help other projects of mine, I thought it could be valuable to some of you. The idea is similar to Pytorch Horovod and many other packages that automate the training process; therefore, I want to focus on the features that make the package particularly interesting:

trainer.restore(checkpoint_path=None): seamless restoration of the training progress AND the monitor plots. Imagine that you saved the progress yesterday with 10+ infographic plots, wake up today, and want to continue adding metric points (loss, accuracy, etc.) on the same figures as if nothing happened.

Several Mutual Information estimators: that’s where I put the most effort. If you’re interested in information theory and how information flows between layers across epochs, you will find a separate repository (linked there) with Mutual Information estimator benchmarks.

More than just accuracy and loss curves. One of the reasons why I developed such an advanced monitoring system is that I was not satisfied with the basic plots of accuracy and loss vs. epoch, so I monitor several other metrics.

I work with vision only, so I don’t know how applicable this package is in other domains. While it’s not necessary to use pytorch-mighty as it is, you may find a number of interesting monitoring plots that you can copy-paste into your projects.

Best,
Danylo
st46963
I can use nn.Embedding trivially to get the positive embeddings using a tensor of indices, like so:

n = nn.Embedding(256, 128)
indices = torch.randint(0, 256, (10,))
positive_embeddings = n(indices)
print(positive_embeddings.shape)  # prints torch.Size([10, 128])

How do I now get all the other embeddings? I would like to do something like:

negative_embeddings = some_function(n, indices)
print(negative_embeddings.shape)  # prints torch.Size([10, 255, 128])

My motivation is to use nn.CosineEmbeddingLoss with the predicted embeddings of some CNN and both the positive and negative embeddings of some target indices.
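One possible implementation of some_function (an editor's sketch, not from the thread): build the complement index set per row with a boolean mask, then look all of those rows up in a single embedding call:

def some_function(embedding, indices):
    # embedding: e.g. nn.Embedding(256, 128); indices: shape (10,)
    num = embedding.num_embeddings
    all_ids = torch.arange(num, device=indices.device)   # (256,)
    mask = all_ids.unsqueeze(0) != indices.unsqueeze(1)  # (10, 256), True for negatives
    neg_ids = all_ids.expand(indices.size(0), -1)[mask]  # each row drops exactly one id
    neg_ids = neg_ids.view(indices.size(0), num - 1)     # (10, 255)
    return embedding(neg_ids)                            # (10, 255, 128)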
st46964
Hello everyone, I hope someone can help me! Because I’m a beginner I can’t have a server, so I’m using Google Colab. Unfortunately my large set of images (1.5 GB) is on Drive, so training/evaluation is slow. I know that I have to compress it and then decompress it on Colab, but that takes a lot of time. Does anyone know a solution, or can anyone suggest another platform which provides GPU support for free? Thank you
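One common pattern for this (a sketch with hypothetical paths): pay the Drive transfer cost once per session by copying the archive to the Colab VM's local disk and extracting it there, so the training loop reads from fast local storage instead of Drive:

from google.colab import drive
drive.mount('/content/drive')

# one-time copy + extract per session (paths are placeholders)
!cp /content/drive/MyDrive/dataset.zip /content/
!unzip -q /content/dataset.zip -d /content/data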
st46965
Hi all,

I’m a newbie with NN and PyTorch and trying to implement a small network as shown in the following code. The problem is that if I calculate the test accuracy at intermediate epochs, my final test accuracy increases compared to when I only estimate it at the last epoch. It looks like maybe there is some test data leaking into the training step; however, I’m not able to debug it. The test accuracy when estimated only at the last epoch (150) is 78%. However, if it is estimated at intermediate epochs as well (every 15th or 30th, etc.; the result does not change much), the accuracy at the 150th epoch increases to 87%. Train accuracy always reaches 100%, so it is over-trained. I have a custom train-test data distribution code, based on some pre-known features of the data, which is not shown below. However, similar behavior is observed with random_split(), with a small increase from 85% to 87%. Any help will be appreciated.

Thanks,
Pragya

# Imports implied by the code below (not shown in the original post)
import time
import math
import heapq
import random
import numpy as np
import scipy.io
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader, random_split

# %% All classes and functions
# ------------------------------------------------------------------------------------------------------
# Writing custom dataset
class RF_dataset(Dataset):
    def __init__(self, file_name):
        data = scipy.io.loadmat(file_name)
        # Data arranged as [batch_size, n_feature, feature_length]
        # In PyTorch images are represented as [batch_size, channels, height, width]
        # This can be compared to images, with 4 channels and height of feature_length, width of 1
        self.X = data['featVec']  # Can change this to featVec, featVec_1, featVec_2, featVec_3
        # Only use with object features
        self.X = self.X[:, [0, 4, 12, 13], :]  # Features: using only RSSI and Phase with object
        self.Y = data['labelVec']     # Labels: #Objects {0,5}
        self.LocTag = data['LocTag']  # Location tag: based on pre-defined scheme, nData x 5
        self.PosTag = data['PosTag']  # Posture tag: 1=stand, 2=sit; for three people looks like 211
        self.nStand = data['nStand']  # Number of standing people
        self.nSit = data['nSit']      # Number of sitting people

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        sample = self.X[idx]
        return (self.Y[idx], sample)

# ------------------------------------------------------------------------------------------------------
# Network definition
class occupancyCounting(nn.Module):
    def __init__(self):
        super(occupancyCounting, self).__init__()
        n_CW = 120  # This directly affects accuracy if the value is low.
        nFeat = 4
        self.conv1 = nn.Conv1d(nFeat, n_CW, kernel_size=5, stride=4, padding=1)            # featvec
        self.conv2 = nn.Conv1d(n_CW, int(n_CW / 2), kernel_size=64, stride=35, padding=1)  # featvec
        self.conv3 = nn.Conv1d(int(n_CW / 2), 6, kernel_size=24, stride=10, padding=0)     # featvec
        self.maxPool1 = nn.MaxPool1d(kernel_size=8, stride=4, padding=0)
        self.avgPool1 = nn.AvgPool1d(kernel_size=5)
        self.drop1 = nn.Dropout(p=0.2)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        # print(x.shape)
        x = F.relu(self.conv2(x))  # featvec
        # print(x.shape)
        x = self.avgPool1(F.relu(self.conv3(x)))
        # print(x.shape)
        x = self.drop1(x)
        return x

# ------------------------------------------------------------------------------------------------------
# Performing weighted selection without replacement for location tags
def WeightedSelectionWithoutReplacement(weights, m):
    # https://stackoverflow.com/questions/352670/weighted-random-selection-with-and-without-replacement
    elt = [(math.log(random.random()) / weights[i], i) for i in range(len(weights))]
    return [x[1] for x in heapq.nlargest(m, elt)]

# ------------------------------------------------------------------------------------------------------
# %% Main function: including training and testing the model
if __name__ == '__main__':
    file_name = r"xxxxxxxx.mat"
    dataset = RF_dataset(file_name)
    data_normalize = False  # Normalize data?

    # -----------------------------
    # Train-test data distribution
    fractionTrain = 0.75
    trainset, testset = random_split(
        dataset,
        [int(fractionTrain * len(dataset)),
         len(dataset) - int(fractionTrain * len(dataset))])
    batchsize_train = 50
    train_loader = DataLoader(trainset, batch_size=batchsize_train, shuffle=True)
    batchsize_test = len(testset)
    test_loader = DataLoader(testset, batch_size=batchsize_test, shuffle=False)

    # ------------------------------------------------------------------------------------------------------
    # %% Start training block
    # Hyperparameter definition
    model = occupancyCounting()
    learning_rate = 0.008  # 0.02, 0.008
    momentum = 0.1
    random_seed = 1
    torch.backends.cudnn.enabled = False
    torch.manual_seed(random_seed)
    optimizer = optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)
    EPOCHS = 150

    train_loss_epoch = []   # Training loss every epoch
    train_acc_epoch = []
    train_time_epoch = []   # Training time accumulating from the 0th epoch
    test_loss_n_epoch = []  # Test loss every nth epoch
    test_acc_n_epoch = []   # Test accuracy every nth epoch
    total_acc_n_epoch = []  # Total accuracy every nth epoch
    count_n_epoch = []      # Counting every nth epoch

    plt.figure()
    start_time = time.time()
    for epoch in range(EPOCHS):
        # --------------------------------------------------------------------------------------------------
        train_loss = 0
        correct_train = []
        for Y_train, X_train in train_loader:
            current_batchsize = X_train.shape[0]
            feat_length = X_train.shape[2]
            # -----------------------------------------------------------------
            X_train = X_train.float()
            Y_train = Y_train.view(-1, Y_train.shape[1])
            Y_train = Y_train.long()
            model.train()
            output_train = model(X_train)
            loss = F.cross_entropy(output_train, Y_train)  # Computing total loss of this batch
            optimizer.zero_grad()
            loss.backward()   # backpropagate to compute the gradients
            optimizer.step()  # update the weights
            # Clean-up step for PyTorch
            train_loss = train_loss + (loss * current_batchsize / len(train_loader.dataset))  # Loss for each epoch
            with torch.no_grad():
                model.eval()
                output_train = model(X_train)
                symbol_train = output_train.data
                symbol_train = symbol_train.max(dim=1).indices
                correct_train.append(Y_train.eq(symbol_train).numpy())

        train_time_epoch.append(time.time() - start_time)
        correct_train = [item for sublist in correct_train for item in sublist]
        correct_train = np.asarray(correct_train)
        correct_train = correct_train.reshape(-1)
        train_accuracy = correct_train.sum() / len(train_loader.dataset)
        train_acc_epoch.append(float(train_accuracy))
        train_loss_epoch.append(float(train_loss))  # Appending training loss for the epoch

        if (epoch + 1) % 10 == 0:  # Update plot with some random case every 10 epochs
            plt.clf()
            plt.plot(Y_train[0], 'bo')
            plt.plot(symbol_train[0], 'rx')
            plt.title('epoch: %i , Training loss is ' % (epoch + 1) + '%f' % float(train_loss))
            plt.show()
            plt.pause(0.1)

        if (epoch + 1) % 15 == 0:
            count_n_epoch.append(epoch + 1)  # Current epoch
            # --------------------------------------------------------------------------------------------------
            # Evaluation: update test loss & accuracy every n epochs
            with torch.no_grad():
                correct_test = []
                test_loss = 0
                for Y_test, X_test in test_loader:
                    current_batchsize = X_test.shape[0]
                    feat_length = X_test.shape[2]
                    # -----------------------------------------------------------------
                    X_test = X_test.float()
                    Y_test = Y_test.view(-1, Y_test.shape[1])
                    Y_test = Y_test.long()
                    model.eval()
                    output_test = model(X_test)
                    test_loss = test_loss + (F.cross_entropy(output_test, Y_test) * current_batchsize / len(test_loader.dataset))
                    symbol_test = output_test.data
                    symbol_test = symbol_test.max(dim=1).indices
                    correct_test.append(Y_test.eq(symbol_test).numpy())
                correct_test = [item for sublist in correct_test for item in sublist]
                correct_test = np.asarray(correct_test)
                correct_test = correct_test.reshape(-1)
                test_accuracy = correct_test.sum() / len(test_loader.dataset)
                print('Epoch: %i' % (epoch + 1), ', Train loss: %0.2f' % float(train_loss), ', Train Accuracy: %0.2f' % train_accuracy)
                print('Epoch: %i' % (epoch + 1), ', Test loss: %0.2f' % float(test_loss), ', Test Accuracy: %0.2f' % test_accuracy)
                test_loss_n_epoch.append(float(test_loss))
                test_acc_n_epoch.append(test_accuracy)
st46966
Solved by ptrblck in post #8: Thanks for the code. It seems that the additional call of the test method inside the training loop calls into the pseudo-random number generator and thus changes the order of the training data in the next step. This will result in a bit of noise during the training, which thus yields a different end …
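A concrete way to avoid that coupling (a sketch, reusing names from the question; the generator argument of DataLoader is available in recent PyTorch versions): give the training DataLoader its own torch.Generator so its shuffle order no longer depends on how often other code consumes the global RNG:

g = torch.Generator()
g.manual_seed(random_seed)
train_loader = DataLoader(trainset, batch_size=batchsize_train, shuffle=True, generator=g)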
st46967
What might be happening is that your network’s capacity is too much for a very simple dataset, which is why it is overfitting. I would recommend having a validation set and using early stopping based on it.
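A minimal early-stopping sketch of that suggestion (train_one_epoch and evaluate are hypothetical helpers standing in for the loops in the question; the tolerance and patience values are placeholders):

best_val, patience, bad_epochs = float('inf'), 10, 0
for epoch in range(EPOCHS):
    train_one_epoch(model, train_loader, optimizer)  # hypothetical helper
    val_loss = evaluate(model, val_loader)           # hypothetical helper, runs under no_grad
    if val_loss < best_val - 1e-4:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), 'best_model.pt')  # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # stop once validation loss has not improved for `patience` epochs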