st81868
If the inputs are multi-dimensional tensors, does it generate separate normal distributions? For example, dis = Normal(torch.tensor([1.0, 200.0]), torch.tensor([1.0, 2.0])). Will it give two separate normal distributions, Normal1(mu = 1.0, sigma = 1.0) and Normal2(mu = 200.0, sigma = 2.0)? Thanks for your answers
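A quick way to check (a minimal sketch, assuming recent torch.distributions behavior): tensor parameters give a batch of independent univariate Normals, one per element, as the batch_shape shows.

import torch
from torch.distributions import Normal

dis = Normal(torch.tensor([1.0, 200.0]), torch.tensor([1.0, 2.0]))
print(dis.batch_shape)                            # torch.Size([2]) -> two independent Normals
print(dis.sample())                               # one draw per component, shape [2]
print(dis.log_prob(torch.tensor([1.0, 200.0])))   # element-wise log-densities, shape [2]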
st81869
Hello everyone, Let's say I have the following rnn. self.rnn = torch.nn.GRU( input_size=1024, hidden_size=512, num_layers=2, batch_first=True ) What does this mean? I have a basic understanding of LSTMs: the input x_t goes through a GRU, then the hidden state h_t is updated to h_{t+1} and fed into the next unit with input x_{t+1}. In that regard, I understand the input_size (the size of each element of my input sequence), but I don't understand the hidden_size and the number of layers. I'm sure just seeing a relevant picture would help me, can you help me out?
st81870
The hidden_size is the output size of the RNN cell. If num_layers >= 2, the output of the previous layer is the input of the next layer.
st81871
Is this picture accurate? Is ‘g(1)’ always the same cell with the same weights? Is its output the hidden state? Are the arrow on the right and the arrow on the top feeding the hidden state to both the next timestep and the next layer?
st81872
Yes, the output of g^(1) is the hidden state. The arrow on the top feeds the hidden state to the next layer. The arrow on the right feeds the hidden state and the cell state (if the cell is an LSTM cell), or just the hidden state (if the cell is a GRU cell). You can read this for more detail.
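A small sketch of the shapes involved (the batch size and sequence length below are made up for illustration): the per-step hidden states of the last layer come out in output, while h_n holds the final hidden state of each layer.

import torch
import torch.nn as nn

rnn = nn.GRU(input_size=1024, hidden_size=512, num_layers=2, batch_first=True)

x = torch.randn(8, 20, 1024)   # (batch, seq_len, input_size)
output, h_n = rnn(x)

print(output.shape)  # torch.Size([8, 20, 512]) -> hidden states of the last layer at every time step
print(h_n.shape)     # torch.Size([2, 8, 512])  -> final hidden state of each of the 2 layers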
st81873
I have a tensor of size 16x16. I only want to compute the loss and update gradients in a region of interest (ROI) of the tensor, such as ROI[4:12,4:12]=1. I used the code below but it shows an error. How can I do it? Thanks

import torch
input = torch.rand((16,16), requires_grad=True)
print(input)
mask = torch.zeros_like(input)
mask[4:12,4:12] = 1.0
input = mask * input
d = torch.mean(input)
d.backward()
print(input.grad.data)

The error is:

AttributeError Traceback (most recent call last)
<ipython-input-35-9826ca6fcef5> in <module>()
10 d = torch.mean(input)
11 d.backward()
---> 12 print(input.grad.data)
AttributeError: 'NoneType' object has no attribute 'data'

It looks like input.grad is None.
st81874
Hi John,
Here, with input = mask * input, you are replacing (and hiding) the real input (leaf node) of your graph. If you want to access the gradients of the input, you should keep the original input reference and rename the masked input:

import torch
input = torch.rand((16,16), requires_grad=True)
print(input)
mask = torch.zeros_like(input)
mask[4:12,4:12] = 1.0
masked_input = mask * input
d = torch.mean(masked_input)
d.backward()
print(input.grad.data)

Hope that helps! Additionally, if you ever want to access the masked_input gradients, which is a non-leaf tensor, you will have to call masked_input.retain_grad() after its creation. Please find more info about it here.
st81875
spanev:

import torch
input = torch.rand((16,16), requires_grad=True)
print(input)
mask = torch.zeros_like(input)
mask[4:12,4:12] = 1.0
masked_input = mask * input
d = torch.mean(masked_input)
d.backward()
print(input.grad.data)

Thanks @spanev. It worked, but the result looks wrong. If I ignore the region in the mask, the expected mean grad should be 1 instead of 0.11111.
st81876
What do you mean by ignoring the region in mask? Which gradient should be 1? d.grad will be 1 since we run backward on it. In d = torch.mean(masked_input) all the inputs (even the zero’d ones) will contribute to the mean, so the final gradients will be 1/(16*16) on the non masked elements, and 0 on the masked/zero’d ones.
st81877
I mean I want to set the grad outside the ROI to zero, i.e. leave that region untouched.
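One way to get that behavior (a sketch, not from the thread) is to average only over the ROI elements instead of over all 16*16 values, so the gradient is 1/64 inside the ROI and exactly 0 outside:

import torch

input = torch.rand((16, 16), requires_grad=True)
mask = torch.zeros_like(input)
mask[4:12, 4:12] = 1.0

d = (mask * input).sum() / mask.sum()   # mean over the 64 ROI elements only
d.backward()

print(input.grad[4:12, 4:12].unique())  # tensor([0.0156]) == 1/64 inside the ROI
print(input.grad[0, 0])                 # tensor(0.)        outside the ROI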
st81878
I have seen people writing the reconstruction loss in two different ways: F.binary_cross_entropy(recon_x1, x1.view(-1, 784)) or F.binary_cross_entropy(recon_x1, x1.view(-1, 784), reduction = "sum") I was wondering if there is a theoretical reason to use one over the other?
st81879
Solved by spanev in post #2.
st81880
Hi @Rojin I believe this comes from the fact that the KL divergence is an integral/sum. So the sum reduction would be the more paper-faithful approach, but having the PyTorch default (mean) will still work (it only downscales the gradients).
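A minimal sketch of the relationship (random tensors, illustrative shapes only): the two reductions differ by the constant factor N = number of elements, so the gradients are simply rescaled.

import torch
import torch.nn.functional as F

recon = torch.rand(4, 784, requires_grad=True)   # stand-in for recon_x1
target = torch.rand(4, 784)                      # stand-in for x1.view(-1, 784)

loss_sum = F.binary_cross_entropy(recon, target, reduction='sum')
loss_mean = F.binary_cross_entropy(recon, target, reduction='mean')

print(torch.allclose(loss_sum, loss_mean * recon.numel(), rtol=1e-4))  # True: sum = mean * N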
st81881
If I want to predict numbers (e.g. 12345 -> 6, 23456 -> 7, 34567 -> 8 …) and use the nn.Transformer module, do I need to do an embedding or a linear transform like Linear(1, 256), or can I use d_model = 1 and just feed 1, 2, 3, 4, 5 … to the input?
st81882
I am doing an assignment related to image classification. My dataset has 5 different classes (labels). I have to calculate the accuracy of each class (label) separately. But my problem is that whenever I test my CNN model I obtain 100% accuracy for one label and 0% accuracy for the other labels. I cannot figure out where my implementation problem is. Here is the code that I used for calculating the accuracy of the class labels:

classes = ['1', '2', '3', '4', '5']
class_correct = list(0. for i in range(5))
class_total = list(0. for i in range(5))

net.eval()  # prep model for evaluation
for batch in test_loader:
    data, target = batch['image'], batch['grade']
    if len(target.data) != BS:
        break
    # forward pass: compute predicted outputs by passing inputs to the model
    output = net(data)
    # convert output probabilities to predicted class
    _, pred = torch.max(output, 1)
    # compare predictions to true label
    correct = np.squeeze(pred.eq(target.data.view_as(pred)))
    # calculate test accuracy for each object class
    for i in range(BS):
        label = target.data[i]
        class_correct[label] += correct[i].item()
        class_total[label] += 1

for i in range(5):
    print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
        classes[i], 100 * class_correct[i] / class_total[i],
        np.sum(class_correct[i]), np.sum(class_total[i])))

These are the results that I obtained:

Accuracy of the network (overall): 46 %
Test Accuracy of 1: 100% (652/652)
Test Accuracy of 2: 0% ( 0/279)
Test Accuracy of 3: 0% ( 0/459)
Test Accuracy of 4: 0% ( 0/263)
Test Accuracy of 5: 0% ( 0/47)
st81883
The calculation looks alright. Your model might just overfit to class0, thus predicting every sample as this class. Check the prediction distribution using print(pred.unique(return_counts=True)) to see, whether your model predicts other classes.
st81884
Whenever I called print(pred.unique(return_counts=True)) I got the below results (tensor([0]), tensor([50]))
st81885
Even though your class distribution is not very imbalanced, it seems your model is still overfitting to this class. You could e.g. add the weight parameter to your criterion and try to counter this effect.
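A hedged sketch of what that could look like; the weights below are just inverse class frequencies computed from the test counts quoted above (ideally you would use the training-set counts), purely for illustration.

import torch
import torch.nn as nn

class_counts = torch.tensor([652., 279., 459., 263., 47.])           # illustrative counts
weights = class_counts.sum() / (len(class_counts) * class_counts)    # rarer class -> larger weight

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5, requires_grad=True)   # fake model output for a batch of 8
targets = torch.randint(0, 5, (8,))
loss = criterion(logits, targets)
loss.backward()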
st81886
I can use the &, the |, ^ operators on a ByteTensor, but the ~ logical operator doesn’t work. Is there an efficient way of applying it?
st81887
We should probably add support for the ~ operator. In the meantime, instead of ~x you can use (1 - x).
st81888
I don’t think you can use (1-x) if x is a variable. Negation is not possible on bytetensor variables. You should do something like (1-x.data) which is pretty ugly and perhaps inefficient for something as simple as logical not.
st81889
I just met with this problem. Actually, we can achieve ~x with x ^ 1. 1 - x may not work because it will return a FloatTensor, which may result in some errors.
st81890
It’s now 2019: ~ is available for ByteTensors, and it returns (1 - x), but it’s undocumented as far as I can tell. x.neg() does bitwise negation instead.
st81891
No, x.neg() does not do bitwise negation. It computes -x, which is the additive inverse of x modulo 2^8 (for ByteTensors). Another way to write that is x.neg() = -x = (256 - x) % 256.
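A small sketch (assuming a recent PyTorch version) contrasting the three operations discussed here on a uint8 tensor:

import torch

x = torch.tensor([0, 1, 2, 254, 255], dtype=torch.uint8)

print(~x)      # bitwise NOT, i.e. 255 - x      -> [255, 254, 253,   1,   0]
print(-x)      # additive inverse mod 256       -> [  0, 255, 254,   2,   1]
print(x ^ 1)   # flips only the lowest bit      -> [  1,   0,   3, 255, 254]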
st81892
I see, then what’s surprising is that ~ does not do bitwise negation (it’s a bitwise operator in python, but a logical(?) operator in pytorch, but pytorch does not really have booleans).
st81893
@bluehood, yes! The problem is that PyTorch does not really have bool Tensors yet. But they’re coming soon.
st81894
@colesbury PyTorch 1.2 introduced bool tensors but broke all code relying on the behavior of ~ for byte tensors. It's now doing bitwise negation instead of logical negation! Was there a deprecation warning I skipped?
st81895
No runtime warning. See the release notes for the breaking changes. We realized after the release that it may have been better to spread the change to byte Tensors over two releases, but by that point it was too late (i.e. we could have deprecated ~ on byte tensors in 1.2 and changed it to bitwise inversion for non-bool integer types in 1.3).
st81896
I see…I didn’t think you’d introduce major breaking changes without deprecation notice between minor versions. Thanks for the link, super useful! Cheers, Enrico
st81897
import torch pytorchGPUDirectCreateWEmpty = torch.empty(size=(20000000, 128), dtype=torch.float, device='cuda', requires_grad=False, pin_memory=False).uniform_(-1, 1) pytorchGPUDirectCreateWEmpty results in tensor([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], device='cuda:0') and import torch torch.set_default_tensor_type('torch.cuda.FloatTensor') u_embeddings = torch.nn.Embedding(20000000, 128, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None) u_embeddings.weight.data.uniform_(-1, 1) u_embeddings.weight.data results in tensor([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]]) If I initialize with double instead of float and the initialization works fine. I could convert to float later, but I am working with limited memory and unable to initialize a double first before converting. Why is the initialization not working for float tensors?
st81898
This code seems to work for PyTorch 1.2.0 and CUDA10.0 (installed via conda binaries): pytorchGPUDirectCreateWEmpty = torch.empty(size=(20000000, 128), dtype=torch.float, device='cuda', requires_grad=False, pin_memory=False).uniform_(-1, 1) print(pytorchGPUDirectCreateWEmpty.min()) > tensor(-1., device='cuda:0') print(pytorchGPUDirectCreateWEmpty.max()) > tensor(1.0000, device='cuda:0') print(pytorchGPUDirectCreateWEmpty.mean()) > tensor(-1.4027e-05, device='cuda:0') on a Titan V (driver 418.56). Which setup are you using?
st81899
I am using Google Colab, so the GPU is a Tesla K80, PyTorch 1.1.0, CUDA 10.0.130. I tried the embedding one this morning and somehow it works now. But torch.empty still doesn't at the shape I specified. I tried with a smaller shape, and it worked, so it may be an issue with memory. For convenience, here's a notebook of the code I ran this morning https://colab.research.google.com/drive/1wwHWF92TzRsdCqpG3V40N-CyDlpuaEIU
st81900
Hello, I am coding a drone control algorithm (using modern control theory, not reinforcement learning) and was testing PyTorch as a replacement for NumPy. The algorithm receives as inputs the state of the drone and a desired trajectory, and computes the inputs for the drone to follow the trajectory. It must compute this at least at 100 Hz. The main purpose for trying PyTorch is to see if there would be any gains from using the GPU, since most of the operations are matrix-vector operations. Links to the source code of both controllers: the numpy implementation, the pytorch implementation and the script where I call each of them.

During my testing I found out that the same control algorithm written using numpy and running on the CPU is at least 10x faster than the PyTorch implementation running on the GPU (using only torch functions). I tried both on a desktop computer and on a Jetson Nano with quite similar and interesting results.

Two tests on desktop: Intel Core i5, GeForce GTX 1050, CUDA 10, PyTorch 1.2.
(plots: bigcpugpupcputimes2.png, bigcpugpupcputimes.png)

Two tests on Jetson Nano: ARMv8, Nvidia Tegra X1, PyTorch 1.2.
(plots: jetsoncpugpupcputimes2.png, jetsoncpugpupcputimes.png)

The graphs, one for each test, show the computation-time distribution of the code running either on a) numpy on CPU (blue), b) PyTorch on CPU (green) and c) PyTorch on GPU (red). In both hardware configurations, numpy on CPU was at least 10x faster than PyTorch on GPU. Also, PyTorch on CPU is faster than on GPU. In the case of the desktop, PyTorch on CPU can be, on average, faster than numpy on CPU. Finally (and unluckily for me), PyTorch on GPU running on the Jetson Nano cannot achieve 100 Hz throughput.

What I am interested in is getting the PyTorch GPU speed on the Jetson to reach a performance similar to its CPU speed on the Jetson Nano (>= 100 Hz throughput), since I cannot attach a desktop to a drone. Reading around, it seems some issues could be that data transfer between CPU and GPU can be very expensive, and the tensor type and dtype used. I am not sure how to dramatically improve from this. Currently all operations are done on FloatTensors, I load all the data to the GPU at the beginning of each iteration, all computation gets done only on the GPU, and I offload from the GPU only at the end when all the results are ready.

I am aware this is not the main purpose for which PyTorch was created, but I would like to get advice on how to optimize the performance of PyTorch on GPU on a smaller platform like the Jetson Nano, and hopefully get a 10x increase in performance. Any advice will be very welcome! Juan

For reference, I measured execution times as follows:

For GPU:
start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
# I put my code here
end.record()
torch.cuda.synchronize()
execution_time = start.elapsed_time(end)

For CPU:
start = time.time()
# I put my code here
end = time.time()
execution_time = end - start
st81901
It is practically impossible to help you without taking a look at the code. Also, are you utilizing batches for inference on the GPU?
st81902
@dambo, thanks for your reply. I just edited the post to include links to the source code at the top. Regarding batches, I am actually not using them. I have used pytorch for machine learning in vision, where a dataset is ready beforehand. However in this case the data comes in the instant it should be processed, so I am not sure if batch processing is possible.
st81903
Using the GPU typically involves extra latency, especially when done via friendly APIs in Python. For large NN that is typically dwarfed by number of FLOPS so no big deal. For real-time control, with minimal flops and a low latency / high frequency demand it is likely not the best fit. I’d also question why Python? If you implemented that loop in C or C++ and replotted your graph, it’d be hard to compare any of the existing measures on the same scale. There is so much extra overhead under the covers of any Python app, you’d likely get a 100x+ increase on the CPU writing it in C++ for an algo like that.
st81904
@rwightman Thanks for your reply. When you mention to “implement that loop in C or C++”, do you mean purely C++ running on CPU or using the torch C++ frontend on GPU? If the answer is the latter, I totally agree a C++ will be faster than python running on the CPU, but since I have not tried the torch C++ frontend, I am not sure whether an implementation of the controller using it will have a significant impact on execution speed on the GPU. About Python, I simply wanted to try. I am used to it for machine learning and also have seen some use cases for pytorch+python on optimal control (and general optimization) with interesting results. Nonetheless, optimization sees benefits on Pytorch thanks to automatic differentiation. Also, coming from a C background, coding in Python is much easier.
st81905
@dambo @rwightman I performed some other timing tests to try to understand whether the bottleneck could be in the CPU-to-GPU transfer:

- Move 6x 3-element np.arrays from CPU to GPU
- Move a single 6x3 np.array from CPU to GPU
- Create 6x 3-element torch.tensors directly on GPU whose elements are np.float variables
- Create 6x 3-element torch.tensors directly on GPU whose elements are constant floats (hard-coded)

I did these 4 tests using np.float16, np.float32 and np.float64, as well as on my desktop and a Jetson Nano. The findings are quite interesting and counter-intuitive:

- The fastest operation is moving a single 6x3 np.array from CPU to GPU, both on the desktop and the Jetson Nano. It is around 4x faster than the rest of the operations and even faster than creating 6x 3-element torch.tensors directly on GPU.
- Using np.float64 is slightly faster than both np.float32 and np.float16.
- Transfers are 10x faster on the desktop than on the Jetson Nano.

Effectively, transferring 6x 3-element np.arrays has transfer times as high as 5.5 ms - 6.5 ms, which is already too much for calculations at 100 Hz, where the limit is 10 ms. It seems that one approach to improve performance is to transfer all data in a single np.array rather than in several small individual np.arrays. However, does this make sense at all? Is it really possible that transferring a 6x3 np.array is faster than creating 6 torch.tensors directly on GPU? Sounds too good to be true. I understand that batch processing is one of the key points of using a GPU… but “batch transferring”… I'm not really sure. Any further comments are appreciated.

Below are the graphs: jetsonfloat16.png, jetsonfloat64.png, jetsonfloat32.png, desktopfloat64.png, desktopfloat16.png, desktopfloat32.png
st81906
I added two more tests:

- Creating a 6x3 torch.tensor using torch.rand(6,3)
- Creating a 6x3 torch.tensor directly on GPU using variables
- Creating a 6x3 torch.tensor directly on GPU using constants

It seems that creating a torch.rand(6,3) directly on GPU is the fastest operation. However, creating a 6x3 np.array and then transferring it to GPU is still faster than creating a 6x3 torch.tensor directly on GPU using variables or constants. This is counter-intuitive.

(plots: jetsonfloat64.png, desktopfloat64.png)
st81907
I’m not understanding what you mean ‘directly on GPU using constants’? There are no constants in Python and any variable in Python exists in the CPU and has to be transferred at some point to the GPU. You can make sure that those tensors are defined and moved to GPU outside of your high frequency loops and not modified, that is as close to constant as you get. I’m not an expert on the nitty gritty details of CUDA kernels and the specific Pytorch mechanics relating to their handling. I believe a typical kernel launch latency is approx 10us, down to about 5us and up to I’m not sure where. But I’ve seen traces with 50us, etc. So, if you are doing convolutions or mm with tens of thousands, hundreds of thousand elements, you don’t notice that. But for a tight loop of small operations that could only partially leverage the full parallalelization ability on a modern CPU, you could probably crank out thousands of iterations of your loop (C/C++ optimized) in the time it takes to launch a kernel on the GPU. If you want to keep it a little higher level, maybe use Eigen, you’d get a bit of cross platform ability in the sense that it may be able to leverage a bit of NEON or SSE depending on what platform you compile for.
st81908
You are right, I meant to say ‘hard-coded floats’. Thanks for the insights on CUDA kernels and Eigen suggestion.
st81909
Hi there! My name is Michael, I'm from Spain, so forgive me if I write something the wrong way. One year ago I started to learn about machine learning and it's an amazing and interesting world for me. I began with Keras, and now I would like to go deeper. I have PyTorch in mind, but it is really hard to find complete courses (even more so certifications) in PyTorch. I know about Fastai, but I prefer to start with PyTorch first. I have also been going through the official documentation, but what I am looking for is something like a scheduled guide. For instance, this can easily be found on Coursera if I search for TensorFlow. On the other hand, I can find great projects on OCR and other advanced techniques that make me ask: how did they get those skills, and how can I achieve that level? It's hard to find good and complete PyTorch training online by Facebook or others. Why is this? Do you think TensorFlow is better than PyTorch? Do you think PyTorch will be abandoned by Facebook? I hope someone can guide me, greetings.
PS: I am really interested in NLP.
PS2: I have a strong academic background in software from university.
st81910
Hi Michael, Facebook created a Udacity course here 8, which might be a good starter. I haven’t had a chance to look into it, but maybe @smth could give you some more information as he’s an instructor in the course (and one of the main authors of PyTorch ). My personal point of view is to chose an interesting project and just start building it using PyTorch. If you get stuck at some point, you are more than welcome to post your issues here in this discussion board and you’ll find a lot of experts helping you out.
st81911
Thank you, I have been doing the free course and it's really well explained. Is PyTorch actively being developed? Torch isn't, and PyTorch depends on Torch, which worries me. I wouldn't like to learn something that could die in the future @smth @ptrblck
st81912
PyTorch did share some common code base with Torch7 in the past, but is now independent and under active development. If you have a look at the github repo 5, you’ll see that we have a lot of commits daily into the master branch.
st81913
my model is like below: class Net(nn.Module): def __init__(self): super().__init__() self.fc1 = nn.Linear(4909, 1500) self.relu1 = nn.ReLU() self.dout = nn.Dropout(0.2) self.fc2 = nn.Linear(1500, 300) self.prelu = nn.PReLU(1) self.out = nn.Linear(300, 1) self.out_act = nn.Sigmoid() def forward(self, input_): a1 = self.fc1(input_) h1 = self.relu1(a1) dout = self.dout(h1) a2 = self.fc2(dout) h2 = self.prelu(a2) a3 = self.out(h2) y = self.out_act(a3) return y and… define my model and loss function & optimizer function model = Net() criterion = nn.BCEWithLogitsLoss() optimizer = optim.Adam(model.parameters(), lr=0.01, betas=(0.9, 0.999)) now start to train: def train(epoch): model.train() for i in range(len(x_train_tensor)): optimizer.zero_grad() output = model(x_train_tensor[i]) print(output.data[0], y_train_tensor[i]) loss = criterion(output.data[0], y_train_tensor[i]) loss.backward() optimizer.step() losses.append(loss.data[0]) print("loss: {}".format(loss.data[0])) but… output is like below tensor(0.4984) tensor(1., grad_fn=) loss: 0.47467532753944397 tensor(0.5021) tensor(1., grad_fn=) loss: 0.4732956886291504 tensor(0.5000) tensor(0., grad_fn=) loss: 0.9740557670593262 tensor(0.4942) tensor(1., grad_fn=) when the label is 1 loss value is almost 0.4, but when a label is 0 loss function up to 0.9. these values repeat and don’t go to low value. How can I resolve it? thank you!
st81914
Solved by ptrblck in post #2.
st81915
nn.Sigmoid and nn.BCEWithLogitsLoss don't fit together. Either remove the nn.Sigmoid or use nn.BCELoss.
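A minimal sketch of the two consistent pairings (random logits and targets, just for illustration); both give the same loss value:

import torch
import torch.nn as nn

logits = torch.randn(8, 1, requires_grad=True)        # raw model output, no sigmoid
target = torch.randint(0, 2, (8, 1)).float()

loss1 = nn.BCEWithLogitsLoss()(logits, target)        # option 1: logits + BCEWithLogitsLoss
loss2 = nn.BCELoss()(torch.sigmoid(logits), target)   # option 2: sigmoid + BCELoss

print(torch.allclose(loss1, loss2, atol=1e-6))        # True (up to floating point)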
st81916
If I change nn.BCEWithLogitsLoss to nn.BCELoss, the error is: RuntimeError: the derivative for ‘target’ is not implemented. But if I remove nn.Sigmoid and leave BCEWithLogitsLoss, no error occurs.
st81917
Oh wait, there seems to be another issue. Could you try to pass the output directly to your criterion instead of output.data[0]?
st81918
I’m not sure why you get this error. Your code runs fine for this dummy input and target: def train(epoch): model.train() for i in range(len(x)): optimizer.zero_grad() output = model(x[i]) print(output, target[i]) loss = criterion(output, target[i]) loss.backward() optimizer.step() losses.append(loss.item()) print("loss: {}".format(loss.item())) x = torch.randn(1, 4909) target = torch.tensor([[1.]]) losses = [] train(0)
st81919
In theory, the error during training is called bias. Ways to reduce bias:

- Increase the number of hidden layers
- Change the NN architecture
- Train the NN for longer
- MAYBE: the dropout threshold you are using might be too high, causing more units/neurons to turn off as you train longer.
st81920
I extracted my data from a csv file and defined it like below:

x_train_tensor = Variable(torch.FloatTensor(x_train.values))
x_test_tensor = Variable(torch.FloatTensor(x_test.values))
y_train_tensor = Variable(torch.Tensor(y_train.values), requires_grad = True)
y_test_tensor = Variable(torch.Tensor(y_test.values), requires_grad = True)

Is there any problem?
st81921
Could you try to use torch.from_numpy() to get the tensors instead of wrapping the numpy arrays directly? It’s the recommended way, but I’m not sure if it’s related to this issue.
st81922
My experience with this was different. When I was training a test model to check this exact thing, I noticed that when I applied nn.Sigmoid on the model output and used BCELoss() I got very bad results; my loss actually went to NaN after some iterations. Similarly, when I did not use nn.Sigmoid at the model output and used BCEWithLogitsLoss() I again got bad results, no more NaNs but the error was not dropping from 0.999. Then I used nn.Sigmoid and BCEWithLogitsLoss() and got the expected results, the loss was dropping and so was the error. So can you please explain why these two don't go well together, whereas my tests showed them working together. Thanks
st81923
Since sigmoid will be applied twice in this (wrong) approach, you might have scaled down the gradients, thus stabilized the training, e.g. if your learning rate was too high. Here is a small example showing this effect: model = nn.Sequential( nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1) ) data = torch.randn(1, 10) target = torch.randint(0, 2, (1, 1)).float() # 1) nn.BCEWithLogitsLoss output = model(data) loss = F.binary_cross_entropy_with_logits(output, target) loss.backward() print(model[0].weight.grad.norm()) > tensor(0.1741) print(model[2].weight.grad.norm()) > tensor(0.2671) # 2) nn.BCELoss model.zero_grad() output = model(data) loss = F.binary_cross_entropy(torch.sigmoid(output), target) loss.backward() print(model[0].weight.grad.norm()) > tensor(0.1741) print(model[2].weight.grad.norm()) > tensor(0.2671) # 3) wrong model.zero_grad() output = model(data) loss = F.binary_cross_entropy_with_logits(torch.sigmoid(output), target) loss.backward() print(model[0].weight.grad.norm()) > tensor(0.0595) print(model[2].weight.grad.norm()) > tensor(0.0914) Your loss might blow up and get eventually a NaN value, e.g. if the learning rate is set too high, which would also fit my assumption. While applying sigmoid twice might have helped in your use case, I would recommend to try to debug the exploding loss (or NaN values).
st81924
Many thanks for giving such a helpful reply to such an old topic. This really helped me a lot.
st81925
Hello everyone, I’m training my custom data recently, but the loss only update at the first time, and I really don’t know where the problem is here is the code for epoch in range(cfg['epoch']): for phase in ['train', 'valid']: if phase == 'train': net.train() else: net.eval() running_loss, running_acc = 0.0, 0.0 for i, (point, ans) in enumerate(all_loader[phase]): if cfg['use_cuda'] and torch.cuda.is_available(): point = point.cuda() ans = ans.cuda() optimizer.zero_grad() with torch.set_grad_enabled(phase == 'train'): out = net(point.float()) _, predicted = torch.max(out, 1) loss = criterion(out, ans.long()) if phase == 'train': loss.backward() optimizer.step() running_loss += loss.item() running_acc += (predicted == ans.long()).sum().item() the result is train, Epochs [1/5], Loss: 0.2585 valid, Epochs [1/5], Loss: 0.2465 ================================================== train, Epochs [2/5], Loss: 0.2573 valid, Epochs [2/5], Loss: 0.2465 ================================================== train, Epochs [3/5], Loss: 0.2573 valid, Epochs [3/5], Loss: 0.2465 ================================================== train, Epochs [4/5], Loss: 0.2573 valid, Epochs [4/5], Loss: 0.2465 ================================================== train, Epochs [5/5], Loss: 0.2573 valid, Epochs [5/5], Loss: 0.2465 ==================================================
st81926
Solved by ptrblck in post #4.
st81927
Are you reinitializing the model somewhere? Could you post the whole training code so that we could have a look?
st81928
This is the whole training code net = Conv1DNet().float() if cfg['use_cuda'] and torch.cuda.is_available(): net = net.cuda() history = train_evaluate(net, all_loader, cfg) def train_evaluate(net, all_loader, cfg): criterion = nn.CrossEntropyLoss() optimizer = optim.RMSprop(net.parameters(), lr = cfg['learning_rate']) history = { 'train_loss': [], 'train_acc': [], 'valid_loss': [], 'valid_acc': [] } for epoch in range(cfg['epoch']): for phase in ['train', 'valid']: if phase == 'train': net.train() else: net.eval() running_loss, running_acc = 0.0, 0.0 for i, (point, ans) in enumerate(all_loader[phase]): if cfg['use_cuda'] and torch.cuda.is_available(): point = point.cuda() ans = ans.cuda() optimizer.zero_grad() with torch.set_grad_enabled(phase == 'train'): out = net(point.float()) _, predicted = torch.max(out, 1) loss = criterion(out, ans.long()) if phase == 'train': loss.backward() optimizer.step() running_loss += loss.item() running_acc += (predicted == ans.long()).sum().item() history[phase + '_loss'].append(running_loss / cfg[phase + 'set_size']) history[phase + '_acc'].append(running_acc / cfg[phase + 'set_size']) print ("{}, Epochs [{}/{}], Loss: {:.4f}".format(phase, epoch + 1, cfg['epoch'], history[phase + '_loss'][-1])) print ("==================================================") print ("Average accurancy of the Net is {:.2f}%".format(sum(history['valid_acc']) / cfg['epoch'] * 100)) return (history) and the net is class Conv1DNet(nn.Module): def __init__(self): super(Conv1DNet, self).__init__() self.fcc = nn.Linear(2, 128) self.conv1 = nn.Conv1d(18, 128, 1) self.pool = nn.MaxPool1d(2) self.conv2 = nn.Conv1d(128, 32, 1) self.fc1 = nn.Linear(32 * 32, 32) self.fc2 = nn.Linear(32, 16) self.fc3 = nn.Linear(16, 3) def forward(self, x): x = self.fcc(x) x = self.pool(fuc.relu(self.conv1(x))) x = self.pool(fuc.relu(self.conv2(x))) x = x.view(-1, 32 * 32) x = fuc.relu(self.fc1(x)) x = fuc.relu(self.fc2(x)) x = fuc.softmax(self.fc3(x), dim = 1) return (x) and th cfg is the dict cfg = {'batch_size': 4, 'num_workers': 2, 'learning_rate': 0.01, 'epoch': 5, 'use_cuda': True, 'trainset_size': 272, 'validset_size': 91, 'train_num': 68}
st81929
Thanks for the code! Could you remove the last softmax call from your model? nn.CrossEntropyLoss expects logits as the model's output, since internally F.log_softmax and nn.NLLLoss will be used. Your model should thus just return self.fc3(x) without any non-linearity.
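For concreteness, a sketch of how the forward of the Conv1DNet above would then end (assuming fuc is the torch.nn.functional alias used in that code):

def forward(self, x):
    x = self.fcc(x)
    x = self.pool(fuc.relu(self.conv1(x)))
    x = self.pool(fuc.relu(self.conv2(x)))
    x = x.view(-1, 32 * 32)
    x = fuc.relu(self.fc1(x))
    x = fuc.relu(self.fc2(x))
    return self.fc3(x)   # raw logits; nn.CrossEntropyLoss applies log_softmax internally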
st81930
Ubuntu 16.04, PyTorch v1.1

I use PyTorch's DataLoader to read in batches in parallel. The data is in the zarr format, so multithreaded reading should be supported. To profile the data loading process, I used cProfile on a script that just loads one epoch in a for loop without doing anything else:

train_loader = torch.utils.data.DataLoader(
    sampler,
    batch_size=64,
    shuffle=True,
    num_workers=4,
    pin_memory=True,
    drop_last=True,
    worker_init_fn=worker_init_fn,
    collate_fn=BucketCollator(sampler, n_rep_years)
)

for i, batch in enumerate(train_loader):
    pass

Even if the DataLoader can only use 4 CPUs, all 40 CPUs of the machine are used, but this is not the issue. The DataLoader spends most of the time on ~:0(<method 'acquire' of '_thread.lock' objects>) (see profile view 1). What does this mean? Are the processes waiting until the dataset is unlocked, even if multithreaded reading should work? Or is this just the time the DataLoader waits until the data is read, and it is limited by the data reading speed? If this is the case, why would all 40 CPUs run at 100%? I appreciate any help to get a better understanding of what is going on under the hood.

EDIT: It seems _thread.lock.acquire indicates that the main process is waiting for IO. I still wonder why all 40 CPUs run at 100% when they are actually waiting for IO…

profile view 1 (cProfile_1.png)
profile view 2 (cProfile_2.png)

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
891  29.21  0.03278  29.21  0.03278  ~:0(<method 'acquire' of '_thread.lock' objects>)
117/116  0.22  0.001896  0.2244  0.001935  ~:0(<built-in method _imp.create_dynamic>)
1095  0.136  0.0001242  0.136  0.0001242  ~:0(<built-in method marshal.loads>)
10378  0.1329  1.281e-05  0.9031  8.702e-05  core.py:1552(_chunk_getitem)
11208  0.113  1.008e-05  0.1131  1.009e-05  ~:0(<built-in method io.open>)
20560  0.1107  5.382e-06  0.1107  5.382e-06  ~:0(<built-in method posix.stat>)
60  0.09576  0.001596  0.2934  0.004891  afm.py:201(_parse_char_metrics)
10424  0.08875  8.514e-06  0.3511  3.368e-05  storage.py:721(__getitem__)
107531  0.07747  7.204e-07  0.1234  1.147e-06  afm.py:61(_to_float)
31124  0.06565  2.109e-06  0.06565  2.109e-06  core.py:359(<genexpr>)
3035/2950  0.06219  2.108e-05  0.2543  8.622e-05  ~:0(<built-in method builtins.__build_class__>)
10524  0.05762  5.475e-06  0.05762  5.475e-06  ~:0(<method 'read' of '_io.BufferedReader' objects>)
46  0.05739  0.001248  0.1434  0.003118  afm.py:253(_parse_kern_pairs)
314835/314125  0.05286  1.683e-07  0.06002  1.911e-07  ~:0(<built-in method builtins.isinstance>)
10392  0.05215  5.019e-06  0.08863  8.529e-06  indexing.py:293(__iter__)
1542/1  0.04539  0.04539  32.7  32.7  ~:0(<built-in method builtins.exec>)
10372  0.03969  3.827e-06  0.1322  1.274e-05  core.py:1731(_decode_chunk)
12348  0.03371  2.73e-06  0.05517  4.468e-06  posixpath.py:75(join)
71255  0.03229  4.531e-07  0.05519  7.745e-07  afm.py:232(<genexpr>)
96237  0.031  3.221e-07  0.031  3.221e-07  ~:0(<method 'split' of 'str' objects>)
11  0.03053  0.002776  0.03053  0.002776  ~:0(<built-in method torch._C._cuda_isDriverSufficient>)
1329/5  0.02997  0.005995  2.329  0.4658  <frozen importlib._bootstrap>:966(_find_and_load)
2461  0.02855  1.16e-05  0.1294  5.26e-05  <frozen importlib._bootstrap_external>:1233(find_spec)
33889  0.02833  8.36e-07  0.02833  8.36e-07  ~:0(<built-in method __new__ of type object at 0x560415201d60>)
20869  0.02831  1.357e-06  0.02831  1.357e-06  ~:0(<method 'reshape' of 'numpy.ndarray' objects>)
st81931
Solved by bask0 in post #2.
st81932
It seems _thread.lock.acquire indicates that the main process is waiting for IO. Problem is solved by making the input pipeline more efficient and by setting pin_memory=False.
st81933
Hi all, I'm doing a project and I'd like to freeze some weights and only update the others during training. I wish I could use a hook and register_forward_pre_hook to achieve this, since I'm learning about hooks now. Below is my code:

def hook(self, model, inputs):
    with torch.no_grad():
        model.weight = model.weight * self.sparse_mask[self.type[model]]

def register_hook(self, module):
    self.handle = module.register_forward_pre_hook(self.hook)

However, I got this error:

TypeError: cannot assign ‘torch.FloatTensor’ as parameter ‘weight’ (torch.nn.Parameter or None expected)

I've lost a day fixing this problem, and I was wondering if someone could help me. Any ideas are welcome. Thanks!
st81934
Hi Mandy, I think this should work: def hook(module, input): with torch.no_grad(): module.weight.data = module.weight.data * self.sparse_mask[self.type[module]]
st81935
Hi @spanev I really appreciate your help, but I got another error.

def hook(module, input):
    with torch.no_grad():
        module.weight.data = module.weight.data * self.sparse_mask[self.type[module]]

I used exactly what you showed me, and I got this error:

RuntimeError: expected device cuda:0 and dtype Float but got device cpu and dtype Float

Could you please teach me how to fix this problem? Any response will be appreciated!
st81936
Hey guys, I have a simple, linear autoencoder of the following shape: 128**3 -> 512 -> 128**3, so model = [nn.Linear(128**3, 512, bias=False), nn.Linear(512, 128**3, bias=False)]; my_ae = nn.Sequential(*model).to('cuda'), which I would like to train with an L2 loss. The problem is, I don't have enough memory to declare my model in such a way. Is there any good way to approach this problem? Train it somehow in batches?
st81937
I assume you are running out of memory if you try to pass the whole dataset into the model? If so, have a look at the Data loading tutorial to see how a Dataset and DataLoader work and how batches are created. The model itself is quite small and its parameters will just take approx. 1.5MB of memory.
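A minimal, self-contained sketch of that pattern with made-up tensor data (the real inputs would be the 128**3-dimensional vectors from the question):

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        return self.data[idx]

dataset = MyDataset(torch.randn(1000, 512))        # illustrative shape only
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for x in loader:
    pass   # run the autoencoder forward/backward on one batch at a time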
st81938
I think the model is: model = nn.Sequential(nn.Linear(128**3, 512, bias=False), nn.Linear(512,128**3, bias= False)) So it would actually take +8GB of memory
st81939
Hi, I am using multiple embedding layers for categorical values and I want to concatenate them with dense values. On CPUs my code works fine, but I keep getting the following error when running on GPU (single gpu idx:0): RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 ‘index’ Model is here: class CategoricalEmbedFNN(nn.Module): def __init__(self, config, categ_embs_dims): super().__init__() self.config = config self.hidden_size = self.config["hidden_size"] self.hidden_layers = self.config["hidden_layers"] self.activation_fn = nn.ReLU(inplace=True) self.input_dim = len(self.config["dense_features"]) self.embedding_layers = {} for k, v in categ_embs_dims.items(): self.embedding_layers[f"{k}_emb"] = nn.Embedding(v, self.config["cat_embs"][k]) self.input_dim += self.config["cat_embs"][k] self.input_linear = nn.Linear(self.input_dim, self.hidden_size) self.middle_linear = nn.Linear(self.hidden_size, self.hidden_size) self.output_linear = nn.Linear(self.hidden_size, len(self.config["output_features"])) def forward(self, x, x_cat): x_cat = [emb_layer(x_cat[:, idx]) for idx, (k, emb_layer) in enumerate(self.embedding_layers.items())] x_cat = torch.cat(x_cat, 1) x = torch.cat((x, x_cat), 1) x = self.input_linear(x) x = self.activation_fn(x) for i in range(self.hidden_layers): x = self.middle_linear(x) x = self.activation_fn(x) out = self.output_linear(x) return out I initialize CUDA like this: self.device = torch.device("cuda") torch.cuda.set_device(0) self.model = self.model.to(self.device) self.loss = self.loss.to(self.device) Additionally, in batching (with tqdm), I run with the following code: for X_batch, X_cat, y_batch in tqdm_batch: # Put data on device X_batch, X_cat, y_batch = X_batch.to(self.device), X_cat.to(self.device), y_batch.to(self.device) # Make predictions self.optimizer.zero_grad() y_pred = self.model(X_batch, X_cat) Error comes from the line x_cat = [emb_layer(x_cat[:, idx]) for idx, (k, emb_layer) in enumerate(self.embedding_layers.items())] What is the issue and how can I solve it ?
st81940
Solved the issue by using torch.nn.ModuleList. Basically, the problem was that the list I was creating, was on the CPU. self.embedding_layers = nn.ModuleList() for k, v in categ_embs_dims.items(): self.embedding_layers.append(nn.Embedding(v, self.config["cat_embs"][k])) Then I also changed the forward part into this: x_cat = [emb_layer(x_cat[:, idx]) for idx, emb_layer in enumerate(self.embedding_layers)] x_cat = torch.cat(x_cat, 1) Now it works on both CPU and CUDA gpu.
st81941
I need to create variables directly on the GPU because I am very limited in my CPU ram. I found the method to do this here How to create a tensor on GPU as default Hi, here is my code: import torch torch.set_default_tensor_type(‘torch.cuda.FloatTensor’) t = torch.rand(1,3,24,24) and it caused bellow error: TypeError Traceback (most recent call last) in () ----> 1 t = torch.rand(1,3,24,24) TypeError: Type torch.cuda.FloatTensor doesn’t implement stateless methods Any idea for this type of error? Which mentions using torch.set_default_tensor_type(‘torch.cuda.FloatTensor’) However, when I tried torch.set_default_tensor_type('torch.cuda.FloatTensor') pytorchGPUDirectCreate = torch.FloatTensor(20000000, 128).uniform_(-1, 1).cuda() It still seemed to take up mostly CPU RAM, before being transferred to GPU ram. I am using Google Colab. To view RAM usage during the variable creation process, after running the cell, go to Runtime -> Manage Sessions With and without using torch.set_default_tensor_type('torch.cuda.FloatTensor') , the CPU RAM bumps up to 11.34 GB while GPU ram stays low, and then GPU RAM goes to 9.85 and CPU ram goes back down. It seems that torch.set_default_tensor_type(‘torch.cuda.FloatTensor’) didn’t make a difference For convenience here’s a direct link to a notebook anyone can directly run https://colab.research.google.com/drive/1LxPMHl8yFAATH0i0PBYRURo5tqj0_An7 1
st81942
Hi, You can directly create a tensor on a GPU by using the device argument: device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') pytorchGPUDirectCreate = torch.rand(20000000, 128, device = device).uniform_(-1, 1).cuda() I just tried this in your notebook and got RAM 1.76GB used and GPU 9.86GB. Still, a lot of RAM is used but that ~10GB less than originally. Hope it helps!
st81943
Thanks! I am wondering if the .cuda() is still necessary since you’re already specifying the device on initialization .
st81944
I got this error when I compute the BCELoss, any idea what might be the reason? Traceback (most recent call last): File “main.py”, line 255, in train(args) File “main.py”, line 204, in train train_loss, train_acc = train_epoch(train_dataloader, model, crit, optimizer, args, reverse_dictionary) File “main.py”, line 67, in train_epoch loss = crit(pred, label) File “/share/data/speech/zewei/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py”, line 325, in call result = self.forward(*input, **kwargs) File “/share/data/speech/zewei/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/loss.py”, line 372, in forward size_average=self.size_average) File “/share/data/speech/zewei/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py”, line 1179, in binary_cross_entropy return torch._C._nn.binary_cross_entropy(input, target, weight, size_average) RuntimeError: reduce failed to synchronize: device-side assert triggered an it comes with 128 assertion fails like the following: /pytorch/torch/lib/THCUNN/BCECriterion.cu:30: Acctype bce_functor<Dtype, Acctype>::operator()(Tuple) [with Tuple = thrust::detail::tuple_of_iterator_references<thrust::device_reference, thrust::device_reference, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, Dtype = float, Acctype = float]: block: [0,0,0], thread: [125,0,0] Assertion input >= 0. && input <= 1. failed. /pytorch/torch/lib/THCUNN/BCECriterion.cu:30: Acctype bce_functor<Dtype, Acctype>::operator()(Tuple) [with Tuple = thrust::detail::tuple_of_iterator_references<thrust::device_reference, thrust::device_reference, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, Dtype = float, Acctype = float]: block: [0,0,0], thread: [126,0,0] Assertion input >= 0. && input <= 1. failed. /pytorch/torch/lib/THCUNN/BCECriterion.cu:30: Acctype bce_functor<Dtype, Acctype>::operator()(Tuple) [with Tuple = thrust::detail::tuple_of_iterator_references<thrust::device_reference, thrust::device_reference, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type, thrust::null_type>, Dtype = float, Acctype = float]: block: [0,0,0], thread: [127,0,0] Assertion input >= 0. && input <= 1. failed.
st81945
Solved by richard in post #2.
st81946
Could you check that your input tensor has values that are between 0 and 1? You could add the following line right before your loss call: assert (input >= 0. & input <= 1.).all() and see if it's ever triggered.
st81947
@richard I have tried your solution but it did not work for me; I got an error like: assert (input >= 0. & input <= 1.).all() TypeError: unsupported operand type(s) for &: ‘float’ and ‘builtin_function_or_method’
st81948
Hi, I tried your solution but I get an error below: TypeError: unsupported operand type(s) for &: 'float' and 'Tensor' could you give me more advice?
st81949
In my case, this solution did not work, so I just debugged my code line by line and found some value errors. So debugging your code first is the better solution; this check is only effective in some specific cases.
st81950
I got this error, but it was because my data had an Inf in it (being between 0 and 1 was not the real problem; this data is usually higher than 1 and trains fine).
st81951
do assert (x.data.cpu().numpy().all() >= 0. and x.data.cpu().numpy().all() <= 1.)
st81952
I had the problem too, but the inspirations above helped me. In the end, I forgot to pass the Sigmoid function (or whatever function you use to get the probability) prob_predict = torch.nn.Sigmoid()(logits_predict)
st81953
Assuming you have an N * M tensor, you can use the following code as a sanity check: assert(all([val >= 0 and val <= 1 for row in x.cpu().detach().numpy() for val in row])) If this assertion fails, then you just need to normalize the output by adding a Sigmoid on x to resolve the problem. Otherwise there's some other problem.
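For reference, the same check can be written with tensor ops (a sketch; this also avoids the Python operator-precedence pitfall hit earlier in the thread):

import torch

x = torch.sigmoid(torch.randn(4, 10))                       # probabilities in [0, 1]
assert ((x >= 0) & (x <= 1)).all(), "BCE input must lie in [0, 1]"
assert torch.isfinite(x).all(), "BCE input contains NaN or Inf"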
st81954
Hello all. I have a main model class like this: class KDNN(nn.Module): def __init__(self): super(KDNN, self).__init__() self.EnE = torch.nn.Sequential( MaskedLinear(IE_dim, h_dim1, torch.LongTensor(1- Mask1.values)), nn.BatchNorm1d(h_dim1), nn.ReLU(), MaskedLinear(h_dim1, h_dim2, torch.LongTensor(1-Mask2.values)), nn.BatchNorm1d(h_dim2), nn.ReLU(), MaskedLinear(h_dim2, Out_dim, torch.LongTensor(1-Mask3.values))) def forward(self, x): output = self.EnE(x) return output And with some helps from posts here a MaskedLinear class like this (to design a layer with masked weights): class MaskedLinear(nn.Module): def __init__(self, in_dim, out_dim, mask): super(MaskedLinear, self).__init__() def backward_hook(grad): # Clone due to not being allowed to modify in-place gradients out = grad.clone() out[torch.t(self.mask)] = 0 return out self.linear = nn.Linear(in_dim, out_dim) self.mask = mask.byte() self.linear.weight.data[torch.t(self.mask)] = 0 # zero out bad weights self.linear.weight.register_hook(backward_hook) # hook to zero out bad gradients def forward(self, input): return self.linear(input) When I want to save an object of the KDNN class, I am getting an error: "Can’t pickle local object ‘MaskedLinear.init..backward_hook’ " Any suggestions? Thank you!
st81955
Why do we use data for validation if we do not update the weights with this data? Just to see how the model works on real data, so you can check it after training?
st81956
Yes, the validation accuracy gives an estimate of how the model performs on unseen data. Once your training is finished, you should test it once on a test set, as the estimate from the validation dataset might be biased, e.g. if you've used it for early stopping, model selection etc.
st81957
When using DataParallel 46 to wrap my module, do I need to do anything to also parallelize the loss functions? For example, let’s say that I have large batch size and large output tensors to compute MSE against a target. This operation would benefit from splitting the batch across multiple GPUs, but I’m not sure if the following code does that: model = MyModule() model = nn.parallel.DataParallel(model, device_ids=range(args.number_gpus)) model.cuda() output = model(data) criterion = nn.MSELoss() criterion.cuda() loss = criterion(output, target) loss.backward() optimizer.step() This is a simplification based on imagenet example 111. In my case, I have a much bigger custom loss module that includes some calls to a VGG network to estimate perceptual loss, and I’m not sure if I am maximizing performance. I tried computing loss as part of the forward function in MyModule, but this led to recursion errors during the backward step.
st81958
It would. If you're worried about that you can put your DataParallel around your model + loss function. But depending on how many parameters you have in your fully connected layer, it might not work out in terms of speed.
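A sketch of that wrapping (toy model and shapes, not from the thread): the loss is computed inside forward so each replica returns a 1-element tensor, and the gathered losses are averaged before backward.

import torch
import torch.nn as nn

class ModelWithLoss(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model
        self.criterion = nn.MSELoss()

    def forward(self, data, target):
        output = self.model(data)
        return self.criterion(output, target).unsqueeze(0)   # shape (1,) per replica

model = nn.Sequential(nn.Linear(100, 100), nn.ReLU(), nn.Linear(100, 10))
wrapped = nn.DataParallel(ModelWithLoss(model)).cuda()

data = torch.randn(64, 100).cuda()
target = torch.randn(64, 10).cuda()

loss = wrapped(data, target).mean()   # average the per-GPU losses
loss.backward()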
st81959
Hi, if I wrap the loss within forward, let's say I run on 2 GPUs, the forward function will return [loss1, loss2]. Should I sum loss1 + loss2 and then call backward, or call loss1.backward() and loss2.backward()?
st81960
the forward function will receive output = torch.cat([loss1, loss2]), so you can do output.backward(torch.ones(2))
st81961
If output is [loss1, loss2], can I get the final loss as output.sum() ? And then do loss.backward().
st81962
Why wouldn’t you just get the mean like most loss functions do on regular batches? i.e. loss = criterion(output, target).mean() loss.backward() seems to work fine
st81963
we are passing a gradient of ones to the backward. Usually, if it’s a scalar output loss, and you do loss.backward(), it’s implied that it’s loss.backward(torch.ones(1)). Because, in this case the loss is actually two elements, output.backward() will give an error asking for gradients.
st81964
I would have the same question as the guys before, Can I get the final loss by output.sum() and then do loss.backward()? (I saw some blog posts doing that way.) Is that different from what you suggested here?
st81965
Same question here, did you find any difference between using sum() and loss.backward(torch.ones(2))?
st81966
I have 3 neural networks, A, B, C. A and B have different architectures, but I want C to have the same architecture as B, yet with different weight and bias initialization, and its parameters should be updated differently. If I do C = B then both would be the same neural network with parameters getting updated in the same way. How do I ensure that both have different parameters but the same architecture?
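A sketch of two common options (the class and layer sizes below are invented for illustration): instantiate the class a second time for an independent random init, or deep-copy and re-initialize.

import copy
import torch.nn as nn

class NetB(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

    def forward(self, x):
        return self.fc(x)

B = NetB()

# Option 1: a fresh instance -> same architecture, independent randomly initialized weights.
C = NetB()

# Option 2: deep-copy B, then re-initialize so C's weights differ from B's.
C = copy.deepcopy(B)
for m in C.modules():
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        nn.init.zeros_(m.bias)

# B and C can now be given separate optimizers and updated differently.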
st81967
import torchvision, torch.nn as nn, torch, torchsummary class CombineBase(nn.Module): def __init__(self, ModelOne, ModelTwo): super().__init__() self.modelone = ModelOne.to('cuda') self.modeltwo = ModelTwo.to('cuda') self.lin_one = nn.Linear(2000, 1000) self.lin_two = nn.Linear(1000, 10) self.softmax = nn.Softmax(dim=-1) def forward(self, x): out = torch.cat((self.modelone(x), self.modeltwo(x)), dim=-1) out = self.softmax(self.lin_two(self.lin_one(out))) return out class CombineMiddle(nn.Module): def __init__(self, ModelOne, ModelTwo): super().__init__() self.modelone = ModelOne.to('cuda') self.modeltwo = ModelTwo.to('cuda') self.lin = nn.Linear(20, 10) self.softmax = nn.Softmax(dim=-1) def forward(self, x): out = torch.cat((self.modelone(x), self.modeltwo(x)), dim=-1) out = self.softmax(self.lin(out)) return out level_one = [torchvision.models.alexnet(pretrained=False) for i in range(16)] def one_level(combination_type, number, level): return [combination_type(level[i], level[i+1]) for i in range(number) if i%2==0] level_two = one_level(CombineBase, 16, level_one) level_three = one_level(CombineMiddle, 8, level_two) level_four = one_level(CombineMiddle, 4, level_three) top_level = CombineMiddle(level_four[0], level_four[1]) torchsummary.summary(top_level, (3, 128, 128), batch_size=100) would this ensure weights of each neural network get updated differently. also it gives some error when using cuda, expected backend CPU got CUDA, for #4 ‘mat1’