st45568
Thanks @ptrblck, so the training will run in a different process than the multiprocessing DataLoader workers, right? From your description, if training is slower than data loading, then we basically get continuous training and the loading time will be hidden? Also, if I use DataParallel, which as I understand uses multithreading, how will this multithreaded data parallelism work with the multi-process data loader? Is it still the case that the multi-process data loader loads the data into a queue, and the training process (a different process) spins up multiple threads, one per GPU, to train?
st45569
Yes, the main process would execute the training loop, while each worker will be spawned in a new process via multiprocessing. nn.DataParallel and the DataLoader do not interfere with each other. Also yes, if the loading pipeline is faster than the training, the data loading time would be “hidden”.
st45570
Thanks, we tested our data loading without training and it is very fast. But we also put some timestamps in our code and found that training time is only part of the total time. Is that because the GPU is async and our time recording may not be that accurate? (See pytorch.org: CUDA semantics — PyTorch 1.7.0 documentation.)

```python
start_time = datetime.now()
for loop in range(0, config['epochs']):
    for fi, batch in enumerate(my_data_loader):
        train_time = datetime.now()
        train()
        train_endtime = datetime.now()
total_endtime = datetime.now()
```
st45571
CUDA operations are asynchronous, so you won’t capture their runtime and it will be accumulated in the next blocking operation. You can profile the complete code e.g. with Nsight Systems and check the timeline to narrow down the bottleneck, if your current profiling with timers isn’t giving enough information (or use the PyTorch profiler and create the timeline output).
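A minimal sketch of timer-based profiling that accounts for CUDA's asynchronous execution (the model and shapes here are placeholders, not from the original code):

```python
import time
import torch

model = torch.nn.Linear(1024, 1024).cuda()
x = torch.randn(64, 1024, device='cuda')

# flush pending kernels so earlier work isn't attributed to this timer
torch.cuda.synchronize()
start = time.perf_counter()

out = model(x)

# wait for the forward kernels to finish before stopping the timer
torch.cuda.synchronize()
print(f'forward took {time.perf_counter() - start:.6f}s')
```

Without the second synchronize, the timer would only measure the (fast) kernel launch, and the actual runtime would show up in the next blocking call.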
st45572
Hi @ptrblck, I did some tests with the dataloader and the pipeline. The tests look like this:

Experiment 1: with preloaded data, the data is loaded into memory first:

```python
replay_mem = {}
for fi, batch in enumerate(my_data_loader):
    replay_mem[fi] = batch

# Training with all data in memory
for i in range(0, epoch):
    for fi, batch in enumerate(replay_mem.items()):
        Train(batch)
```

Experiment 2: without preloading, the data is loaded via the dataloader:

```python
for i in range(0, epoch):
    for fi, batch in enumerate(my_data_loader):
        Train(batch)
```

Experiment 1 takes less time, and the time difference compared with experiment 2 is about the data loading time (when I just time the data loading). I imagine if we have a pipeline where training just consumes from the multi-worker data loader, and the training time is higher than the multi-worker loading time, then the data loader time will be completely hidden? (Data loading is always faster, and thus training always has data?)
st45573
Yes, if the model training takes more time than loading and processing the next batch, the data loading time will be hidden and comes “for free” (The next epoch would create new workers, which would have to start creating new batches in the default setup. You could use persistent_workers=True to avoid this behavior).
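A minimal sketch of that flag, assuming a toy TensorDataset (persistent_workers requires num_workers > 0 and PyTorch >= 1.7):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 10), torch.randint(0, 2, (1000,)))

loader = DataLoader(dataset, batch_size=64, num_workers=4,
                    persistent_workers=True)  # workers survive between epochs

for epoch in range(3):
    for data, target in loader:
        pass  # training step would go here
```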
st45574
Hello, I read the documentation for cross entropy loss, but could someone possibly give an alternative explanation? Or even walk through a small example of a 2x2 prediction and a 2x2 target, i.e. if target = [0, 1; 2, 2] and pred = [0, 0; 1, 1], what would the step-by-step calculation of the cross entropy loss be? I’ve been having trouble finding an explanation relevant to semantic segmentation.
st45575
Here is a code snippet showing the PyTorch implementation and a manual approach. Note that I’ve used for loops to show how this loss can be calculated and that the difference between a standard multi-class classification and a multi-class segmentation is just the usage of the loss calculation on each pixel. You should not use this code, as it’s way slower than the internal implementation (also the numerical stability is worse, as I haven’t used log_softmax):

```python
# setup
criterion = nn.CrossEntropyLoss()
batch_size = 2
nb_classes = 4

output = torch.randn(batch_size, nb_classes)
target = torch.randint(0, nb_classes, (batch_size,))
loss = criterion(output, target)

# manual calculation
loss_manual = 0.
for idx in range(output.size(0)):
    # get current logit from the batch
    logit = output[idx]
    # get target from the batch
    t = target[idx]
    loss_manual += -1. * logit[t] + torch.log(torch.sum(torch.exp(logit)))

# calculate mean loss
loss_elements = output.size(0)
loss_manual = loss_manual / loss_elements
print(torch.allclose(loss, loss_manual))
> True

# for segmentation
h, w = 4, 4
output = torch.randn(batch_size, nb_classes, h, w)
target = torch.randint(0, nb_classes, (batch_size, h, w,))
loss = criterion(output, target)

# manual calculation
loss_manual = 0.
for idx in range(output.size(0)):
    for h_ in range(h):
        for w_ in range(w):
            # get current logit from the batch
            logit = output[idx, :, h_, w_]
            # get target from the batch
            t = target[idx, h_, w_]
            loss_manual += -1. * logit[t] + torch.log(torch.sum(torch.exp(logit)))

# calculate mean loss
loss_elements = (output.size(0) * output.size(2) * output.size(3))
loss_manual = loss_manual / loss_elements
print(torch.allclose(loss, loss_manual))
> True
```
st45576
ptrblck:

print(torch.allclose(loss, loss_manual))

Thank you for the thorough response. A few follow-up questions: What is the mean loss used for? Just to print out as the accuracy metric? The more specific loss values (not averaged) are used in backpropagation, correct? For some reason my network is always preferring whatever is the first label in a multi-label segmentation… For example, if I were segmenting pixels as dog, cat, background, and the dog pixels were label ‘1’ in training, all pixels would be classified as dog; but the same is true if I assigned cat to 1. I realize this is very unusual. Have you ever heard of something like this before?
st45577
No, the mean loss is used in the backward pass and thus to calculate the gradients. You usually use the mean, since the sum of the loss would e.g. depend on the batch size and you would have to adapt the learning rate based on the batch size. No, I haven’t heard of it before and I would assume your model overfits to the majority class. Is this behavior reproducible, i.e. are you seeing the same “label preference” using different seeds?
st45578
Oh wow, for some reason I thought individual loss values (pixel/voxel-wise) were used; I thought the network would need more information about where specifically/on which labels things went wrong in the prediction, which wouldn’t be encapsulated in the average. That is very good to know. I also thought it was overfitting to the majority class, but I swapped the labels every which way and no matter what it preferred label 1. I used https://github.com/wolny/pytorch-3dunet, which is not my repo, and none of the examples include multi-class segmentation, but it does support cross entropy loss, so I’m not too sure.
st45579
The second issue is still interesting, as it smells a bit like a code bug. Would you be able to write a code snippet to reproduce this issue (either with random data or with a torchvision dataset)?
st45580
In the code, is loss_manual += -1. * logit[t] + torch.log(torch.sum(torch.exp(logit))) equivalent to the formula in the attached image? [image: a cross-entropy formula, 869×203] That formula works when the outputs are probabilities summing to 1 (after a softmax layer?). I will try to reproduce the issue as well.
st45581
Not necessarily, as the posted formula looks like the “positive” part of the binary cross-entropy loss. Let me know if you were able to create a code snippet to reproduce this issue.
st45582
When I train the model several times in a single process, the resulting models are always consistent. But when I use DDP to train the model, the model is different every time. This makes me very confused! Here is the training code:

```python
#!/usr/bin/env python
import sys
import os
import numpy as np
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms


class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, 5, padding=1)
        self.conv2 = nn.Conv2d(64, 64, 3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv3 = nn.Conv2d(64, 128, 3, padding=1)
        self.conv4 = nn.Conv2d(128, 128, 3, padding=1)
        self.fc1 = nn.Linear(128 * 7 * 7, 1024)
        self.fc2 = nn.Linear(1024, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.pool(self.conv2(x)))
        x = F.relu(self.conv3(x))
        x = F.relu(self.pool(self.conv4(x)))
        x = x.view(-1, 128 * 7 * 7)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return x


os.environ["NCCL_IB_DISABLE"] = '1'

init_method = sys.argv[1]
world_size = int(sys.argv[2])
rank = int(sys.argv[3])
torch.distributed.init_process_group(
    backend="nccl", init_method=init_method,
    world_size=world_size, rank=rank - 1)

train_data = datasets.CIFAR10(root='./data', train=True, download=True,
                              transform=transforms.ToTensor())
train_sampler = torch.utils.data.distributed.DistributedSampler(
    train_data, num_replicas=world_size, rank=rank - 1)
train_loader = torch.utils.data.DataLoader(
    train_data, batch_size=64, pin_memory=True, shuffle=False,
    num_workers=0, sampler=train_sampler,
    worker_init_fn=lambda x: np.random.seed(1))

test_data = datasets.CIFAR10(root='./data', train=False, download=True,
                             transform=transforms.ToTensor())
test_sampler = torch.utils.data.distributed.DistributedSampler(
    test_data, num_replicas=world_size, rank=rank - 1)
test_loader = torch.utils.data.DataLoader(
    test_data, batch_size=64, shuffle=False, num_workers=0,
    sampler=test_sampler)

torch.manual_seed(2)
torch.cuda.manual_seed_all(2)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(1)
random.seed(1)

for i in range(torch.cuda.device_count()):
    device = torch.device("cuda:" + str(i))
    x = torch.zeros(1)
    try:
        x = x.to(device)
        break
    except RuntimeError:
        pass
torch.cuda.set_device(device)

model = CNN()
model.to(device)
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[device])
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

model.train()
for epoch in range(2):
    iter = 0
    for inputs, labels in train_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        iter += 1
        print("Epoch=%d iter=%d loss=%.4f" % (epoch, iter, loss.item()))

for name, para in model.named_parameters():
    text = name + ":"
    para = para.view(-1)
    for i in range(min(para.size(0), 32)):
        text += " %.10f" % para[i].item()
    print(text)

correct, tot = 0, 0
model.eval()
with torch.no_grad():
    for inputs, labels in test_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        _, predicted = torch.max(outputs.data, 1)
        tot += labels.size(0)
        correct += (predicted == labels).sum().item()
print(correct / tot * 100)
```
st45583
Finally, I found that the randomness is caused by NCCL rings. Fixing the rings makes it deterministic.
st45584
I am given a batch of row vectors stored in the matrix U, a batch of column vectors stored in the matrix V, and a single matrix M. For each row vector u in U and each column vector v in V, I want to compute the sum of the matrix products u·M·v over the batch. How can I efficiently implement this (potentially using bmm(), matmul(), or maybe even einsum)? Here is a small toy example doing what I want with a for loop:

```python
import torch

U = torch.arange(1, 10).reshape(3, 3)
V = torch.arange(1, 10).reshape(3, 3)
M = torch.tensor([1, 2, 3]).repeat(3, 1)

result = 0
for u, v in zip(U, V):
    result += torch.matmul(torch.matmul(u, M), v)

result: tensor(1764)
```

I know there is torch.bmm() to perform batch matrix-matrix multiplication. If there was something similar for a batch vector dot product (e.g. torch.bvv()) I could do bvv(matmul(U, M), V).
st45585
Solved by tom in post #6 torch.einsum('bi,ij,bj', U, M, V) if you want the sum, 'bi,ij,bj->b' if you prefer the batch items separately. Best regards Thomas
st45586
Isn’t it nothing but twice bmm()?

```python
import torch

U = torch.arange(1, 10).reshape(1, 3, 3)
V = torch.arange(1, 10).reshape(1, 3, 3)
M = torch.tensor([1, 2, 3]).repeat(3, 1).view(1, 3, 3)

result = torch.bmm(U, M).bmm(V)

result:
tensor([[[ 180,  216,  252],
         [ 450,  540,  630],
         [ 720,  864, 1008]]])
```
st45587
InnovArul: Isn’t it nothing but twice bmm()? — I don’t think that is correct. Note that the result should be a scalar.
st45588
In that case, you can call .sum() on the result to get a scalar. Basically, every element of the result matrix contains the multiplication result of a combination of u, M, v. Maybe you can work out the math and check result.sum() to see if it is correct.
st45589
torch.einsum('bi,ij,bj', U, M, V) if you want the sum, 'bi,ij,bj->b' if you prefer the batch items separately. Best regards Thomas
st45590
tom: torch.einsum('bi,ij,bj', U, M, V) if you want the sum, 'bi,ij,bj->b' if you prefer the batch items separately. Best regards Thomas Perfect, thanks! Einsum is really neat, I took the time to get familiar with it and came up with the same result.
st45591
I faced a similar problem and noticed that a faster way of doing torch.einsum('bi,ij,bj->b', U, M, V) is torch.sum(U @ M * V, dim=1).
st45592
For this particular case, the specialized bilinear function might be a good thing to use.
st45593
Thanks for pointing out the bilinear function, it’s very useful and I wasn’t aware of it. Though, in my case, I had to do a bit of squeeze/unsqueeze-ing to get it to work. The dimensions are exactly as in my einsum example, namely (B,M), (B,N), and (M,N) for U, V, and W, respectively. Calling bilinear(U,V,W) in this case requires W to have a dim of (Y,M,N), where Y is the number of out-features, so I had to call it like this bilinear(U,V,W.unsqueeze(0)).squeeze() to make it equivalent to torch.sum(U @ W * V, dim=1). In addition to that, torch.sum(U @ W * V, dim=1) seems to be about ~1.5 times faster than bilinear(U,V,W.unsqueeze(0)).squeeze() in my case, though I’m not sure why.
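For reference, a small sketch checking that the three formulations agree on random inputs with the shapes described above (names are illustrative):

```python
import torch
import torch.nn.functional as F

B, M, N = 8, 5, 7
U = torch.randn(B, M)
V = torch.randn(B, N)
W = torch.randn(M, N)

a = torch.einsum('bi,ij,bj->b', U, W, V)
b = torch.sum(U @ W * V, dim=1)
# F.bilinear expects weight of shape (out_features, in1_features, in2_features)
c = F.bilinear(U, V, W.unsqueeze(0)).squeeze(1)

print(torch.allclose(a, b, atol=1e-5), torch.allclose(a, c, atol=1e-5))
```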
st45594
In the Mask R-CNN paper the optimizer is described as follows, training on the MS COCO 2014/2015 dataset for instance segmentation (I believe this is the dataset, correct me if this is wrong):

"We train on 8 GPUs (so effective minibatch size is 16) for 160k iterations, with a learning rate of 0.02 which is decreased by 10 at the 120k iteration. We use a weight decay of 0.0001 and momentum of 0.9. With ResNeXt [45], we train with 1 image per GPU and the same number of iterations, with a starting learning rate of 0.01."

I’m trying to write an optimizer and learning rate scheduler in PyTorch for a similar application, to match this description. For the optimizer I have:

```python
def get_Mask_RCNN_Optimizer(model, learning_rate=0.02):
    optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate,
                                momentum=0.9, weight_decay=0.0001)
    return optimizer
```

For the learning rate scheduler I have:

```python
def get_MASK_RCNN_LR_Scheduler(optimizer, step_size):
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=step_size,
                                                gamma=0.1, verbose=True)
    return scheduler
```

When the authors say “decreased by 10”, do they mean divide by 10? Or do they literally mean subtract 10, in which case we would have a negative learning rate, which seems odd/wrong. Any insights appreciated.
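For what it's worth, gamma in StepLR is a multiplicative factor, so gamma=0.1 divides the learning rate by 10 at each step, which matches the conventional reading of "decreased by 10". A small sketch (the model here is a placeholder):

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.02,
                            momentum=0.9, weight_decay=0.0001)
# lr -> lr * gamma every step_size calls to scheduler.step()
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)

for _ in range(3):
    optimizer.step()
    scheduler.step()
    print(scheduler.get_last_lr())  # approximately [0.002], [0.0002], [2e-05]
```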
st45595
I’m trying to implement a Group Lasso penalty on a batched input of shape (batch_size, *), given groups made of lists of coordinates. As a simple example, if I have a batch of 3 grey-scale images of shape (3, 4, 4), I would define my groups as

```python
[[(0, 0), (0, 1), (1, 0), (1, 1)],
 [(2, 0), (2, 1), (3, 0), (3, 1)],
 [(0, 2), (0, 3), (1, 2), (1, 3)],
 [(2, 2), (2, 3), (3, 2), (3, 3)]]
```

Then, I’d like to get the batch-wise group lasso penalty of shape (batch_size,). Each coordinate of this corresponds to one datapoint. For datapoint 0, this would be

```python
torch.norm(torch.tensor([x[0, 0, 0], x[0, 0, 1], x[0, 1, 0], x[0, 1, 1]])) + # norm over group 2 + ...
```

I can’t figure out how to index x to get this (see the sketch below). As a second part, I’d like the groups to be defined starting from the last dimensions. Supposing here that x has shape (batch_size, n_channels, M, N), I want to be able to define my groups only using the last 2 dimensions, and sum the values over the channels.
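One possible way to index the groups — a sketch under the assumption that each group is a list of (row, col) coordinates as above; for a (batch, channels, M, N) input you would index x[:, :, rows, cols] and reduce over the channel dim as well:

```python
import torch

batch_size = 3
x = torch.randn(batch_size, 4, 4)

groups = [[(0, 0), (0, 1), (1, 0), (1, 1)],
          [(2, 0), (2, 1), (3, 0), (3, 1)],
          [(0, 2), (0, 3), (1, 2), (1, 3)],
          [(2, 2), (2, 3), (3, 2), (3, 3)]]

penalty = torch.zeros(batch_size)
for g in groups:
    rows = torch.tensor([r for r, _ in g])
    cols = torch.tensor([c for _, c in g])
    # advanced indexing over the last two dims -> shape (batch_size, len(g))
    penalty = penalty + x[:, rows, cols].norm(dim=1)

print(penalty.shape)  # torch.Size([3])
```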
st45596
I recently encountered a situation where some of the model parameters are not updated during certain iterations. The unused parameters are those not in the computation graph (after backward(), the gradients of those unused parameters are None). I find the training result is different when I do not have those unused parameters. The only reason I can think of is the optimizer, Adam, and maybe other adaptive learning rate optimizers, because those optimizers take all model parameters as input when initialized but I only update part of them. Does anyone know how I should resolve this issue? I am not sure whether the reason I propose above is correct.
st45597
Solved by ptrblck in post #20 No, the manual seed is not the issue. I’ve just used it in my first example to show, that the optimizer does not have any problems optimizing a model with unused parameters. Even if we copy all parameters between models, the optimizer works identically. So back to your original question. The discr…
st45598
Could you post a small code snippet showing how these “unused” parameters are created? I would like to reproduce this issue and debug a bit as currently I can only speculate about the reason.
st45599
For example, I have a maximum number of nn.Conv2d layers. However, the forward pass may use only the first few layers in the computation. How many layers to use is an input to the forward pass. In other words, the depth of the model is dynamic. For now, I only test with a fixed depth which is smaller than the maximum depth. Those nn.Conv2d layers are created with a for loop and stored in a list. Then I use nn.ModuleList to wrap the list. I am not sure if this phenomenon indeed exists.
st45600
As long as you store the Modules in a ModuleList they should be properly registered. Are these modules missing in the state_dict? If so, could you post the model definition?
st45601
Sorry! Now I get the issue. Your parameters are all properly registered but unused. In the case where unused parameters (never called in forward) are in the model, the performance is worse compared to the plain model without the unnecessary parameters. Let me try to debug it.
st45602
Thank you so much. Sorry for the confusion I made. I am not sure whether this phenomenon exists or it is just an illusion. I use Adam as the optimizer and PyTorch 0.4.1.
st45603
I created a small code snippet comparing two models. One model uses all its modules, while the other one has some unused modules. The code passes for 0.4.1 and 1.0.0.dev20181014. You can find the code here. Let me know if you can spot some differences between your implementation and mine.
st45604
I actually modified your code a little bit and I can reproduce the error with nn.ModuleList(). I also tried SGD, and it has the same error. Can you help me verify this? You can find the code here
st45605
Thanks for the code update! The difference in your implementation is due to the order of instantiation of the layers. Since you are creating the unused conv layers before the linear layer, the PRNG will have additional calls and the weights of the linear layers will differ. If you change the __init__ method of MyModelUnused to

```python
super(MyModelUnused, self).__init__()
self.conv_list = nn.ModuleList()
self.conv_list.append(nn.Conv2d(3, 6, 3, 1, 1))
self.conv_list.append(nn.Conv2d(6, 12, 3, 1, 1))
self.pool1 = nn.MaxPool2d(2)
self.pool2 = nn.MaxPool2d(2)
self.fc = nn.Linear(12*6*6, 2)
self.conv_list.append(nn.Conv2d(12, 24, 3, 1, 1))
self.conv_list.append(nn.Conv2d(24, 12, 3, 1, 1))
```

you’ll get the same results again. Alternatively, you could set the seed before instantiating each layer.
st45606
Can you explain what PRNG is? I am still confused. Why does the ordering of the initialization matter? I never saw documentation on the ordering of module initialization. Should I always initialize the ModuleList first? Is there a disciplined way of doing this so that I can avoid this kind of problem?
st45607
Sorry for being not clear enough. By PRNG I mean the Pseudorandom Number Generator. The ordering just matters for the sake of debugging, as we are dealing with pseudorandom numbers. In order to compare the weights and gradients, we should make sure both models have the same parameters. One way would be to initialize one model and copy the parameters into the other. Another way is to seed the PRNG for both models and just sample the same “random” numbers. You can think about seeding the random number generation as setting a start value. All “random” numbers will be the same after setting the same seed:

```python
torch.manual_seed(2809)
print(torch.randn(5))
> tensor([-2.0748,  0.8152, -1.1281,  0.8386, -0.4471])
print(torch.randn(5))
> tensor([-0.5538, -0.8776, -0.5635,  0.5434, -0.8192])

torch.manual_seed(2809)
print(torch.randn(5))
> tensor([-2.0748,  0.8152, -1.1281,  0.8386, -0.4471])
print(torch.randn(5))
> tensor([-0.5538, -0.8776, -0.5635,  0.5434, -0.8192])
```

Although we call torch.randn, we get the same “random” numbers in the successive calls. Now if you add the unused layers before the linear layer, the PRNG will get an additional call to sample the parameters of these layers, which will influence the linear layer parameters. Usually, you don’t have to think about these issues. As I said, it’s just to debug your issue.
st45608
Thank you so much. I moved the module list after all other modules and it works. If I understand correctly, since it only affects the PRNG, it should not create a performance issue. However, the performance will slightly differ for each run because of the PRNG, even if we seed in advance.
st45609
Are you getting the same or comparable results now? I still think the unused parameters are no problem for the optimizer and your results should be comparable. Note that I performed the tests on CPU and seeded unusually often just for the sake of debugging. If you don’t properly seed or use the GPU with non-deterministic operations, you will get slight differences.
st45610
After I moved conv_list behind all other modules, it passed the test. However, I’m not sure this is valid in other cases. For example, I have an encoder and decoder and either of them may have a variable depth. Then I have a large wrapper module for the encoder and decoder. Even if I move the initialization of conv_list inside the encoder and decoder themselves, it might still have problems because of the wrapper module. For seeding before instantiating each layer, could you give me an example? I understand I will get differences, but I am just wondering whether this effect degrades performance in general. If this only affects debugging, it might not be a big issue.
st45611
I think the seeding approach is getting cumbersome for more complex use cases. Let’s just initialize the model with unused modules and load the parameters in the “complete” model. Could you add this code to the gist and compare the results again?

```python
def copy_params(modelA, modelB):
    modelA_dict = modelA.state_dict()
    modelB_dict = modelB.state_dict()
    equal_dict = {k: v for k, v in modelB_dict.items() if k in modelA_dict}
    modelA.load_state_dict(equal_dict)

modelA = MyModel()
modelB = MyModelUnused()
copy_params(modelA, modelB)

# Check weights for equality
check_params(modelA, modelB)
```
st45612
The results are the same if we copy the parameters. I found another problem: if I have two module lists, each initialized with two different methods, moving the module lists behind the fc layer still gives errors. You can see the code here
st45613
I’ve checked your code and it seems you are copying the parameters between models and also using the seeding approach afterwards. Just remove the second approach, as it’s not a good fit anymore regarding the construction of your models. Remove these lines and you’ll get the same results:

```python
torch.manual_seed(2809)
modelA = MyModel()
torch.manual_seed(2809)
modelB = MyModelUnused()
```
st45614
No, the manual seed is not the issue. I’ve just used it in my first example to show that the optimizer does not have any problems optimizing a model with unused parameters. Even if we copy all parameters between models, the optimizer works identically. So back to your original question. The discrepancy you are observing is not due to some unused parameters in your model. However, if your whole training procedure is very sensitive to the initialization, and therefore to the seeding as well, you might get these results. To debug the problem I would suggest the following: Compare the results of your “good” model using several random seeds at the beginning of your script. If the accuracy stays approximately the same, it should be alright. If you see an accuracy drop for different seeds, I would suggest using some weight init functions and seeing if we can stabilize the performance. Create your good model and save the state_dict after initializing the model. Create another script using your model containing unused layers and load the state_dict for the common layers. Then train this model and see how the performance compares to the initial model. I’m still in doubt that the optimizer is causing this issue.
st45615
Hi, I am using distributed mode to train my model; the launch command is like this:

```
python -m torch.distributed.launch --nproc_per_node=4 train.py
```

In theory, this starts 4 processes. In my program, each process generates a list of strings:

process 1: a = ['a', 'b', 'c']
process 2: a = ['1', '2', '3']
...

I need to merge the lists into one whole list and share it among the processes, so that after this operation each process has a list whose content is a = ['a', 'b', 'c', '1', '2', '3']. How could I do this, please? By the way, I noticed that there is a function named torch.cuda.synchronize(); will this function ensure that all the processes are synchronized at this line, or does it only ensure synchronization of the backend operations without considering the Python frontend operations?
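A sketch of one way to merge Python lists across ranks, assuming torch.distributed.all_gather_object is available (PyTorch >= 1.8) and the process group is already initialized; note torch.cuda.synchronize() only waits for CUDA work on the local device and does not synchronize processes (dist.barrier() does that):

```python
import torch.distributed as dist

# each rank has its own local list, e.g. ['a', 'b', 'c'] on rank 0
local_list = ['a', 'b', 'c'] if dist.get_rank() == 0 else ['1', '2', '3']

gathered = [None] * dist.get_world_size()
dist.all_gather_object(gathered, local_list)  # gathers picklable objects

merged = [item for sublist in gathered for item in sublist]
# every rank now holds the same merged list
```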
st45616
Hi, I am trying to change the learning rate for an arbitrary single layer (which is part of a nn.Sequential block). For example, I use a VGG16 network and wish to control the learning rate of one of the fully connected layers in the classifier. Going by this link: https://pytorch.org/docs/0.3.0/optim.html#per-parameter-options, we can specify the learning rate like this:

```python
optim.SGD([
    {'params': model.base.parameters()},
    {'params': model.classifier.parameters(), 'lr': 1e-3}
], lr=1e-2, momentum=0.9)
```

But here, both base and classifier are entire blocks. In the VGG16 network for example, I want to change the learning rate for classifier[0] / classifier[3] / classifier[6], which are linear layers. Any ideas as to how that can be accomplished? VGG16 network:

```
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(7, 7))
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace)
    (2): Dropout(p=0.5)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace)
    (5): Dropout(p=0.75)
    (6): Linear(in_features=4096, out_features=10, bias=True)
    (7): Softmax()
  )
)
```
st45617
You just need to create more groups, as you did. Run

```python
for name, param in model.named_parameters():
```

filter them out, create a list of dicts, and call optim.SGD(list). The only constraint is you cannot repeat parameters; thus, if you decompose the classifier parameters you will have to assign them all by this method.
st45618
Thank you for the response. Here is what I did:

```python
my_list = ['classifier.3.weight', 'classifier.3.bias']
params = list(filter(lambda kv: kv[0] in my_list, vgg16.named_parameters()))
base_params = list(filter(lambda kv: kv[0] not in my_list, vgg16.named_parameters()))
```

And then defined the optimizer:

```python
optimizer = SGD([{'params': base_params}, {'params': params, 'lr': '1e-4'}],
                lr=3e-6, momentum=0.9)
```

However, I get the following error:

```
TypeError: optimizer can only optimize Tensors, but one of the params is tuple
```

I get the same error when I try this as well:

```python
optimizer = SGD([{'params': base_params, 'lr': 3e-6, 'momentum': 0.9},
                 {'params': params, 'lr': 1e-4, 'momentum': 0.9}])
```

I am not entirely sure what I need to change. Any ideas?
st45619
Try this, hope it helps!

```python
optimizer = SGD([{'params': model.classifier[0].parameters(), 'lr': 3e-6, 'momentum': 0.9},
                 {'params': model.classifier[1].parameters(), 'lr': 1e-4, 'momentum': 0.9}])
```
st45620
Please correct me if I am wrong, but here the learning rates have been set only for two layers of the network: classifier[0] and classifier[1]. The rest of the network doesn’t have learning rates associated with them. What I wish to accomplish is to change the learning rate for a single layer only (in a Sequential block), and have a common learning rate for the rest of the layers.
st45621
Try this:

```python
optimizer = SGD([{'params': model.classifier[0].parameters(), 'lr': 3e-6, 'momentum': 0.9}],
                model.parameters, lr=1e-2, momentum=0.9)
```
st45622
Hi, when I try this, it returns the following error:

```
TypeError: __init__() got multiple values for argument 'lr'
```

I am not quite sure what change I need to make. As @JuanFMontesinos mentioned, I think I need to specify separate parameter lists for each learning rate, though I don’t know how to do that, given the error I mentioned earlier:

```
TypeError: optimizer can only optimize Tensors, but one of the params is tuple
```

Any ideas?
st45623
I’m afraid you can only set each parameter once. The way @sai_tharun mentioned, you are passing parameter 0 twice. The problem is, once you set parameters for the classifier, you need to set all the parameters of the classifier. As I mentioned before, you need to do:

```python
for name, param in model.classifier.named_parameters():
    if name == yourlayer_name:
        List.append(...)
    else:
        Others.append(...)
```

Of course you also have to pass the rest of the network, which is not model.classifier, as you were doing.
st45624
I am not quite sure what you mean. As you can see, using the following code (similar to what @ptrblck details in the link):

```python
my_list = ['classifier.3.weight', 'classifier.3.bias']
params = list(filter(lambda kv: kv[0] in my_list, vgg16.named_parameters()))
base_params = list(filter(lambda kv: kv[0] not in my_list, vgg16.named_parameters()))
```

returns two lists: the layer(s) that require a different learning rate (classifier[3] in this case), and the rest of the network. I am having trouble passing these to the optimizer like so:

```python
optimizer = SGD([{'params': base_params}, {'params': params, 'lr': '1e-4'}],
                lr=3e-6, momentum=0.9)
```

I think this is incorrect, since it gives me errors. Any thoughts @ptrblck, @JuanFMontesinos?
st45625
partially_observed:

my_list = ['classifier.3.weight', 'classifier.3.bias']
params = list(filter(lambda kv: kv[0] in my_list, vgg16.named_parameters()))

Hi, you are also passing the names that way, as params contains tuples of (name, parameter). Could you please use:

```python
from torchvision.models import vgg16
from torch.optim import SGD

model = vgg16()
my_list = ['classifier.3.weight', 'classifier.3.bias']
params = list(map(lambda x: x[1],
                  list(filter(lambda kv: kv[0] in my_list, model.named_parameters()))))
base_params = list(map(lambda x: x[1],
                       list(filter(lambda kv: kv[0] not in my_list, model.named_parameters()))))
optimizer = SGD([{'params': base_params}, {'params': params, 'lr': '1e-4'}],
                lr=3e-6, momentum=0.9)
```

This is ok, I promise
st45626
Hi, I tried this but it is giving me the error: “TypeError: SGD() got multiple values for argument ‘lr’”. Do you have any suggestion on what the problem could be and how to solve it? Thanks and regards, Charvi
st45627
JuanFMontesinos:

optimizer = SGD([{'params': base_params}, {'params': params, 'lr': '1e-4'}], lr=3e-6, momentum=0.9)

Is it possible you are repeating the 'lr' argument in any of the dictionaries you are passing?
st45628
It does not look like that. I have used it in exactly the same way:

```python
my_list = ['classifier.3.weight', 'classifier.3.bias']
params = list(map(lambda x: x[1],
                  list(filter(lambda kv: kv[0] in my_list, model.named_parameters()))))
base_params = list(map(lambda x: x[1],
                       list(filter(lambda kv: kv[0] not in my_list, model.named_parameters()))))
optimizer = SGD([{'params': base_params}, {'params': params, 'lr': '1e-4'}],
                lr=3e-6, momentum=0.9)
```

params and base_params do not contain the names of the parameters involved, but I printed their shapes and they seem to be distinct. I also tried

```python
optimizer = SGD([{'params': base_params, 'lr': '3e-6'},
                 {'params': params, 'lr': '1e-4'}], momentum=0.9)
```

This gave me the error TypeError: '<' not supported between instances of 'list' and 'float' at the start of the training loop:

```
File "xxx.py", line 251, in main
    with Trainer(model, optimizer, F.cross_entropy, scheduler=scheduler, callbacks=_callbacks) as trainer:
File "/usr/local/lib/python3.8/site-packages/homura/trainers.py", line 519, in __init__
    super(SupervisedTrainer, self).__init__(model, optimizer, loss_f, callbacks=callbacks, scheduler=scheduler,
File "/usr/local/lib/python3.8/site-packages/homura/trainers.py", line 121, in __init__
    self.set_optimizer()
File "/usr/local/lib/python3.8/site-packages/homura/trainers.py", line 422, in set_optimizer
    self.optimizer = optimizer(self.model.parameters())
File "/usr/local/lib/python3.8/site-packages/torch/optim/sgd.py", line 57, in __init__
    if lr is not required and lr < 0.0:
TypeError: '<' not supported between instances of 'list' and 'float'
```

Any suggestions are welcome and deeply appreciated!
st45629
Sorry for the late reply, but without code it seems everything points to a mistake. Can you post some standalone code?
st45630
Hi, the problem got solved. I was using a GitHub repository as my base code, and the problem was that I was using the optim function defined by them instead of torch.optim. I changed it to torch.optim and followed your method. It works very well. Sorry for the confusion and thanks for the help.
st45631
Hi guys, I want to put some structure on the weight matrix, so I have something like this in the forward function:

```python
class myNN(nn.Module):
    def __init__(self):
        super(myNN, self).__init__()
        self.weight = nn.Linear(a, b)

    def transform(self):
        transformed_weight = do_something(self.weight)
        return transformed_weight

    def forward(self, x):
        transformed_weight = self.transform()
        x = do_something2(transformed_weight, x)
        return x
```

do_something() and do_something2() create some intermediate tensors (transformed_weight in forward() is also intermediate); will they be freed after forward() returns? I keep getting a CUDA out of memory error after several iterations. Thanks.
st45632
wasabi:

do_something(self.weight)

It is weird to do_something() on modules; it is only correct to transform module parameters (but it is ok if you only use a module as a container). Otherwise, maybe you should use stuff from nn.functional instead (see the sketch below). Intermediate tensors are only released before backward() if they’re not needed for gradient computations; generally, multipliers are needed, but summands are not (thus in-place summations usually work).
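A minimal sketch of the functional approach, with a placeholder transform standing in for do_something (the original do_something/do_something2 are unspecified):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyNN(nn.Module):
    def __init__(self, a, b):
        super().__init__()
        # keep the raw weight as a Parameter instead of wrapping an nn.Linear
        self.weight = nn.Parameter(torch.randn(b, a))
        self.bias = nn.Parameter(torch.zeros(b))

    def forward(self, x):
        # placeholder transform; this intermediate is kept until backward()
        # because it is needed for the gradient computation
        w = torch.tanh(self.weight)
        return F.linear(x, w, self.bias)

model = MyNN(4, 3)
out = model(torch.randn(2, 4))
out.sum().backward()
```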
st45633
Hi, I got a runtime error:

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-53-7395aace3bff> in <module>
      1 from torchsummary import summary
      2 model = MancalaModel()
----> 3 summary(model, (1, 16), device='cpu')

d:\environments\python\lib\site-packages\torchsummary\torchsummary.py in summary(model, input_size, batch_size, device)
     70         # make a forward pass
     71         # print(x.shape)
---> 72         model(*x)
     73
     74         # remove these hooks

d:\environments\python\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

<ipython-input-51-34fe4c94f943> in forward(self, x)
     29         x = F.relu(self.linear1(x))
     30         print(x.shape)
---> 31         x = F.dropout(self.batch_norm1(x), p=0.1)
     32         x = F.dropout(self.batch_norm2(F.relu(self.linear2(x))), p=0.1)
     33         x = self.batch_norm3(F.relu(self.linear3(x)))

d:\environments\python\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

d:\environments\python\lib\site-packages\torch\nn\modules\batchnorm.py in forward(self, input)
    129             used for normalization (i.e. in eval mode when buffers are not None).
    130         """
--> 131         return F.batch_norm(
    132             input,
    133             # If buffers are not to be tracked, ensure that they won't be updated

d:\environments\python\lib\site-packages\torch\nn\functional.py in batch_norm(input, running_mean, running_var, weight, bias, training, momentum, eps)
   2054         _verify_batch_size(input.size())
   2055
-> 2056     return torch.batch_norm(
   2057         input, weight, bias, running_mean, running_var,
   2058         training, momentum, eps, torch.backends.cudnn.enabled

RuntimeError: running_mean should contain 1 elements not 512
```

Here’s my code:

```python
class MancalaModel(nn.Module):

    def __init__(self, n_inputs=16, n_outputs=16):
        super().__init__()

        n_neurons = 512

        self.linear1 = nn.Linear(n_inputs, n_neurons)
        self.batch_norm1 = nn.BatchNorm1d(n_neurons)
        self.linear2 = nn.Linear(n_neurons, n_neurons)
        self.batch_norm2 = nn.BatchNorm1d(n_neurons)
        self.linear3 = nn.Linear(n_neurons, n_neurons)
        self.batch_norm3 = nn.BatchNorm1d(n_neurons)
        self.actor = nn.Linear(n_neurons, n_outputs)
        self.critics = nn.Linear(n_neurons, 1)
        self.apply(init_weights)

    def forward(self, x):
        x = F.dropout(self.batch_norm1(F.relu(self.linear1(x))), p=0.1)
        x = F.dropout(self.batch_norm2(F.relu(self.linear2(x))), p=0.1)
        x = self.batch_norm3(F.relu(self.linear3(x)))
        return F.softmax(self.actor(x)), self.critics(x)

from torchsummary import summary
model = MancalaModel()
summary(model, (1, 16), device='cpu')
```

May I know why the batch normalization layer says the size is not expected?
st45634
Solved by Redcxx in post #2 Ah I found it, the batch norm layer’s input dimension should be 1 instead of n_neurons, it is the number of channels rather than the number of features.
st45635
Ah I found it, the batch norm layer’s input dimension should be 1 instead of n_neurons, it is the number of channels rather than the number of features.
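A small sketch of the shape convention (BatchNorm1d normalizes over the channel dim, i.e. dim 1, for both (N, C) and (N, C, L) inputs):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(512)
print(bn(torch.randn(8, 512)).shape)       # (N, C): 512 features, works

bn_c1 = nn.BatchNorm1d(1)
print(bn_c1(torch.randn(8, 1, 16)).shape)  # (N, C, L): 1 channel, works
```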
st45636
Hi, I am new to this, and for most applications I have been using the DataLoader in utils.data to load batches of images. However, I am now trying to load images with different batch sizes. For example, my first iteration loads a batch of 10, the second loads a batch of 20. Is there a way to do this easily? Thank you.
st45637
You could implement a custom collate_fn for your DataLoader and use it to load your batches.
st45638
I think the easiest way to achieve this is to change the batch_size parameter of the Dataloader.
st45639
Thank you very much for your answers!! I actually found what I wanted with the sampler in this discussion, changing the batch size with a batch_size for each source (here my data_source is the concatenation of datasets, with a specific batch_size for each). Not very clean, but it seems to work.

```python
class ClusterRandomSampler(Sampler):
    r"""Takes a dataset with a cluster_indices property, cuts it into batch-sized chunks.
    Drops the extra items not fitting into exact batches.

    Arguments:
        data_source (Dataset): a Dataset to sample from. Should have a cluster_indices property
        batch_size (int): a batch size that you would like to use later with Dataloader class
        shuffle (bool): whether to shuffle the data or not
    """

    def __init__(self, data_source, batch_size=None, shuffle=True):
        self.data_source = data_source
        if batch_size is not None:
            assert self.data_source.batch_sizes is None, \
                "do not declare batch size in sampler if data source already got one"
            self.batch_sizes = [batch_size for _ in self.data_source.cluster_indices]
        else:
            self.batch_sizes = self.data_source.batch_sizes
        self.shuffle = shuffle

    def flatten_list(self, lst):
        return [item for sublist in lst for item in sublist]

    def __iter__(self):
        batch_lists = []
        for j, cluster_indices in enumerate(self.data_source.cluster_indices):
            batches = [
                cluster_indices[i:i + self.batch_sizes[j]]
                for i in range(0, len(cluster_indices), self.batch_sizes[j])
            ]
            # filter out the shorter batches
            batches = [_ for _ in batches if len(_) == self.batch_sizes[j]]
            if self.shuffle:
                random.shuffle(batches)
            batch_lists.append(batches)

        # flatten lists and shuffle the batches if necessary
        # this works on the batch level
        lst = self.flatten_list(batch_lists)
        if self.shuffle:
            random.shuffle(lst)
        return iter(lst)

    def __len__(self):
        return len(self.data_source)
```
st45640
I have been trying to use collate_fn for this purpose but haven’t figured out how, yet. Can you give any pointers? My problem right now is the sampler gives collate_fn 16 samples at a time, but I want the batch size to be 128. Is this possible with this approach?
st45641
The collate_fn is used to process the batch of samples in a custom way. It doesn’t specify the batch size, which is set in the DataLoader. Could you explain your issue a bit more, i.e. are you setting a batch size of 128 in the DataLoader and each batch contains just 16 samples?
st45642
I’m trying to replicate the original StyleGAN’s batch size schedule: 128, 128, 128, 64, 32, 16, as progressive growing is applied. I know I can recreate the DataLoader when I want to switch, but I’m working inside an existing framework that makes that a clunky change. I never did figure out how to use collate_fn here, so instead I’m initializing my DataLoader with a batch_size of 16, and in my training loop I collect and concatenate these batches until I reach the actual batch size I want at any given time. This only works because all the batch sizes are divisible by 16. I tried to do this in collate_fn at first; I thought maybe it received a generator and I could return a different generator, but that wasn’t the case. I’m still interested to know how collate_fn can be used to yield variable batch sizes; maybe it would be cleaner than my solution.
st45643
You could use this code snippet to see an example. Note that the “variable size” is usually the temporal dimension or the spatial dimensions (e.g. images with a different resolution), not the batch size.
st45644
That snippet again does not modify the batch size, which is the subject of this thread.
st45645
I’ve come across the same issue while trying to implement this functionality of StyleGAN using PyTorch Lightning, which I believe is like your use case. Any luck on your end in resolving this issue?
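For reference, one approach that avoids recreating the DataLoader is a custom batch_sampler yielding index lists of varying sizes — a sketch (the per-step schedule here is illustrative; StyleGAN switches per growth stage):

```python
import random
import torch
from torch.utils.data import DataLoader, Sampler, TensorDataset

class VariableBatchSampler(Sampler):
    """Yields index lists whose sizes follow a schedule, e.g. 128 -> 64 -> 32."""
    def __init__(self, data_len, batch_sizes):
        self.data_len = data_len
        self.batch_sizes = batch_sizes

    def __iter__(self):
        indices = list(range(self.data_len))
        random.shuffle(indices)
        i, step = 0, 0
        while i < self.data_len:
            bs = self.batch_sizes[min(step, len(self.batch_sizes) - 1)]
            yield indices[i:i + bs]  # one list of indices == one batch
            i += bs
            step += 1

dataset = TensorDataset(torch.randn(300, 3))
loader = DataLoader(dataset, batch_sampler=VariableBatchSampler(300, [128, 64, 32]))
for (batch,) in loader:
    print(batch.shape[0])  # 128, 64, 32, 32, 32, 12 (last batch is partial)
```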
st45646
When doing SGD, we can split up the update rule per parameter tensor in a for loop. Hence, for every layer in a feedforward neural network, we would update weights and biases. However, I want to implement a different version of SGD where only the k parameters corresponding to the largest gradient entries are actually updated. The crucial thing here: these should be the k maximal gradients over the entire set of model parameters. If k=1, this means we only want to update the single parameter in the network corresponding to the largest gradient entry, not the largest entry in each layer. How would I find the maximal elements of the entire gradient vector efficiently? Thanks a lot!
st45647
Hi, you will most likely have to do a for loop over all the parameters to find the one with the largest value. If you actually need a topk, it might be simpler to concatenate all the grads into a single tensor and then call topk on that tensor.
st45648
Thanks for the fast reply. That’s what I am doing now, but I thought there might be a more efficient way. I tried to use the torch.nn.utils.parameters_to_vector function, but it only works with Parameters. Given my solution below, can this be done more efficiently? My idea was to create a generator once, and then in every layer-step of SGD I yield the relevant entries of the mask to multiply the gradient with. Not sure if this is a good approach; it is quite slow compared to SGD. I call the function with the following list:

```python
param_list = [p for group in self.param_groups for p in group['params'] if p.grad is not None]
```

```python
def get_mask_generator(self, param_list):
    """Generator for topk mask"""
    # Get the vector
    grad_vector = torch.cat([torch.abs(p.grad).view(-1) for p in param_list])
    grad_vector_shape = grad_vector.shape
    device = grad_vector.device
    top_indices = torch.topk(grad_vector, k=self.Q).indices
    del grad_vector
    mask_vector = torch.zeros(grad_vector_shape, device=device)
    mask_vector[top_indices] = 1

    # Define the generator (note: the above code is called only once)
    for p in param_list:
        numEl = p.numel()
        partial_mask = mask_vector[:numEl]
        mask_vector = mask_vector[numEl:]
        yield partial_mask.view(p.shape)
```

Thanks for the help!
st45649
Well, parameters_to_vector is doing the same thing as your cat operation, so that will be as efficient. I don’t think you can do anything better, really; manual bookkeeping of the values will most likely end up being more expensive than these few large ops. I am not sure why you need this to be a generator, though, as you can directly update all the p.grad in place from this function, no?
st45650
albanD: I am not sure why you need this to be a generator though as you can directly update all the p.grad inplace from this function no? Well, it allowed me to modify SGD by adding 3 lines of code, instead of recoding everything for vector gradient updates (i.e. momentum, nesterov etc).
st45651
I’m not sure I follow; can’t your code be:

```python
loss = ...  # Your code
opt.zero_grad()
loss.backward()
mask_gradients(model)
opt.step()
```

so that you don’t have to change the optimizer at all?
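A sketch of what such a mask_gradients helper could look like, keeping only the k globally-largest-magnitude gradient entries (the name and the k argument are placeholders, not defined in the thread):

```python
import torch

def mask_gradients(model, k):
    params = [p for p in model.parameters() if p.grad is not None]
    # global topk over the concatenated gradient magnitudes
    flat = torch.cat([p.grad.abs().view(-1) for p in params])
    mask = torch.zeros_like(flat)
    mask[torch.topk(flat, k).indices] = 1.

    # write the mask back into each parameter's gradient in place
    offset = 0
    for p in params:
        n = p.numel()
        p.grad.mul_(mask[offset:offset + n].view_as(p.grad))
        offset += n
```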
st45652
When importing PyTorch, around 1 GB of memory seems to be allocated on Windows, regardless of which part of PyTorch you are importing and regardless of whether or not the function is used. Is that expected behaviour? I have a couple of child processes which need access to tensors, and each one requiring an additional GB of memory is an issue for me.
st45653
Hello all, I have a feature matrix of size 128x5 and corresponding index labels of size 128x1. For example:

```python
f = [f1, f2, f3, f4, f5]
labels = [2, 3, 2, 1, 2]
```

That means fi has size 1x5 and a corresponding label j (i.e. f1, f3, and f5 have label 2, and f2 has label 3). In my task, I want to compute the average feature per label. For example:

f(1) = f4
f(2) = (f1 + f3 + f5) / 3
f(3) = f2

Currently, my implementation loops over all labels and computes the averages, but it is very slow (when the number of labels increases to 5 million). How can I speed it up? Thanks.

```python
num_freq_label = torch.zeros(len(labels))
for ind in range(len(labels)):
    ind_label = (labels == ind).nonzero()
    f[ind:ind+1, ...] += torch.sum(f[ind_label], dim=0)
    num_freq_label[ind] += ind_label.size(0)
f = torch.div(f, num_freq_label.view(-1, 1))
```
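A vectorized sketch using index_add_ and bincount, which avoids the Python loop over labels (shapes follow the question; clamping the counts guards against labels that never occur):

```python
import torch

f = torch.randn(128, 5)
labels = torch.randint(0, 4, (128,))
num_labels = int(labels.max()) + 1

sums = torch.zeros(num_labels, f.size(1))
sums.index_add_(0, labels, f)                        # per-label feature sums
counts = torch.bincount(labels, minlength=num_labels).clamp(min=1)
means = sums / counts.unsqueeze(1).float()           # per-label mean features
```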
st45654
First of all, sorry if the question was answered before, but I couldn’t find anything specific to my case (noob here). I have a .mat file containing a 100x100x10000 matrix, representing 10000 different images of 100x100. I want to use this .mat file in PyTorch and create a dataset from it, so I can then use it with DataLoader. I want to test this dataset in a GAN architecture that was provided to me, but I got stuck right on step one. I’m really taking my first steps here, so sorry for that. Any help would be much appreciated!
st45655
Hi, I guess you can use tools like https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.loadmat.html to load the mat file into a numpy array and then move that to PyTorch (copyless) with torch.from_numpy(your_array).
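A minimal sketch of the whole pipeline (the file path and the .mat variable name 'images' are assumptions):

```python
import torch
from scipy.io import loadmat
from torch.utils.data import DataLoader, TensorDataset

mat = loadmat('images.mat')                    # hypothetical path
images = mat['images']                         # (100, 100, 10000) per the question
images = torch.from_numpy(images).permute(2, 0, 1).float()  # -> (10000, 100, 100)
images = images.unsqueeze(1)                   # channel dim -> (10000, 1, 100, 100)

loader = DataLoader(TensorDataset(images), batch_size=32, shuffle=True)
```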
st45656
As we can use native_functions.yaml to define dispatch, why do we need the DECLARE/DEFINE/REGISTER_DISPATCH mechanism at the same time?
st45657
Hi, these two have different purposes. native_functions.yaml allows you to select the device and in general works at the level of the Dispatcher. The dispatch macro is a trick closer to the old TH macros that allows writing code that is generic over different dtypes as well!
st45658
I installed Anaconda in a virtual env with Python 3.5 and CUDA toolkit 10.2 (a higher version is not available on Mac); cuDNN is not available for Mac. After this, when I run the command

```
conda install numpy ninja pyyaml mkl mkl-include setuptools cmake cffi typing_extensions future six requests dataclasses
```

I get multiple conflicts. Any clue? The reason I created the env with Python 3.5 is that I was getting a conflict while installing TensorFlow.

```
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions

Package ca-certificates conflicts for:
  requests -> python -> ca-certificates
  typing_extensions -> python[version='>=2.7,<2.8.0a0'] -> ca-certificates
  six -> python -> ca-certificates
  numpy -> python[version='>=2.7,<2.8.0a0'] -> ca-certificates
  setuptools -> python[version='>=2.7,<2.8.0a0'] -> ca-certificates
  future -> python[version='>=2.7,<2.8.0a0'] -> ca-certificates
  cffi -> python[version='>=2.7,<2.8.0a0'] -> ca-certificates
  ninja -> python[version='>=2.7,<2.8.0a0'] -> ca-certificates
  pyyaml -> python[version='>=2.7,<2.8.0a0'] -> ca-certificates
  python=3.5 -> openssl[version='>=1.0.2p,<1.0.3a'] -> ca-certificates

Package libcxx conflicts for:
  setuptools -> python[version='>=3.6,<3.7.0a0'] -> libcxx[version='>=10.0.0|>=4.0.1']
  python=3.5 -> libffi[version='>=3.2.1,<3.3a0'] -> libcxx[version='>=10.0.0']
  cmake -> libcxx[version='>=10.0.0|>=4.0.1']
  numpy -> python[version='>=3.7,<3.8.0a0'] -> libcxx[version='>=10.0.0|>=4.0.1']
  cffi -> libffi[version='>=3.3,<3.4.0a0'] -> libcxx[version='>=10.0.0|>=4.0.1']
  six -> python[version='>=3.9,<3.10.0a0'] -> libcxx[version='>=10.0.0|>=4.0.1']
  pyyaml -> python[version='>=3.8,<3.9.0a0'] -> libcxx[version='>=10.0.0|>=4.0.1']
  future -> python[version='>=3.8,<3.9.0a0'] -> libcxx[version='>=10.0.0|>=4.0.1']
  ninja -> libcxx[version='>=10.0.0|>=4.0.1']
  requests -> python -> libcxx[version='>=10.0.0|>=4.0.1']
  dataclasses -> python[version='>=3.7,<3.8.0a0'] -> libcxx[version='>=10.0.0|>=4.0.1']
  ninja -> python[version='>=3.6,<3.7.0a0'] -> libcxx
  typing_extensions -> python[version='>=3.5'] -> libcxx[version='>=10.0.0|>=4.0.1']
  python=3.5 -> libcxx[version='>=4.0.1']

Package certifi conflicts for:
  setuptools -> certifi[version='>=2016.09|>=2016.9.26']
  requests -> certifi[version='>=2017.4.17']
  requests -> urllib3[version='>=1.21.1,<1.26,!=1.25.0,!=1.25.1'] -> certifi

Package libcxxabi conflicts for:
  cmake -> libcxx[version='>=4.0.1'] -> libcxxabi==4.0.1[build='hcfea43d_1|hebd6815_0']
  ninja -> libcxx[version='>=4.0.1'] -> libcxxabi==4.0.1[build='hcfea43d_1|hebd6815_0']
  python=3.5 -> libcxx[version='>=4.0.1'] -> libcxxabi==4.0.1[build='hcfea43d_1|hebd6815_0']

Package six conflicts for:
  numpy -> mkl-service[version='>=2,<3.0a0'] -> six
  six

Package mkl conflicts for:
  numpy -> mkl[version='>=2018.0.0,<2019.0a0|>=2018.0.1,<2019.0a0|>=2018.0.2,<2019.0a0|>=2018.0.3,<2019.0a0|>=2019.1,<2020.0a0|>=2019.3,<2020.0a0|>=2019.4,<2020.0a0']
  mkl

Package setuptools conflicts for:
  python=3.5 -> pip -> setuptools
  setuptools

Package tzdata conflicts for:
  requests -> python -> tzdata
  setuptools -> python[version='>=3.9,<3.10.0a0'] -> tzdata
  cffi -> python[version='>=3.9,<3.10.0a0'] -> tzdata
  typing_extensions -> python[version='>=3.5'] -> tzdata
  six -> python[version='>=3.9,<3.10.0a0'] -> tzdata
```
st45659
Hi, I don’t think we support Python 3.5 on the latest master branch. Moving to a more recent Python version should help (3.6.3+).
st45660
There is no module called pytorch even though I installed PyTorch successfully. When I activate it using ‘conda activate pytorch’, it states: ‘Could not find conda environment: pytorch. You can list all discoverable environments with conda info --envs.’ [screenshot: 745×313]
st45661
I guess when you install conda it creates a separate space (another Python directory), e.g. {C:\Users\MyPC…}. If you can, use another IDE, preferably PyCharm, over Jupyter. Or you have to install PyTorch for that separate Python as well.
st45662
I’m using VS Code. I installed Anaconda, then installed PyTorch through ‘conda install -c pytorch pytorch’; it showed it installed successfully. Then I activated it through ‘conda activate pytorch’, and it shows ‘Could not find conda environment: pytorch. You can list all discoverable environments with conda info --envs.’
st45663
@Yvonne_Wong See if this helps. This is what I think could’ve gone wrong in your case.
st45664
I did try the suggested solution, and it succeeds in the Anaconda prompt but fails in VS Code; I'm not sure what’s wrong. (See the Stack Overflow question “Why successful install pytorch in anaconda prompt but can't activate it directly in VS Code?”, asked by Yvonne Wong on 30 Nov 20.)
st45665
I am trying to train a PyTorch LSTM network, but I’m getting ValueError: Expected target size (2, 13), got torch.Size([2]) when I try to calculate CrossEntropyLoss. I think I need to change the shape somewhere, but I can’t figure out where. I’ve seen this error message in some posts, but couldn’t find any that use cross entropy loss. For this problem, I am trying to predict a word using the last three words. So, if “this is example text” appeared in the corpus, the corresponding feature would be [“this”, “is”, “example”] and the label would be [“text”] (with every word mapped to its id, of course). Here is my network definition:

```python
class LSTM(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.2):
        super(LSTM, self).__init__()

        # network size parameters
        self.n_layers = n_layers
        self.hidden_dim = hidden_dim
        self.vocab_size = vocab_size
        self.embedding_dim = embedding_dim

        # the layers of the network
        self.embedding = nn.Embedding(self.vocab_size, self.embedding_dim)
        self.lstm = nn.LSTM(self.embedding_dim, self.hidden_dim, self.n_layers,
                            dropout=drop_prob, batch_first=True)
        self.dropout = nn.Dropout(drop_prob)
        self.fc = nn.Linear(self.hidden_dim, self.vocab_size)

    def forward(self, input, hidden):
        # Perform a forward pass of the model on some input and hidden state.
        batch_size = input.size(0)
        print(f'batch_size: {batch_size}')
        print(f'Input shape: {input.shape}')

        # pass through embeddings layer
        embeddings_out = self.embedding(input)
        print(f'Shape after Embedding: {embeddings_out.shape}')

        # pass through LSTM layers
        lstm_out, hidden = self.lstm(embeddings_out, hidden)
        print(f'Shape after LSTM: {lstm_out.shape}')

        # pass through dropout layer
        dropout_out = self.dropout(lstm_out)
        print(f'Shape after Dropout: {dropout_out.shape}')

        # pass through fully connected layer
        fc_out = self.fc(dropout_out)
        print(f'Shape after FC: {fc_out.shape}')

        # return output and hidden state
        return fc_out, hidden

    def init_hidden(self, batch_size):
        # Initializes hidden state
        # Create two new tensors with sizes n_layers x batch_size x hidden_dim,
        # initialized to zero, for the hidden state and cell state of the LSTM
        hidden = (torch.zeros(self.n_layers, batch_size, self.hidden_dim),
                  torch.zeros(self.n_layers, batch_size, self.hidden_dim))
        return hidden
```

I added print statements showing the shape at each spot of the network. My data is in a TensorDataset called training_dataset with two attributes, features and labels. Features has shape torch.Size([97, 3]), and labels has shape torch.Size([97]).
This is the code for the network training:

```python
# Size parameters
vocab_size = 13
embedding_dim = 256
hidden_dim = 256
n_layers = 2

# Training parameters
epochs = 3
learning_rate = 0.001
clip = 1
batch_size = 2

training_loader = DataLoader(training_dataset, batch_size=batch_size,
                             drop_last=True, shuffle=True)
net = LSTM(vocab_size, embedding_dim, hidden_dim, n_layers)
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
loss_func = torch.nn.CrossEntropyLoss()

net.train()
for e in range(epochs):
    print(f'Epoch {e}')
    print(batch_size)
    hidden = net.init_hidden(batch_size)

    # loops through each batch
    for features, labels in training_loader:
        # resets training history
        hidden = tuple([each.data for each in hidden])
        net.zero_grad()

        # computes gradient of loss from backprop
        output, hidden = net.forward(features, hidden)
        loss = loss_func(output, labels)
        loss.backward()

        # using clipping to avoid exploding gradients
        nn.utils.clip_grad_norm_(net.parameters(), clip)
        optimizer.step()
```

When I try to run the training I get the following error:

```
Traceback (most recent call last):
  File "train.py", line 75, in <module>
    loss = loss_func(output, labels)
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 947, in forward
    return F.cross_entropy(input, target, weight=self.weight,
  File "/usr/local/lib/python3.8/site-packages/torch/nn/functional.py", line 2422, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/usr/local/lib/python3.8/site-packages/torch/nn/functional.py", line 2227, in nll_loss
    raise ValueError('Expected target size {}, got {}'.format(
ValueError: Expected target size (2, 13), got torch.Size([2])
```

Also here is the result of the print statements:

```
batch_size: 2
Input shape: torch.Size([2, 3])
Shape after Embedding: torch.Size([2, 3, 256])
Shape after LSTM: torch.Size([2, 3, 256])
Shape after Dropout: torch.Size([2, 3, 256])
Shape after FC: torch.Size([2, 3, 13])
```

There is some kind of shape error happening, but I can’t figure out where. Any help would be appreciated. If relevant, I’m using Python 3.8.5 and PyTorch 1.6.0.
st45666
Solved by ptrblck in post #2: The output tensor of the nn.LSTM with batch_first=True is returned in the shape [batch_size, seq_len, features]. Based on your description I guess you would like to use the activation of the last time step for the classification, so you might want to slice it via: lstm_out, hidden = self.lstm(embedd…
st45667
The output tensor of the nn.LSTM with batch_first=True is returned in the shape [batch_size, seq_len, features]. Based on your description I guess you would like to use the activation of the last time step for the classification, so you might want to slice it via:

```python
lstm_out, hidden = self.lstm(embeddings_out, hidden)
lstm_out = lstm_out[:, -1]
```

and process this tensor further.