st47368 | Pytorch 1.6
So it works, but only if I import checkpoint separately.
I tried to update the version to 1.7, but another library raises an error. |
st47369 | I still cannot reproduce it in 1.7.0, so could you post an executable code snippet, which raises the issue in this PyTorch version, please? |
st47370 | Here is the code:
colab.research.google.com
Google Colaboratory |
st47371 | Thanks for the notebook.
It works fine when I uncomment
from torch.utils.checkpoint import checkpoint |
st47372 | Yes, but why do I need this import?
Shouldn’t it be covered by the whole PyTorch import? |
st47373 | I’m not completely sure how the import mechanism works in different Python versions and my understanding is that newer Python versions are more “flexible”. You could thus try it with e.g. Python3.8 and if it’s still not working would need to import the method directly (unsure if it’s a PyTorch limitation or the desired workflow in Python). |
st47374 | I am new to pytorch but not new to image processing. I am so old that when I started with image processing we used to build our own convolvers from discrete multipliers and accumulators (a 3x3 conv took 9 discrete multipliers).
Of course, in discrete code like C or Python one can do a convolutional filter in any size or shape. But I am new to PyTorch, and I see convolution functions but so far have not seen a way to adjust the “shape” of the convolution filter to be anything but square, as opposed to rectangular.
Thank You
Tom |
st47375 | Solved by Usama_Hasan in post #2 |
st47376 | Hy Tom,
Yes, you can use a rectangular convolution filter.
# consider input (3, 128, 128)
torch.nn.Conv2d(3, 64, (1, 3), stride=2, groups=1)
torch.nn.Conv2d(64, 128, (3, 1), stride=2, groups=1)
# The output after these convolutions will be (128, 31, 32) |
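A runnable check of these shapes (a small sketch; with no padding, each output size follows floor((L - k)/stride) + 1 per spatial dimension):
import torch

x = torch.randn(1, 3, 128, 128)  # a batch with one (3, 128, 128) image
conv1 = torch.nn.Conv2d(3, 64, (1, 3), stride=2, groups=1)
conv2 = torch.nn.Conv2d(64, 128, (3, 1), stride=2, groups=1)
out = conv2(conv1(x))
print(out.shape)  # torch.Size([1, 128, 31, 32])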
st47377 | I am facing a runtime error when running training.py in chapter 11 of dlwpt book.
2020-11-10 04:42:43,159 INFO pid:14780 __main__:082:initModel Using CUDA; 8 devices.
2020-11-10 04:42:44,775 INFO pid:14780 __main__:141:main Starting LunaTrainingApp, Namespace(batch_size=4, comment='dwlpt', epochs=1, num_workers=32, tb_prefix='p2ch11')
2020-11-10 04:42:47,521 INFO pid:14780 dsets:182:__init__ <dsets.LunaDataset object at 0x7fbd5d2b40a0>: 198764 training samples
2020-11-10 04:42:47,534 INFO pid:14780 dsets:182:__init__ <dsets.LunaDataset object at 0x7fbcef6f9d90>: 22085 validation samples
2020-11-10 04:42:47,534 INFO pid:14780 __main__:148:main Epoch 1 of 1, 6212/691 batches of size 4*8
2020-11-10 04:42:47,535 WARNING pid:14780 util:219:enumerateWithEstimate E1 Training ----/6212, starting
Traceback (most recent call last):
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 779, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/queue.py", line 179, in get
self.not_empty.wait(remaining)
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 14850) is killed by signal: Killed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/user/pytorch_1/dlwpt-code-master/p2ch11/training.py", line 390, in <module>
LunaTrainingApp().main()
File "/home/user/pytorch_1/dlwpt-code-master/p2ch11/training.py", line 157, in main
trnMetrics_t = self.doTraining(epoch_ndx, train_dl)
File "/home/user/pytorch_1/dlwpt-code-master/p2ch11/training.py", line 181, in doTraining
for batch_ndx, batch_tup in batch_iter:
File "/home/user/pytorch_1/dlwpt-code-master/util/util.py", line 224, in enumerateWithEstimate
for (current_ndx, item) in enumerate(iter):
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
data = self._next_data()
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 974, in _next_data
idx, data = self._get_data()
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 931, in _get_data
success, data = self._try_get_data()
File "/home/user/anaconda3/envs/pytorch_updated/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 792, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
RuntimeError: DataLoader worker (pid(s) 14850) exited unexpectedly
Process finished with exit code 1
I tried lowering the batch size and increasing num_workers up to 32 (as shown above), but the error still isn’t disappearing. Default values should work considering that I have 8 GPUs. What is the issue here? |
st47378 | How can I give multiple parameters to the optimizer?
fc1 = nn.Linear(784, 500)
fc2 = nn.Linear(500, 10)
optimizer = torch.optim.SGD([fc1.parameters(), fc2.parameters()], lr=0.01) # This causes an error.
In this case, for simplicity, I don’t want to use a class with nn.Module. |
st47379 | You have to concatenate Python lists:
params = list(fc1.parameters()) + list(fc2.parameters())
torch.optim.SGD(params, lr=0.01) |
st47380 | Dear Soumith,
While executing your approach, it says:
TypeError: add() received an invalid combination of arguments - got (list), but expected one of:
(Tensor other, Number alpha)
(Number other, Number alpha)
Can you help me?
Is there something wrong? |
st47381 | Probably you set a bracket to the wrong place. You have to convert the parameters to a list separately and add the lists afterwards. |
st47382 | [SOLVED]
params = self.net.state_dict()
pas = list(params['net.0.weight']) + list(params['net.0.bias']) + list(params['net.3.weight']) + list(params['net.3.bias']) + list(params['net.6.weight']) + list(params['net.6.bias'])
self.optimizer1 = optim.Adam(pas, lr=0.01)
Here is my code. I think everything is ok |
st47383 | Since parameters() actually returns an iterator, itertools.chain() looks like a better approach:
import itertools
params = [fc1.parameters(), fc2.parameters()]
torch.optim.SGD(itertools.chain(*params), lr=0.01) |
st47384 | How is this different from just putting all of the tensors in a list directly as OP did? |
st47385 | If your models are in a list or tuple somewhere already, you can also use a nested list comprehension:
models = [nn.Linear(784, 500),
          nn.Linear(500, 10)]
optimizer = torch.optim.SGD((par for model in models for par in model.parameters()),
                            lr=0.01) |
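As an aside, the optimizers also accept a list of parameter-group dicts, which keeps each model's parameters in its own group (useful if you later want per-group learning rates); a sketch reusing the models list above:
optimizer = torch.optim.SGD(
    [{'params': model.parameters()} for model in models],
    lr=0.01)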
st47386 | I tried to run the following code:
class ResBlock(nn.Module):
    def __init__(self, in_channel, out_channel):
        super(ResBlock, self).__init__()
        self.bn = nn.BatchNorm2d(in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels=in_channel, out_channels=out_channel, kernel_size=3, stride=1)

    def forward(self, x):
        identity = x
        out = self.bn(x)
        out = self.relu(out)
        out = self.relu(self.conv(out))
        out += identity
        return out

class Net(nn.Module):
    def __init__(self, h, w):
        super().__init__()
        self.h = h
        self.w = w
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=2, stride=1)
        self.res1 = ResBlock(in_channel=16, out_channel=32)
        self.res2 = ResBlock(in_channel=32, out_channel=64)
        self.fc1 = nn.Linear(64*w*h, 4*w*h-2*w-2*h)
        self.fc2 = nn.Linear(4*w*h-2*w-2*h, 2*w*h-w-h)

    def forward(self, x):
        x = x.unsqueeze(0)
        x = x.unsqueeze(0)
        out = F.relu(self.conv1(x))
        out = self.res1(out)
        out = self.res2(out)
        out = out.view(-1, 64*self.w*self.h)
        return out

net = Net(3, 3)
a = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
out = net(a)
but got the following error:
(screenshot of the error message) |
If I change the last line to:
out = net(a.to(torch.int32))
I will get the following error:
(screenshot of the second error message)
Any help will be appreciated! |
st47387 | PyTorch parameters are created as float32 tensors by default and expect the input to have the same type.
Use net(a.float()) to create a FloatTensor and it should work. |
st47388 | Hello,
I have a network architecture like the one below that chooses different options based on an input argument.
When I try to save the model at a certain epoch during training using torch.save() I get:
AttributeError: Can't pickle local object 'main.<locals>.Net1'
Here is the part where I design the network in main():
IE_dim = X_tr.shape[1]
if args.hd == 1:
    class Net1(nn.Module):
        def __init__(self, args):
            super(Net1, self).__init__()
            self.features = torch.nn.Sequential(
                nn.Dropout(args.idr),
                nn.Linear(IE_dim, 512),
                nn.Tanh(),
                nn.Dropout(args.ldr),
                nn.Linear(512, 1))

        def forward(self, x):
            out = self.features(x)
            return out

    Model = Net1(args)
elif args.hd == 2:
    class Net2(nn.Module):
        def __init__(self, args):
            super(Net2, self).__init__()
            self.features = torch.nn.Sequential(
                nn.Dropout(args.idr),
                nn.Linear(IE_dim, 256),
                nn.Tanh(),
                nn.Dropout(args.ldr),
                nn.Linear(256, 256),
                nn.Tanh(),
                nn.Dropout(args.ldr),
                nn.Linear(256, 1))

        def forward(self, x):
            out = self.features(x)
            return out

    Model = Net2(args)
elif args.hd == 3:
    class Net3(nn.Module):
        def __init__(self, args):
            super(Net3, self).__init__()
            self.features = torch.nn.Sequential(
                nn.Dropout(args.idr),
                nn.Linear(IE_dim, 128),
                nn.Tanh(),
                nn.Dropout(args.ldr),
                nn.Linear(128, 128),
                nn.Tanh(),
                nn.Dropout(args.ldr),
                nn.Linear(128, 128),
                nn.Tanh(),
                nn.Dropout(args.ldr),
                nn.Linear(128, 1))

        def forward(self, x):
            out = self.features(x)
            return out

    Model = Net3(args)
and here is how I use torch.save():
torch.save(Model, os.path.join(SOME PATH, 'Best_Model.pt'))
I had that piece of code in a separate .py file first and used import * in main(), but got the above error; then I moved the code into the main() function and got the same error.
I appreciate any help! |
st47389 | I have recently upgraded PyTorch from 0.2 to 0.3. Surprisingly, my old programs are throwing an out-of-memory error during evaluation (in eval() mode), but training works just fine. I am using the same batch size for training and evaluation. I am totally clueless about what is happening. Did anyone face a similar issue? Is there any possible solution? |
st47390 | Sounds strange.
Did you use the volatile=True param on your Variables?
Is the batch size larger during eval than train?
Do you use cuDNN in both cases?
Could you post a code snippet reproducing the issue? |
st47391 | I tried using volatile=True param on the variables and it didn’t help. I am using the same batch size. I am not doing anything special to use cuDNN. I am using the default setting.
def validate(self, dev_corpus):
    # Turn on evaluation mode which disables dropout.
    self.model.eval()
    dev_batches = helper.batchify(dev_corpus.data, self.config.batch_size)
    print('number of dev batches = ', len(dev_batches))
    dev_loss = 0
    num_batches = len(dev_batches)
    for batch_no in range(1, num_batches + 1):
        session_queries, session_query_length, rel_docs, rel_docs_length, doc_labels = helper.session_to_tensor(
            dev_batches[batch_no - 1], self.dictionary)
        if self.config.cuda:
            session_queries = session_queries.cuda()
            session_query_length = session_query_length.cuda()
            rel_docs = rel_docs.cuda()
            rel_docs_length = rel_docs_length.cuda()
            doc_labels = doc_labels.cuda()
        loss = self.model(session_queries, session_query_length, rel_docs, rel_docs_length, doc_labels)
        if loss.size(0) > 1:
            loss = loss.mean()
        dev_loss += loss.data[0]
    return dev_loss / num_batches
I am using the above function for evaluation. Here, session_queries, session_query_length, … and the rest of the variables are created with volatile=True enabled.
I am not sure what is happening!! |
st47392 | Hi. Did you solve your problem? I am now facing the same problem. What did you do in this situation? |
st47393 | The volatile flag is deprecated. In the latest stable release (0.4.0) you should use a context manager:
with torch.no_grad():
# Your eval code
Have a look at the website for install instructions.
You can find the migration guide here. |
st47394 | Hit the same problem, same solution worked, PyTorch 0.4.1. Seems like there might be something weird going on with the eval mode memory management. |
st47395 | A relevant clear-cut answer on ‘model.eval()’ vs ‘with torch.no_grad()’ from @albanD:
'model.eval()' vs 'with torch.no_grad()'
Hi,
These two have different goals:
model.eval() will notify all your layers that you are in eval mode; that way, batchnorm or dropout layers will work in eval mode instead of training mode.
torch.no_grad() impacts the autograd engine and deactivates it. It will reduce memory usage and speed up computations, but you won’t be able to backprop (which you don’t want in an eval script). |
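In practice the two are combined in an evaluation loop; a minimal sketch, assuming a model and a val_loader already exist:
model.eval()               # dropout/batchnorm switch to eval behavior
with torch.no_grad():      # autograd is off, so no graph is stored
    for data, target in val_loader:
        output = model(data)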
st47396 | Thanks Arul, that’s helpful…
Still doesn’t explain why eval mode appears to use more memory than training mode though. Theoretically it would just use the same. |
st47397 | I am a novice and have the same problem, but during inference. I had never hit an ‘out of memory’ error without using torch.no_grad() or volatile=True before, but this time it does not seem to work without torch.no_grad(). PyTorch 0.3.0. |
st47398 | You might run out of memory if you still hold references to some tensors from your training iteration.
Since Python uses function scoping, these variables are still kept alive, which might result in your OOM issue. To avoid this, you could wrap your training and validation code in separate functions. Have a look at this post for more information. |
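A minimal sketch of that function-scoping pattern (all names are illustrative): tensors created inside each function go out of scope when it returns, so their memory can be freed before the next phase starts:
def train_epoch(model, loader, optimizer, criterion):
    model.train()
    for data, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()

def validate(model, loader, criterion):
    model.eval()
    with torch.no_grad():
        return sum(criterion(model(d), t).item() for d, t in loader)

for epoch in range(num_epochs):
    train_epoch(model, train_loader, optimizer, criterion)
    val_loss = validate(model, val_loader, criterion)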
st47399 | @ptrblck , did you mean this post?
Increase the CUDA memory twice then stop increasing
I have the code below and I don’t understand why the memory increases twice and then stops.
I searched the forum and cannot find an answer.
env: PyTorch 0.4.1, Ubuntu16.04, Python 2.7, CUDA 8.0/9.0
from torchvision.models import vgg16
import torch
import pdb
net = vgg16().cuda()
data1 = torch.rand(16,3,224,224).cuda()
for i in range(10):
    pdb.set_trace()
    out1 = net(data1)
first stop, this is what data1 and vgg16 take
[1]
second stop, this is what the intermediate status of vgg16 take
[2… |
st47400 | Yes, @colesbury explains why the memory usage might grow if some tensors weren’t deleted using function scoping. |
st47401 | What is the effect if you forget torch.no_grad(), besides the increased memory usage? Will you accumulate gradients in the validation block? |
st47402 | The computation graph will be created and intermediate tensors are stored.
If you don’t call backward (which wouldn’t even be possible in a torch.no_grad() block), nothing else will change. |
st47403 | Well you would call backward in the training portion. So would you then update the net with the grads tracked in the validation portion as well as those in the training portion? Assuming that torch.no_grad() was forgotten in validation. |
st47404 | During training a new computation graph would usually be created, as long as you don’t pass e.g. the output of your validation phase as the new input to the model during training.
model = models.resnet18(pretrained=True)
# Pseudo validation phase
x1 = torch.randn(1, 3, 224, 224)
out = model(x1)
# Pseudo training phase
x1 = torch.ones(1, 3, 224, 224)
out = model(x1)
out.mean().backward()
In this code snippet you have “forgotten” to use torch.no_grad() during the validation phase.
However, since out is not used, it won’t have any effect on the gradients, but will just use unnecessary memory. |
st47405 | OK cool, what if it’s set up this way:
crit = nn.SomeLoss()
optim = optim.SGD()
net = models.resnet18()
for e in range(num_epochs):
    # training
    pred = net(some_data)
    optim.zero_grad()
    loss = crit(pred, target)
    loss.backward()
    optim.step()
    # validation
    valid_pred = net(some_validation_data)
    loss = crit(valid_pred, valid_target)
Would zero_grad take care of that? |
st47406 | As long as you don’t calculate gradients via a backward call, no gradients will be accumulated. |
st47407 | I use batchnorm 1d on batches which are padded to the max length of the samples. It dawned on me that batch norm isn’t fed a mask so it has no way of knowing which are valid timesteps in each sequence. Wouldn’t this mess with batch norm? And more importantly wouldn’t it be very different if I change batch size? Is there a way around this? |
st47408 | Hi @Dan_Erez! Did you find anything to solve the problem? I’m facing the same problem. Is there a solution in PyTorch similar to the one in TensorFlow here? |
st47409 | @ptrblck Please, can you help me with this?
Basically, I have a tensor padded with zeros at the end. If I feed this into torch.nn.BatchNorm1d it will consider those as well. I also have a mask (binary) for the padded tensor. Is there something in PyTorch to tackle this? |
st47410 | As far as I understand your use case, you are creating a batch (3-dimensional: [batch_size, channels, seq_len]), where some tensors were zero-padded in the last dimension.
Is that correct?
Now you would like to ignore the padded inputs in the batchnorm layer, i.e. not being taken into account for the running stats or what would the desired behavior be? |
st47411 | Thanks for the reply!
Yes precisely. This is exactly what I need. If the zeros are taken into account, it will be wrong. Need to ignore those. |
st47412 | I’m not aware of any built-in method, so you might need to implement it manually.
Maybe you could use this manual example of the batch norm calculation as a starter and change the mean and var calculation using the masked method:
# Create dummy input
x = torch.randn(2, 3, 10)
x[0, 0, 5:] = 0.
x[0, 1, 6:] = 0.
x[0, 2, 7:] = 0.
x[1, 0, 8:] = 0.
x[1, 1, 9:] = 0.
# Use mask for manual calculation
mask = x!=0
mask_mean = (x.sum(2) / mask.float().sum(2)).mean(0)
# Alternatively rescale BatchNorm1d.running_mean
mean = x.mean([0, 2])
mean * x.size(2) / (x.size(2) - (x==0).float().sum(2)).sum()
The second example would work, if you would like to use the PyTorch batch norm implementation (e.g. for performance reasons) and “rescale” the running estimates.
Let me know, if that helps. |
st47413 | Thanks for the reply. Can you explicitly show how you would operate on the batch norm parameters? |
st47414 | I wrote a solution to do this fast, explained as comments in the code. Let me know if you find any bugs.
def masked_batchnorm1d_forward(x, mask, bn):
    """x is the input tensor of shape [batch_size, n_channels, time_length]
    mask is of shape [batch_size, 1, time_length]
    bn is a BatchNorm1d object
    """
    if not bn.training:
        return bn(x)
    # In each example of the batch, we can have a different number of masked elements
    # along the time axis. It would have to be represented as a jagged array.
    # However, notice that the batch and time axes are handled the same in BatchNorm1d.
    # This means we can merge the time axis into the batch axis, and feed BatchNorm1d
    # with a tensor of shape [n_valid_timesteps_in_whole_batch, n_channels, 1],
    # as if the time axis had length 1.
    #
    # So the plan is:
    # 1. Move the time axis next to the batch axis to the second place
    # 2. Merge the batch and time axes using reshape
    # 3. Create a dummy time axis at the end with size 1.
    # 4. Select the valid time steps using the mask
    # 5. Apply BatchNorm1d to the valid time steps
    # 6. Scatter the resulting values to the corresponding positions
    # 7. Unmerge the batch and time axes
    # 8. Move the time axis to the end
    n_feature_channels = x.shape[1]
    time_length = x.shape[2]
    reshaped = x.permute(0, 2, 1).reshape(-1, n_feature_channels, 1)
    reshaped_mask = mask.reshape(-1, 1, 1) > 0
    selected = torch.masked_select(reshaped, reshaped_mask).reshape(-1, n_feature_channels, 1)
    batchnormed = bn(selected)
    scattered = reshaped.masked_scatter(reshaped_mask, batchnormed)
    backshaped = scattered.reshape(-1, time_length, n_feature_channels).permute(0, 2, 1)
    return backshaped |
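A usage sketch under assumed shapes: a [batch, channels, time] input and a [batch, 1, time] binary mask, with the padding already zeroed:
import torch
import torch.nn as nn

batch, channels, time_length = 4, 8, 16
lengths = torch.tensor([16, 12, 9, 5])
mask = torch.arange(time_length)[None, None, :] < lengths[:, None, None]  # [4, 1, 16]
x = torch.randn(batch, channels, time_length) * mask  # zero out the padding

bn = nn.BatchNorm1d(channels)  # a Module is in training mode by default
out = masked_batchnorm1d_forward(x, mask, bn)
print(out.shape)  # torch.Size([4, 8, 16])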
st47415 | Batchnorm1d has learnable parameters. Is there any problem with having different sized batches?
Thank you |
st47416 | Hi, I’m training a LSTM model with variable-length samples. One thing I observe is that if I sort all training data by sample length and then prepare the data loader for that, I can afford to use bigger batch sizes than without sorting them first.
Since the training data stays the same and also the max length of a padded batch also stays the same, I’m wondering if pack_padded_sequence() in the unsorted case might actually cost more GPU memory.
Say we have a padded batch after sorting by length, s*_t* means sample*_timestep*
[s0_t0, 0, 0, 0]
[s1_t0, s1_t1, 0, 0]
[s2_t0, s2_t1, s2_t2, 0]
[s3_t0, s3_t1, s3_t2, s3_t3]
In the unsorted case it might look arbitrary in terms of padding:
[s0_t0, s0_t1, s0_t2, 0]
[s1_t0, 0, 0, 0]
[s2_t0, s2_t1, 0, 0]
[s3_t0, 0, 0, 0]
My hypothesis is that in the unsorted case we tend to pad more, so we may use more GPU memory.
Is that the case here? I guess it depends on how pack_padded_sequence() is implemented in CUDA too.
Any other thoughts why GPU memory may increase?
Thanks! |
st47417 | Solved by galactica147 in post #3 |
st47418 | Just want to update that it is not because of any GPU memory increase in the LSTM layers, but because they are followed by FC layers; without sorting, the sequence length fed into the FC layers gets longer, thus bigger FC activations and more GPU RAM consumption. |
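For reference, a minimal sketch of sorting a padded batch by length before packing (the shapes and the LSTM below are assumptions):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=8, batch_first=True)
padded = torch.randn(4, 10, 5)          # [batch, max_len, features], zero-padded
lengths = torch.tensor([10, 7, 4, 9])   # true length of each sample

lengths, order = lengths.sort(descending=True)  # sort samples by length
padded = padded[order]                          # reorder the batch to match
packed = nn.utils.rnn.pack_padded_sequence(padded, lengths, batch_first=True)
output, hidden = lstm(packed)
unpacked, out_lengths = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
print(unpacked.shape)  # torch.Size([4, 10, 8])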
st47419 | I’m working on supporting automatic model detection/logging for PyTorch models for our machine learning platform https://iko.ai and I want to know: what is the base class that only models inherit from? i.e., I’m looking for the class X that passes this condition: if a class Y inherits from class X, Y is a PyTorch model.
Is it torch.nn.modules.module.Module ? |
st47420 | The base class is the Module class in torch.nn
import torch
import torch.nn as nn

class NNmodel(nn.Module):
    def __init__(self, .....):
        super(NNmodel, self).__init__() |
st47421 | Yah
They are neural network models
Any neural network model made with PyTorch always inherits from the Module class. |
st47422 | What about the torch.nn.CrossEntropyLoss class? It inherits from torch.nn.Module but it’s not a model, right? |
st47423 | It’s a loss function
What I meant was that to build a neural network model in PyTorch, you have to inherit from the torch.nn.Module class |
st47424 | Yeah, but I’m looking for the base class that only models inherit from. Is there such a class? Or any condition that only models fulfill? |
st47425 | OK, thanks.
Do you see any other way to distinguish model classes from non-model classes? A common attribute/method? |
st47426 | I don’t know if I understand your question, but if you mean distinguishing the class of an implemented model architecture from one that is not a model, then the only difference is that model classes in PyTorch inherit from torch.nn.Module.
If you are referring to the ones that make up an NN model architecture, like torch.nn.Linear, torch.nn.Conv2d etc., and are trying to differentiate them from things like loss functions, e.g. torch.nn.CrossEntropyLoss or torch.nn.NLLLoss, then I guess you just differentiate them by their names; by the names you know which can constitute a model architecture and which is a loss function |
st47427 | I’m referring to your second guess: how can I differentiate between classes that constitute an NN model and those which implement losses …
I want to write a Python function to do that for me, and clearly relying on class names is not a good way to do it.
Anyway, thank you very much for your help. |
st47428 | Hmmmm🤔
You want to write a Python function that differentiates the model constituents’ classes from the loss classes? |
st47429 | Yup. For now the function is something like:
def is_pytorch_model(obj):
    return isinstance(obj, torch.nn.Module)
It clearly considers losses and other torch.nn.Module subclasses as NN models, which is not true. |
st47430 | Hmmm🤔
Have you tried differentiating them by the kind of values they return?
The output of, let’s say, torch.nn.Conv2d is different from what torch.nn.CrossEntropyLoss outputs, shape-wise and all… |
st47431 | I can’t do that because I’m doing a static analysis of the code using astroid.
For losses, there are actually a couple of attributes that could differentiate them from the other classes:
class Net(nn.Module):
    # define nn
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(4, 100)
        self.fc2 = nn.Linear(100, 100)
        self.fc3 = nn.Linear(100, 3)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        x = self.fc3(x)
        x = self.softmax(x)
        return x

net = Net()
criterion = nn.CrossEntropyLoss()
set(dir(criterion)) - set(dir(net))
>>> {'__constants__', 'ignore_index', 'reduction', 'weight'}
What are all the other things that should inherit from torch.nn.Module? |
st47432 | Well, to my current knowledge, only the model constituents and loss functions inherit from the torch.nn.Module class |
st47433 | Hello Haroune and Henry!
mohammedi-haroune:
how can I differentiate between classes that constitute an NN model and those which implement losses
I don’t think that there is a simple, one-stop-shopping way of doing
this that is completely reliable. Note that activations also inherit
from Module.
Losses, however, (appear to) inherit (possibly indirectly) from
torch.nn.modules.loss._Loss (which, in turn, inherits from
Module). You could look for that.
You might consider counting parameters() on the theory that any
self-respecting model has parameters (although some activation
functions also have parameters).
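A sketch of such a heuristic, combining the _Loss check with a parameter count (with the caveats above, e.g. PReLU would pass the parameter test):
import torch
from torch.nn.modules.loss import _Loss

def looks_like_model(obj):
    # a Module that is not a loss and owns at least one parameter
    return (isinstance(obj, torch.nn.Module)
            and not isinstance(obj, _Loss)
            and any(True for _ in obj.parameters()))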
Also, you have to be clear about what you mean by a “model.”
Would you consider a single Linear to be a model? Arguably
it would be. How about a single Softmax(). This seems more
of a stretch, but how does it behave any differently than a
Linear? (One difference is that it doesn’t have any parameters.)
This script illustrates some of these points:
import torch
print (torch.__version__)
model_linear = torch.nn.Linear (3, 5)
model_sequential = torch.nn.Sequential ((torch.nn.Linear (4, 6)))
loss_mse = torch.nn.MSELoss()
loss_ce = torch.nn.CrossEntropyLoss()
act_softmax = torch.nn.Softmax()
act_prelu = torch.nn.PReLU()
print (sum (1 for _ in model_linear.parameters())) # count parameters
print (sum (1 for _ in model_sequential.parameters())) # count parameters
print (sum (1 for _ in loss_mse.parameters())) # count parameters
print (sum (1 for _ in loss_ce.parameters())) # count parameters
print (sum (1 for _ in act_softmax.parameters())) # count parameters
print (sum (1 for _ in act_prelu.parameters())) # count parameters
print (model_linear.__class__.__bases__) # immediate superclass
print (model_sequential.__class__.__bases__) # immediate superclass
print (loss_mse.__class__.__bases__) # immediate superclass
print (loss_ce.__class__.__bases__) # immediate superclass
print (act_softmax.__class__.__bases__) # immediate superclass
print (act_prelu.__class__.__bases__) # immediate superclass
print (model_linear.__class__.__mro__) # full class hierarchy
print (model_sequential.__class__.__mro__) # full class hierarchy
print (loss_mse.__class__.__mro__) # full class hierarchy
print (loss_ce.__class__.__mro__) # full class hierarchy
print (act_softmax.__class__.__mro__) # full class hierarchy
print (act_prelu.__class__.__mro__) # full class hierarchy
Here is the output:
>>> import torch
>>> print (torch.__version__)
1.6.0
>>> model_linear = torch.nn.Linear (3, 5)
>>> model_sequential = torch.nn.Sequential ((torch.nn.Linear (4, 6)))
>>> loss_mse = torch.nn.MSELoss()
>>> loss_ce = torch.nn.CrossEntropyLoss()
>>> act_softmax = torch.nn.Softmax()
>>> act_prelu = torch.nn.PReLU()
>>> print (sum (1 for _ in model_linear.parameters())) # count parameters
2
>>> print (sum (1 for _ in model_sequential.parameters())) # count parameters
2
>>> print (sum (1 for _ in loss_mse.parameters())) # count parameters
0
>>> print (sum (1 for _ in loss_ce.parameters())) # count parameters
0
>>> print (sum (1 for _ in act_softmax.parameters())) # count parameters
0
>>> print (sum (1 for _ in act_prelu.parameters())) # count parameters
1
>>> print (model_linear.__class__.__bases__) # immediate superclass
(<class 'torch.nn.modules.module.Module'>,)
>>> print (model_sequential.__class__.__bases__) # immediate superclass
(<class 'torch.nn.modules.module.Module'>,)
>>> print (loss_mse.__class__.__bases__) # immediate superclass
(<class 'torch.nn.modules.loss._Loss'>,)
>>> print (loss_ce.__class__.__bases__) # immediate superclass
(<class 'torch.nn.modules.loss._WeightedLoss'>,)
>>> print (act_softmax.__class__.__bases__) # immediate superclass
(<class 'torch.nn.modules.module.Module'>,)
>>> print (act_prelu.__class__.__bases__) # immediate superclass
(<class 'torch.nn.modules.module.Module'>,)
>>> print (model_linear.__class__.__mro__) # full class hierarchy
(<class 'torch.nn.modules.linear.Linear'>, <class 'torch.nn.modules.module.Module'>, <class 'object'>)
>>> print (model_sequential.__class__.__mro__) # full class hierarchy
(<class 'torch.nn.modules.container.Sequential'>, <class 'torch.nn.modules.module.Module'>, <class 'object'>)
>>> print (loss_mse.__class__.__mro__) # full class hierarchy
(<class 'torch.nn.modules.loss.MSELoss'>, <class 'torch.nn.modules.loss._Loss'>, <class 'torch.nn.modules.module.Module'>, <class 'object'>)
>>> print (loss_ce.__class__.__mro__) # full class hierarchy
(<class 'torch.nn.modules.loss.CrossEntropyLoss'>, <class 'torch.nn.modules.loss._WeightedLoss'>, <class 'torch.nn.modules.loss._Loss'>, <class 'torch.nn.modules.module.Module'>, <class 'object'>)
>>> print (act_softmax.__class__.__mro__) # full class hierarchy
(<class 'torch.nn.modules.activation.Softmax'>, <class 'torch.nn.modules.module.Module'>, <class 'object'>)
>>> print (act_prelu.__class__.__mro__) # full class hierarchy
(<class 'torch.nn.modules.activation.PReLU'>, <class 'torch.nn.modules.module.Module'>, <class 'object'>)
Best.
K. Frank |
st47434 | We often see residual connections in today’s networks, be it in ResNet or in Transformers.
github.com
pytorch/pytorch/blob/0c5cd8c2b9cdf473e30bbb1b49ca80ed442813df/torch/nn/modules/transformer.py#L282-L300
def forward(self, src: Tensor, src_mask: Optional[Tensor] = None, src_key_padding_mask: Optional[Tensor] = None) -> Tensor:
    r"""Pass the input through the encoder layer.

    Args:
        src: the sequence to the encoder layer (required).
        src_mask: the mask for the src sequence (optional).
        src_key_padding_mask: the mask for the src keys per batch (optional).

    Shape:
        see the docs in Transformer class.
    """
    src2 = self.self_attn(src, src, src, attn_mask=src_mask,
                          key_padding_mask=src_key_padding_mask)[0]
    src = src + self.dropout1(src2)
    src = self.norm1(src)
    src2 = self.linear2(self.dropout(self.activation(self.linear1(src))))
    src = src + self.dropout2(src2)
    src = self.norm2(src)
    return src
The code above can be visualised as the encoder part of the standard Transformer architecture diagram.
From what I have read, residual connections help prevent exploding/vanishing gradients because skip connections can “skip past the non-linear activation functions”, which means that gradients enjoy the same benefit. I do not understand what that means.
In the backward pass, why would PyTorch skip the non-linearities? How does it know to skip those? So, in this fictional example:
src2 = self.activation(self.linear1(src))
src = src + src2
what is the advantage of using the skip connection? Doesn’t the backward pass flow through all operations?
I feel that I am missing an important part of the puzzle, but I can’t figure out which one. |
st47435 | In srcOut = src1 + src2
The key is that [loss] gradient w.r.t. srcOut is passed to both summands unchanged. As a result, any block or partial sum could in theory learn to produce the best srcOut. Thus, later blocks learn residuals.
Contrast this with function composition: srcOut = fc2(act(fc1(src))). Here you’ll have a chain of intermediate results, and a chain of multiplications will be applied to the initial gradient dLoss/dSrcOut. |
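A tiny sketch that makes the first point concrete: the gradient of a sum flows to both summands unchanged:
import torch

src1 = torch.randn(3, requires_grad=True)
src2 = torch.randn(3, requires_grad=True)
out = src1 + src2
out.sum().backward()
print(src1.grad)  # tensor([1., 1., 1.]) -- same gradient for both summands
print(src2.grad)  # tensor([1., 1., 1.])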
st47436 | Hi, all:
I try to manipulate some intermediate features of resnet, so i have to break pre-trained model into two parts.
I feed original image into first part and then feed the output of first part directly into second part. But i meet size mismatch in last linear layer.
“RuntimeError: size mismatch, m1: [512 x 1], m2: [512 x 1000]”
modules = list(resnet18.children())[:3]
resnet_1st = nn.Sequential(*modules)
for p in resnet18.parameters():
    p.requires_grad = False

modules = list(resnet18.children())[3:]
resnet_2nd = nn.Sequential(*modules)
for p in resnet18.parameters():
    p.requires_grad = False
#print(resnet_2nd)

out_1st = resnet_1st(image)
print(out_1st.shape)
out_2nd = resnet_2nd(out_1st)
print(out_2nd.shape)
Anyone know how to solve this? Thanks in advance! |
st47437 | The error is thrown, since you are wrapping all modules in an nn.Sequential module, which is missing the flatten operation defined in resnet’s forward.
You could define a custom Flatten module and add it right before the last linear layer:
class Flatten(nn.Module):
    def __init__(self):
        super(Flatten, self).__init__()

    def forward(self, x):
        x = x.view(x.size(0), -1)
        return x

modules = list(model.children())[:3]
resnet_1st = nn.Sequential(*modules)
modules = list(model.children())[3:-1]
resnet_2nd = nn.Sequential(*[*modules, Flatten(), list(model.children())[-1]])

x = torch.randn(1, 3, 224, 224)
out_1st = resnet_1st(x)
print(out_1st.shape)
out_2nd = resnet_2nd(out_1st)
print(out_2nd.shape) |
st47438 | When splitting a predefined nn.Sequential in order to get an intermediate layer’s output in forward, can we slice it directly, e.g. model.submodel_name[:n] (which I found is still an nn.Sequential), instead of extracting layers from model.children() and wrapping them again in an nn.Sequential? Are there any issues to be concerned about? Thanks. |
st47439 | Your approach should work:
model = nn.Sequential(
    nn.Conv2d(3, 6, 3, 1, 1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(6, 12, 3, 1, 1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(12*12*12, 10),
)

x = torch.randn(10, 3, 24, 24)
out_ref = model(x)
out1 = model[:5](x)
out2 = model[5:](out1)
print((out_ref - out2).abs().max())
> tensor(0., grad_fn=<MaxBackward1>) |
st47440 | I’m using PyTorch to train a GCN. I have a simple model that I saved using this command:
torch.save(MyNet().state_dict(), PATH)
Then I load it by doing the following:
model = MyNet()
model.load_state_dict(torch.load(PATH))
model.eval()
But then when I tried to input the data as follows:
output = model(dataset)
I got this error for the previous line:
TypeError: 'NoneType' object is not callable
I don’t know what’s the reason for that, could you please help me? |
st47441 | Could you add some debug statements to your code and check, if the model variable was replaced somewhere?
model = MyNet()
print(model)
[...]
print(model)
output = model(dataset)
I guess that model was initially a valid object, but might have been replaced accidentally with None. |
st47442 | When I tried to print the model after model = MyNet() I got the model printed well, but then when I tried to print it before output = model(dataset) I got None
What could be the problem? |
st47443 | It seems your code is overwriting the model variable at one point.
Search for all usages of model and make sure it’s not replaced by a None value. |
st47444 | Hello! I am training a neural network and my dataset is in COCO format. Now I would like to change my input images’ resolution to a smaller one, since my memory resources are limited. I tried using cv2.resize but it does not work. Can anyone suggest a way to achieve what I want? By the way, here is my code for loading the images:
import os
import cv2
import torch
import torch.utils.data
import torchvision
from PIL import Image
from pycocotools.coco import COCO

class myOwnDataset(torch.utils.data.Dataset):
    IMG_SIZE = 100

    def __init__(self, root, annotation, transforms=None):
        self.root = root
        self.transforms = transforms
        self.coco = COCO(annotation)
        self.ids = list(sorted(self.coco.imgs.keys()))

    def __getitem__(self, index):
        # Own coco file
        coco = self.coco
        # Image ID
        img_id = self.ids[index]
        # List: get annotation id from coco
        ann_ids = coco.getAnnIds(imgIds=img_id)
        # Dictionary: target coco_annotation file for an image
        coco_annotation = coco.loadAnns(ann_ids)
        # path for input image
        path = coco.loadImgs(img_id)[0]['file_name']
        # open the input image
        img = Image.open(os.path.join(self.root, path))
        img = cv2.resize(img, (self.IMG_SIZE, self.IMG_SIZE), interpolation=cv2.INTER_AREA)

        # number of objects in the image
        num_objs = len(coco_annotation)

        # Bounding boxes for objects
        # In coco format, bbox = [xmin, ymin, width, height]
        # In pytorch, the input should be [xmin, ymin, xmax, ymax]
        boxes = []
        for i in range(num_objs):
            xmin = coco_annotation[i]['bbox'][0]
            ymin = coco_annotation[i]['bbox'][1]
            xmax = xmin + coco_annotation[i]['bbox'][2]
            ymax = ymin + coco_annotation[i]['bbox'][3]
            boxes.append([xmin, ymin, xmax, ymax])
        boxes = torch.as_tensor(boxes, dtype=torch.float32)
        # Labels (In my case, I only have one class: target class or background)
        labels = torch.ones((num_objs,), dtype=torch.int64)
        # Tensorise img_id
        img_id = torch.tensor([img_id])
        # Size of bbox (Rectangular)
        areas = []
        for i in range(num_objs):
            areas.append(coco_annotation[i]['area'])
        areas = torch.as_tensor(areas, dtype=torch.float32)
        # Iscrowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        # Annotation is in dictionary format
        my_annotation = {}
        my_annotation["boxes"] = boxes
        my_annotation["labels"] = labels
        my_annotation["image_id"] = img_id
        my_annotation["area"] = areas
        my_annotation["iscrowd"] = iscrowd

        if self.transforms is not None:
            img = self.transforms(img)

        return img, my_annotation

    def __len__(self):
        return len(self.ids)

# In my case, just added ToTensor
def get_transform():
    custom_transforms = []
    custom_transforms.append(torchvision.transforms.ToTensor())
    return torchvision.transforms.Compose(custom_transforms)

train_data_dir = '/content/TACO/data'
train_coco = '/content/TACO/data/annotations.json'

# create own Dataset
my_dataset = myOwnDataset(root=train_data_dir,
                          annotation=train_coco,
                          transforms=get_transform())

# collate_fn needed for batching
def collate_fn(batch):
    return tuple(zip(*batch))

# Batch size
train_batch_size = 1

# own DataLoader
data_loader = torch.utils.data.DataLoader(my_dataset,
                                          batch_size=train_batch_size,
                                          shuffle=True,
                                          num_workers=0,
                                          collate_fn=collate_fn) |
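One likely fix (a sketch, not a confirmed solution from this thread): cv2.resize expects a NumPy array, while Image.open returns a PIL Image, so torchvision.transforms.Resize can do the resizing inside the transform pipeline instead. Note that resizing detection images also requires rescaling the bounding boxes by the same factors:
import torchvision

IMG_SIZE = 100  # matches the class attribute above

def get_transform():
    return torchvision.transforms.Compose([
        torchvision.transforms.Resize((IMG_SIZE, IMG_SIZE)),  # PIL in, PIL out
        torchvision.transforms.ToTensor(),
    ])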
st47445 | Hi,
PyTorch supports a few sparse matrix computations, such as spmm. In principle, sparsity can reduce the complexity of matrix computation, so they are faster than the dense implementation on CPU for sure. However, on GPUs, these sparse operations are difficult to implement in parallel. In particular, this post Backprop Through Sparse Tensor Is Not Memory Efficient? already shows that their memory usage may be as large as the dense implementation.
So I am wondering about their performance on GPUs (both speed and memory) and whether I should use the sparse implementation when I encounter sparse matrices. I can provide the necessary information about my use case (matrix size, sparsity level etc.) if necessary. Also, I would appreciate it if there are empirical studies comparing dense and sparse functions in PyTorch. Thanks. |
st47446 | I am trying to make a quantum network that would classify images of the Messidor dataset as DR or No DR. But I am getting this error:
(screenshot of the error message) |
st47447 | The first conv layer of your model (self.conv1) uses a single input channel, while you are providing input image tensors with 3 channels.
Set in_channels=3 in self.conv1 or transform the input to have a single channel only.
PS: it’s better to post code snippets by wrapping them into three backticks ```, as it makes debugging easier. |
st47448 | you should be doing this
self.conv1 = nn.Conv2d(3, 6, kernel_size=80)
self.conv2 = nn.Conv2d(6, 16, kernel_size=80)
instead of
self.conv1 = nn.Conv2d(1, 6, kernel_size=80, in_channels=3)
self.conv2 = nn.Conv2d(6, 16, kernel_size=80, in_channels=3)
This is how nn.Conv2d takes parameters.
nn.Conv2d(in_channels=3, out_channels=6, kernel_size=80)
In your case you had 1 as the first positional argument (the number of input channels) and additionally passed in_channels=3. |
st47449 | Your current input is too small for the model architecture and an intermediate activation would be empty after the pooling layer, so that you would either have to increase the spatial size of the input or change the architecture and remove some pooling ops. |
st47450 | The documentation of torch.gather says the index argument must be an n-dimensional tensor with a certain shape. I thought this meant that it would check the index shape, but it turned out it didn’t check and silently did the unexpected thing.
The following code has a bug, but it runs without even a warning.
import torch
torch.manual_seed(0)
input = torch.rand(4, 2)
index = torch.randint(2, size=(4,)).unsqueeze(0) # intended to be unsqueeze(1)
dim = 1
output = torch.gather(input, dim, index)
print("input = ", input)
print("index = ", index)
print("output = ", output)
I thought it would be good if we checked the index shape. Otherwise, the documentation should mention that the index dimension is not checked. |
st47451 | It seems the check was dropped somewhere between PyTorch 1.5.1 and 1.6.
Would you mind creating an issue on GitHub so that we can track it? |
st47452 | Hello all, I am trying to train an LSTM in the half-precision setting. The LSTM takes an encoded input from a pre-trained autoencoder (not trained in fp16). I am using torch.amp instead of apex and scaling the losses as suggested in the documentation.
Here is my training loop -
def train_model(self, model, dataloader, num_epochs):
    model.cuda()
    least_loss = 5
    model.train()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    scaler = amp.GradScaler()
    training_loss = []
    for i in range(0, num_epochs + 1):
        st = time.time()
        training_acc = 0
        epoch_loss = 0
        for _, (x, y) in enumerate(dataloader):
            optimizer.zero_grad()
            sst = time.time()
            x = x.float().half().cuda()
            x, out = self.autoencoder(x)
            x = x.permute(0, 2, 1)
            model.init_Hidden()
            y = y.cuda()
            output = model(x)
            loss = self.criterion(output, y)
            scaler.scale(loss).backward()
            scaler.step(optimizer)
            scaler.update()
I call my model as -
lstm = lstm(features=1024, hidden_size=512, sequence_length=313, autoencoder=model).half().cuda()
I am getting the followiing error -
ValueError: Attempting to unscale FP16 gradients.
Could someone please tell why would this be happening
TIA |
st47453 | You shouldn’t call half manually on the model or data.
Could you remove the half call here: x = x.float().half().cuda() and rerun your script? |
st47454 | @ptrblck thanks for replying
I thought we had to convert the model to half by calling model.half() for fp16 training. (I am using torch.amp from the 1.5 nightly builds.) Also, if .half() is called on only the data or only the model, it gives an error saying the weight type and input type should be the same (as one of them is half).
I tried running the script without calling .half() and CUDA ran out of memory. Also, after calling .half(), the model did not go out of memory but raised the same unscaling error at the scaler.step(optimizer) line.
I also ran a similar training loop and got the same error (I did explicitly call model.half() and data.half()). |
st47455 | torch.cuda.amp.autocast will use mixed-precision training and cast necessary tensors under the hood for you.
From the docs:
When entering an autocast-enabled region, Tensors may be any type. You should not call .half() on your model(s) or inputs when using autocasting. |
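For reference, the canonical mixed-precision loop from the amp docs looks roughly like this (model, criterion, optimizer and dataloader are placeholders):
import torch
from torch.cuda import amp

scaler = amp.GradScaler()
for x, y in dataloader:
    optimizer.zero_grad()
    with amp.autocast():                    # forward pass runs in mixed precision
        output = model(x.cuda())
        loss = criterion(output, y.cuda())
    scaler.scale(loss).backward()           # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)                  # unscales gradients, then steps
    scaler.update()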
st47456 | I get this error too:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-8-4d12a5af1b3f> in <module>
----> 1 trainer.run_epoch()
~/ccai/github/dev_omni/omnigan/omnigan/trainer.py in run_epoch(self)
655 param.requires_grad = True
656
--> 657 self.update_D(multi_domain_batch)
658
659 # -------------------------------
~/ccai/github/dev_omni/omnigan/omnigan/trainer.py in update_D(self, multi_domain_batch, verbose)
1133 d_loss = self.get_D_loss(multi_domain_batch, verbose)
1134 self.grad_scaler_d.scale(d_loss).backward()
-> 1135 self.grad_scaler_d.step(self.d_opt)
1136 self.grad_scaler_d.update()
1137 else:
~/.conda/envs/omnienv/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py in step(self, optimizer, *args, **kwargs)
287
288 if optimizer_state["stage"] is OptState.READY:
--> 289 self.unscale_(optimizer)
290
291 assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
~/.conda/envs/omnienv/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py in unscale_(self, optimizer)
238 found_inf = torch.full((1,), 0.0, dtype=torch.float32, device=self._scale.device)
239
--> 240 optimizer_state["found_inf_per_device"] = self._unscale_grads_(optimizer, inv_scale, found_inf, False)
241 optimizer_state["stage"] = OptState.UNSCALED
242
~/.conda/envs/omnienv/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py in _unscale_grads_(self, optimizer, inv_scale, found_inf, allow_fp16)
185 if param.grad is not None:
186 if (not allow_fp16) and param.grad.dtype == torch.float16:
--> 187 raise ValueError("Attempting to unscale FP16 gradients.")
188 else:
189 torch._amp_non_finite_check_and_unscale_(param.grad,
ValueError: Attempting to unscale FP16 gradients.
The piece of code yielding this error is:
with autocast():
    d_loss = self.get_D_loss(multi_domain_batch, verbose)
self.grad_scaler_d.scale(d_loss).backward()
self.grad_scaler_d.step(self.d_opt)
self.grad_scaler_d.update()
I’m using pytorch 1.6 and not calling half() on anything. Maybe this context can help: I’m training a GAN model and the exact same procedure on the generator’s loss, optimizer and scaler works without error.
Generator and Discriminator’s optimizers are Adam optimizers from torch.optim and grad_scaler_d and grad_scaler_g are GradScaler() instances from from torch.cuda.amp. @ptrblck where do I start debugging beyond looking for .half() calls? |
st47457 | Could you post the model definitions and the general workflow, i.e. how the losses are calculated, which optimizers are used etc. so that we could help debugging? |
st47458 | It’s quite complex (here 5) so I can’t really paste it all but for some reason the culprit seems to be changing requires_grad back and forth for the discriminator
# ------------------------------
# ----- Update Generator -----
# ------------------------------
if self.d_opt is not None:
    for param in self.D.parameters():
        # continue
        param.requires_grad = False

self.update_G(batch)

# ----------------------------------
# ----- Update Discriminator -----
# ----------------------------------
# unfreeze params of the discriminator
for param in self.D.parameters():
    # continue
    param.requires_grad = True

self.update_D(batch)
The error disappears if I comment in the continue statements, or equivalently if I comment out the two for loops around the requires_grad changes.
where both self.update_X(batch) methods (X being the generator g or the discriminator d) are structured as:
with autocast():
    x_loss = self.get_x_loss(batch)
self.grad_scaler_x.scale(x_loss).backward()
self.grad_scaler_x.step(self.x_opt)
self.grad_scaler_x.update()
In both cases x_opt is a regular torch.optim.Adam(X.parameters()) |
st47459 | I cannot reproduce the issue using the DCGAN example and setting requires_grad=False for the parameters of netD in the update step of the generator. |
st47460 | Hmm this is so weird. I’m going to try and keep digging. I’ll get back to you, hopefully with a reproducible culprit. Thank you |
st47461 | @ptrblck Is there any reason why this error would suddenly appear when using code that worked locally (on a RTX3050 GPU) on a Azure Data Science VM with a T4 GPU?
Local versions:
Torch 1.10.0
Cuda 11.4
Azure versions:
Torch 1.10.0
Cuda 11.5 |
st47462 | Hi guys!
I am trying to use SWA with my custom dataloader but I have a doubt. This is my code:
swa_model.train()
for indx, batch in enumerate(train_loader):
    image = batch["image"].type(torch.float).cuda()
    _ = swa_model(image)
“If your dataloader has a different structure, you can update the batch normalization statistics of the swa_model by doing a forward pass with the swa_model on each element of the dataset.” |
st47463 | Solved by Mario_Parreno in post #2 |
st47464 | Using torch.no_grad() still updates the statistics:
swa_model.train()
with torch.no_grad():
    for indx, batch in enumerate(train_loader):
        image = batch["image"].type(torch.float).cuda()
        _ = swa_model(image)
Check with:
for module in swa_model.modules():
    if isinstance(module, torch.nn.modules.batchnorm._BatchNorm):
        print(module.running_mean)
        print(module.running_var)
        print(module.momentum)
        break |
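As an aside, for loaders that yield plain tensors (or tuples whose first element is the input), torch.optim.swa_utils.update_bn performs this statistics pass for you; the manual loop is only needed here because the batches are dictionaries:
import torch

# works when each batch is a tensor or an (input, target)-style tuple
torch.optim.swa_utils.update_bn(train_loader, swa_model, device=torch.device('cuda'))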
st47465 | Hi all,
I want to run my project on Google Colab, as I don’t have GPU facilities. Up till now I have uploaded the complete project folder to my Drive. What should be next? Please give me a step-wise process. My project has several parts (.py files) and a dataset too.
Regards |
st47466 | Solved by Henry_Chibueze in post #9
Sorry for the late reply.
I don’t know about you, but when I program I like to run my code in the terminal of the operating system I’m using, Windows or Linux, and not in the IDE (this is just my preference).
So it’s kinda similar to Colab.
Colab uses Linux as its operating system and a Python … |
st47467 | Compress the project folder to a zip or RAR file
Start up Google Colab and select the add-file icon in the left pane (the pane may be collapsed)
Upload the compressed file to Colab
Then install the unrar package (Linux version) in Colab using the terminal (always remember to prefix terminal commands with ‘!’), e.g. ‘!apt-get install unrar’
Then, after installing, unpack the compressed file with the command ‘!unrar x rarfile’
And you are done |