st47468 | Well, thanks for the reply. I am not able to find the 'add file' icon here; this is in the left pane. When I open Colab there is a pop-up window with options like GitHub, upload, etc.
And actually I have already uploaded to Drive (not a rar, but the whole folder).
Can I upload this into Colab now? |
st47469 | If ur folder is already in ur drive then u can do as this link says:
MarkTechPost – 7 Jun 19
How to Connect Google Colab with Google Drive | MarkTechPost 1
How to Connect Google Colab with Google Drive. In this tutorial, you’ll learn how to connect your Google Colab with Google Drive to build some Deep Learning |
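For reference, the Drive mount itself is only two lines in a Colab cell (standard google.colab API; /content/drive is the usual mount point):
from google.colab import drive
drive.mount('/content/drive')
# after authorizing, your files appear under /content/drive/My Drive/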
st47470 | @Henry_Chibueze
Thanks again, just done.
Now, after mounting the drive, I can find all those Python files and the relevant dataset in that far-left panel. When I double-click there, the file opens in the right panel. My question is: do I need to copy this into the Jupyter cells here, or is there another way?
Regards |
st47471 | U mean copying the Python code to a Jupyter cell?
If that's what u mean, then no, you don't need to copy the Python code to the cell.
If u want to run code, what u need to do is run it on the terminal with python.
If u are familiar with the Linux terminal or cmd, all u need to do is navigate to the dir where the code is, like this: !cd dir, then run the command !python code.py.
Don't forget to add the '!' b4 a terminal command, else the command will attempt to run as Python code, which will throw an error.
If u want to edit your code, all u need to do is click on the Python file in the left panel and a text editing pane will open on the right where u can edit the code.
Hope this helps you |
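One caveat with the !cd approach above: in Colab every ! command runs in its own subshell, so a !cd on one line does not persist for the next command. A minimal sketch of two variants that do work (the /content/project path is just a placeholder):
%cd /content/project        # IPython magic: changes the working dir persistently
!python code.py
# or chain both commands in a single shell invocation:
!cd /content/project && python code.py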
st47472 | I am sorry, I may sound like a fool here, but I wanted to know how to run this project on Colab, as the issue on my local machine is a GPU acceleration problem.
Or are you saying that after uploading to Colab, execution is done through the command line?
I am still confused.
Sorry |
st47473 | Sorry for the late reply
I don't know about you, but when I program I like to run my code on the terminal of the operating system I'm using (Windows or Linux) and not in the IDE (this is just my preference).
So Colab is kinda similar.
Colab uses Linux as its operating system, and a Python version is installed in it by default, along with some machine learning modules and others.
Normally, in any operating system, if u want to run code from the terminal, all u need to do is open the terminal and type the command cd 'code dir' to navigate to the code directory and then python code.py to run it. This is no different in Colab.
Once u upload ur code to Colab, whether from Google Drive or from a local directory, it is stored in the current working directory, e.g.:
Assuming u uploaded a folder named 'project' and this folder has ur code named 'project_class.py', all u need to do is type !cd project to navigate into the 'project' folder and then type !python project_class.py to run the code on Colab.
If u want to edit the code, simply go to the left panel and click on the particular file u wish to edit, and a text editor will appear on the right side.
Hope this helps |
st47474 | So to answer ur questions: yes, the code execution is done via the command terminal, and u don't need to do anything special or navigate to some unknown place to open a terminal.
All u need to do is type the command in a cell, just like u'd type Python code in a cell, but when u r typing a terminal command u need to add '!' before it. This is what differentiates a terminal command from Python code.
U grab? |
st47475 | I really find it very straightforward and clear, so I accepted it as the answer for other readers. But it's still not working for me.
I am doing exactly the same, but it says 'No such file or directory', even though, as you can see, it's there in Drive.
(screenshot: Colab file browser, 1127×550)
Thank you for helping
Regards |
st47476 | Type !dir and run the command.
If u see all the files and directories of the 'CNN_as_MATLAB' folder listed on the terminal, then that directory is ur working directory.
So if u are already in that working directory, u don't need to run the !cd..... command anymore. All u need to do is just run the Python code:
!python EmoDB_1.py |
st47477 | Can't thank you enough!
Just one more question: how do I set the device here in Colab?
model=model.to(device)
? |
st47478 | Yes
The same way u always do.
Tho it's preferable that u use the CPU environment on Colab rather than the GPU environment, unless u are using Colab Pro (which is a paid version).
The GPU environment in Colab can be really unstable sometimes, but u can always try it out. |
st47479 | Actually my question is: what should I take as device?
device = 'torch.cuda.device'
model = model.to(device)
?
It's not working. |
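The usual idiom here (standard PyTorch API) is to build a torch.device that falls back to the CPU when CUDA is unavailable, for example:
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
# inputs/targets must be moved to the same device, e.g. data = data.to(device)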
st47480 | To switch processor environments, all u need to do is go to the top bar and select 'Runtime', then select 'Change runtime type', and u'll see the one u are currently using ('None'). U can switch by selecting GPU or TPU in the drop-down.
Remember that when u change runtime u need to upload your project folder again, coz it's like switching to a different computer.
If u want to use the TPU tho, u have to install the 'torch_xla' package. U can just search for 'how to run PyTorch on a TPU'.
The code for specifying the running device should still be kept. |
st47481 | I am trying to install torch_xla as
!pip install torch_xla
But the error is
ERROR: Could not find a version that satisfies the requirement torch_xla (from versions: none)
ERROR: No matching distribution found for torch_xla |
st47482 | LOL, it's really funny coz I experienced ur problems too, and in the exact same order.
Don’t mind me🙂
Anyways just look at this site for details on that:
colab.research.google.com
Google Colaboratory 2 |
st47483 | Hi,
Amazing discussion going on.
Just to mention that it's advisable to use %run script.py on Colab instead of !python script.py. See this 2.
Best,
Mughees |
st47484 | I used PyTorch to create a 3D CNN with 2 conv layers.
I used 1000 epochs, as shown in the curve, but the accuracy and the loss values are almost stable.
Can you explain the reason to me, please?
class CNNModel(nn.Module):
    def __init__(self):
        super(CNNModel, self).__init__()  # inheritance
        self.conv_layer1 = self._conv_layer_set(3, 32)
        self.conv_layer2 = self._conv_layer_set(32, 64)
        self.fc1 = nn.Linear(64*28*28*28, 2)
        self.fc2 = nn.Linear(1404928, num_classes)
        self.relu = nn.LeakyReLU()
        self.batch = nn.BatchNorm1d(2)
        self.drop = nn.Dropout(p=0.15, inplace=True)

    def _conv_layer_set(self, in_c, out_c):
        conv_layer = nn.Sequential(
            nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
            nn.LeakyReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        return conv_layer

    def forward(self, x):
        # Set 1
        out = self.conv_layer1(x)
        out = self.conv_layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.batch(out)
        out = self.drop(out)
        out = F.softmax(out, dim=1)
        return out

# Create CNN
model = CNNModel()
model.cuda()  # to use the GPU
print(model)

# Cross Entropy Loss
for param in model.parameters():
    param.requires_grad = True
error = nn.CrossEntropyLoss()

# SGD Optimizer
learning_rate = 0.001
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

################################################### accuracy function ##################################
def accuracyCalc(predicted, targets):
    correct = 0
    p = predicted.tolist()
    t = targets.flatten().tolist()
    for i in range(len(p)):
        if (p[i] == t[i]):
            correct += 1
    accuracy = 100 * correct / targets.shape[0]
    return(accuracy)
#######################################################################################################

print(" build model --- %s seconds ---" % (time.time() - start_time))

####################################################### training #####################################
print('data preparation ')
training_data = np.load("/content/drive/My Drive/brats6G/train/training_data.npy", allow_pickle=True)
training_data = training_data[:2]
targets = np.load("/content/drive/My Drive/brats6G/train/targets.npy", allow_pickle=True)
targets = targets[:2]
from sklearn.utils import shuffle
training_data, targets = shuffle(training_data, targets)
training_data = changechannel(training_data, 1, 5)  # channels ordering: first channel to last channel
training_data = resize3Dimages(training_data)       # resize images
training_data = channel1to3(training_data)          # 1 channel to 3 channels ===> RGB
training_data = changechannel(training_data, 4, 1)  # last to first

# Definition of hyperparameters
num_epochs = 5
loss_list_train = []
accuracy_list_train = []
for epoch in range(num_epochs):
    outputs = []
    outputs = torch.tensor(outputs).cuda()
    for fold in range(0, len(training_data), 4):
        xtrain = training_data[fold:fold+4]
        xtrain = torch.tensor(xtrain).float().cuda()
        xtrain = xtrain.view(2, 3, 120, 120, 120)
        # Clear gradients
        optimizer.zero_grad()
        # Forward propagation
        v = model(xtrain)
        outputs = torch.cat((outputs, v.detach()), dim=0)
    targets = torch.Tensor(targets)
    labels = targets.cuda()
    outputs = torch.tensor(outputs, requires_grad=True)
    _, predicted = torch.max(outputs, 1)
    accuracy = accuracyCalc(predicted, targets)
    labels = labels.long()
    labels = labels.view(-1)
    loss = nn.CrossEntropyLoss()
    loss = loss(outputs, labels)
    # Calculating gradients
    loss.backward()
    # Update parameters
    optimizer.step()
    loss_list_train.append(loss.data)  # loss values
    accuracy_list_train.append(accuracy/100)
    np.save('/content/drive/My Drive/brats6G/accuracy_list_train.npy', np.array(accuracy_list_train))
    np.save('/content/drive/My Drive/brats6G/loss_list_train.npy', np.array(loss_list_train))
    print('Iteration: {}/{} Loss: {} Accuracy: {} %'.format(epoch+1, num_epochs, loss.data, accuracy))
print('Model training : Finished')
(screenshot: training loss/accuracy curves, 753×442) |
st47485 | Remove the nn.Softmax at the end of the model, since nn.CrossEntropyLoss will internally apply F.log_softmax and nn.NLLLoss.
Also, probably unrelated to the training issue, but your linear layers are quite big.
I’m also a bit skeptical about using a dropout layer at the end of the model as it would mask logits. |
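To illustrate the first point: nn.CrossEntropyLoss expects raw logits, so the model should return the output of the last layer directly. A minimal sketch:
import torch
import torch.nn as nn

logits = torch.randn(4, 2)             # raw model outputs, no softmax applied
targets = torch.randint(0, 2, (4,))
criterion = nn.CrossEntropyLoss()
loss = criterion(logits, targets)      # log_softmax + NLLLoss happen internally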
st47486 | Thank you for your reply @ptrblck
I set the linear layer in_features to 64 * 28 * 28 * 28 because the input image size is 120 * 120 * 120 and I have two conv layers. So how can I reduce the linear layer size and add another linear layer after the dropout layer?
Thank you in advance. |
st47487 | You could reduce the spatial size of the activation with pooling layers or a generally deeper architecture.
However, I would focus on the other two points, i.e. the softmax layer and dropout at the end of the model.
Did you remove them and did anything change? |
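One common way to shrink the activation before the classifier (a sketch of the idea, not the poster's exact model) is an adaptive pooling layer, which also makes the linear layer's in_features independent of the input resolution:
import torch
import torch.nn as nn

class SmallHead(nn.Module):
    # hypothetical replacement for the flatten + huge nn.Linear
    def __init__(self, in_channels=64, num_classes=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d((1, 1, 1))   # -> [N, C, 1, 1, 1]
        self.fc = nn.Linear(in_channels, num_classes)

    def forward(self, x):
        x = self.pool(x)
        x = x.view(x.size(0), -1)  # [N, C]
        return self.fc(x)

head = SmallHead()
print(head(torch.randn(2, 64, 28, 28, 28)).shape)  # torch.Size([2, 2])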
st47488 | I deleted them, but the accuracy stays at 0.5 for all epochs (200):
class CNNModel(nn.Module):
    def __init__(self):
        super(CNNModel, self).__init__()
        self.conv_layer1 = self._conv_layer_set(3, 32)
        self.conv_layer2 = self._conv_layer_set(32, 64)
        self.fc1 = nn.Linear(64*28*28*28, 2)
        self.relu = nn.LeakyReLU()
        self.batch = nn.BatchNorm1d(2)

    def _conv_layer_set(self, in_c, out_c):
        conv_layer = nn.Sequential(
            nn.Conv3d(in_c, out_c, kernel_size=(3, 3, 3), padding=0),
            nn.LeakyReLU(),
            nn.MaxPool3d((2, 2, 2)),
        )
        return conv_layer

    def forward(self, x):
        # Set 1
        out = self.conv_layer1(x)
        out = self.conv_layer2(out)
        out = out.view(out.size(0), -1)
        out = self.fc1(out)
        out = self.relu(out)
        out = self.batch(out)
        return out
(screenshot: accuracy/loss curve, 730×422) |
st47489 | In that case you could try to overfit a small dataset, e.g. just 10 samples, and make sure your model is able to do so by playing around with hyperparameters.
If that’s still not working, there might be other issues I haven’t seen yet. |
st47490 | ptrblck:
In that case you could try to overfit a small dataset, e.g. just 10 samples, and make sure your model is able to do so by playing around with hyperparameters.
I am only using 4 images (2 images as the training set and 2 images as the validation set) |
st47491 | Your model works fine with 10 random samples and achieves a perfect accuracy after a few steps:
device = 'cuda'
model = CNNModel().to(device)

data = torch.randn(10, 3, 120, 120, 120).to(device)
target = torch.randint(0, 2, (10,)).to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    acc = (torch.argmax(output, 1) == target).float().mean()
    print('epoch {}, loss {}, acc {}'.format(epoch, loss.item(), acc))

> epoch 0, loss 0.707537829875946, acc 0.5
epoch 1, loss 0.25594037771224976, acc 0.9000000357627869
epoch 2, loss 0.18612369894981384, acc 1.0
epoch 3, loss 0.18332494795322418, acc 1.0
epoch 4, loss 0.18075349926948547, acc 1.0
epoch 5, loss 0.17838451266288757, acc 1.0
so I would still recommend to play around with some hyperparameters and make sure your model can overfit the tiny dataset. |
st47492 | if I have a 2D tensor like this one:
>>> torch.ones((2, 4))
[[1, 1, 1, 1],
[1, 1, 1, 1]]
and want to fill two positions per row with 0, to get:
[[1, 0, 1, 0],
[0, 1, 1, 0]]
I can do:
torch.ones((2, 4)).index_put((torch.arange(2).unsqueeze(1), torch.LongTensor([[1,3], [0,3]])), torch.Tensor([0]))
What about a 3D tensor? Let’s say I want to fill in a torch.ones(2, 3, 4) tensor with some zeros, to get:
tensor([[[1., 0., 1., 0.],
[0., 1., 1., 0.],
[1., 0., 0., 1.]],
[[0., 1., 0., 1.],
[1., 0., 0., 1.],
[1., 1., 0., 0.]]])
if I have the zero-indices stored as:
torch.LongTensor([[[1,3],
[0,3],
[1,2]],
[[0,2],
[1,2],
[2,3]]])
is there a way to use these indices, to tell .index_put() where to place the zeros? |
st47494 | Hi,
For any number of dimensions, you can use scatter to achieve this:
import torch

ind = torch.tensor([[[1, 3],
                     [0, 3],
                     [1, 2]],
                    [[0, 2],
                     [1, 2],
                     [2, 3]]])
base = torch.ones(2, 3, 4)
base.scatter_(2, ind, 0)
print(base) |
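Running this produces exactly the tensor asked for above:
tensor([[[1., 0., 1., 0.],
         [0., 1., 1., 0.],
         [1., 0., 0., 1.]],
        [[0., 1., 0., 1.],
         [1., 0., 0., 1.],
         [1., 1., 0., 0.]]])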
st47495 | I am trying to install PyTorch, but every time it throws the same error (ModuleNotFoundError: No module named 'tools.nnwrap').
This is what I typed: pip install torch
This is what I get every time:
Collecting torch
Using cached https://files.pythonhosted.org/packages/f8/02/880b468bd382dc79896eaecbeb8ce95e9c4b99a24902874a2cef0b562cea/torch-0.1.2.post2.tar.gz
Requirement already satisfied: pyyaml in c:\users\user\appdata\local\programs\python\python37-32\lib\site-packages (from torch) (5.1.2)
Installing collected packages: torch
Running setup.py install for torch ... error
ERROR: Command errored out with exit status 1:
command: 'c:\users\user\appdata\local\programs\python\python37-32\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\User\AppData\Local\Temp\pip-install-oegjniuy\torch\setup.py'"'"'; __file__='"'"'C:\Users\User\AppData\Local\Temp\pip-install-oegjniuy\torch\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\User\AppData\Local\Temp\pip-record-_b_c49sh\install-record.txt' --single-version-externally-managed --compile
cwd: C:\Users\User\AppData\Local\Temp\pip-install-oegjniuy\torch
Complete output (23 lines):
running install
running build_deps
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\User\AppData\Local\Temp\pip-install-oegjniuy\torch\setup.py", line 265, in <module>
    description="Tensors and Dynamic neural networks in Python with strong GPU acceleration",
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\Lib\site-packages\setuptools\__init__.py", line 145, in setup
    return distutils.core.setup(**attrs)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\Lib\distutils\core.py", line 148, in setup
    dist.run_commands()
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\Lib\distutils\dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\Lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\User\AppData\Local\Temp\pip-install-oegjniuy\torch\setup.py", line 99, in run
    self.run_command('build_deps')
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\Lib\distutils\cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "C:\Users\User\AppData\Local\Programs\Python\Python37-32\Lib\distutils\dist.py", line 985, in run_command
    cmd_obj.run()
  File "C:\Users\User\AppData\Local\Temp\pip-install-oegjniuy\torch\setup.py", line 51, in run
    from tools.nnwrap import generate_wrappers as generate_nn_wrappers
ModuleNotFoundError: No module named 'tools.nnwrap'
----------------------------------------
ERROR: Command errored out with exit status 1: 'c:\users\user\appdata\local\programs\python\python37-32\python.exe' -u -c '<same command as above>' install --record 'C:\Users\User\AppData\Local\Temp\pip-record-_b_c49sh\install-record.txt' --single-version-externally-managed --compile Check the logs for full command output. |
st47496 | Hi,
please follow the instructions in the get started page: https://pytorch.org/get-started/locally/ 3.5k
From the binary you downloaded, you are trying to install version 0.1.2, which is the first ever release of PyTorch. |
st47497 | It still didn’t work.
I have followed the instructions given but am still getting errors.
I ran the command:
pip3 install torch==1.3.1+cu92 torchvision==0.4.2+cu92 -f https://download.pytorch.org/whl/torch_stable.html
The error I got:
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.3.1+cu92 (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.3.1+cu92 |
st47498 | If you open the link, you’ll see that these versions are here.
Could you give more context on which machine you are installing this? How did you get Python? |
st47499 | No, we don't provide binaries for 32-bit Python because:
On desktops and servers, the mainstream OS is 64-bit.
x86 (32-bit) binaries don't support CUDA, and there is no AVX2 support.
But it should be fairly easy to build it yourself if you really need that. BTW, is there a particular reason you need to use 32-bit Python on an x64 host system? I think it would be easier if you used x64 Python. |
st47500 | Hi!
I am facing the same issue. I have Python 3.6.8 (64-bit), no Anaconda environment, and I am using the Bash app. I have checked the link above and run the command, but I got errors again. |
st47501 | @peterjc123 hi all and Peter…
my PC is quite old, from 2010. I have:
32bit win7
python3.6.5
no conda
is there still a chance to install torch? I'm facing the same
tools.nnwrap
issue… |
st47502 | Hello shymal,
I'm tired of this issue too.
Can you tell me which pip command you used after installing the x64 Python? |
st47503 | I am getting the same error:
[root@ip-172-26-11-98 ~]# python3.9 -m pip install torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.7.0+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch==1.7.0+cpu
I’m on CentOS 7.8. What should I do now? Is there any lower version which can support? |
st47504 | Hi,
What is your operating system and Python version? Having only this very old version available usually means that you're running a 32-bit Python on Windows (which we do not support; you should get a 64-bit Python). |
st47505 | Hi,
I have a 32-bit Python on a 64-bit Windows too. I want to install PyTorch so that I can work in IDLE, even though I have it in Anaconda. If I uninstall my 32-bit Python and install a 64-bit Python, will the libraries that I had individually installed using pip get uninstalled too? |
st47506 | Hi,
Yes you will need to re-install all libraries I’m afraid.
You can use pip freeze to get the list of all installed libraries to be able to easily re-install them. |
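A sketch of that workflow with pip (run the first command with the old interpreter's pip, the second with the new one):
pip freeze > requirements.txt     # list everything installed under the 32-bit Python
pip install -r requirements.txt   # re-install under the new 64-bit Python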
st47507 | You need to make sure that the python and pip you’re using properly point to the 64bit version. |
st47508 | After I installed the 64-bit Python, I installed some other libraries such as numpy and pandas, and now I am able to use them in Python. So that must mean my pip is pointing to the 64-bit Python… shouldn't it? |
st47509 | How can I achieve this function in PyTorch?
tf.math.in_top_k(
targets, predictions, k, name=None
)
tf.nn.in_top_k 3 |
st47510 | Try something similar.
top_k = torch.topk(predictions, k, dim=1).indices      # top-k class indices per sample
in_top_k = (top_k == targets.unsqueeze(1)).any(dim=1)  # True where the target is in the top k
print(in_top_k) |
st47511 | Hi here. I am working on a project in which I have to analyse sentiment on an Arabic dataset. I have been using Camel_tools with the original dataset and TextBlob / AllenNLP with the translated dataset. Both methods give me inefficient results. That's why I want to know if there is a PyTorch module I could use with the original dataset. If not, we could work as a community to build it.
Thanks. |
st47512 | There are two things I have in mind:
If you have sufficient labeled training data, you can simply treat it like an English sentiment analysis task.
Try a multi-language pretrained model (e.g. XLM-R or M-BERT) and add your sentiment analysis layer on top. |
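As an illustration of the second option, a minimal sketch using the Hugging Face transformers library (an assumption here; the post doesn't name a specific toolkit) to load XLM-R with a fresh classification head:
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=3)  # e.g. negative / neutral / positive

inputs = tokenizer("مثال على جملة عربية", return_tensors="pt")
logits = model(**inputs).logits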
st47513 | Try this:
aa = torch.tensor([[1, 2, 2], [3, 3, 3]])
unq, groups = aa.unique_consecutive(dim=1, return_inverse=True)
groups is:
tensor([0, 1, 1])
which are the group assignments for the first row only! I expected groups to be of shape [2, 3], with the group assignments for each row… Is this a bug? Is there any way of achieving what I intended efficiently? |
st47514 | Hi,
You are reducing your function over the second dimension, so it has to be of shape [3,] in your case.
cruvadom:
dim=1
Also, it was just a coincidence that the returned groups are the same as the groups for just the first row!
When you set dim=1, your function is applied over the given dimension rather than over the whole tensor as a flattened array. So, set dim=None.
You can run the following code in 4 cases,
with aa = aa1 and dim=None
with aa = aa1 and dim=1
with aa = aa2 and dim=None
with aa = aa2 and dim=1
aa1 = torch.tensor([[1, 0, 0],
                    [0, 0, 1]])
aa2 = torch.tensor([[1, 1, 1],
                    [0, 1, 1]])
aa = aa1  # change this to aa2, with dim=1 and dim=None, and see the difference
print(aa)
unq, groups = aa.unique_consecutive(dim=1, return_inverse=True)
print(groups)
unq, groups = aa.unique_consecutive(dim=None, return_inverse=True)
print(groups)
Bests |
st47515 | I am currently doing some experiments where I want to measure data loading times and compute times (i.e. forward/backward pass) for a couple of models (starting off with ResNet-50 on ImageNet). However, I sometimes encounter negative time measurements in my experiments and would like to understand if this issue is related to PyTorch, core Python components, or something else in my code base.
My code is based on the deep learning examples from NVIDIA 1 which in turn are based on an example from the PyTorch community. The core of my time measurements looks like this:
for i, (data, label) in data_iter:
    batch_size = data.size(0)
    lr_policy(optimizer, i, epoch)
    >>> data_time = time.time() - end
    optimizer_step = ((i + 1) % batch_size_multiplier) == 0
    # Forward/backward happens in step()
    loss = step(data, label, optimizer_step=optimizer_step)
    >>> iteration_time = time.time() - end
    if logger is not None:
        <some logging stuff>
    >>> end = time.time()
My logs then show measurements like this:
Train [1/1492] t.data_time : 0.00018 s t.compute_time : 0.26296 s
Train [1/1493] t.data_time : 0.00018 s t.compute_time : 0.26300 s
Train [1/1494] t.data_time : 0.00018 s t.compute_time : 0.26295 s
Train [1/1495] t.data_time : 0.00018 s t.compute_time : 0.26285 s
Train [1/1496] t.data_time : 0.00018 s t.compute_time : -2.66764 s
Train [1/1497] t.data_time : 0.00018 s t.compute_time : 0.26303 s
Train [1/1498] t.data_time : 0.00017 s t.compute_time : 0.26280 s
Train [1/1499] t.data_time : 0.00017 s t.compute_time : 0.26296 s
I would greatly appreciate some insight of where these negative measurements could come from. If it helps, I can also provide more information on my experiments and code base. |
st47516 | Is t.compute_time just printing iteration_time?
Also, are you using the logger or did you add the timing code manually to the script? |
st47517 | t.compute_time is printing the difference between iteration_time and data_time.
I’m using the logger from the NVIDIA script that I mentioned above with DLLogger 1 as the logging backend. The logging code that I excluded in my initial post is:
if logger is not None:
    logger.log_metric("t.loss", to_python_float(loss), batch_size)
    logger.log_metric("t.data_speed", calc_speed(batch_size, data_time))
    logger.log_metric("t.compute_speed", calc_speed(batch_size, iteration_time - data_time))
    logger.log_metric("t.iteration_speed", calc_speed(batch_size, iteration_time))
    logger.log_metric("t.data_time", data_time)
    logger.log_metric("t.compute_time", iteration_time - data_time)
    logger.log_metric("t.iteration_time", iteration_time)
The print logs are actually much larger but above I only showed the relevant part for conciseness. |
st47518 | Thanks for the update. I don’t see any definition of calc_speed in the repository, so could you link to its implementation? |
st47519 | I refactored some of the names in my code to be consistent with the rest of my project. calc_speed in my snippet equals calc_ips 1 in the original code. |
st47520 | Thanks for the update. In that case I don’t know what might cause the wrongly reported time, as the input values shouldn’t get negative (batch size as well as world size) and even if you are not synchronizing the code properly to measure the GPU time, you should not get negative results.
Maybe I’m missing something and someone else has an idea. |
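One thing worth double-checking in general (a sketch reusing the step() call from the snippet above, and assuming a CUDA device): CUDA kernels run asynchronously, so wall-clock timing should synchronize before reading the clock:
import time
import torch

torch.cuda.synchronize()     # wait for all pending GPU work
start = time.time()
loss = step(data, label, optimizer_step=optimizer_step)
torch.cuda.synchronize()     # make sure the step has actually finished
compute_time = time.time() - start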
st47521 | Hello,
I am trying to predict injury time in a football match, but my results are not too good.
I want to get a matrix/grid/plot where the declared minutes of injury time are on the x-axis and the predicted minutes of injury time are on the y-axis. In each position I want to know the percentage of this outcome. So, in (1,1) I want the percentage of how often my model predicts 1 minute when 1 minute is declared; in (1,2) I want to know how often 2 minutes are predicted when only 1 minute is declared; and so on. There is a name for this, but I can't remember it.
Hopefully somebody can help me implement this in PyTorch.
Thank you in advance! |
st47522 | In that case you could use sklearn.metrics.confusion_matrix or implement it directly in PyTorch (we have some implementations in this forum, e.g. here 2). |
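A minimal plain-PyTorch sketch of such a confusion matrix (assuming targets and preds are 1-D tensors of integer minutes):
import torch

num_classes = 10  # e.g. 0-9 declared minutes; adjust to your data
conf_mat = torch.zeros(num_classes, num_classes, dtype=torch.long)
for declared, predicted in zip(targets.view(-1), preds.view(-1)):
    conf_mat[declared.long(), predicted.long()] += 1
# convert counts into percentages per declared minute (rows)
percent = conf_mat.float() / conf_mat.sum(dim=1, keepdim=True).clamp(min=1) * 100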
st47523 | Is there a class in PyTorch which can automatically save the most accurate model across all epochs?
For example: https://keras.io/api/callbacks/model_checkpoint/ 2
There, they have options like ModelCheckpoint, which saves the best model, and also EarlyStopping.
Just wondering if there is any alternative in PyTorch. |
st47524 | You should be able to find these higher-level APIs and hooks in wrappers such as Ignite, PyTorch Lightning, Catalyst etc. |
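For reference, the plain-PyTorch equivalent of ModelCheckpoint is only a few lines; a sketch where train_one_epoch and evaluate are hypothetical helpers standing in for your own loop:
import torch

best_acc = 0.0
for epoch in range(num_epochs):
    train_one_epoch(model)
    val_acc = evaluate(model)
    if val_acc > best_acc:   # keep only the best-performing weights
        best_acc = val_acc
        torch.save(model.state_dict(), 'best_model.pt')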
st47525 | Hi,
I was playing around with MNIST and I came up with the following concept:
I will create 5 subsets of the training set, e.g. a, b, c, d, e, where b will have 25% of its data in common with a and the rest will be unique, c will have 50% of its data in common with b and the rest will be unique, and so on.
Each subset will have the same length (11000).
As trainset_a is unique, it was easier for me to take a subset of length 11,000 from the main MNIST trainset.
For trainset_b, I created two subsets: one contains 25% of the data of trainset_a, the other contains 75% unique data.
Now I want to combine these two subsets in such a way that they are treated as a single dataset. What I want to say is: if I create a data loader for trainset_b (trainset_b_loader), then by calling trainset_b_loader.dataset I can access all of the data from the two subsets without creating any subfolder/index for two different subsets under the dataset. As I am a newbie, I am stuck at this point and unable to find a way to achieve the goal.
My code is given below. Any help would be highly appreciated.
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
from torch.utils.data import Subset, Dataset, DataLoader
import matplotlib.pyplot as plt
import time
import os
import copy
import pandas as pd
import random
from torch.utils.data import Subset
from PIL import Image
from torchvision.datasets import MNIST, FashionMNIST
import torchvision.transforms as transforms
#plt.ion() # interactive mode
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# torch.cuda.set_device(device)
def get_target_label_idx(labels, targets, shots=5, test=False):
    """
    Get the indices of labels that are included in targets.
    :param labels: array of labels
    :param targets: list/tuple of target labels
    :return: list with indices of target labels
    """
    final_list = []
    # Both if and else operations seem to be the same, what would be the purpose of this?
    for t in targets:
        if test:
            final_list += np.argwhere(np.isin(labels, t)).flatten().tolist()
        else:
            final_list += np.argwhere(np.isin(labels, t)).flatten().tolist()
    return final_list

def convert_label(x):
    if x >= 5:
        return x - 5
    else:
        return x
normal_classes = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
transform = transforms.Compose([transforms.ToTensor()])
train_set = torchvision.datasets.MNIST(root='./data', train=True,
download=True, transform=transform)
trainloader_test = torch.utils.data.DataLoader(train_set, batch_size=4,
shuffle=True, num_workers=2)
train_index = get_target_label_idx(train_set.train_labels.clone().data.cpu().numpy(), normal_classes)
random.shuffle(train_index)
# Split train_index into two batch:
train_index_unseen = train_index[0:5000]
train_index_rest = train_index[5000:]
# print(type(train_index_rest))
# This data will be used in the attack model
mnist_unseen = Subset(trainloader_test.dataset, train_index_unseen) # -----------unseen
# This data will be splitted into 5 batches
mnist_trainset_rest = Subset(trainloader_test.dataset, train_index_rest)
mnist_trainset_rest_loader = torch.utils.data.DataLoader(mnist_trainset_rest.dataset, shuffle=True, num_workers=2)
# This will return the length of each batch
train_set_split_length = int(len(train_index_rest) / 5) # --------Each trainset size: 11000
# train_set_b: To choose 25% trainset from train_set_a
# Each train set will contain 11000 datapoints
common_portion_b = int((train_set_split_length * (25 / 100))) # ----25% common: 2750
unique_portion_b = train_set_split_length - common_portion_b
# print(unique_portion_b)
rest_portion_b = train_set_split_length + unique_portion_b
# print(rest_portion_b)
# train_set_c: To choose 50% trainset from train_set_b
# Each train set will contain 11000 datapoints
common_portion_c = int((train_set_split_length * (50 / 100))) # ----50% common: 5500
unique_portion_c = train_set_split_length - common_portion_c
# print(unique_portion_c)
rest_portion_c = rest_portion_b + unique_portion_c
# print(rest_portion_c)
# train_set_d: To choose 75% trainset from train_set_c
# Each train set will contain 11000 datapoints
common_portion_d = int((train_set_split_length * (75 / 100))) # ----50% common: 8250
unique_portion_d = train_set_split_length - common_portion_d
rest_portion_d = rest_portion_c + unique_portion_d
# print(rest_portion_d)
# First trainset- Unique Trainset
train_set_a = Subset(mnist_trainset_rest_loader.dataset, train_index_rest[0:train_set_split_length])
# train_set_a_df = PandasDataset(train_set_a)
train_set_a_loader = torch.utils.data.DataLoader(train_set_a.dataset, batch_size=4,
shuffle=True, num_workers=2)
# Second trainset- 25% common of first Trainset
train_set_b_1 = Subset(train_set_a_loader.dataset, train_index_rest[0:common_portion_b])
train_set_b_2 = Subset(mnist_trainset_rest.dataset, train_index_rest[train_set_split_length:rest_portion_b])
train_set_b = ???? |
st47526 | If I understand the use case correctly, you would like to create a new Dataset by concatenating train_set_b_1 and train_set_b_2?
If so, you could use ConcatDataset and pass both datasets to it. |
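A minimal sketch of that, reusing the names from the snippet above:
from torch.utils.data import ConcatDataset, DataLoader

train_set_b = ConcatDataset([train_set_b_1, train_set_b_2])
train_set_b_loader = DataLoader(train_set_b, batch_size=4,
                                shuffle=True, num_workers=2)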
st47527 | I want to use AdamW in my EfficientDet:
#optimizer = optim.AdamW(model.parameters(), lr=args.lr)
optimizer = optim.AdamW(model.parameters(), lr=args.lr, betas=(0.9,0.999),eps=1e-08,weight_decay=0.02,amsgrad=False)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
optimizer, patience=3, verbose=True)
I'm not sure whether this is how to use the optimizer for my model.
Also, what do the betas do for this optimizer?
My loss for EfficientDet-D0 is going down very slowly:
=============================================================================
6 epoch: start training....
4500 iteration: training ...
epoch : 6
iteration : 4500
cls_loss : 1.8731008768081665
reg_loss : 1.0041449069976807
mean_loss : 2.7572750695168025
4800 iteration: training ...
epoch : 6
iteration : 4800
cls_loss : 1.754671573638916
reg_loss : 0.9150880575180054
mean_loss : 2.7382697457438905
5100 iteration: training ...
epoch : 6
iteration : 5100
cls_loss : 1.5807626247406006
reg_loss : 0.8592696189880371
mean_loss : 2.7208735038097895
time : 332.5537312030792
loss : 2.720083791859683
epoch_7
=============================================================================
7 epoch: start training....
5400 iteration: training ...
epoch : 7
iteration : 5400
cls_loss : 1.5356632471084595
reg_loss : 0.7544331550598145
mean_loss : 2.6795950939358284
5700 iteration: training ...
epoch : 7
iteration : 5700
cls_loss : 2.316020965576172
reg_loss : 1.078192949295044
mean_loss : 2.7014631390371915
time : 335.88806772232056
loss : 2.698524436832946
epoch_8
=============================================================================
8 epoch: start training....
6000 iteration: training ...
epoch : 8
iteration : 6000
cls_loss : 1.8002285957336426
reg_loss : 0.8492593765258789
mean_loss : 2.634405281572115
6300 iteration: training ...
epoch : 8
iteration : 6300
cls_loss : 2.22102427482605
reg_loss : 0.9751409292221069
mean_loss : 2.6694228873293624 |
st47528 | Hello everyone, I am doing a deep learning project which has an imbalanced dataset.
So, I am trying to use weighted cross entropy with a soft Dice loss.
However, I have a question regarding the use of weighted CE.
I usually set my class weights to 1/num_instances, which seems correct, I think.
This should work well, as it counts every instance of each class, but it seems to work less well than when I set the weights for each class approximately by hand.
What could be the reason for my model performing worse when I weight classes by 1/number of occurrences in each image compared to setting the weights from my rough guess?
Thank you, and I look forward to hearing from someone! |
st47529 | I think setting the initial weights as 1/class_count is a viable initial value, but not necessarily the most suitable for your use case and I think it’s the right approach to play around with these values to find a “sweet spot” which fits your use case well. |
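A sketch of passing such weights to the criterion (the class_counts values are hypothetical placeholders for your own per-class instance counts):
import torch
import torch.nn as nn

class_counts = torch.tensor([1000., 200., 50.])  # hypothetical counts per class
weights = 1.0 / class_counts
weights = weights / weights.sum()                # optional normalization
criterion = nn.CrossEntropyLoss(weight=weights)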
st47530 | Aha, I see.
@ptrblck, just one more question about the loss function.
For the multi-class focal loss, is alpha (the hyperparameter) just the same as the weight in weighted cross entropy?
Thank you! |
st47531 | I’m not sure what alpha refers to, but the focal loss would be weigthed by (1 - pred)**gamma, if I remember it correctly. In this case the weighting isn’t a static value, but depends on the output probability of the model for the current target, such that “well-classified samples” get a lower loss than the wrongly classified ones. |
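A common multi-class focal loss sketch along those lines (one standard formulation, not the only one; gamma and the per-class alpha weights are hyperparameters):
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=None, gamma=2.0):
    ce = F.cross_entropy(logits, targets, reduction='none')
    pt = torch.exp(-ce)                  # probability assigned to the true class
    loss = (1 - pt) ** gamma * ce
    if alpha is not None:                # per-class weights, as in weighted CE
        loss = alpha[targets] * loss
    return loss.mean()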
st47532 | Hi, thank you so much for the help.
I just read the paper again, and I think alpha is a list of weights for the classes.
Just one more question, please.
Currently my dataset is hugely imbalanced, and sometimes when I train my model the validation accuracy and IoU stay exactly the same for every epoch. Do you know what might have caused this and how to solve it?
Could it be due to overfitting? |
st47533 | Is the training accuracy and IOU still decreasing while the validation metrics stay the same?
In that case, yes it sounds like overfitting. |
st47534 | Thank you for the reply!
The problem only occurs when I set a learning rate scheduler.
Could the lr_scheduler cause such a problem?
Also, is the weight_decay parameter equivalent to L2 regularization? |
st47535 | ptrblck:
Is the training accuracy and IOU still decreasing while the validation metrics stay the same?
In that case, yes it sounds like overfitting.
Actually, the training accuracy and IoU don't decrease when this problem happens, which is really weird. |
st47536 | Your learning rate scheduler might reduce the learning rate too far and maybe the training just gets stuck?
Did you try to remove the scheduler and does the training benefit from it?
weight_decay will add L2 regularization to all parameters for standard SGD. Note that for certain optimizers (such as Adam) the weight decay is not equal to L2 regularization as explained in Decoupled Weight Decay Regularization 3, which is why AdamW was implemented. |
st47537 | @ptrblck, thank you so much for the help!!
I have been using the code below for optimizer:
optimizer = AdamW(model.parameters(), 0.01,weight_decay=1e-4)
lmbda = lambda epoch: 0.95
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda=lmbda)
I presume this lr_scheduler works by multiplying the lr by 0.95 every epoch, right?
It works fine without the scheduler, but when I add the scheduler it starts overfitting for some reason, and it happens from epoch 1, so I am not sure whether this is really an overfitting problem.
Also, this problem is solved by adding the weight_decay parameter in AdamW, but I am not sure why.
Furthermore, I am currently facing a huge class imbalance problem in my semantic segmentation task.
The original data distribution looks as above, where 0-7 are the classes.
To solve this, other than using loss functions, I have also tried oversampling.
This looks as below, and unfortunately I cannot bring class 0 into the same range as the others, since class 0 appears in every image.
Do you think the above could help to solve the imbalance problem?
Sorry to keep asking questions. |
st47538 | edshkim98:
I presume this lr_scheduler works by lr*0.95 for every epoch right?
Yes, that should be the case. You can double check it by printing the learning rates:
[...]
optimizer.step()
scheduler.step()
print(optimizer.param_groups[0]['lr'])
print(scheduler.get_last_lr())
edshkim98:
Do you think the above could help to solve the imbalance problem?
You could certainly try your oversampling approach, but currently you are creating a lot of class0 samples, so your model might overfit to this class even more now. |
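As an alternative to duplicating data on disk, oversampling is often done at the sampler level; a sketch with WeightedRandomSampler (for segmentation you would first have to assign each image a weight yourself, e.g. from its rarest class):
from torch.utils.data import WeightedRandomSampler, DataLoader

# sample_weights: one weight per image, higher for images containing rare classes
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights),
                                replacement=True)
loader = DataLoader(dataset, batch_size=4, sampler=sampler)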
st47539 | @ptrblck, thank you for the reply.
I know the class 0 count has increased even more now, but the problem is that class 0 appears in every single image, as it refers to the background, so I think that is inevitable if I use the oversampling (with a few augmentation methods) technique.
I am currently training my model, but it seems that the training IoU has significantly increased while the validation IoU has not, so far.
Do you think I shouldn't use the oversampling-with-augmentation technique in my application and should rather leave my original dataset as it was? |
st47540 | Hi,
I was trying to create a denoising AE, but as I was training it I got a size error, even though both the clean and pixelated images have the same size.
Here is model Arch:
############### MODEL ################
class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, (3,3)),
            nn.MaxPool2d((2,2)),
            nn.Conv2d(64, 32, (3,3)),
            nn.MaxPool2d((2,2)),
            nn.Conv2d(32, 16, (3,3)),
            nn.MaxPool2d((2,2)),
            nn.Conv2d(16, 8, (3,3)),
            nn.MaxPool2d((2,2)),
        )
        self.decoder = nn.Sequential(
            nn.Upsample((16,16)),
            nn.ConvTranspose2d(8, 16, (3,3)),
            nn.Upsample((32,32)),
            nn.ConvTranspose2d(16, 32, (3,3)),
            nn.Upsample((64,64)),
            nn.ConvTranspose2d(32, 64, (3,3)),
            nn.Upsample((128,128)),
            nn.ConvTranspose2d(64, 3, (3,3)),
        )

    def forward(self, xb):
        encoded = self.encoder(xb)
        return self.decoder(encoded)
The size of images is 128, 128.
Here is a link to the notebook as well:
colab.research.google.com
Google Colaboratory 2 |
st47542 | Your model works fine with an input of [batch_size, 3, 128, 128]:
model = AutoEncoder()
x = torch.randn(2, 3, 128, 128)
out = model(x)
print(out.shape)
> torch.Size([2, 3, 130, 130])
so I guess the shape of your model output doesn’t match the target shape.
If that’s the case, you would have to change the layer setups and make sure that both these tensors have an equal spatial size. |
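One way to make the sizes line up, assuming the decoder should end at 128×128: each ConvTranspose2d with kernel 3, stride 1 and no padding grows the input by 2, so upsampling to 126 before the last transposed conv yields 126 + 2 = 128. A sketch of the adjusted decoder:
self.decoder = nn.Sequential(
    nn.Upsample((16, 16)),
    nn.ConvTranspose2d(8, 16, (3, 3)),    # 16 -> 18
    nn.Upsample((32, 32)),
    nn.ConvTranspose2d(16, 32, (3, 3)),   # 32 -> 34
    nn.Upsample((64, 64)),
    nn.ConvTranspose2d(32, 64, (3, 3)),   # 64 -> 66
    nn.Upsample((126, 126)),              # was (128, 128)
    nn.ConvTranspose2d(64, 3, (3, 3)),    # 126 -> 128
)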
st47543 | I want to iterate through the children() of a module,
and identify all the convolutional layers (for instance), or maybe all the maxpool layers, to do something with them.
How can I determine the type of layer?
My code would be something like this:
for layer in net.children():
    if layer is a conv layer:  # ??? how do I do this ???
        do something with the layer
Thanks! |
st47545 | Do you plan to treat Conv1d, Conv2d and so far as different? If you were only looking for Conv2d layers you can do something like:
for layer in net.children():
    if isinstance(layer, nn.Conv2d):
        do something with the layer
isinstance is a Python built-in https://docs.python.org/3/library/functions.html#isinstance 169 |
st47546 | Great! Thanks!
That’s good enough for me, for now. I just need to distinguish between Conv2d and MaxPool2d.
Thank you!! |
st47547 | How can I check whether a layer is Conv2d or not for this kind of ResNet structure?
Please help.
ResNet(
(conv1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shortcut): Sequential()
)
(1): BasicBlock(
(conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(shortcut): Sequential()
)
) |
st47548 | The code of @reachtarunhere should work, since net.children() will be called recursively on all submodules. |
st47549 | But it is not working, it can find only if the 1st layer is conv or not. I guess its mostly because for the BasicBlocks the nns are not directly having Conv2d as its children directly. So, the code is failing. Any suggestion? |
st47550 | Right, I was mistaken.
In that case, model.modules() or model.named_modules() should work. |
st47551 | Thanks, it worked fine! Is there any way I can specifically choose the shortcut layer conv2ds only? |
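Filtering on the qualified module name is one way (a sketch; named_modules() yields dotted names such as layer1.0.shortcut.0 for convs inside the shortcut branches):
import torch.nn as nn

for name, module in net.named_modules():
    if isinstance(module, nn.Conv2d) and 'shortcut' in name:
        print(name, module)  # only the convs inside the shortcut branches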
st47552 | I was searching for batchnorm layer and
model = models.densenet161()
for child in model.children():
    for layer in child.modules():
        if isinstance(layer, torch.nn.modules.batchnorm.BatchNorm2d):
            print(layer)
This code worked for me. |
st47553 | Thank you for the answer. How can I select the first conv layer of the ResNet model? |
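For the torchvision ResNets the stem convolution is exposed directly as an attribute, so this should be as simple as:
from torchvision import models

model = models.resnet18()
first_conv = model.conv1   # the first conv layer of the network
print(first_conv)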
st47554 | I want to decompose each conv layer other than the first one in the ResNet. How can I re-assign the decomposed layers to their positions? What is the equivalent of VGG's model.features._modules[module] in ResNet? |
st47555 | The architectures are a bit different for these models.
You can print out the layout using:
model = models.resnet18()
print(model)
print(model.layer1)
and check all submodules. |
st47556 | I got it in the following way, but do we have a better way to do it? Here net means the model.
for n, m in net.named_children():
    num_children = sum(1 for i in m.children())
    if num_children != 0:
        # in a layer of resnet
        layer = getattr(net, n)
        # decompose every bottleneck
        for i in range(num_children):
            BasicBlock = layer[i]
            conv2 = getattr(BasicBlock, 'conv2')
            # print(conv2)
            decompose = function_call
            # s += count_params(conv2)
            setattr(BasicBlock, 'conv2', decompose)
            # print(decompose)
            conv1 = getattr(BasicBlock, 'conv1')
            # print(conv1)
            decompose = function_call
            # s += count_params(conv2)
            setattr(BasicBlock, 'conv1', decompose)
            conv = getattr(BasicBlock, 'downsample')
            if conv:
                c = getattr(conv, '0')
                decompose = function_call
                setattr(conv, '0', decompose) |
st47557 | How can I determine the layer type in a model saved with torch.jit?
When I attempt to access modules, it returns RecursiveScriptModule(original_name=Linear) instead of Linear. |
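A RecursiveScriptModule keeps the original class name as a string, so one workaround (a sketch; isinstance checks against nn.Linear won't match scripted modules) is to compare that attribute:
for m in scripted_model.modules():
    if getattr(m, 'original_name', None) == 'Linear':
        print(m)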
st47558 | Hello,
I am trying to run this 5
First I did
sh ./create_dataset.sh
And gave the correct path to scenflow_data_path.
Now I am trying to run this:
python main.py --maxdisp 192 --with_spn
as given, but I am getting the following error. Please let me know how to correct it.
The error is:
Traceback (most recent call last):
File "main.py", line 194, in <module>
main()
File "main.py", line 51, in main
args.datapath)
File "/home/kbdp5524/Downloads/AnyNet-master/dataloader/listflowfile.py", line 23, in dataloader
monkaa_path = filepath + [x for x in image if 'monkaa' in x][0]
IndexError: list index out of range |
st47560 | This error seems to be specific to the linked repository so you might also want to create an issue in the repo to get a better answer.
That being said, based on the error message it seems as if the files are not found or the download failed. |
st47561 | I’m getting a full system crash when training large models with PyTorch on a 2080 Ti.
It crashes faster when running larger models, where anything needing less than 4GB GPU memory can run for a few hours, while anything over 9GB crashes within 10-20 minutes.
This screams “hardware issue” and “overheating”, if not for the fact that everything runs fine in other frameworks.
There’s no crash when using:
Darknet
TensorFlow
cuda_memtest 9
The cuda_memtest allocates as much memory as it can, and exercises it, leaving the device at 100% utilization in nvidia-smi. It finds no issues, and doesn’t crash the system.
This issue with PyTorch has persisted across different versions of PyTorch (1.4, 1.5 and 1.6), different nvidia drivers (version 440 and 450), OS reinstalls (Linux Mint and Ubuntu 18.04), cuda versions (10.1 and 10.2). It happens on all code bases I’ve tried: MMdetection, AdelaiDet, WongKinYiu/PyTorch_YOLOv4, ultralytics/yolov5.
Looking at the temperature reading on the GPU and CPU does not show any temperature going particularly high. It can crash with GPU temperature below 70C.
When the crash happens, the screen will freeze for a few seconds, before the system reboots.
I haven’t dug deeply into what the above mentioned code bases have in common, but obviously neural net layers, and perhaps the data loading mechanisms.
Obviously I’m not the first to run PyTorch on a 2080 Ti. Yet something consistently causes problems when running large models over time, across various software configurations, but only with PyTorch. It’s as if the probability of a crash increases with model_size * time.
Any ideas on what could be going on here? Anything I could do to troubleshoot this? |
st47562 | It rather sounds like a PSU issue. Could you check dmesg for CUDA XIDs after the crash? |
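A sketch of that check (standard Linux tooling; run right after the reboot):
dmesg -T | grep -i xid           # look for lines like 'NVRM: Xid (PCI:...)'
journalctl -k -b -1 | grep -i xid  # or search the kernel log of the previous boot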
st47563 | I guess it could be the PSU. I figured the stability when using other libraries counted against this theory. It’s supposed to deliver 1000W, which should be plenty for a single GPU system. I’m not seeing XIDs in the logs, or really any reoccurring message preceding the crashes. (This is when looking at /var/log/kern.log and output from journalctl in Ubuntu 18.04.)
I’ve discovered that setting num_workers=0 on the PyTorch DataLoader makes things considerably more stable (although crashes can still happen after several hours). All four code bases that have crashed have relied on the PyTorch DataLoader. Of course, setting num_workers=0 also slows down execution, causing less stress on the GPU, power draw, etc, so I assume that is why this helps. I don’t really see how a DataLoader bug could bring down the entire system in any case.
It’s possible that the four PyTorch code bases just happen to be able to saturate the hardware better, thus causing the crash, for example through power draw. I suppose I should try very hard to get things to crash with other libraries, which would rule out PyTorch altogether as a part of the problem. |
st47564 | So far system crashes reported in this forum were isolated to hardware defects (most of the time the PSU was at fault). You could try to limit the power usage of your GPU via nvidia-smi, if your device supports it, and rerun the script. |
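For reference, a sketch of the power-limiting command (needs root, and 200 is just an example wattage within the card's supported range):
sudo nvidia-smi -pl 200    # equivalently: sudo nvidia-smi --power-limit=200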
st47565 | num_workers is CPU-bound, so something to try is running a CPU benchmark (e.g. a Blender test) and seeing whether it crashes. |
st47566 | lars:
It’s possible that the four PyTorch code bases just happen to be able to saturate the hardware better, thus causing the crash, for example through power draw. I suppose I should try very hard to get things to crash with other libraries, which would rule out PyTorch altogether as a part of the problem.
You could compare the GPU and CPU utilization between the two and maybe try to separate them in different processes, too (e.g. by just feeding the same random inputs to the GPU which eliminates dataloading as a bottleneck and then doing something else that takes all your CPU).
Note that only saturating either GPU and CPU might not see the problem if it is power draw. |
st47567 | Thanks everyone for your help, it’s much appreciated.
I’ve tried now running large matrix multiplications in a loop on both CPU and GPU. This leaves nvidia-smi showing ~250W out of 250W power draw on the GPU while htop shows ~100% utilization on all 24 CPU cores. I ran this for 1.5 hours, with no crash.
I also tried throttling GPU power via nvidia-smi to only use 200W, and then running the PyTorch neuralnet code that has crashed before (num_workers=8). This crashed after a couple of hours, which means it’s ran longer than it usually does. But I don’t know if this is because of lower power draw, slower execution, or if its just a fluke.
This leaves me somewhat more confident that it isn’t the PSU. Do you agree? Are there other tests I could do to exercise the PSU? |