st49868 | MA_CASANDRA_QUILANG:
But as a newbie I don’t understand why my model should have the shapes you have stated above, specifically the output’s shape and the target shape.
For the input shape I used the shape defined in your script:
batch_x = train_x[i:i+BATCH_SIZE].view(-1, 3, 100, 100)
Using a random input in this shape, the model creates an output in the shape [batch_size, 2, 104, 104].
MA_CASANDRA_QUILANG:
Lastly, I still don’t fully understand what target really is.
The target is the ground truth label of the sample. During the training of your model you are trying to get the model predictions as close to the target (label) as possible. |
st49869 | Sir, I checked the shape of my output and target and I found that my output has the shape [100, 3, 104, 104] and my target shape is [100, 2]. Now, I find it difficult to adjust my target shape. Can you please give any suggestion on where I should adjust it? In your previous answers you stated that my target should have the shape [batch_size, 104, 104], but that doesn’t coincide with what I got, so I am getting the error: RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of dimension: 2 |
st49870 | Could you explain what the target contains and what dim1 is used for in [100, 2]?
Based on the shape it should be a multi-class classification target for a sequence length of 2, but I guess you might be using a one-hot encoded target for a binary classification?
If that’s the case, you would have to create a target in the shape [batch_size=100], which contains the class indices in [0, 1] (or use alternatively nn.BCEWithLogitsLoss).
Since your current model outputs a 4-dimensional tensor, you could flatten the activation and add linear layers to the model to get the desired output shape.
Have a look at this tutorial for a simple CNN. |
st49871 | My target contains my labels sir and thank you for the linked tutorial I will check it out. |
st49872 | Sir, I decided to just flatten my output shape because I see it as the best answer to my problem. But I actually don’t know how |
st49873 | You can flatten the activations via x = x.view(x.size(0), -1) or by using nn.Flatten.
However, I don’t think flattening the activations alone will solve the issue, since the model output should have the shape [batch_size, nb_classes] for a multi-class classification as shown in the tutorial.
Thus I would still recommend to use e.g. an nn.Linear module to create the desired outputs. |
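As a minimal sketch of the suggestion above (the shapes are taken from the thread, the loss setup is an assumption):

import torch
import torch.nn as nn

activation = torch.randn(100, 2, 104, 104)     # assumed conv output [batch_size, 2, 104, 104]
flat = activation.view(activation.size(0), -1) # [100, 2*104*104]
fc = nn.Linear(2 * 104 * 104, 2)               # maps to [batch_size, nb_classes]
logits = fc(flat)                              # [100, 2]
target = torch.randint(0, 2, (100,))           # class indices in [0, 1]
loss = nn.CrossEntropyLoss()(logits, target)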
st49874 | Hi! I tried using nn.Linear module and tried to flatten the last layer of my network too. My code finally has no error but still I am not 100% sure if I did it correctly, any comments on this sir? :
class ESNet(nn.Module):
    def __init__(self, classes):
        super().__init__()
        #-----ESNET---------#
        self.initial_block = DownsamplerBlock(3, 16)
        self.layers = nn.ModuleList()
        for x in range(0, 3):
            self.layers.append(FCU(16, 3, 0.03, 1))
        self.layers.append(DownsamplerBlock(16, 64))
        for x in range(0, 2):
            self.layers.append(FCU(64, 5, 0.03, 1))
        self.layers.append(DownsamplerBlock(64, 128))
        for x in range(0, 3):
            self.layers.append(PFCU(chann=128))
        self.layers.append(UpsamplerBlock(128, 64))
        self.layers.append(FCU(64, 5, 0, 1))
        self.layers.append(FCU(64, 5, 0, 1))
        self.layers.append(UpsamplerBlock(64, 16))
        self.layers.append(FCU(16, 3, 0, 1))
        self.layers.append(FCU(16, 3, 0, 1))
        self.output_conv = nn.ConvTranspose2d(16, classes, 2, stride=2, padding=0, output_padding=0, bias=True)
        self.hidden = nn.Linear(2 * 104 * 104, 208)
        self.out = nn.Linear(208, 2)
        self.act = nn.ReLU()

    def forward(self, input):
        output = self.initial_block(input)
        print(input.shape)
        for layer in self.layers:
            output = layer(output)
        output = self.output_conv(output)
        output = output.view(output.size(0), -1)
        output = self.act(self.hidden(output))
        output = self.out(output)
        print(output.shape)
        return output |
st49875 | The implementation looks alright for a 2-class classification.
As said before, you could also use a single output unit and then nn.BCEWithLogitsLoss for a binary classification, but your approach should also work. |
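For reference, a hedged sketch of the single-output alternative mentioned above (the layer sizes are assumptions, only the 208-dimensional hidden size follows the model in the thread):

import torch
import torch.nn as nn

out = nn.Linear(208, 1)                          # a single logit instead of two outputs
logits = out(torch.randn(100, 208))              # [100, 1]
target = torch.randint(0, 2, (100, 1)).float()   # BCEWithLogitsLoss expects float targets
loss = nn.BCEWithLogitsLoss()(logits, target)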
st49876 | Just some random thoughts:
I just tried to build PyTorch on the recently released CUDA 9.2, and had some weird compiler error, such as:
/usr/local/cuda-9.2/include/cuda_fp16.hpp(299): error: no operator "&&" matches these operands
operand types are: __half && __half
/usr/local/cuda-9.2/include/cuda_fp16.hpp(300): error: no operator "&&" matches these operands
operand types are: __half && __half
/usr/local/cuda-9.2/include/cuda_fp16.hpp(301): error: no operator "&&" matches these operands
operand types are: __half && __half
/usr/local/cuda-9.2/include/cuda_fp16.hpp(302): error: no operator "&&" matches these operands
operand types are: __half && __half
/usr/local/cuda-9.2/include/cuda_fp16.hpp(303): error: no operator "&&" matches these operands
operand types are: __half && __half
/usr/local/cuda-9.2/include/cuda_fp16.hpp(304): error: no operator "&&" matches these operands
operand types are: __half && __half
6 errors detected in the compilation of "/tmp/tmpxft_000040bf_00000000-6_THCReduceApplyUtils.cpp1.ii".
CMake Error at ATen_cuda_generated_THCReduceApplyUtils.cu.o.Release.cmake:279 (message):
Error generating file
/.../aten/build/src/ATen/CMakeFiles/ATen_cuda.dir/__/THC/./ATen_cuda_generated_THCReduceApplyUtils.cu.o
After some tinkering, I found that the build succeeds if I add __CUDA_NO_HALF2_OPERATORS__:
diff --git a/aten/CMakeLists.txt b/aten/CMakeLists.txt
index bdf3145..7620d23 100644
--- a/aten/CMakeLists.txt
+++ b/aten/CMakeLists.txt
@@ -165,7 +165,7 @@ ENDIF()
IF(CUDA_HAS_FP16 OR NOT ${CUDA_VERSION} LESS 7.5)
MESSAGE(STATUS "Found CUDA with FP16 support, compiling with torch.CudaHalfTensor")
- LIST(APPEND CUDA_NVCC_FLAGS "-DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__")
+ LIST(APPEND CUDA_NVCC_FLAGS "-DCUDA_HAS_FP16=1 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__")
add_compile_options(-DCUDA_HAS_FP16=1)
ELSE(CUDA_HAS_FP16 OR NOT ${CUDA_VERSION} LESS 7.5)
MESSAGE(STATUS "Could not find CUDA with FP16 support, compiling without torch.CudaHalfTensor")
I don’t know if it applies to others, but if anybody’s trying to build with CUDA 9.2 and having problems, this might help.
BTW, I didn’t know 9.2 was released three days ago! No wonder I had a problem with the build… |
st49877 | question: why are defining symbols such as CUDA_NO_HALF2_OPERATORS, which seem to remove the possibility of doing casts between float16 and float32? Am I misunderstanding the effects of these defines? In any case, if I try using half instead of at::Half, in custom torch extension kernels, I get weird casting errors. I’m trying to understand what is the background and context for this? |
st49878 | We are using these flags to use the internal PyTorch half operations instead of the one from the CUDA libraries.
This dates quite a while back, so I might miss some things but If I remember it correctly, CUDA9 added half operators in its half header, while Torch (Torch7 at this time) already shipped with its own.
The flags are used to keep the half definitions from the CUDA header, while not compiling the operators.
What kind of issues are you seeing in your custom CUDA extension?
EDIT: the follow-up question seems to be in this post. |
st49879 | I am trying to train on extracted contour data to check if a certain shape has defects or not.
Contours are vectors of points, but I am confused about how to proceed. I am also curious if this approach is right.
Thank you. |
st49880 | If your contours have a static number of points you could try to fit your model to a regression task where the outputs would be these contour points.
I don’t know what the best approach would be for a variable number of contour points. Maybe a recurrent architecture could work in this case. |
st49881 | Thank you for your reply.
I will interpolate the contours to have same size and check results. |
st49882 | I have a multi modal network with a forward function as below:
def forward(self, x):  # x.shape torch.Size([1, 2, 200, 100])
    x1 = x.data[0][0, :, :]  # torch.Size([200, 100])
    x2 = x.data[0][1, :, :]  # torch.Size([200, 100])
    out_x1 = self.conv(x1)
    out_x2 = self.conv(x2)
The input x is of shape torch.Size([1, 2, 200, 100]), i.e. [batch=1, ch=2, height=200, width=100], and I want x1 to be the first channel and x2 to be the second channel. If I use:
x1 = x.data[0][0, :, :]
then I only get height and width, torch.Size([200, 100]), which will not work in self.conv, which expects torch.Size([1, 1, 200, 100]). How should I extract x1 and x2 from x to get the 4 dimensions that I need?
Thank you! |
st49883 | You can use the view function to reshape it.
x1 = (x.data[0][0, :, :]).view(1, 1, 200, 100) |
st49884 | Thank you. But then I would not be able to change batch size. This is a nested network so it would be good if I could extract the batch size |
st49885 | My mistake. I misunderstood what you were asking. I’ve tried some code below and got these results.
simulated_batch_size = 64
X = torch.rand((simulated_batch_size, 2, 200, 100), dtype=torch.float64)
x1 = X[:, 0, :, :]
print(x1.shape) # [64, 200, 100]
x2 = X[:, 1, :, :]
print(x2.shape) # [64, 200, 100]
#### Reshaped for Conv ####
x1 = x1.view(x1.shape[0], 1, 200, 100)
print(x1.shape) # [64, 1, 200, 100]
x2 = x2.view(x2.shape[0], 1, 200, 100)
print(x2.shape) # [64, 1, 200, 100]
So I believe in your case, it would be
x1= x.data[:, 0, :, :]
It should give you a tensor of shape [1, 200, 100] |
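A small additional sketch (not from the thread): slicing with a length-1 range, or unsqueezing afterwards, keeps the channel dimension without hard-coding the spatial size:

import torch

x = torch.rand(1, 2, 200, 100)
x1 = x[:, 0:1]             # [1, 1, 200, 100] - the slice keeps the channel dim
x2 = x[:, 1].unsqueeze(1)  # [1, 1, 200, 100] - or index and unsqueeze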
st49886 | Hi,
I am new to PyTorch. I have a doubt about converting a data frame with 6 columns and 50,000 rows. I want to form 6 channels with these 6 columns and 100 rows such that it can be given as input to a CNN that takes input with 6 channels. Could any of you please help me to sort this out?
[Screenshot of the data frame] |
st49887 | I assume you would want your data sample to be of the shape - [batch, 6, 100]
So Initialize a NumPy array of that shape
data = numpy.zeros((1, 6, 100)) #batch size equal to 1
and then populate this NumPy array the way you want.
Then convert this Numpy array to a torch tensor using -
torch_tensor = torch.from_numpy(data)
if you want to batch it, you could stack these tensors on top of each other at the 0th dimension making the input of shape batch_size, 6, 100 |
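As a sketch of one way to cut the 50,000 x 6 frame into windows of 100 rows (the DataFrame name df and the non-overlapping windowing are assumptions):

import numpy as np
import torch

values = df.to_numpy(dtype=np.float32)                         # [50000, 6]
values = values[:len(values) - len(values) % 100]              # drop any incomplete window
windows = values.reshape(-1, 100, 6)                           # [500, 100, 6]
batch = torch.from_numpy(windows).permute(0, 2, 1).contiguous()  # [500, 6, 100] -> channels first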
st49888 | Do we have a tutorial for it? It seems jit allows people to see what is inside the model, so I don’t think it helps me.
Thank you for answering. |
st49889 | I am trying to overfit a single batch in order to test whether my network is working as intended. I would have expected that the loss should keep decreasing as long as the learning rate isn’t too high. What I observe, however, is that the loss in fact decreases over time, but it fluctuates strongly. Is that a sign that I have a flaw in my architecture? |
st49890 | Solved by Kushaj in post #4
It should gradually decrease. (some fluctuation is ok, but not strong fluctuation as in your case). If the model is correct the fluctuation might be due to bad hyperparameters (probably lr or momentum). Also, do not use any data augmentation, weight decay, dropout or any other fancy regularization t… |
st49891 | Use small batch size (like 2). Also, this test only tells if the model has enough capacity to learn the data, so if you are able to reach a loss of 0, then it means that you passed the test. |
st49892 | I am trying that. But I can’t reach zero. My question is exactly the following: Should the loss strictly decrease provided a sufficiently small learning rate while overfitting or can it vary? |
st49893 | It should gradually decrease. (some fluctuation is ok, but not strong fluctuation as in your case). If the model is correct the fluctuation might be due to bad hyperparameters (probably lr or momentum). Also, do not use any data augmentation, weight decay, dropout or any other fancy regularization trick. |
st49894 | I can’t believe I overlooked that - I had a random augmentation activated within the DataLoader! Thanks for pointing that out! |
st49895 | I see people do it in mask = ~torch.eye(n_samples, device=sim.device).bool()
What is the ~ for?
Thank you. |
st49896 | The tilde operation is a bitwise negation and should yield the same output as torch.bitwise_not. |
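A quick sketch checking the equivalence:

import torch

mask = torch.eye(3).bool()
print(~mask)                     # True everywhere except the diagonal
print(torch.bitwise_not(mask))   # same result for bool tensors
print(torch.equal(~mask, torch.bitwise_not(mask)))  # True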
st49897 | Hi everyone!
I am a newbie to PyTorch.
I am trying to rewrite my network from Keras to PyTorch. Keras can decrease the loss to 500, but PyTorch gets stuck at 1000.
[keras]
Seq_deepCpf1_Input_SEQ = Input(shape=(34, 4))
Seq_deepCpf1_C1 = Convolution1D(80, 5, activation='relu')(Seq_deepCpf1_Input_SEQ)
Seq_deepCpf1_P1 = AveragePooling1D(2)(Seq_deepCpf1_C1)
Seq_deepCpf1_F = Flatten()(Seq_deepCpf1_P1)
Seq_deepCpf1_DO1 = Dropout(0.3)(Seq_deepCpf1_F)
Seq_deepCpf1_D1 = Dense(80, activation='relu')(Seq_deepCpf1_DO1)
Seq_deepCpf1_DO2 = Dropout(0.3)(Seq_deepCpf1_D1)
Seq_deepCpf1_D2 = Dense(40, activation='relu')(Seq_deepCpf1_DO2)
Seq_deepCpf1_DO3 = Dropout(0.3)(Seq_deepCpf1_D2)
Seq_deepCpf1_D3 = Dense(40, activation='relu')(Seq_deepCpf1_DO3)
Seq_deepCpf1_DO4 = Dropout(0.3)(Seq_deepCpf1_D3)
Seq_deepCpf1_Output = Dense(1, activation='linear')(Seq_deepCpf1_DO4)
Seq_deepCpf1 = Model(inputs=[Seq_deepCpf1_Input_SEQ], outputs=[Seq_deepCpf1_Output])
print(Seq_deepCpf1.summary())
import keras
Seq_deepCpf1.compile(optimizer=keras.optimizers.Adam(lr=0.005, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0),
                     loss='mse')
Seq_deepCpf1.fit(x=SEQ, y=indel_f,epochs=50)
[pytorch]
class Regression(nn.Module):
    def __init__(self):
        super(Regression, self).__init__()
        self.conv1d = nn.Conv1d(4, 80, 5, 1)  # 4 channels in, 80 channels out (30, 80)
        self.relu = nn.ReLU()
        self.avg1d = nn.AvgPool1d(2)  # size of window 2 (15, 80)
        self.flatten = nn.Flatten()
        self.dropout = nn.Dropout(p=0.3)
        self.linear1200_80 = nn.Linear(80 * 15, 80)
        self.linear80_40 = nn.Linear(80, 40)  # (None, 40)
        self.linear40_40 = nn.Linear(40, 40)  # (None, 40)
        self.linear40_1 = nn.Linear(40, 1)  # (None, 40)

    def forward(self, x):
        outconv1d = self.conv1d(x)  # 4 channels in, 80 channels out (30, 80)
        outact = self.relu(outconv1d)
        # Seq_deepCpf1_C1 = Convolution1D(80, 5)(Seq_deepCpf1_Input_SEQ)
        outavg1d = self.avg1d(outact)  # size of window 2 (15, 80)
        # Seq_deepCpf1_P1 = AveragePooling1D(2)(Seq_deepCpf1_C1)
        out_flatten = self.flatten(outavg1d)
        # Seq_deepCpf1_F = Flatten()(Seq_deepCpf1_P1)
        out_dropout = self.dropout(out_flatten)
        # Seq_deepCpf1_DO1 = Dropout(0.3)(Seq_deepCpf1_F)
        out_linear1200_80 = self.linear1200_80(out_dropout)
        out_act_linear1200_80 = self.relu(out_linear1200_80)
        # Seq_deepCpf1_D1 = Dense(80, activation='relu')(Seq_deepCpf1_DO1)
        out_dropout1200_80 = self.dropout(out_act_linear1200_80)
        # Seq_deepCpf1_DO2 = Dropout(0.3)(Seq_deepCpf1_D1)
        out_linear80_40 = self.linear80_40(out_dropout1200_80)
        out_act80_40 = self.relu(out_linear80_40)
        # Seq_deepCpf1_D2 = Dense(40, activation='relu')(Seq_deepCpf1_DO2)
        out_dropout80_40 = self.dropout(out_act80_40)
        # Seq_deepCpf1_DO3 = Dropout(0.3)(Seq_deepCpf1_D2)
        out_linear40_40 = self.linear40_40(out_dropout80_40)
        out_act40_40 = self.relu(out_linear40_40)
        # Seq_deepCpf1_D3 = Dense(40, activation='relu')(Seq_deepCpf1_DO3)
        out_dropout40_40 = self.dropout(out_act40_40)
        # Seq_deepCpf1_DO4 = Dropout(0.3)(Seq_deepCpf1_D3)
        out = self.linear40_1(out_dropout40_40)
        # Seq_deepCpf1_Output = Dense(1, activation='linear')(Seq_deepCpf1_DO4)
        return out

model = Regression().to(device)
loss = nn.MSELoss()  # use MSELoss as the loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)  # use Adam as the optimizer
num_epoch = 50
for epoch in range(num_epoch):
    train_loss = 0.0
    count = int(len(train_x) / batch_size) + 1
    model.train()
    for i, data in enumerate(train_loader):
        optimizer.zero_grad()
        train_pred = model(data[0].to(device=device))
        batch_loss = loss(train_pred, data[1].to(device=device))
        batch_loss.backward()
        # print(str(i))
        optimizer.step()
        # train_acc += np.sum(np.argmax(train_pred.cpu().data.numpy(), axis=1) == data[1].numpy())  # compare with the ground truth to check the accuracy
        train_loss += batch_loss.item()
        j = j + 1
    print("Epoch :", epoch, "train_loss:", train_loss / count)
keras output
Epoch 50/50
1000/14999 [=>…] - ETA: 0s - loss: 514.2870
3000/14999 [=====>…] - ETA: 0s - loss: 533.6077
5000/14999 [=========>…] - ETA: 0s - loss: 529.4184
7000/14999 [=============>…] - ETA: 0s - loss: 523.6750
9000/14999 [=================>…] - ETA: 0s - loss: 517.9706
11000/14999 [=====================>…] - ETA: 0s - loss: 516.3988
13000/14999 [=========================>…] - ETA: 0s - loss: 516.1699
14999/14999 [==============================] - 0s 28us/step - loss: 510.5814
pytorch output
Epoch : 42 train_loss: 1107.6384684244792
Epoch : 43 train_loss: 1124.8985188802083
Epoch : 44 train_loss: 1117.5798095703126
Epoch : 45 train_loss: 1103.8336100260417
Epoch : 46 train_loss: 1100.827498372396
Epoch : 47 train_loss: 1104.8447998046875
Epoch : 48 train_loss: 1101.6757080078125
Epoch : 49 train_loss: 1100.1193359375
code and data are available on GitHub: desertzk/pythondemo
keras code is in DeepCpf1.py
pytorch code is in DeepCpf1_pytorch.py
thanks |
st49898 | Solved by ptrblck in post #2
I can’t find any obvious differences.
Sometimes unwanted broadcasting takes place in the loss calculation if you don’t pass an output and target of the same shape to nn.MSELoss, which should raise a warning but is often overlooked.
Could you double check it in your PyTorch code? |
st49899 | I can’t find any obvious differences.
Sometimes unwanted broadcasting takes place in the loss calculation if you don’t pass an output and target of the same shape to nn.MSELoss, which should raise a warning but is often overlooked.
Could you double check it in your PyTorch code? |
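To illustrate the broadcasting pitfall described above (the shapes are assumed for the example):

import torch
import torch.nn as nn

output = torch.randn(64, 1)           # model output [batch_size, 1]
target = torch.randn(64)              # target [batch_size]
loss = nn.MSELoss()(output, target)   # broadcasts to [64, 64] and raises a UserWarning
loss_fixed = nn.MSELoss()(output, target.unsqueeze(1))  # shapes match: [64, 1]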
st49900 | ptrblck:
Sometimes unwanted broadcasting takes place in the loss calculation if you don’t pass an output and target of the same shape to nn.MSELoss, which should raise a warning but is often overlooked.
Excellent answer. Thank you so much ptrblck. The loss is 500 now. |
st49901 | Hi,
I am trying to add a penalty to my loss function, but it seems that the penalty does not have a backward pass even though all elements of the penalty have been declared as a Variable.
The code that forms my penalty is as follows
def make_fisher_matrix(self, previous_dataset, previous_batch_size, prev_nums):
    print("making_fisher")
    prev_idxs = get_labels_indices(previous_dataset.train_labels, prev_nums)
    loader = DataLoader(previous_dataset, batch_size=previous_batch_size, sampler=torch.utils.data.sampler.SubsetRandomSampler(prev_idxs))
    likelihoods = []
    self.eval()
    # init matrices
    self.fisher_matrix = {n: p.clone().zero_() for n, p in self.named_parameters()}
    for k, (i_data, label) in enumerate(loader):
        data = Variable(i_data)
        label = Variable(label)
        previous_prediction = self(data)
        log_pp = F.log_softmax(previous_prediction, dim=1)  # take a log and softmax
        likelihood = F.nll_loss(log_pp, label)
        likelihood.backward(retain_graph=True)
        # print(likelihood_grad)
        for n, p in self.named_parameters():
            self.fisher_matrix[n] = (p.grad.clone() ** 2) / len(previous_dataset)
    self.prev_parameters = {n: p.clone() for n, p in self.named_parameters()}

def get_ewc_loss(self, lamda, debug=False):
    try:
        losses = Variable(torch.zeros(1))
        for n, p in self.named_parameters():
            p.requires_grad = True
            pp_fisher = Variable(self.fisher_matrix[n])
            pp = Variable(self.prev_parameters[n])
            loss = (pp_fisher * ((p - pp) ** 2)).sum()
            losses += loss
        return (Variable((lamda / 2) * (losses)))
    except:
        return (Variable(torch.zeros(1)))
I use my ‘get_ewc_loss’ function in the following manner:
for batch, (data, label) in enumerate(self.train_loader):
    if self.use_gpu:
        input_data, g_label = Variable(data.cuda()), Variable(label.cuda())
    else:
        input_data, g_label = Variable(data), Variable(label)
    self.opt.zero_grad()
    output_vector = self.model(input_data)
    batch_error = self.criterion(output_vector, g_label)
    ewc_error = self.model.get_ewc_loss(lamda=self.lamda, debug=False)
    #final_error = batch_error + ewc_error
    batch_error.backward()
    ewc_error.backward()
    #final_error = batch_error + ewc_error
    #final_error.backward()
    train_error += final_error.item()
    ewc_t += ewc_error.item()
    self.opt.step()
Now, initially I had final_error = ewc_error + batch_error, but my ewc_error was not reducing at all, thus I thought it seemed likely that ewc_error was not impacting the gradient descent at all. But now when I explicitly use ewc_error.backward() I get the error:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
I thought if I use Variables and incorporate the parameters of the model into my loss penalty, the autograd function should recognise this as a tensor that requires grad? If that is incorrect, any suggestions on how to implement a custom penalty would be greatly appreciated! |
st49902 | I am pretty new to deep learning and the PyTorch API. When I try to build a ResNet50 and train on image attributes with a binary (1 or -1) class, CrossEntropyLoss gives me the nll_loss error "Target -1 is out of bounds". I have followed the tutorial, so my ResNet50 structure is correctly built; I tested it with a random input and it gives me the correct output, which is a tensor of size 2.
Here is my code for training -
def train_model(epoch):
    model_net.train()
    for batch_index, (input, labels) in enumerate(train_loader):
        labels = labels.view(batch_size)
        input, labels = input.to(device), labels.to(device)
        outputs = model_net(input)
        print(input.shape)
        print(outputs.shape)
        print(labels.shape)
        loss = lostFunction(outputs, labels)
        if batch_index % 2 == 0 or batch_index == len(train_loader) - 1:
            print('epoch {} batch {}/{} loss {:.3f}'.format(
                epoch, batch_index, len(train_loader) - 1, loss.item()))
        optimizer.zero_grad()  # Set gradients to zero
        loss.backward()  # From the loss we compute the new gradients
        optimizer.step()
this is the output ->
torch.Size([20, 3, 218, 178])
torch.Size([20, 2])
torch.Size([20])
IndexError Traceback (most recent call last)
in ()
----> 1 train_model(0)
4 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2216 .format(input.size(0), target.size(0)))
2217 if dim == 2:
-> 2218 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2219 elif dim == 4:
2220 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target -1 is out of bounds.
this is the testing data format ->
(tensor([[[-0.6794, -0.7137, -0.8164, …, -1.0048, -1.0048, -0.9705],
[-0.6794, -0.7137, -0.8164, …, -1.0048, -1.0048, -0.9705],
[-0.6794, -0.7137, -0.8164, …, -1.0048, -1.0048, -0.9705],
…,
[ 0.3994, 0.2111, 0.0741, …, 1.6667, 1.6667, 1.4954],
[ 0.2453, 0.2453, 0.2967, …, 1.5639, 1.6838, 1.7694],
[ 0.2624, 0.2624, 0.3652, …, 1.5639, 1.6838, 1.7694]],
[[-0.6176, -0.6527, -0.7752, ..., -1.1604, -1.1604, -1.1253],
[-0.6176, -0.6527, -0.7752, ..., -1.1604, -1.1604, -1.1253],
[-0.6176, -0.6527, -0.7752, ..., -1.1604, -1.1604, -1.1253],
...,
[ 0.2052, 0.0126, -0.4601, ..., 1.8333, 1.8333, 1.6583],
[ 0.0476, 0.0476, -0.2500, ..., 1.7108, 1.8158, 1.9034],
[ 0.0476, 0.0476, -0.2150, ..., 1.7108, 1.8158, 1.9034]],
[[-0.5147, -0.5495, -0.6018, ..., -1.0550, -1.0550, -1.0201],
[-0.5147, -0.5495, -0.6018, ..., -1.0550, -1.0550, -1.0201],
[-0.5147, -0.5495, -0.6018, ..., -1.0550, -1.0550, -0.9853],
...,
[ 0.3219, 0.1302, -0.2881, ..., 2.1868, 2.2217, 2.0474],
[ 0.1476, 0.1476, -0.1138, ..., 2.0648, 2.2217, 2.3088],
[ 0.1476, 0.1476, -0.0615, ..., 2.0648, 2.2217, 2.3088]]]), tensor([-1]))
I really can’t figure out why this happens. I have checked that the training data, label tensor, batch number, and output feature number are all correct. Can someone help me debug this? |
st49903 | Hi,
CrossEntropyLoss expects your labels to be in the range [0, C-1], where C denotes the number of classes - so in your case [0, 1] and not -1 or 1.
PS: If it’s a single-label binary classification, you can also have a look at BinaryCrossEntropy.
Greetings. |
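A possible remapping of the -1/1 labels into the [0, 1] range expected by CrossEntropyLoss (a sketch; the variable names are assumptions):

import torch

labels = torch.tensor([-1, 1, 1, -1])
targets = (labels == 1).long()   # -1 -> 0, 1 -> 1
# or equivalently: targets = ((labels + 1) // 2).long()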
st49904 | Thank you! This was a really silly mistake. Another question: since we output a tensor [value1, value2], which are the probabilities of the binary classes, how do we know which value represents 0 and which represents 1? I am still a bit confused about this. |
st49905 | Hi Chriseven!
flipflop:
since we output a tensor [value1, value2], which are the probabilities of the binary classes, how do we know which value represents 0 and which represents 1?
The output of your model, [value1, value2], means whatever you
trained your model for it to mean.
As I understand it, you have structured your model as a two-class
multi-class classifier. (Your model outputs two values, and you use
CrossEntropyLoss.) Conceptually (although not in implementation)
this is the same as a binary-classification problem. (One output value,
and BCEWithLogitsLoss.)
The following post answers your question, but in the language of a
binary-classification problem:
How to interpret the probability of classes in binary classification? nlp
Hello Shaun!
In short, “class ‘1’” means whatever you trained you model for it
to mean.
To explain this, let me go back to one of your earlier posts:
You talk about x_test and y_test (and y_pred). I assume
that y_test is a vector of length the number of test samples,
with each value in y_test being the number 0, meaning that
this sample is in class “0”, or the number 1, meaning class “1”.
(And x_test is the input data whose classes you are trying
to predict.)
You don’t mention it, b…
Best.
K. Frank |
st49906 | Thank you for your answer, Frank, it helps! BTW, do you have any experience with Google Colab CUDA out of memory issues? I am trying to train on around 150,000 images (7 KB each); it seems I don’t have enough resources on the GPU, but training with the CPU is really, really slow. |
st49907 | So I have a task where the net outputs different variables. Some outputs are bounded like [0,1], [0, +inf] and [-inf, +inf]. I want the network to handle these restrictions. My thought was to do
def forward(self, x):
    x = self.net(x)
    # print(x.shape), (bs, 5)
    # say idx 0 is the variable with [0,1] bound so
    x[:, 0] = nn.Sigmoid(x[:, 0])
    # idx 1-3 is [0, +inf] bound so
    x[:, 1:4] = nn.ReLU(x[:, 1:4])
    # idx 5 is [-inf, +inf] so just left as is.
    return x
Is this a valid approach? The model will be saved with jit.trace and served using the C++ API.
Off topic: what do you think of training with only MSE loss? Sure, you can combine cross entropy loss for the 0-1 vars and MSE for the others, but I wonder if it will have a major effect? |
st49908 | Using different activation functions sounds reasonable. However, I would recommend checking whether your code raises errors for invalid inplace operations before porting it to libtorch.
If so, you might assign the results to temporary tensors and concatenate them afterwards. |
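A sketch of the concatenation approach (the output layout [batch_size, 5] is taken from the question; using the functional torch.sigmoid/torch.relu instead of the module constructors is an assumption):

import torch

x = torch.randn(8, 5)                  # assumed net output [batch_size, 5]
bounded01 = torch.sigmoid(x[:, 0:1])   # [0, 1]
nonneg = torch.relu(x[:, 1:4])         # [0, +inf]
unbounded = x[:, 4:5]                  # [-inf, +inf]
out = torch.cat([bounded01, nonneg, unbounded], dim=1)  # avoids inplace writes into x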
st49909 | I think transforms are still not completely ported and while some transformations already work on tensors instead of PIL.Images, I don’t think Compose was ported yet.
This PR seems related and I’m sure contributions are welcome in case you would be interested in porting some functions.
CC @fmassa to correct me in case I’m missing something. |
st49910 | I was reading the code of Mask R-CNN to see how they fix their BN parameters. I notice that they use self.register_buffer to create the weight and bias, while, in the PyTorch BN definition, self.register_parameter is used when affine=True. Could I simply think that buffer and parameter have everything in common except that a buffer will skip the operations to compute its grad and update its values?
By the way, what is the difference between directly defining an nn.Parameter in the module and using register_parameter? |
st49911 | Solved by ptrblck in post #2
Yes, you are correct in your assumption. If you have parameters in your model, which should be saved and restored in the state_dict, but not trained by the optimizer, you should register them as buffers.
Buffers won’t be returned in model.parameters(), so that the optimizer won’t have a chance to u… |
st49912 | Yes, you are correct in your assumption. If you have parameters in your model, which should be saved and restored in the state_dict, but not trained by the optimizer, you should register them as buffers.
Buffers won’t be returned in model.parameters(), so that the optimizer won’t have a chance to update them.
Both approaches work the same regarding training etc.
There are some differences in the function calls however. Using register_parameter you have to pass the name as a string, which can make the creation of a range of parameters convenient. Besides that I think it’s just coding style which one you prefer. |
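A small sketch contrasting the registration styles mentioned above (the module and tensor names are made up for illustration):

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(3))                      # plain attribute assignment
        self.register_parameter('bias', nn.Parameter(torch.zeros(3)))   # name passed as a string
        self.register_buffer('running_stat', torch.zeros(3))            # saved in state_dict, not trained

m = MyModule()
print([name for name, _ in m.named_parameters()])  # ['weight', 'bias']
print(list(m.state_dict().keys()))                 # also includes 'running_stat'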
st49913 | If I have some parameters that I don’t want to be trained, can I just add them as self.some_params inside the nn.Module to preserve state? Does register_buffer do anything special in that case as compared to just storing it inside self? |
st49914 | @ pechyonkin
ptrblck:
If you have parameters in your model, which should be saved and restored in the state_dict , but not trained by the optimizer, you should register them as buffers.
If your self.some_params are nn.Parameter objects, then you don’t have to worry about this. If they’re tensors, then they won’t be in the state_dict (unless registered as buffer). |
st49915 | What are the downsides of not using a buffer? I am currently using self.some_param inside nn.Module to keep a tensor that keeps track of running average statistics of activations. I don’t need it for backprop, only to make decisions during runtime. I want to learn more about why my approach is not an optimal one. If you could explain or give some readings, that’d be great. |
st49916 | I am sorry if this is a stupid question, but I am not sure if I want that. I checked this, but I still don’t see why I would need that. Would I need buffers if I want to save the model later? Are there any other reasons I would like to use state_dict rather than just assigning to self? |
st49917 | As @pierrecurie explained, one reason to register the tensor as a buffer is to be able to serialize the model and restore all internal states.
Another one is that all buffers and parameters will be pushed to the device, if called on the parent model:
class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.my_tensor = torch.randn(1)
        self.register_buffer('my_buffer', torch.randn(1))
        self.my_param = nn.Parameter(torch.randn(1))

    def forward(self, x):
        return x

model = MyModel()
print(model.my_tensor)
> tensor([0.9329])
print(model.state_dict())
> OrderedDict([('my_param', tensor([-0.2471])), ('my_buffer', tensor([1.2112]))])
model.cuda()
print(model.my_tensor)
> tensor([0.9329])
print(model.state_dict())
> OrderedDict([('my_param', tensor([-0.2471], device='cuda:0')), ('my_buffer', tensor([1.2112], device='cuda:0'))])
As you can see, model.my_tensor is still on the CPU, where it was created, while all parameters and buffers were pushed to the GPU after calling model.cuda(). |
st49918 | Thanks for clarification! Now it makes total sense. I will actually use buffer since I am going to use GPU at some point for the model I am building. |
st49919 | @ptrblck probably another dumb question, but why wouldn’t I just use nn.Parameter for both my_tensor and my_param and just state ‘requires_grad=False’ for the first? How would that be different to the example in your post? |
st49920 | I think there wouldn’t be a difference regarding the model training, gradient flow etc., so you could probably use this approach.
However, it might be confusing to other users who are using your code to see some “buffers” in model.parameters().
Also, you would pass these buffers to the optimizer, if you just pass all model.parameters().
Again, this won’t mess with your training, but the optimizer will unnecessarily have to skip these buffers in its step() method.
I would describe it as a “clean” code style to separate buffers and parameters. |
st49921 | Ah, thanks. An example where I find this distinction difficult is in the context of fixed positional encodings in the Transformer model. Typically I see implementations where the fixed positional encodings are registered as buffers but I’d consider these tensors as non-learnable parameters (that should show up in the list of model parameters), especially when comparing between methods that don’t rely on such injection of fixed tensors.
Re. your last remark, I guess this should do the trick, but from that thread I understand it is poor coding practice.
So in general:
buffers = ‘fixed tensors / non-learnable parameters / stuff that does not require gradient’
parameters = ‘learnable parameters, requires gradient’ |
st49922 | Sort of hijacking the thread, but I am struggling with implementing a capsule net: it needs some non-trainable variables that I don’t want in the state_dict, since those are just computed statistics.
The problem is that those variables are created in the model code with something like torch.zeros(b, h, w).cuda().
But this is ugly, and if I use torch.zeros(b, h, w), these variables will not be sent to the GPU when we call model.to(device).
Please let me know if there is a better way to construct them. |
st49923 | Could you describe the usage of these tensors a bit?
I assume they are not defining the model state, as you don’t want to have them in the state_dict, which means these tensors are independent of the model?
Could you create these tensors then during runtime, e.g. by using the device attribute of a parameter or buffer? |
st49924 | Do you mean something like
independent_tensor = torch.zeros(3, 3).to(feat_map.device)
yeah, it’s a nicer workaround. Thanks.
But it would be better if there were a way to do this without setting the device in the model code, so the whole model can be sent to the GPU or CPU as we call model.to(device). |
st49925 | model.to() transfers all “states” to the specified device.
However, your use case seems as if the mentioned tensors should not be in the state_dict, which seems like a special use case.
Could you therefore explain the use case a bit, i.e.:
what are these tensors used for
are they specific to the model
how do you create them (model dependent or not?) |
st49926 | Sorry about the vagueness.
In the example of the capsule net:
an example of these tensors is in this capsule net implementation
these tensors are used for computing coefficients assigned to feature maps; the feature maps produce these coefficients under torch.no_grad().
these tensors have nothing to do with training or learning, just some values computed by a certain procedure (dynamic routing in the capsule net).
It can simply be seen as computing the cosine similarity of certain layers’ feature maps.
And as you mentioned:
model.to() transfers all “states” to the specified device.
It seems model.to() only cares about the “states”. Maybe some_tensor.to(feature_map.device) is the best we can get. |
st49927 | You could overwrite the to or apply methods for your module to include transferring that specific tensor. This way you would not have to pass the device to any additional parts of your module. |
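One way this could look, assuming the tensor is stored as a plain attribute (a sketch; _apply is an internal nn.Module method, so this relies on implementation details that can change between releases):

import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.extra = torch.zeros(3, 3)   # plain tensor, intentionally not a buffer/parameter

    def _apply(self, fn, *args, **kwargs):
        super()._apply(fn, *args, **kwargs)
        self.extra = fn(self.extra)      # move/cast the plain tensor alongside the module
        return self

m = MyModule()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
m.to(device)                             # m.extra now follows the module's device
print(m.extra.device)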
st49928 | Hi, one more question:
I have a huge tensor (700MB, precomputed, requires_grad=False) which is used for a tensor multiplication somewhere in a Module (as shown in the snippet).
When training the model with multiple GPUs, I need to push it to all GPUs. The easiest way would be using register_buffer in a module. However, this means the state_dict would be larger than 700MB (definitely not a good idea). So I was wondering what the best way is to push such a large tensor to all GPUs?
BTW, if I simply use tensor.to(device), is the tensor going to be pushed to all GPUs or only the default one? (Had a test; it seems like it is on the default GPU, not all GPUs.)
Thanks in advance!
class NewModule(nn.Module):
    def __init__(self, pre_matrix):
        super(NewModule, self).__init__()
        # pre_matrix: NxP, of size 700MB, requires_grad=False
        self.pre_matrix = pre_matrix
        self.pre_matrix.requires_grad = False
        # self.register_buffer('pre_matrix', pre_matrix)  ### this means the state_dict is larger than 700MB

    def forward(self, input):
        # input: MxN, on multiple gpus
        # output: MxP, on multiple gpus
        out = input @ self.pre_matrix
        return out |
st49929 | You could still use register_buffer and set persistent to False, which won’t add this buffer to the state_dict as described in the docs. |
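A brief sketch of the non-persistent buffer suggestion (the persistent argument exists in register_buffer since roughly PyTorch 1.6; the shapes are placeholders):

import torch
import torch.nn as nn

class NewModule(nn.Module):
    def __init__(self, pre_matrix):
        super().__init__()
        # moved by .to()/.cuda() and replicated by data-parallel wrappers, but excluded from the state_dict
        self.register_buffer('pre_matrix', pre_matrix, persistent=False)

    def forward(self, input):
        return input @ self.pre_matrix

m = NewModule(torch.randn(4, 8))
print(list(m.state_dict().keys()))   # [] - the buffer is not serialized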
st49930 | Hello, I have been trying to add Opacus to my model’s optimizer. I have run it on Colab. At training time I run into this error.
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle)
Could someone please guide me on this.
Thanks!
[Screenshot of the error traceback] |
st49931 | This error might be thrown if you are running out of memory and cublas isn’t able to create its handle.
Could you check the memory usage via nvidia-smi and check if you are close to the device limit?
If that’s the case, try to lower the batch size and rerun the code. |
st49932 | WINDOWS10 VS2017 build error Q:\pytorch1\pytorch\aten\src\ATen/core/boxing/impl/boxing.h(150): error C2210: “_Ty”: 包扩展不能被用作别名模板中非打包参数的自变量 (编译源文件 Q:
pytorch1\pytorch\torch\csrc\jit\frontend\schema_type_parser.cpp) [Q:\pytorch1\pytorch\build\caffe2\torch_cpu.vcxproj] |
st49933 | Please use an online translation service, so that more users can help you.
From Google translate:
WINDOWS10 VS2017 build error Q:\pytorch1\pytorch\aten\src\ATen/core/boxing/impl/boxing.h(150): error C2210: “_Ty”: Package extension cannot be used as non-packaged parameter in alias template Argument (compiled source file Q:
pytorch1\pytorch\torch\csrc\jit\frontend\schema_type_parser.cpp) [Q:\pytorch1\pytorch\build\caffe2\torch_cpu.vcxproj] |
st49934 | I have to remove filters from layers.
VGG16 model, here are two layers:
model = models.vgg16(pretrained=True)
print(model)
print(model.features[2].weight.shape) # features[2].weight has 64 filters
torch.Size([64, 64, 3, 3])
print(model.features[5].weight.shape) # features[5].weight has 128 filters
torch.Size([128, 64, 3, 3])
Is there any way (function or method) to remove filter#53 from model.features[2].weight and filter#109 from features[5].weight.
Also:
How to initialize model.features[5].weight of VGG16 with custom filter weights of same shape.
model = models.vgg16(pretrained=True)
print(model)
print(model.features[5].weight.shape)
torch.Size([128, 64, 3, 3])
Thanks |
st49935 | You can remove features by doing torch.cat([a[:53], a[54:]]).
For initialization you can directly operate on model.features[5].weight tensor or pass the tensor to some method of torch.nn.init to initialize it. |
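A sketch of how the suggestion could be applied to the VGG16 layers from the question. Note that removing output filter #53 of features[2] also requires removing the matching input channel of features[5] and the corresponding bias entry; building new, smaller Conv2d layers to hold the result is left out, and my_filters is a placeholder tensor:

import torch
import torchvision.models as models

model = models.vgg16(pretrained=True)
with torch.no_grad():
    w2, b2 = model.features[2].weight, model.features[2].bias  # [64, 64, 3, 3], [64]
    w5 = model.features[5].weight                               # [128, 64, 3, 3]

    # drop output filter #53 of features[2] ...
    new_w2 = torch.cat([w2[:53], w2[54:]])
    new_b2 = torch.cat([b2[:53], b2[54:]])
    # ... which also removes the matching input channel of features[5]
    new_w5 = torch.cat([w5[:, :53], w5[:, 54:]], dim=1)
    # drop output filter #109 of features[5] (features[7]'s input would need the same treatment)
    new_w5 = torch.cat([new_w5[:109], new_w5[110:]])
    # these tensors would then be copied into new Conv2d layers with 63 / 127 filters

    # custom initialization of features[5] with a tensor of the same shape
    my_filters = torch.randn_like(model.features[5].weight)
    model.features[5].weight.copy_(my_filters)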
st49936 | I want to do torch.bmm for tensor[12000, 1200, 1] and tensor[12000, 1, 1920], so I get tensor[12000, 1200, 1920], then I want to do torch.sum( , dim = 0) to it, finally I get tensor[1200, 1920].
But the process is very memory-consuming, How can I do it without for loop? |
st49937 | Solved by googlebot in post #3
that’s equivalent to a single matrix multiplication: 1200x12000 @ 12000x1920 = 1200,1920 |
st49938 | that’s equivalent to a single matrix multiplication: 1200x12000 @ 12000x1920 = 1200,1920 |
st49939 | Not sure if I understood it correctly, but shouldn’t it be possible to convolve a 1-dimensional input? I have 4096 datasets with 45 floats each.
Is convolution on such an input even possible, and does it make sense to use convolution?
If yes, how do I set this up?
If not, how would you approach this problem? |
st49940 | You can use nn.Conv1d to apply a convolution on a time signal.
The input shape should be [batch_size, channels, sequence_length].
Based on your description, it seems you are dealing with 4096 samples, each containing 45 time steps and a single channel? |
st49941 | If I reshape to [4096, 1, 27] (I removed 18 input values) I get this error:
RuntimeError: Given groups=1, weight of size [64, 27, 2], expected input[4096, 1, 27] to have 27 channels, but got 1 channels instead
when changing to [4096, 27, 1] it goes crazy like RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM
Probably the network isn’t set up correctly.
I think I didn’t give enough information:
I have 3 3x3 pixel blocks, block 1/2 are different in time but block 3 is the same time as block 2.
Currently they are all flat shaped [4096, 27].
What would a single valid convolution layer look like, and what’s the best way to shape this data? |
st49942 | What are your Conv1d arguments?
FATTOMCAT:
I have 3 3x3 pixel blocks, block 1/2 are different in time but block 3 is the same time as block 2.
Currently they are all flat shaped [4096, 27].
I am not sure I understand here, but if you mean that you want to convolve over 3 time steps and your input data is [4096, 1, 27], this should work:
>> output_channels = 1
>> t = torch.rand(4096, 1, 27)
>> conv = torch.nn.Conv1d(1, output_channels, kernel_size=3, padding=1)
>> conv
Conv1d(1, 1, kernel_size=(3,), stride=(1,), padding=(1,))
>> conv(t).shape
torch.Size([4096, 1, 27]) |
st49943 | Thank you, it somehow works.
I still don’t really understand the convolution/input thing, but at least no errors get raised now. |
st49944 | Sir, I have a data set of 2968 rows and 100 columns and I can’t pass the data to a CNN. Please help me to prepare my data for passing, and also explain the structure of 1d convolutional layers. |
st49945 | Hello @ptrblck,
Taking advantage of your answer, I would like to know how to apply a CNN for text processing.
Imagine that I have the pre-processed text (tokenized + PAD):
input = torch.tensor(
[
[1, 2, 0, 0],
[3, 4, 5, 6],
[7, 0, 0, 0]
]
)
What would the Conv1D layer look like for this scenario? I mean, how would the in_channels, out_channels, and kernel_size parameters be configured?
Thanks in advance |
st49946 | It depends a bit how you would like to process this input.
Currently your input has a shape of [3, 4], which is invalid for nn.Conv1d as well as nn.Conv2d.
If you want to use these two dimensions as the “spatial size”, i.e. similar to an input image, you would have to unsqueeze the batch and channel dimensions as:
input = input[None, None]
print(input.shape)
> torch.Size([1, 1, 3, 4])
Now you could use an nn.Conv2d layer with in_channels=1 and a kernel_size, which would have the same size or which would be smaller than the padded input of 3x4.
Note that you could also use one dimension as the channel dimension, which would then change the conv layer as well as the unsqueezing, so let me know what you would like to achieve. |
st49947 | The tensor:
input = torch.tensor(
[
[1, 2, 0, 0],
[3, 4, 5, 6],
[7, 0, 0, 0]
]
)
is a batch of 3 preprocessed sentences which I must represent as dense vectors. For instance, [1, 2, 0, 0] represents the token ids (and pad tokens) for the first sentence. Now I want to do something like this:
[Figure: CNN-for-text architecture]
for each sentence in the batch.
I am using PyTorch Lightning (which helps a lot) but I am completely confused about how a CNN can be used for text representation.
I think that before passing the input through a convolution block, I could go through an embedding layer, which would produce a tensor of shape [3, 4, 768] (batch_size, sentence_size, representation_size). |
st49948 | If the input represents word indices then an embedding layer sounds reasonable.
I don’t fully understand the figure and how the convolution should be applied.
Are 3 sets of filters with different sizes used? If so, you would need three conv layers or would need to pad some filters. |
st49949 | Aldebaran:
representation_size
With this new image I think my objective becomes clearer:
[Figure: convolution and max-pooling steps over the embedded sentence]
The filters/kernels are like a sliding window of different sizes. In the previous image, there are 2 filters of shape [2, 5], 2 filters of shape [3, 5], and other 2 filters of shape [4, 5].
For now, the model has only one embedding layer:
# batch of 3 sentences
input = torch.tensor(
[
[1, 2, 0, 0], # tokenized sentence 1
[3, 4, 5, 6], # tokenized sentence 2
[7, 0, 0, 0] # tokenized sentence 3
]
)
embedding_layer = torch.nn.Embedding(num_embeddings = 8, # vocabulary size
embedding_dim=5, # representation size
)
emb_out = embedding_layer(input) # torch.Size([3, 4, 5]) (batch_size, sentence_size, representation_size)
conv = torch.nn.Conv1d(in_channels=?,out_channels=?, kernel_size=?)
and, what I need to know is then, how to pass the embedding layer output into the convolutional layer as shown in the figure above.
Thanks in advance. |
st49950 | Thanks for the update.
In that case you would need to unsqueeze the channel dimension via:
emb_out = emb_out.unsqueeze(1)
and I would use 3 different conv layers in the first step via:
conv1 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=(2, 5))
conv2 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=(3, 5))
conv3 = nn.Conv2d(in_channels=1, out_channels=2, kernel_size=(4, 5))
After applying these conv layers, you could pass the outputs through the pooling layers and concatenate the the final features as given in the figure.
Note that your first figure indicates that no padding is used and thus the output activation would be smaller, while your current 3x3 filter approach is using a padding value of 1, since the feature tensor has the same number of rows. |
st49951 | Thank you @ptrblck,
The final implemented model is doing great. Check it out:
class CNNSentenceEncoder(nn.Module):
    """Represents a text as a dense vector."""
    def __init__(self, vocabulary_size, representation_size, out_channels, kernel_sizes, sentence_length):
        """
        :param vocabulary_size: number of tokens in corpus.
        :param representation_size: dense vector length.
        :param out_channels: number of kernels.
        :param kernel_sizes: list of kernel sizes ([2,3,5], for instance)
        :param sentence_length: max number of tokens for sentences.
        """
        super(CNNSentenceEncoder, self).__init__()
        # embedding layer
        self.embedding = nn.Embedding(
            num_embeddings=vocabulary_size,
            embedding_dim=representation_size
        )
        # convolutional layers
        self.convs = nn.ModuleList([
            self.get_conv_layer(representation_size, out_channels, kernel_size, sentence_length)
            for kernel_size in kernel_sizes])
        self.linear = nn.Linear(len(kernel_sizes) * out_channels, representation_size)

    def get_conv_layer(self, representation_size, out_channels, kernel_size, sentence_length):
        """
        Defines a convolutional block.
        """
        return nn.Sequential(
            nn.Conv1d(in_channels=representation_size, out_channels=out_channels, kernel_size=kernel_size),
            nn.ReLU(),
            nn.MaxPool1d(sentence_length - kernel_size + 1, stride=1),
            nn.BatchNorm1d(out_channels)
        )

    def forward(self, x):
        r1 = self.embedding(x)
        r1 = torch.transpose(r1, 2, 1)
        conv_outputs = []
        for conv in self.convs:
            conv_outputs.append(conv(r1))
        # concatenates the outputs from each convolutional layer
        cat = torch.cat(conv_outputs, 1)
        # flatten
        flatten_cat = torch.flatten(cat, start_dim=1)
        return self.linear(flatten_cat) |
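A possible usage check for the class above (the hyperparameter values are assumptions; only the shapes follow the earlier posts, and torch is assumed to be imported):

encoder = CNNSentenceEncoder(vocabulary_size=8, representation_size=5,
                             out_channels=2, kernel_sizes=[2, 3, 4],
                             sentence_length=4)
batch = torch.tensor([[1, 2, 0, 0],
                      [3, 4, 5, 6],
                      [7, 0, 0, 0]])
print(encoder(batch).shape)   # torch.Size([3, 5]) - one dense vector per sentence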
st49952 | I have a test_dataloader and it contains 14000 tensors. My test images are 28000 and I am taking batch = 2. I have a pre-trained model and now I am trying to test my existing model.
However, my testing.py output is showing negative floating point numbers, but it should be positive integer numbers. Full code: GitHub link.
Could you tell me what I have to change?
final_output = []
for i, data in enumerate(test_data):
    data = data.unsqueeze(1)
    output = model(data).cpu().detach().numpy()
    data = None
    final_output.append(output)
result = np.concatenate(final_output)
print(result)
The output is like this:
[[-4.397916 0.7076113 2.1967683 ... 0.06060949 -2.8013513
-8.800405 ]
[-3.533296 -3.1798646 -5.6163416 ... -4.7265635 -1.8589627
0.5682605 ]
[-1.8575612 3.9310014 -4.122321 ... -1.2687542 -3.5150855
-5.7542324 ]
...
[-8.762509 -8.240637 -2.7152536 ... -1.5188062 -5.932935
2.6340218 ]
[-1.9312052 -2.675097 -2.2223709 ... -1.4572031 -8.078956
-4.047556 ]
[-1.9931098 -2.840486 -3.620531 ... -2.5536153 -1.735633
-2.317892 ]]
Any kind of suggestion is appreciated. |
st49953 | Solved by akib62 in post #4
Solved the problem using LongTensor |
st49954 | I think you meant to do data = data.unsqueeze(0). Also, it might be that softmax is not included in the pretrained model (as softmax is usually combined with the loss function, so we have to manually add softmax at test time). |
st49955 | @Kushaj after giving 0 I am getting the error
RuntimeError: Given groups=1, weight of size [32, 1, 3, 3], expected input[1, 2, 28, 28] to have 1 channels, but got 2 channels instead |
st49956 | Hey guys, I got an error during the training of the model. Can you help me?
Here is the notebook. |
st49957 | Your current notebook shows a KeyboardInterrupt, which seems to be created by the user.
Could you post the error you are seeing here? |
st49958 | When I use mixed precision training, the GPU’s utilization has reduced a lot, like below:
Thu Oct 8 23:42:03 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:04:00.0 Off | N/A |
| 51% 53C P2 116W / 250W | 8990MiB / 11019MiB | 78% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:05:00.0 Off | N/A |
| 58% 56C P2 201W / 250W | 8990MiB / 11019MiB | 80% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 208... Off | 00000000:08:00.0 Off | N/A |
| 58% 56C P2 151W / 250W | 8990MiB / 11019MiB | 79% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 208... Off | 00000000:09:00.0 Off | N/A |
| 58% 56C P2 108W / 250W | 8990MiB / 11019MiB | 75% Default |
+-------------------------------+----------------------+----------------------+
| 4 GeForce RTX 208... Off | 00000000:84:00.0 Off | N/A |
| 59% 56C P2 150W / 250W | 8990MiB / 11019MiB | 77% Default |
+-------------------------------+----------------------+----------------------+
| 5 GeForce RTX 208... Off | 00000000:85:00.0 Off | N/A |
| 57% 56C P2 102W / 250W | 8990MiB / 11019MiB | 81% Default |
+-------------------------------+----------------------+----------------------+
| 6 GeForce RTX 208... Off | 00000000:88:00.0 Off | N/A |
| 53% 54C P2 163W / 250W | 8990MiB / 11019MiB | 76% Default |
+-------------------------------+----------------------+----------------------+
| 7 GeForce RTX 208... Off | 00000000:89:00.0 Off | N/A |
| 61% 57C P2 141W / 250W | 8990MiB / 11019MiB | 72% Default |
+-------------------------------+----------------------+----------------------+
But with fp32 it was nearly 100%. What is going wrong? My environment is:
In [1]: import torch
In [2]: torch.__version__
Out[2]: '1.6.0'
In [3]: torch.version.cuda
Out[3]: '10.2'
In [4]: torch.backends.cudnn.version()
Out[4]: 7605 |
st49959 | The GPU utilization doesn’t correspond to the speed, so did you profile the code and see a speedup or slowdown?
E.g. if mixed-precision training is giving you a speedup e.g. by using TensorCores, your code might now suffer (more) from a potential data loading bottleneck, which would reduce the GPU utilization. |
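A rough timing sketch of the profiling suggestion (model, data, and the use of torch.cuda.amp.autocast are assumptions; the synchronize calls make the measurement meaningful on the GPU):

import time
import torch

torch.cuda.synchronize()
start = time.perf_counter()
for _ in range(100):
    with torch.cuda.amp.autocast():
        out = model(data)
torch.cuda.synchronize()
print('mean forward time:', (time.perf_counter() - start) / 100)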
st49960 | Hello all,
I was trying to include conv3D layers in my models but upon running it, it said
terminate called after throwing an instance of 'c10::Error'
what(): Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend. 'aten::slow_conv3d_forward' is only available for these backends: [CPU, BackendSelect, Named, Autograd, Tracer, Autocast, Batched].
The code is as follows -
#include <iostream>
#include <torch/torch.h>
#include "PredictFrame.h"
using namespace torch;
using namespace std;
int main() {
/*
PredictFrame predictFrame;
predictFrame.to(torch::kCUDA);
auto input = torch::rand({1, 3, 5, 900, 900});
input.to(torch::kCUDA);
auto out = predictFrame.forward(input);
cout<<out.sizes();
*/
cout<<"starting";
cout<<std::boolalpha<<torch::cuda::is_available();
auto conv1 = torch::nn::Conv3d(torch::nn::Conv3dOptions(3, 64, {3, 1,1})
.stride({1 ,1, 1}).padding({1, 0, 0}));
conv1->to(torch::kCUDA);
auto input = torch::rand({1, 3 ,5, 10, 10});
input.to(torch::kCUDA);
auto output = conv1->forward(input);
cout<<output;
}
and the complete error encountered upon running it is as follows -
/home/atharva/CLionProjects/SuperTux_RL/SuperTux_RL
terminate called after throwing an instance of 'c10::Error'
what(): Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDA' backend. 'aten::slow_conv3d_forward' is only available for these backends: [CPU, BackendSelect, Named, Autograd, Tracer, Autocast, Batched].
CPU: registered at aten/src/ATen/CPUType.cpp:1596 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Named: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Autograd: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:7155 [kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_4.cpp:8208 [kernel]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:371 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:229 [backend fallback]
Exception raised from reportError at ../aten/src/ATen/core/dispatch/OperatorEntry.cpp:261 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x69 (0x7f1fd082fe99 in /home/atharva/libtorch/libtorch/lib/libc10.so)
frame #1: c10::impl::OperatorEntry::reportError(c10::DispatchKey) const + 0x3ac (0x7f1fc0bf9f2c in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #2: <unknown function> + 0x1478936 (0x7f1fc1508936 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #3: at::slow_conv3d_forward(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>) + 0x139 (0x7f1fc14287e9 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #4: <unknown function> + 0x294fae8 (0x7f1fc29dfae8 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0x295031d (0x7f1fc29e031d in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x12933c2 (0x7f1fc13233c2 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #7: <unknown function> + 0x14788e7 (0x7f1fc15088e7 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #8: at::slow_conv3d_forward(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>) + 0x139 (0x7f1fc14287e9 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #9: at::native::slow_conv3d(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>) + 0x58 (0x7f1fc0d4c608 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #10: <unknown function> + 0x14cb6f0 (0x7f1fc155b6f0 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #11: <unknown function> + 0x15098b2 (0x7f1fc15998b2 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #12: <unknown function> + 0x14f9d02 (0x7f1fc1589d02 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #13: <unknown function> + 0x1477d8e (0x7f1fc1507d8e in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #14: at::slow_conv3d(at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>) + 0x139 (0x7f1fc1427f89 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #15: at::native::_convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool) + 0x4505 (0x7f1fc0d3b865 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #16: <unknown function> + 0x14adc77 (0x7f1fc153dc77 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #17: <unknown function> + 0x1507f4d (0x7f1fc1597f4d in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #18: <unknown function> + 0xb0b08c (0x7f1fc0b9b08c in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #19: <unknown function> + 0x13f9a54 (0x7f1fc1489a54 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #20: at::_convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long, bool, bool, bool) + 0x1fe (0x7f1fc1399e9e in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #21: at::native::convolution(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) + 0xd4 (0x7f1fc0d332a4 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #22: <unknown function> + 0x14ad900 (0x7f1fc153d900 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #23: <unknown function> + 0x1507db5 (0x7f1fc1597db5 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #24: <unknown function> + 0xb0b2b6 (0x7f1fc0b9b2b6 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #25: <unknown function> + 0x13f8cc8 (0x7f1fc1488cc8 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #26: at::convolution(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, bool, c10::ArrayRef<long>, long) + 0x194 (0x7f1fc1399454 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #27: at::native::conv3d(at::Tensor const&, at::Tensor const&, at::Tensor const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long) + 0x78 (0x7f1fc0d32f88 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #28: <unknown function> + 0x14ae1ef (0x7f1fc153e1ef in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #29: <unknown function> + 0x15083c8 (0x7f1fc15983c8 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #30: <unknown function> + 0xb0b176 (0x7f1fc0b9b176 in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #31: <unknown function> + 0x13faeef (0x7f1fc148aeef in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #32: at::conv3d(at::Tensor const&, at::Tensor const&, c10::optional<at::Tensor> const&, c10::ArrayRef<long>, c10::ArrayRef<long>, c10::ArrayRef<long>, long) + 0x13c (0x7f1fc139ae8c in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #33: torch::nn::Conv3dImpl::forward(at::Tensor const&) + 0x10f (0x7f1fc326c4af in /home/atharva/libtorch/libtorch/lib/libtorch_cpu.so)
frame #34: <unknown function> + 0xc233 (0x55c86651d233 in /home/atharva/CLionProjects/SuperTux_RL/SuperTux_RL)
frame #35: __libc_start_main + 0xf3 (0x7f1f80bd40b3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #36: <unknown function> + 0xbdee (0x55c86651cdee in /home/atharva/CLionProjects/SuperTux_RL/SuperTux_RL)
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
I was wondering if there is an installation issue with my system, or whether conv3d layers simply do not support running on the GPU?
Edit - torch::cuda::is_available() returns true |
st49961 | Solved by a_d in post #2 |
st49962 | The answer is quite simple: to() returns a new tensor rather than modifying the variable in place, so you have to reassign it. For example
input = input.to(torch::kCUDA) and it will work. |
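For reference, the Python tensor API behaves the same way: to() is out-of-place, so its result has to be reassigned. A minimal sketch (assumes a CUDA device is available):

import torch

x = torch.randn(4, 3)

x.to("cuda")           # wrong: to() returns a new tensor, x is still on the CPU
print(x.device)        # cpu

x = x.to("cuda")       # right: reassign the result
print(x.device)        # cuda:0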
st49963 | Hi, I tried to replace the ResNet50 architecture with ResNet32 in a model to check its accuracy, but it generated the following error. Any help in this regard would be appreciated.
size mismatch, m1: [256 x 10], m2: [512 x 512] at /opt/conda/conda-bld/pytorch_1565272271120/work/aten/src/THC/generic/THCTensorMathBlas.cu:273 |
st49964 | What are your input size and the other tensor shapes? An easy way to debug this is to add print(tensor.shape) statements inside the model's forward() method. |
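For example, a minimal sketch of where such print statements can go (the small model below is hypothetical, not the ResNet from the question):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(16, 10)

    def forward(self, x):
        x = self.features(x)
        print("after features:", x.shape)  # reveals the size the linear layer must accept
        x = x.view(x.size(0), -1)
        print("after flatten:", x.shape)
        return self.fc(x)

model = Net()
model(torch.randn(2, 3, 32, 32))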
st49965 | My transformer network gives me a tensor of size 100x128x250 (100 is the number of words, 128 is the batch size, 250 is the feature size). In forward() I want to reduce 100x128x250 -> 128x250 by averaging over the 100 words, and then map 128x250 -> 128x3 (which I know is Linear(250, 3)). How do I do the first step, 100x128x250 -> 128x250? |
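A minimal sketch of the reduction described above, assuming a plain mean over the word dimension is what is wanted:

import torch
import torch.nn as nn

words, batch, feat = 100, 128, 250
x = torch.randn(words, batch, feat)   # transformer output: (words, batch, features)

pooled = x.mean(dim=0)                # average over the 100 words -> (128, 250)
proj = nn.Linear(feat, 3)             # (128, 250) -> (128, 3)
out = proj(pooled)
print(pooled.shape, out.shape)        # torch.Size([128, 250]) torch.Size([128, 3])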
st49966 | import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable

class NeuralNet(nn.Module):
    def __init__(self):
        super(NeuralNet, self).__init__()
        self.flatten = nn.Flatten(1, -1)
        self.layer1 = nn.Linear(28*28, 128)
        self.drop = nn.Dropout(0.2)
        self.layer2 = nn.Linear(128, 10)
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax()

    def forward(self, x):
        x = self.flatten(x)
        x = self.relu(self.layer1(x))
        x = self.drop(x)
        x = self.layer2(x)
        x = self.softmax(x)
        return x

model = NeuralNet().to('cpu')
height, width = 28, 128
x = Variable(torch.FloatTensor(1, height, width), requires_grad=True)
y = Variable(torch.FloatTensor(height, 10), requires_grad=False)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

for epoch in range(20):
    running_loss = 0.0
    optimizer.zero_grad()
    y_pred = model(x)
    print(y_pred)
    print(y)
    loss = criterion(y_pred, y)
    loss.backward()
    optimizer.step()
    # print training statistics
    running_loss += loss.item()
    print('[%d] loss: %.3f' % (epoch + 1, running_loss / 2000))

print('Training is finished!')
RuntimeError: size mismatch, m1: [1 x 3584], m2: [784 x 128] at C:\Users\builder\AppData\Local\Temp\pip-req-build-e5c8dddg\aten\src\TH/generic/THTensorMath.cpp:136 |
st49967 | 111391:
self.layer1= nn.Linear(28*28, 128)
height, width = 28, 128
You've provided the wrong height and width values. When flattened, a 28x128 input gives a tensor with 3584 features, but your first nn.Linear layer expects 28x28 = 784 input features. |
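A quick shape check (a sketch, not the full training script):

import torch

# With an input of shape (1, 28, 28), flattening dims 1..-1 yields 784 features,
# which matches nn.Linear(28*28, 128). The (1, 28, 128) input used in the script
# flattens to 3584 and triggers the size-mismatch error above.
x = torch.randn(1, 28, 28)
print(torch.flatten(x, 1, -1).shape)   # torch.Size([1, 784])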