st81968 | In this code snippet you will initialize 16 different AlexNets, which are shared across the different levels.
E.g. you could check the id of some parameter and verify that it's the same:
print(id(level_one[0].classifier[1].weight))
> 140500209576624
print(id(level_two[0].modelone.classifier[1].weight))
> 140500209576624
The device error might come from the usage of to('cuda') inside __init__.
Usually you would call to(device) on the complete model to make sure all parameters are pushed to the device.
If you are not using it this way, self.lin might still be on the CPU.
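A minimal sketch of that pattern (the SubModel name and layer sizes are illustrative, not from the original code):
import torch
import torch.nn as nn

class SubModel(nn.Module):
    def __init__(self):
        super().__init__()
        # no .to('cuda') here; keep the module device-agnostic
        self.lin = nn.Linear(10, 10)

    def forward(self, x):
        return self.lin(x)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = SubModel().to(device)  # moves all registered parameters at once |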
st81969 | It gives 10% accuracy on CIFAR-10; I resized the images to (128, 128).
Changing it to a combination of two neural networks with a 4-layer convnet gives 70% accuracy. |
st81970 | Hi everyone
Now I have samples with TxNxD size and each sample may have a different T and N. Since T and N may vary a lot, I do not want to pad. As a result, I have to use a batch_size 1.
Do you have any advice to accelerate the training? (I have a server with multiple 12GB GPUs and multiple CPUs. My model only uses a bit over 1GB of GPU memory.)
I have read about the convenient torch.nn.DataParallel API, but it seems that all samples need the same size to form batches. Right now, I am trying to read the DistributedDataParallel documentation.
So could you please tell me what I should do? Detailed documentation or even some toy code would be best. (I have limited knowledge about multiprocessing and distribution.)
Thanks a lot |
st81971 | Hello, basically I'm trying to implement ReLU from scratch without using numpy. In numpy the simple implementation is np.maximum(input, x), and the torch equivalent of maximum is max. But torch.max(input, 0) computes the maximum value along the 0th axis instead.
One way is to create an all-zero matrix and then use torch.max, but that's inefficient. Is there anything else I can use? |
st81972 | Solved by spanev in post #2. |
st81973 | Hi @numbpy,
Maybe something like this would work for you:
torch.where((input > x), input, torch.zeros_like(input))
Please find more about torch.where here. |
st81974 | That might be a better solution. I used input[input < 0] = 0 (don't know how to use inline code) for ReLU,
and input[input < 0] = 0; input[input > 0] = 1 for the ReLU derivative. Will check which one is faster.
Thanks |
st81975 | You can inline code by adding single backquotes: `as this` => as this.
Actually, you should rather use clamp (better than zeros_like + where):
output = input.clamp(min=0) |
st81976 | Thanks, clamp works for ReLU but not for the ReLU derivative:
output = input.clamp(max=1) doesn't work since most entries are between 0 and 1.
torch.where works fine, but all of these are still slow compared to output = numpy.maximum(input, 0) by a wide margin.
I tried searching the official documentation for ReLU, but the actual implementation seems to be in Lua, which I couldn't understand. Also, does PyTorch have an implementation for the ReLU derivative, or is it computed internally by autograd?
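For reference, a minimal sketch of the two operations being discussed (assuming input is a float tensor; the derivative uses the usual subgradient choice of 0 at 0):
import torch

input = torch.randn(4, 4)
relu = input.clamp(min=0)                 # ReLU
relu_prime = (input > 0).to(input.dtype)  # 1 where input > 0, else 0 |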
st81977 | Hi everyone, I wonder whether I can write the same nn module in PyTorch.
My Keras code is:
model = Sequential()
# model.add(LSTM(12, input_shape=train_x.shape[1:], return_sequences=True))
# model.add(Activation('tanh'))
# model.add(LSTM(32, input_shape=train_x.shape[1:], return_sequences=False))
model.add(LSTM(16, input_shape=(1, 3), return_sequences=False))
model.add(Activation('relu'))
# model.add(Dense(12, input_shape=train_x.shape[1:]))
# model.add(Activation('relu'))
model.add(Dense(12))
model.add(Activation('relu'))
model.add(Dense(1))
model.compile(loss='mse',
optimizer='adam',
metrics=['mse'])
model.fit(train_x, train_y,
batch_size=batch_size,
epochs=500,
validation_data=(val_x, val_y),
callbacks = [tensorboard_CB])
I have been trying for a week to write this in PyTorch, but I run into the problem that train_x = [23k, 1, 3] and train_y = [23k, 1], so their shapes are not the same and I cannot train the nn in PyTorch.
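For what it's worth, a rough PyTorch sketch of the same architecture (an assumption-laden translation: inputs of shape [N, 1, 3] with batch_first=True; class and variable names are illustrative):
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # LSTM(16, input_shape=(1, 3), return_sequences=False)
        self.lstm = nn.LSTM(input_size=3, hidden_size=16, batch_first=True)
        self.fc1 = nn.Linear(16, 12)  # Dense(12)
        self.fc2 = nn.Linear(12, 1)   # Dense(1)

    def forward(self, x):             # x: [N, 1, 3]
        out, _ = self.lstm(x)         # out: [N, 1, 16]
        out = torch.relu(out[:, -1])  # last time step ~ return_sequences=False
        out = torch.relu(self.fc1(out))
        return self.fc2(out)          # [N, 1], matching train_y

net = Net()
out = net(torch.randn(8, 1, 3))  # -> torch.Size([8, 1])
Training would then use nn.MSELoss() and torch.optim.Adam, mirroring model.compile. |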
st81978 | Hi guys, I have a model consisting of two models. How can I freeze the weights of the first model and train only the second one?
The first model is loaded from a .pth file and I don't want to train it any further.
Sorry for my bad English. |
st81979 | Solved by spanev in post #2. |
st81980 | Hi @Giuseppe
Let’s say supermodel is the model containing the two sub-models Model1 and Model2 and you only want to train Model2. You should be able to achieve it by adding this before training:
for param in supermodel.Model1.parameters():
param.requires_grad = False
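A follow-up note (not part of the original answer): you can also pass only the still-trainable parameters to the optimizer, so the frozen ones are never touched:
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, supermodel.parameters()),
    lr=1e-3,  # illustrative learning rate
) |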
st81981 | Hi everyone, I created my own dataset and its size is [22806, 1, 3]. But when I tried to use this dataset for training, I got this error:
train x size: torch.Size([22806, 1, 3])
Traceback (most recent call last):
File "fev.py", line 210, in <module>
for inputs, labels in tr:
ValueError: too many values to unpack (expected 2)
(tr = the training loader)
I tried different batch sizes, but I cannot figure it out. |
st81982 | Hi,
Could you show the code where you create the dataset and dataloader, please?
If your dataset contains a single tensor, then you cannot do for inputs, labels in tr:, as there is no second element; you should do for inputs in tr: instead. |
st81983 | cycles = ['LDCC', 'LDR', 'LRP']
variables = ['iBatt_10_s', 'uBatt_10_s', 'tCell_10_s', 'SOC_Cell_10_s']
save_var = ['i', 'u', 'T', 'SOC']
DATA = {}
for cycle in cycles:
print('* Loading cycle in memory: ', cycle)
for i, variable in enumerate(variables):
print(' Loading variable in memory: ', variable)
key_name = cycle + '_' + variable
target_csv = r'/Users/dreamer/Desktop/pytorchh/csv/MiL_sim_' + key_name + '.txt'
f = open(target_csv, 'r')
fresh_data = f.read()
f.close()
fresh_data = fresh_data.split(',')
save_name = cycle + '_' + save_var[i]
DATA[save_name] = np.array(fresh_data).astype(float)
# set their shapes right
for key in DATA:
DATA[key] = DATA[key].reshape(DATA[key].shape[0], 1)
LDCC_i = []
LDCC_u = []
LDCC_T = []
LDCC_SOC = []
k = 0
limit_of_jump = 20
wrt = DATA['LDCC_u']
for i in range(len(wrt) - 1):
if wrt[i] == wrt[i + 1]:
k = k + 1
if k == limit_of_jump or wrt[i] != wrt[i + 1]:
k = 0
LDCC_SOC.append(DATA['LDCC_SOC'][i])
LDCC_i.append(DATA['LDCC_i'][i])
LDCC_u.append(DATA['LDCC_u'][i])
LDCC_T.append(DATA['LDCC_T'][i])
LDR_i = []
LDR_u = []
LDR_T = []
LDR_SOC = []
k = 0
limit_of_jump = 20
wrt = DATA['LDR_u']
for i in range(len(wrt) - 1):
if wrt[i] == wrt[i + 1]:
k = k + 1
if k == limit_of_jump or wrt[i] != wrt[i + 1]:
k = 0
LDR_SOC.append(DATA['LDR_SOC'][i])
LDR_i.append(DATA['LDR_i'][i])
LDR_u.append(DATA['LDR_u'][i])
LDR_T.append(DATA['LDR_T'][i])
def scaler(tensor):
for ch in tensor:
scale = 1.0 / (ch.max(dim=0)[0] - ch.min(dim=0)[0])
ch.mul_(scale).sub_(ch.min(dim=0)[0])
return tensor
filtered_LDCC_u = savgol_filter(np.ravel(LDCC_u), 501, 3) # 51, 3
train_i = np.concatenate((LDCC_i, LDR_i))
train_u = np.concatenate((LDCC_u, LDR_u))
train_T = np.concatenate((LDCC_T ,LDR_T))
train_inputs = np.concatenate((train_i, train_u, train_T), axis=1)
train_outputs = np.concatenate((LDCC_SOC, LDR_SOC))
input_scaler = MinMaxScaler(feature_range=(-1,1))
input_scaler.fit(train_inputs)
input_scaler_outputs=MinMaxScaler(feature_range=(-1,1))
input_scaler_outputs.fit(train_outputs)
inputs_LDCC = input_scaler.transform(np.concatenate((LDCC_i, LDCC_u, LDCC_T), axis=1))
#input_scaler.fit(train_inputs)
#inputs_LDCC1 = input_scaler.fit_transform(np.concatenate((LDCC_i, LDCC_u, LDCC_T), axis=1))
outputs_LDCC=input_scaler_outputs.transform(np.array(LDCC_SOC))
#train_inputs=torch.Tensor(np.concatenate((LDCC_i, LDCC_u, LDCC_T), axis=1))
#inputs_LDCC1=scaler(train_inputs)
inputs_LDR = input_scaler.transform(np.concatenate((DATA['LDR_i'], DATA['LDR_u'], DATA['LDR_T']), axis=1))
outputs_LDR = input_scaler_outputs.transform(DATA['LDR_SOC'])
print(inputs_LDR.shape)
inputs_LRP = input_scaler.transform(np.concatenate((DATA['LRP_i'], DATA['LRP_u'], DATA['LRP_T']), axis=1))
outputs_LRP = input_scaler_outputs.transform(DATA['LRP_SOC'])
#inputs_LDCC1=inputs_LDCC1.numpy()
data_x = np.concatenate((inputs_LDCC, inputs_LDR), axis=0)
data_x = data_x.reshape(data_x.shape[0], 1, data_x.shape[1])
data_y = np.concatenate((outputs_LDCC, outputs_LDR), axis=0)
print("data y: ",data_y.shape)
val_range = int(data_x.shape[0]/100) * 15
val_x = data_x[0:val_range, :,:]
train_x = data_x[val_range:None, :,:]
val_y = data_y[0:val_range, :]
train_y = data_y[val_range:None, :]
'''
for axis in range(0, data_x.shape[1]):
plt.figure()
plt.plot(data_x[:, axis])
plt.show()
'''
train_x=torch.Tensor(train_x)
val_x=torch.Tensor(val_x)
print("train x size:",train_x.size())
class Model(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv1d(1, 20, 1, 1)
self.conv2 = nn.Conv1d(20, 50, 1, 1)
self.fc1 = nn.Linear(1*3*50, 500)
#self.dropout1 = nn.Dropout(0.5)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool1d(x, 1, 1)
x = F.relu(self.conv2(x))
x = F.max_pool1d(x, 1, 1)
x = x.view(-1, 1*3*50)
x = F.relu(self.fc1(x))
#x = self.dropout1(x)
x = self.fc2(x)
return x
'''
class Model(nn.Module):
def __init__(self,i,h1,out):
super().__init__()
self.linear=nn.Linear(i,h1)
self.linear2=nn.Linear(h1,out)
def forward(self,x):
x=F.tanh(self.linear(x))
x=F.tanh(self.linear2(x))
return x
'''
'''
data_x = inputs_LDCC.reshape(inputs_LDCC.shape[0], 1, inputs_LDCC.shape[1])
data_y = outputs_LDCC
val_range = int(data_x.shape[0]/100) * 15
val_x = data_x[0:val_range, :, :]
train_x = data_x[val_range:None, :, :]
val_y = data_y[0:val_range, :]
train_y = data_y[val_range:None, :]
train_x=torch.Tensor(train_x)
train_x=train_x.view(train_x.size(0),-1)
print(train_x.size(0))
val_x=torch.Tensor(val_x)
'''
batch_size=1000
epochs=15
device=torch.device("cuda:0"if torch.cuda.is_available() else "cpu")
tr=torch.utils.data.DataLoader(train_x,batch_size=100,shuffle=True,drop_last=True)
val=torch.utils.data.DataLoader(val_x,batch_size=100,shuffle=False)
model=Model().to(device)
print(tr.__len__())
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
next(iter(tr))
for e in range(epochs):
running_loss = 0.0
running_corrects = 0.0
val_running_loss = 0.0
val_running_corrects = 0.0
for inputs in tr:
inputs = inputs.to(device)
outputs = model(inputs)
loss = criterion(outputs, inputs)
optimizer.zero_grad()
loss.backward()
optimizer.step() |
st81984 | Hi,
If you have an x and y for every point, you can use a TensorDataset that you then give to the DataLoader.
That way you will be able to do for x, y in tr.
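A minimal sketch of that, reusing the train_x / train_y tensors from the posts above:
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(train_x, train_y)
tr = DataLoader(dataset, batch_size=100, shuffle=True)

for x, y in tr:
    pass  # x: [100, 1, 3], y: [100, 1] |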
st81985 | I solved that problem, thank you so much, but I ran into another one. I will try to explain it briefly. Here is my simple nn:
class Model(nn.Module):
def __init__(self, D_in, H1, H2, D_out):
super().__init__()
self.linear1 = nn.Linear(D_in, H1)
self.linear2 = nn.Linear(H1, H2)
self.linear3 = nn.Linear(H2, D_out)
def forward(self, x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
and my datasets names are train_x train y validation_x validation_y
I build these system on keras and it works fine and my keras code:
model = Sequential()
model.add(LSTM(16, input_shape=(1, 3), return_sequences=False))
model.add(Activation('relu'))
model.add(Dense(12))
model.add(Activation('relu'))
model.add(Dense(1))
model.compile(loss='mse',
optimizer='adam',
metrics=['mse'])
model.fit(train_x, train_y,
batch_size=batch_size,
epochs=500,
validation_data=(val_x, val_y),
callbacks = [tensorboard_CB])
net_outputs_LDCC = model.predict(inputs_LDCC_ready, batch_size=batch_size)
So my question is: how can I get the equivalent of the .predict() result in PyTorch? Because I want to compare the real results and the nn results. |
st81986 | To get your validation / test predictions, you should set your model to evaluation mode using model.eval(), wrap your code in a with torch.no_grad() block, and iterate your validation or test DataLoader. Have a look at the ImageNet example to see how the validate method was created.
Depending on your model, you might get the predicted class using:
output = model(data)
preds = torch.argmax(output, 1)
e.g. if you are dealing with a multi-class classification use case and your last layer is a linear layer with nb_classes output neurons.
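A minimal sketch of that pattern (assuming a val_loader that yields (data, target) pairs; for the regression case in this thread you would keep the raw outputs instead of taking the argmax):
model.eval()
preds = []
with torch.no_grad():
    for data, target in val_loader:
        output = model(data.to(device))
        preds.append(output.cpu())
preds = torch.cat(preds)  # the PyTorch analogue of Keras' model.predict() |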
st81987 | Thanks for the response. Firstly, looking at your example code: I only have array values, so I cannot iterate x, y over my loader. How can I apply your example to my code? |
st81988 | As @albanD explained, you could transform the numpy arrays to tensors using:
data = torch.from_numpy(x)
target = torch.from_numpy(y)
and pass it to a TensorDataset:
dataset = TensorDataset(data, target)
loader = DataLoader(
dataset,
batch_size=1,
shuffle=False,
num_workers=2
) |
st81989 | @ptrblck thanks for responding to my question. I fixed that problem, but now I get this error:
Traceback (most recent call last):
File "fev.py", line 227, in <module>
outputs = model(inputs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "fev.py", line 183, in forward
x = F.relu(self.linear1(x))
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 92, in forward
return F.linear(input, self.weight, self.bias)
File "/usr/local/lib/python3.7/site-packages/torch/nn/functional.py", line 1404, in linear
if input.dim() == 2 and bias is not None:
AttributeError: 'list' object has no attribute 'dim'
my nn is:
class Modele(nn.Module):
def __init__(self, D_in, H1, H2, D_out):
super().__init__()
self.linear1 = nn.Linear(D_in, H1)
self.linear2 = nn.Linear(H1, H2)
self.linear3 = nn.Linear(H2, D_out)
def forward(self, x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
and my calculations are:
batch_size=1000
epochs=5
device=torch.device("cuda:0"if torch.cuda.is_available() else "cpu")
train_x=train_x.view(-1,1,3)
dataset=TensorDataset(train_x,train_y)
tr=torch.utils.data.DataLoader(dataset,batch_size=100,shuffle=False)
val=torch.utils.data.DataLoader(val_x,batch_size=100,shuffle=False)
model=Modele(1,100,65,1).to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
for e in range(epochs):
running_loss = 0.0
running_corrects = 0.0
val_running_loss = 0.0
val_running_corrects = 0.0
for inputs in tr:
outputs = model(inputs)
loss = criterion(outputs, inputs)
_,preds=torch.max(outputs,1)
outputss.append(preds.max().detach().numpy())
losses.append(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
#outputss.append(outputs.detach().numpy())
#print(loss.item()) |
st81990 | Your TensorDataset will return the data and target batch, so you could unwrap inputs or assign the return values to separate tensors:
for inputs in loader:
data, target = inputs
outputs = model(data)
loss = criterion(outputs, target)
# or
for data, target in loader:
... |
st81991 | I appreciate the response. I already made those changes, but I got the list error which I mentioned above. Do you have any idea about that problem? |
st81992 | I have a new question about the sizes. I do not have a dataset in the MNIST format [100, 1, 28, 28]. My dataset's shape is [23k, 1, 3], so 23k is not the batch size. I just read data from a .txt file into an array and then transform it to a tensor. My question is: is the shape of my dataset fine, or should I change its dimensions? Because I want to train it efficiently. |
st81993 | Essentially, the first dimension should be the batch size if you are using the DataLoader. For an image dataset like MNIST, it is of the form [N x C x H x W]
N = batch size
C = channel (R,G,B)
H, W = height and width
If you are not using the DataLoader and are manually iterating over the dataset, you don't need it. |
st81994 | How do I display an output image from the middle of a convolutional neural network? It is (64, 30, 30) in size.
How do I display all 64 of the 30x30 images that are the outputs of the middle convolutional layer? |
st81995 | There are two things to do: get the intermediate features, then display them.
First, you will need to use register_forward_hook on the concerned convolution.
To create it you have to write:
def hook(module, input, output):
    # output is the [N, 64, 30, 30] Tensor containing your features
model.concerned_conv.register_forward_hook(hook)
Then it depends on your display method; you can use torchvision and TensorBoard to do so. The hook body would be something like:
def hook(module, input, output):
    grid = torchvision.utils.make_grid(output)
    writer.add_image('conv_features', grid, 0)
Please find more info on using TensorBoard with PyTorch here.
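Putting it together, a self-contained sketch (the stand-in conv layer and grid layout are illustrative):
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()

def hook(module, input, output):
    # output: [N, 64, 30, 30]; show each feature map of the first sample
    maps = output[0].unsqueeze(1)               # [64, 1, 30, 30]
    grid = torchvision.utils.make_grid(maps, nrow=8, normalize=True)
    writer.add_image('conv_features', grid, 0)

model = torch.nn.Conv2d(3, 64, 3, padding=1)    # stand-in for the real network
handle = model.register_forward_hook(hook)
_ = model(torch.randn(1, 3, 30, 30))            # the forward pass triggers the hook
handle.remove() |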
st81996 | When should I remove this hook? Is it OK to train the neural network and not remove it? |
st81997 | Yes, you can save the returned torch.utils.hooks.RemovableHandle:
handle = model.concerned_conv.register_forward_hook(hook)
and to remove it, simply:
handle.remove()
However, you can safely keep the hook; it is a runtime-only mechanism. It won't be saved and won't influence the training of your model. |
st81998 | This might not be the best forum to ask this, but has anyone here worked on the YOLO algorithm? I want to ask about the loss function, specifically the localization loss.
Did you calculate the localization loss with the original coordinates or with scaled (0 to 1) coordinates?
Since we have to multiply by lambda_coord, this will have a major impact. |
st81999 | I recently coded up my own implementation of a cross-validation script for PyTorch and my models keep overfitting. Can someone help me out here? I am desperate. Here's the code.
def predict(val_x,model=mymod,loss=loss,thresh=0.5):
valxtorch=torch.tensor(val_x,dtype=torch.long).cuda()
#valytorch=torch.tensor(val_y,dtype=torch.float32).cuda()
vald=torch.utils.data.TensorDataset(valxtorch)
testit=torch.utils.data.DataLoader(vald,batch_size=1024)
sig=nn.Sigmoid()
a=[]
tes=[]
losse=[]
for i,(trainx) in enumerate(testit):
xl=trainx[0]
pred=model(xl)
#losses=loss(pred,trainy)
pred=pred.cpu().detach()
#losse.append(losses.item())
pred=sig(pred).numpy()
a.append(list(pred))
#tes.append(trainy.cpu().detach())
b=[]
for i in a:
for j in i:
b.append(j)
testq=[]
for i in tes:
for j in i:
testq.append(int(j))
a=[]
for i in b:
if i>=thresh:
a.append(1)
else:
a.append(0)
#valy=list(np.asarray(val_y,dtype=int))
#valloss=sum(losse)/len(losse)
return a
def Train(model,train_x,train_y,n_epochs=4,splits=splits,scheduler=None):
it=0
best=0.0
for k in range(len(splits)):
print("Fold {}".format(k+1))
v=k
train,val=splits[v]
train12=torch.tensor(train_x[train],dtype=torch.long)
trainy12=torch.tensor(train_y[train],dtype=torch.float32)
traintorch=train12.cuda()
trainytorch=trainy12.cuda()
valxt=torch.tensor(train_x[val],dtype=torch.long)
valyt=torch.tensor(train_y[val],dtype=torch.float32)
valxtorch=valxt.cuda()
valytorch=valyt.cuda()
reqd=torch.utils.data.TensorDataset(traintorch,trainytorch)
reqd1=torch.utils.data.TensorDataset(valxtorch,valytorch)
load=torch.utils.data.DataLoader(reqd,batch_size=512,shuffle=True)
valload=torch.utils.data.DataLoader(reqd1,batch_size=1024,shuffle=True)
opti=torch.optim.Adam(model.parameters())
mymod.train()
for j in range(n_epochs):
avg=0
for i,(x,y) in enumerate(load):
xl=x
opti.zero_grad()
pred=mymod(xl)
if scheduler!=None:
scheduler.batch_step()
losses=loss(pred,y)
avg+=losses.item()
losses.backward()
opti.step()
it+=1
avg=avg/len(load)
"""qwe,valloss=predict(val_x,val_y,mymod)
if qwe>best:
best=qwe
torch.save(mymod,'/content/gdrive/My Drive/My files/Jigsaw/quoramodbest.pt')"""
print("Epoch {}/{}: Loss:{}".format(j+1,n_epochs,avg))
mymod.eval()
sig=nn.Sigmoid()
avgs=[]
for i,(x,y) in enumerate(valload):
a=[]
xl=x
pred=mymod(xl)
pred=sig(pred)
pred=pred.detach().cpu().numpy()
a.append(list(pred))
b=[]
for i in a:
for j in i:
b.append(j)
c=[]
for i in b:
if i>=0.5:
c.append(1)
else:
c.append(0)
valyq=list(np.asarray(y.detach().cpu().numpy(),dtype=int))
avgs.append(f1_score(c,valyq))
print(sum(avgs)/len(avgs))
return model |
st82000 | Please help me vectorize this code, at least over the batch dimension:
A = torch.zeros((3, 3, 3), dtype = torch.float)
X = torch.tensor([[0, 1, 2, 0, 1, 0], [1, 0, 0, 2, 1, 1], [0, 0, 2, 2, 1, 1]])
for a, x in zip(A, X):
for i, j in zip(x, x[1:]):
a[i, j] = 1
Thanks! |
st82001 | Solved by albanD in post #6. |
st82002 | Hi,
When you say the batch dimension, is it the first dimension in your example? Or another dimension to add to your example?
Also the formula that you want is the following?
forall i in A.size(0), forall j in x.size(0) - 1: A[i, x[i, j], x[i, j+1]] = 1
Meaning that every pair of consecutive values in x are the indices where you should put a 1? |
st82003 | Just to be sure before writing code for this (to make sure I don't write code that does not match what you want): what is the rationale for the consecutive numbers in x being used as coordinates? Why is each number first an index into the 2nd dimension and then an index into the 1st dimension?
It would feel more natural to have half of them be indices into the 1st dimension and the other half indices into the 2nd dimension. Or at least have them used only once, not twice. |
st82004 | Consider X[0]: [0, 1, 2, 0, 1, 0]. What I want is to build an adjacency matrix such that:
A[0, 1] = 1
A[1, 2] = 1
A[2, 0] = 1
A[0, 1] = 1
A[1, 0] = 1
The code I shared works, however it’s very slow since I’m looping over each element in the batch. I’d like to vectorize it as much as possible. Thanks! |
st82005 | There you go, this will be much faster
Note that I use .narrow(-1, 0, x_size-1) out of habit but [:, :-1] works as well if you prefer that notation.
import torch
A = torch.zeros((3, 3, 3), dtype = torch.float)
X = torch.tensor([[0, 1, 2, 0, 1, 0], [1, 0, 0, 2, 1, 1], [0, 0, 2, 2, 1, 1]])
for a, x in zip(A, X):
for i, j in zip(x, x[1:]):
a[i, j] = 1
print(A)
A = torch.zeros((3, 3, 3), dtype = torch.float)
# This code assumes A is contiguous ! If it is not, add
# A = A.contiguous()
# For indexing, collapse the last two dimensions of A
A_view = A.view(A.size(0), -1)
# Compute the indices where you will index in A
x_size = X.size(-1)
indices = X.narrow(-1, 0, x_size-1) * A.stride(1) * A.stride(2) + X.narrow(-1, 1, x_size-1) * A.stride(2)
# Put 1s at the computed indices
A_view.scatter_(1, indices, 1)
print(A) |
st82006 | class Classifier(nn.Module):
def __init__(self):
super(Classifier, self).__init__()
self.LSTM1 = nn.LSTM(input_size = 256, hidden_size = 1024)
self.LSTM2 = nn.LSTM(input_size = 1024, hidden_size = 1024)
self.LSTM3 = nn.LSTM(input_size = 1024, hidden_size = 1024)
self.LSTM4 = nn.LSTM(input_size = 1024, hidden_size = 1024)
self.LSTM5 = nn.LSTM(input_size = 1024, hidden_size = 1024)
self.LSTM6 = nn.LSTM(input_size = 1024, hidden_size = 256)
def forward(self, x):
result, (hn, cn) = self.LSTM1(x)
result, (hn, cn) = self.LSTM2(result, (hn, cn))
result, (hn, cn) = self.LSTM3(result, (hn, cn))
result, (hn, cn) = self.LSTM4(result, (hn, cn))
result, (hn, cn) = self.LSTM5(result, (hn, cn))
result, (hn, cn) = self.LSTM6(result, (hn, cn))
return result
The architecture I made above throws an error:
RuntimeError: Expected hidden[0] size (1, 256, 256), got (1, 256, 1024)
But if I remove (hn, cn) from self.LSTM6, it runs well. Why does the hidden cell affect the LSTM layer? |
st82007 | This module was recently merged.
However, it looks like the docs are missing. |
st82008 | The docs show up in Colab (screenshot attached: Screenshot (418).png).
One more thing: how do I implement show_batch without using matplotlib?
I raised a pull request, but I used matplotlib, and torchvision does not use matplotlib.
Is there a way to show a batch of a dataset, so it could be used like this:
for batch, labels in dl:
show_batch(batch) |
st82009 | The docs also show up in IPython, so maybe there is some issue on the website.
If you don’t want to use matplotlib, you could use e.g. PIL instead.
However, in your code you are free to use whatever works for you. |
st82010 | @ptrblck the function is missing there because you are on the stable doc, not master’s doc! |
st82011 | I'm pretty sure I was looking at both and reported it.
However, the master docs are there now.
Since this module is available in 1.2.0, I think we might need to fix the stable docs as well. |
st82012 | According to this, operations that involve the GPU are processed asynchronously and in parallel.
Does this mean that if I have many modules that can independently work on the same data, they will be processed in parallel, even if I call them sequentially?
For example, let’s say I have some code that looks like this:
x = torch.randn(1,3,32,32).cuda()
conv_list = []
for i in range(30):
conv_list.append(torch.nn.Conv2d(3, 3, 3, 1).cuda())
# Is this executed in parallel?
output = sum(conv(x) for conv in conv_list)
Above, I've created 30 different conv layers that can work independently of each other. I'm calling them in a generator expression, but my understanding is that they are actually executed in parallel under the hood. Is my understanding correct?
I have a model where many sub-networks can operate independently of each other, and I'd like to make it as parallel as possible. I was thinking of creating multiple torch.cuda.Stream() objects and using one for each independent module. But if my example code above does run in parallel, using multiple torch.cuda.Stream() objects would be silly.
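For reference, a minimal sketch of the multi-stream pattern mentioned above (two streams for brevity; the results must not be consumed before synchronizing):
import torch

x = torch.randn(1, 3, 32, 32, device='cuda')
conv1 = torch.nn.Conv2d(3, 3, 3, 1).cuda()
conv2 = torch.nn.Conv2d(3, 3, 3, 1).cuda()

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
torch.cuda.synchronize()
with torch.cuda.stream(s1):
    out1 = conv1(x)
with torch.cuda.stream(s2):
    out2 = conv2(x)
torch.cuda.synchronize()  # wait for both streams before using out1 / out2
output = out1 + out2 |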
st82013 | Hey,
for some reason, I know that the output (i.e. the pixel values) of my network should follow a uniform random distribution. So I would like to aid convergence by enforcing the output not only to have the correct values but also to be uniformly distributed. My idea was to have the loss be a combination of, say, an L2 term between output and target and a regulariser / prior that enforces the output to be uniform.
I am wondering whether this is possible, really simple, or hard, since the prior does not care about the actual values but only about their distribution. Any ideas or further reading?
Cheers
Lucas |
st82014 | Hello, I'm trying to migrate my GAN model code from Keras (TF backend) to PyTorch.
Currently I'm stuck at an operation where I combine the result of an embedding layer with the image input through element-wise multiplication.
In Keras, the code would look like this:
label_embedding = Flatten()(Embedding(n_class, embed_dim)(label))
model_input = Multiply()([input_img, label_embedding])
where label_embedding gives a vector of size embed_dim and input_img has shape (h, w, 1), resulting in model_input of shape (h, w, embed_dim).
In PyTorch, however, I can't simply multiply by the embedding result because of the difference in dimensions:
self.embedding = nn.Embedding(n_class, embed_dim)
model_input = torch.mul(input_img, self.embedding(y))
How do I reshape the embedding vector to match the image input so that it can be combined using torch.mul?
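A minimal sketch of one way to do this via broadcasting (an assumption: NCHW layout with single-channel images; all sizes are illustrative):
import torch
import torch.nn as nn

n_class, embed_dim = 10, 16
embedding = nn.Embedding(n_class, embed_dim)

input_img = torch.randn(8, 1, 28, 28)         # [N, 1, H, W]
y = torch.randint(0, n_class, (8,))

emb = embedding(y).view(-1, embed_dim, 1, 1)  # [N, embed_dim, 1, 1]
model_input = torch.mul(input_img, emb)       # broadcasts to [N, embed_dim, H, W] |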
st82015 | My neural network has two inputs, and I don't know whether I'm using the API correctly to pass 2 images as inputs. These are the relevant lines of my code:
torch::jit::script::Module module = torch::jit::load("/media/usr515/psmnet_256.pt", torch::kCUDA);
torch::Tensor result = module.forward({tensor_image, tensor_image_1}).toTensor();
tensor_image and tensor_image_1 are the images after conversion to tensors. |
st82016 | I have an inceptionresnetv2 model which was trained with pytorch==1.0 and scripted with pytorch==1.0. The model loads in libtorch, but it can't run forward. Does anyone know why this happens?
(error screenshot attached: image.png) |
st82017 | When I use an LSTM, there is always a problem with the data dimensions (input_size, out_size). Currently train_x is (batch_size, time_step, input_size) and train_y is (batch_size, time_step, input_size). The LSTM forward (batch_first is True) code:
def forward(self,x):
x, (h_n, h_c) = self.rnn(x, None)
x = self.reg(x)
return x
The loss calculation code:
var_x = Variable(train_x)
var_y = Variable(train_y)
out = net1(var_x[start:end]).view(-1)
var_y=var_y[start:end].view(-1)
loss = criterion(out ,var_y)
The errors are always about these two aspects. |
st82018 | Hello everybody!
I am trying to write and train a deep multiple instance learning model for 3D image classification, and I always run into an "out of memory" problem. I cannot find out what causes it.
Since the 3D images are too large, I segment each 3D image into overlapping patches. So my input is an array of 3D patches, and my output is a label.
In the training process, each patch is fed into a 3D CNN with a Softmax layer, and the 3D CNN outputs the probability of each patch. After obtaining the probabilities of all the patches of a 3D image, the deep multiple instance learning model selects the patches with the maximum probability for back-propagation. During back-propagation, I always get "cudaError: out of memory". My code is as below:
self.model.train(True)
running_loss, running_corrects = 0,0
patch_unit_size = 200
for pat_batch in self.dcm_datasets['train']:
inputs, labels, data_dir = pat_batch
# the shape of inputs is [batch_num, patch_num, channel_num, patch_height, patch_width, patch_length], where batch_num=1, patch_num varies between 3D images, channel_num=3, patch_height=patch_width=patch_length=24
for input_each_batch in inputs:
patches_size = input_each_batch.shape[0]
num_patches = math.ceil(1.0 * patches_size / patch_unit_size)
patch_out_max, patch_out_prob_max = None, None
for i in range(num_patches): # Find out the patches with the maximum probability
inputs_tmp = input_each_batch[i * self.patch_size: (i + 1) *
self.patch_size] if i < num_patches - 1 else input_each_batch[i * self.patch_size:]
with torch.cuda.device(self.cuda_ids[0]):
inputs_new = Variable(inputs_tmp.cuda()).to(self.cuda_ids[0])
labels_new = Variable(labels.cuda()).to((self.cuda_ids[0]))
outputs = self.model(inputs_new) # 1*2
out_probs = torch.nn.functional.softmax(outputs, dim=1).data
patch_out_prob_max = out_probs if patch_out_prob_max is None else torch.cat(
(out_probs, patch_out_prob_max), dim=0)
'''find all the indices of the maximum'''
patch_prob_max = patch_out_prob_max.cpu().numpy()
inds_x, inds_y = np.where(patch_prob_max == np.max(patch_prob_max))
patch_out_prob_max = patch_out_prob_max[inds_x]
inds_x_out = inds_x[inds_x < outputs.shape[0]]
inds_x_rest = inds_x[inds_x >= outputs.shape[0]]
if inds_x_rest.shape[0] != 0:
inds_x_rest = inds_x_rest - outputs.shape[0]
patch_out_max = patch_out_max[inds_x_rest]
if inds_x_out.shape[0] != 0:
patch_out_max_tmp = outputs[inds_x_out]
if inds_x_rest.shape[0] != 0:
patch_out_max = torch.cat((patch_out_max, patch_out_max_tmp), dim=0)
else:
patch_out_max = patch_out_max_tmp
outputs, out_probs, inputs_new = 0, 0, 0
labels_new_1 = None
for i in range(patch_out_max.shape[0]):
labels_new_1 = labels_new if labels_new_1 is None else torch.cat((labels_new_1, labels_new),dim=0)
loss = self.criterion(patch_out_max, labels_new_1)
self.optimizer.zero_grad() # zero the parameter gradients
loss.backward()
self.optimizer.step()
preds = torch.argmax(patch_out_max[0].data) # preds is still a tensor
running_loss += loss.item() # running_loss is a Python data
running_corrects += np.sum(preds.item() == labels_new.item())
loss, labels_new_1 = 0, None
torch.cuda.empty_cache()
torch.cuda.memory_allocated()
data_len = len(self.dcm_datasets['train'])
epoch_loss = running_loss / data_len
epoch_acc = running_corrects / data_len
return epoch_acc, epoch_loss
Since patch_num may be as high as 1500, I am wondering whether the "CudaError: out of memory" is caused by a large computation graph, but I am not sure.
So my question is: how large is my computation graph? If the computation graph of one self.patch_size chunk is O, is my computation graph O or num_patches*O?
If my computation graph is just O, the CUDA usage should not be too high. In that case, what leads to "cuda out of memory"? |
st82019 | Hi,
The memory used will be only what you keep references to. In your case, at the end of the loop, that is both all the patches you kept in patch_out_max (I'm not sure how many that is just from reading your code) and all the patches from the last run of the loop (because Python does not delete variables at the end of a loop). |
st82020 | I have a silly question:
import torch
a = torch.rand(2, 2)
b = torch.rand(2, 2)
What does a @ b do? |
st82021 | Solved by ptrblck in post #2
It's a matrix multiplication. If I'm not mistaken, the @ operator was introduced in Python 3.5 (PEP 465).
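A quick check (a sketch; a and b as defined in the question):
import torch

a = torch.rand(2, 2)
b = torch.rand(2, 2)
print(torch.allclose(a @ b, torch.matmul(a, b)))  # True: `@` dispatches to matmul |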
st82022 | I may have found a bug in torch.ge(), but I just wanted to re-post it here in case it's actually not a bug, but an issue with how I'm defining my model: https://github.com/pytorch/pytorch/issues/7322.
There's a runnable, short code sample in the link, along with the version numbers of the relevant libraries. |
st82023 | “greater than or equal” has zero gradient almost everywhere, and nondifferentiable at other points. It’s not a bug. |
st82024 | Thank you for your answer! So is it not possible to have stable training with .ge when used as an intermediate layer? |
st82025 | If there are no other path from input to output other than comparison ops, then there will be no gradients.
It’s not really an issue of stability because, if you think about the function, greater than or equal is just flat almost everywhere. |
st82026 | Like SimonW said, this isn't really a bug, just how the gradient surface of a piecewise operation is. I got around it by remaking my function into something that isn't piecewise. I explain it in a bit more detail here: https://scientificattempts.wordpress.com/2018/06/03/conditionally-thresholded-cnns-for-weakly-supervised-image-segmentation/ |
st82027 | In DDP, broadcast_buffers is set to True by default. I am wondering whether this is necessary. I used SyncBatchNorm, and according to the implementation, I think every process gets gradients from all nodes during backward, so the statistics should always be consistent.
So far my validation accuracy is abnormal, similar to https://github.com/facebookresearch/maskrcnn-benchmark/issues/267. I am thinking of replacing SyncBatchNorm with BatchNorm. So if broadcast_buffers is set to True, I guess all nodes will use the statistics from the first node, right? |
st82028 | I need to set a boolean flag in the code running on multiple GPUs. When using a single GPU, self.calculate_running is being set to False correctly after the first iteration. It’s not being set when I use more than one GPU:
class PCTL_Layer(nn.Module):
def __init__(self, calculate_running=False):
super(PCTL_Layer, self).__init__()
self.register_buffer('running', torch.zeros(1))
self.calculate_running = calculate_running
def forward(self, input):
if self.calculate_running:
pctl, _ = torch.kthvalue(input.view(-1), int(input.numel() * 0.9))
self.running = pctl
print(' gpu {} calculate_running: {}'.format(torch.cuda.current_device(), self.calculate_running))
self.calculate_running = False
return self.running
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.percentile = PCTL_Layer(calculate_running=True)
def forward(self, x):
x = self.percentile(x)
return x
model = Model()
model = torch.nn.DataParallel(model).cuda()
for i, image in enumerate(train_loader):
output = model(image)
I tried inserting torch.cuda.synchronize() everywhere (before and after), but that didn’t help. Any advice on how to make it work?
p.s. I realize that a simple solution would be to pass i to the forward method of PCTL_layer, however it’s complicated because that layer is packaged into nn.Sequential list, so I don’t know how to do that either. |
st82029 | I’ve executed your code on a machine with 8 GPUs and got the following output:
model(torch.randn(8, 100, device='cuda'))
> gpu 0 calculate_running: True
gpu 1 calculate_running: True
gpu 2 calculate_running: True
gpu 4 calculate_running: True
gpu 3 calculate_running: True
gpu 7 calculate_running: True
gpu 6 calculate_running: True
gpu 5 calculate_running: True
Which seems to indicates the attribute is properly set. |
st82030 | Did you try more than one iteration? Because when I use 2 GPUs with 3 iterations I get the following:
gpu 1 calculate_running: True
gpu 0 calculate_running: True
gpu 1 calculate_running: True
gpu 0 calculate_running: True
gpu 0 calculate_running: True
gpu 1 calculate_running: True
But I want to see this:
gpu 1 calculate_running: True
gpu 0 calculate_running: True |
st82031 | Thanks for the follow-up.
I’ve missed the change in this attribute and get the undesired behavior.
Register the condition as a BoolTensor and it should work:
self.calculate_running = torch.tensor(calculate_running, dtype=torch.bool) |
st82032 | It does not work.
I tried both setting it to False and setting it to torch.tensor(False, dtype=torch.bool):
class PCTL_Layer(nn.Module):
def __init__(self, calculate_running=False):
super(PCTL_Layer, self).__init__()
self.register_buffer('running', torch.zeros(1))
self.calculate_running = torch.tensor(calculate_running, dtype=torch.bool)
def forward(self, input):
if self.calculate_running:
pctl, _ = torch.kthvalue(input.view(-1), int(input.numel() * 0.9))
self.running = pctl
print(' gpu {} calculate_running: {}'.format(torch.cuda.current_device(), self.calculate_running))
self.calculate_running = False
#self.calculate_running = torch.tensor(False, dtype=torch.bool)
return self.running |
st82033 | Even when I synchronize GPUs like this:
def forward(self, input):
if self.calculate_running:
pctl, _ = torch.kthvalue(input.view(-1), int(input.numel() * 0.9))
self.running = pctl
print(' gpu {} calculate_running: {}'.format(torch.cuda.current_device(), self.calculate_running))
torch.cuda.synchronize()
self.calculate_running = False
torch.cuda.synchronize()
return self.running
It still does not work. Strange, isn’t it? |
st82034 | You are right. I missed the second output due to an error message, sorry.
Thinking about it, it might make sense that this flag is not propagated using DataParallel.
E.g. what would the expected result be, if the models on GPU0,1,2 set this flag to False, while all others keep it as True?
I think the cleanest approach would be to manipulate this flag after the forward pass manually using:
_ = model(torch.randn(8, 100, device='cuda'))
model.module.percentile.calculate_running = False
which will make sure that each new copy of the model uses the new value. |
st82035 | Great, boolean variable assignment works. Thank you. However, now self.running is not being assigned correctly. When I run the following code on 2 GPUs:
class PCTL_Layer(nn.Module):
def __init__(self, calculate_running=False):
super(PCTL_Layer, self).__init__()
self.register_buffer('running', torch.zeros(1))
self.calculate_running = calculate_running
def forward(self, input):
if self.calculate_running:
pctl, _ = torch.kthvalue(input.view(-1), int(input.numel() * 0.9))
self.running = pctl
self.calculate_running = False
print('calculate_running: {} input: {} running: {:.4f}'.format(self.calculate_running, input, self.running.item()))
return self.running
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.percentile = PCTL_Layer(calculate_running=True)
def forward(self, x):
x = self.percentile(x)
return x
model = Model()
model = torch.nn.DataParallel(model).cuda()
for i in range(2):
print('\nIteration', i)
output = model(torch.randn(2, 4, device='cuda'))
print('model.module.percentile.running: {} model output: {}'.format(model.module.percentile.running, output))
if i == 0:
model.module.percentile.calculate_running = False
Here’s the output:
Iteration 0
calculate_running: False input: tensor([[ 0.3747, -1.1507, -1.4812, 0.1900]], device='cuda:1') running: 0.1900
calculate_running: False input: tensor([[ 0.3961, -0.0903, -0.0330, -1.9596]], device='cuda:0') running: -0.0330
/home/michael/miniconda2/envs/pt/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
model.module.percentile.running: tensor([0.], device='cuda:0') model output: tensor([-0.0330, 0.1900], device='cuda:0')
Iteration 1
calculate_running: False input: tensor([[ 0.6758, -0.2846, 0.9055, -0.0532]], device='cuda:0') running: 0.0000
calculate_running: False input: tensor([[-0.3517, -0.8317, -0.7304, 0.7370]], device='cuda:1') running: 0.0000
model.module.percentile.running: tensor([0.], device='cuda:0') model output: tensor([0., 0.], device='cuda:0')
Why is self.running being reset to 0 after the first iteration? In the worst case, I want each GPU to use its own value of running, but ideally I want it to be the mean across all GPUs.
I can fix it like this:
class PCTL_Layer(nn.Module):
def __init__(self, calculate_running=False):
super(PCTL_Layer, self).__init__()
self.register_buffer('running', torch.zeros(1))
self.calculate_running = calculate_running
def forward(self, input):
if self.calculate_running:
pctl, _ = torch.kthvalue(input.view(-1), int(input.numel() * 0.9))
self.running = pctl
model.module.running_list.append(self.running)
self.calculate_running = False
print('calculate_running: {} input: {} running: {:.4f}'.format(self.calculate_running, input, self.running.item()))
return self.running
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.percentile = PCTL_Layer(calculate_running=True)
self.running_list = []
def forward(self, x):
x = self.percentile(x)
return x
model = Model()
model = torch.nn.DataParallel(model).cuda()
for i in range(3):
print('\nIteration', i)
output = model(torch.randn(2, 4, device='cuda'))
print('model.module.percentile.running: {}'.format(model.module.percentile.running))
if i == 0:
model.module.percentile.calculate_running = False
model.module.percentile.running = torch.tensor(model.module.running_list, device='cuda:0').mean()
print('model output (running): {} running_list: {}\n'.format(output, model.module.running_list))
Which produces the following output:
Iteration 0
calculate_running: False input: tensor([[0.2759, 1.1083, 0.3523, 0.6275]], device='cuda:1') running: 0.6275
calculate_running: False input: tensor([[-1.5641, 0.3595, 1.7428, -0.5368]], device='cuda:0') running: 0.3595
/home/michael/miniconda2/envs/pt/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:61: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
model.module.percentile.running: tensor([0.], device='cuda:0')
model output (running): tensor([0.3595, 0.6275], device='cuda:0') running_list: [tensor(0.3595, device='cuda:0'), tensor(0.6275, device='cuda:1')]
Iteration 1
calculate_running: False input: tensor([[ 0.4074, 1.4806, -0.5506, 0.3985]], device='cuda:0') running: 0.4935
calculate_running: False input: tensor([[ 0.2667, 0.4528, 0.1397, -0.5707]], device='cuda:1') running: 0.4935
model.module.percentile.running: 0.4935113787651062
model output (running): tensor([0.4935, 0.4935], device='cuda:0') running_list: [tensor(0.3595, device='cuda:0'), tensor(0.6275, device='cuda:1')]
But this is pretty ugly. Is there a better way? |
st82036 | Also, note that when I'm averaging the running_list I have to move it to gpu-0. How do the other GPUs access it? I mean, do they copy it from GPU-0 every time, or just once? If every time, how can I distribute it manually? |
st82037 | nn.DataParallel will scatter the model to all devices and gather their outputs on the master device. This blog post gives a good overview.
Since you are assigning a new value to running, I think the best way would be to have a look at how data parallel works internally and adapt these methods to your use case. |
st82038 | A simple example will show how my NN works. Usually an NN takes a step, then we give it a label and do backprop. In my case, the NN takes 10 steps (for example), and after that I count how many points the NN earned. If the NN earned minus 5 points, then I need to decrease the gradient at all 10 steps, and if the NN earned plus 5 points, then I need to increase the gradient. How do I remember the gradients and do backprop correctly? |
st82039 | Correct me if I am wrong: are you referring to batch size? As in calculating the loss for 10 inputs and then doing backprop?
If so, then you can use a batch size. The loss is calculated over the batch (like 10 steps), the gradients are computed, and then backprop is done. |
st82040 | You understood the batch size correctly; it is the number of steps. I have 3 neurons in the NN output. I used to use CrossEntropyLoss. In this situation the NN scored minus 5 points, so for the neuron that showed the highest weight I need to decrease the weights, and for the other two, increase them. For example, for one neuron I decrease by 1, and for the other two I increase by 0.5. I can't build such an implementation; I don't understand how to adjust the weights manually. |
st82041 | Hi everyone,
I am trying to use the KLDivLoss loss function, but I get the following error:
_th_log_out not supported on CUDAType for Long
I don't know if there is a bug in KLDivLoss, but as far as I can see there is a problem when using it on the GPU.
Any hint or solution would be really useful.
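A note on the likely cause (an assumption, since the code isn't shown): this error usually means a LongTensor is passed where KLDivLoss expects float tensors; the input should be float log-probabilities and the target float probabilities. A minimal sketch:
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10, device='cuda')
target = torch.softmax(torch.randn(4, 10, device='cuda'), dim=1)  # float probabilities

log_probs = F.log_softmax(logits, dim=1)  # float log-probabilities
loss = F.kl_div(log_probs, target, reduction='batchmean') |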
st82042 | I am trying to index a 3D tensor with a 2D matrix; the result should be a 2D matrix too. The following is what I'm trying to achieve:
dim1 = 2
dim2 = 3
dim3 = 4
source = torch.FloatTensor(dim1, dim2, dim3)
source.normal_()
check = source > 0.0  # `zn` in the original presumably refers to `source`
index = torch.argmax(check.long(), dim=0)
ret = torch.empty(dim2, dim3)
for i in range(dim2):
    for j in range(dim3):
        ret[i, j] = source[index[i, j], i, j] |
st82043 | I think you want to use torch.gather:
import torch
dim1 = 2
dim2 = 3
dim3 = 4
source = torch.FloatTensor(dim1, dim2, dim3)
index = source.argmax(dim=0).unsqueeze(0)
source.gather(0, index).squeeze(0) |
st82044 | I want to generate an accuracy/loss vs. epoch graph from a trained model. Is it possible to do so? An example image is attached.
(example plot: acc.png) |
st82045 | If you have saved the accuracy values for each epoch, this should be possible.
Otherwise, you would need to rerun the training and plot the metrics for each epoch.
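A minimal sketch of such a plot (the per-epoch values here are hypothetical placeholders for whatever you logged during training):
import matplotlib.pyplot as plt

train_acc = [0.61, 0.72, 0.78, 0.81, 0.83]  # one value per epoch
val_acc = [0.58, 0.68, 0.71, 0.72, 0.72]

epochs = range(1, len(train_acc) + 1)
plt.plot(epochs, train_acc, label='train accuracy')
plt.plot(epochs, val_acc, label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show() |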
st82046 | I just wanted to try PyTorch after several years of TensorFlow.
But I wonder: why do I need to define layers in __init__ and use them in forward?
In TensorFlow, for example, I just build the graph in one place and then use it all over the code, like so:
x = Conv2D(filters=16, kernel_size=(7, 7), strides=(1, 1), padding="valid")(x)
x = ReLU()(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(filters=32, kernel_size=(5, 5), strides=(1, 1), padding="valid")(x)
x = ReLU()(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(100)(x)
x = ReLU()(x)
x = Dense(self._num_classes)(x)
return x
Calculating the input and output dims for each layer by hand does not seem attractive to me.
Moreover, if I want to change the input dataset dims, I have to manually recalculate the inputs and outputs of the layers (Linear at least).
So, is it possible to define models like in TensorFlow, with automatic recalculation of the input and output dims?
Thanks for your attention. |
st82047 | Solved by tom in post #2. |
st82048 | For 1. you could use Sequential.
For 2. no. And that is because PyTorch won’t ask you to specify your input size ahead of time. This does come up on the forums every now and then, but for most people it is OK to compute the (usually just one in a typical CNN) size or just put data through the lower layers and then read the shape.
Best regards
Thomas |
st82049 | BTW, I found the following approach very suitable:
import torch
from torch import nn
class AlexNet(nn.Module):
def __init__(self, dims: tuple, num_classes: int):
super().__init__()
batch_size, chanels, w, h = dims
data = torch.ones([batch_size, chanels, w, h])
self.conv_net = nn.Sequential(nn.Conv2d(in_channels=3, out_channels=96, kernel_size=(11, 11), stride=4),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(3, 3), stride=2),
nn.Conv2d(in_channels=96, out_channels=256, kernel_size=(5, 5), stride=1,
padding=2),
nn.ReLU(),
nn.MaxPool2d(kernel_size=(3, 3), stride=2),
nn.Conv2d(in_channels=256, out_channels=384, kernel_size=(3, 3), stride=1,
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=384, out_channels=384, kernel_size=(3, 3), stride=1,
padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=384, out_channels=256, kernel_size=(3, 3), stride=1),
nn.MaxPool2d(kernel_size=(3, 3), stride=2),
nn.Dropout())
batch_size = data.shape[0]
out_shape = self.conv_net(data).view(batch_size, -1).shape[-1]
fc_neurons = 4096
self.fc = nn.Sequential(nn.Linear(in_features=out_shape, out_features=fc_neurons),
nn.ReLU(),
nn.Dropout(),
nn.Linear(in_features=fc_neurons, out_features=fc_neurons),
nn.ReLU(),
nn.Linear(in_features=fc_neurons, out_features=num_classes),
nn.Dropout()
)
del data
def forward(self, input):
x = self.conv_net(input)
x = x.view((x.shape[0], -1))
x = self.fc(x)
return x
if __name__ == '__main__':
batch_size = 32
chanels = 3
w = 1024
h = 1024
num_classes = 10
data = torch.ones([batch_size, chanels, w, h])
model = AlexNet((batch_size, chanels, w, h), num_classes)
output = model(data)  # call the module itself rather than .forward() directly |
st82050 | Hi everyone, I have a problem with my loss function. I have 2 tensors, x=[23177,1,3] and y=[23177,1]. They are my own datasets. So I turn them into a dataloader:
train_dataset = TensorDataset(train_x, train_y)
train_generator = DataLoader(train_dataset, batch_size=100,shuffle=False)
and my criterion and optimizer:
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
and my nn class is:
class Modele(nn.Module):
def __init__(self, D_in, H1, H2, D_out):
super().__init__()
self.linear1 = nn.Linear(D_in, H1)
self.linear2 = nn.Linear(H1, H2)
self.linear3 = nn.Linear(H2, D_out)
def forward(self, x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = self.linear3(x)
return x
and my computation loop:
epochs=15
for e in range(epochs):
running_loss = 0.0
running_corrects = 0.0
val_running_loss = 0.0
val_running_corrects = 0.0
for inputs,out in train_generator:
loss = criterion(model(inputs), out)
#_,preds=torch.max(outputs,1)
#outputss.append(preds.max().detach().numpy())
losses.append(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
#outputss.append(outputs.detach().numpy())
#print(loss.item())
My loss function graph is attached as an image, and I got this error:
C:\Users\gun\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\loss.py:431: UserWarning: Using a target size (torch.Size([100, 1])) that is different to the input size (torch.Size([100, 1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
C:\Users\gun\AppData\Local\Continuum\anaconda3\lib\site-packages\torch\nn\modules\loss.py:431: UserWarning: Using a target size (torch.Size([12, 1])) that is different to the input size (torch.Size([12, 1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have
the same size.
return F.mse_loss(input, target, reduction=self.reduction)
How can I solve this problem? |
st82051 | Unsqueeze the target into 3d [x, y, z]; z can be 1 in this case (or squeeze the model output instead). The loss can only be computed when both have the same size. But check whether the loss behaves according to your expectations.
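A minimal sketch of the fix in this thread's terms (names taken from the question above):
# model(inputs) has shape [100, 1, 1]; the target `out` has shape [100, 1]
pred = model(inputs)
loss = criterion(pred.squeeze(-1), out)  # both sides now [100, 1] |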
st82052 | Hi everyone, I want to build my own dataset. I searched this topic on the forum, but the examples are not the same as my datasets. I have three different .txt files which contain float numbers. These 3 .txt files will be my inputs, but I do not have any labels. How can I create my own dataset?
I am going to put one of my dataset files here:
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-0.40139,-0.3999,-0.39836,-0.39678,-0.39647,-0.39083,-0.39642,-0.398,-0.39678,-0.39874, ... (one long line of comma-separated float values; the remainder of the paste is omitted here, and the original dump is itself truncated)
6846,-4.421,-4.3815,-4.3801,-4.4183,-5.138,-7.2436,-10.324,-14.45,-17.791,-18.042,-17.634,-17.432,-17.165,-17.235,-18.357,-19.799,-20.778,-22.113,-23,-23.168,-23.458,-24.093,-23.966,-23.436,-23.278,-23.659,-24.06,-23.891,-24.709,-24.21,-25.082,-26.148,-25.4,-26.182,-22.112,-13.264,-6.7191,-4.8813,-3.1999,-2.2883,-1.0959,-0.2149,0.94924,0.97147,-0.024704,-1.5024,-5.5891,-8.8162,-12.174,-17.462,-25.697,-27.72,-28,-28.017,-28.554,-31.982,-37.546,-41.022,-41.978,-43.939,-45.053,-44.977,-48.181,-56.179,-59.762,-67.681,-74.292,-78.9,-82.773,-85.599,-84.254,-77.382,-77.851,-77.446,-78.745,-81.39,-82.245,-70.32,-65.277,-53.726,-32.695,-13.675,-6.4351,-3.5398,-1.5787,0.26799,3.2124,5.4019,7.1116,7.0738,7.0756,6.8627,6.6953,6.9402,7.1263,6.7486,6.6113,6.5871,6.6175,6.6985,6.6547,6.6385,6.6385,6.1615,6.1273,6.199,6.1421,6.1606,6.1659,6.1882,6.1807,6.1729,5.6772,5.5993,5.6315,5.6395,3.2293,-0.093806,-6.8736,-12.879,-18.197,-27.874,-38.4,-37.16,-35.757,-35.773,-35.11,-29.219,-26.737,-25.509,-22.567,-14.85,-7.3908,-4.9209,-2.7231,-1.2671,0.77822,3.3489,5.0471,5.5686,5.6324,5.6669,5.6055,5.6414,5.6375,5.6292,5.607,5.6082,5.577,5.5044,5.5171,5.0165,5.0389,4.9814,4.9887,4.9172,4.9106,4.9036,4.911,4.3401,4.3457,4.3199,4.2646,4.2663,4.2713,4.2424,4.1206,3.7071,3.6542,3.662,3.6379,3.6087,3.5956,3.4944,2.9997,6.8647,8.8544,12.971,14.446,16.062,15.431,14.531,13.899,13.028,10.864,7.7346,4.2684,1.2247,-0.071107,0.10682,-0.27036,-0.33522,-0.5322,-0.91043,-1.5003,-3.0523,-3.6757,-4.4154,-4.6322,-4.8188,-4.9931,-5.5201,-5.3716,-5.2443,-5.4123,-5.8867,-5.8267,-5.4112,-5.4618,-5.5486,-5.5907,-5.6182,-5.7376,-5.3995,-5.3177,-5.3811,-5.4504,-5.3416,-4.8237,-4.7752,-4.8049,-4.758,-4.7572,-4.8056,-4.8352,-4.844,-4.782,-4.7144,-4.6231,-4.1618,-3.3903,-2.4931,-3.7369,-2.9525,-2.8398,-2.9212,-3.6763,-4.3853,-5.0397,-6.2716,-6.7874,-7.4917,-8.4262,-9.353,-8.3536,-9.126,-9.5281,-9.6213,-8.5501,-8.7615,-8.2589,-7.7806,-7.8767,-8.1849,-7.9654,-7.9537,-7.9439,-7.8659,-8.1094,-8.2607,-7.4265,-7.4213,-7.5507,-7.4251,-7.2164,-7.2879,-7.3281,-7.272,-7.2827,-7.3634,-7.3989,-7.4829,-7.5106,-7.5666,-7.2316,-7.1333,-7.1731,-7.1828,-7.1942,-7.1718,-7.2059,-7.2724,-6.5189,-5.1399,-4.0112,-3.3118,-2.897,-2.5531,-2.5603,-2.5778,-2.585,-2.6474,-2.9007,-2.8684,-2.8235,-3.0687,-2.9843,-2.698,-2.4831,-2.2663,-2.1033,-2.0355,-1.9502,-1.8312,-1.8675,-1.704,-1.7605,-1.7102,-1.0747,-1.1052,-1.7029,-1.7418,-2.8375,-3.2607,-3.2959,-3.0909,-2.5226,-2.213,-2.7227,-3.2132,-3.2079,-3.1247,-3.2833,-3.505,-3.488,-2.7273,-3.1097,-3.6495,-3.503,-3.6604,-3.8665,-3.0862,-3.1656,-3.3158,-3.3943,-3.2091,-4.2733,-5.0246,-4.405,-4.9993,-5.4502,-5.0753,-4.9693,-5.367,-5.3713,-5.3438,-5.5848,-5.5144,-5.639,-5.857,-5.7791,-5.9804,-5.6356,-5.8239,-6.0915,-6.0534,-6.0876,-6.4824,-6.4853,-6.2395,-6.3196,-6.6196,-6.4625,-6.352,-6.5909,-6.6257,-6.4981,-6.2889,-6.0999,-5.476,-5.1195,-4.6986,-4.4794,-4.2143,-4.1772,-4.1258,-4.4798,-5.0331,-5.1245,-5.3307,-5.6103,-5.4841,-5.6623,-5.6031,-5.1247,-5.0459,-4.8208,-4.4818,-4.3611,-4.1989,-4.4659,-4.9763,-5.0092,-5.4609,-5.5793,-5.4738, |
st82053 | You can build your own dataset by subclassing the Dataset class:
class myDataset(Dataset):
You have to override two functions: __len__() and __getitem__().
__len__() returns the length of the dataset, which you need so that you can iterate over it, and __getitem__() returns the item at a given index.
import pandas as pd
import torch
from torch.utils.data import Dataset

class myDataset(Dataset):
    def __init__(self, filenames):
        # load each CSV file once and keep the DataFrames in memory
        self.load_files = []
        for filename in filenames:
            load_file = pd.read_csv(filename)
            self.load_files.append(load_file)

    def __len__(self):
        return len(self.load_files)

    def __getitem__(self, idx):
        sample = self.load_files[idx]
        # sample is a DataFrame; convert it to a tensor before returning
        return torch.tensor(sample.values, dtype=torch.float32)
It is not necessary to have labels. The only requirement is that __getitem__() returns a tensor, and that you extract the data carefully when you iterate over the dataset. Just override those two functions.
Check this: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
It explains this beautifully.
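For completeness, a minimal sketch of wrapping such a dataset in a DataLoader; the file names and batch size here are made up, and the default collation assumes all samples share the same shape:

from torch.utils.data import DataLoader

dataset = myDataset(["data_0.csv", "data_1.csv"])  # hypothetical file names
loader = DataLoader(dataset, batch_size=2, shuffle=True)

for batch in loader:
    # the default collate function stacks the per-sample tensors
    print(batch.shape)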
st82054 | I checked other suggestions, but the other people used RNN networks and labeled data.
In my case, everything was working fine yesterday, but suddenly my code is not working anymore.
I am trying to run this code on my local machine: https://colab.research.google.com/github/Curt-Park/rainbow-is-all-you-need/blob/master/08.rainbow.ipynb
C:/w/1/s/tmp_conda_3.6_045031/conda/conda-bld/pytorch_1565412750030/work/aten/src/THC/THCTensorIndex.cu:189: block: [25,0,0], thread: [63,0,0] Assertion dstIndex < dstAddDimSize failed.
THCudaCheck FAIL file=C:/w/1/s/tmp_conda_3.6_045031/conda/conda-bld/pytorch_1565412750030/work/aten/src\THC/THCTensorMathCompareT.cuh line=69 error=59 : device-side assert triggered
Traceback (most recent call last):
File "rainbow.py", line 763, in <module>
agent.train(epochs, horizon)
File "rainbow.py", line 629, in train
loss = self.update_model()
File "rainbow.py", line 578, in update_model
elementwise_loss_n_loss = self._compute_dqn_loss(samples, gamma)
File "rainbow.py", line 710, in _compute_dqn_loss
dist = self.dqn.dist(state)
File "rainbow.py", line 386, in dist
print(x)
File "C:\Users\un_po\Anaconda3\envs\rainbowPy\lib\site-packages\torch\tensor.py", line 82, in __repr__
return torch._tensor_str._str(self)
File "C:\Users\un_po\Anaconda3\envs\rainbowPy\lib\site-packages\torch\_tensor_str.py", line 300, in _str
tensor_str = _tensor_str(self, indent)
File "C:\Users\un_po\Anaconda3\envs\rainbowPy\lib\site-packages\torch\_tensor_str.py", line 201, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "C:\Users\un_po\Anaconda3\envs\rainbowPy\lib\site-packages\torch\_tensor_str.py", line 87, in __init__
nonzero_finite_vals = torch.masked_select(tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0))
File "C:\Users\un_po\Anaconda3\envs\rainbowPy\lib\site-packages\torch\functional.py", line 228, in isfinite
return (tensor == tensor) & (tensor.abs() != inf)
RuntimeError: cuda runtime error (59) : device-side assert triggered at C:/w/1/s/tmp_conda_3.6_045031/conda/conda-bld/pytorch_1565412750030/work/aten/src\THC/THCTensorMathCompareT.cuh:69
Yesterday the same code just worked fine.
And the code still works on CPU. |
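As a general note on debugging such failures: device-side asserts are raised asynchronously, so the Python traceback often points at an unrelated line (here, a print call). A common way to localize the failing op is to make kernel launches synchronous, or to run on the CPU, where out-of-range indices raise a readable IndexError:

# script name taken from the traceback above
CUDA_LAUNCH_BLOCKING=1 python rainbow.py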
st82055 | Did you change something else besides the PyTorch version?
What CUDA, cudnn, and PyTorch versions are you using now and which one raised this error?
Do you have a reproducible code snippet so that we could have a look? |
st82056 | Suppose I have three Linear layers A, B, C, and I set B's requires_grad to False. Can the loss backpropagate through C to A?
st82057 | Yes. Intermediate parameters that don't require gradients do not stop the backpropagation if some earlier parameters need gradients.
st82058 | But if the intermediate parameters don't require gradients, how do the earlier parameters get the loss backpropagated from the later parameters? I don't understand the process.
st82059 | The parameter gradients that are not needed won't be computed (their .grad attribute won't be updated), but the gradient calculation through those layers will continue if it's needed for earlier layers.
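A minimal sketch demonstrating this (the layer sizes are arbitrary):

import torch
import torch.nn as nn

A, B, C = nn.Linear(4, 4), nn.Linear(4, 4), nn.Linear(4, 1)
for p in B.parameters():
    p.requires_grad = False  # freeze the middle layer

out = C(B(A(torch.randn(2, 4))))
out.sum().backward()

print(A.weight.grad is not None)  # True: gradients flow through the frozen B
print(B.weight.grad)              # None: no gradient accumulated for B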
st82060 | Hello!
Is there a way to optimize only a certain part of a tensor?
I am trying to use PyTorch as a backend for a ptychographical engine, which requires optimizing a large [2^12, 2^12, 2] tensor O. I can formulate the loss function only for a set of overlapping slices of this big tensor, O[up:down, left:right, :], so I need to calculate updates for each of these slices in turn. Currently, I just iterate through the list of slices and use an optimizer with respect to the whole big tensor, which requires calculating the gradient for the whole O at each step. This takes most of the time, even though I know that only a small part of it, O[up:down, left:right, :], influences the loss function and should be updated at this step.
So is there any way to turn on gradient calculation only for some part of O at each step, or is there another way to solve this problem?
st82061 | How are you calculating the gradients at the moment?
Could you post some dummy code? I'm not sure how the loss is currently being calculated.
st82062 | Currently I use the following strategy:
Initially I have the a priori known Propagation_model, Measured_th, and Slice_coordinates, where:
Propagation_model - a model object whose forward method transforms my guessed input data into a guessed approximation of the measured data. This method may include several FFTs and multiplications with different kernels, but it is known precisely in advance and does not require any optimization, so Propagation_model does not require gradients.
Measured_th - an [n, 1024, 1024, 2] complex tensor which represents the n measurements obtained during the experiment; it also requires no optimization.
Slice_coordinates - a list of coordinate tuples [(up, down, left, right)] which allows getting the part of Sample_th (the tensor representing the reconstructed object) corresponding to a given measurement from Measured_th.
During the optimization procedure I am trying to obtain two complex tensors, Sample_th [4096, 4096, 2] and Probe_th [1024, 1024, 2]; both of them need to be optimized and require gradients.
My loss is defined as the summed squared difference between a given slice of Measured_th and the result of propagating the corresponding slice of Sample_th together with Probe_th.
Before the beginning of the optimization loop I create a set of slices, Slices_th, from Sample_th, since only a [1024, 1024, 2] part of Sample_th with (up, down, left, right) coordinates takes part in producing a given Measured_th slice:
Slices_th = []
for num in range(len(Measured_th)):
    Slices_th.append(Sample_th[Borders.up[num]:Borders.down[num],
                               Borders.left[num]:Borders.right[num], :])
All members of Slices_th represent slices taken from Sample_th and partially overlap each other (a given part of Sample_th can belong to multiple members of Slices_th simultaneously).
I am trying to optimize Sample_th and Probe_th with the corresponding optimizer:
optimizer = torch.optim.Adam([
    {'params': Probe_th, 'lr': 0.5e-1, 'weight_decay': 0.004},
    {'params': Sample_th, 'lr': 0.5e-1, 'weight_decay': 0.04},
])
In the following loop:
for i in range(epoch_num):
    nums = list(range(len(measured_abs)))
    np.random.shuffle(nums)
    for num in nums:
        optimizer.zero_grad()
        Obj_th = Slices_th[num]
        Meas = Measured_th[num]
        loss = Sq_err(Propagation_model.forward(probe=Probe_th, obj=Obj_th), Meas)
        err.append(float(loss.cpu()))
        loss.backward()
        optimizer.step()
    print(np.mean(err), '---', i)
    err_long.append(np.mean(err))
Currently, the main problem is that during each optimizer.step() the gradient is calculated for the whole Sample_th, which takes most of the time, even though I know that only the small part of it corresponding to the current slice participated in the loss calculation and should be optimized at this step. On the other hand, I can't separate Slices_th into independent arrays, since I have to take their mutual overlap into account, so a change to one of them during optimizer.step() should be spread among several of them correspondingly.
Sorry for such a long explanation, I just don't know how to explain it more briefly)
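As a side note on the underlying behavior, here is a tiny sketch (shapes made up) showing that a loss built from a slice of a leaf tensor produces gradients that are zero outside the slice, while autograd still allocates and fills the full-size .grad tensor:

import torch

O = torch.zeros(8, 8, requires_grad=True)
loss = (O[2:4, 2:4] ** 2).sum() + O[2:4, 2:4].sum()
loss.backward()

print(O.grad[2:4, 2:4])  # ones: only the slice receives a nonzero gradient
print(O.grad.abs().sum() == O.grad[2:4, 2:4].abs().sum())  # tensor(True)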
st82063 | Does PyTorch work with CUDA 10.1? When I go to the site, it says to get the version of PyTorch built for CUDA 10.0, not 10.1.
st82064 | I use it with CUDA 10.1 with no problems. In the worst case scenario, you can always build PyTorch from source. |
st82065 | I used the CLR scheduler and it is behaving weirdly: it is not starting from the base_lr.
My base_lr = 1e-06 and max_lr = 1, but it starts from 0.01. If I increase my step size, it starts from 0.001 instead; I think the step size should not affect the starting point of the lr scheduler.
I found a workaround: to cover the range from 10^-6 to 1, I ran two cycles with different base_lr and max_lr values.
1st cycle: base_lr = 1e-06; max_lr = 1e-03
2nd cycle: base_lr = 1e-03; max_lr = 1
It seems to work when the range is smaller.
Did anyone else find issues using the CLR scheduler? Is it a bug, or am I doing it incorrectly?
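For reference, a minimal sketch to inspect the learning rates that CyclicLR actually produces; the model, optimizer, and step counts here are placeholders:

import torch
from torch.optim.lr_scheduler import CyclicLR

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
scheduler = CyclicLR(optimizer, base_lr=1e-6, max_lr=1.0,
                     step_size_up=100, cycle_momentum=False)

lrs = []
for _ in range(300):
    optimizer.step()
    scheduler.step()
    lrs.append(optimizer.param_groups[0]['lr'])

print(lrs[0], max(lrs))  # should ramp linearly from base_lr up to max_lr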
st82066 | Hi,
I want to prototype a function in PyTorch, and to do that I need a function that behaves like scatter_add but keeps the maximum instead of adding.
So I have a list of indices and corresponding values - both large, multidimensional, and on the GPU.
Some indices will collide (this is the crucial part!), and in such a case I want the maximum to be kept.
(Of course the minimum would also work just fine.)
It would also be great to have this for argmax and argmin as well.
Here is a minimal example in numpy:
import numpy as np

ids = np.array([1, 2, 3, 4, 1, 2, 3, 1])   # there are multiple 1s here
vals = np.array([0, 0, 0, 1, 2, 3, 1, 1])  # 0, 2, and 1 will all be written at index 1
maxs = np.zeros_like(vals)
avgs = np.zeros_like(vals)
np.maximum.at(maxs, ids, vals)
np.add.at(avgs, ids, vals)
print(maxs)
print(avgs)
# ids: 0 1 2 3 4 5 6 7
>>> [0 2 3 1 1 0 0 0]
>>> [0 3 3 1 1 0 0 0]
st82067 | Solved by hofingermarkus in post #2
Found a solution myself:
There is a library called pytorch_scatter that provides many different scatter operations (add, div, max, mean, min, mul, std). scatter_max returns both values and indices, which allows it to be used directly for argmax operations as well. The operations also come with the accordin…
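A minimal sketch of how scatter_max could be applied to the numpy example above; note that the fill value for output positions that receive no input may depend on the library version, so check its docs:

import torch
from torch_scatter import scatter_max  # pip install torch-scatter

ids = torch.tensor([1, 2, 3, 4, 1, 2, 3, 1])
vals = torch.tensor([0, 0, 0, 1, 2, 3, 1, 1])

# out[i] holds the maximum of vals[j] over all j with ids[j] == i;
# argmax holds the position in vals that produced each maximum
out, argmax = scatter_max(vals, ids, dim=0, dim_size=8)
print(out)
print(argmax)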