st46268
|
Actually there is a special kind of neural network algorithm for time series data: Recurrent Neural Networks, specifically the LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit).
The reason these are preferable for time series data is that RNNs are known for recognizing sequential patterns and predicting the next element in a sequence, because they store information about previous instances in the sequence that serves as the dependency for the next prediction.
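For illustration, here is a minimal sketch of what such a model could look like in PyTorch (sizes and names are arbitrary, not tied to any specific dataset):
```python
import torch
import torch.nn as nn

class SequencePredictor(nn.Module):
    def __init__(self, n_features=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # predict the next value

    def forward(self, x):                        # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])             # use the last hidden state

model = SequencePredictor()
print(model(torch.randn(4, 50, 1)).shape)        # torch.Size([4, 1])
```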
|
st46269
|
I’m not by any means trying to say that neural networks can’t or shouldn’t be used for regression analysis.
What I’m saying is that gradient-boosted ensembles work better on structured data, and I’ve actually experienced this myself a lot of times.
|
st46270
|
I absolutely agree with you. The problem is, in this case they are not able to identify the pattern (perhaps I should have mentioned this), hence the overkill dense layers.
As for RNNs, I know about them; I am using them with CTC loss and a decoder. I too dislike using overkill solutions and tend to flag such questions on StackOverflow, but in this case I think they might be the solution.
|
st46271
|
I see.
So what’s the context of your data? Let me see if I can come up with any suggestions.
|
st46272
|
Hi, I am writing a simple MLP model, but the output of the model is always the same no matter what the input is, and also each element of the output vector approaches zero.
Here is my model:
class MLP(torch.nn.Module):
    def __init__(self, D_in, D_out):
        super(MLP, self).__init__()
        self.linear_1 = torch.nn.Linear(D_in, 1000)
        self.linear_2 = torch.nn.Linear(1000, 1500)
        self.linear_3 = torch.nn.Linear(1500, 1000)
        self.linear_4 = torch.nn.Linear(1000, 750)
        self.linear_5 = torch.nn.Linear(750, 500)
        self.linear_6 = torch.nn.Linear(500, 250)
        self.linear_7 = torch.nn.Linear(250, D_out)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.sigmoid(self.linear_1(x))
        x = self.sigmoid(self.linear_2(x))
        x = self.sigmoid(self.linear_3(x))
        x = self.sigmoid(self.linear_4(x))
        x = self.sigmoid(self.linear_5(x))
        x = self.sigmoid(self.linear_6(x))
        y_pred = self.linear_7(x)
        return y_pred
I tried normalizing my data before feeding it into the model, and I also tried making the model simpler, but it is still not working… My input dimension is 450 and my output dimension is 120.
Can anyone give any suggestions?
|
st46273
|
I have tested my data with the MLP model included in the scikit-learn package and it worked fine, so the problem is not with my data but with the model I built… does anyone have suggestions?
|
st46274
|
Add a non-linearity (like ReLU) between your linear layers. Linear layers simply stacked together are equivalent to a single linear layer.
|
st46275
|
Hi, here is an example of the data:
Input (450 dimensions):
-0.735818388214886 0.149285303851154 -0.578067021181189 1.13708538016547 0.767335229025866 -1.00972711021682 1.44431640273288 -0.303122528928313 -1.72468227791404 0.793556029576177 0.784100630718448 0.586480416291804 -0.915689110003437 0.869155049096896 -0.327297531852296 -1.52501241901532 -1.03276643390661 -0.473968816820132 -0.295609810072135 -0.515859456805497 -1.13147975550565 1.02107095888883 -1.59113754353026 -0.00599659576347787 0.113807615216029 1.77224151085492 -1.18130844786379 -0.546862215529291 1.33021374434445 0.544488914484188 0.663008310935703 -0.858810844840316 -0.687197743326812 -0.866979526762939 -1.06245366415692 -0.469133495436108 0.315262120088441 1.51910537341530 0.535432344697486 -0.802471347416508 0.743077742553866 0.227906602985508 0.865005599397846 -0.208548061188331 0.0610479314247161 -0.296093226018993 0.836297939209928 -0.480363187213705 0.209402204752636 -0.144933987024996 -0.206533549880378 0.946925786032395 -0.537818444704563 -0.950879565089832 -0.558620082910992 -1.76818792251994 -0.325854266846343 0.596675452791685 -1.53441716013984 -0.0939427711905128 0.240581130309243 1.37753811785750 -0.448188963430959 1.01632917448316 -1.98344040443072 1.43189848114174 0.00380709676953635 0.731040941080028 -0.418491596787389 -1.45116140048975 -0.0757198878866544 0.906440506235143 -0.195489176181480 0.474885044433439 -1.75296645099251 0.0537898740590009 0.166840316185414 -0.505101220206424 -1.10305805337000 2.19016304827521 -1.64267059816572 -0.846608492414494 -0.358042930737871 0.927555101990779 -0.756696277713888 0.552364304352916 -0.377732413206058 1.46947147354448 -0.772770419104887 -0.741050050782310 1.31761816206993 1.51466053722297 0.939025985378438 -0.245934698986654 1.55397168655323 -1.31062933778198 -0.218447634575306 2.11682833593488 -0.690610835809335 1.34565879808533 0.499611055253914 1.08932242212239 -0.165717938893029 0.590346641605538 1.37617634914702 0.329118285442481 -2.33539214142729 0.848313901922739 -0.0194600743277051 0.645632341231397 1.77066412774636 -0.547402479954773 1.24247230872903 1.13078942868089 2.36441975954544 0.343426659187261 -1.35464768225976 0.486124789352450 0.338884799359261 1.01189291221377 -0.158799837350974 1.12447248143180 -0.918438770221236 0.0872829492529977 -0.452961972932257 2.06814277340707 0.410710144712870 0.0107664210118090 0.483238106571439 -0.861855465344864 -0.429852428427828 0.0846469901005720 0.912335340365856 1.32693787964457 -0.0425986801530204 -0.425727244814729 0.0881495167935712 -1.18746456014654 0.510306793511959 0.422295192312096 -0.418692851559872 0.130512265453763 0.288796882468738 1.07626440985192 -1.67555519143935 -1.51063819598542 -1.12023297875842 -0.578940433520568 0.416479009939568 1.41211309181935 0.272852582995746 0.787582140256920 -0.164531828814356 -0.892619457089434 2.35387313335990 2.15444265679402 0.853564388998705 0.773964231932885 0.392984982417099 0.413719284040007 0.408039592214066 0.179285634255306 1.18993136068326 0.346673640039687 0.849820555281563 0.663293846728990 2.37788541588976 -0.410419910437476 1.85072878124200 0.0945196445828797 0.190227044494937 -0.00862260862497045 -2.33048007706741 0.350072318185885 1.77538976014696 0.0105123358423785 -0.797439870313351 0.789658533392219 -0.0598661168735266 -0.512509292829113 0.803342888759277 0.515296074529683 -0.0199060957481239 -0.203846444532534 -0.363403865186734 -0.284998855547540 -0.870345165251578 0.0255259175842855 1.07548032372644 0.0375875009112331 0.0408941606327486 -0.118744108599266 1.87274416782378 -0.475182333652753 
0.421003048629767 1.52685393347522 0.303303793119889 -1.71329299633793 -1.76254559768078 0.760726416738919 -0.0369942976896756 -1.53714846782517 0.532768205421902 0.613476076198292 -1.24587083978263 -0.0908267382857597 -1.04976688505827 -0.544466696790154 0.616182440954475 0.342816759355589 0.144503438142935 -1.48705314240430 -0.447520691642277 -1.53098062179696 -0.00626829017518429 -1.49407926767421 -0.957375697157989 0.755564452147370 -0.745437739000280 0.143221812338554 -0.961105606309910 -1.19637145356054 -0.801846004846102 -1.98341402239006 -0.141406185781072 0.284864088404030 -2.22626832605181 1.52653563599279 2.13960852388333 1.53970999012204 1.15620581147065 -0.595187848735976 0.678270451170598 -1.52005318705716 -0.157528454513269 -1.23280723924239 0.752606156694844 -1.04186084853909 0.608394332176887 0.689406908096181 -0.256367529050619 -0.280540634731785 -0.307800029661493 -0.527898501075078 1.08610152723436 -0.481592519129528 -0.212290828324039 0.126002734363314 -0.747989928955777 0.489271117537880 -0.186739355730084 0.533719515040723 -0.544797495851650 1.50339005591187 0.541926827188491 -0.243197671411433 -1.79788547055804 -0.534596968093643 -0.315525814296939 2.01761951163463 -0.349775082407994 -1.32697498989970 0.999048058049433 0.0692273613508713 -0.00892051731983100 -0.136649250302277 -0.515509695060058 0.159847278104185 1.31922530844349 0.679973155251442 0.297837499659531 -0.863219090612890 -0.902593836350390 0.00547590038320574 -0.0239774096454648 -0.771092586061540 0.0347153321517812 -1.46902769034476 -0.989314484344415 -1.38564595107017 -0.168557615557302 -2.14207620366500 1.51409818597145 0.589519340162154 0.489306004608671 -0.0990063450521083 0.569405655417508 0.348377463731985 0.351944229498122 -0.929034725173021 1.01489841483581 -1.20219322184725 0.774219873135912 1.57064016547379 1.29844172635315 0.0123181847036591 1.60857482218866 0.306305598706328 -0.163473773068933 -0.863119136540574 0.577781460723887 -0.583891061960679 0.579100742187932 -0.261218616340131 1.35914586993769 0.651286722049995 -0.445858544568678 1.98711707884207 0.855070779457029 0.324888254251314 -2.18994697806142 -1.46217532162863 0.692398681057095 0.890738573207726 0.953608201959669 -2.17322386899052 2.09195117427766 -1.02683680479155 -1.59749397470924 -1.61398938735634 1.14708631109452 -1.48190439716087 -0.958084110040263 1.73699980384352 0.171497129806649 -0.375824884166825 -0.208896664000074 -0.232916096907624 -0.999989068952020 1.43196331885695 1.05333800160815 -1.87769216621010 -0.373463909673348 -0.536355318780433 -0.0528307745492846 -0.140475950318341 0.355150500133399 1.72268869924568 0.916933214616019 0.663657095685507 0.997434965650828 0.808879108223415 -0.101854813383192 1.06833962008493 -1.30872729736308 -0.382603177176991 -2.50598994166440 0.127192043657772 -0.317857101989867 0.0109071459772834 0.472353545947209 -2.90412235034735 0.293469599086924 0.949976471229498 1.82486915988484 -0.724280810080997 1.68405009714927 0.639574714575609 -0.867532742695344 -1.80164016593070 -0.725316475286614 -0.802243410765602 -1.58894967338598 -0.593816526590895 1.54429631044769 0.0759264897006957 -0.848576163882861 1.46855445829179 -1.12023631478264 -0.584155321314308 1.73053238157822 -0.321197545709661 0.0786746576675956 -1.43887198310287 0.102445769475108 1.14323107544532 0.305349972268648 -0.808809830969838 0.784810297125262 -0.815665596211662 -0.872831400760270 1.37296766288476 0.418719711616480 -0.948158182619194 0.878295772904781 1.31898034581479 -0.817559745492998 0.831082951440480 
0.949303010793698 2.01020656366090 -0.432122743923586 0.925987043019444 0.546283688870168 -0.383790650143911 3.06659138404548 0.0999160740625315 -1.10834298087316 0.368918512186226 0.508676688719668 0.353856030151058 -1.08901594591339 0.482262488823247 -1.37520937859426 -0.627315838862955 0.781117715520164 -0.0684018950749005 -0.424245568746243 0.688157912579803 0.0794974830211469 -0.593836626710330 -0.138218083932841 -0.625889046108883 -0.464703705404994 -0.617654342027112 0.461580452263993 0.0478175605105834 -0.215060595309493 -1.05892181106604 -0.451188490131863 0.0599069991116341 0.455182505311400 -1.53276713244221 0.512244554956416 -1.27103120201129 -0.950906642491205 0.182339506439061 1.08338628771804 -0.536682474414237 -1.06959635212258 0.459856625436814 -0.506621610555142 0.202810486436882 0.505627177830883 -1.35180754744127 1.28325363856324 0.745357456683776 -0.546185670707208 0.597550026378408 1.74778911588788 0.192450400164104 -0.240390964552302 0.112523767557991 -2.81556039054814 -0.218262017995826 -1.03440307996590 -0.350841063479356 -0.520668669845006 0.587456453857042 -0.664428434754110 -0.888170611420252
Output (120 dimension):
0.532300212681870 0.179998690554949 -0.0257466843027034 -0.205882453641676 0.591182642419381 -0.0886731841952033 0.263522930131506 -0.232009699062619 -0.568828657246618 0.0574685230173592 -1.60284642144261e-07 0.0662690547702825 -0.0564374703109907 -0.407917260673937 -0.343127227975678 0.180063677196068 -0.492261357637083 -0.336502362526927 0.0226164378018748 0.391798473050306 -0.511955179087885 -0.400479597369964 0.0815137946320438 -0.429874559130754 -0.165088072840650 -0.482222177400369 0.346141730818296 -0.470105832986738 0.0148054113129904 -0.248413666577270 7.85676120099218e-06 -0.109881131720543 0.487259200095905 -0.228457716311294 0.239037570703603 0.206581423495658 0.0734277499113765 -0.292353134135843 0.236769905240832 0.418475528074645 0.343941269578648 -0.242578842189166 -0.403348923623755 0.0476801596732162 0.514682257205765 0.291071959078389 -0.521966344699764 -0.0305610147893470 -0.0753008830516899 -0.275737840822758 1.73026709799811e-05 0.451837795234245 -0.377204036558573 -0.102295416442940 0.189983419757157 0.173017195991341 0.209241386814994 0.121568362397577 0.482217214315650 -0.489043431180480 0.685569308661741 0.0754196779260324 0.264387863585074 -0.460677488823237 -0.0655079802930742 0.0564285529922251 0.0224475949085858 -0.806614168786861 -0.246844101731706 -0.438966486336961 1.68718295751309e-05 0.262910868990014 -0.00652996986393479 0.320597507699597 0.256793848901400 -0.0761349976903453 -0.0714002589137728 -0.0600250235381107 0.250188193002387 -0.122352971769150 0.268844153278181 -0.194310232938437 -0.525340063393711 -0.271752750190912 0.0797757867755889 -0.222341163148859 0.0545291377135724 0.103619676878438 -0.233122479602490 -0.211476963323040 1.24631248855083e-05 -0.130492588452597 -0.520165791152049 -0.483149516929881 0.112854850291193 0.106650729585523 -0.399795351834588 0.368224983765443 -0.559589475177480 -0.466738283723279 0.684576301742124 0.00388587072033813 -0.532243187752000 0.0542254933565711 -0.186914327254024 -0.295018353999371 0.00468540340194501 0.238713708177180 0.131293241163625 -0.321632809816311 -1.10141122226793e-06 0.278443194578431 0.312817519725799 0.0718265446824297 0.172149761992720 0.406982900273825 0.259607818592667 0.539530859757522 0.454254882552196 -0.0692824383104469
And the output I got by testing the model is like (This output remains same no matter what my input is):
0.0032217987 -0.0032565966 -0.0021627229 -0.22697754 -0.0024142414 0.0026991144 -0.0015234398 -0.0028859563 0.0021582916 -0.21696138 -6.2100589e-06 0.0091556832 0.0012200102 -0.0035693087 0.0046808752 0.0068705790 -0.0010050870 0.0015123896 0.0037639998 0.00036380813 0.00043297932 -0.00035620201 -0.0032979734 -0.23171432 0.0045496225 -0.0012066588 -0.0021106657 -0.0041744113 -0.012195184 -0.21638247 2.4028122e-06 0.0059865061 -0.0042346753 -0.00071774051 -0.0026111864 -0.0018474497 -0.00078333169 -0.0054079369 0.0077128373 0.0016663412 0.0032119900 -0.0062416419 -0.0021390468 -0.22433044 -0.0027336776 0.0028707050 0.0022469088 -0.0034709517 -0.0028906241 -0.21881813 5.7823956e-05 0.0032278486 -0.0022484893 -0.010667327 -0.00026572868 -0.0018052831 0.0012878384 0.0035010017 -0.0034696758 0.0034123622 -0.0081416154 0.00037829392 0.0034989491 -0.22207867 0.010135166 0.0018704161 0.0050993972 0.0022409894 -0.0079786349 -0.22068146 -1.6111881e-06 -0.0033761226 0.00018132944 0.0041641518 0.0018924810 -0.0017091706 -0.0061378721 0.00092485966 -0.0015861169 0.00036317296 0.0098521076 0.0076599456 -0.0084095690 -0.22120586 -0.010473695 -0.0077724494 -0.0068498887 -0.0096608894 -0.0033896575 -0.22154866 0.00010513514 0.0015144497 0.0026684590 0.0033703633 -3.2403506e-05 0.0055518188 0.0079380125 -0.0090062730 0.0057077296 -0.0058418475 0.0070619956 0.0019653030 0.0018671704 -0.23226789 0.0086756833 -0.00078263949 0.0070048124 -0.0047388561 -0.0073986426 -0.21967286 -7.0333481e-05 0.0058467574 -0.0092798918 -0.0033651199 0.0044662654 -0.0027292185 -0.0056617074 0.0037126155 0.0060428437 0.0012843013
Thanks !
|
st46276
|
That’s a “brute force” network, so with this depth it will be learning slowly by design, even when everything is correct. But here your problem is likely the use of saturating Sigmoid, try non-saturating functions: [leaky_]relu, selu (aka self-normalizing network).
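For example, the stack above could be rewritten with ReLU activations like this (just a sketch of the suggestion, not a guaranteed fix):
```python
import torch
import torch.nn as nn

# same layer sizes as the model above, but with ReLU instead of Sigmoid between layers
model = nn.Sequential(
    nn.Linear(450, 1000), nn.ReLU(),
    nn.Linear(1000, 1500), nn.ReLU(),
    nn.Linear(1500, 1000), nn.ReLU(),
    nn.Linear(1000, 750), nn.ReLU(),
    nn.Linear(750, 500), nn.ReLU(),
    nn.Linear(500, 250), nn.ReLU(),
    nn.Linear(250, 120),
)
print(model(torch.randn(8, 450)).shape)  # torch.Size([8, 120])
```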
|
st46277
|
Hi, thanks for your reply. I actually tried ReLU as the activation, as well as reducing the depth of the network, but this still happened. I am wondering whether I am getting this weird output due to the high dimensionality of my output. Any comments?
|
st46278
|
What do your loss and accuracy curves look like?
Some potential issues:
optimizer
- make sure to pass all of your model's parameters to the optimizer, e.g. optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9).
- make sure to clear the gradients at the start of each batch: optimizer.zero_grad()
hyperparameters
- make sure your hyperparameters have appropriate values: learning rate, batch size, weight decay, etc., and when in doubt use the defaults. If you're using L1/L2 regularization, make sure your weight decay value is not set too high; if it is, your model might be optimizing for shrinking its parameters towards 0 instead of achieving good predictions.
loss function
- make sure you're using the correct loss function for your use case (MSELoss I imagine).
- monitor the gradients and check that they are not vanishing/exploding (see the sketch below).
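A minimal, self-contained sketch for that last check (the model here is a stand-in, not the MLP above):
```python
import torch
import torch.nn as nn

model = nn.Linear(450, 120)                      # stand-in for the actual model
x, y = torch.randn(8, 450), torch.randn(8, 120)
loss = nn.MSELoss()(model(x), y)
loss.backward()

# inspect per-parameter gradient norms after the backward pass
for name, p in model.named_parameters():
    if p.grad is not None:
        print(f"{name}: grad norm = {p.grad.norm().item():.6f}")
```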
|
st46279
|
Hi stroncea, thanks for the detailed explanation! Yes, I am using MSE as my loss function, and I checked my code many times to make sure the optimizer is set correctly and the gradients are cleared at the beginning of each batch.
I have 3500 training points in total and I set my batch size to 100. I also have a validation set which contains 750 data points.
Below is the training history:
You can see that the loss quickly dropped to around 0.1, and it seems that after this the model's weights did not change anymore (so the output is always the same for all inputs).
Another thing I noticed is that if I create a matrix A and calculate the MSE between A and y_test_true, the result is about 0.1, which explains why all my outputs are approaching zero (the model is being lazy and just sets its weights to make all the outputs zero). But I have no idea how to improve this.
Thanks
|
st46280
|
Hi, it is like this:
loss_function = torch.nn.MSELoss(reduction='mean')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(total_epoch_number):
    loss_array_for_current_epoch = []
    overall_training_loss_for_current_epoch = 0
    for batch_num in range(number_of_training_data // N):
        current_batch_training_input = training_set_input[batch_num*100:batch_num*100+100, :].to(device)
        y_pred = model(current_batch_training_input)
        # Compute loss for current batch
        current_batch_training_output = training_set_output[batch_num*100:batch_num*100+100, :].to(device)
        loss = loss_function(y_pred, current_batch_training_output)
        # Zero gradients, perform a backward pass, and update the weights.
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        loss_array_for_current_epoch.append(loss.item())

    # Use the average loss of all batches as the overall training loss for this epoch.
    overall_training_loss_for_current_epoch = np.average(loss_array_for_current_epoch)

    # Validation process at current epoch
    with torch.no_grad():
        y_pred_val = model(validation_set_input)
        val_loss = loss_function(y_pred_val, validation_set_output)
|
st46281
|
What kind of results did scikit-learn give you for MSE compared to this?
Also, have you tried adding some regularization, either Dropout or weight decay?
|
st46282
|
I did try this with MLPRegressor in the scikit-learn package, and the result is similar… (the testing outputs remain the same no matter what the input is).
Yes, I tried adding a dropout layer after the activation layer and the results improve a little, but not much; also, when applying the dropout layer, the MSE loss is, as expected, a little higher compared with no dropout.
|
st46283
|
Have you tried using a single hidden layer (perhaps wider)? If that works better, the problem is with excessive depth (so gradients are either tiny or move in all directions across mini-batches). If not, either the optimizer needs tweaking or there is some error (the loop looks OK though).
|
st46284
|
Hi, I did try using only one hidden layer; the result improved a little but not by much, and also in this case the model is obviously underfitted.
|
st46285
|
Yuchen_Mu:
also in this case the model is obviously underfitted.
You should be able to overfit a single layer MLP, by increasing width and/or learning rate. Then revert towards your deep model and you’ll see how it stops working.
|
st46286
|
Hi, do you mean increasing the number of neurons in the hidden layer? In the model I originally used, there were 1000 neurons in the first hidden layer; if I build an MLP which contains only one hidden layer, should I increase this number even further?
Thanks.
|
st46287
|
Yea, use a lot of neurons (10-100k maybe) to verify that your training loss can approach zero (overfit), which would indicate no coding errors and adequate optimizer params. Regarding optimizers, try Adadelta or Rprop to avoid lr tuning.
|
st46288
|
I got train images: 0 train labels: 0
I set my folders like this:
––DUTS-TR
——im_aug
———0.jpg
———1.png
——gt_aug
———0.png
———1.png
Is there something wrong?
ValueError: num_samples should be a positive integer value, but got num_samples=0
What should I edit? Any guidance would be helpful.
data_dir = os.path.join(os.getcwd(), 'train_data' + os.sep)
tra_image_dir = os.path.join('DUTS', 'DUTS-TR', 'DUTS-TR', 'im_aug' + os.sep)
tra_label_dir = os.path.join('DUTS', 'DUTS-TR', 'DUTS-TR', 'gt_aug' + os.sep)
|
st46289
|
I assume you are using ImageFolder, which won’t work in your use case, as it’ll create the targets for a multi-class classification, while you seem to be dealing with a segmentation use case.
If that’s the case, I would recommend writing a custom Dataset as described here.
In the __init__ you could define the image paths to the data and target images and load the corresponding pair in the __getitem__ method.
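A minimal sketch of such a paired Dataset (directory names and the joint transform are placeholders, not your actual setup):
```python
import os
from PIL import Image
from torch.utils.data import Dataset

class SegmentationDataset(Dataset):
    def __init__(self, image_dir, mask_dir, transform=None):
        # assumes matching, sorted filenames in both folders
        self.image_paths = sorted(os.path.join(image_dir, f) for f in os.listdir(image_dir))
        self.mask_paths = sorted(os.path.join(mask_dir, f) for f in os.listdir(mask_dir))
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, index):
        image = Image.open(self.image_paths[index]).convert('RGB')
        mask = Image.open(self.mask_paths[index])
        if self.transform is not None:
            image, mask = self.transform(image, mask)  # apply the same random transform to both
        return image, mask
```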
|
st46290
|
Here is my mapper for augmentation
def mapper2(dataset_dict):
    dataset_dict = copy.deepcopy(dataset_dict)  # it will be modified by code below
    #image = utils.read_image(dataset_dict["file_name"], format="BGR")
    image = utils.read_image(dataset_dict["file_name"], format="RGB")
    transform_list = [
        T.RandomFlip(prob=0.5, horizontal=False, vertical=True),
        T.ResizeShortestEdge(short_edge_length=(640, 672, 704, 736, 768, 800), max_size=1333, sample_style='choice'),
        #T.RandomCrop('relative_range', (0.4, 0.6)),
        #T.GridSampleTransform(),
    ]
    image, transforms = T.apply_transform_gens(transform_list, image)
    dataset_dict["image"] = torch.as_tensor(image.transpose(2, 0, 1).astype("float32"))
    annos = [
        utils.transform_instance_annotations(obj, transforms, image.shape[:2])
        for obj in dataset_dict.pop("annotations")
        if obj.get("iscrowd", 0) == 0
    ]
    instances = utils.annotations_to_instances(annos, image.shape[:2])
    dataset_dict["instances"] = instances
    return dataset_dict
I want to add a normalize option like T.GridSampleTransform(), but when I run the above code I get:
TypeError: __init__() missing 2 required positional arguments: 'grid' and 'interp'
|
st46291
|
GridSampleTransform doesn’t seem to be a built-in transformation and based on the raised error message it requires two input arguments: grid and interp, so you would have to provide them.
Since I cannot find the implementation, I can just guess that the arguments might be similar to the grid_sample method.
|
st46292
|
Hi, good day!
I am training a neural network and my input image is a 4D tensor [batch_size, channel, height, width]. I also had a 4D target tensor, but since I got an error saying my target should only be 3D, I tried mask.squeeze(1) to get rid of the channel dimension. Now after I did that, I got another error; this time it says my input size and target size mismatch.
This is my code
epochs = 1
for epoch in tqdm(range(epochs)):
    for batch_idx, (img, mask) in tqdm(enumerate(train_gen)):
        img = torch.Tensor(img).view(-1, 3, 500, 500)
        mask = torch.Tensor(mask).view(-1, 1, 500, 500)
        mask = mask.squeeze(1)
        model.zero_grad()
        outputs = model(img)
        loss = loss_function(outputs, mask)
        loss.backward()
        optimizer.step()  # Does the update
    print(f"Epoch: {epochs}. Loss: {loss}")
and my error is
0%| | 0/1 [00:00<?, ?it/s]
0it [00:06, ?it/s]
0%| | 0/1 [00:06<?, ?it/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-ad775b024a80> in <module>()
12 outputs = model(img)
13 outputs = outputs.squeeze(1)
---> 14 loss = loss_function(outputs, mask)
15 loss.backward()
16 optimizer.step() # Does the update
3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction)
2264 ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2265 elif dim == 4:
-> 2266 ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
2267 else:
2268 # dim == 3 or dim > 4
RuntimeError: size mismatch (got input: [4, 60, 504, 504] , target: [4, 500, 500]
does anyone have any idea how to solve this? Thank you!
|
st46293
|
Your model seems to increase the spatial size of the input from 500 to 504, so you should take a look at the layers and make sure the output size is as expected.
|
st46294
|
Hi,
I was trying to use the DCNv2 which uses the THFloatBlas_gemm. Now since that has been removed in Pytorch 1.7, I was wondering where I could find the source code of the same function so that I can define the said function and make the model.
TIA
|
st46295
|
import torch
import torch.nn as nn

class LinearModel:
    def __init__(self, train_x, train_y):
        self.train_x = train_x
        self.train_y = train_y
        self.W = torch.tensor([0.5], requires_grad=True)
        self.b = torch.tensor([0.5], requires_grad=True)
        self.params = [self.W, self.b]

    def forward(self, x):
        self.params = [self.W, self.b]
        return self.W * x + self.b

    def train(self):
        criterion = nn.MSELoss()
        optimizer = torch.optim.SGD(self.params, lr=0.01)
        for i in range(10):
            print(self.W, self.b)
            optimizer.zero_grad()
            y_pred = self.forward(self.train_x)
            loss = criterion(y_pred, self.train_y)
            loss.backward()
            optimizer.step()

train_x = torch.Tensor([0, 1, 2, 3])
train_y = torch.Tensor([1, 3, 5, 7])
model = LinearModel(train_x, train_y)
model.train()
tensor([0.5000], requires_grad=True) tensor([0.5000], requires_grad=True)
tensor([0.6200], requires_grad=True) tensor([0.5550], requires_grad=True)
tensor([0.7300], requires_grad=True) tensor([0.6053], requires_grad=True)
tensor([0.8307], requires_grad=True) tensor([0.6513], requires_grad=True)
tensor([0.9230], requires_grad=True) tensor([0.6933], requires_grad=True)
tensor([1.0076], requires_grad=True) tensor([0.7318], requires_grad=True)
tensor([1.0851], requires_grad=True) tensor([0.7669], requires_grad=True)
tensor([1.1561], requires_grad=True) tensor([0.7990], requires_grad=True)
tensor([1.2212], requires_grad=True) tensor([0.8284], requires_grad=True)
tensor([1.2809], requires_grad=True) tensor([0.8552], requires_grad=True)
These are the inputs and outputs from my simple linear model. I want to ask why the values of self.W and self.b get updated.
Only self.W and self.b are passed into self.params, so how can the optimizer locate self.W and self.b to update their values?
|
st46296
|
You are passing self.params to the optimizer in:
optimizer = torch.optim.SGD(self.params, lr=0.01)
which is why they get updated.
As a side note: you could create trainable parameters via nn.Parameter, which would properly register them inside the module and then pass all parameters via model.parameters() to the optimizer, which would avoid creating the list manually.
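A small sketch of that suggestion (registering W and b as nn.Parameter inside an nn.Module):
```python
import torch
import torch.nn as nn

class LinearModule(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.Parameter registers the tensors with the module
        self.W = nn.Parameter(torch.tensor([0.5]))
        self.b = nn.Parameter(torch.tensor([0.5]))

    def forward(self, x):
        return self.W * x + self.b

model = LinearModule()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # no manual parameter list needed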
|
st46297
|
Hi, a question mainly out of curiosity: how does PyTorch detect the CUDA installation on the PC?
For instance, on my PC I have CUDA 10.1 installed in /usr/local/cuda-10.1, but I have not added it to either PATH or LD_LIBRARY_PATH. Nevertheless, PyTorch detects it seamlessly and everything works out fine. How is this done?
Thanks in advance for any hint!
|
st46298
|
The conda binaries and pip wheels ship with their own CUDA runtime and your local CUDA toolkit will not be used.
You would only need to provide a sufficiently new NVIDIA driver and it should work.
Your local CUDA toolkit would be used, if you are building PyTorch from source or are building custom CUDA extensions.
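For example, you can check which CUDA runtime the installed binaries were built with (a small illustrative snippet):
```python
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.version.cuda)         # CUDA runtime version shipped with the binaries
print(torch.cuda.is_available())  # only requires a sufficiently new NVIDIA driver
```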
|
st46299
|
Thank you very much @ptrblck, that makes a lot of sense!
And also, it’s a great deploy strategy, a big thumbs up for PyTorch once again!
|
st46300
|
b = a[:, 0:2], I guess the slice of a will be copied to tensor b?
a[:, 0:2] = a[:, 2:4], but how does this work behind the scenes?
|
st46301
|
pytorch_lzwhard:
b = a[:, 0:2], I guess the slice of a will be copied to tensor b
Nope, b and a share memory, though b is a new Python wrapper object with different strides.
pytorch_lzwhard:
a[:, 0:2] = a[:, 2:4]
Here Python's object.__setitem__ mechanism is used instead, making partial assignment with copying possible.
|
st46302
|
googlebot:
here python’s object.__setitem__ mechanism is used instead,
Thanks, and how does the torch tensor implement this: a[:, 0:2] = a[:, 2:4]?
|
st46303
|
googlebot:
Should be the same as a[:,0:2].copy_(a[:,2:4]), i.e. strided mem. copy
Indeed, in particular, the RHS is created and then copied over to the slice.
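A tiny illustrative check (just a sketch):
```python
import torch

a = torch.arange(12.).reshape(3, 4)
b = a[:, 0:2]          # a view: shares memory with a
b[0, 0] = 100.
print(a[0, 0])         # tensor(100.) -> b and a share storage

a[:, 0:2] = a[:, 2:4]  # __setitem__: equivalent to a[:, 0:2].copy_(a[:, 2:4])
print(a)
```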
|
st46304
|
I'm doing some computations (basically using torch as numpy with a GPU) and I ran out of memory while there is plenty available. How is that possible?
I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 11.00 GiB total capacity; 972.39 MiB already allocated; 8.57 GiB free; 986.00 MiB reserved in total by PyTorch)
So basically I have 8.57 GiB free but run out of memory trying to allocate 144 MiB. Is it that GPU memory can suffer from very severe heap fragmentation or something in that regard?
|
st46305
|
Could you post a code snippet to reproduce this issue?
I doubt it’s memory fragmentation, so would like to debug it.
|
st46306
|
I found the reason for the low memory: I was simply using much more memory than I expected. The problem seems to be that the error message was misleading; why that is, I don't know.
|
st46307
|
Hey there,
Whilst implementing a simple MNIST digit classifier, I’ve got stuck on a bug where grad seems to be set to None after I call loss.backward(). Any ideas how I get this not to be None? What am I missing?
Here’s the error I get:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-15-22a0da261727> in <module>
23
24 with torch.no_grad():
---> 25 weights -= weights.grad * LR
26 bias -= bias * LR
27
TypeError: unsupported operand type(s) for *: 'NoneType' and 'float'
If I run the code again, I get a different error, namely:
RuntimeError Traceback (most recent call last)
<ipython-input-25-455a55143419> in <module>
7 predictions = xb@weights + bias
8 loss = get_loss(predictions, yb)
----> 9 loss.backward()
10
11 with torch.no_grad():
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
116 products. Defaults to ``False``.
117 """
--> 118 torch.autograd.backward(self, gradient, retain_graph, create_graph)
119
120 def register_hook(self, hook):
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
91 Variable._execution_engine.run_backward(
92 tensors, grad_tensors, retain_graph, create_graph,
---> 93 allow_unreachable=True) # allow_unreachable flag
94
95
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
And here’s what I think are the relevant parts of my code:
# Skipped DataLoader setup for brevity
def get_accuracy(predictions, actual):
    return (predictions >= 0.5).float() == actual

def get_loss(predictions, actual):
    normalised = predictions.sigmoid()
    return torch.where(actual == IS_7, 1 - normalised, normalised).mean()

def init_params(size, variance=1.0):
    return torch.randn(size, dtype=torch.float, requires_grad=True) * variance

weights = init_params((IMG_SIZE, 1))
bias = init_params(1)

for epoch in range(1):
    # Iterate over dataset batches
    # xb is a tensor with the independent variables for the batch (tensor of pixel values)
    # yb "" dependent "" (which digit it is)
    for xb, yb in dl:
        print(xb.shape)
        predictions = xb@weights + bias
        loss = get_loss(predictions, yb)
        loss.backward()

        with torch.no_grad():
            weights -= weights.grad * LR  # <-- Error here: unsupported operand type(s) for *: 'NoneType' and 'float'
            bias -= bias * LR

            weights.grad.zero_()
            bias.grad.zero_()
Some useful notes:
I also tried to use .data instead of with torch.no_grad(), but that didn't help. with seems to be the preferred method from PyTorch (https://pytorch.org/tutorials/beginner/pytorch_with_examples.html).
Using @ for matrix multiplication in the predictions vs torch.mm makes no difference.
I previously made a mistake with my tensor setup, but I think that's all fixed now; weights.shape, bias.shape outputs (torch.Size([784, 1]), torch.Size([1])).
|
st46308
|
Solved by mulholio in post #2
Found the fix. The tensor returned by init_params needs to be wrapped in .requires_grad_(), not the individual tensors within it.
def init_params(size, variance=1.0):
    return (torch.randn(size, dtype=torch.float)*variance).requires_grad_()
|
st46309
|
Found the fix. The tensor returned by init_params needs to be wrapped in .requires_grad_(), not the individual tensors within it.
def init_params(size, variance=1.0):
    return (torch.randn(size, dtype=torch.float)*variance).requires_grad_()
|
st46310
|
I am trying to train an autoencoder and I using torch.nn.MSELoss() as the loss function. I keep getting this error below:
RuntimeError Traceback (most recent call last)
<ipython-input-76-67910671b729> in <module>()
20
21 optimizer.zero_grad()
---> 22 loss.backward()
23 optimizer.step()
24
RuntimeError: Function 'MseLossBackward' returned nan values in its 0th output.
I have tried reading many of the other posts on this topic and I have tried various approaches such as adjusting the learning rate and gradient clipping. The weird thing is that the loss doesn't seem to diverge at all. I am not totally sure why this error keeps occurring; any suggestions or help would be appreciated. I am very new to PyTorch.
|
st46311
|
Could you check if you have any NaNs in your input? And if there aren't any, could you check if the outputs contain NaNs?
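For instance, a quick check could look like this (a small sketch; the tensor is a stand-in for one of your batches or outputs):
```python
import torch

def has_bad_values(t: torch.Tensor) -> bool:
    # True if the tensor contains NaNs or infs
    return bool(torch.isnan(t).any() or torch.isinf(t).any())

x = torch.randn(4, 10)    # stand-in for a batch of inputs or model outputs
print(has_bad_values(x))
```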
|
st46312
|
I have trained a model for waveform synthesis and would now like to apply global pruning using an iterative pruning schedule. However, the model uses weight normalization in most of its layers, which means the weights are split into two components, weight_g and weight_v, from which the actual weights are computed on each forward pass. It doesn’t make sense to apply pruning on either weight_g or weight_v only, so I need to find some way to temporarily get the un-normalized weights back.
So far I have considered two approaches:
For each pruning step, disable weight normalization, conduct the pruning, and re-enable weight normalization. This doesn’t work because pruning will create a parameter weight_orig while after re-enabling weight normalization there will be no weight anymore. Also re-enabling weight normalization seems to create a new set of parameters which are not tracked by the optimizer.
For each pruning step, compute the actual weights by calling the _weight_norm function, replace weight_v with the result, conduct the pruning and undo the swapping. The problem here is that the pruning function will write into weight_v the values of weight with the pruning mask applied, which means you can’t just swap the former values back in.
How could I go about this? Could I make one of the described approaches work in an elegant way or is there another possibility that I didn’t think of?
|
st46313
|
I am currently working on a semantic segmentation model and I am trying out a different loss function in this case. The loss function I am using is the Focal Tversky loss.
class FocalTverskyLoss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, preds, target, alpha=0.7, beta=0.3, epsilon=1e-6, gamma=3):
        preds = torch.sigmoid(preds)
        # flatten label and preds tensors
        preds = preds.reshape(-1)
        target = target.reshape(-1)
        # True Positives, False Positives & False Negatives
        TP = (preds * target).sum()
        FP = ((1-target) * preds).sum()
        FN = (target * (1-preds)).sum()
        Tversky = (TP + epsilon)/(TP + alpha*FP + beta*FN + epsilon)
        FocalTversky = (1 - Tversky)**gamma
        return FocalTversky
However when I run it in my code like this
outputs = self.model(images)
loss = self.loss_function(preds=outputs,target=labels).to(device)
train_loss += loss.item()
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
it gives this error,
RuntimeError Traceback (most recent call last)
in
      1 imgdir = 'Z:\HuaSheng\datasets\sen12msgrss\DFC_Public_Dataset'
----> 2 trainer(imgdir=imgdir, classes=list(range(0,10)), fsave='Rip_chkpt_FTL.pth', reloadmode='same', checkpoint=None, num_epochs=110)

in __init__(self, imgdir, classes, num_epochs, fsave, reloadmode, checkpoint, bs, report)
     59 for self.epoch in range(self.num_epochs):
     60     print('\n'+'*'*6+'TRAIN FOR ONE EPOCH'+'*'*6)
---> 61     train_loss = self.train()
     62
     63     print('\n'+'*'*6+'EVAL FOR ONE EPOCH'+'*'*6)

in train(self)
    163 self.writer.flush()
    164
--> 165 loss = self.loss_function(preds=outputs, target=labels.view(1, -1)).to(device)
    166 # weight=torch.FloatTensor([0.,2.5,0.,1.5,2.1,0.3,4.5,0.,4.5,0.]).to(device)
    167 train_loss += loss.item()

~\anaconda3\envs\pytLocal38\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725     result = self._slow_forward(*input, **kwargs)
    726 else:
--> 727     result = self.forward(*input, **kwargs)
    728 for hook in itertools.chain(
    729     _global_forward_hooks.values(),

in forward(self, preds, target, alpha, beta, epsilon, gamma)
     57
     58 # True Positives, False Positives & False Negatives
---> 59 TP = (preds * target).sum()
     60 FP = ((1-target) * preds).sum()
     61 FN = (target * (1-preds)).sum()

RuntimeError: The size of tensor a (5017600) must match the size of tensor b (501760) at non-singleton dimension 0
My input sizes for preds and target are torch.Size([10, 10, 224, 224]) and torch.Size([1, 501760]).
Is there a way I can make my target's dim 0 into 10? Or is that even the right thing to do when training? Thanks a lot for your help.
|
st46314
|
Could you explain what the shapes of preds and target represent?
Currently it seems preds uses a batch size of 10, while you have a single target, which will then raise this error after flattening these tensors.
PS: you can post code snippets by wrapping them into three backticks ```, which makes debugging easier.
|
st46315
|
Hi @ptrblck, sorry for the poor posting format, haha. The target represents the labels of the image and the prediction is the output after fitting the model. The images I am working with are 13-channel images with 10 classes, the chip size of the image is 224, and every pixel in the image contains a class used for semantic segmentation modelling.
However, I have since realised that this loss function is not suitable, as mine is a multi-class problem, whereas this Focal Tversky loss is based on binary cross-entropy loss. So I think it may not be suitable to use this loss.
|
st46316
|
Thanks for the update. While your use case makes sense, it’s still unclear why the batch size of the output images is 10 while the target has only a single sample.
If your target is supposed to contain 10 values, then I guess a reshaping operation might be wrong on this tensor.
|
st46317
|
My task requires a batch size of 1 so I can’t normalize activations within the batch. Is there a way I could normalize the activations across all the batches in the epoch? Training would improve immensely if the activations were normalized, but I can’t find a way to do that.
Thank you
|
st46318
|
Hello,
When I am trying to upsample my input using the nn.functional.interpolate function (nearest mode), I get the following error
return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
RuntimeError: input tensor has spatial dimension larger than the kernel capacity
I think it is mainly because my input tensor is big, but I don’t have any memory issues. Is there any way to solve this problem without shrinking the input tensor?
|
st46319
|
This is a limitation in the CUDA launch config for the current algorithm. As a workaround you could use the CPU for these large shapes and create a feature request on GitHub.
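A possible workaround sketch (the shape here is only illustrative; the real tensor would be much larger):
```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 2048, 2048)
if torch.cuda.is_available():
    x = x.cuda()

# run the nearest-neighbor upsampling on the CPU, then move the result back
out = F.interpolate(x.cpu(), scale_factor=2, mode='nearest').to(x.device)
print(out.shape)  # torch.Size([1, 3, 4096, 4096])
```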
|
st46320
|
I just ran into a very weird bug. When using multiprocessing to create a network, the program hangs indefinitely. Code is as below:
import torch
from torch import nn
import torch.multiprocessing as mp

def normalized_columns_initializer(weights, std=1.0):
    out = torch.randn(weights.size())
    out *= std / torch.sqrt(out.pow(2).sum(1, keepdim=True))
    return out

class ACNet(nn.Module):
    def __init__(self, max_addrs, num_loc, hidden_dim=64):
        super().__init__()
        self.loc_linear = nn.Linear(hidden_dim, num_loc)
        print('start init')
        self.loc_linear.weight.data = normalized_columns_initializer(
            self.loc_linear.weight.data, 0.01)  # todo: this line causes deadlock somehow
        self.loc_linear.bias.data.fill_(0)
        print("init done")

def create_model(dim):
    mdl = ACNet(100, dim)
    print('model is created')

if __name__ == '__main__':
    shared_model = ACNet(100, 512)
    p = mp.Process(target=create_model, args=(512,))
    p.start()
    p.join()
Interrupting the program shows that it hangs at os.waitpid(). If either model's second parameter is changed to a number smaller than 512, the program passes. The same thing happens if the two models are both created by directly invoking the constructor, or by two child processes. Does anyone have any idea about this? My environment is as below:
PyTorch version: 1.7.0+cpu
Is debug build: True
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Linux Mint 19.1 Tessa (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 7.0.0 (tags/RELEASE_700/final)
CMake version: version 3.12.3
Python version: 3.6 (64-bit runtime)
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] numpy==1.19.4
[pip3] torch==1.7.0+cpu
[pip3] torchaudio==0.7.0
[pip3] torchvision==0.8.1+cpu
[conda] Could not collect
|
st46321
|
I noticed that nn.MultiheadAttention does take a 3D mask of shape (N*num_heads, L, S), but the Transformer module only lists a 2D mask of shape (L, S) as a possible attention mask shape.
However, internally nn.Transformer uses MultiheadAttention (“from .activation import MultiheadAttention”). Could I expect a Transformer given a 3D mask to work properly?
|
st46322
|
I have some libtorch code which runs a forward and gets me the result like so:
torch::Tensor output = module.forward(inputs_f).toTensor();
The problem I am facing is that I am not able to write this to an output file in a clean way. The best I could do is this:
std::fstream res("result.out", std::ios::out | std::ios::binary);
res << output;
res.close();
and while this works. it writes me a bit of nonsense like so:
Columns 1 to 10-0.0474 -0.0582 0.0047 -0.0048 -0.0018 -0.0153 0.0084 -0.0178 -0.0336 -0.0082
Columns 11 to 20 0.0119 -0.0617 -0.0218 0.0003 0.0205 -0.0086 0.0285 -0.0193 0.0052 -0.0009
Columns 21 to 30 0.0014 -0.0140 -0.0333 0.0931 0.0562 0.0246 0.0674 -0.0256 0.0018 0.0125
Columns 31 to 40 0.0015 -0.0564 -0.0016 0.0185 -0.0028 -0.0436 -0.0289 -0.0729 -0.0255 0.0550
Columns 41 to 50 0.1132 0.0346 -0.0030 -0.0842 0.0433 0.1089 0.0023 -0.0148 -0.0411 0.0094
It looks like this is a string representation of some sort. I REALLY don't want these "Columns 11 to 20" headers etc.; I just want the tensor in a file, line after line.
How does one go about doing this?
|
st46323
|
Hi,
I am trying to port PyTorch 0.2 code to PyTorch 1.6.
The 0.2 code uses the Variable wrapper, as below.
while(True):
    values = []
    log_probs = []
    rewards = []
    entropies = []
    for step in range(params.num_steps):
        if(done):
            h_out = (Variable(torch.zeros([1, params.lstm_size])), Variable(torch.zeros([1, params.lstm_size])))
            state = torch.DoubleTensor(env.reset())
        else:
            h_out = (Variable(h_out[0].data), Variable(h_out[1].data))
        h_in = h_out
        state = state
        value, action_values, h_out = model((Variable(state.reshape(1,-1)), h_in))
        action_values = action_values.reshape(-1,)
        prob = F.softmax(action_values - max(action_values), dim = 0)
        log_prob = F.log_softmax(action_values - max(action_values), dim = 0)
        entropy = -(log_prob * prob).sum()
        entropies.append(entropy)
        # action = epsilon_greedy(prob, epsilon)
        action = Categorical(prob).sample().reshape(-1,)
        log_prob_a = log_prob.gather(0, Variable(action))
        values.append(value)
        log_probs.append(log_prob_a)
        # print("action_values:",action_values)
        # print("prob:",prob)
        # print("log_prob:",log_prob)
        # print("action:",action, "log_prob_a:",log_prob_a)
        state, reward, done, info, _ = env.step(action)
        # reward = max(min(reward, 1), -1)
        count += 1
        if done:
            state = env.reset()
        rewards.append(reward)
        if done:
            break

    R = torch.zeros(1, 1)
    if not done:
        value, _, _ = model((Variable(state.reshape(1,-1)), h_out))
        R = value.data
    values.append(Variable(R))
    policy_loss = 0
    value_loss = 0
    R = Variable(R)
    gae = torch.zeros(1, 1)
    for i in reversed(range(len(rewards))):
        R = params.gamma * R + rewards[i]
        advantage = R - values[i]
        value_loss = value_loss + 0.5 * advantage.pow(2)
        TD = rewards[i] + params.gamma * values[i+1].data - values[i].data
        gae = gae * params.gamma * params.tau + TD
        policy_loss = policy_loss - log_probs[i] * Variable(gae) - 0.01 * entropies[i]

    optimizer.zero_grad()
    (policy_loss + 0.5 * value_loss).backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 40)
    ensure_shared_grads(model, shared_model)
    optimizer.step()
However, Variable is deprecated since PyTorch 0.4. As I understand from the documentation,
Variable and Tensor have been merged, and tensors start to record gradients when the requires_grad attribute is set to True.
So, can I just change the whole code from Variable(x) to torch.DoubleTensor(x, requires_grad=True), or do I need to change anything else as well?
The documentation also says to use detach(). Can I just replace Variable(x) with x.detach() when reusing x?
Thanks
|
st46324
|
Yes, you can just remove the Variable wrapper and set requires_grad=True when you are creating your tensors. If you don't need gradients for some part of your calculations, you can just use variable_name.detach() and it will detach the output from the computational graph, which means no gradient will be back-propagated along this variable.
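For example (a tiny illustrative sketch):
```python
import torch

x = torch.zeros(1, 64, requires_grad=True)  # replaces Variable(torch.zeros(1, 64))
y = (x * 2).sum()
y.backward()
print(x.grad.shape)          # torch.Size([1, 64])

frozen = x.detach()          # shares data with x but is cut from the graph
print(frozen.requires_grad)  # False
```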
|
st46325
|
I want to validate my model using only a fixed subset of the whole dataset without affecting the training loop. I want my code to look something like
for epoch in range(num_epochs):
    for batch in train_dataloader:
        train_step()  # loss, optimize, etc ..

    torch.manual_seed(0)  # This fixes the training data too!
    for repetition in range(segments_per_speaker):  # sampling multiple segments per speaker
        for batch in valid_dataloader:
            valid_step()  # choose best model
Placing torch.manual_seed(0) after the training loop somehow fixes the training data in different epochs. What am I missing? Any recommendations to solve my issue?
Thanks in advance!
|
st46326
|
Solved by ptrblck in post #2
torch.manual_seed is used globally, so you would reseed the code each time you are calling this method.
I think the proper way would be to create the fixed subset for the validation once and reuse it in each iteration.
|
st46327
|
torch.manual_seed is used globally, so you would reseed the code each time you are calling this method.
I think the proper way would be to create the fixed subset for the validation once and reuse it in each iteration.
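For example, one way to build such a fixed subset once (a sketch; the dataset and sizes are placeholders):
```python
import torch
from torch.utils.data import TensorDataset, Subset, DataLoader

valid_dataset = TensorDataset(torch.randn(5000, 10))       # stand-in for the real validation set

g = torch.Generator().manual_seed(0)                       # seed only this sampling, not the global RNG
indices = torch.randperm(len(valid_dataset), generator=g)[:1000]
fixed_valid = Subset(valid_dataset, indices.tolist())
valid_dataloader = DataLoader(fixed_valid, batch_size=64)  # reuse this loader every epoch
```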
|
st46328
|
Thanks a lot for your answer @ptrblck.
I was hoping to find a workaround that avoids fixing a subset of the validation set, but it seems there isn't one. At least now I can proceed with it in peace.
|
st46329
|
I have a tabular dataset where I have to perform multi-label classification. For that, I am unable to figure out how to write its custom dataset class for 100 target columns.
E.g., if it's like 5-10 target classes, I can write it like this:
from torch.utils.data import Dataset
import torch
class for_5_target_columns(Dataset):
    def __init__(self, tabular_data, is_valid):
        self.tabular_data = tabular_data
        # attribute names cannot start with a digit, so the columns are read by name
        self.target_1 = tabular_data['1st_target_value'].values
        self.target_2 = tabular_data['2nd_target_value'].values
        self.target_3 = tabular_data['3rd_target_value'].values
        self.target_4 = tabular_data['4th_target_value'].values
        self.target_5 = tabular_data['5th_target_value'].values

    def __len__(self):
        return len(self.tabular_data)

    def __getitem__(self, index):
        tabular_data = self.tabular_data.iloc[:, :]
        X = tabular_data[training_input.columns()]  # training_input.columns() represents column names of X_train
        X = X.values[index]
        return {
            'tabular_data': torch.tensor(X, dtype=torch.float),
            '1st_target_value': torch.tensor(self.target_1[index], dtype=torch.float),
            '2nd_target_value': torch.tensor(self.target_2[index], dtype=torch.float),
            '3rd_target_value': torch.tensor(self.target_3[index], dtype=torch.float),
            '4th_target_value': torch.tensor(self.target_4[index], dtype=torch.float),
            '5th_target_value': torch.tensor(self.target_5[index], dtype=torch.float),
        }
How do I achieve the same for 100 target columns having different column names?
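One possible pattern (a sketch; the class, column lists, and dtypes are illustrative, assuming the 100 target column names can be collected into a list):
```python
import torch
import pandas as pd
from torch.utils.data import Dataset

class TabularMultiLabelDataset(Dataset):
    def __init__(self, frame: pd.DataFrame, feature_columns, target_columns):
        # feature_columns / target_columns are lists of column names
        self.features = frame[feature_columns].values.astype('float32')
        self.targets = frame[target_columns].values.astype('float32')  # shape: [rows, num_targets]

    def __len__(self):
        return len(self.features)

    def __getitem__(self, index):
        return {
            'tabular_data': torch.tensor(self.features[index]),
            'targets': torch.tensor(self.targets[index]),  # all target columns at once
        }
```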
|
st46330
|
This thread has not been categorized yet. @admins, please help me categorize it so that it can reach the right people, which will eventually help me get a solution ASAP.
|
st46331
|
For the last few days I was googling about this issue which I created here; finally, this post helped me resolve my problem. Thanks.
|
st46332
|
I am using Transfer Learning for Classification of my Dataset.
How to calculate Classification accuracy of each class?
|
st46333
|
I would just have an array correct filled with zeros, with size equal to the total number of classes. Then I would classify data point j; if it matches the target label[j], I would just increment the array at that class index, correct[label[j]] += 1. If num_class is an array that contains the number of points for each class c, then the accuracy is correct[c] / num_class[c].
|
st46334
|
Answer given by @ptrblck Thanks a lot!
nb_classes = 9
confusion_matrix = torch.zeros(nb_classes, nb_classes)
with torch.no_grad():
    for i, (inputs, classes) in enumerate(dataloaders['val']):
        inputs = inputs.to(device)
        classes = classes.to(device)
        outputs = model_ft(inputs)
        _, preds = torch.max(outputs, 1)
        for t, p in zip(classes.view(-1), preds.view(-1)):
            confusion_matrix[t.long(), p.long()] += 1
print(confusion_matrix)
To get the per-class accuracy:
print(confusion_matrix.diag()/confusion_matrix.sum(1))
|
st46335
|
Yes, you should calculate the accuracy on your test images. I also suggest you create a stratified 5- or 10-fold experiment.
|
st46336
|
This doesn’t calculate accuracy. The true negatives are missing from the numerator of your fraction:
confusion_matrix.diag()/confusion_matrix.sum(1)
You are either calculating precision or recall. I don't know which of the two, because I can't tell which of the axes is the prediction and which is the ground truth.
|
st46337
|
I’m looking to mask the input to a sequence model with independently randomly placed blocks of random length. Here is a function prototype with pseudo-code for what I want:
def mask(x, prob, max_length, batch_dim=0, seq_dim=1, mask_value=0):
    """Returns a new tensor like x but possibly with certain elements masked to mask_value.
    More precisely,
      ret = x.clone()
      for each i < x.size(batch_dim)
        for each j < x.size(seq_dim)
          with probability prob
            l = randint(1, max_length)
            for each index tuple idx of ret
              if idx[batch_dim] == i and j <= idx[seq_dim] < j+l
                ret[idx] = mask_value
      return ret
    """
    pass
I am looking for a way to do this without actually looping through the tensor, which is at least inelegant and possibly would cause a speed issue for the model (didn’t profile it though).
I’m also open to suggestions of variant masking strategies that might work equally well but be easier to implement if this one is hard.
|
st46338
|
Your use case sounds similar to torchvision.transforms.RandomErasing, which randomly selects rectangular regions in an image and erases their pixels, so maybe you could reuse that approach for your temporal data.
|
st46339
|
Thanks. It looks like this code masks a single rectangle in a single image, which is easy to do.
More specifically:
Looping through the whole sequence to choose points to start blocks at is probably fine if it is done in C++, but it is probably slow in Python, so I thought there might be some vectorized alternative. Another approach would be to draw the number of blocks from the binomial distribution and then choose that number of indices (not sure what function does this) and then loop through these indices, which is probably faster but still not nice to have to do in Python. In either case, the actual masking can be done using array index notation as long as we arrange for the batch and time dimensions to come first (which can be done by transposing, which is pretty fast I guess).
The torch.Transforms code only works on one image at a time; it is meant for data loaders. That might be OK for my case, but ideally masking would work as a layer.
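For reference, one vectorized sketch of this idea (it loops only over max_length, not over the sequence, and assumes batch and time are the first two dimensions; the function name is made up):
```python
import torch

def mask_blocks(x, prob, max_length, mask_value=0.0):
    # assumes batch is dim 0 and time is dim 1; any feature dims follow
    B, T = x.size(0), x.size(1)
    starts = torch.rand(B, T) < prob                          # independent block starts
    lengths = torch.randint(1, max_length + 1, (B, T)) * starts
    masked = torch.zeros(B, T, dtype=torch.bool)
    for offset in range(max_length):                          # position j+offset is masked if a block
        masked[:, offset:] |= lengths[:, :T - offset] > offset  # started at j with length > offset
    out = x.clone()
    out[masked] = mask_value                                  # broadcasts over trailing dims
    return out

x = torch.randn(4, 100, 8)
print(mask_blocks(x, prob=0.05, max_length=10).shape)         # torch.Size([4, 100, 8])
```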
|
st46340
|
Hi,
I’m trying to train an NLP model that has both a character-level LSTM and a token-level LSTM. My embeddings that result from these two LSTMs are concatenated together and then passed to another LSTM.
Would anyone know if there’s an efficient way to batch the training examples so that each batch iterator has the same sentences and words in the same places?
Thanks in advance
|
st46341
|
Hi,
I have many time series of varying crest and trough amplitudes, and I want to extract features from them using a window of size 50.
I am sampling a continuous batch of 500 timesteps from the time series and training on them.
Since all the series have different amplitudes, to avoid scaling I am using percentage change to scale the data, and then taking a window of size 50.
Now, is it OK to apply nn.Conv1d on this percentage-change window to get features from the window?
Is there a better way to get features out of the window without using scaling?
Below is my current implementation.
data is a dataframe
data = data[['Value']]
data['ret'] = data['Value'].pct_change()
data = data.dropna().reset_index()[['Value','ret']]
data = torch.FloatTensor(data.values)
and now I take a window of size 50 and calculate the reward based on 'Value', but pass 'ret' into the neural network to get features.
|
st46342
|
I’m having trouble installing PyTorch on my IDE (PyCharm). I’m not sure if I’m doing something wrong, or maybe the PyTorch package is not compiled normally through Rosetta 2, anyways I’ve been stuck on this issue for hours now!
Please refer to this Stack Overflow post for more details:
stackoverflow.com: Unable to install PyTorch in PyCharm (Python 3.9 / macOS), asked by Omar AlSuwaidi on 22 Nov 20 UTC
Thanks for the help!
|
st46343
|
Hi.
I have a model and an optimizer, to which I apply amp from apex package
from apex import amp
model= ...
optimizer= ...
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
After the model has been wrapped in amp, I would like to access one of its weights and change it. For example:
model.conv1.weight.data = new_tensor
The point is, when I do this, it has no effect. It looks like amp keeps a different copy of the weights, so updating the weight on the fly has no effect.
Is there any way to update the weights on the fly after my model has been wrapped by amp?
Thanks
|
st46344
|
Anyone? I have tried to reinitialize the amp wrapper, but that is not advised by the amp documentation.
|
st46345
|
I would not recommend using apex/amp anymore, but rather switching to the native implementation as described here.
One reason is the added flexibility for such use cases.
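A minimal sketch of the native AMP pattern (the model, data, and the final weight update are stand-ins for illustration):
```python
import torch
import torch.nn as nn

device = 'cuda'
model = nn.Linear(10, 2).to(device)                      # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

data = torch.randn(8, 10, device=device)
target = torch.randint(0, 2, (8,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():
    output = model(data)
    loss = criterion(output, target)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

# with native AMP the parameters stay ordinary FP32 tensors, so in-place updates still work:
with torch.no_grad():
    model.weight.copy_(torch.randn_like(model.weight))
```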
|
st46346
|
I have two 3-dimensional PyTorch tensors, one of shape (8, 1, 1024) and the other of shape (8, 59, 77). I wish to multiply these two tensors.
I know they cannot be multiplied in their current state, so I want to multiply them iteratively and append the results into a single tensor. The second tensor can be represented as (8, 59, 1) when we iterate over its last dimension. In this state, multiplying it with the first tensor of shape (8, 1, 1024) results in a tensor of shape (8, 59, 1024), and finally appending all 77 of these outputs gives a final shape of (8, 59, 1024, 77).
However, I am having issues with its implementation. Can someone help me here?
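For what it's worth, one way to get that result directly with broadcasting instead of a loop (a small sketch):
```python
import torch

a = torch.randn(8, 1, 1024)
b = torch.randn(8, 59, 77)

# (8, 1, 1024, 1) * (8, 59, 1, 77) broadcasts to (8, 59, 1024, 77)
out = a.unsqueeze(-1) * b.unsqueeze(2)
print(out.shape)  # torch.Size([8, 59, 1024, 77])
```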
|
st46347
|
Hello.
I train my model on "cuda:0", but I can see that on others GPUs my model also allocated few MBs. Do you know what could happened? Why when I use just one GPU a little bit of memory of others is also used?
|
st46348
|
I’m not sure if your code tries to initialize a CUDA context on all devices, but you could avoid it by masking all other GPUs via:
CUDA_VISIBLE_DEVICES=0 python script.py args
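Alternatively (a sketch), the mask can be set from inside the script, but it has to happen before the first CUDA call, otherwise it has no effect:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # must be set before CUDA is initialized

import torch
print(torch.cuda.device_count())  # should now report 1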
|
st46349
|
I am training a conditional variational autoencoder with an ELBO loss function, and I am seeing a sudden drop in the loss when the lr scheduler kicks in at some epoch.
Does anyone have any idea why the learning rate decay makes my loss drop suddenly, instead of continuing smoothly from the loss I was getting with the learning rate before the decay?
Here is the code snippet
def loss_fn(mu_z, std_z, z_sample, mu_x, std_x, x):
    S = x.shape[0]
    # log posterior q(z|x)
    q_z_dist = torch.distributions.Normal(mu_z, torch.exp(std_z))
    log_q_z = q_z_dist.log_prob(z_sample)
    # log likelihood p(x|z)
    p_x_dist = torch.distributions.Normal(mu_x, torch.exp(std_x))
    log_p_x = p_x_dist.log_prob(x)
    # log prior
    p_z_dist = torch.distributions.Normal(0, 1)
    log_p_z = p_z_dist.log_prob(z_sample)
    loss = (1 / S) * (
        torch.sum(log_q_z) - torch.sum(log_p_x) - torch.sum(log_p_z)
    )
    return torch.sum(log_q_z), torch.sum(log_p_x), torch.sum(log_p_z), loss
optimizer = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=500, gamma=0.1)

train_dataset = TensorDataset(X_train, C_train, K_train)
test_dataset = TensorDataset(X_test, C_test, K_test)
train_iter = DataLoader(train_dataset, batch_size=BATCH_SIZE)
test_iter = DataLoader(test_dataset, batch_size=BATCH_SIZE)

train_loss_avg = []
test_loss_avg = []
for i in range(N_EPOCHS):
    train_loss_avg.append(0)
    num_batches = 0
    for x, c, k in train_iter:
        # zero grad
        optimizer.zero_grad()
        # forward pass
        mu_z, std_z = enc(x.to(device), torch.cat([c, k], axis=1).to(device))
        eps = torch.randn_like(std_z)
        z_samples = mu_z + eps * torch.exp(std_z)
        mu_x, std_x = dec(z_samples.to(device), torch.cat([c, k], axis=1).to(device))
        # loss
        _, _, _, loss = loss_fn(mu_z, std_z, z_samples, mu_x, std_x, x)
        # backward pass
        loss.backward()
        # update
        optimizer.step()
        train_loss_avg[-1] += loss.item()
        num_batches += 1
    if i < 501:
        scheduler.step()
    train_loss_avg[-1] /= num_batches
    with torch.no_grad():
        test_loss_avg.append(0)
        num_batches = 0
        for x_test, c_test, k_test in test_iter:
            # forward
            mu_z_test, std_z_test = enc(x_test.to(device), torch.cat([c_test, k_test], axis=1).to(device))
            eps_test = torch.randn_like(std_z_test)
            z_samples_test = mu_z_test + eps_test * torch.exp(std_z_test)
            mu_x_test, std_x_test = dec(z_samples_test.to(device), torch.cat([c_test, k_test], axis=1).to(device))
            # loss
            _, _, _, test_loss = loss_fn(mu_z_test, std_z_test, z_samples_test, mu_x_test, std_x_test, x_test)
            test_loss_avg[-1] += test_loss.item()
            num_batches += 1
        test_loss_avg[-1] /= num_batches
    print("Epoch [%d / %d] train loss: %f, test loss: %f" % (i+1, N_EPOCHS, train_loss_avg[-1], test_loss_avg[-1]))
|
st46350
|
This effect is often observed when decreasing the learning rate, as your loss might be “stuck” due to too-large gradient steps. E.g. the ResNet paper shows the same behavior in its loss curves (the paper of course doesn’t discuss this effect).
|
st46351
|
The loss values listed below were written to the file ‘log’ after training the model (the actual number of iterations is larger than what I list here). A screenshot of the contents of the log file is attached for reference. How can I plot Iteration (x-axis) vs. Loss (y-axis) from the contents of the ‘log’ file?
0: combined_hm_loss: 0.17613089
1: combined_hm_loss: 0.20243575
2: combined_hm_loss: 0.07203530
3: combined_hm_loss: 0.03444689
4: combined_hm_loss: 0.02623464
5: combined_hm_loss: 0.02061908
6: combined_hm_loss: 0.01562270
7: combined_hm_loss: 0.01253260
8: combined_hm_loss: 0.01102418
9: combined_hm_loss: 0.00958306
10: combined_hm_loss: 0.00824807
11: combined_hm_loss: 0.00694697
12: combined_hm_loss: 0.00640630
13: combined_hm_loss: 0.00593691
14: combined_hm_loss: 0.00521284
15: combined_hm_loss: 0.00445185
16: combined_hm_loss: 0.00408901
17: combined_hm_loss: 0.00377806
18: combined_hm_loss: 0.00314004
19: combined_hm_loss: 0.00287649
|
st46352
|
You could use e.g. regex to grab the iteration values (x-axis) and the losses (y-axis).
Alternatively, e.g. pandas might also be a good way to load the data and get the corresponding values.
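For example, a minimal sketch (assuming every relevant line has the form "<iteration>: combined_hm_loss: <value>" and the file is called "log"):
import re
import matplotlib.pyplot as plt

iterations, losses = [], []
with open("log") as f:
    for line in f:
        m = re.match(r"(\d+):\s*combined_hm_loss:\s*([\d.]+)", line)
        if m:
            iterations.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(iterations, losses)
plt.xlabel("Iteration")
plt.ylabel("combined_hm_loss")
plt.show()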
|
st46353
|
Hi friends,
I have a question that maybe sounds kind of stupid. I have read about a newly designed block called ACNet, which replaces the square kernel with horizontal, vertical, and square kernels. Here is what the block looks like:
class ACBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', deploy=False,
                 use_affine=True, reduce_gamma=False, use_last_bn=False, gamma_init=None):
        super(ACBlock, self).__init__()
        self.deploy = deploy
        if deploy:
            self.fused_conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=(kernel_size, kernel_size), stride=stride,
                                        padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode)
        else:
            self.square_conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=(kernel_size, kernel_size), stride=stride,
                                         padding=padding, dilation=dilation, groups=groups, bias=False, padding_mode=padding_mode)
            self.square_bn = nn.BatchNorm2d(num_features=out_channels, affine=use_affine)

            center_offset_from_origin_border = padding - kernel_size // 2
            ver_pad_or_crop = (padding, center_offset_from_origin_border)
            hor_pad_or_crop = (center_offset_from_origin_border, padding)
            if center_offset_from_origin_border >= 0:
                self.ver_conv_crop_layer = nn.Identity()
                ver_conv_padding = ver_pad_or_crop
                self.hor_conv_crop_layer = nn.Identity()
                hor_conv_padding = hor_pad_or_crop
            else:
                self.ver_conv_crop_layer = CropLayer(crop_set=ver_pad_or_crop)
                ver_conv_padding = (0, 0)
                self.hor_conv_crop_layer = CropLayer(crop_set=hor_pad_or_crop)
                hor_conv_padding = (0, 0)
            self.ver_conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=(kernel_size, 1), stride=stride,
                                      padding=ver_conv_padding, dilation=dilation, groups=groups, bias=False, padding_mode=padding_mode)
            self.hor_conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=(1, kernel_size), stride=stride,
                                      padding=hor_conv_padding, dilation=dilation, groups=groups, bias=False, padding_mode=padding_mode)
            self.ver_bn = nn.BatchNorm2d(num_features=out_channels, affine=use_affine)
            self.hor_bn = nn.BatchNorm2d(num_features=out_channels, affine=use_affine)

            if reduce_gamma:
                assert not use_last_bn
                self.init_gamma(1.0 / 3)

            if use_last_bn:
                assert not reduce_gamma
                self.last_bn = nn.BatchNorm2d(num_features=out_channels, affine=True)

            if gamma_init is not None:
                assert not reduce_gamma
                self.init_gamma(gamma_init)

    def init_gamma(self, gamma_value):
        init.constant_(self.square_bn.weight, gamma_value)
        init.constant_(self.ver_bn.weight, gamma_value)
        init.constant_(self.hor_bn.weight, gamma_value)
        print('init gamma of square, ver and hor as ', gamma_value)

    def single_init(self):
        init.constant_(self.square_bn.weight, 1.0)
        init.constant_(self.ver_bn.weight, 0.0)
        init.constant_(self.hor_bn.weight, 0.0)
        print('init gamma of square as 1, ver and hor as 0')

    def forward(self, input):
        if self.deploy:
            return self.fused_conv(input)
        else:
            square_outputs = self.square_conv(input)
            square_outputs = self.square_bn(square_outputs)
            vertical_outputs = self.ver_conv_crop_layer(input)
            vertical_outputs = self.ver_conv(vertical_outputs)
            vertical_outputs = self.ver_bn(vertical_outputs)
            horizontal_outputs = self.hor_conv_crop_layer(input)
            horizontal_outputs = self.hor_conv(horizontal_outputs)
            horizontal_outputs = self.hor_bn(horizontal_outputs)
            result = square_outputs + vertical_outputs + horizontal_outputs
            if hasattr(self, 'last_bn'):
                return self.last_bn(result)
            return result
So my question is: this block already adds a BN layer after each kernel (horizontal, vertical, and square), so if I want to re-implement this work, do I still need to add one more BN layer after this block when I define the ResNet? Like this:
class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_planes, planes, stride=1):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = AC_conv(planes, planes, kernel_size=3, stride=stride, padding=1)
        # Is this line needed?
        self.bn2 = nn.BatchNorm2d(planes)
        #
        self.conv3 = nn.Conv2d(planes, self.expansion*planes, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion*planes)
        self.relu = nn.ReLU(inplace=True)  # added: forward() uses self.relu, but it was not defined in the original snippet
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != self.expansion*planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion*planes, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion*planes)
            )

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        # Is this line needed?
        out = self.bn2(out)
        #
        out = self.relu(out)
        out = self.conv3(out)
        out = self.bn3(out)
        out += identity  # note: self.shortcut is defined above but not applied to the identity here
        out = self.relu(out)
        return out
Any suggestion is much appreciated!
|
st46354
|
Since you are already using batchnorm layers in the custom modules (and also last_bn, if defined), I would assume you don’t need self.bn2 anymore.
|
st46355
|
What is the difference between log_softmax and softmax?
How to explain them in mathematics?
Thank you!
|
st46357
|
log_softmax applies logarithm after softmax.
softmax:
exp(x_i) / exp(x).sum()
log_softmax:
log( exp(x_i) / exp(x).sum() )
log_softmax essentially does log(softmax(x)), but the practical implementation is different and more efficient while performing the same operation. You might want to have a look at http://pytorch.org/docs/master/nn.html?highlight=log_softmax#torch.nn.LogSoftmax and the source code.
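A quick numerical check of the equivalence (for well-behaved inputs):
import torch
import torch.nn.functional as F

x = torch.randn(3, 5)
a = F.log_softmax(x, dim=1)
b = torch.log(F.softmax(x, dim=1))
print(torch.allclose(a, b, atol=1e-6))  # True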
|
st46358
|
Can you please link that implementation?
Is it calculated as x_i - log( exp(x).sum() )?
|
st46359
|
The implementation is done in torch.nn.functional, where the function is called from C code: http://pytorch.org/docs/master/_modules/torch/nn/functional.html#log_softmax.
|
st46360
|
Where does `torch._C` come from?
I am reading the code of batch normalization, and I found this line:
f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled)
But I cannot find any library called _C. I do not know where torch._C._functions.BatchNorm comes from.
|
st46361
|
@KaiyangZhou’s answer may have been correct once, but does not match the current documentation, which reads:
“While mathematically equivalent to log(softmax(x)), doing these two
operations separately is slower, and numerically unstable. This function
uses an alternative formulation to compute the output and gradient correctly.”
And unfortunately the linked-to source for log_softmax merely includes a call to another .log_softmax() method which is defined somewhere else, but I have been unable to find it, even after running grep -r 'def log_softmax' * on the pytorch directory.
EDIT: Regarding the source, a similar post (“Understanding code organization: where is `log_softmax` really implemented?”) was answered by @ptrblck as pointing to the source code here: https://github.com/pytorch/pytorch/blob/420b37f3c67950ed93cd8aa7a12e673fcfc5567b/aten/src/ATen/native/SoftMax.cpp#L146 …And yet all that does is call still other functions, log_softmax_lastdim_kernel() or host_softmax. I am still trying to find where the actual implementation is, not just calls-to-calls-to-calls.
|
st46362
|
You are right. There are two more dispatches involved and eventually _vec_log_softmax_lastdim is called for the log_softmax with a non-scalar input.
|
st46363
|
In theory these methods are equal, in practice F.log_softmax is numerically more stable, as it uses the log-sum-exp trick internally.
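The difference shows up with large logits, where the naive formula overflows while log_softmax stays finite, e.g.:
import torch
import torch.nn.functional as F

x = torch.tensor([[1000., 1001., 1002.]])
print(torch.log(F.softmax(x, dim=1)))  # nan / -inf: exp() overflows before the log
print(F.log_softmax(x, dim=1))         # tensor([[-2.4076, -1.4076, -0.4076]])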
|
st46364
|
I have a 3D tensor of names that comes out of an LSTM that’s (batch size x name length x embedding size)
I’ve been reshaping it to 2D to put it through a linear layer, because I thought the linear layer requires (batch size, linear dimension size), by using the following
y0 = output.contiguous().view(-1, output.size(-1))
this converts the output to (batch size * name length, embedding size)
then, once I put it through the linear layer (let’s call the output y0), I do this
y = y0.contiguous().view(output.size(0), -1, y0.size(-1))
But I’m not really sure whether the cells of y are correlated properly with the cells of output, and I worry this is messing up my learning, because a batch size of 1 actually generates proper names while any larger batch size generates nonsense.
So what I mean exactly is:
output = (batch size, name length, embed size)
y = (batch size, name length, number of possible characters)
I need to make sure y[i,j,:] is the linear-transformed output of output[i,j,:].
The target tensor is (name length x correct character index) because I’m using cross entropy. So I need to ensure that every fiber of y corresponds to the same index of output.
|
st46365
|
nn.Linear accepts a variable number of dimensions as [batch_size, *, in_features], so you might use the temporal dimension as dim1.
The view operations in your code should work, since you are not permuting the dimensions (if I understand your code correctly).
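A quick sanity check of that (the shapes below are made up): applying the linear layer directly to the 3D tensor matches the flatten -> linear -> un-flatten path, so y[i, j, :] does correspond to output[i, j, :]:
import torch
import torch.nn as nn

batch, length, embed, n_chars = 4, 7, 16, 30
output = torch.randn(batch, length, embed)
fc = nn.Linear(embed, n_chars)

y_direct = fc(output)                                      # (4, 7, 30)
y0 = fc(output.contiguous().view(-1, output.size(-1)))     # (28, 30)
y_view = y0.view(output.size(0), -1, y0.size(-1))          # (4, 7, 30)

print(torch.allclose(y_direct, y_view))  # True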
|
st46366
|
Hello. I’m trying to implement a siamese network with a contrastive loss.
It’s trained on raw tabular data. I’ve started with just a few columns, to check whether two rows are equal (1) or not (0).
The model is:
class Model(nn.Module):
    def __init__(self, num_features, num_targets):
        super(Model, self).__init__()
        self.hidden_size = [5, 5, 5]
        self.dropout_value = [0.5, 0.35, 0.25]
        self.head = nn.Sequential(
            nn.BatchNorm1d(num_features),
            nn.Dropout(self.dropout_value[0]),
            nn.Linear(num_features, self.hidden_size[0]),
            nn.LeakyReLU(),
            nn.BatchNorm1d(self.hidden_size[0]),
            nn.Dropout(self.dropout_value[1]),
            nn.Linear(self.hidden_size[0], self.hidden_size[1]),
            nn.LeakyReLU(),
            nn.BatchNorm1d(self.hidden_size[1]),
            nn.Dropout(self.dropout_value[2]),
            nn.utils.weight_norm(nn.Linear(self.hidden_size[1], self.hidden_size[2]))
        )

    def forward(self, x1, x2):
        x1 = self.head(x1)
        x2 = self.head(x2)
        return x1, x2
And the loss is:
class ContrastiveLoss(nn.Module):
    def __init__(self, margin=1.):
        super(ContrastiveLoss, self).__init__()
        self.margin = margin
        self.eps = 1e-9

    def forward(self, output1, output2, target, size_average=True):
        distances = (output2 - output1).pow(2).sum(1)
        losses = 0.5 * (target.float() * distances +
                        (1 + -1 * target).float() * F.relu(self.margin - (distances + self.eps).sqrt()).pow(2))
        return losses.mean() if size_average else losses.sum()
The problem is that it’s stuck at about 0.13 training loss and 0.2x validation loss. Using the “distances”, it gives some weird results.
For the first 5 samples, instead of:
[1. 0. 0. 1. 1.]
It gives:
[0. , 0. , 0.38202894, 0. , 0.]
What could be the problem? Is this architecture appropriate? Or could it be some technical issues?
|
st46367
|
I have been using this method on 24-bit images (3 channels), but now the images are 32-bit with 4 channels and I get the error shown in the attached screenshot. How can I edit the code (e.g. with np.shape or some other method) to handle this?
[screenshot of the error message]
def detect_image(self, image):
    old_img = copy.deepcopy(image)
    orininal_h = np.array(image).shape[0]
    orininal_w = np.array(image).shape[1]
    image, nw, nh = self.letterbox_image(image, (self.model_image_size[1], self.model_image_size[0]))
    images = [np.array(image) / 255]
    images = np.transpose(images, (0, 3, 1, 2))
    with torch.no_grad():
        images = Variable(torch.from_numpy(images).type(torch.FloatTensor))
        if self.cuda:
            images = images.cuda()
        pr = self.net(images)[0]
        pr = F.softmax(pr.permute(1, 2, 0), dim=-1).cpu().numpy().argmax(axis=-1)
        pr = pr[int((self.model_image_size[0]-nh)//2):int((self.model_image_size[0]-nh)//2+nh),
                int((self.model_image_size[1]-nw)//2):int((self.model_image_size[1]-nw)//2+nw)]
    seg_img = np.zeros((np.shape(pr)[0], np.shape(pr)[1], 3))
    for c in range(self.num_classes):
        seg_img[:, :, 0] += ((pr[:, :] == c) * (self.colors[c][0])).astype('uint8')
        seg_img[:, :, 1] += ((pr[:, :] == c) * (self.colors[c][1])).astype('uint8')
        seg_img[:, :, 2] += ((pr[:, :] == c) * (self.colors[c][2])).astype('uint8')
    image = Image.fromarray(np.uint8(seg_img)).resize((orininal_w, orininal_h))
    if self.blend:
        image = Image.blend(old_img, image, 0.7)
    return image
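One possible fix (a sketch, assuming `image` is a PIL image that may carry an alpha channel) is to drop the 4th channel before the array/normalization steps, so the transpose to (N, 3, H, W) still matches the 3-channel model input:
image = image.convert('RGB')            # RGBA (32-bit) -> RGB (24-bit)
# or, if working on a numpy array directly:
# arr = np.array(image)[:, :, :3]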
|