Dataset columns: content (string, 86 – 88.9k chars) | title (string, 0 – 150) | question (string, 1 – 35.8k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 30 – 130)
---|---|---|---|---|---|---|---|---
Q:
How to change matplotlib marker into a football icon?
I have a visualization like this:
I want to change the marker icon into a football icon with the same color as the line.
My code looks like this:
fig, ax = plt.subplots(figsize=(12,6))
ax.step(x = a_df['minute'], y = a_df['a_cum'], where = 'post', label= ateam, linewidth=2)
ax.step(x = h_df['minute'], y = h_df['h_cum'], where = 'post', color ='red', label= hteam,linewidth=2)
plt.scatter(x= a_goal['minute'], y = a_goal['a_cum'] , marker = 'o')
plt.scatter(x= h_goal['minute'], y = h_goal['h_cum'] , marker = 'o',color = 'red')
plt.xticks([0,15,30,45,60,75,90])
plt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])
plt.grid()
ax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')
plt.ylabel("Expected Goals (xG)")
plt.xlabel("Minutes")
ax.legend()
plt.show()
I don't have any clue how to do it.
A:
You can draw your own shapes by creating matplotlib Path objects.
You need 2 lists to create one:
1) the shape's vertices (coordinates)
2) codes: each one describes the path from one vertex to the next (MOVETO, LINETO, CURVE3, CURVE4, CLOSEPOLY, ...)
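As a minimal illustration of how the two lists pair up, here is a hypothetical triangle marker (my own sketch, not from the original answer):
import matplotlib.pyplot as plt
from matplotlib.path import Path

# Three corners, plus a repeat of the first point required by CLOSEPOLY.
triangle = Path(
    vertices=[(0, 1), (-1, -1), (1, -1), (0, 1)],
    codes=[Path.MOVETO, Path.LINETO, Path.LINETO, Path.CLOSEPOLY],
)
plt.plot([1, 2, 3], [1, 2, 3], marker=triangle, markersize=20)
plt.show()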
Here is the full football-marker example:
import matplotlib.pyplot as plt
from matplotlib.path import Path
vertices=[[ 1.86622681e+00, -9.69864442e+01], [-5.36324682e+01, -9.69864442e+01],
[-9.86337733e+01, -5.19851396e+01], [-9.86337733e+01, 3.51356038e+00],
[-9.86337733e+01, 5.90122504e+01], [-5.36324682e+01, 1.04013560e+02],
[ 1.86622681e+00, 1.04013560e+02], [ 5.73649168e+01, 1.04013560e+02],
[ 1.02366227e+02, 5.90122504e+01], [ 1.02366227e+02, 3.51356038e+00],
[ 1.02366227e+02, -5.19851396e+01], [ 5.73649168e+01, -9.69864442e+01],
[ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.69864442e+01],
[ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.59864442e+01],
[ 1.49396568e+01, -9.59864442e+01], [ 2.74005268e+01, -9.34457032e+01],
[ 3.88349768e+01, -8.88614442e+01], [ 3.93477668e+01, -8.39473616e+01],
[ 3.91766768e+01, -7.84211406e+01], [ 3.83349768e+01, -7.24551946e+01],
[ 2.54705168e+01, -7.17582316e+01], [ 1.38598668e+01, -6.91771276e+01],
[ 3.49122681e+00, -6.47364446e+01], [-5.88483119e+00, -7.07454276e+01],
[-1.85084882e+01, -7.43878696e+01], [-3.31337732e+01, -7.44239446e+01],
[-3.31639232e+01, -8.07006846e+01], [-3.34889082e+01, -8.56747886e+01],
[-3.41025232e+01, -8.92676942e+01], [-2.29485092e+01, -9.35925582e+01],
[-1.08166852e+01, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],
[ 1.86622681e+00, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],
[ 3.98974768e+01, -8.84239444e+01], [ 6.30273268e+01, -7.88377716e+01],
[ 8.17782368e+01, -6.07995616e+01], [ 9.22412268e+01, -3.81426946e+01],
[ 8.94287268e+01, -3.42676946e+01], [ 8.27048568e+01, -3.89413496e+01],
[ 7.41977468e+01, -4.19580876e+01], [ 6.55537268e+01, -4.39551946e+01],
[ 6.55507268e+01, -4.39600946e+01], [ 6.55258268e+01, -4.39502946e+01],
[ 6.55225268e+01, -4.39551946e+01], [ 5.64622368e+01, -5.74584576e+01],
[ 4.77347768e+01, -6.68825886e+01], [ 3.93037768e+01, -7.22051946e+01],
[ 4.01409768e+01, -7.80795846e+01], [ 4.03596968e+01, -8.35092576e+01],
[ 3.98975268e+01, -8.84239444e+01], [ 3.98974768e+01, -8.84239444e+01],
[ 3.98974768e+01, -8.84239444e+01], [-3.33525232e+01, -7.34239446e+01],
[-3.33343532e+01, -7.34304446e+01], [-3.33081932e+01, -7.34174446e+01],
[-3.32900232e+01, -7.34239446e+01], [-1.87512102e+01, -7.34136546e+01],
[-6.26111319e+00, -6.98403626e+01], [ 2.95997681e+00, -6.39239446e+01],
[ 4.88356681e+00, -5.29429786e+01], [ 6.50358681e+00, -4.13393356e+01],
[ 7.80372681e+00, -2.91114446e+01], [-8.09469019e+00, -1.58596306e+01],
[-1.93481942e+01, -5.40333762e+00], [-2.47587732e+01, 1.32605538e+00],
[-3.69631432e+01, -2.50275662e+00], [-4.85465082e+01, -5.39578762e+00],
[-5.95087732e+01, -7.36144462e+00], [-6.28171902e+01, -1.66250136e+01],
[-6.52187002e+01, -2.98372096e+01], [-6.58837732e+01, -4.57989446e+01],
[-5.53582062e+01, -6.01863506e+01], [-4.45266302e+01, -6.94131916e+01],
[-3.33525232e+01, -7.34239446e+01], [-3.33525232e+01, -7.34239446e+01],
[-3.33525232e+01, -7.34239446e+01], [-7.57587732e+01, -4.67676946e+01],
[-7.29041812e+01, -4.67440446e+01], [-6.99334012e+01, -4.63526666e+01],
[-6.68837732e+01, -4.56426946e+01], [-6.62087282e+01, -2.96768106e+01],
[-6.37905682e+01, -1.64255576e+01], [-6.04462732e+01, -7.04894462e+00],
[-6.81326882e+01, 3.32535038e+00], [-7.26804032e+01, 1.40097104e+01],
[-7.40712732e+01, 2.50135604e+01], [-7.99916232e+01, 2.63222104e+01],
[-8.66133452e+01, 2.67559804e+01], [-9.31650233e+01, 2.54510604e+01],
[-9.31681733e+01, 2.54460604e+01], [-9.31931223e+01, 2.54560604e+01],
[-9.31962733e+01, 2.54510604e+01], [-9.44043873e+01, 2.37123804e+01],
[-9.54279373e+01, 2.17334704e+01], [-9.63212733e+01, 1.95448104e+01],
[-9.71662733e+01, 1.43262704e+01], [-9.76337733e+01, 8.97093038e+00],
[-9.76337733e+01, 3.51356038e+00], [-9.76337733e+01, -1.43647536e+01],
[-9.29174773e+01, -3.11438126e+01], [-8.46650232e+01, -4.56426946e+01],
[-8.18063532e+01, -4.64180796e+01], [-7.88476312e+01, -4.67932816e+01],
[-7.57587732e+01, -4.67676946e+01], [-7.57587732e+01, -4.67676946e+01],
[-7.57587732e+01, -4.67676946e+01], [ 6.55224768e+01, -4.28926946e+01],
[ 7.40107668e+01, -4.09146326e+01], [ 8.23640768e+01, -3.79999686e+01],
[ 8.88662268e+01, -3.34864446e+01], [ 9.61553068e+01, -1.55950616e+01],
[ 9.94808868e+01, -1.66158462e+00], [ 9.88662268e+01, 8.32606038e+00],
[ 9.42289868e+01, 2.15752904e+01], [ 8.77410868e+01, 3.15965604e+01],
[ 8.11474768e+01, 3.82010604e+01], [ 7.17659368e+01, 3.38334104e+01],
[ 6.38899668e+01, 3.03415204e+01], [ 5.74912268e+01, 2.77635604e+01],
[ 5.68036568e+01, 1.50717604e+01], [ 5.35581368e+01, -9.16606169e-02],
[ 4.82412268e+01, -1.60489446e+01], [ 5.52234668e+01, -2.62259056e+01],
[ 6.09897268e+01, -3.51652306e+01], [ 6.55224768e+01, -4.28926946e+01],
[ 6.55224768e+01, -4.28926946e+01], [ 6.55224768e+01, -4.28926946e+01],
[ 8.42872681e+00, -2.83614446e+01], [ 2.13772368e+01, -2.57261866e+01],
[ 3.43239568e+01, -2.15154036e+01], [ 4.72724768e+01, -1.57364446e+01],
[ 5.25849968e+01, 2.07647383e-01], [ 5.58247068e+01, 1.53619304e+01],
[ 5.64912268e+01, 2.79510604e+01], [ 5.64917568e+01, 2.79612604e+01],
[ 5.64906868e+01, 2.79721604e+01], [ 5.64912268e+01, 2.79822604e+01],
[ 4.74302668e+01, 3.88992704e+01], [ 3.74260968e+01, 4.79380604e+01],
[ 2.64912268e+01, 5.51072604e+01], [ 1.05529568e+01, 5.24508804e+01],
[-4.02431919e+00, 4.78459804e+01], [-1.52900232e+01, 4.18885104e+01],
[-1.91554652e+01, 2.63828404e+01], [-2.20678242e+01, 1.30703504e+01],
[-2.40400232e+01, 1.98226038e+00], [-1.87588732e+01, -4.60782062e+00],
[-7.49875919e+00, -1.50853886e+01], [ 8.42872681e+00, -2.83614946e+01],
[ 8.42872681e+00, -2.83614446e+01], [ 8.42872681e+00, -2.83614446e+01],
[ 9.97724768e+01, 8.82606038e+00], [ 1.01209977e+02, 9.29481038e+00],
[ 9.97891268e+01, 3.41125404e+01], [ 8.92576668e+01, 5.64775904e+01],
[ 7.29287268e+01, 7.31385604e+01], [ 7.01162268e+01, 7.01073104e+01],
[ 7.65398468e+01, 5.90945204e+01], [ 8.04306168e+01, 4.87012104e+01],
[ 8.18037268e+01, 3.89510604e+01], [ 8.85060268e+01, 3.22487504e+01],
[ 9.50869868e+01, 2.21436404e+01], [ 9.97724768e+01, 8.82606038e+00],
[ 9.97724768e+01, 8.82606038e+00], [ 9.97724768e+01, 8.82606038e+00],
[-7.39150232e+01, 2.60448104e+01], [-6.92374072e+01, 3.77382804e+01],
[-6.07391432e+01, 4.81501604e+01], [-4.84150232e+01, 5.72948104e+01],
[-4.77543102e+01, 6.78197404e+01], [-4.56607662e+01, 7.76814004e+01],
[-4.11025232e+01, 8.57010604e+01], [-4.52341512e+01, 8.65620704e+01],
[-4.97579362e+01, 8.64646604e+01], [-5.46650232e+01, 8.53885604e+01],
[-7.24317802e+01, 7.30970204e+01], [-8.60276902e+01, 5.51787904e+01],
[-9.28212733e+01, 3.42010604e+01], [-9.28243733e+01, 3.41920604e+01],
[-9.28181733e+01, 3.41792604e+01], [-9.28212733e+01, 3.41698604e+01],
[-9.30130013e+01, 3.14875704e+01], [-9.31144113e+01, 2.89274504e+01],
[-9.31337733e+01, 2.64511104e+01], [-8.65119202e+01, 2.77331304e+01],
[-7.98647022e+01, 2.73522904e+01], [-7.39150232e+01, 2.60448604e+01],
[-7.39150232e+01, 2.60448104e+01], [-7.39150232e+01, 2.60448104e+01],
[-1.56650232e+01, 4.27948104e+01], [-4.35766519e+00, 4.87636404e+01],
[ 1.01466668e+01, 5.33700304e+01], [ 2.60224768e+01, 5.60448104e+01],
[ 2.85590568e+01, 6.43435004e+01], [ 3.07827468e+01, 7.29492504e+01],
[ 3.27099768e+01, 8.18573104e+01], [ 2.55039768e+01, 9.03537704e+01],
[ 1.39714968e+01, 9.64983204e+01], [-1.13376819e+00, 9.85135604e+01],
[-1.57753392e+01, 9.71825004e+01], [-2.87516412e+01, 9.28553404e+01],
[-4.00712732e+01, 8.55448104e+01], [-4.46513912e+01, 7.76614604e+01],
[-4.67507882e+01, 6.78133804e+01], [-4.74150232e+01, 5.72323104e+01],
[-3.59060892e+01, 5.27285604e+01], [-2.53218622e+01, 4.79159104e+01],
[-1.56650232e+01, 4.27948104e+01], [-1.56650232e+01, 4.27948104e+01],
[ 6.94599768e+01, 7.08573104e+01], [ 7.22412268e+01, 7.38573104e+01],
[ 5.42332468e+01, 9.18657304e+01], [ 2.93485768e+01, 1.03013560e+02],
[ 1.86622681e+00, 1.03013560e+02], [ 1.03891181e+00, 1.03013560e+02],
[ 2.19951808e-01, 1.03002360e+02], [-6.02518192e-01, 1.02982360e+02],
[-1.00876819e+00, 9.94823604e+01], [ 1.43154268e+01, 9.74387404e+01],
[ 2.60994568e+01, 9.12180804e+01], [ 3.34912268e+01, 8.24823604e+01],
[ 4.89375568e+01, 8.17496704e+01], [ 6.09313968e+01, 7.78789204e+01],
[ 6.94599768e+01, 7.08573604e+01], [ 6.94599768e+01, 7.08573104e+01],
[ 6.94599768e+01, 7.08573104e+01]]
codes=[1,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,2,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,2,4,4,4,2,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,
1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,79,
1,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2, 79]
print(Path.MOVETO, Path.LINETO, Path.CURVE3, Path.CURVE4, Path.CLOSEPOLY)  # the numeric code values used above: 1 2 3 4 79
ball=Path(vertices,codes)
fig, ax = plt.subplots(figsize=(12,6))
plt.plot(15,1,color='b',marker=ball,markersize=30)
plt.xticks([0,15,30,45,60,75,90])
plt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])
plt.grid()
ax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')
plt.ylabel("Expected Goals (xG)")
plt.xlabel("Minutes")
ax.legend()
plt.show()
output
A:
I don't think matplotlib ships a football icon among its built-in markers. Therefore, I suggest drawing a football image itself as a marker at the given coordinates.
import matplotlib.pyplot as plt
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
def getImage(path):
    return OffsetImage(plt.imread(path), zoom=.02)
x_coords = [8.2, 4.5, 3.3, 6.9]
y_coords = [5.4, 3.5, 4.7, 7.1]
fig, ax = plt.subplots()
for x0, y0 in zip(x_coords, y_coords):
    ab = AnnotationBbox(getImage('football_icon.png'), (x0, y0), frameon=False)
    ax.add_artist(ab)
plt.xticks(range(10))
plt.yticks(range(10))
plt.show()
Output
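To tie this back to the original chart, the same loop could replace one of the scatter calls, e.g. (a sketch assuming the question's a_goal dataframe and a local football_icon.png file):
for x0, y0 in zip(a_goal['minute'], a_goal['a_cum']):
    ab = AnnotationBbox(getImage('football_icon.png'), (x0, y0), frameon=False)
    ax.add_artist(ab)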
|
How to change matplotlib marker into a football icon?
|
I have a visualization like this:
I want to change the marker icon into a football icon with the same color as the line.
My code looks like this:
fig, ax = plt.subplots(figsize=(12,6))
ax.step(x = a_df['minute'], y = a_df['a_cum'], where = 'post', label= ateam, linewidth=2)
ax.step(x = h_df['minute'], y = h_df['h_cum'], where = 'post', color ='red', label= hteam,linewidth=2)
plt.scatter(x= a_goal['minute'], y = a_goal['a_cum'] , marker = 'o')
plt.scatter(x= h_goal['minute'], y = h_goal['h_cum'] , marker = 'o',color = 'red')
plt.xticks([0,15,30,45,60,75,90])
plt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])
plt.grid()
ax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')
plt.ylabel("Expected Goals (xG)")
plt.xlabel("Minutes")
ax.legend()
plt.show()
I don't have any clue how to do it.
|
[
"you can draw your own shapes by creating matplotlib Path objects.\nYou need 2 lists to create it.\n1)shape's vertices(coordinates)\n2)codes:describes the path from a vertice to the next (MOVETO,LINETO,CURVE3,CURVE4,CLOSEPOLY,...)\nfor example\nimport matplotlib.pyplot as plt\nfrom matplotlib.path import Path\n\nvertices=[[ 1.86622681e+00, -9.69864442e+01], [-5.36324682e+01, -9.69864442e+01],\n [-9.86337733e+01, -5.19851396e+01], [-9.86337733e+01, 3.51356038e+00],\n [-9.86337733e+01, 5.90122504e+01], [-5.36324682e+01, 1.04013560e+02],\n [ 1.86622681e+00, 1.04013560e+02], [ 5.73649168e+01, 1.04013560e+02],\n [ 1.02366227e+02, 5.90122504e+01], [ 1.02366227e+02, 3.51356038e+00],\n [ 1.02366227e+02, -5.19851396e+01], [ 5.73649168e+01, -9.69864442e+01],\n [ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.69864442e+01],\n [ 1.86622681e+00, -9.69864442e+01], [ 1.86622681e+00, -9.59864442e+01], \n [ 1.49396568e+01, -9.59864442e+01], [ 2.74005268e+01, -9.34457032e+01],\n [ 3.88349768e+01, -8.88614442e+01], [ 3.93477668e+01, -8.39473616e+01],\n [ 3.91766768e+01, -7.84211406e+01], [ 3.83349768e+01, -7.24551946e+01],\n [ 2.54705168e+01, -7.17582316e+01], [ 1.38598668e+01, -6.91771276e+01],\n [ 3.49122681e+00, -6.47364446e+01], [-5.88483119e+00, -7.07454276e+01],\n [-1.85084882e+01, -7.43878696e+01], [-3.31337732e+01, -7.44239446e+01],\n [-3.31639232e+01, -8.07006846e+01], [-3.34889082e+01, -8.56747886e+01],\n [-3.41025232e+01, -8.92676942e+01], [-2.29485092e+01, -9.35925582e+01],\n [-1.08166852e+01, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],\n [ 1.86622681e+00, -9.59864442e+01], [ 1.86622681e+00, -9.59864442e+01],\n [ 3.98974768e+01, -8.84239444e+01], [ 6.30273268e+01, -7.88377716e+01],\n [ 8.17782368e+01, -6.07995616e+01], [ 9.22412268e+01, -3.81426946e+01],\n [ 8.94287268e+01, -3.42676946e+01], [ 8.27048568e+01, -3.89413496e+01],\n [ 7.41977468e+01, -4.19580876e+01], [ 6.55537268e+01, -4.39551946e+01],\n [ 6.55507268e+01, -4.39600946e+01], [ 6.55258268e+01, -4.39502946e+01],\n [ 6.55225268e+01, -4.39551946e+01], [ 5.64622368e+01, -5.74584576e+01],\n [ 4.77347768e+01, -6.68825886e+01], [ 3.93037768e+01, -7.22051946e+01],\n [ 4.01409768e+01, -7.80795846e+01], [ 4.03596968e+01, -8.35092576e+01],\n [ 3.98975268e+01, -8.84239444e+01], [ 3.98974768e+01, -8.84239444e+01],\n [ 3.98974768e+01, -8.84239444e+01], [-3.33525232e+01, -7.34239446e+01],\n [-3.33343532e+01, -7.34304446e+01], [-3.33081932e+01, -7.34174446e+01],\n [-3.32900232e+01, -7.34239446e+01], [-1.87512102e+01, -7.34136546e+01],\n [-6.26111319e+00, -6.98403626e+01], [ 2.95997681e+00, -6.39239446e+01],\n [ 4.88356681e+00, -5.29429786e+01], [ 6.50358681e+00, -4.13393356e+01],\n [ 7.80372681e+00, -2.91114446e+01], [-8.09469019e+00, -1.58596306e+01],\n [-1.93481942e+01, -5.40333762e+00], [-2.47587732e+01, 1.32605538e+00],\n [-3.69631432e+01, -2.50275662e+00], [-4.85465082e+01, -5.39578762e+00],\n [-5.95087732e+01, -7.36144462e+00], [-6.28171902e+01, -1.66250136e+01],\n [-6.52187002e+01, -2.98372096e+01], [-6.58837732e+01, -4.57989446e+01],\n [-5.53582062e+01, -6.01863506e+01], [-4.45266302e+01, -6.94131916e+01],\n [-3.33525232e+01, -7.34239446e+01], [-3.33525232e+01, -7.34239446e+01],\n [-3.33525232e+01, -7.34239446e+01], [-7.57587732e+01, -4.67676946e+01],\n [-7.29041812e+01, -4.67440446e+01], [-6.99334012e+01, -4.63526666e+01],\n [-6.68837732e+01, -4.56426946e+01], [-6.62087282e+01, -2.96768106e+01],\n [-6.37905682e+01, -1.64255576e+01], [-6.04462732e+01, -7.04894462e+00],\n [-6.81326882e+01, 3.32535038e+00], [-7.26804032e+01, 
1.40097104e+01],\n [-7.40712732e+01, 2.50135604e+01], [-7.99916232e+01, 2.63222104e+01],\n [-8.66133452e+01, 2.67559804e+01], [-9.31650233e+01, 2.54510604e+01],\n [-9.31681733e+01, 2.54460604e+01], [-9.31931223e+01, 2.54560604e+01],\n [-9.31962733e+01, 2.54510604e+01], [-9.44043873e+01, 2.37123804e+01],\n [-9.54279373e+01, 2.17334704e+01], [-9.63212733e+01, 1.95448104e+01],\n [-9.71662733e+01, 1.43262704e+01], [-9.76337733e+01, 8.97093038e+00],\n [-9.76337733e+01, 3.51356038e+00], [-9.76337733e+01, -1.43647536e+01],\n [-9.29174773e+01, -3.11438126e+01], [-8.46650232e+01, -4.56426946e+01],\n [-8.18063532e+01, -4.64180796e+01], [-7.88476312e+01, -4.67932816e+01],\n [-7.57587732e+01, -4.67676946e+01], [-7.57587732e+01, -4.67676946e+01],\n [-7.57587732e+01, -4.67676946e+01], [ 6.55224768e+01, -4.28926946e+01],\n [ 7.40107668e+01, -4.09146326e+01], [ 8.23640768e+01, -3.79999686e+01],\n [ 8.88662268e+01, -3.34864446e+01], [ 9.61553068e+01, -1.55950616e+01],\n [ 9.94808868e+01, -1.66158462e+00], [ 9.88662268e+01, 8.32606038e+00],\n [ 9.42289868e+01, 2.15752904e+01], [ 8.77410868e+01, 3.15965604e+01],\n [ 8.11474768e+01, 3.82010604e+01], [ 7.17659368e+01, 3.38334104e+01],\n [ 6.38899668e+01, 3.03415204e+01], [ 5.74912268e+01, 2.77635604e+01],\n [ 5.68036568e+01, 1.50717604e+01], [ 5.35581368e+01, -9.16606169e-02],\n [ 4.82412268e+01, -1.60489446e+01], [ 5.52234668e+01, -2.62259056e+01],\n [ 6.09897268e+01, -3.51652306e+01], [ 6.55224768e+01, -4.28926946e+01],\n [ 6.55224768e+01, -4.28926946e+01], [ 6.55224768e+01, -4.28926946e+01],\n [ 8.42872681e+00, -2.83614446e+01], [ 2.13772368e+01, -2.57261866e+01],\n [ 3.43239568e+01, -2.15154036e+01], [ 4.72724768e+01, -1.57364446e+01],\n [ 5.25849968e+01, 2.07647383e-01], [ 5.58247068e+01, 1.53619304e+01],\n [ 5.64912268e+01, 2.79510604e+01], [ 5.64917568e+01, 2.79612604e+01],\n [ 5.64906868e+01, 2.79721604e+01], [ 5.64912268e+01, 2.79822604e+01],\n [ 4.74302668e+01, 3.88992704e+01], [ 3.74260968e+01, 4.79380604e+01],\n [ 2.64912268e+01, 5.51072604e+01], [ 1.05529568e+01, 5.24508804e+01],\n [-4.02431919e+00, 4.78459804e+01], [-1.52900232e+01, 4.18885104e+01],\n [-1.91554652e+01, 2.63828404e+01], [-2.20678242e+01, 1.30703504e+01],\n [-2.40400232e+01, 1.98226038e+00], [-1.87588732e+01, -4.60782062e+00],\n [-7.49875919e+00, -1.50853886e+01], [ 8.42872681e+00, -2.83614946e+01],\n [ 8.42872681e+00, -2.83614446e+01], [ 8.42872681e+00, -2.83614446e+01],\n [ 9.97724768e+01, 8.82606038e+00], [ 1.01209977e+02, 9.29481038e+00],\n [ 9.97891268e+01, 3.41125404e+01], [ 8.92576668e+01, 5.64775904e+01],\n [ 7.29287268e+01, 7.31385604e+01], [ 7.01162268e+01, 7.01073104e+01],\n [ 7.65398468e+01, 5.90945204e+01], [ 8.04306168e+01, 4.87012104e+01],\n [ 8.18037268e+01, 3.89510604e+01], [ 8.85060268e+01, 3.22487504e+01],\n [ 9.50869868e+01, 2.21436404e+01], [ 9.97724768e+01, 8.82606038e+00],\n [ 9.97724768e+01, 8.82606038e+00], [ 9.97724768e+01, 8.82606038e+00],\n [-7.39150232e+01, 2.60448104e+01], [-6.92374072e+01, 3.77382804e+01],\n [-6.07391432e+01, 4.81501604e+01], [-4.84150232e+01, 5.72948104e+01],\n [-4.77543102e+01, 6.78197404e+01], [-4.56607662e+01, 7.76814004e+01],\n [-4.11025232e+01, 8.57010604e+01], [-4.52341512e+01, 8.65620704e+01],\n [-4.97579362e+01, 8.64646604e+01], [-5.46650232e+01, 8.53885604e+01],\n [-7.24317802e+01, 7.30970204e+01], [-8.60276902e+01, 5.51787904e+01],\n [-9.28212733e+01, 3.42010604e+01], [-9.28243733e+01, 3.41920604e+01],\n [-9.28181733e+01, 3.41792604e+01], [-9.28212733e+01, 3.41698604e+01],\n [-9.30130013e+01, 3.14875704e+01], 
[-9.31144113e+01, 2.89274504e+01],\n [-9.31337733e+01, 2.64511104e+01], [-8.65119202e+01, 2.77331304e+01],\n [-7.98647022e+01, 2.73522904e+01], [-7.39150232e+01, 2.60448604e+01],\n [-7.39150232e+01, 2.60448104e+01], [-7.39150232e+01, 2.60448104e+01],\n [-1.56650232e+01, 4.27948104e+01], [-4.35766519e+00, 4.87636404e+01],\n [ 1.01466668e+01, 5.33700304e+01], [ 2.60224768e+01, 5.60448104e+01],\n [ 2.85590568e+01, 6.43435004e+01], [ 3.07827468e+01, 7.29492504e+01],\n [ 3.27099768e+01, 8.18573104e+01], [ 2.55039768e+01, 9.03537704e+01],\n [ 1.39714968e+01, 9.64983204e+01], [-1.13376819e+00, 9.85135604e+01],\n [-1.57753392e+01, 9.71825004e+01], [-2.87516412e+01, 9.28553404e+01],\n [-4.00712732e+01, 8.55448104e+01], [-4.46513912e+01, 7.76614604e+01],\n [-4.67507882e+01, 6.78133804e+01], [-4.74150232e+01, 5.72323104e+01],\n [-3.59060892e+01, 5.27285604e+01], [-2.53218622e+01, 4.79159104e+01],\n [-1.56650232e+01, 4.27948104e+01], [-1.56650232e+01, 4.27948104e+01],\n [ 6.94599768e+01, 7.08573104e+01], [ 7.22412268e+01, 7.38573104e+01],\n [ 5.42332468e+01, 9.18657304e+01], [ 2.93485768e+01, 1.03013560e+02],\n [ 1.86622681e+00, 1.03013560e+02], [ 1.03891181e+00, 1.03013560e+02],\n [ 2.19951808e-01, 1.03002360e+02], [-6.02518192e-01, 1.02982360e+02],\n [-1.00876819e+00, 9.94823604e+01], [ 1.43154268e+01, 9.74387404e+01],\n [ 2.60994568e+01, 9.12180804e+01], [ 3.34912268e+01, 8.24823604e+01],\n [ 4.89375568e+01, 8.17496704e+01], [ 6.09313968e+01, 7.78789204e+01],\n [ 6.94599768e+01, 7.08573604e+01], [ 6.94599768e+01, 7.08573104e+01],\n [ 6.94599768e+01, 7.08573104e+01]]\ncodes=[1,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,2,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,2,4,4,4,2,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,2,79,\n1,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,4,79,\n1,2,4,4,4,4,4,4,2,4,4,4,4,4,4,2, 79]\nprint(Path.MOVETO,Path.LINETO,Path.CURVE3,Path.CURVE4,Path.CLOSEPOLY)\nball=Path(vertices,codes)\nfig, ax = plt.subplots(figsize=(12,6))\nplt.plot(15,1,color='b',marker=ball,markersize=30)\nplt.xticks([0,15,30,45,60,75,90])\nplt.yticks([0, 0.5, 1, 1.5, 2, 2.5, 3])\nplt.grid()\nax.title.set_text('The Expected Goals(xG) Chart Final Champions League 2010/2011')\nplt.ylabel(\"Expected Goals (xG)\")\nplt.xlabel(\"Minutes\")\nax.legend()\nplt.show()\n\noutput\n\n",
"I don't think matplotlib can draw custom markers. Therefore, I suggest the way to draw is to use the football image as a marker with the given coordinates.\nimport matplotlib.pyplot as plt\nfrom matplotlib.offsetbox import OffsetImage, AnnotationBbox\n\ndef getImage(path):\n return OffsetImage(plt.imread(path), zoom=.02)\nx_coords = [8.2, 4.5, 3.3, 6.9]\ny_coords = [5.4, 3.5, 4.7, 7.1]\nfig, ax = plt.subplots()\nfor x0, y0 in zip(x_coords, y_coords):\n ab = AnnotationBbox(getImage('football_icon.png'), (x0, y0), frameon=False)\n ax.add_artist(ab)\n \nplt.xticks(range(10))\nplt.yticks(range(10))\nplt.show()\n\nOutput\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"google_maps_markers",
"matplotlib",
"python",
"seaborn",
"visualization"
] |
stackoverflow_0074664926_google_maps_markers_matplotlib_python_seaborn_visualization.txt
|
Q:
Helix Convolution in Pytorch (Machine Learning)
I am currently investigating how to build a convolutional neural network that handles arrays of up to 5 or 6 dimensions efficiently.
I was aware that many of the tools used for convolutional neural networks do not really deal with ND convolutions, so I decided to try and write an implementation of Helix Convolution, whereby the convolution can be treated as a large, 1D convolution (see Reference 1. http://sepwww.stanford.edu/public/docs/sep95/jon1/paper_html/node2.html , Reference 2 https://sites.ualberta.ca/~mostafan/Files/Papers/md_convolution_TLE2009.pdf for more details of the concept).
I did this under the (possibly incorrect) assumption that a large, single dimensional convolution was likely to be easier on a GPU than a multidimensional one, as well as that the method is trivially scalable to N dimensions.
In particular, Reference 2 states:
We have not found important gains in computational efficiency between N-D standard convolution versus using the
algorithm described in the text. We have, however, found that
writing codes for seismic data regularization with the described
trick leads to algorithms that can easily handle regularization
problems with any number of spatial dimensions (Naghizadeh
and Sacchi, 2009).
I have written the implementation below and compared it against signal.fftconvolve. It is slower on the CPU than that function, but I would nonetheless like to see how it performs on the GPU in PyTorch as a forward convolutional layer.
Can someone kindly help me port this code to PyTorch so I can verify how it behaves?
"""
HELIX CONVOLUTION FUNCTION
Shrink:
CROPS THE SIZE OF THE CONVOLVED SIGNAL DOWN TO THE ORIGINAL SIZE OF THE ORIGINAL.
Pad:
PADS THE DIFFERENCE BETWEEN THE ORIGINAL SHAPE AND THE DESIRED, CONVOLVED SHAPE FOR KERNEL AND SIGNAL.
GetLength:
EXTRACTS THE LENGTH OF THE UNWOUND STRIP OF THE SIGNAL AND KERNEL THAT IS TO BE CONVOLVED.
FFTConvolve:
USES THE NUMPY FFT PACKAGE TO PERFORM FAST FOURIER CONVOLUTION ON THE SIGNALS
Convolve:
USES HELIX CONVOLUTION ON AN INPUT ARRAY AND KERNEL.
"""
import numpy as np
from numpy import *
from scipy import signal
import operator
import time
class HelixCPU:
@classmethod
def Shrink(cls,array, bounding):
start = tuple(map(lambda a, da: (a-da)//2, array.shape, bounding))
end = tuple(map(operator.add, start, bounding))
slices = tuple(map(slice, start, end))
return array[slices]
@classmethod
def Pad(cls,array, target_shape):
diff = target_shape-array.shape
padder=[(0,val) for val in diff]
padded = np.pad(array, padder, 'constant')
return padded
@classmethod
def GetLength(cls,array_shape, padded_shape):
temp=1
steps=np.zeros_like(array_shape)
for i, entry in enumerate(padded_shape[::-1]):
if(i==len(padded_shape)-1):
steps[i]=1
else:
temp=entry*temp
steps[i]=temp
steps=np.roll(steps, 1)
steps=steps[::-1]
ones=np.ones_like(array_shape)
ones[-1]=0
out=np.multiply(steps,array_shape - ones)
length = np.sum(out)
return length
@classmethod
def FFTConvolve(cls, in1, in2, len1, len2):
s1 = len1
s2 = len2
shape = s1 + s2 - 1
fsize = 2 ** np.ceil(cp.log2(shape)).astype(int)
fslice = slice(0, shape)
conv = np.fft.ifft(np.fft.fft(in1, int(fsize)) * np.fft.fft(in2, int(fsize)))[fslice].copy()
return conv
@classmethod
def Convolve(cls,array, kernel):
m = array.shape
n = kernel.shape
mn = np.add(m, n)
mn = mn-np.ones_like(mn)
k_pad=cls.Pad(kernel, mn)
a_pad=cls.Pad(array, mn)
length_k = cls.GetLength(kernel.shape, k_pad.shape);
length_a = cls.GetLength(array.shape, a_pad.shape);
k_flat = k_pad.flatten()[0:length_k]
a_flat = a_pad.flatten()[0:length_a]
conv = cls.FFTConvolve(a_flat, k_flat)
conv = np.resize(conv,mn)
conv = cls.Shrink(conv, m)
return conv
def main():
array=np.random.rand(25,25,41,51)
kernel=np.random.rand(10, 10, 10, 10)
start2 =time.process_time()
test2 = HelixCPU.Convolve(array, kernel)
end2=time.process_time()
start1= time.process_time()
test1 = signal.fftconvolve(array, kernel, "same")
end1= time.process_time()
print ("")
print ("========================")
print ("SOME LARGE CONVOLVED RANDOM ARRAYS. ")
print ("========================")
print("")
print ("Random Calorimeter Image of Size {0} Created".format(array.shape))
print ("Random Kernel of Size {0} Created".format(kernel.shape))
print("")
print ("Value\tOriginal\tHelix")
print ("Time Taken [s]\t{0}\t{1}\t{2}".format( (end1-start1), (end2-start2), (end2-start2)/(end1-start1) ))
print ("Maximum Value\t{:03.2f}\t{:13.2f}".format( np.max(test1), np.max(test2) ))
print ("Matrix Norm \t{:03.2f}\t{:13.2f}".format( np.linalg.norm(test1), np.linalg.norm(test2) ))
print ("All Close?\t{0}".format(np.allclose(test1, test2)))
A:
Sorry, I cannot add a comment due to low rep, so I am asking my question as an answer in the hope that it helps.
By helix convolution, do you mean defining the convolution operation as a single matrix multiplication? If so, I tried this in the past, but it is too memory-inefficient to be practical.
A:
Here is an implementation of the HelixCPU class in PyTorch:
import torch

class HelixCPU:
    @classmethod
    def Shrink(cls, array, bounding):
        start = (array.shape - bounding) // 2
        end = start + bounding
        return array[start:end]

    @classmethod
    def Pad(cls, array, target_shape):
        diff = target_shape - array.shape
        padder = [(0, val) for val in diff]
        padded = torch.nn.functional.pad(array, padder, 'constant')
        return padded

    @classmethod
    def GetLength(cls, array_shape, padded_shape):
        temp = 1
        steps = torch.zeros_like(array_shape)
        for i, entry in enumerate(padded_shape[::-1]):
            if(i == len(padded_shape) - 1):
                steps[i] = 1
            else:
                temp = entry * temp
                steps[i] = temp
        steps = torch.roll(steps, 1)
        steps = steps[::-1]
        ones = torch.ones_like(array_shape)
        ones[-1] = 0
        out = steps * (array_shape - ones)
        length = torch.sum(out)
        return length

    @classmethod
    def FFTConvolve(cls, in1, in2, len1, len2):
        s1 = len1
        s2 = len2
        shape = s1 + s2 - 1
        fsize = 2 ** torch.ceil(torch.log2(shape)).type(torch.int64)
        fslice = slice(0, shape)
        conv = torch.ifft(torch.fft(in1, fsize) * torch.fft(in2, f
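Note that the torch.fft() and torch.ifft() functions used above were removed in favour of the torch.fft module. As a hedged alternative, a minimal sketch of the 1-D FFT-convolution core using torch.fft (PyTorch 1.8 or later; the function name and structure are my own, not the original author's):
import torch

def fft_convolve_1d(in1, in2):
    # Full linear convolution of two 1-D tensors via the FFT.
    n = in1.numel() + in2.numel() - 1
    # Round the transform size up to the next power of two, as the NumPy original does.
    fsize = 2 ** int(torch.ceil(torch.log2(torch.tensor(float(n)))).item())
    f1 = torch.fft.fft(in1, n=fsize)
    f2 = torch.fft.fft(in2, n=fsize)
    return torch.fft.ifft(f1 * f2)[:n]

On real inputs the result is complex with numerically zero imaginary part; .real recovers the signal.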
|
Helix Convolution in Pytorch (Machine Learning)
|
I am currently investigating how to build a convolutional neural network that handles arrays of up to 5 or 6 dimensions efficiently.
I was aware that many of the tools used for convolutional neural networks do not really deal with ND convolutions, so I decided to try and write an implementation of Helix Convolution, whereby the convolution can be treated as a large, 1D convolution (see Reference 1. http://sepwww.stanford.edu/public/docs/sep95/jon1/paper_html/node2.html , Reference 2 https://sites.ualberta.ca/~mostafan/Files/Papers/md_convolution_TLE2009.pdf for more details of the concept).
I did this under the (possibly incorrect) assumption that a large, single dimensional convolution was likely to be easier on a GPU than a multidimensional one, as well as that the method is trivially scalable to N dimensions.
In particular, Reference 2 states:
We have not found important gains in computational efficiency between N-D standard convolution versus using the
algorithm described in the text. We have, however, found that
writing codes for seismic data regularization with the described
trick leads to algorithms that can easily handle regularization
problems with any number of spatial dimensions (Naghizadeh
and Sacchi, 2009).
I have written the implementation below and compared it against signal.fftconvolve. It is slower on the CPU than that function, but I would nonetheless like to see how it performs on the GPU in PyTorch as a forward convolutional layer.
Can someone kindly help me port this code to PyTorch so I can verify how it behaves?
"""
HELIX CONVOLUTION FUNCTION
Shrink:
CROPS THE SIZE OF THE CONVOLVED SIGNAL DOWN TO THE ORIGINAL SIZE OF THE ORIGINAL.
Pad:
PADS THE DIFFERENCE BETWEEN THE ORIGINAL SHAPE AND THE DESIRED, CONVOLVED SHAPE FOR KERNEL AND SIGNAL.
GetLength:
EXTRACTS THE LENGTH OF THE UNWOUND STRIP OF THE SIGNAL AND KERNEL THAT IS TO BE CONVOLVED.
FFTConvolve:
USES THE NUMPY FFT PACKAGE TO PERFORM FAST FOURIER CONVOLUTION ON THE SIGNALS
Convolve:
USES HELIX CONVOLUTION ON AN INPUT ARRAY AND KERNEL.
"""
import numpy as np
from numpy import *
from scipy import signal
import operator
import time
class HelixCPU:
@classmethod
def Shrink(cls,array, bounding):
start = tuple(map(lambda a, da: (a-da)//2, array.shape, bounding))
end = tuple(map(operator.add, start, bounding))
slices = tuple(map(slice, start, end))
return array[slices]
@classmethod
def Pad(cls,array, target_shape):
diff = target_shape-array.shape
padder=[(0,val) for val in diff]
padded = np.pad(array, padder, 'constant')
return padded
@classmethod
def GetLength(cls,array_shape, padded_shape):
temp=1
steps=np.zeros_like(array_shape)
for i, entry in enumerate(padded_shape[::-1]):
if(i==len(padded_shape)-1):
steps[i]=1
else:
temp=entry*temp
steps[i]=temp
steps=np.roll(steps, 1)
steps=steps[::-1]
ones=np.ones_like(array_shape)
ones[-1]=0
out=np.multiply(steps,array_shape - ones)
length = np.sum(out)
return length
@classmethod
def FFTConvolve(cls, in1, in2, len1, len2):
s1 = len1
s2 = len2
shape = s1 + s2 - 1
fsize = 2 ** np.ceil(cp.log2(shape)).astype(int)
fslice = slice(0, shape)
conv = np.fft.ifft(np.fft.fft(in1, int(fsize)) * np.fft.fft(in2, int(fsize)))[fslice].copy()
return conv
@classmethod
def Convolve(cls,array, kernel):
m = array.shape
n = kernel.shape
mn = np.add(m, n)
mn = mn-np.ones_like(mn)
k_pad=cls.Pad(kernel, mn)
a_pad=cls.Pad(array, mn)
length_k = cls.GetLength(kernel.shape, k_pad.shape);
length_a = cls.GetLength(array.shape, a_pad.shape);
k_flat = k_pad.flatten()[0:length_k]
a_flat = a_pad.flatten()[0:length_a]
conv = cls.FFTConvolve(a_flat, k_flat)
conv = np.resize(conv,mn)
conv = cls.Shrink(conv, m)
return conv
def main():
array=np.random.rand(25,25,41,51)
kernel=np.random.rand(10, 10, 10, 10)
start2 =time.process_time()
test2 = HelixCPU.Convolve(array, kernel)
end2=time.process_time()
start1= time.process_time()
test1 = signal.fftconvolve(array, kernel, "same")
end1= time.process_time()
print ("")
print ("========================")
print ("SOME LARGE CONVOLVED RANDOM ARRAYS. ")
print ("========================")
print("")
print ("Random Calorimeter Image of Size {0} Created".format(array.shape))
print ("Random Kernel of Size {0} Created".format(kernel.shape))
print("")
print ("Value\tOriginal\tHelix")
print ("Time Taken [s]\t{0}\t{1}\t{2}".format( (end1-start1), (end2-start2), (end2-start2)/(end1-start1) ))
print ("Maximum Value\t{:03.2f}\t{:13.2f}".format( np.max(test1), np.max(test2) ))
print ("Matrix Norm \t{:03.2f}\t{:13.2f}".format( np.linalg.norm(test1), np.linalg.norm(test2) ))
print ("All Close?\t{0}".format(np.allclose(test1, test2)))
|
[
"Sorry, I cannot add a comment due to low rep, so I ask my question as an answer and hopefully can answer your question.\nBy helix convolution, do you mean defining a convolution operation as a single matrix multiplcation? If so, I did try this in the past but it is really memory inefficient for it to be practical.\n",
"Here is an implementation of the HelixCPU class in PyTorch:\nimport torch\n\nclass HelixCPU:\n @classmethod\n def Shrink(cls, array, bounding):\n start = (array.shape - bounding) // 2\n end = start + bounding\n return array[start:end]\n\n @classmethod\n def Pad(cls, array, target_shape):\n diff = target_shape - array.shape\n padder = [(0, val) for val in diff]\n padded = torch.nn.functional.pad(array, padder, 'constant')\n return padded\n\n @classmethod\n def GetLength(cls, array_shape, padded_shape):\n temp = 1\n steps = torch.zeros_like(array_shape)\n\n for i, entry in enumerate(padded_shape[::-1]):\n if(i == len(padded_shape) - 1):\n steps[i] = 1\n else:\n temp = entry * temp\n steps[i] = temp\n\n steps = torch.roll(steps, 1)\n steps = steps[::-1]\n ones = torch.ones_like(array_shape)\n ones[-1] = 0\n out = steps * (array_shape - ones)\n length = torch.sum(out)\n return length\n\n @classmethod\n def FFTConvolve(cls, in1, in2, len1, len2):\n s1 = len1\n s2 = len2\n shape = s1 + s2 - 1\n fsize = 2 ** torch.ceil(torch.log2(shape)).type(torch.int64)\n fslice = slice(0, shape)\n conv = torch.ifft(torch.fft(in1, fsize) * torch.fft(in2, f\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"conv_neural_network",
"convolution",
"helix",
"python",
"pytorch"
] |
stackoverflow_0060103887_conv_neural_network_convolution_helix_python_pytorch.txt
|
Q:
Unable to Include a Remote Image in NextJS Image Component
I've been trying this for 2 days now and followed all the steps mentioned in the docs, but it seems like it is not reading the config or something.
Error Page Shown
Uncaught Error: Invalid src prop (https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1) on `next/image`, hostname "images.pexels.com" is not configured under images in your `next.config.js`
Component Used
<Image
  src="https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1"
  alt={props.team.name}
  width={props.width}
  height={props.height}
  placeholder="blur"
/>
next.config.js
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'images.pexels.com',
        port: '',
        pathname: '/photos/**',
      },
    ],
  },
}
I tried many things but it doesn't seem to work:
npm run dev // tried restarting the server but no luck
npm run install:clean // tried a clean install, doesn't work
npm update // doesn't work
Any idea what I am doing wrong?
I tried different things and found the solution: I was using
Notus NextJS
which had a slightly older version of Next.js, so I ran
npm outdated
updated the package.json, and ran
npm run install:clean
A:
If you're trying to include a remote image in a Next.js <Image> component and are getting an error like the one you described, it's likely because Next.js is unable to automatically optimize and serve the remote image. Next.js allows you to include external images by adding the image's hostname to the next.config.js file, but in this case it looks like the hostname for the image you're trying to use isn't included.
Here's how you can fix this issue:
In your project's root directory, open the next.config.js file and add the hostname of the image you're trying to use to the images.domains array. For example, if your image's URL is https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1, you would add images.pexels.com like this:
module.exports = {
  images: {
    domains: ['images.pexels.com'],
  },
};
Save the next.config.js file and rebuild your app.
In your code, update the src prop of the <Image> component to include the full URL of the image you're trying to use.
Here's an example of what this might look like in your code:
import Image from 'next/image';

const MyComponent = () => {
  return (
    <Image
      src="https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1"
      alt="A beautiful landscape"
      layout="responsive"
      width={1260}
      height={750}
    />
  );
};
After making these changes, the remote image should be included in your Next.js app without any errors. Let me know if you have any other questions!
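For completeness: on Next.js 12.3+, the remotePatterns form shown in the question is the documented successor to domains, and the dev server must be restarted after editing next.config.js for the change to take effect. A minimal sketch (hostname taken from the question):
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'images.pexels.com',
      },
    ],
  },
};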
|
Unable to Include a Remote Image in NextJS Image Component
|
I've been trying this for 2 days now and followed all the steps mentioned in the docs, but it seems like it is not reading the config or something.
Error Page Shown
Uncaught Error: Invalid src prop (https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1) on `next/image`, hostname "images.pexels.com" is not configured under images in your `next.config.js`
Component Used
<Image
  src="https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1"
  alt={props.team.name}
  width={props.width}
  height={props.height}
  placeholder="blur"
/>
next.config.js
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: 'images.pexels.com',
        port: '',
        pathname: '/photos/**',
      },
    ],
  },
}
I tried many things but it doesn't seem to work:
npm run dev // tried restarting the server but no luck
npm run install:clean // tried a clean install, doesn't work
npm update // doesn't work
Any idea what I am doing wrong?
I tried different things and found the solution: I was using
Notus NextJS
which had a slightly older version of Next.js, so I ran
npm outdated
updated the package.json, and ran
npm run install:clean
|
[
"If you're trying to include a remote image in a Next.js <Image> component and are getting an error like the one you described, it's likely because Next.js is unable to automatically optimize and serve the remote image. Next.js allows you to include external images by adding the image's hostname to the next.config.js file, but in this case it looks like the hostname for the image you're trying to use isn't included.\nHere's how you can fix this issue:\n\nIn your project's root directory, open the next.config.js file and add the hostname of the image you're trying to use to the images array. For example, if your image's URL is https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1, you would add images.pexels.com to the images array like this:\nmodule.exports = {\n images: ['images.pexels.com'],\n };\n\n\nSave the next.config.js file and rebuild your app.\n\nIn your code, update the src prop of the <Image> component to include the full URL of the image you're trying to use.\n\n\nHere's an example of what this might look like in your code:\n import Image from 'next/image';\n\nconst MyComponent = () => {\n return (\n <Image\n src=\"https://images.pexels.com/photos/14397947/pexels-photo-14397947.jpeg?auto=compress&cs=tinysrgb&w=1260&h=750&dpr=1\"\n alt=\"A beautiful landscape\"\n layout=\"responsive\"\n width={1260}\n height={750}\n />\n );\n};\n\nAfter making these changes, the remote image should be included in your Next.js app without any errors. Let me know if you have any other questions!\n"
] |
[
0
] |
[] |
[] |
[
"next.js",
"nextjs_image"
] |
stackoverflow_0074666133_next.js_nextjs_image.txt
|
Q:
How can I make the animation smoother
I have made an animation in which the image floats.
But the image seems to be vibrating when reaching the end.
Here is the website where the image is link
This is the CSS of the div wrapping the img:
.newImg {
  position: relative;
  width: 472px;
  height: 414px;
  animation-name: updown;
  animation-duration: 5s;
  /* animation-delay: 1.5s; */
  animation-iteration-count: infinite;
  transition-timing-function: ease-in-out;
}
@keyframes updown {
  0% {
    top: 0px;
  }
  25% {
    top: 8px;
  }
  50% {
    top: 0px;
  }
  75% {
    top: 8px;
  }
  100% {
    top: 0px;
  }
}
A:
The vibration you see is caused by animating the top property. Try using translateY() instead. It will perform faster, animate more smoothly, and won't affect the layout.
@keyframes updown {
  0% {
    transform: translateY(0);
  }
  25% {
    transform: translateY(8px);
  }
  50% {
    transform: translateY(0);
  }
  75% {
    transform: translateY(8px);
  }
  100% {
    transform: translateY(0);
  }
}
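As a further simplification (my own sketch, not part of the answer): the 0-50% and 50-100% halves of the animation are identical, so the same motion can be written with two keyframes and alternate direction. Note also that the question's rule sets transition-timing-function, which applies to transitions, not animations; the animation shorthand below sets the animation's own timing function instead.
.newImg {
  position: relative;
  width: 472px;
  height: 414px;
  /* 1.25s per half-cycle matches the original 5s / 4 timing */
  animation: updown 1.25s ease-in-out infinite alternate;
}

@keyframes updown {
  from { transform: translateY(0); }
  to   { transform: translateY(8px); }
}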
|
How can I make the animation smoother
|
I have made an animation in which the image floats.
But the image seems to be vibrating when reaching the end.
Here is the website where the image is link
This is the CSS of the div wrapping the img:
.newImg {
  position: relative;
  width: 472px;
  height: 414px;
  animation-name: updown;
  animation-duration: 5s;
  /* animation-delay: 1.5s; */
  animation-iteration-count: infinite;
  transition-timing-function: ease-in-out;
}
@keyframes updown {
  0% {
    top: 0px;
  }
  25% {
    top: 8px;
  }
  50% {
    top: 0px;
  }
  75% {
    top: 8px;
  }
  100% {
    top: 0px;
  }
}
|
[
"The vibration you see because of the top property. Try using translateY() instead. It will perform faster, animate smoother, and won't affect the layout.\n\n\n@keyframes updown {\n 0% {\n transform: translateY(0);\n }\n\n 25% {\n transform: translateY(8px);\n }\n\n 50% {\n transform: translateY(0);\n }\n\n 75% {\n transform: translateY(8px);\n }\n\n 100% {\n transform: translateY(0);\n }\n}\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"css"
] |
stackoverflow_0074666309_css.txt
|
Q:
Reduce Heroku Slug Size for Machine Learning (Python, PyTorch, Fastai)
I am attempting to deploy a simple machine learning app to Heroku, but I keep exceeding the slug size limit of 500 MB; in the end I come up to about 1 GB. Most of this appears to come from PyTorch, at about 700 MB.
Collecting torch>=1.0.0
Downloading torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl (748.8 MB)
My requirements.txt file looks like
tensorboardX==1.6
opencv-python>=3.3.0.10
pillow>=6.2.1
flask
scikit-image
gunicorn
pandas
And the error message I get states I am over the slug size limit.
How can I only install the CPU version of PyTorch to get the slug size down?
A:
Try adding the following lines to requirements.txt
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.8.1+cpu
torchvision==0.9.1+cpu
fastai
voila
ipywidgets
A:
(Aug 2, 2022) The only solution I found was leaving the requirements.txt like this:
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==1.11.0+cpu
--find-links https://download.pytorch.org/whl/torch_stable.html
torchvision==0.12.0+cpu
A:
To install the CPU version of PyTorch, you can pin the +cpu build in your requirements.txt file like this:
torch==1.6.0+cpu
This installs the CPU-only build of PyTorch, which is significantly smaller than the default build that bundles GPU support. Note that the +cpu wheels are hosted on PyTorch's own package index rather than PyPI, so this pin generally needs the -f https://download.pytorch.org/whl/torch_stable.html find-links line shown in the answers above.
Once you have updated your requirements.txt file, run pip install -r requirements.txt to install the required packages. This should install the CPU version of PyTorch and reduce the overall size of your app.
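Putting the pieces together, a hedged sketch of a complete requirements.txt for this question (the package set is taken from the question, the torch pin from the answers):
-f https://download.pytorch.org/whl/torch_stable.html
torch==1.6.0+cpu
tensorboardX==1.6
opencv-python>=3.3.0.10
pillow>=6.2.1
flask
scikit-image
gunicorn
pandas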
|
Reduce Heroku Slug Size for Machine Learning (Python, PyTorch, Fastai)
|
I am attempting to deploy a simple machine learning app to Heroku, but I keep exceeding the slug size limit of 500 MB; in the end I come up to about 1 GB. Most of this appears to come from PyTorch, at about 700 MB.
Collecting torch>=1.0.0
Downloading torch-1.6.0-cp36-cp36m-manylinux1_x86_64.whl (748.8 MB)
My requirements.txt file looks like
tensorboardX==1.6
opencv-python>=3.3.0.10
pillow>=6.2.1
flask
scikit-image
gunicorn
pandas
And the error message I get states I am over the slug size limit.
How can I only install the CPU version of PyTorch to get the slug size down?
|
[
"Try adding the following lines to requirements.txt\n-f https://download.pytorch.org/whl/torch_stable.html\ntorch==1.8.1+cpu\ntorchvision==0.9.1+cpu\nfastai\nvoila\nipywidgets\n\n",
"(Aug, 2, 2022) the only solution I found was leaving the requirements.txt like this:\n--find-links https://download.pytorch.org/whl/torch_stable.html\ntorch==1.11.0+cpu\n--find-links https://download.pytorch.org/whl/torch_stable.html\ntorchvision==0.12.0+cpu\n",
"To install the CPU version of PyTorch, you can specify the cpuonly version in your requirements.txt file like this:\ntorch==1.6.0+cpu \nThis will install the CPU version of PyTorch, which should be significantly smaller in size than the GPU version. You can also specify the specific version of PyTorch that you want to install, in this case 1.6.0, in the requirements.txt file.\nOnce you have updated your requirements.txt file, you can run pip install -r requirements.txt to install the required packages. This should install the CPU version of PyTorch and reduce the overall size of your app.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"heroku",
"pip",
"python"
] |
stackoverflow_0063552330_heroku_pip_python.txt
|
Q:
I'm not sure how to use RTK without a desktop app
I'm using a ZED-F9P.
Below is the Python script I've made for printing the Latitude and Longitude without correction data, but now I'd like to try and get more accurate with RTK.
I've become familiar with desktop applications for applying RTCM corrections, like PyGPSClient and u-center, but I'd like to be able to achieve an RTK fix within a Python script.
I say this because my goal is to achieve RTK on an Arduino or similar device, then send that to the cloud where I can compare it to an identical device in another location (i.e. get the distance between the two).
I thought perhaps I could use parts of the source code for PyGPSClient? I'm not sure where to start. Any advice would be appreciated. Thanks!
import serial

gps = serial.Serial('com5', baudrate=9600)

while True:
    ser_bytes = gps.readline()
    decoded_bytes = ser_bytes.decode("utf-8")
    data = decoded_bytes.split(",")
    if data[0] == '$GNRMC':
        lat_nmea = (data[3], data[4])
        lat_degrees = float(lat_nmea[0][0:2])
        lat_minutes = float(lat_nmea[0][2:])
        lat = lat_degrees + (lat_minutes / 60)
        lon_nmea = (data[5], data[6])
        lon_degrees = float(lon_nmea[0][:3])
        lon_minutes = float(lon_nmea[0][3:])
        lon = lon_degrees + (lon_minutes / 60)
        if lat_nmea[1] == 'S':
            lat = -lat
        if lon_nmea[1] == 'W':
            lon = -lon
        print("%0.8f" % lat, ',' "%0.8f" % lon)
A:
Check out the rtk_example.py script here:
https://github.com/semuconsulting/pygnssutils/blob/main/examples/rtk_example.py
(pygnssutils is the core package used by PyGPSClient)
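For a sense of what the script-only approach involves, here is a minimal, hypothetical sketch of an NTRIP-to-serial bridge using only the standard library and pyserial. The caster host, port, mountpoint, and credentials are placeholders, there is no error handling, and real clients (such as pygnssutils) also send periodic NMEA GGA position reports back to the caster:
import base64
import socket

import serial

# All of these values are placeholders.
CASTER, PORT, MOUNTPOINT = 'caster.example.com', 2101, 'MOUNT'
CREDENTIALS = base64.b64encode(b'user:password').decode()

gps = serial.Serial('com5', baudrate=9600, timeout=1)
sock = socket.create_connection((CASTER, PORT))
request = (
    f'GET /{MOUNTPOINT} HTTP/1.0\r\n'
    f'User-Agent: NTRIP simple-client\r\n'
    f'Authorization: Basic {CREDENTIALS}\r\n\r\n'
)
sock.sendall(request.encode())

while True:
    rtcm = sock.recv(1024)      # raw RTCM3 correction bytes from the caster
    if rtcm:
        gps.write(rtcm)         # forward corrections to the ZED-F9P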
|
I'm not sure how to use RTK without a desktop app
|
I'm using a ZED-F9P.
Below is the Python script I've made for printing the Latitude and Longitude without correction data, but now I'd like to try and get more accurate with RTK.
I've become familiar with desktop applications for applying RTCM corrections, like PyGPSClient and u-center, but I'd like to be able to achieve an RTK fix within a Python script.
I say this because my goal is to achieve RTK on an Arduino or similar device, then send that to the cloud where I can compare it to an identical device in another location (i.e. get the distance between the two).
I thought perhaps I could use parts of the source code for PyGPSClient? I'm not sure where to start. Any advice would be appreciated. Thanks!
import serial

gps = serial.Serial('com5', baudrate=9600)

while True:
    ser_bytes = gps.readline()
    decoded_bytes = ser_bytes.decode("utf-8")
    data = decoded_bytes.split(",")
    if data[0] == '$GNRMC':
        lat_nmea = (data[3], data[4])
        lat_degrees = float(lat_nmea[0][0:2])
        lat_minutes = float(lat_nmea[0][2:])
        lat = lat_degrees + (lat_minutes / 60)
        lon_nmea = (data[5], data[6])
        lon_degrees = float(lon_nmea[0][:3])
        lon_minutes = float(lon_nmea[0][3:])
        lon = lon_degrees + (lon_minutes / 60)
        if lat_nmea[1] == 'S':
            lat = -lat
        if lon_nmea[1] == 'W':
            lon = -lon
        print("%0.8f" % lat, ',' "%0.8f" % lon)
|
[
"Check out the rtk_example.py script here:\nhttps://github.com/semuconsulting/pygnssutils/blob/main/examples/rtk_example.py\n(pygnssutils is the core package used by PyGPSClient)\n"
] |
[
0
] |
[] |
[] |
[
"gps",
"ntrip",
"python",
"rtk"
] |
stackoverflow_0074470405_gps_ntrip_python_rtk.txt
|
Q:
c# how do i cut a string with a random length in half?
I am trying to learn programming by doing some simple exercises online,
and after searching I couldn't find an answer.
Problem:
public static void Main(string[] args)
{
    // get sentence
    Console.WriteLine("type a sentence: ");
    string Sentence = Console.ReadLine();

    // insert code for cutting sentence in half

    // display first half of the sentence
    Console.Write(firstHalf);
    Console.WriteLine();
}
thanks in advance !
A:
You can use the String.Substring method for that.
string firsthalf = Sentence.Substring(0, Sentence.Length/2);
The first parameter 0 is the starting point of the substring and the second denotes how many characters the substring should include.
The String.Length property helps you to determine the length of the string.
Important note:
When you divide the length by 2 you need to know that it is an integer division! That means that 3/2 = 1 and 1/2 = 0, so if your string is only 1 character long you will get an empty string as the first half ;) and if it is 3 letters long you get only the first letter.
Good fortune with the learning :)
A:
You can get the length of the string using the Length property and use Substring to take half of the string
firstHalf = s.Substring(0, s.Length / 2)
A:
You can use the range operator ..:
var firstHalf = sentence[..(sentence.Length / 2)];
source
A:
You can use Remove:
var firstHalf = sentence.Remove(sentence.Length/2);
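For completeness, a small sketch (my own, combining the answers above) that also captures the second half and makes the integer-division behaviour explicit:
// Using the question's variable name; for odd lengths the extra
// character ends up in the second half, since Length / 2 rounds down.
int mid = Sentence.Length / 2;
string firstHalf = Sentence.Substring(0, mid);
string secondHalf = Sentence.Substring(mid);
Console.Write(firstHalf);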
|
c# how do i cut a string with a random length in half?
|
I am trying to learn programming by doing some simple exercises online,
and after searching I couldn't find an answer.
Problem:
public static void Main(string[] args)
{
    // get sentence
    Console.WriteLine("type a sentence: ");
    string Sentence = Console.ReadLine();

    // insert code for cutting sentence in half

    // display first half of the sentence
    Console.Write(firstHalf);
    Console.WriteLine();
}
thanks in advance !
|
[
"You can use the String.Substring method for that.\nstring firsthalf = Sentence.Substring(0, Sentence.Length/2);\n\nThe first parameter 0 is the starting point of the substring and the second denotes how many characters the substring should include.\nThe String.Length property helps you to determine the length of the string.\nImportant note:\nWhen you divide the length by 2 you need to know that it is an integer division! That means that 3/2 = 1 and 1/2 = 0 so if your string is only 1 character long you will be an empty string as the first half ;) and if it is 3 letters long you get only the first letter.\nGood fortune with the learning :)\n",
"You can get the length of the string using the Length property and use Substring to take half of the string \n firstHalf = s.Substring(0, s.Length / 2)\n\n",
"You can use the range operator ..:\nvar firstHalf = sentence[..(sentence.Length / 2)];\n\nsource\n",
"You can use Remove:\nvar firstHalf = sentence.Remove(sentence.Length/2);\n\n"
] |
[
3,
1,
1,
0
] |
[] |
[] |
[
"c#",
"string"
] |
stackoverflow_0046115537_c#_string.txt
|
Q:
Why is it impossible to throw an exception from __toString()?
Why is it impossible to throw an exception from __toString()?
class a
{
    public function __toString()
    {
        throw new Exception();
    }
}

$a = new a();
echo $a;
the code above produces this:
Fatal error: Method a::__toString() must not throw an exception in /var/www/localhost/htdocs/index.php on line 12
I was pointed to http://php.net/manual/en/migration52.incompatible.php where this behavior is described, but why? Is there any reason for it?
Maybe someone here knows?
On the bug tracker the PHP dev team, as usual, says nothing but "see manual": http://bugs.php.net/50699
A:
After a couple searches I found this, which says:
Johannes explained that there is no way to ensure that an exception thrown during a cast to string would be handled correctly by the Zend Engine, and that this won't change unless large parts of the Engine are rewritten. He added that there have been discussions about such issues in the past, and suggested that Guilherme check the archives.
The Johannes referenced above is the PHP 5.3 Release Manager, so it's probably as "official" an explanation as you might find as to why PHP behaves this way.
The section goes on to mention:
__toString() will, strangely enough, accept trigger_error().
So not all is lost in terms of error reporting within __toString().
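A hedged sketch of what that workaround might look like (render() is a hypothetical string-building helper; the pattern converts the exception into a fatal-style error instead of letting it escape the string cast):
public function __toString()
{
    try {
        return $this->render();  // hypothetical helper that builds the string
    } catch (Exception $e) {
        // __toString() may not throw (pre-PHP 7.4), but trigger_error() is accepted.
        trigger_error((string) $e, E_USER_ERROR);
        return '';  // unreachable after E_USER_ERROR, but keeps the signature honest
    }
}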
A:
My guess would be that __toString is hackish and therefore exists outside of the typical stack. A thrown exception, then, wouldn't know where to go.
A:
In response to the accepted answer, I came up with a (perhaps) better way to handle exceptions inside __toString():
public function __toString()
{
    try {
        // ... do some stuff
        // and try to return a string
        $string = $this->doSomeStuff();
        if (!is_string($string)) {
            // we must throw an exception manually here because if $value
            // is not a string, PHP will trigger an error right after the
            // return statement, thus escaping our try/catch.
            throw new \LogicException(__CLASS__ . "__toString() must return a string");
        }
        return $string;
    } catch (\Exception $exception) {
        $previousHandler = set_exception_handler(function () {
        });
        restore_error_handler();
        call_user_func($previousHandler, $exception);
        die;
    }
}
This assumes there is an exception handler defined, which is the case for most frameworks. As with the trigger_error method, doing this defeats the purpose of try..catch, but it is still much better than dumping output with echo. Also, many frameworks transform errors into exceptions, so trigger_error won't work anyway.
As an added bonus, you'll get a full stack-trace as with normal exceptions and the normal dev-production behaviour of your framework of choice.
Works very well in Laravel, and I'm pretty sure it'll work in pretty much all the modern PHP frameworks out there.
Screenshot relevant:
note: in this example, output() is called by a __toString() method.
A:
It seems that as of PHP 7.4, throwing exceptions from __toString() is allowed. I ran a PHP 7.2 compatibility check that said so and pointed at Doctrine's StaticReflectionClass and StaticReflectionProperty.
Please find more information about the proposal https://wiki.php.net/rfc/tostring_exceptions
A:
Since PHP 7.4 this problem has been fixed. Migration article describing all the changes in PHP 7.4, including this one.
THE RFC that contains more details about the past problem: RFC
Example:
class TopSecret {
    public function __toString() {
        throw new Exception('You are not allowed to print this confidential information!');
    }
}

print new TopSecret();
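A small usage sketch (mine, not from the answer) showing that on PHP 7.4+ the exception is now catchable like any other:
try {
    print new TopSecret();
} catch (Exception $e) {
    echo 'Caught: ' . $e->getMessage();
}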
|
Why is it impossible to throw an exception from __toString()?
|
Why is it impossible to throw an exception from __toString()?
class a
{
    public function __toString()
    {
        throw new Exception();
    }
}

$a = new a();
echo $a;
the code above produces this:
Fatal error: Method a::__toString() must not throw an exception in /var/www/localhost/htdocs/index.php on line 12
I was pointed to http://php.net/manual/en/migration52.incompatible.php where this behavior is described, but why? Is there any reason for it?
Maybe someone here knows?
On the bug tracker the PHP dev team, as usual, says nothing but "see manual": http://bugs.php.net/50699
|
[
"After a couple searches I found this, which says:\n\nJohannes explained that there is no way to ensure that an exception thrown during a cast to string would be handled correctly by the Zend Engine, and that this won't change unless large parts of the Engine are rewritten. He added that there have been discussions about such issues in the past, and suggested that Guilherme check the archives.\n\nThe Johannes referenced above is the PHP 5.3 Release Manager, so it's probably as \"official\" an explanation as you might find as to why PHP behaves this way.\nThe section goes on to mention:\n\n__toString() will, strangely enough, accept trigger_error().\n\nSo not all is lost in terms of error reporting within __toString().\n",
"My guess would be that __toString is hackish and therefore exists outside of the typical stack. A thrown exception, then, wouldn't know where to go.\n",
"in response to the accepted answer, I came up with a (perhaps) better way to handle exceptions inside __toString():\npublic function __toString()\n{\n try {\n // ... do some stuff\n // and try to return a string\n $string = $this->doSomeStuff();\n if (!is_string($string)) {\n // we must throw an exception manually here because if $value\n // is not a string, PHP will trigger an error right after the\n // return statement, thus escaping our try/catch.\n throw new \\LogicException(__CLASS__ . \"__toString() must return a string\");\n }\n\n return $string;\n } catch (\\Exception $exception) {\n $previousHandler = set_exception_handler(function (){\n });\n restore_error_handler();\n call_user_func($previousHandler, $exception);\n die;\n }\n}\n\nThis assumes there is an exception handler defined, which is the case for most frameworks. As with the trigger_error method, doing this will defy the purpose of try..catch, but still it is much better than dumping output with echo. Also, many framework transform errors into exceptions, so trigger_error won't work anyway. \nAs an added bonus, you'll get a full stack-trace as with normal exceptions and the normal dev-production behaviour of your framework of choice.\nWorks very well in Laravel, and I'm pretty sure it'll work in pretty much all the modern PHP frameworks out there.\nScreenshot relevant:\nnote: in this example, output() is called by a __toString() method.\n\n",
"It seems that as of php 7.4 throwing exception from __toString() is allowed. I had a php7.2 compatibility check and it said so and pointed the Doctrine StaticReflectionClass and StaticReflectionProperty. \nPlease find more information about the proposal https://wiki.php.net/rfc/tostring_exceptions\n",
"Since PHP 7.4 this problem has been fixed. Migration article describing all the changes in PHP 7.4, including this one.\nTHE RFC that contains more details about the past problem: RFC\nExample:\nclass TopSecret {\n public function __toString() {\n throw new Exception('You are not allowed to print this confidential information!');\n }\n}\n\nprint new TopSecret();\n\n"
] |
[
48,
10,
9,
5,
1
] |
A:
I don't think the rationale for this decision has ever been publicized. It looks like some internal architectural limitation.
On a more abstract level, it kind of makes sense: an object should be able to return a string representation of itself, and there is no reason for that kind of action to fail.
A:
I found a simple solution:
Just return a non-string type from __toString when the conversion to string fails: NULL, FALSE or even an Exception.
This will cause output like this (in the php -a interactive shell):
Catchable fatal error: Method MyClass::__toString() must return a string value in php shell code on line 1
Q:
Steps for Machine Learning in Pytorch
When we define our model in PyTorch, we run through a number of epochs during training. I want to know what happens within each epoch iteration.
What is the difference between the two following snippets of code, in which the order of operations differs? The two versions are:
the one I found in tutorials, and
the code provided by my supervisor for the project.
Tutorial Version
for i in range(epochs):
    logits = model(x)
    loss = loss_fcn(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
Supervisor Version
for i in range(epochs):
    logits = model(x)
    loss = loss_fcn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
A:
The only difference is when the gradients are cleared (when you call optimizer.zero_grad()): the first version zeros out the gradients after updating the weights (after optimizer.step()), while the second zeros them out before computing the gradients (before loss.backward()). Both versions run fine. The only difference shows up in the first iteration, where the second snippet is better because it makes sure any residual gradients are zero before calculating new ones. Check this link that explains why you would zero the gradients.
A:
In PyTorch, we typically want to explicitly set the gradients to zero for every mini-batch during the training phase, before backpropagation (i.e., before calling loss.backward() to compute the gradients for the weights and biases), because PyTorch accumulates the gradients on subsequent backward passes.
Regarding your question, both snippets do the same thing; the important detail is calling optimizer.zero_grad() before loss.backward().
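Here is a minimal sketch of the accumulation behaviour described above (a bare tensor instead of an optimizer; the printed values are the point):
import torch

w = torch.ones(1, requires_grad=True)

(2 * w).sum().backward()
print(w.grad)   # tensor([2.])

(2 * w).sum().backward()   # without zeroing, the new gradient is added on top
print(w.grad)   # tensor([4.])

w.grad.zero_()  # this is what optimizer.zero_grad() does for every parameter
print(w.grad)   # tensor([0.])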
A:
Here is a pseudo code for the iteration:
run model
compute loss
<-- zero grads here...
go backward (compute grads if no grads otherwise accumulate)
update weights
<-- ...or here
Basically you zero grads before or after going backward and updating the weights. Both code snippets are OK.
A:
The main difference between the two snippets is the order in which the optimizer's zero_grad() and step() methods are called relative to loss.backward().
In the tutorial version, zero_grad() is called after loss.backward() and optimizer.step(); in the supervisor version, zero_grad() is called before loss.backward().
Because PyTorch accumulates gradients across backward passes, what matters is that the gradients are cleared exactly once per iteration, which both versions do. The only practical difference is the very first iteration: clearing before the backward pass guarantees that no stale gradients are mixed into the weight update.
It is generally recommended to call zero_grad() before the backward pass for that reason, although the exact placement may depend on the specific details of the model and the optimization algorithm being used (for example, when intentionally accumulating gradients over several mini-batches).
Q:
Creating private extension for View's subviews and constants in separate file
I have a View that is a very complex structure with many different subviews.
Up to now I have had them all in one file, but it grew to over 400 lines of code (I use SwiftLint to check for code rule violations), so I thought of moving those subviews and constants to a separate file by creating an extension.
What I want is for that extension to be visible only to the particular view it extends, but also for the extension to be kept in a separate file, to reduce the line count of the original view's file.
Example:
Up to now I had a situation like this:
File SampleView:
struct SampleView: View {
    var body: some View {
        VStack {
            SampleView.SampleViewConstants.sampleImage
        }
    }
}

private extension SampleView {
    static var sampleImage: some View {
        Image(SampleViewConstants.imageName)
            .resizable()
            .frame(height: SampleViewConstants.imageBackgroundFrameHeight)
            .frame(maxWidth: .infinity)
    }

    struct SampleViewConstants {
        static let imageName: String = "sampleImageName"
        static let imageBackgroundFrameHeight: CGFloat = 56
    }
}
What I want:
File SampleView:
struct SampleView: View {
    var body: some View {
        VStack {
            SampleView.SampleViewConstants.sampleImage
        }
    }
}
File SampleViewConstants:
private extension SampleView {
    static var sampleImage: some View {
        Image(SampleViewConstants.imageName)
            .resizable()
            .frame(height: SampleViewConstants.imageBackgroundFrameHeight)
            .frame(maxWidth: .infinity)
    }

    struct SampleViewConstants {
        static let imageName: String = "sampleImageName"
        static let imageBackgroundFrameHeight: CGFloat = 56
    }
}
Unfortunately, Xcode tells me that my SampleView does not see the SampleViewConstants structure, because it is marked as private and is therefore only valid at file scope.
Is there another way to solve this?
A:
To organize your code you can use @ViewBuilder:
@ViewBuilder func sampleView(name: String = "sampleImageName", height: CGFloat = 56) -> some View {
    let imageName = name
    let imageBackgroundFrameHeight = height

    VStack {
        Image(imageName)
            .resizable()
            .frame(height: imageBackgroundFrameHeight)
            .frame(maxWidth: .infinity)
    }
}
and call the function in the body:
sampleView() //This will take the default values(imageName = "sampleImageName" and imageBackgroundFrameHeight = 56)
sampleView(name: "NewName", height: 80) //This will take on new values "NewName" and height of 80
You can also use custom view modifiers like so:
struct Shadowfy: ViewModifier {
    func body(content: Content) -> some View {
        content // the view you attach the modifier to; shadowfy adds these shadows so the code isn't repeated everywhere
            .shadow(color: .red, radius: 1, x: 1, y: -1)
            .shadow(color: .green, radius: 2, x: -2, y: 2)
    }
}

extension View {
    func shadowfy() -> some View {
        self.modifier(Shadowfy())
    }
}
and then you just use it like this:
Text("MyText")
.shadowfy()
Q:
how to get access to the messages that we send itself to telegram bot?
I have read the Telegram bot documentation and I know it has a getUpdates endpoint which returns information about the messages that users type and send to the bot.
So when the webhook is disabled and I manually type some messages to the bot, I can get those messages from https://api.telegram.org/<token>/getUpdates
{"ok":true,"result":[{"update_id":301215553,
"message":{"message_id":31,"from":{"id":1235349470,"is_bot":false,"first_name":"XYZ","username":"ABC","language_code":"en"},"chat":{"id":1235349470,"first_name":"XYZ","username":"ABC","type":"private"},"date":1669990759,"text":"/help","entities":[{"offset":0,"length":5,"type":"bot_command"}]}}
but I couldn't get the messages that I send as the bot itself using the sendMessage endpoint https://api.telegram.org/<token>/sendMessage?chat_id=-1001659408929&text=my sample text
After the above request, when I want to access the message "my sample text" which I sent as the bot, I cannot list it with the getUpdates endpoint.
So is there any way to get that information from Telegram, or does Telegram not support this for these messages?
So basically, messages sent by the bot itself do not get listed in getUpdates.
A:
Each method listed in the Bot API has a return value that you can access. sendMessage returns the resulting Message.
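For example, a minimal sketch in Python using the requests library (the token is a placeholder):
import requests

TOKEN = "<token>"  # your bot token
resp = requests.get(
    f"https://api.telegram.org/bot{TOKEN}/sendMessage",
    params={"chat_id": -1001659408929, "text": "my sample text"},
)
payload = resp.json()
# The sent Message object comes back in the response itself,
# so there is no need to look for it in getUpdates:
print(payload["ok"], payload["result"]["message_id"], payload["result"]["text"])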
Q:
how to allow s3 images/object to be downloaded only from by website request using presigned url?
I am in serious trouble. I have been uploading to an S3 bucket using the aws-sdk for JavaScript and downloading through the object link, using S3 to store images/assets for a Next.js website. I have set the bucket to read-only for everyone. I just realized that this is a serious problem, as anyone will be able to download from my bucket an unlimited number of times, and the cost will go through the roof. How can I restrict downloads so they only come from my website through a presigned link (I haven't configured presigned links on my side)? Please help me. I will provide more details below:
current bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicRead",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::bucketname/*"
}
]
}
CORS:
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"PUT",
"POST",
"DELETE",
"GET",
"HEAD"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [
"x-amz-server-side-encryption",
"x-amz-request-id",
"x-amz-id-2"
],
"MaxAgeSeconds": 3000
}
]
A:
To restrict access to your Amazon S3 objects so that they can only be downloaded from your website, you will need to update your bucket policy and CORS configuration to allow requests only from your website's domain.
First, update your bucket policy to restrict the s3:GetObject and s3:GetObjectVersion actions to requests that originate from your website. A bucket policy cannot name a website in its Principal element, so the restriction is expressed as a Condition on the referring page: keep "Principal": "*" and add the following to the policy statement:
"Condition": {
    "StringLike": { "aws:Referer": ["http://<YOUR WEBSITE DOMAIN>/*"] }
}
Next, update your CORS configuration to allow requests only from your website's domain. You can do this by replacing the AllowedOrigins element in your CORS configuration with the following:
"AllowedOrigins": [
    "http://<YOUR WEBSITE DOMAIN>"
],
A:
You can restrict access to objects based on the 'referring' website.
From Bucket policy examples - Amazon Simple Storage Service:
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Principal":"*",
"Action": "s3:GetObject",
"Resource":"arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
"Condition":{
"StringLike":{"aws:Referer":["http://www.example.com/*","http://example.com/*"]}
}
}
]
}
However, restricting access with referer is not secure since it is easy to fake this information.
The more secure method would be to use Amazon S3 pre-signed URLs, which provide time-limited access to private objects in Amazon S3. These URLs must be generated by your back-end, typically after a user has authenticated to your website. This is ideal for serving private/confidential content.
However, if you are simply serving content for a normal website that does not require authentication, then referer is more appropriate.
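As a minimal sketch of the pre-signed URL approach, your back-end could generate a time-limited link like this (Python/boto3; the bucket and key names are placeholders):
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucketname", "Key": "images/photo.jpg"},
    ExpiresIn=60,  # the link stops working after 60 seconds
)
print(url)  # hand this URL only to users your website has authenticated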
Q:
Pytorch: How to format data before execution of machine learning
I'm learning how to use PyTorch and I was able to get a grasp of the overall process of constructing and executing ML models. However, what I am not able to grasp is how to "format" or "reshape" the data before executing the model. I keep getting errors like:
RuntimeError: size mismatch, m1: [1 x 700], m2: [1 x 1] at c:\programdata\miniconda3\conda-bld\pytorch_1524543037166\work\aten\src\th\generic/THTensorMath.c:2033
Or,
Expected object of type Variable[torch.DoubleTensor] but found type Variable[torch.FloatTensor] for argument #1 ‘mat2’
So, I have a csv file named "train.csv" with attributes called 'x' and 'y', and there are 700 samples in it. I want to perform a simple linear regression on the data, and I parse the file using pandas. How do I format or reshape the data so that the model executes smoothly? How does PyTorch iterate through input data?
The most recent code I executed is:
import torch
import torch.nn as nn
from torch.autograd import Variable
import pandas as pd

class Linear_Reg(nn.Module):
    def __init__(self, inp_sz, out_sz):
        super(Linear_Reg, self).__init__()
        self.linear = nn.Linear(inp_sz, out_sz)

    def forward(self, x):
        out = self.linear(x)
        return out

train = pd.read_csv('C:\\Users\\hgstr\\Jupyter_Files\\Data_Sets\\linear_regression\\train.csv')
test = pd.read_csv('C:\\Users\\hgstr\\Jupyter_Files\\Data_Sets\\linear_regression\\test.csv')

x_train = torch.Tensor(train['x'])
y_train = torch.Tensor(train['y'])
x_test = torch.Tensor(test['x'])
y_test = torch.Tensor(test['y'])

x_train = torch.Tensor(x_train)
x_train = x_train.view(1,-1)

#================================
input_sz = 1
output_sz = 1
epochs = 60
learning_rate = 0.001
#================================

model = Linear_Reg(input_sz, output_sz)
crit = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), learning_rate)

for e in range(epochs):
    opt.zero_grad()
    out = model(x_train)
    loss = crit(out, y_train)
    loss.backward()
    opt.step()
    print('epoch {}, loss {}'.format(e,loss.data[0]))
And it gave out the following:
RuntimeError: size mismatch, m1: [1 x 700], m2: [1 x 1] at c:\programdata\miniconda3\conda-bld\pytorch_1524543037166\work\aten\src\th\generic/THTensorMath.c:2033
Solutions?
A:
According to the error, I believe that your data is not correctly formatted. The tensor should be in the form [700, 1] (batch x features) and yours is [1, 700]. This makes the model 'think' that you are feeding one entry with 700 features instead of 700 entries with only 1 feature.
Reshaping the x_train variable should make the code work: replace the line x_train = x_train.view(1,-1) with x_train = x_train.view(-1, 1), and give y_train the same [700, 1] shape.
Regarding the second error, it can happen that after reading the .csv the values are of type Double (due to pd.read_csv), while PyTorch creates tensors as Floats by default. Casting your input data before feeding it to the model should be enough: model(x_train.float()), or specify the type at creation: x_train = torch.FloatTensor(train['x']). Note that you should cast all the tensors that are not Floats.
edit: This piece of code works for me
import torch
import torch.nn as nn
import pandas as pd

class Linear_Reg(nn.Module):
    def __init__(self, inp_sz, out_sz):
        super(Linear_Reg, self).__init__()
        self.linear = nn.Linear(inp_sz, out_sz)

    def forward(self, x):
        out = self.linear(x)
        return out

train = pd.read_csv('yourpath')
test = pd.read_csv('yourpath')

x_train = torch.Tensor(train['x']).to(torch.float).view(700, 1)
y_train = torch.Tensor(train['y']).to(torch.float).view(700, 1)

x_test = torch.Tensor(test['x']).to(torch.float).view(300, 1)
y_test = torch.Tensor(test['y']).to(torch.float).view(300, 1)

# ================================
input_sz = 1
output_sz = 1
epochs = 60
learning_rate = 0.001
# ================================

model = Linear_Reg(input_sz, output_sz)
crit = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), learning_rate)

for e in range(epochs):
    opt.zero_grad()
    out = model(x_train)

    loss = crit(out, y_train)
    loss.backward()
    opt.step()

    print('epoch {}, loss {}'.format(e, loss.item()))  # loss.item(): indexing loss.data[0] fails on 0-dim tensors in current PyTorch
A:
To solve this issue, you need to reshape the tensor containing your training data so that it has the correct dimensions for your model. The model expects input of shape [N x 1] (one feature per sample), but your reshaped training data has the shape [1 x 700].
To reshape the tensor, you can use the .view() method. For example, to give the x_train data the correct dimensions, you can do the following:
x_train = x_train.view(-1, 1)
This reshapes the tensor to a size of [700 x 1] by fixing the second dimension to 1 and letting the first be inferred from the size of the original tensor.
Additionally, you are performing a regression task with a single input and output dimension. Note that the MSE loss you are already using is the standard squared-error (L2-style) loss for regression; a common alternative is the mean absolute error, which you can switch to by replacing the line:
crit = nn.MSELoss()
with:
crit = nn.L1Loss() # mean absolute error
After making these changes, the model should be able to run without errors.
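As a quick sanity-check sketch (column names follow the question; the shape comment is the point):
import pandas as pd
import torch

train = pd.read_csv("train.csv")
x_train = torch.tensor(train['x'].values, dtype=torch.float32).view(-1, 1)
y_train = torch.tensor(train['y'].values, dtype=torch.float32).view(-1, 1)
print(x_train.shape)  # torch.Size([700, 1]) -- 700 samples, 1 feature each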
Q:
Demo Code for Detectron Not Detecting Object Instances
I am trying to get the demo code for Detectron2 working locally on my laptop. Everything appears to run correctly, but no object instances are detected, even when I use the image from the Colab demo.
I am running on a non-GPU Mac. I followed the installation instructions to install Detectron. I have the following module versions on my machine:
detectron2@git+https://github.com/facebookresearch/detectron2.git@ea3b3f22bf1de58008599794f149149ff65d3780
opencv-python==4.5.3.56
torch==1.9.0
torchvision==0.10.0
I copied demo.py, predictor.py, mask_rcnn_R_101_FPN_3x.yaml, and Base-RCNN-FPN.yaml from Detectron's GitHub. I then ran the 'inference demo with pretrained model' command. The specific command was this:
python demo.py --input 000000439715.jpeg --output output --config-file mask_rcnn_R_101_FPN_3x.yaml --opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl MODEL.DEVICE cpu
000000439715.jpeg is the sample image of the man on horseback from the Colab notebook demo. The last line of the output is
000000439715.jpeg: detected 0 instances in 6.77s
The image in the output directory has no annotation on it.
The logging output looks okay to me. The only thing that may be an indication of a problem is a warning at the top
[08/28 12:35:18 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='mask_rcnn_R_101_FPN_3x.yaml', input=['000000439715.jpeg'], opts=['MODEL.WEIGHTS', 'detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl', 'MODEL.DEVICE', 'cpu'], output='output', video_input=None, webcam=False)
[08/28 12:35:18 fvcore.common.checkpoint]: [Checkpointer] Loading from detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl ...
[08/28 12:35:18 fvcore.common.checkpoint]: Reading a file from 'Detectron2 Model Zoo'
WARNING [08/28 12:35:19 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
I'm not sure what to do about it though.
I tried not specifying the model weights. I also tried setting the confidence threshold to zero. I got the same results.
Am I doing something wrong? What are the next debugging steps?
A:
I ran into the same problem, with the same warning:
WARNING [xxxxxxxxx fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
and this warning made my results very bad. Eventually I found that I was using the wrong weights file: the config and the checkpoint have to match. (Note that the command in the question pairs the mask_rcnn_R_101_FPN_3x.yaml config with R_50 weights, which produces exactly this kind of warning.)
Hope this helps.
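As a sketch, one way to keep the config and checkpoint paired is to pull both from the same model-zoo entry (this assumes detectron2's standard model_zoo helpers):
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cpu"
# the same yaml name feeds both calls, so the config and the weights cannot drift apart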
Q:
What makes it print the linked list in reverse order?
struct Node
{
    int data;
    Node *next;
};

void myLinkedList( Node* navigatePtr )
{
    if(navigatePtr == NULL)
        return;

    myLinkedList(navigatePtr -> next);
    cout << navigatePtr -> data << " ";
}

int main()
{
    // Assuming that head is a pointer pointing to
    // a linked list 1 -> 2 -> 3 -> 4 -> 5
    myLinkedList(head);
    return 0;
}
This is a question from a past year's exam paper. It asks for the output, which is 5 4 3 2 1, but I do not understand what makes it print the linked list in reverse.
A:
Because that is the order you asked for
myLinkedList(navigatePtr -> next);
cout << navigatePtr -> data << " ";
Try swapping those two around to get the right order
cout << navigatePtr -> data << " ";
myLinkedList(navigatePtr -> next);
Your version prints the rest of the list first, followed by the current item - in other words, reverse order.
BTW, the corrected version, where the recursive call is the very last thing that happens in the function, is called tail recursion. Tail recursion can always be replaced by a simple while loop; that's probably what you should do here (unless you are just practising recursion).
A:
Translated into English: "first print the rest of the list, then print this element".
So, to print [1->2->3], you must first print [2->3] and then 1; to do that, you must first print [3] and then 2; and to do that, you must first print the empty list and then 3.
Printing the empty list does nothing, then you print 3, then 2, and then 1.
Q:
Does svelte can't get script like from head?
I am using Svelte and FullCalendar to make a calendar on my front end.
I used code like this:
<script>
document.addEventListener('DOMContentLoaded', function() {
    var calendarEl = document.getElementById('calendar');
    var calendar = new FullCalendar.Calendar(calendarEl, {
        initialView: 'dayGridMonth'
    });
    calendar.render();
});
</script>

<svelte:head>
    <link href='./fullcalendar/main.css' rel='stylesheet' />
    <script src='./fullcalendar/main.js'></script>
</svelte:head>

<div id='calendar'></div>
But it didn't work at all.
I used svelte:head to put the script and link in, but it says it can't identify (find the name) FullCalendar, which worked in my original JavaScript file (myCallendar.js, which I made).
I also put it in the index.html file, and I put the fullcalendar folder (the CDN files) near src and changed the paths, but those didn't work either.
I really want to use it, but unfortunately it only has React and Vue versions, not a Svelte one.
Is there a way to solve this problem?
A:
In Svelte, DOMContentLoaded isn't the ideal way to make sure the component is mounted. Use onMount instead. See the example below. If you don't want to use onMount, you can also just wait for the element to load using $: if(calendarEl) {//do stuff here}
Also, you can just bind the element to a variable (bind:this) to get a reference to it, instead of calling document.getElementById. Finally, var isn't used in modern JS - use const or let.
REPL: https://svelte.dev/repl/9ac6f5a22eeb4d32bf3bca9197e43d1f?version=3.53.1
<script>
    import {onMount} from "svelte"
    let calendarEl; // let, not const: bind:this assigns the element after mount

    onMount(() => {
        const calendar = new FullCalendar.Calendar(calendarEl, {
            initialView: 'dayGridMonth'
        });
        calendar.render();
    })
</script>

<svelte:head>
    <link href='https://cdn.jsdelivr.net/npm/[email protected]/main.css' rel='stylesheet' />
    <script src='https://cdn.jsdelivr.net/npm/[email protected]/main.js'></script>
</svelte:head>

<div id='calendar' bind:this={calendarEl}></div>
Q:
how to test a insert into in php unit
I have made a PHP CRUD app with OOP, and I need to test many methods, like the insert_record method, which inserts data into my database; everything works perfectly. The problem is that when I try to run my PHPUnit test, it says that my connection failed.
I'm thinking about making a mock for my connection, because my purpose is not to test the connection, but I don't know how I can do that.
My function to insert:
function insert_record($a, $b, $c, $d)
{
    global $db;
    $query = "insert into employees (FirstName,LastName, UserName,Email) values('$a','$b','$c','$d')";
    $result = mysqli_query($db->connection, $query);
    if($result)
        return true;
}
The connection
public $connection;

public function __construct()
{
    $this->db_connect();
}

public function db_connect()
{
    $this->connection = mysqli_connect('localhost','root','','crud', 80);
    if(mysqli_connect_error())
    {
        die(" Connect Failed ");
    }
}
The test I'm trying:
public function testInsert(){
    $operations = new operations();
    $result = $operations->insert_record("TestName", "TestLastName", "TestUserName", "Email");
    $this->assertEquals(true, $result);
}
A:
To create a mock object for your database connection, you can use PHPUnit's mocking facilities. This lets you create a fake or "mocked" version of your database connection to use in your tests instead of the real one. Note that for this to work, insert_record has to receive the connection (for example via the constructor) instead of reaching for the global $db, and it has to call the object-oriented $db->query(...) rather than the procedural mysqli_query(), which a mock cannot intercept.
Here is an example of how you could create a mock database connection using PHPUnit:
// Import PHPUnit dependencies
use PHPUnit\Framework\TestCase;
use PHPUnit\Framework\MockObject\MockObject;

// Create a mock object for the database connection.
// mysqli's OO method is query(); mysqli_query() is only the procedural wrapper.
/** @var MockObject $db */
$db = $this->getMockBuilder(mysqli::class)
    ->disableOriginalConstructor()
    ->onlyMethods(['query'])
    ->getMock();

// Expect query() to be called once with the exact SQL and return success
$db->expects($this->once())
    ->method('query')
    ->with("insert into employees (FirstName,LastName, UserName,Email) values('TestName','TestLastName','TestUserName','Email')")
    ->willReturn(true);

// Inject the mock database connection into your operations class
$operations = new operations($db);

// Run your test as usual
$result = $operations->insert_record("TestName", "TestLastName", "TestUserName", "Email");
$this->assertEquals(true, $result);

With this approach, your test uses the mocked version of the database connection, which returns a successful result when query() is called. This allows you to test the insert_record method without connecting to a real database.
Q:
How to hide json data from network tab
I use Laravel and Vue to manage some data from a DB, and I return JSON from a Laravel controller to Vue.js. I just want to hide the response data from the network tab, or maybe mask it. I haven't done this before. I mean, when I open the network tab I see a request get-users?page=1, and if I double-click it opens this URL http://127.0.0.1:8000/admin/users/get-users?page=1 which shows me all the data, like this:
{
"data": [
{
"id": 1,
"name": "Admin",
"email": "[email protected]",
"email_verified_at": null,
"last_online_at": "2022-12-02 10:27:20",
Is there any way to mask this data to look something like this:
"data": [
{
success: true,
response: null //or true
}
This is how I return the users data:
return new UserResource(User::paginate($paginate));
I want to hide the data from this tab:
http://127.0.0.1:8000/admin/users/get-users?page=1
A:
Requests will be shown.
This cannot be stopped; the application is making requests, and these will be logged to the network tab by the browser. If there are security concerns, you should handle them a different way: do not send data to the client that the client should not be allowed to access in the first place.
To help keep the data safe in transit, run over HTTPS, so that on the off chance the traffic gets intercepted it will not be usable. Most of this data was provided by the user in the first place, meaning it should not need to be hidden from the network tab.
In the worst-case scenario someone physically sits at the user's computer and reads what is in the network tab, but that is a scenario that can't be accounted for when developing applications. You could base64-encode the data being sent back and forth so it is less readable to anyone who sees the network tab, but keep in mind that base64 is a reversible encoding, not encryption.
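A quick illustration (Python for brevity) that base64 only obscures the payload and anyone can reverse it:
import base64

payload = b'{"email": "admin@example.com"}'
encoded = base64.b64encode(payload)
print(encoded)                    # something like b'eyJlbWFpbCI6...'
print(base64.b64decode(encoded))  # the original bytes come straight back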
Q:
retuning a promise using async await pattern with some delay
I am trying to return some test data from a method that calls a rest api. To simulate the rest api call, I am adding some delay to return a promise using async-await pattern, but its not working as expected.
My understanding of async-await pattern in JavaScript is that any value returned from an async function is returned as a promise, so the value 100 should be returned as a promise by the function and therefore the last statement using .then should show 100, but it's not.
Question
What is wrong with the code snippet below that causes the alert in the last line to show undefined rather than 100?
async function f() {
function delayedResponse() {
setTimeout(function() { return 100}, 5000);
}
return await delayedResponse();
}
f().then(alert); // 100
A:
You don't return anything from your delayedResponse, so the result is undefined.
Instead, in order to achieve what you expect, you can create a promise explicitly and resolve a value using a timeout:
async function f() {
return new Promise(resolve => {
setTimeout(function() { resolve(100)}, 5000);
});
}
f().then(alert); // 100
A:
NodeJS 14.x or Browser API
async function delay(msecs: number) {
return new Promise((resolve) => setTimeout(resolve, msecs));
}
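A quick usage sketch of this helper, applied to the original question (the value 100 is arbitrary):
async function f() {
  await delay(5000); // pause ~5 seconds
  return 100;        // an async function wraps this in a resolved promise
}

f().then(alert); // 100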
NodeJS 16.x or later
import { setTimeout } from "timers/promises";
const result = await setTimeout(msecs, 'somevalue')
Note that the order of the setTimeout() arguments has changed!
The global, callback-based setTimeout() is what you get unless you import from "timers/promises".
References
Node 16 - Timers Promises API
Q:
Docker and Process Virtual Machine
Is Docker (container technology) a "Process Virtual Machine"?
If different, in what ways are they different?
A:
Docker is a containerization technology, while a virtual machine is a virtualization technology. Both involve running multiple processes or applications on a single physical machine, but they differ in how they implement this.
A system virtual machine is a virtualization technology that creates a separate, isolated environment for each guest running on a physical machine. This is accomplished by simulating the hardware and operating system environment of a separate physical machine, allowing each guest to run as if it were on its own dedicated machine. (A process virtual machine, by contrast, is something like the JVM: a runtime that hosts a single program rather than a simulated machine, so the two terms are not synonyms.)
In contrast, Docker uses containers to run multiple applications on a single physical machine. A container is a lightweight, standalone, executable package of an application, which includes the application code, libraries, dependencies, and runtime. Unlike a system virtual machine, which simulates a separate hardware and operating system environment for each guest, Docker containers share the host machine's operating system kernel and use operating-system-level virtualization to isolate applications from each other. This allows Docker containers to be more lightweight and efficient than virtual machines.
Overall, the main difference is that Docker uses kernel-sharing containers to isolate applications from each other, while a system virtual machine simulates a separate hardware and operating system environment for each guest; and in neither sense is Docker a process virtual machine of the JVM kind.
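As a quick illustration of the kernel-sharing point, a container reports the host's kernel version, because there is no separate guest OS (a sketch assuming Docker is installed; the alpine image is just an example):
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same kernel version, printed from inside a container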
Q:
Screen Recorded Through Python Script is Too fast
I could record the screen, but whenever I play the video it is very fast. How can I solve this issue?
import pyautogui
import cv2
import numpy as np
import time  # needed for time.sleep() below
resolution = (1920, 1080)
codec = cv2.VideoWriter_fourcc(*"XVID")
filename = "Recording.avi"
fps = 60.0
out = cv2.VideoWriter(filename, codec, fps, resolution)
cv2.namedWindow("Live", cv2.WINDOW_NORMAL)
cv2.resizeWindow("Live", 480, 270)
while True:
img = pyautogui.screenshot()
frame = np.array(img)
frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
out.write(frame)
cv2.imshow('Live', frame)
if cv2.waitKey(1) == ord('q'):
break
time.sleep(1/30)
out.release()
cv2.destroyAllWindows()
A:
There are a few things you can try to make the recorded video play at a normal speed. One possible solution is to reduce the number of frames per second (fps) declared for the output file. In your code you set fps to 60.0, which is higher than the rate your capture loop can actually sustain, so the recorded video plays back too quickly. Try setting fps to 25 or 30. You can also increase the amount of time that sleep() pauses between frames, so that the actual capture rate matches the fps you declared to the VideoWriter.
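A minimal sketch of that fix, assuming the same pyautogui/cv2 setup as in the question (the 30 fps figure is illustrative; the key point is that the declared rate and the loop pacing agree):
import time

import cv2
import numpy as np
import pyautogui

FPS = 30.0  # declare the rate we actually intend to capture at
resolution = (1920, 1080)
out = cv2.VideoWriter("Recording.avi",
                      cv2.VideoWriter_fourcc(*"XVID"), FPS, resolution)
cv2.namedWindow("Live", cv2.WINDOW_NORMAL)
cv2.resizeWindow("Live", 480, 270)

next_frame = time.time()
while True:
    img = pyautogui.screenshot()
    frame = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
    out.write(frame)
    cv2.imshow("Live", frame)
    if cv2.waitKey(1) == ord('q'):
        break
    # pace the loop so the real capture rate matches the declared FPS
    next_frame += 1.0 / FPS
    time.sleep(max(0.0, next_frame - time.time()))

out.release()
cv2.destroyAllWindows()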
Q:
Convert long series keys to hex, then Choose desired values from a list of long separated keys
I have code to generate series of keys as in below:
def Keygen (x,r,size):
key=[]
for i in range(size):
x= r*x*(1-x)
key.append(int((x*pow(10,16))%256))
return key
if __name__=="__main__":
key=Keygen(0.45,0.685,92)#Intial Parameters
print('nx key:', key, "\n")
The output keys are:
nx key: [0, 11, 53, 42, 111, 38, 55, 102, 252, 155, 54, 219, 149, 220, 235, 177, 140, 46, 209, 249, 46, 241, 218, 243, 6, 166, 247, 106, 33, 24, 220, 185, 129, 182, 214, 210, 180, 28, 84, 117, 228, 213, 205, 240, 125, 37, 181, 234, 246, 54, 22, 195, 38, 174, 212, 166, 9, 237, 25, 225, 81, 23, 244, 235, 171, 196, 111, 182, 227, 26, 22, 246, 35, 52, 225, 249, 90, 237, 162, 111, 76, 52, 35, 24, 16, 11, 7, 5, 3, 2, 1, 1]
I tried to convert all the key values to hex using the following code:
K=hex(key)
print('nx key:', key, "\n")
But when I run it I get the error "TypeError: 'list' object cannot be interpreted as an integer".
Then I tried "K = hex(ord(key))" but got another error: "TypeError: ord() expected string of length 1, but list found".
What I need is to convert all the keys to hex, then select just 4 keys so they look like this:
K = (0x3412, 0x7856, 0xBC9A, 0xF0DE)
A:
In order to get hex values for your list of keys, you have to iterate over the list and turn each element separately into a hex value:
K = tuple(hex(x) for x in key)
Then you can select 4 random keys (no repeat) from this list by:
import random
selectedKeys = random.sample(K, 4)
A:
Maybe a better name for key is keys, since it is a list of keys. That said,
[hex(key) for key in keys]
should do the trick.
This is a usage of a list comprehension.
A:
I might be able to help you with your error.
Based on your output with your values wrapped in [], you have a list for key. What you then want to do is iterate through each element in that list to apply your hex.
hexed_keys = [hex(i) for i in key]
Good luck and happy coding! Please up vote my answer if useful so I can contribute more on Stack Overflow:)
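One detail the answers above leave open: the desired K = (0x3412, 0x7856, 0xBC9A, 0xF0DE) values are 16-bit, which looks like consecutive byte pairs combined little-endian (0x12, 0x34 -> 0x3412). Assuming that reading of the example, a sketch:
key = [0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC, 0xDE, 0xF0]  # illustrative byte values
# combine consecutive byte pairs little-endian: (lo, hi) -> (hi << 8) | lo
K = tuple(key[i] | (key[i + 1] << 8) for i in range(0, len(key), 2))
print(tuple(hex(v) for v in K))  # ('0x3412', '0x7856', '0xbc9a', '0xf0de')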
Q:
Removing a column in java from a 2d array
I am trying to write a Java method that will take a 2D array and add the contents to a new 2D array, except for a specified column. So if I have the 2D array
1234
1234
1234
and I want to remove the 3rd column, I would like to get
124
124
124
The problem is I can't figure out how to get this to work. The best I can come up with is the following method.
private static int[][] removeCol(int [][] array, int colRemove)
{
int row = array.length;
int col = array[0].length;
int [][] newArray = new int[row][col];
for(int i = 0; i < row; i++)
{
for(int j = 0; j < col; j++)
{
if(j != colRemove)
{
newArray[i][j] = array[i][j];
}
}
}
    return newArray;
}
Right now this method will return this
1204
1204
1204
but it would work much better if I could get my desired results. Is there a way to do this or am I stuck with my current results?
A:
You can have one variable currColumn which indicates the position of the current column; the resultant array will have one column less than the original. You can change your code accordingly:
private static int[][] removeCol(int [][] array, int colRemove)
{
int row = array.length;
int col = array[0].length;
int [][] newArray = new int[row][col-1]; //new Array will have one column less
for(int i = 0; i < row; i++)
{
for(int j = 0,currColumn=0; j < col; j++)
{
if(j != colRemove)
{
newArray[i][currColumn++] = array[i][j];
}
}
}
    return newArray;
}
Another, better approach is to use a dynamic structure like ArrayList. Here you would have an array of ArrayLists; you can then remove an element with the remove() method, and update any element with the set() method.
A:
Just use another index that doesn't automatically increment on loop run:
private static int[][] removeCol(int [][] array, int colRemove)
{
int row = array.length;
int col = array[0].length-1;
int oldCol = array[0].length;
int [][] newArray = new int[row][col];
for(int i = 0; i < row; i++)
{
for(int j = 0, k=0; j < oldCol && k < col; j++)
{
if(j != colRemove)
{
newArray[i][k++] = array[i][j];
}
}
}
return newArray;
}
A:
By keeping your logic with 0-based column number :
private static int[][] removeCol(int[][] array, int colRemove) {
int row = array.length;
int col = array[0].length;
int[][] newArray = new int[row][col - 1]; // You will have one column less
for (int i = 0; i < row; i++) {
for (int j = 0; j < col; j++) {
if (j != colRemove) {
newArray[i][j > colRemove ? j -1 : j] = array[i][j]; // If you're looking at an index greater than the one to remove, you have to reduce index by one
}
}
}
return newArray;
}
A:
In the inner iteration, check against the colRemove value and shift by +1 for the remaining iterations. See the code below:
private static int[][] removeCol(int [][] array, int colRemove)
{
int row = array.length;
int col = array[0].length-1;
int [][] newArray = new int[row][col];
for(int i = 0; i < row; i++)
{
for(int j = 0; j < col; j++)
{
if(j>=colRemove){
newArray[i][j] = array[i][j+1];
}
else{
newArray[i][j] = array[i][j];
}
}
}
return newArray;
}
A:
Actually you replace the elements to remove with the default value of int, that is 0.
If it is not the element to remove you copy the element :
if(j != colRemove)
{
newArray[i][j] = array[i][j];
}
Otherwise you do nothing (so it takes the default 0 value from int).
You should instead reduce the second dimension of the newly created array by 1.
You could follow this way :
Use a global loop to iterate on the row (first dimension of the array)
Use two sequential inner loops to iterate on the column (second dimension of the array).
The first one iterates until "the column to remove - 1" and makes a simple copy of the element in the new array.
The second starts from "the column to remove" and shifts to left each element of the original array in the new array.
Here is a working code :
import java.util.Arrays;
public class Array2D {
public static void main(String[] args) {
int[][] arrayOriginal = new int[][]{{1,2,3,4},{1,2,3,4},{1,2,3,4}};
int[][] arrayNew = removeCol(arrayOriginal, 2);
System.out.println(Arrays.deepToString(arrayNew));;
}
private static int[][] removeCol(int [][] array, int colRemove)
{
int row = array.length;
int col = array[0].length;
int [][] newArray = new int[row][col-1];
for(int i = 0; i < row; i++)
{
for(int j = 0; j < colRemove; j++)
{
newArray[i][j] = array[i][j];
}
for(int j = colRemove; j < col-1; j++)
{
newArray[i][j] = array[i][j+1];
}
}
return newArray;
}
}
The output is :
[[1, 2, 4], [1, 2, 4], [1, 2, 4]]
A:
Perhaps this is not as efficient as it could be, but it does work:
public String[][] colDel(String[][] array, int deletedCol) {
if (deletedCol > array[0].length - 1) {
return null;
}
var result = new String[array.length][];
int n = 0;
ArrayList<String> tmp;
for (var row : array) {
tmp = new ArrayList<>(Arrays.asList(row));
tmp.remove(deletedCol);
result[n++] = tmp.toArray(String[]::new);
}
return result;
}
Briefly, an instance of ArrayList is created for each row. Then the n-th column is removed, as indicated by the second parameter. Finally the arraylist is transformed back into an instance of String[], and the resulting String[][] is returned.
Cheers,
Coti
Q:
How to use the data type "text" in Jhipster jdl
Good morning. I wish to know if it is possible to use the data type 'text' in JHipster JDL. If yes, please help me.
I've tried inserting "text" but it doesn't seem to be recognized by the JDL.
I wish to store the data as text by using the datatype TEXT, but JHipster JDL doesn't recognize it.
A:
TEXT does not exist in JDL field types, check the doc: https://www.jhipster.tech/jdl/entities-fields#field-types-and-validations
Depending on what you want to achieve, it could be String or TextBlob.
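For instance, a minimal JDL sketch using TextBlob for a long-text field (the entity and field names are illustrative):
entity Article {
  title String required
  body TextBlob
}
TextBlob is generated as a large text/CLOB-style column on the entity, which is usually what a 'text' type is meant to provide.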
Q:
single and multiple file upload
I'm facing an issue with single and multiple file uploads. Uploading multiple files at once works, but when single files are uploaded one by one, only the last one survives — the earlier ones are overridden by the last. Please help me find a solution. As you can see in the code below, it works properly for multiple file upload and the AJAX data contains an array of all the images, but when files are uploaded one by one the AJAX data contains only the last image.
index.php
<!doctype html>
<html lang="en">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<!-- Bootstrap CSS -->
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet"
integrity="sha384-EVSTQN3/azprG1Anm3QDgpJLIm9Nao0Yz1ztcQTwFspd3yD65VohhpuuCOmLASjC" crossorigin="anonymous">
<title>Hello, world!</title>
</head>
<style>
#selectedFiles img {
max-width: 200px;
max-height: 200px;
float: left;
margin-bottom: 10px;
}
</style>
<body>
<form id="myForm" method="post">
<input type="file" id="files" class="file_uploader_file" name="files[]" multiple="true" accept="image/*" />
<p class="validateError" id="imgerror" style="color:red;display:none;">Please select your design.</p>
<input type="button" id="fees_stream_submit1" name="submit">
</form>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js" type="text/javascript"></script>
<script>
(function () {
$(document).on('click', '#fees_stream_submit1', function (e) {
var myfiles = document.getElementById("files");
// var myfiles = $('#files').val();
var files = myfiles.files;
var form = new FormData();
alert(files.length);
for (i = 0; i < files.length; i++) {
form.append('file' + i, files[i]);
}
$.ajax({
url: "fileuploadmultidata.php",
type: "POST",
data: form,
contentType: false,
processData: false,
success: function (result) {
// alert(result);
}
});
});
})();
$(document).ready(function () {
var imgCnt = 0;
var onebyoneImg = [];
var countImg = 1;
if (window.File && window.FileList && window.FileReader) {
$("#files").on("change", function (e) {
var files = e.target.files,
filesLength = files.length;
for (var i = 0; i < filesLength; i++) {
var f = files[i];
// var f = new File([""], files[i]);
var fileReader = new FileReader();
fileReader.onload = (function (e) {
imgCnt++;
alert(imgCnt);
var file = e.target;
$("<span class='pip'><div class=\"file_uploaded_view img-thumb-wrapper image-preview-height\">" +
"<img class=\"img-thumb\" src=\"" + e.target.result + "\" title=\"" + file.name + "\" style='heigh:100px;width:100px'/>" +
"<br/><span class='remove'><i class='fa fa-trash'></i></span></span>" +
"</div>").insertAfter("#files");
$(".remove").click(function () {
$(this).parent(".img-thumb-wrapper").remove();
imgCnt--;
});
});
fileReader.readAsDataURL(f);
}
console.log(f);
});
} else {
alert("Your browser doesn't support to File API")
}
});
</script>
</body>
</html>
fileuploadmultidata.php
<?php
echo "<pre>";
print_r($_FILES);die();
?>
A:
That is the default behavior of a file input — each new selection replaces the previous one; see https://www.w3schools.com/jsref/tryit.asp?filename=tryjsref_fileupload_files
To achieve your requirement you need to store the file values in a variable across selections and use that.
var storeMultiFiles = [];
var file = $(file_id)[0].files;
for(var l=0; l<file.length; l++){
var fileData = file[l];
(function(file) {
var fileReader = new FileReader();
fileReader.readAsDataURL(file);
fileReader.onload = function(oFREvent){
storeMultiFiles.push(oFREvent.target.result)
};
})(fileData);
}
Use the file details stored in "storeMultiFiles" to show, save, update and delete the selected files.
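As a sketch of the same idea applied to the upload itself, you can accumulate the File objects across change events and build the FormData from that accumulated list rather than from the input's current files (this reuses the IDs from the question; the variable names are illustrative):
var selectedFiles = [];

$("#files").on("change", function (e) {
    // keep previously chosen files instead of replacing them
    for (var i = 0; i < e.target.files.length; i++) {
        selectedFiles.push(e.target.files[i]);
    }
});

$(document).on("click", "#fees_stream_submit1", function () {
    var form = new FormData();
    for (var i = 0; i < selectedFiles.length; i++) {
        form.append("file" + i, selectedFiles[i]);
    }
    $.ajax({
        url: "fileuploadmultidata.php",
        type: "POST",
        data: form,
        contentType: false,
        processData: false
    });
});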
Q:
FFmpeg hevc_nvenc encoder B Frame problem
I'm using the latest FFmpeg Windows build (2022-12-02 12:44) from BtbN.
I'm trying to encode a video to HEVC using the hevc_nvenc encoder, but it says [hevc_nvenc @ 00000263983f4280] B frames as references are not supported, because my GPU, a GTX 1060 (GP106), doesn't support hardware-accelerated HEVC encoding of B frames.
command line
I tried to disable the B frames by adding the parameter -bf 0, but it doesn't work.
Then I tried the latest build from gyan.dev and it is the same. But when I tried an older build (2021-02-28 12:32) from BtbN, it didn't have the problem.
Is there a workaround to bypass this B-frame problem? I don't want to switch to an older build. Thanks.
A:
Thanks Gyan. The solution is to add the parameter -b_ref_mode 0.
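For example, a full command might look like this (the input/output names are illustrative):
ffmpeg -i input.mp4 -c:v hevc_nvenc -b_ref_mode 0 output.mp4
-b_ref_mode 0 (disabled) tells the encoder not to use B frames as reference frames, which is the specific feature the error message complains about.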
Q:
Oracle Apex application
We are doing a project on Oracle Apex for university. We have 12 tables and are trying to build an app for our project. When we try to add a new page for some of our tables (not all of them) we encounter this error: error description.
Does someone know how to solve this issue, which is really blocking us right now?
We have tried everything to solve it. All the constraints on our tables work. What we don't understand is why we can create new pages from some tables, but for others it does not work.
A:
To me, that (unfortunately) looks like a bug, as you don't have any influence on Apex's data dictionary tables.
If you connect as a privileged user and check what's exactly being violated, you'll see something like this.
Which table is that constraint related to? Apparently, none:
SQL> select table_name from dba_constraints where owner = 'APEX_200200' and constraint_name = 'WWV_DICTIONARY_CACHE_OBJ_IDX2';
no rows selected
Any luck with (unique) indexes, then? Yes!
SQL> select table_name from dba_indexes where owner = 'APEX_200200' and index_name = 'WWV_DICTIONARY_CACHE_OBJ_IDX2';
TABLE_NAME
------------------------------
WWV_DICTIONARY_CACHE_OBJ
Which columns are used to enforce uniqueness?
SQL> select column_name from dba_ind_columns where index_name = 'WWV_DICTIONARY_CACHE_OBJ_IDX2';
COLUMN_NAME
--------------------------------------------------------------------------------
SECURITY_GROUP_ID
OBJECT_ID
OBJECT_TYPE
SQL>
That's to get you started; you know which table you used for that page, so write some more queries and you'll - hopefully - find some more info.
How to "fix" that error? I hope you won't delete or update anything on Apex' dictionary tables! Maybe you'd rather rename that table (to avoid uniqueness violation) and try to use it, with its new name, while creating the page in your application.
Q:
Display PHP echo list in descending order
I am displaying a list having:
Date
News Heading
Short Description
The list is spread across around 100 pages, with 20 news items on each page.
Issue: this works absolutely fine on PHP 7.3/7.4 with Joomla 3.10, where clicking the URL shows the list spread over multiple pages, sorted by date as the first criterion, with the latest publishing date first.
But on PHP 8.0.x it behaves incorrectly: clicking the URL shows the last page of the list (page number 100) first. When I add limitstart=0 to the URL, it correctly shows the first page.
When I change from descending to ascending, the newest content moves to the last page and that page opens, but the page number is 100 again.
It seems the URL, when opened, goes directly to the last page of published news items (although no limitstart is mentioned in it), which is incorrect: it should open in descending order on the page with the latest items.
Below is the code of views/list/tmpl/default.php:
if(count($this->items) >0){
//$i=1;
foreach($this->items as $newslist)
{
$date = JFactory::getDate($newslist->n_date);
$list .='<h3><strong>'.$newslist->v_heading.'</strong></h3>
<p>'. $date->format('F j, Y').'</p>
<p>'.substr($newslist->v_short_description,0,100).'</p>
<p><i>Know More on:- </i><a href="index.php?option=com_news&view=detail&v_id='.$newslist->id.'&Itemid='.$Itemid.'"><b><i>'.$newslist->v_heading.'</i></b></a></p><hr/><br>';
//$i=$i+1;
}
}else{
JError::raiseError(404, "Message");
}
<?php echo $list?>
And for models/list.php, this is the function:
protected function getListQuery()
{
// Create a new query object.
$db = $this->getDbo();
$query = $db->getQuery(true);
// Select the required fields from the table.
$query
->select(
$this->getState(
'list.select', 'DISTINCT a.*'
)
);
$query->from('`#__news` AS a');
if (!JFactory::getUser()->authorise('core.edit', 'com_news'))
{
$query->where('a.state = 1');
}
// Filter by search in title
$search = $this->getState('filter.search');
if (!empty($search))
{
if (stripos($search, 'id:') === 0)
{
$query->where('a.id = ' . (int) substr($search, 3));
}
else
{
$search = $db->Quote('%' . $db->escape($search, true) . '%');
$query->where('( a.n_heading LIKE ' . $search . ' )');
}
}
/*
// Add the list ordering clause.
$orderCol = $this->state->get('list.ordering');
$orderDirn = $this->state->get('list.direction');
if ($orderCol && $orderDirn)
{
$query->order($db->escape($orderCol . ' ' . $orderDirn));
}
*/ //Order by date
$query->order ('a.n_date DESC');
$query->order ('a.id DESC');
return $query;
}
This is the code for views/list/view.html.php
public function display($tpl = null)
{
$app = JFactory::getApplication();
$this->state = $this->get('State');
$this->items = $this->get('Items');
$this->pagination = $this->get('Pagination');
$this->params = $app->getParams('com_news');
$this->filterForm = $this->get('FilterForm');
$this->activeFilters = $this->get('ActiveFilters');
// Check for errors.
if (count($errors = $this->get('Errors')))
{
throw new Exception(implode("\n", $errors));
}
$this->_prepareDocument();
parent::display($tpl);
}
I'm unsure how to achieve this, and why it's not working on PHP 8.0, where the URL should open the first page and not the last page.
A:
To display the list in descending order, you can modify the SQL query used to retrieve the news items from the database. The ORDER BY clause in the query can be used to specify the criteria by which the items should be ordered. In your case, you can add the following line to the getListQuery() function in your models/list.php file:
$query->order('a.n_date DESC');
This will cause the items to be ordered by the n_date field in descending order (i.e., the most recent items will appear first in the list). You can also add additional ORDER BY clauses to further refine the order of the items, such as ordering by the id field if there are items with the same n_date value.
$query->order('a.n_date DESC, a.id DESC');
With this change, the list should be displayed in the correct order when the page is loaded. Note that this solution assumes that the n_date field in the #__news table contains valid date and time values that can be used for ordering the items. If this is not the case, you may need to use a different field or method to determine the order of the items in the list.
A:
$query->order ('a.n_date DESC, a.id DESC');
This will first order the items in the list by the n_date field in descending order, and then order them by the id field in descending order. This should fix the issue where the list is not being ordered correctly on PHP 8.0.x :) Since the n_date field is already a date field, there is no need to use the JFactory::getDate function to convert it to a date. You can simply use the $newslist->n_date value directly in your code.
Q:
Finding closest value in another table
I'm quite new to SQL so bear with me. What I'm trying to do is return the value closest to another value in a different table, for every record.
I'll show a simplified example of my two tables for clarification
The first table is the one whose ENTRY_VALUE I want matched:
ID      ENTRY_VALUE
1001    1900
1002    2000
And the second table:
ID      ENTRY_VALUE   STATUS
1001    1880          SUCCES
1001    1930          FAIL
1001    1940          SUCCES
1002    1960          SUCCES
1002    1980          FAIL
So the end result I'm looking for is:
ID      ENTRY_VALUE   STATUS
1001    1880          SUCCES
1002    1980          FAIL
I have currently only managed to link the id's together but can't find a way to compare the ENTRY_VALUE in both tables and return the one closest to the Table1 entry.
So only this:
SELECT * from Table2
INNER JOIN Table1 ON (Table2.ID = Table1.ID)
Once again my bad for the basic question, I have googled right about everything but can't get it to work so any help is very welcome!
A:
First attempt
This is a (slower performing) query. First attempt! This is an approach using a "correlated subquery" so it runs the inner query for each row of the outer query. The strategy is, for each row, to determine what the min value is we are looking for, and then select only the rows that fit that criteria. But such queries can be slow at runtime, although the logic is very clean.
select
a.id,
b.entry_value,
b.[status]
from
Foo a
inner join Bar b
on a.id = b.id
where
abs(a.entry_value - b.entry_value) =
(select min(abs(t1.entry_value-t2.entry_value))
from Foo t1
inner join Bar t2
on t1.id = t2.id
where
t1.id = a.id
group by t1.id)
Second attempt
If you have many rows (in the tens of thousands or in any case if the previous query is just too slow), then this next one should be better performing. Second Attempt! If you run the two inner queries by themselves, you will probably see the strategy here of how we are joining them to get the desired result.
select A.Id, A.entry_value, A.[status]
from
(
select t1.id, t2.entry_value, abs(t1.entry_value-t2.entry_value) as diff, t2.[status]
from Foo t1
inner join Bar t2
on t1.id = t2.id
) A
inner join
(
select t3.id, min(abs(t3.entry_value-t4.entry_value)) as diff
from Foo t3
inner join Bar t4
on t3.id = t4.id
group by t3.id
) B
on A.id = B.id
and A.diff = B.diff
Note
I would probably not try to write either of these queries in MS Access "Design view", although if I had to, I am sure I could. But generally, this is a case where I would write the query "by hand" and paste it directly into your query using MS Access "SQL view".
Caution
Beware that ties will result in two rows! Example:
First table has (1003,2000)
Second table has (1003, 1990, 'success') and (1003, 2010, 'fail')
You will have a result with two rows, one with success and the other with fail (!)
So you really should test with your data and look for such cases that might produce such ties (and decide what to do, if necessary).
Btw...
just for fun, here's how you might go for it in SQL Server.
But I think this will NOT work in MSAccess, unfortunately.
select
T.id,
T.entry_value,
T.[status]
from
(
select
t1.id,
t2.entry_value,
abs(t1.entry_value-t2.entry_value) as diff,
t2.[status],
rank() over (partition by t1.id order by abs(t1.entry_value-t2.entry_value)) as seq
from #Foo t1
inner join #Bar t2
on t1.id = t2.id
) T
where T.seq = 1;
A:
Use a simple subquery to find the minimum offset:
Select
tbl1.ID,
tbl2.ENTRY_VALUE,
tbl2.STATUS
From
tbl1
Inner Join
tbl2 On tbl1.ID = tbl2.ID
Where
Abs([tbl1].[ENTRY_VALUE] - [tbl2].[ENTRY_VALUE]) =
(Select Min(Abs([tbl1].[ENTRY_VALUE] - [T2].[ENTRY_VALUE])) As Offset
From tbl2 As T2
Where T2.ID = tbl1.ID);
Output:
ID      ENTRY_VALUE   STATUS
1001    1880          SUCCES
1002    1980          FAIL
Note, that if the minimum offset for an ID exists twice, both records having this offset will be returned. Thus, you may have to aggregate the output.
Q:
SwiftUI: Passing information between children views without updating body of parent
In this simplified example, I have a parent view containing three children:
struct ParentView: View {
var body: some View {
ChildViewA()
ChildViewB()
ChildViewC()
}
}
I am trying to change a state in ChildViewA that will trigger a state change in ChildViewB without redrawing ChildViewC. So I cannot place a @State in ParentView, as this would update all three views. I also have a limitation that I cannot restructure the view hierarchy to combine A & B into one view.
I have tried using PreferenceKeys, but they only seem to be useful for passing information from child to parent. Is what I am trying to do even possible in SwiftUI? I feel like I'm missing something here...
A:
It is possible to trigger a state change in a child view without recomputing its sibling views in SwiftUI. One way to do this is to use a Binding property on the child view that triggers the state change.
A Binding property is a reference-like property that allows you to read and write a value from another source, such as a parent view or a global state object. By using a Binding property, you can pass a reference to a value from the parent view to the child view, and then update the value in the child view. The parent view will be notified of the update and can react to it accordingly, without redrawing its sibling views.
Here is an example of how you could use a Binding property to trigger a state change in a child view without recomputing its sibling views:
struct ParentView: View {
@State private var someValue = false
var body: some View {
ChildViewA(someValue: $someValue)
ChildViewB()
ChildViewC()
}
}
struct ChildViewA: View {
@Binding var someValue: Bool
var body: some View {
Button(action: {
// Update the someValue property
self.someValue = true
}) {
Text("Update value")
}
}
}
struct ChildViewB: View {
@State private var someValue = false
var body: some View {
Text("Value: \(someValue)")
}
}
struct ChildViewC: View {
var body: some View {
Text("I will not be redrawn.")
}
}
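Note that in the example above ChildViewB keeps its own private someValue, so it does not yet observe the change made in ChildViewA. One common way to wire the two children together without making ParentView's body depend on the value is a shared ObservableObject that only A and B observe. A hedged sketch, not part of the original answer (the Shared class name is illustrative):
import SwiftUI

final class Shared: ObservableObject {
    @Published var someValue = false
}

struct ParentView: View {
    // Stored without @StateObject so ParentView's body never observes it.
    // Caveat: if ParentView itself is recreated, a fresh Shared is created too.
    private let shared = Shared()

    var body: some View {
        ChildViewA(shared: shared)
        ChildViewB(shared: shared)
        ChildViewC()
    }
}

struct ChildViewA: View {
    @ObservedObject var shared: Shared

    var body: some View {
        Button("Update value") { shared.someValue = true }
    }
}

struct ChildViewB: View {
    @ObservedObject var shared: Shared

    var body: some View {
        Text("Value: \(String(describing: shared.someValue))")
    }
}

struct ChildViewC: View {
    var body: some View {
        Text("I will not be redrawn.") // no inputs change, so no update
    }
}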
|
SwiftUI: Passing information between children views without updating body of parent
|
In this simplified example, I have a parent view containing three children:
struct ParentView: View {
var body: some View {
ChildViewA()
ChildViewB()
ChildViewC()
}
}
I am trying to change a state in ChildViewA that will trigger a state change in ChildViewB without redrawing ChildViewC. So I cannot place a @State in ParentView, as this would update all three views. I also have a limitation that I cannot restructure the view hierarchy to combine A & B into one view.
I have tried using PreferenceKeys, but they only seem to be useful for passing information from child to parent. Is what I am trying to do even possible in SwiftUI? I feel like I'm missing something here...
|
[
"It is possible to trigger a state change in a child view without recomputing its sibling views in SwiftUI. One way to do this is to use a Binding property on the child view that triggers the state change.\nA Binding property is a reference-like property that allows you to read and write a value from another source, such as a parent view or a global state object. By using a Binding property, you can pass a reference to a value from the parent view to the child view, and then update the value in the child view. The parent view will be notified of the update and can react to it accordingly, without redrawing its sibling views.\nHere is an example of how you could use a Binding property to trigger a state change in a child view without recomputing its sibling views:\nstruct ParentView: View {\n @State private var someValue = false\n\n var body: some View {\n ChildViewA(someValue: $someValue)\n ChildViewB()\n ChildViewC()\n }\n}\n\nstruct ChildViewA: View {\n @Binding var someValue: Bool\n\n var body: some View {\n Button(action: {\n // Update the someValue property\n self.someValue = true\n }) {\n Text(\"Update value\")\n }\n }\n}\n\nstruct ChildViewB: View {\n @State private var someValue = false\n\n var body: some View {\n Text(\"Value: \\(someValue)\")\n }\n}\n\nstruct ChildViewC: View {\n var body: some View {\n Text(\"I will not be redrawn.\")\n }\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"swiftui"
] |
stackoverflow_0074666406_swiftui.txt
|
Q:
Go from TFS to Git
I have a client server that has limited rights (no internet access or install privileges), but it contains a version of TFS. Is there any way to get the data and history down to my local computer to put into Git?
I have tried Git-TFS but cannot get it installed on the server, and I also tried running it from the source code.
Any ideas besides downloading the code for each branch, adding the branches in one by one, and losing the previous commits of each branch?
A:
Depending on the version of TFS on the server, it could have a built-in option to import the TFVC history into a git repo on the same server. Afterwards you can mirror-clone it to disk and copy it. This will grab a limited history from a single branch though.
Git-TFS doesn't need any special privileges on the server to run, so instead of using the installer try creating a portable version by copying the installed bits from your own workstation.
You'll need a couple of things:
A portable version of git.
A portable version of tf.exe (you can copy the team explorer folder from a Visual Studio installation)
A portable version of Git-TFS.
Open a command prompt, add the paths to the 3 executables to your path variable and run the tool. It should just work.
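For example, a minimal sketch of that setup from a Windows command prompt (all paths and the collection URL are illustrative assumptions):
set PATH=C:\portable\git\cmd;C:\portable\team-explorer;C:\portable\git-tfs;%PATH%
git tfs clone http://tfsserver:8080/tfs/DefaultCollection $/MyProject/Trunk .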
In extreme cases I'd get a copy of the TFS databases, install a local copy of TFS/Azure DevOps Server on my machine and attach the databases. That way you have full access to the contents in the server to run whatever tools you need to use without having to install anything in the original server.
A:
Yes but the server has no Internet and I cannot access it from local.
The client would have to export a zip file per branch HEAD, in order for you to import said branches in a new Git local repository.
That would indeed ignore commits done in each branch, which is standard for "complex" migration (where you keep the old repository for reference)
But you would still need to export the new repository back to the client, which you can do with git bundle.
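For instance, a minimal sketch (the bundle file and branch names are illustrative):
git bundle create repo.bundle --all
git bundle verify repo.bundle
Then, on the client side, the repository can be recreated from the bundle:
git clone -b main repo.bundle repo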
|
Go from TFS to Git
|
I have a client server that has limited rights (no internet access or install privileges), but it contains a version of TFS. Is there any way to get the data and history down to my local computer to put into Git?
I have tried Git-TFS but cannot get it installed on the server, and I also tried running it from the source code.
Any ideas besides downloading the code for each branch, adding the branches in one by one, and losing the previous commits of each branch?
|
[
"Depending on the version of TFS on the server, it could have a built-in option to import the TFVC history into a git repo on the same server. Afterwards you can mirror-clone it to disk and copy it. This will grab a limited history from a single branch though.\nGit-TFS doesn't need any special privileges on the server to run, so instead of using the installer try creating a portable version by copying the installed bits from your own workstation.\nYou'll need a couple of things:\n\nA portable version of git.\nA portable version of tf.exe (you can copy the team explorer folder from a Visual Studio installation)\nA portable version of Git-TFS.\n\nOpen a command prompt, add the paths to the 3 executables to your path variable and run the tool. It should just work.\nIn extreme cases I'd get a copy of the TFS databases, install a local copy of TFS/Azure DevOps Server on my machine and attach the databases. That way you have full access to the contents in the server to run whatever tools you need to use without having to install anything in the original server.\n",
"\nYes but the server has no Internet and I cannot access it from local.\n\nThe client would have to export a zip file per branch HEAD, in order for you to import said branches in a new Git local repository.\nThat would indeed ignore commits done in each branch, which is standard for \"complex\" migration (where you keep the old repository for reference)\nBut you would still need to export back the new repository back to the client, which you can do with git bundle.\n"
] |
[
1,
0
] |
[] |
[] |
[
"git",
"git_tfs",
"tfs",
"tfvc"
] |
stackoverflow_0074662076_git_git_tfs_tfs_tfvc.txt
|
Q:
How do i rotate an image animated on button click in WPF through C# code (Not XAML)
I don't know if I'm approaching this wrong or if there is an error in my code.
And as I said above, I would like to solve the problem through C# code and not in XAML.
I don't get any compiler errors; it just does nothing.
XAML:
<Button Content="Rotate" HorizontalAlignment="Left" Margin="1007,637,0,0" VerticalAlignment="Top" Click="Button_Click" Height="36" Width="51"/>
<Image x:Name="Brett" Margin="0,-563,0,187" Stretch="Fill" Source="/Monopoly_Brett_bib.png" RenderTransformOrigin="0.5, 0.5" OpacityMask="White"></Image>
C#:
private void Button_Click(object sender, RoutedEventArgs e)
{
DoubleAnimation doubleAnimation = new DoubleAnimation();
doubleAnimation.Duration = new Duration(new TimeSpan(0, 0, 0, 3, 0));
Storyboard storyBoard = new Storyboard();
storyBoard.Children.Add(doubleAnimation);
Storyboard.SetTarget(doubleAnimation, Brett);
Storyboard.SetTargetProperty(doubleAnimation, new PropertyPath(RotateTransform.AngleProperty));
doubleAnimation.From = 0;
doubleAnimation.To = -90;
storyBoard.Begin();
storyBoard.Completed += StoryBoard_Completed;
}
A:
Try this:
private void Button_Click(object sender, RoutedEventArgs e)
{
DoubleAnimation doubleAnimation = new DoubleAnimation
{
From =0,
To = -90,
Duration = new Duration(TimeSpan.FromSeconds(3))
};
Storyboard storyBoard = new Storyboard();
Brett.RenderTransform = new RotateTransform();
storyBoard.Children.Add(doubleAnimation);
Storyboard.SetTarget(doubleAnimation, Brett);
Storyboard.SetTargetProperty(doubleAnimation, new PropertyPath("RenderTransform.Angle"));
storyBoard.Begin();
}
A:
Try this. It's how I got it working when I wanted a rotating settings image, though I was using the 'mouse enter' event. Hope this helps:
private void BtnSettings_MouseEnter(object sender, MouseEventArgs e)
{
int angle = 270;
SetImage.LayoutTransform = new RotateTransform(angle);
}
and during the mouse leave event, I reversed the angle (int angle = 0).
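For completeness, the mouse leave handler described above would look something like this (SetImage is the image control from the snippet; a sketch, not tested code):
private void BtnSettings_MouseLeave(object sender, MouseEventArgs e)
{
    int angle = 0; // rotate back to the original orientation
    SetImage.LayoutTransform = new RotateTransform(angle);
}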
|
How do i rotate an image animated on button click in WPF through C# code (Not XAML)
|
I don't know if I'm approaching this wrong or if there is an error in my code.
And as I said above, I would like to solve the problem through C# code and not in XAML.
I don't get any compiler errors; it just does nothing.
XAML:
<Button Content="Rotate" HorizontalAlignment="Left" Margin="1007,637,0,0" VerticalAlignment="Top" Click="Button_Click" Height="36" Width="51"/>
<Image x:Name="Brett" Margin="0,-563,0,187" Stretch="Fill" Source="/Monopoly_Brett_bib.png" RenderTransformOrigin="0.5, 0.5" OpacityMask="White"></Image>
C#:
private void Button_Click(object sender, RoutedEventArgs e)
{
DoubleAnimation doubleAnimation = new DoubleAnimation();
doubleAnimation.Duration = new Duration(new TimeSpan(0, 0, 0, 3, 0));
Storyboard storyBoard = new Storyboard();
storyBoard.Children.Add(doubleAnimation);
Storyboard.SetTarget(doubleAnimation, Brett);
Storyboard.SetTargetProperty(doubleAnimation, new PropertyPath(RotateTransform.AngleProperty));
doubleAnimation.From = 0;
doubleAnimation.To = -90;
storyBoard.Begin();
storyBoard.Completed += StoryBoard_Completed;
}
|
[
"Try this:\nprivate void Button_Click(object sender, RoutedEventArgs e)\n {\n DoubleAnimation doubleAnimation = new DoubleAnimation\n {\n From =0,\n To = -90,\n Duration = new Duration(TimeSpan.FromSeconds(3))\n };\n Storyboard storyBoard = new Storyboard();\n\n Brett.RenderTransform = new RotateTransform();\n\n storyBoard.Children.Add(doubleAnimation);\n Storyboard.SetTarget(doubleAnimation, Brett);\n Storyboard.SetTargetProperty(doubleAnimation, new PropertyPath(\"RenderTransform.Angle\"));\n\n storyBoard.Begin();\n }\n\n",
"Try This. That's how I got it, I wanted a rotating settings image. I was using the 'mouse enter' event though. hope this helps:\n private void BtnSettings_MouseEnter(object sender, MouseEventArgs e)\n {\n int angle = 270;\n SetImage.LayoutTransform = new RotateTransform(angle); \n }\n\nand during the mouse leave event, I reversed the angle (int angle=0)\n"
] |
[
0,
0
] |
[] |
[] |
[
"c#",
"storyboard",
"wpf",
"wpf_animation",
"xaml"
] |
stackoverflow_0072557671_c#_storyboard_wpf_wpf_animation_xaml.txt
|
Q:
Getting VS Code's C++ intellisense to deal with WinAPI types
So, I'm using WinAPI in a C++ project with VS Code. Something I've noticed is that the standard C++ intellisense doesn't play so nicely with WinAPI's many macros.
For example,
#include <windows.h>
int CALLBACK WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
MessageBox(0, "This is a test", "Test", MB_OK|MB_ICONINFORMATION);
return 0;
}
In the above code, error squiggles appear under "This is a test" and "Test" because VS Code's intellisense is expecting those parameters to be of type LPCWSTR and is instead interpreting them as being const char *.
This shouldn't be the case, as "This is a test" and "Test" are valid as LPCWSTRs and the program compiles and runs perfectly fine.
Is there any way I can get the intellisense engine to recognize that this is not an error? Or will I have to disable error squiggles entirely?
A:
Sorry, VS is working correctly in this case.
"This is a test" is a char const * (LPCSTR), not a LPCWSTR. For a wide character string literal add the 'L' prefix: L"This is a test".
A:
Try wrapping your string with the macro _T(string) from <tchar.h>. It compiles fine and intellisense doesn't panic.
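For example, a minimal sketch of both fixes applied to the call from the question (wide literal, or the TCHAR-neutral macro):
#include <windows.h>
#include <tchar.h>

int CALLBACK WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    // An L"" prefix gives a wide literal that matches LPCWSTR directly:
    MessageBoxW(0, L"This is a test", L"Test", MB_OK | MB_ICONINFORMATION);
    // _T() expands to L"..." in Unicode builds and "..." in ANSI builds,
    // so it matches whichever variant the MessageBox macro resolves to:
    MessageBox(0, _T("This is a test"), _T("Test"), MB_OK | MB_ICONINFORMATION);
    return 0;
}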
|
Getting VS Code's C++ intellisense to deal with WinAPI types
|
So, I'm using WinAPI in a C++ project with VS Code. Something I've noticed is that the standard C++ intellisense doesn't play so nicely with WinAPI's many macros.
For example,
#include <windows.h>
int CALLBACK WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
MessageBox(0, "This is a test", "Test", MB_OK|MB_ICONINFORMATION);
return 0;
}
In the above code, error squiggles appear under "This is a test" and "Test" because VS Code's intellisense is expecting those parameters to be of type LPCWSTR and is instead interpreting them as being const char *.
This shouldn't be the case, as "This is a test" and "Test" are valid as LPCWSTRs and the program compiles and runs perfectly fine.
Is there any way I can get the intellisense engine to recognize that this is not an error? Or will I have to disable error squiggles entirely?
|
[
"Sorry, VS is working correctly in this case.\n\"This is a test\" is a char const * (LPCSTR), not a LPCWSTR. For a wide character string literal add the 'L' prefix: L\"This is a test\".\n",
"Try wrapping your string with macro _T(string) from <tchar.h>. Compiles fine and intellisense doesn't panic.\n"
] |
[
1,
0
] |
[] |
[] |
[
"visual_studio_code",
"vscode_settings",
"winapi"
] |
stackoverflow_0047215158_visual_studio_code_vscode_settings_winapi.txt
|
Q:
"function arguments expected near 'levelc'" when using LOVE
I'm currently trying to make a level loading system for a game.
function love.filedropped(file)
ofile=io.open(file:getFilename(),"r")
io.input(ofile)
file:close
levelc=io.read()
for i=1,levelc do
levels[i]=io.read()
print levels[i]
end
levelc should be the first line of the file, and file:getFilename() is the file to open (path included). The project gives an error message on startup, and I've used a similar structure before, but for output. The error is at line 30, which is the levelc=io.read().
I've tried changing the name of the file pointer (it was "f" before, now "ofile") and I've tried using io.read("*l") instead of io.read(), but got the same result.
EDITS:
-this is a love.filedropped(file)
-I need to open other files from a .txt later and I don't really understand how to do that
A:
The parameter given by love.filedropped is a DroppedFile.
In your case, File:lines() could be helpful.
For example:
function love.filedropped(file)
-- Open for reading
file:open("r")
-- Iterate over the lines
local i = 0
for line in file:lines() do
i = i + 1
levels[i] = line
print(i, levels[i]) -- Notice the parentheses missing in your code
end
-- Close the file
file:close()
end
Notice that love2d usually only allows reading/writing files within the save or working directory. Dropped files are an exception.
Unrelated to this answer but things I noticed in your code:
Use locals, oFile should be local
file:close() requires parentheses as it's a function call
Same for the print
The filedropped callback has no end
You mentioned reading other files too, to do so, you can either:
Use love.filesystem.newFile and a similar approach as before
The recommended one-liner love.filesystem.lines
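For example, a minimal sketch of the one-liner variant (the file name is illustrative, and the file must live in the save or working directory):
function love.load()
    levels = {}
    for line in love.filesystem.lines("levels.txt") do
        table.insert(levels, line)
    end
    print(#levels .. " lines read")
end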
|
"function arguments expected near 'levelc'" when using LOVE
|
I'm currently trying to make a level loading system for a game.
function love.filedropped(file)
ofile=io.open(file:getFilename(),"r")
io.input(ofile)
file:close
levelc=io.read()
for i=1,levelc do
levels[i]=io.read()
print levels[i]
end
levelc should be the first line of the file, and file:getFilename() is the file to open (path included). The project gives an error message on startup, and I've used a similar structure before, but for output. The error is at line 30, which is the levelc=io.read().
I've tried changing the name of the file pointer (it was "f" before, now "ofile") and I've tried using io.read("*l") instead of io.read(), but got the same result.
EDITS:
-this is a love.filedropped(file)
-I need to open other files from a .txt later and I don't really understand how to do that
|
[
"The parameter given by love.filedropped is a DroppedFile.\nIn your case helpful could be File:lines().\nFor example:\nfunction love.filedropped(file)\n -- Open for reading\n file:open(\"r\")\n \n -- Iterate over the lines\n local i = 0\n for line in file:lines() do\n i = i + 1\n levels[i] = line\n print(i, levels[i]) -- Notice the parentheses missing in your code\n end\n \n -- Close the file\n file:close()\nend\n\nNotice that love2d usually only allows reading/writing files within the save or working directory. Dropped files are an exception.\nUnrelated to this answer but things I noticed in your code:\n\nUse locals, oFile should be local\nfile:close() required parentheses as its a function call\nSame for the print\nThe filedropped callback has no end\n\nYou mentioned reading other files too, to do so, you can either:\n\nUse love.filesystem.newFile and a similar approach as before\nThe recommended one-liner love.filesystem.lines\n\n"
] |
[
0
] |
[] |
[] |
[
"love2d",
"lua"
] |
stackoverflow_0074665097_love2d_lua.txt
|
Q:
Sort algorithm to create a polygon from points with only right angles
Given a set of (x, y) coordinates in some random order, can they be sorted so that a polygonal path can be drawn with only 90° internal or external angles?
It is known that such a path exists, but it is not known in what order the edge points of the polygon need to be connected.
The closest solutions readily findable in SO are:
Algorithm to create a polygon from points
Algorithm to create polygon
Both of these use polar coordinates to order the points, and will produce a star-like polygon, for which only some of the corners are 90° angles.
[NOTE This is a reposting of a deleted question: Sort algorithm to create a polygon from points with only right angle. I had developed a solution and went to post it only to find that the question had been deleted. I am reposting it here because others may find it useful.]
A:
To sort randomly ordered (x, y) rectilinear points into rectilinear polygonal order:
1. find the center of the points
2. find the remotest point
3. find the nearest point to the remotest point
4. find the angle between the remotest point and nearest remote point and the x/y axis (probably could be any two "nearest" points, but remotest nearest points should reduce the likelihood of any ambiguity)
5. rotate all points to be x-y axis aligned
6. pick any point as a start point as the first stepping point
7. find the nearest point as the next point
8. if the stepping point and the next point are x-axis aligned, look for the next nearest y-axis aligned point
9. if the stepping point and the next point are y-axis aligned, look for the next nearest x-axis aligned point
10. if there is no next axis aligned point, back track one point at a time, temporarily removing the back tracked points from the available next points until another back tracked axis aligned point is found, and then add the back tracked points back to the available next points (back tracking is necessary because it is possible to get into an enclave with no path out, but that is not a valid polygon)
11. make the next point the stepping point
12. alternate between x and y axis aligned next nearest points
13. repeat from 10 until all points are used
14. rotate points back to their original alignment
The code below is a rough implementation in Python. It will produce a number of SVG files for comparison.
points = [(156.40357183517773, 23.19316100057085), (83.97002318399646, 188.27914171909507), (518.4511031561435, 60.897074118366035), (799.3826769425817, 214.44658030407507), (304.1247347870089, -2.8540656494687013), (593.7387174567936, 199.93582818685073), (773.3354502925422, 66.72541735224387), (625.6142873407109, 92.7726440022834), (428.65273673826925, 127.50227953566946), (379.41234908765887, 136.184688419016), (446.0175545049623, 225.98305483689026), (448.871620154431, 530.1077896238992), (509.768694272797, 11.65668646775564), (373.58400585378104, 391.06903555541453), (602.4211263401401, 249.17621583746111), (182.45079848521726, 170.91432395240204), (616.9318784573643, 43.53225635167299), (165.08598071852424, 72.43354865118125), (312.80714367035546, 46.3863220011417), (225.86284290194985, 417.1162622054541), (399.63123250382057, 538.7901985072457), (66.60520541730344, 89.79836641787429)]
def genSVG(points):
path = "M " + str(points[0][0]) + " " + str(points[0][1]) + " "
minX = points[0][0]
minY = points[0][1]
maxX = minX
maxY = minY
for point in points[1:]:
path += "L " + str(point[0]) + " " + str(point[1]) + " "
if point[0] < minX:
minX = point[0]
elif point[0] > maxX:
maxX = point[0]
if point[1] < minY:
minY = point[1]
elif point[1] > maxY:
maxY = point[1]
path += "Z"
path = '<path fill="grey" d="' + path + '"/>'
viewbox = ' viewbox="' + str(minX-1) + ' ' + str(minY-1) + ' ' + str(maxX+1) + ' ' + str(maxY+1) + '"'
width = ' width="' + str((maxX - minX + 2)) + '"'
height = ' height="' + str((maxY - minY + 2)) + '"'
return '<svg ' + 'xmlns="http://www.w3.org/2000/svg"' + width + height + viewbox + '>' + path + '</svg>'
def genSVGover(points, overs, center):
path = "M " + str(points[0][0]) + " " + str(points[0][1]) + " "
minX = points[0][0]
minY = points[0][1]
maxX = minX
maxY = minY
for point in points[1:]:
path += "L " + str(point[0]) + " " + str(point[1]) + " "
if point[0] < minX:
minX = point[0]
elif point[0] > maxX:
maxX = point[0]
if point[1] < minY:
minY = point[1]
elif point[1] > maxY:
maxY = point[1]
path += "Z"
path = '<path stroke="black" stroke-width="7" fill="none" d="' + path + '"/>'
viewbox = ' viewbox="' + str(minX-4) + ' ' + str(minY-4) + ' ' + str(maxX+4) + ' ' + str(maxY+4) + '"'
width = ' width="' + str((maxX - minX + 8)) + '"'
height = ' height="' + str((maxY - minY + 8)) + '"'
over = "M " + str(overs[0][0]) + " " + str(overs[0][1]) + " "
for point in overs:
over += "L " + str(point[0]) + " " + str(point[1]) + " "
over += "Z"
over = '<path stroke="red" stroke-width="2" fill="none" d="' + over + '"/>'
return '<svg ' + 'xmlns="http://www.w3.org/2000/svg"' + width + height + viewbox + '>' + path + over + '<circle fill="blue" cx="' + str(center[0]) + '" cy="' + str(center[1]) + '" r="7" />' + '</svg>'
import math
def rotate(points, theta):
rotated = []
cosTheta = math.cos(theta)
sinTheta = math.sin(theta)
for point in points:
rotated.append(( cosTheta * point[0] + sinTheta * point[1], -sinTheta * point[0] + cosTheta * point[1] ))
return rotated
def closest(focus, points):
if ( points[0] != focus ):
closestPoint = points[0]
else:
closestPoint = points[1]
closestDist = ( focus[0] - closestPoint[0] )**2 + ( focus[1] - closestPoint[1] )**2
for point in points:
if point != focus:
dist = ( focus[0] - point[0] )**2 + ( focus[1] - point[1] )**2
if dist < closestDist:
closestDist = dist
closestPoint = point
return closestPoint
def rotangle(points):
focus = remotest(points)
closestPoint = closest(focus, points)
if abs(focus[0] - closestPoint[0]) < tolerance or abs(focus[1] - closestPoint[1]) < tolerance:
return 0
else:
return math.atan2(focus[1] - closestPoint[1], focus[0] - closestPoint[0])
tolerance = 0.000000000001
def rightSort(points):
sorted = [ points[0] ]
nextPoint = closest(sorted[-1], points)
x = abs( sorted[-1][0] - nextPoint[0]) < tolerance
popped = []
del points[0]
while len(points) > 0:
ndxes = []
if x:
for ndx in range(len(points)):
if abs(points[ndx][0] - sorted[-1][0]) < tolerance:
ndxes.append(ndx)
if len(ndxes) == 0:
popped.append(sorted.pop())
x = False
else:
closestDist = abs(points[ndxes[0]][1] - sorted[-1][1])
ndxClosest = ndxes[0]
for ndx in ndxes[1:]:
if abs(points[ndx][1] - sorted[-1][1]) < closestDist:
ndxClosest = ndx
sorted.append(points[ndxClosest])
del points[ndxClosest]
x = False
if popped:
points += popped
popped = []
else:
for ndx in range(len(points)):
if abs(points[ndx][1] - sorted[-1][1]) < tolerance:
ndxes.append(ndx)
if len(ndxes) == 0:
popped.append(sorted.pop())
x = True
else:
closestDist = abs(points[ndxes[0]][0] - sorted[-1][0])
ndxClosest = ndxes[0]
for ndx in ndxes[1:]:
if abs(points[ndx][0] - sorted[-1][0]) < closestDist:
ndxClosest = ndx
sorted.append(points[ndxClosest])
del points[ndxClosest]
x = True
if popped:
points += popped
popped = []
if popped:
sorted += popped
return sorted
def center(points):
return ( sum(point[0] for point in points) / len(points),
sum(point[1] for point in points) / len(points) )
def remotest(points):
centerPoint = center(points)
print( "center", centerPoint )
remotestPoint = points[0]
remotestDist = ( centerPoint[0] - remotestPoint[0] )**2 + ( centerPoint[1] - remotestPoint[1] )**2
for point in points[1:]:
dist = ( centerPoint[0] - point[0] )**2 + ( centerPoint[1] - point[1] )**2
if dist > remotestDist:
remotestDist = dist
remotestPoint = point
print( "remotest", remotestPoint )
return remotestPoint
def squaredPolar(point, centerPoint):
return ( math.atan2(point[1] - centerPoint[1], point[0] - centerPoint[0]),
( point[0] - centerPoint[0] )**2 + ( point[1] - centerPoint[1] )**2 )
def polarSort(points):
centerPoint = center(points)
presorted = []
for point in points:
presorted.append(( squaredPolar(point, centerPoint), point ))
presorted.sort()
sorted = []
for point in presorted:
sorted.append(point[1])
return sorted
htmlFile = open("polygon.html", "w")
htmlFile.write("<html><body>")
htmlFile.write(genSVG(points))
htmlFile.write("</body></html>")
htmlFile.close()
angle = rotangle(points)
print( "angle", angle * 180 / math.pi )
htmlFile = open("rightgon.html", "w")
htmlFile.write("<html><body>")
htmlFile.write(genSVGover(rotate(rightSort(rotate(points, angle)), -angle), polarSort(points), center(points)))
htmlFile.write("</body></html>")
htmlFile.close()
htmlFile = open("polargon.html", "w")
htmlFile.write("<html><body>")
htmlFile.write(genSVG(polarSort(points)))
htmlFile.write("</body></html>")
htmlFile.close()
The image below is an unsorted points "polygon".
<svg xmlns="http://www.w3.org/2000/svg" width="734.7774715252783" height="543.6442641567144" viewbox="65.60520541730344 -3.8540656494687013 800.3826769425817 539.7901985072457"><path fill="grey" d="M 156.40357183517773 23.19316100057085 L 83.97002318399646 188.27914171909507 L 518.4511031561435 60.897074118366035 L 799.3826769425817 214.44658030407507 L 304.1247347870089 -2.8540656494687013 L 593.7387174567936 199.93582818685073 L 773.3354502925422 66.72541735224387 L 625.6142873407109 92.7726440022834 L 428.65273673826925 127.50227953566946 L 379.41234908765887 136.184688419016 L 446.0175545049623 225.98305483689026 L 448.871620154431 530.1077896238992 L 509.768694272797 11.65668646775564 L 373.58400585378104 391.06903555541453 L 602.4211263401401 249.17621583746111 L 182.45079848521726 170.91432395240204 L 616.9318784573643 43.53225635167299 L 165.08598071852424 72.43354865118125 L 312.80714367035546 46.3863220011417 L 225.86284290194985 417.1162622054541 L 399.63123250382057 538.7901985072457 L 66.60520541730344 89.79836641787429 Z"/></svg>
The image below is the rendering of one output file. It shows:
blue dot is the center of the (x, y) coordinates
red polygon is the polar sorted polygon
black polygon is the right angle sorted polygon
<svg xmlns="http://www.w3.org/2000/svg" width="740.7774715252784" height="549.6442641567145" viewbox="62.60520541730345 -6.854065649468694 803.3826769425818 542.7901985072458"><path stroke="black" stroke-width="7" fill="none" d="M 156.40357183517776 23.19316100057085 L 165.08598071852424 72.43354865118125 L 66.60520541730345 89.7983664178743 L 83.97002318399647 188.2791417190951 L 182.4507984852173 170.91432395240207 L 225.86284290194988 417.1162622054542 L 373.5840058537811 391.0690355554146 L 399.63123250382057 538.7901985072458 L 448.87162015443107 530.1077896238993 L 379.41234908765887 136.184688419016 L 428.65273673826937 127.50227953566947 L 446.01755450496233 225.9830548368903 L 593.7387174567937 199.93582818685076 L 602.4211263401402 249.17621583746114 L 799.3826769425818 214.44658030407507 L 773.3354502925423 66.72541735224388 L 625.614287340711 92.7726440022834 L 616.9318784573644 43.532256351673 L 518.4511031561435 60.89707411836606 L 509.76869427279706 11.656686467755648 L 312.8071436703555 46.3863220011417 L 304.1247347870089 -2.8540656494686942 Z"/><path stroke="red" stroke-width="2" fill="none" d="M 182.45079848521726 170.91432395240204 L 182.45079848521726 170.91432395240204 L 66.60520541730344 89.79836641787429 L 165.08598071852424 72.43354865118125 L 156.40357183517773 23.19316100057085 L 379.41234908765887 136.184688419016 L 312.80714367035546 46.3863220011417 L 304.1247347870089 -2.8540656494687013 L 428.65273673826925 127.50227953566946 L 509.768694272797 11.65668646775564 L 518.4511031561435 60.897074118366035 L 616.9318784573643 43.53225635167299 L 625.6142873407109 92.7726440022834 L 773.3354502925422 66.72541735224387 L 799.3826769425817 214.44658030407507 L 593.7387174567936 199.93582818685073 L 602.4211263401401 249.17621583746111 L 446.0175545049623 225.98305483689026 L 448.871620154431 530.1077896238992 L 399.63123250382057 538.7901985072457 L 373.58400585378104 391.06903555541453 L 225.86284290194985 417.1162622054541 L 83.97002318399646 188.27914171909507 Z"/><circle fill="blue" cx="409.6874424591604" cy="177.00212769986794" r="7" /></svg>
|
Sort algorithm to create a polygon from points with only right angles
|
Given a set of (x, y) coordinates in some random order, can they be sorted so that a polygonal path can be drawn with only 90° internal or external angles?
It is known that such a path exists, but it is not known in what order the edge points of the polygon need to be connected.
The closest solutions readily findable in SO are:
Algorithm to create a polygon from points
Algorithm to create polygon
Both of these use polar coordinates to order the points, and will produce a star-like polygon, for which only some of the corners are 90° angles.
[NOTE This is a reposting of a deleted question: Sort algorithm to create a polygon from points with only right angle. I had developed a solution and went to post it only to find that the question had been deleted. I am reposting it here because others may find it useful.]
|
[
"To sort randomly ordered (x, y) rectalinear points into rectalinear polygonal order:\n\nfind the center of the points\nfind the remotest point\nfind the nearest point to the remotest point\nfind the angle between the remotest point and nearest remote point and the x/y axis (probably could be any two \"nearest\" points but remotest nearest\npoints should reduce the likelihood of any ambiguity)\nrotate all points to be x-y axis aligned\npick any point as a start point as the first stepping point\nfind the nearest point as the next point\nif the stepping point and the next point are x-axis aligned look for\nthe next nearest y-axis aligned point\nif the stepping point and the next point are y-axis aligned look for\nthe next nearest x-axis aligned point\nif there is no next axis aligned point, back track one point at a\ntime, temporarily removing the back tracked points from available\nnext points until another next back tracked axis aligned point is\nfound and then add the back tracked points back to the available\nnext points (back tracking is necessary because it is possible to get into an\nenclave with no path out, but that is not a valid polygon)\nmake the next point the stepping point\nalternate between x and y axis aligned next nearest points\nrepeat from 10 until all points are used\nrotate points back to the their original alignment\n\nThe code below is a rough implementation in python. It will produce a number of SVG files for comparison.\npoints = [(156.40357183517773, 23.19316100057085), (83.97002318399646, 188.27914171909507), (518.4511031561435, 60.897074118366035), (799.3826769425817, 214.44658030407507), (304.1247347870089, -2.8540656494687013), (593.7387174567936, 199.93582818685073), (773.3354502925422, 66.72541735224387), (625.6142873407109, 92.7726440022834), (428.65273673826925, 127.50227953566946), (379.41234908765887, 136.184688419016), (446.0175545049623, 225.98305483689026), (448.871620154431, 530.1077896238992), (509.768694272797, 11.65668646775564), (373.58400585378104, 391.06903555541453), (602.4211263401401, 249.17621583746111), (182.45079848521726, 170.91432395240204), (616.9318784573643, 43.53225635167299), (165.08598071852424, 72.43354865118125), (312.80714367035546, 46.3863220011417), (225.86284290194985, 417.1162622054541), (399.63123250382057, 538.7901985072457), (66.60520541730344, 89.79836641787429)]\n\ndef genSVG(points):\n path = \"M \" + str(points[0][0]) + \" \" + str(points[0][1]) + \" \"\n minX = points[0][0]\n minY = points[0][1]\n maxX = minX\n maxY = minY\n for point in points[1:]:\n path += \"L \" + str(point[0]) + \" \" + str(point[1]) + \" \"\n if point[0] < minX:\n minX = point[0]\n elif point[0] > maxX:\n maxX = point[0]\n if point[1] < minY:\n minY = point[1]\n elif point[1] > maxY:\n maxY = point[1]\n path += \"Z\"\n path = '<path fill=\"grey\" d=\"' + path + '\"/>'\n\n viewbox = ' viewbox=\"' + str(minX-1) + ' ' + str(minY-1) + ' ' + str(maxX+1) + ' ' + str(maxY+1) + '\"'\n \n width = ' width=\"' + str((maxX - minX + 2)) + '\"'\n height = ' height=\"' + str((maxY - minY + 2)) + '\"'\n\n return '<svg ' + 'xmlns=\"http://www.w3.org/2000/svg\"' + width + height + viewbox + '>' + path + '</svg>'\n\ndef genSVGover(points, overs, center):\n path = \"M \" + str(points[0][0]) + \" \" + str(points[0][1]) + \" \"\n minX = points[0][0]\n minY = points[0][1]\n maxX = minX\n maxY = minY\n for point in points[1:]:\n path += \"L \" + str(point[0]) + \" \" + str(point[1]) + \" \"\n if point[0] < minX:\n minX = point[0]\n elif point[0] > maxX:\n maxX = 
point[0]\n if point[1] < minY:\n minY = point[1]\n elif point[1] > maxY:\n maxY = point[1]\n path += \"Z\"\n path = '<path stroke=\"black\" stroke-width=\"7\" fill=\"none\" d=\"' + path + '\"/>'\n\n viewbox = ' viewbox=\"' + str(minX-4) + ' ' + str(minY-4) + ' ' + str(maxX+4) + ' ' + str(maxY+4) + '\"'\n \n width = ' width=\"' + str((maxX - minX + 8)) + '\"'\n height = ' height=\"' + str((maxY - minY + 8)) + '\"'\n\n over = \"M \" + str(overs[0][0]) + \" \" + str(overs[0][1]) + \" \"\n for point in overs:\n over += \"L \" + str(point[0]) + \" \" + str(point[1]) + \" \"\n over += \"Z\"\n over = '<path stroke=\"red\" stroke-width=\"2\" fill=\"none\" d=\"' + over + '\"/>'\n \n return '<svg ' + 'xmlns=\"http://www.w3.org/2000/svg\"' + width + height + viewbox + '>' + path + over + '<circle fill=\"blue\" cx=\"' + str(center[0]) + '\" cy=\"' + str(center[1]) + '\" r=\"7\" />' + '</svg>'\n\nimport math\ndef rotate(points, theta):\n rotated = []\n cosTheta = math.cos(theta)\n sinTheta = math.sin(theta)\n for point in points:\n rotated.append(( cosTheta * point[0] + sinTheta * point[1], -sinTheta * point[0] + cosTheta * point[1] ))\n return rotated\n\ndef closest(focus, points):\n if ( points[0] != focus ):\n closestPoint = points[0]\n else:\n closestPoint = points[1]\n closestDist = ( focus[0] - closestPoint[0] )**2 + ( focus[1] - closestPoint[1] )**2\n for point in points:\n if point != focus:\n dist = ( focus[0] - point[0] )**2 + ( focus[1] - point[1] )**2\n if dist < closestDist:\n closestDist = dist\n closestPoint = point\n return closestPoint\n\ndef rotangle(points):\n focus = remotest(points)\n closestPoint = closest(focus, points)\n if abs(focus[0] - closestPoint[0]) < tolerance or abs(focus[1] - closestPoint[1]) < tolerance:\n return 0\n else:\n return math.atan2(focus[1] - closestPoint[1], focus[0] - closestPoint[0])\n\ntolerance = 0.000000000001\ndef rightSort(points):\n sorted = [ points[0] ]\n nextPoint = closest(sorted[-1], points)\n x = abs( sorted[-1][0] - nextPoint[0]) < tolerance\n popped = []\n del points[0]\n while len(points) > 0:\n ndxes = []\n if x:\n for ndx in range(len(points)):\n if abs(points[ndx][0] - sorted[-1][0]) < tolerance:\n ndxes.append(ndx)\n if len(ndxes) == 0:\n popped.append(sorted.pop())\n x = False\n else:\n closestDist = abs(points[ndxes[0]][1] - sorted[-1][1])\n ndxClosest = ndxes[0]\n for ndx in ndxes[1:]:\n if abs(points[ndx][1] - sorted[-1][1]) < closestDist:\n ndxClosest = ndx\n sorted.append(points[ndxClosest])\n del points[ndxClosest]\n x = False\n if popped:\n points += popped\n popped = []\n else:\n for ndx in range(len(points)):\n if abs(points[ndx][1] - sorted[-1][1]) < tolerance:\n ndxes.append(ndx)\n if len(ndxes) == 0:\n popped.append(sorted.pop())\n x = True\n else:\n closestDist = abs(points[ndxes[0]][0] - sorted[-1][0])\n ndxClosest = ndxes[0]\n for ndx in ndxes[1:]:\n if abs(points[ndx][0] - sorted[-1][0]) < closestDist:\n ndxClosest = ndx\n sorted.append(points[ndxClosest])\n del points[ndxClosest]\n x = True\n if popped:\n points += popped\n popped = []\n if popped:\n sorted += popped\n return sorted\n\ndef center(points):\n return ( sum(point[0] for point in points) / len(points),\n sum(point[1] for point in points) / len(points) )\n\ndef remotest(points):\n centerPoint = center(points)\n print( \"center\", centerPoint )\n remotestPoint = points[0]\n remotestDist = ( centerPoint[0] - remotestPoint[0] )**2 + ( centerPoint[1] - remotestPoint[1] )**2\n for point in points[1:]:\n dist = ( centerPoint[0] - point[0] )**2 + ( centerPoint[1] - 
point[1] )**2\n if dist > remotestDist:\n remotestDist = dist\n remotestPoint = point\n print( \"remotest\", remotestPoint )\n return remotestPoint\n\ndef squaredPolar(point, centerPoint):\n return ( math.atan2(point[1] - centerPoint[1], point[0] - centerPoint[0]),\n ( point[0] - centerPoint[0] )**2 + ( point[1] - centerPoint[1] )**2 )\n\ndef polarSort(points):\n centerPoint = center(points)\n presorted = []\n for point in points:\n presorted.append(( squaredPolar(point, centerPoint), point ))\n presorted.sort()\n sorted = []\n for point in presorted:\n sorted.append(point[1])\n return sorted\n \nhtmlFile = open(\"polygon.html\", \"w\")\nhtmlFile.write(\"<html><body>\")\nhtmlFile.write(genSVG(points))\nhtmlFile.write(\"</body></html>\")\nhtmlFile.close()\n\nangle = rotangle(points)\nprint( \"angle\", angle * 180 / math.pi )\n\nhtmlFile = open(\"rightgon.html\", \"w\")\nhtmlFile.write(\"<html><body>\")\nhtmlFile.write(genSVGover(rotate(rightSort(rotate(points, angle)), -angle), polarSort(points), center(points)))\nhtmlFile.write(\"</body></html>\")\nhtmlFile.close()\n\nhtmlFile = open(\"polargon.html\", \"w\")\nhtmlFile.write(\"<html><body>\")\nhtmlFile.write(genSVG(polarSort(points)))\nhtmlFile.write(\"</body></html>\")\nhtmlFile.close()\n\nThe image below is an unsorted points \"polygon\".\n\n\n<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"734.7774715252783\" height=\"543.6442641567144\" viewbox=\"65.60520541730344 -3.8540656494687013 800.3826769425817 539.7901985072457\"><path fill=\"grey\" d=\"M 156.40357183517773 23.19316100057085 L 83.97002318399646 188.27914171909507 L 518.4511031561435 60.897074118366035 L 799.3826769425817 214.44658030407507 L 304.1247347870089 -2.8540656494687013 L 593.7387174567936 199.93582818685073 L 773.3354502925422 66.72541735224387 L 625.6142873407109 92.7726440022834 L 428.65273673826925 127.50227953566946 L 379.41234908765887 136.184688419016 L 446.0175545049623 225.98305483689026 L 448.871620154431 530.1077896238992 L 509.768694272797 11.65668646775564 L 373.58400585378104 391.06903555541453 L 602.4211263401401 249.17621583746111 L 182.45079848521726 170.91432395240204 L 616.9318784573643 43.53225635167299 L 165.08598071852424 72.43354865118125 L 312.80714367035546 46.3863220011417 L 225.86284290194985 417.1162622054541 L 399.63123250382057 538.7901985072457 L 66.60520541730344 89.79836641787429 Z\"/></svg>\n\n\n\nThe image below is the rendering of one output file. 
It shows:\n\nblue dot is the center of the (x, y) coordinates\nred polygon is the polar sorted polygon\nblack polygon is the right angle sorted polygon\n\n\n\n<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"740.7774715252784\" height=\"549.6442641567145\" viewbox=\"62.60520541730345 -6.854065649468694 803.3826769425818 542.7901985072458\"><path stroke=\"black\" stroke-width=\"7\" fill=\"none\" d=\"M 156.40357183517776 23.19316100057085 L 165.08598071852424 72.43354865118125 L 66.60520541730345 89.7983664178743 L 83.97002318399647 188.2791417190951 L 182.4507984852173 170.91432395240207 L 225.86284290194988 417.1162622054542 L 373.5840058537811 391.0690355554146 L 399.63123250382057 538.7901985072458 L 448.87162015443107 530.1077896238993 L 379.41234908765887 136.184688419016 L 428.65273673826937 127.50227953566947 L 446.01755450496233 225.9830548368903 L 593.7387174567937 199.93582818685076 L 602.4211263401402 249.17621583746114 L 799.3826769425818 214.44658030407507 L 773.3354502925423 66.72541735224388 L 625.614287340711 92.7726440022834 L 616.9318784573644 43.532256351673 L 518.4511031561435 60.89707411836606 L 509.76869427279706 11.656686467755648 L 312.8071436703555 46.3863220011417 L 304.1247347870089 -2.8540656494686942 Z\"/><path stroke=\"red\" stroke-width=\"2\" fill=\"none\" d=\"M 182.45079848521726 170.91432395240204 L 182.45079848521726 170.91432395240204 L 66.60520541730344 89.79836641787429 L 165.08598071852424 72.43354865118125 L 156.40357183517773 23.19316100057085 L 379.41234908765887 136.184688419016 L 312.80714367035546 46.3863220011417 L 304.1247347870089 -2.8540656494687013 L 428.65273673826925 127.50227953566946 L 509.768694272797 11.65668646775564 L 518.4511031561435 60.897074118366035 L 616.9318784573643 43.53225635167299 L 625.6142873407109 92.7726440022834 L 773.3354502925422 66.72541735224387 L 799.3826769425817 214.44658030407507 L 593.7387174567936 199.93582818685073 L 602.4211263401401 249.17621583746111 L 446.0175545049623 225.98305483689026 L 448.871620154431 530.1077896238992 L 399.63123250382057 538.7901985072457 L 373.58400585378104 391.06903555541453 L 225.86284290194985 417.1162622054541 L 83.97002318399646 188.27914171909507 Z\"/><circle fill=\"blue\" cx=\"409.6874424591604\" cy=\"177.00212769986794\" r=\"7\" /></svg>\n\n\n\n"
] |
[
1
] |
[] |
[] |
[
"algorithm",
"geometry",
"graphics",
"sorting"
] |
stackoverflow_0074666413_algorithm_geometry_graphics_sorting.txt
|
Q:
How to redirect my page to another URL in an function
I have a button that handles the logout for the user, but I also want the user to be redirected to another page when logging out (with react-router-dom).
I've created a button with an onClick event that executes a function, here called handleLogout, but I have problems redirecting the user to another page from the function.
I have tried using a Link over the button component, redirect, and useNavigate, but none of these worked as I wanted them to.
I have read through multiple other Stack Overflow threads but none of them seem to answer the question in my case.
I am using react version 18.0.2 and react-router-dom v6
import { Link, redirect } from "react-router-dom"
import { Home } from "./Home"
import "./Profile.css"
export function Profile(){
function handleLogout() {
localStorage.removeItem("currentUser");
window.location.reload();
return redirect("/")
}
return(
<div>
<button onClick={handleLogout}>Ausloggen</button>
</div>
)
}
A:
To redirect the user to another page from a function using react-router-dom v6, you can use the useNavigate hook and call the returned navigate function with the target URL, which pushes it onto the history stack and redirects the user to that page. (useHistory was removed in v6; useNavigate is its replacement.)
Here is an example of how to use the useNavigate hook to redirect the user to another page when they click the logout button:
import { useNavigate } from "react-router-dom"

export function Profile() {
    const navigate = useNavigate()

    function handleLogout() {
        localStorage.removeItem("currentUser")
        // Navigate instead of reloading: window.location.reload()
        // would discard the client-side navigation that follows it.
        navigate("/")
    }

    return (
        <div>
            <button onClick={handleLogout}>Ausloggen</button>
        </div>
    )
}
|
How to redirect my page to another URL in an function
|
I have a button that handles the logout for the user, but I also want the user to be redirected to another page when logging out (with react-router-dom).
I've created a button with an onClick event that executes a function, here called handleLogout, but I have problems redirecting the user to another page from the function.
I have tried using a Link over the button component, redirect, and useNavigate, but none of these worked as I wanted them to.
I have read through multiple other Stack Overflow threads but none of them seem to answer the question in my case.
I am using react version 18.0.2 and react-router-dom v6
import { Link, redirect } from "react-router-dom"
import { Home } from "./Home"
import "./Profile.css"
export function Profile(){
function handleLogout() {
localStorage.removeItem("currentUser");
window.location.reload();
return redirect("/")
}
return(
<div>
<button onClick={handleLogout}>Ausloggen</button>
</div>
)
}
|
[
"To redirect the user to another page in an function using react-router-dom, you can use the useHistory hook from react-router-dom and call the push method on the history object to push a new url to the history stack and redirect the user to that page.\nHere is an example of how to use the useHistory hook to redirect the user to another page when they click on the logout button:\nimport { useHistory } from \"react-router-dom\"\n\nexport function Profile() {\n const history = useHistory()\n\n function handleLogout() {\n localStorage.removeItem(\"currentUser\")\n window.location.reload()\n history.push(\"/\")\n }\n\n return (\n <div>\n <button onClick={handleLogout}>Ausloggen</button>\n </div>\n )\n}\n\n"
] |
[
2
] |
[] |
[] |
[
"react_router_dom",
"reactjs",
"redirect"
] |
stackoverflow_0074666439_react_router_dom_reactjs_redirect.txt
|
Q:
Program that finds the number 1 in more than one Text file and uses a Thread for each file / Segmentation fault(Core dumped) error
I'm facing a problem in my homework that I can't solve; can you help me? I am compiling via the terminal in openSUSE Leap 15.4. As I mentioned in the title, there will be 10-20 text files in the same directory as our main program, and each text file will consist of 1s and 0s. Text file names will be given from the terminal as program parameters, and I will open each text file and count the number of 1s in it. More than one text file can be given as a parameter to the program in the terminal. I will run a thread to read the contents of each text file.
I wrote the code and it compiles without errors. However, I get a "Segmentation fault (core dumped)" error when I pass the file names to the program from the terminal. Even though the files are in the same directory, I can't read them at all. I will share my source code here; which parts should I change, and what are your recommendations?
Example :
./main 1-10.txt 3-10.txt
Total Number of Ones in All Files: 11
./main 8-10.txt 5-10.txt 4-10.txt
Total Number of Ones in All Files: 14
./main
Total Number of Ones in All Files: 0
./main 1-10M.txt 2-10M.txt 4-10M.txt
Total Number of Ones in All Files: 15001073
--> I will run 3 threads here to read the 3 text files
./main 8-10.txt xyzqw.txt
Total Number of Ones in All Files: 3
--> It will not read the content of the xyzqw.txt file that is not in the directory and will not give an error.
./main *-10M.txt
Total Number of Ones in All Files: 24647735
-->> A program that can work in harmony with wildcard characters
Code :
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
int global = 0;
void* func(void *argp) {
char buffer[500];
char *c = argp;
sprintf(buffer, "%s", c);
FILE *fp = fopen(buffer, "r");
char ch;
if (fp == NULL) {
printf("No text file , Number of 1 : 0");
}
do {
ch = fgetc(fp);
if ((int) ch == 1)
global++;
} while (ch != EOF);
}
int main(int argc, char *argv[]) {
int ar = argc - 1;
pthread_t thread[ar];
if (argc >= 2) {
for (int i = 1; i <= ar; i++) {
pthread_create(&thread[i], NULL, func, (void*) argv[i]);
}
for (int i = 1; i <= ar; i++) {
pthread_join(thread[i], NULL);
}
} else {
printf("Filename not entered, Number of 1 -> 0 ");
}
printf("Number of Ones All files %d", global);
}
A:
You want
pthread_create(&thread[i - 1], NULL, func, (void*) argv[i]);
instead of
pthread_create(&thread[i], NULL, func, (void*) argv[i]);
otherwise you access outside of the bounds of the array in the last iteration.
To keep it simpler you can modify the main arguments:
if (argc > 1)
{
argc -= 1;
argv += 1;
pthread_t thread[argc];
for (int i = 0; i < argc; i++) {
pthread_create(&thread[i], NULL, func, argv[i]); // you don't need the cast
}
for (int i = 0; i < argc; i++) {
pthread_join(thread[i], NULL);
        }
    }
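Independent of the indexing fix, the reader function itself has a few hazards worth noting: fgetc is used even after a failed fopen, its return value is stored in a char instead of an int, the comparison should be against the character '1' rather than the byte value 1, and the shared counter is incremented from several threads without synchronization. A hedged sketch of a safer drop-in replacement for func (the mutex is my addition, not part of the question's code):
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void* func(void *argp) {
    FILE *fp = fopen((char *) argp, "r");
    if (fp == NULL)                      /* missing file: count nothing, no crash */
        return NULL;
    int ch;                              /* int, not char: EOF is outside char range */
    while ((ch = fgetc(fp)) != EOF) {
        if (ch == '1') {                 /* the character '1', not the byte value 1 */
            pthread_mutex_lock(&lock);   /* global is shared across threads */
            global++;
            pthread_mutex_unlock(&lock);
        }
    }
    fclose(fp);
    return NULL;
}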
|
Program that finds the number 1 in more than one Text file and uses a Thread for each file / Segmentation fault(Core dumped) error
|
I'm facing a problem in my homework that I can't solve; can you help me? I am compiling via the terminal in openSUSE Leap 15.4. As I mentioned in the title, there will be 10-20 text files in the same directory as our main program, and each text file will consist of 1s and 0s. Text file names will be given from the terminal as program parameters, and I will open each text file and count the number of 1s in it. More than one text file can be given as a parameter to the program in the terminal. I will run a thread to read the contents of each text file.
I wrote the code and it compiles without errors. However, I get a "Segmentation fault (core dumped)" error when I pass the file names to the program from the terminal. Even though the files are in the same directory, I can't read them at all. I will share my source code here; which parts should I change, and what are your recommendations?
Example :
./main 1-10.txt 3-10.txt
Total Number of Ones in All Files: 11
./main 8-10.txt 5-10.txt 4-10.txt
Total Number of Ones in All Files: 14
./main
Total Number of Ones in All Files: 0
./main 1-10M.txt 2-10M.txt 4-10M.txt
Total Number of Ones in All Files: 15001073
--> I will run 3 threads here to read the 3 text files
./main 8-10.txt xyzqw.txt
Total Number of Ones in All Files: 3
--> It will not read the content of the xyzqw.txt file that is not in the directory and will not give an error.
./main *-10M.txt
Total Number of Ones in All Files: 24647735
-->> A program that can work in harmony with wildcard characters
Code :
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
int global = 0;
void* func(void *argp) {
char buffer[500];
char *c = argp;
sprintf(buffer, "%s", c);
FILE *fp = fopen(buffer, "r");
char ch;
if (fp == NULL) {
printf("No text file , Number of 1 : 0");
}
do {
ch = fgetc(fp);
if ((int) ch == 1)
global++;
} while (ch != EOF);
}
int main(int argc, char *argv[]) {
int ar = argc - 1;
pthread_t thread[ar];
if (argc >= 2) {
for (int i = 1; i <= ar; i++) {
pthread_create(&thread[i], NULL, func, (void*) argv[i]);
}
for (int i = 1; i <= ar; i++) {
pthread_join(thread[i], NULL);
}
} else {
printf("Filename not entered, Number of 1 -> 0 ");
}
printf("Number of Ones All files %d", global);
}
|
[
"You want\npthread_create(&thread[i - 1], NULL, func, (void*) argv[i]);\n\ninstead of\npthread_create(&thread[i], NULL, func, (void*) argv[i]);\n\notherwise you access outside of the bounds of the array in the last iteration.\nTo keep it simpler you can modify the main arguments:\n if (argc > 1)\n {\n argc -= 1;\n argv += 1;\n\n pthread_t thread[argc];\n \n for (int i = 0; i < argc; i++) {\n pthread_create(&thread[i], NULL, func, argv[i]); // you don't need the cast\n }\n for (int i = 0; i < argc; i++) {\n pthread_join(thread[i], NULL);\n }\n\n"
] |
[
1
] |
[] |
[] |
[
"c",
"multithreading",
"posix",
"readfile"
] |
stackoverflow_0074666409_c_multithreading_posix_readfile.txt
|
Q:
Snowflake and Datadog integration
I'm setting up Snowflake and Datadog integration by following this guide.
I installed the Datadog Agent as a Docker container. However, when I try to install the Snowflake integration by running the following command inside my datadog-agent Docker container (via "docker exec -it --user dd-agent dd-agent bash"):
datadog-agent integration install datadog-snowflake==2.0.1
I got this error
bash: datadog-agent: command not found
My question is: does the datadog-agent Docker version support installing integrations? If it does, how do I do it? If it doesn't, do I have to install the datadog-agent on a VM instead?
A:
To install the Snowflake integration on a Datadog Agent running in Docker, you should use the following command:
$ docker exec -it dd-agent bash -c "datadog-agent integration install datadog-snowflake==2.0.1"
This will run the datadog-agent integration install command in the dd-agent Docker container. This command will install the Snowflake integration on the Datadog Agent running in the container.
Note that the --user dd-agent flag is not necessary in this command, as you are already running the command in the dd-agent container.
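If you want to confirm afterwards that the integration landed, the same pattern should work with the agent's show subcommand (a sketch; adjust the container name if yours differs):
$ docker exec -it dd-agent bash -c "datadog-agent integration show datadog-snowflake"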
|
Snowflake and Datadog integration
|
I'm setting up Snowflake and Datadog integration by following this guide.
I installed the Datadog Agent as a Docker container. However, when I try to install the Snowflake integration by running the following command inside my datadog-agent Docker container (via "docker exec -it --user dd-agent dd-agent bash"):
datadog-agent integration install datadog-snowflake==2.0.1
I got this error
bash: datadog-agent: command not found
My question is: does the datadog-agent Docker version support installing integrations? If it does, how do I do it? If it doesn't, do I have to install the datadog-agent on a VM instead?
|
[
"To install the Snowflake integration on a Datadog Agent running in Docker, you should use the following command:\n$ docker exec -it dd-agent bash -c \"datadog-agent integration install datadog-snowflake==2.0.1\"\n\nThis will run the datadog-agent integration install command in the dd-agent Docker container. This command will install the Snowflake integration on the Datadog Agent running in the container.\nNote that the --user dd-agent flag is not necessary in this command, as you are already running the command in the dd-agent container.\n"
] |
[
0
] |
[] |
[] |
[
"datadog",
"docker",
"snowflake_cloud_data_platform",
"snowflake_schema"
] |
stackoverflow_0074666220_datadog_docker_snowflake_cloud_data_platform_snowflake_schema.txt
|
Q:
How do I provide an incrementing counter in place of an existing JSON value using jq
I have a JSON file similar to this:
{
"version": "2.0",
"stage" : {
"objects" : [
{
"foo" : 1100,
"bar" : false,
"id" : "56a983f1-8111-4abc-a1eb-263d41cfb098"
},
{
"foo" : 1100,
"bar" : false,
"id" : "6369df4b-90c4-4695-8a9c-6bb2b8da5976"
}],
"bish" : "#FFFFFF"
},
"more": "abcd"
}
I would like the output to be exactly the same, with the exception of an incrementing integer in place of the "id" : "guid" - something like:
{
"version": "2.0",
"stage" : {
"objects" : [
{
"foo" : 1100,
"bar" : false,
"id" : 1
},
{
"foo" : 1100,
"bar" : false,
"id" : 2
}],
"bish" : "#FFFFFF"
},
"more": "abcd"
}
I'm new to jq. I can set the id's to a fixed integer with .stage.objects[].id |= 1.
{
"version": "2.0",
"stage": {
"objects": [
{
"foo": 1100,
"bar": false,
"id": 1
},
{
"foo": 1100,
"bar": false,
"id": 1
}
],
"bish": "#FFFFFF"
},
"more": "abcd"
}
I can't figure out the syntax to make the assigned number iterate.
I tried various combinations of map, reduce, to_entries, foreach and other strategies mentioned in answers to similar questions but the data in those examples always consisted of something simple.
A:
You can exploit the fact that to_entries on arrays uses the index as "key", then modify your value:
.stage.objects |= (to_entries | map(.value.id = .key + 1 | .value))
or
.stage.objects |= (to_entries | map(.value += {id: (.key + 1)} | .value))
Output:
{
"version": "2.0",
"stage": {
"objects": [
{
"foo": 1100,
"bar": false,
"id": 1
},
{
"foo": 1100,
"bar": false,
"id": 2
}
],
"bish": "#FFFFFF"
},
"more": "abcd"
}
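Either filter is applied from the shell as usual; a minimal sketch, assuming the document shown above is saved as input.json:
jq '.stage.objects |= (to_entries | map(.value.id = .key + 1 | .value))' input.json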
A:
Here's a variant using reduce to iterate over the keys:
.stage.objects |= reduce keys[] as $i (.; .[$i].id = $i + 1)
{
"version": "2.0",
"stage": {
"objects": [
{
"foo": 1100,
"bar": false,
"id": 1
},
{
"foo": 1100,
"bar": false,
"id": 2
}
],
"bish": "#FFFFFF"
},
"more": "abcd"
}
Demo
Update:
Is there a way to make the search and replace go deep? If the items in the objects array had children arrays with id's, could they be replaced as well?
Of course. You could enhance the LHS of the update to also cover all .children arrays recursively using recurse(.[].children | arrays):
(.stage.objects | recurse(.[].children | arrays)) |=
reduce keys[] as $i (.; .[$i].id = $i + 1)
Demo
Note that in this case each .children array is treated independently, thus numbering starts from 1 in each of them. If you want a continuous numbering instead, it has to be done outside and brought down into the iteration. Here's a solution gathering the target paths using path, numbering them using to_entries, and setting them iteratively using setpath:
reduce (
[path(.stage.objects[] | recurse(.children | arrays[]).id)] | to_entries[]
) as $i (.; setpath($i.value; $i.key + 1))
Demo
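For illustration, given a hypothetical nested input like
{"stage":{"objects":[{"id":"a","children":[{"id":"b"},{"id":"c"}]},{"id":"d"}]}}
the continuous-numbering filter above yields
{"stage":{"objects":[{"id":1,"children":[{"id":2},{"id":3}]},{"id":4}]}}
since path collects the id paths in document order before to_entries numbers them.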
|
How do I provide an incrementing counter in place of an existing JSON value using jq
|
I have a JSON file similar to this:
{
"version": "2.0",
"stage" : {
"objects" : [
{
"foo" : 1100,
"bar" : false,
"id" : "56a983f1-8111-4abc-a1eb-263d41cfb098"
},
{
"foo" : 1100,
"bar" : false,
"id" : "6369df4b-90c4-4695-8a9c-6bb2b8da5976"
}],
"bish" : "#FFFFFF"
},
"more": "abcd"
}
I would like the output to be exactly the same, with the exception of an incrementing integer in place of the "id" : "guid" - something like:
{
"version": "2.0",
"stage" : {
"objects" : [
{
"foo" : 1100,
"bar" : false,
"id" : 1
},
{
"foo" : 1100,
"bar" : false,
"id" : 2
}],
"bish" : "#FFFFFF"
},
"more": "abcd"
}
I'm new to jq. I can set the id's to a fixed integer with .stage.objects[].id |= 1.
{
"version": "2.0",
"stage": {
"objects": [
{
"foo": 1100,
"bar": false,
"id": 1
},
{
"foo": 1100,
"bar": false,
"id": 1
}
],
"bish": "#FFFFFF"
},
"more": "abcd"
}
I can't figure out the syntax to make the assigned number iterate.
I tried various combinations of map, reduce, to_entries, foreach and other strategies mentioned in answers to similar questions but the data in those examples always consisted of something simple.
|
[
"You can exploit the fact that to_entries on arrays uses the index as \"key\", then modify your value:\n.stage.objects |= (to_entries | map(.value.id = .key + 1 | .value))\n\nor\n.stage.objects |= (to_entries | map(.value += {id: (.key + 1)} | .value))\n\nOutput:\n{\n \"version\": \"2.0\",\n \"stage\": {\n \"objects\": [\n {\n \"foo\": 1100,\n \"bar\": false,\n \"id\": 1\n },\n {\n \"foo\": 1100,\n \"bar\": false,\n \"id\": 2\n }\n ],\n \"bish\": \"#FFFFFF\"\n },\n \"more\": \"abcd\"\n}\n\n",
"Here's a variant using reduce to iterate over the keys:\n.stage.objects |= reduce keys[] as $i (.; .[$i].id = $i + 1)\n\n{\n \"version\": \"2.0\",\n \"stage\": {\n \"objects\": [\n {\n \"foo\": 1100,\n \"bar\": false,\n \"id\": 1\n },\n {\n \"foo\": 1100,\n \"bar\": false,\n \"id\": 2\n }\n ],\n \"bish\": \"#FFFFFF\"\n },\n \"more\": \"abcd\"\n}\n\nDemo\n\nUpdate:\n\nIs there a way to make the search and replace go deep? If the items in the objects array had children arrays with id's, could they be replaced as well?\n\nOf course. You could enhance the LHS of the update to also cover all .children arrays recursively using recurse(.[].children | arrays):\n(.stage.objects | recurse(.[].children | arrays)) |=\n reduce keys[] as $i (.; .[$i].id = $i + 1)\n\nDemo\nNote that in this case each .children array is treated independently, thus numbering starts from 1 in each of them. If you want a continuous numbering instead, it has to be done outside and brought down into the iteration. Here's a solution gathering the target paths using path, numbering them using to_entries, and setting them iteratively using setpath:\nreduce (\n [path(.stage.objects[] | recurse(.children | arrays[]).id)] | to_entries[]\n) as $i (.; setpath($i.value; $i.key + 1))\n\nDemo\n"
] |
[
1,
0
] |
[] |
[] |
[
"jq",
"loops"
] |
stackoverflow_0074664554_jq_loops.txt
|
Q:
Azure Functions Easy Auth & Google Access token
I'm trying to add Google authentication to my Azure Functions app which will be used from a Svelte static web app (SWA). The SWA uses Google Identity (https://accounts.google.com/gsi/client) to both authenticate and then retrieve an access_token. Authentication is performed using a standard Google Identity sign in button. I've tried One Tap prompt as well with the same result.
google.accounts.id.initialize({
client_id: googleClientId,
callback: handleCredentialResponse,
});
google.accounts.id.renderButton(
button,
{ theme: 'outline', size: 'large' }, // customization attributes
);
The user authenticates, it works fine, and I get a JWT id_token containing name, email, image, et cetera. It's a bit annoying that the user then has to go through the whole account-selection process again, but I guess that's the Google experience. Once I'm ready to make function calls, I then proceed to authorize:
function getAccessToken() {
return new Promise((resolve, reject) => {
const client = google.accounts.oauth2.initTokenClient({
client_id: googleClientId,
scope: "openid",
callback: (response) => {
if (response.access_token) {
resolve(response.access_token);
} else {
reject(response?.error);
}
},
});
client.requestAccessToken();
});
}
This also works fine, I retrieve an access_token. I then proceed to call an Azure Function with this token in the header:
Authorization: Bearer <ACCESS_TOKEN>
This always results in a 401 response. I have tried setting all functions to anonymous to no effect.
I'm wondering if this has to do with scope. In the Google Console it's only possible to add Google specific scopes, which is why I retrieve an access_token for the openid scope.
I've also tried setting credentials to include since there might be cookies the Easy Auth layer would like to read from the web app to authenticate the user. CORS on the Azure Functions app is configured correctly for the host names used by the web app and Access-Control-Allow-Credentials is enabled on the Function App. This has no effect either.
A:
Wow this was badly documented. After reading the Azure Functions and App Service Authentication blog post it seems an 'authentication token' needs to be retrieved from the functions app itself instead of an 'access token' from Google. After Google identification the id_token from the first step needs to be POSTed to https://<functions_app>/.auth/login/google with the following as body:
{
"id_token": "<id_token>"
}
This in turn returns something as follows:
{
"authenticationToken": "<authenticationToken>",
"user": { "userId": "<sid>" }
}
This authenticationToken then needs to be passed in the header to each function call as follows:
X-ZUMO-AUTH: <authenticationToken>
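For completeness, a minimal sketch of the token exchange in the browser (the function app host stays a placeholder, and getZumoToken is just an illustrative name):
async function getZumoToken(idToken) {
  const res = await fetch("https://<functions_app>/.auth/login/google", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id_token: idToken }),
  });
  if (!res.ok) throw new Error(`Easy Auth login failed: ${res.status}`);
  const { authenticationToken } = await res.json();
  return authenticationToken; // send as the X-ZUMO-AUTH header on each call
}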
Edit: it seems this was fully documented, somehow I missed this.
|
Azure Functions Easy Auth & Google Access token
|
I'm trying to add Google authentication to my Azure Functions app which will be used from a Svelte static web app (SWA). The SWA uses Google Identity (https://accounts.google.com/gsi/client) to both authenticate and then retrieve an access_token. Authentication is performed using a standard Google Identity sign in button. I've tried One Tap prompt as well with the same result.
google.accounts.id.initialize({
client_id: googleClientId,
callback: handleCredentialResponse,
});
google.accounts.id.renderButton(
button,
{ theme: 'outline', size: 'large' }, // customization attributes
);
The user authenticates, it works fine, and I get a JWT id_token containing name, email, image, et cetera. It's a bit annoying that the user then has to go through the whole account-selection process again, but I guess that's the Google experience. Once I'm ready to make function calls, I then proceed to authorize:
function getAccessToken() {
return new Promise((resolve, reject) => {
const client = google.accounts.oauth2.initTokenClient({
client_id: googleClientId,
scope: "openid",
callback: (response) => {
if (response.access_token) {
resolve(response.access_token);
} else {
reject(response?.error);
}
},
});
client.requestAccessToken();
});
}
This also works fine, I retrieve an access_token. I then proceed to call an Azure Function with this token in the header:
Authorization: Bearer <ACCESS_TOKEN>
This always results in a 401 response. I have tried setting all functions to anonymous to no effect.
I'm wondering if this has to do with scope. In the Google Console it's only possible to add Google specific scopes, which is why I retrieve an access_token for the openid scope.
I've also tried setting credentials to include since there might be cookies the Easy Auth layer would like to read from the web app to authenticate the user. CORS on the Azure Functions app is configured correctly for the host names used by the web app and Access-Control-Allow-Credentials is enabled on the Function App. This has no effect either.
|
[
"Wow this was badly documented. After reading the Azure Functions and App Service Authentication blog post it seems an 'authentication token' needs to be retrieved from the functions app itself instead of an 'access token' from Google. After Google identification the id_token from the first step needs to be POSTed to https://<functions_app>/.auth/login/google with the following as body:\n{\n \"id_token\": \"<id_token>\"\n}\n\nThis in turn returns something as follows:\n{\n \"authenticationToken\": \"<authenticationToken>\",\n \"user\": { \"userId\": \"<sid>\" }\n}\n\nThis authenticationToken then needs be be passed in the header to each function call as follows:\nX-ZUMO-AUTH: <authenticationToken>\n\n\nEdit: it seems this was fully documented, somehow I missed this.\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_functions",
"easy_auth",
"google_identity"
] |
stackoverflow_0074666098_azure_azure_functions_easy_auth_google_identity.txt
|
Q:
Spring Boot Controller not mapping
I have used STS and now I am using IntelliJ Ultimate Edition but I am still getting the same output. My controller is not getting mapped thus showing 404 error. I am completely new to Spring Framework.
DemoApplication.java
package com.webservice.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
HelloController.java
package com.webservice.demo;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class HelloController {
@RequestMapping("/hello")
public String sayHello(){
return "Hey";
}
}
Console Output
com.webservice.demo.DemoApplication : Starting DemoApplication on XFT000159365001 with PID 11708 (started by Mayank Khursija in C:\Users\Mayank Khursija\IdeaProjects\demo)
2017-07-19 12:59:46.150 INFO 11708 --- [ main] com.webservice.demo.DemoApplication : No active profile set, falling back to default profiles: default
2017-07-19 12:59:46.218 INFO 11708 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@238e3f: startup date [Wed Jul 19 12:59:46 IST 2017]; root of context hierarchy
2017-07-19 12:59:47.821 INFO 11708 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8211 (http)
2017-07-19 12:59:47.832 INFO 11708 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2017-07-19 12:59:47.832 INFO 11708 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.15
2017-07-19 12:59:47.944 INFO 11708 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2017-07-19 12:59:47.944 INFO 11708 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1728 ms
2017-07-19 12:59:47.987 INFO 11708 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2017-07-19 12:59:48.510 INFO 11708 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2017-07-19 12:59:48.519 INFO 11708 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 0
2017-07-19 12:59:48.634 INFO 11708 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8211 (http)
2017-07-19 12:59:48.638 INFO 11708 --- [ main] com.webservice.demo.DemoApplication : Started DemoApplication in 2.869 seconds (JVM running for 3.44)
A:
I too had a similar issue and was finally able to resolve it by correcting the source package structure, following this.
Your controller classes are not picked up by component scanning. They must be nested below the main SpringApplication class (the one with the main() method) in the package hierarchy; only then will they be scanned, and you should also see the RequestMappings listed in the console output while Spring Boot starts.
Tested on Spring Boot 1.5.8.RELEASE
But in case you prefer to use your own packaging structure, you can always use the @ComponentScan annotation to define your basePackages to scan.
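That would look something like this (a sketch reusing the package name from the question):
@SpringBootApplication
@ComponentScan(basePackages = "com.webservice.demo")
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}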
A:
Because DemoApplication.class and HelloController.class are in the same package.
Locate your main application class in a root package above the other classes.
Take look at Spring Boot documentation Locating the Main Application Class
Using a root package also allows component scan to apply only on your
project.
For example, in your case it looks like below:
com.webservice.demo.DemoApplication
com.webservice.demo.controller.HelloController
A:
In my case, the dependency was missing from pom.xml; otherwise everything compiled just fine. The 404 and the missing-mappings info in the Spring logs were the only hints.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
A:
I also ran into a similar issue and resolved it by using the correct package structure, as per below. After the correction, it works properly.
e.g.
Spring Application Main Class is in package com.example
Controller Classes are in package com.example.controller
A:
Adding @ComponentScan("com.webservice") in the main class above @SpringBootApplication will resolve your problem. Refer to the code below:
package com.webservice.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
@ComponentScan("com.webservice")
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
A:
In my case, I was using @Controller instead of @RestController with @RequestMapping
A:
In my opinion, this visibility problem arises when we leave component scanning to Spring, which looks for classes following a standard convention.
In this scenario, since the starter class (DemoApplication) is in the com.webservice.demo package, putting the controller one level below helps Spring find the classes via the default component-scan mechanism. Putting HelloController under com.webservice.demo.controller should solve the issue.
A:
It depends on a couple of properties:
server.contextPath property in the application properties. If it's set to any value, you need to include that in your request URL. If there is no such property, add this line to application.properties: server.contextPath=/
method property in @RequestMapping, which does not seem to have any value here and hence, as per the documentation, maps to all HTTP methods. However, if you want it to listen to one particular method, you can set it to, say, method = RequestMethod.GET, as in the sketch below.
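A minimal sketch, assuming the usual imports from org.springframework.web.bind.annotation:
@RequestMapping(value = "/hello", method = RequestMethod.GET)
public String sayHello() {
    return "Hey";
}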
A:
I found the answer to this. This was occurring because of the security configuration, which was updated in newer versions of the Spring Framework. So I just changed my version from 1.5.4 to 1.3.2.
A:
In my case I used the wrong port for the test request - Tomcat was started with several ports exposed (including one for monitoring, /actuator).
A:
In my case I changed the package of configuration file. Moved it back to the original com.example.demo package and things started working.
A:
Another case might be that you accidentally put a Java class in a Kotlin sources directory as I did.
Wrong:
src/main
┕ kotlin ← this is wrong for Java
┕ com
┕ example
┕ web
┕ Controller.class
Correct:
src/main
┕ java ← changed 'kotlin' to 'java'
┕ com
┕ example
┕ web
┕ Controller.class
Because when in Kotlin sources directory, Java class won't get picked up.
A:
All other packages should be an extension of parent package then only spring boot app will scan them by default.
Other option will be to use @ComponentScan(com.webservice)
package structure
A:
I set up Spring Boot Security in the Maven deps, and it automatically denies access to unauthenticated users, including on the login page, if you haven't changed the rules for it.
So I preferred my own security system and deleted this dependency.
If you want to use Spring Security, you can write a WebSecurityConfig like this:
@Configuration
@EnableWebSecurity
public class WebSecurityConfig extends WebSecurityConfigurerAdapter {
@Autowired
UserService userService;
@Bean
public BCryptPasswordEncoder bCryptPasswordEncoder() {
return new BCryptPasswordEncoder();
}
@Override
protected void configure(HttpSecurity httpSecurity) throws Exception {
httpSecurity
.csrf()
.disable()
.authorizeRequests()
// Access only for unregistered users
.antMatchers("/registration").not().fullyAuthenticated()
// Access only for users with the Administrator role
.antMatchers("/admin/**").hasRole("ADMIN")
.antMatchers("/news").hasRole("USER")
// Access allowed for all users
.antMatchers("/", "/resources/**").permitAll()
// All other pages require authentication
.anyRequest().authenticated()
.and()
// Login configuration
.formLogin()
.loginPage("/login")
// Redirect to the home page after a successful login
.defaultSuccessUrl("/")
.permitAll()
.and()
.logout()
.permitAll()
.logoutSuccessUrl("/");
}
@Autowired
protected void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
auth.userDetailsService(userService).passwordEncoder(bCryptPasswordEncoder());
}
}
from [https://habr.com/ru/post/482552/] (in Russian)
|
Spring Boot Controller not mapping
|
I have used STS and now I am using IntelliJ Ultimate Edition but I am still getting the same output. My controller is not getting mapped thus showing 404 error. I am completely new to Spring Framework.
DemoApplication.java
package com.webservice.demo;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
@SpringBootApplication
public class DemoApplication {
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
}
HelloController.java
package com.webservice.demo;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;
@RestController
public class HelloController {
@RequestMapping("/hello")
public String sayHello(){
return "Hey";
}
}
Console Output
com.webservice.demo.DemoApplication : Starting DemoApplication on XFT000159365001 with PID 11708 (started by Mayank Khursija in C:\Users\Mayank Khursija\IdeaProjects\demo)
2017-07-19 12:59:46.150 INFO 11708 --- [ main] com.webservice.demo.DemoApplication : No active profile set, falling back to default profiles: default
2017-07-19 12:59:46.218 INFO 11708 --- [ main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@238e3f: startup date [Wed Jul 19 12:59:46 IST 2017]; root of context hierarchy
2017-07-19 12:59:47.821 INFO 11708 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat initialized with port(s): 8211 (http)
2017-07-19 12:59:47.832 INFO 11708 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2017-07-19 12:59:47.832 INFO 11708 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/8.5.15
2017-07-19 12:59:47.944 INFO 11708 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2017-07-19 12:59:47.944 INFO 11708 --- [ost-startStop-1] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 1728 ms
2017-07-19 12:59:47.987 INFO 11708 --- [ost-startStop-1] o.s.b.w.servlet.FilterRegistrationBean : Mapping filter: 'characterEncodingFilter' to: [/*]
2017-07-19 12:59:48.510 INFO 11708 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2017-07-19 12:59:48.519 INFO 11708 --- [ main] o.s.c.support.DefaultLifecycleProcessor : Starting beans in phase 0
2017-07-19 12:59:48.634 INFO 11708 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8211 (http)
2017-07-19 12:59:48.638 INFO 11708 --- [ main] com.webservice.demo.DemoApplication : Started DemoApplication in 2.869 seconds (JVM running for 3.44)
|
[
"I too had the similar issue and was able to finally resolve it by correcting the source package structure following this\nYour Controller classes are not scanned by the Component scanning. Your Controller classes must be nested below in package hierarchy to the main SpringApplication class having the main() method, then only it will be scanned and you should also see the RequestMappings listed in the console output while Spring Boot is getting started.\nTested on Spring Boot 1.5.8.RELEASE\nBut in case you prefer to use your own packaging structure, you can always use the @ComponentScan annotation to define your basePackages to scan.\n",
"Because of DemoApplication.class and HelloController.class in the same package\nLocate your main application class in a root package above other classes\nTake look at Spring Boot documentation Locating the Main Application Class\n\nUsing a root package also allows component scan to apply only on your\n project.\n\nFor example, in your case it looks like below:\ncom.webservice.demo.DemoApplication\ncom.webservice.demo.controller.HelloController \n",
"In my case, it was missing the dependency from pom.xml, otherwise everything compiled just fine. The 404 and missing mappings info from Spring logs were the only hints.\n <dependency>\n <groupId>org.springframework.boot</groupId>\n <artifactId>spring-boot-starter-web</artifactId>\n </dependency>\n\n",
"I also had trouble with a similar issue and resolved it using the correct package structure as per below. After correction, it is working properly.\ne.g.\n\nSpring Application Main Class is in package com.example\nController Classes are in package com.example.controller\n\n",
"Adding @ComponentScan(com.webservice) in main class above @SpringBootApplication will resolve your problem. Refer below code\npackage com.webservice.demo;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\nimport org.springframework.context.annotation.ComponentScan;\n\n@ComponentScan(com.webservice)\n@SpringBootApplication\npublic class DemoApplication {\n\n public static void main(String[] args) {\n SpringApplication.run(DemoApplication.class, args);\n }\n}\n\n",
"In my case, I was using @Controller instead of @RestController with @RequestMapping\n",
"In my opinion, this visibility problem comes when we leave the component scan to Spring which has a particular way of looking for the classes using standard convention. \nIn this scenario as the Starter class(DemoApplication)is in com.webservice.demo package, putting Controller one level below will help Spring to find the classes using the default component scan mechanism. Putting HelloController under com.webservice.demo.controller should solve the issue.\n",
"It depends on a couple of properties:\n\nserver.contextPath property in application properties. If it's set to any value then you need to append that in your request url. If there is no such property then add this line in application.properties server.contextPath=/\nmethod property in @RequestMapping, there does not seem to be any value and hence, as per documentation, it should map to all the methods. However, if you want it to listen to any particular method then you can set it to let's say method = HttpMethod.GET\n\n",
"I found the answer to this. This was occurring because of security configuration which is updated in newer versions of Spring Framework. So i just changed my version from 1.5.4 to 1.3.2\n",
"In my case I used wrong port for test request - Tomcat was started with several ones exposed (including one for monitoring /actuator).\n",
"In my case I changed the package of configuration file. Moved it back to the original com.example.demo package and things started working.\n",
"Another case might be that you accidentally put a Java class in a Kotlin sources directory as I did.\nWrong:\nsrc/main\n┕ kotlin ← this is wrong for Java\n ┕ com\n ┕ example\n ┕ web\n ┕ Controller.class\n\nCorrect:\nsrc/main\n┕ java ← changed 'kotlin' to 'java'\n ┕ com\n ┕ example\n ┕ web\n ┕ Controller.class\n\nBecause when in Kotlin sources directory, Java class won't get picked up.\n",
"All other packages should be an extension of parent package then only spring boot app will scan them by default.\nOther option will be to use @ComponentScan(com.webservice)\npackage structure\n",
" I set up Spring Boot Security in Maven deps. And it automatically deny access to unlogged users also for login page if you haven't change rules for it.\nSo I prefered my own security system and deleted this dependency.\nIf you want to use Spring Security. You can wrote WebSecurityConfig like this:\n@Configuration\n@EnableWebSecurity\npublic class WebSecurityConfig extends WebSecurityConfigurerAdapter {\n @Autowired\n UserService userService;\n\n @Bean\n public BCryptPasswordEncoder bCryptPasswordEncoder() {\n return new BCryptPasswordEncoder();\n }\n\n @Override\n protected void configure(HttpSecurity httpSecurity) throws Exception {\n httpSecurity\n .csrf()\n .disable()\n .authorizeRequests()\n //Доступ только для не зарегистрированных пользователей\n .antMatchers(\"/registration\").not().fullyAuthenticated()\n //Доступ только для пользователей с ролью Администратор\n .antMatchers(\"/admin/**\").hasRole(\"ADMIN\")\n .antMatchers(\"/news\").hasRole(\"USER\")\n //Доступ разрешен всем пользователей\n .antMatchers(\"/\", \"/resources/**\").permitAll()\n //Все остальные страницы требуют аутентификации\n .anyRequest().authenticated()\n .and()\n //Настройка для входа в систему\n .formLogin()\n .loginPage(\"/login\")\n //Перенарпавление на главную страницу после успешного входа\n .defaultSuccessUrl(\"/\")\n .permitAll()\n .and()\n .logout()\n .permitAll()\n .logoutSuccessUrl(\"/\");\n }\n\n @Autowired\n protected void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {\n auth.userDetailsService(userService).passwordEncoder(bCryptPasswordEncoder());\n }\n}\n\nfrom [https://habr.com/ru/post/482552/] (in russian)\n"
] |
[
101,
21,
5,
4,
4,
4,
2,
1,
0,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"java",
"spring"
] |
stackoverflow_0045183875_java_spring.txt
|
Q:
Cheerio(ver 13) doesn't work with US Census Table
Up until recently, with the following code in JavaScript (Google Apps Script), I had been able to get data from https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1&notAdjusted=0&errorData=0. But all of a sudden, sometime last month, this code stopped working. I couldn't figure out what's wrong. Is there any change in the Cheerio library? Can anyone help me? Thank you so much in advance for any help!
function test() {
var url = "https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1¬Adjusted=0&errorData=0#table-results";
var res = UrlFetchApp.fetch(url, { muteHttpExceptions: true }).getContentText();
var $ = Cheerio.load(res); //version 13
var data = $("table").find('td').toArray().map(el => $(el).text().replace(/,/g, ''));
console.log(data);
}
A:
I believe your goal is as follows.
You want to retrieve the bottom table in the site of URL https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1&notAdjusted=0&errorData=0 using Google Apps Script.
Issue and workaround:
When I saw the HTML of your URL, the bottom table was not included. It seems that it is created by JavaScript. But, unfortunately, I couldn't find the script. Fortunately, when I saw the site, I could find the URL for downloading the table as CSV data. I thought that this URL might be able to be used. When this is reflected in a sample script, it becomes as follows.
Sample script:
function myFunction() {
// This is from your URL.
const url = "https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1¬Adjusted=0&errorData=0";
// Convert your URL.
const query = url.split("?").pop().split("&").reduce((o, e) => {
const [k, v] = e.split("=");
o[k == "programCode" ? "program" : k] = v;
return o;
}, {});
const obj = { format: "csv", adjusted: true, notAdjusted: false, errorData: false, mode: "report", submit: "GET+DATA" };
const q = Object.entries(obj).reduce((o, [k, v]) => (o[k] = v, o), query);
String.prototype.addQuery = function (obj) { // Ref: https://gist.github.com/tanaikech/70503e0ea6998083fcb05c6d2a857107
return this + "?" + Object.entries(obj).flatMap(([k, v]) => Array.isArray(v) ? v.map(e => `${k}=${encodeURIComponent(e)}`) : `${k}=${encodeURIComponent(v)}`).join("&");
}
const convertedUrl = "https://www.census.gov/econ_export".addQuery(q);
// Download table as CSV data.
const res = UrlFetchApp.fetch(convertedUrl);
const ar = Utilities.parseCsv(res.getContentText());
const idx = ar.findIndex(([a, b]) => !a && !b);
const temp = ar.splice(idx + 1, ar.length);
const result = temp[0].map((_, c) => temp.map(r => r[c]));
console.log(result);
}
When this script is run, the following result is obtained.
[
["Period","Jan-2022","Feb-2022","Mar-2022","Apr-2022","May-2022","Jun-2022","Jul-2022","Aug-2022","Sep-2022","Oct-2022","Nov-2022","Dec-2022"],
["Value","1726585","1753123","1768168","1780890","1793778","1803791","1817862","1797771","1800105","1794949","NA","NA"]
]
The URL of convertedUrl can be manually retrieved from the site. When you can use the manually retrieved URL, the script is simpler as follows.
const res = UrlFetchApp.fetch("###URL###");
const ar = Utilities.parseCsv(res.getContentText());
const idx = ar.findIndex(([a, b]) => !a && !b);
const temp = ar.splice(idx + 1, ar.length);
const result = temp[0].map((_, c) => temp.map(r => r[c]));
console.log(result);
IMPORTANT:
This sample script is for the current HTML of your URL of https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1&notAdjusted=0&errorData=0. When you change your URL, this script might not be able to be used. And, when the specification of the site is changed, this script might not be able to be used. Please be careful about this.
Reference:
fetch(url, params)
|
Cheerio(ver 13) doesn't work with US Census Table
|
Up until recently, with the following code in JavaScript (Google Apps Script), I had been able to get data from https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1&notAdjusted=0&errorData=0. But all of a sudden, sometime last month, this code stopped working. I couldn't figure out what's wrong. Is there any change in the Cheerio library? Can anyone help me? Thank you so much in advance for any help!
function test() {
var url = "https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1¬Adjusted=0&errorData=0#table-results";
var res = UrlFetchApp.fetch(url, { muteHttpExceptions: true }).getContentText();
var $ = Cheerio.load(res); //version 13
var data = $("table").find('td').toArray().map(el => $(el).text().replace(/,/g, ''));
console.log(data);
}
|
[
"I believe your goal is as follows.\n\nYou want to retrieve the bottom table in the site of URL https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1¬Adjusted=0&errorData=0 using Google Apps Script.\n\nIssue and workaround:\nWhen I saw the HTML of your URL, the bottom table is not included. It seems that that is created by Javascript. But, unfortunately, I couldn't find the script. But, fortunately, when I saw the site, I can find the URL for downloading the table as CSV data. I thought that this URL might be able to be used. When this is reflected in a sample script, it becomes as follows.\nSample script:\nfunction myFunction() {\n // This is from your URL.\n const url = \"https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1¬Adjusted=0&errorData=0\";\n\n // Convert your URL.\n const query = url.split(\"?\").pop().split(\"&\").reduce((o, e) => {\n const [k, v] = e.split(\"=\");\n o[k == \"programCode\" ? \"program\" : k] = v;\n return o;\n }, {});\n const obj = { format: \"csv\", adjusted: true, notAdjusted: false, errorData: false, mode: \"report\", submit: \"GET+DATA\" };\n const q = Object.entries(obj).reduce((o, [k, v]) => (o[k] = v, o), query);\n String.prototype.addQuery = function (obj) { // Ref: https://gist.github.com/tanaikech/70503e0ea6998083fcb05c6d2a857107\n return this + \"?\" + Object.entries(obj).flatMap(([k, v]) => Array.isArray(v) ? v.map(e => `${k}=${encodeURIComponent(e)}`) : `${k}=${encodeURIComponent(v)}`).join(\"&\");\n }\n const convertedUrl = \"https://www.census.gov/econ_export\".addQuery(q);\n\n // Download table as CSV data.\n const res = UrlFetchApp.fetch(convertedUrl);\n const ar = Utilities.parseCsv(res.getContentText());\n const idx = ar.findIndex(([a, b]) => !a && !b);\n const temp = ar.splice(idx + 1, ar.length);\n const result = temp[0].map((_, c) => temp.map(r => r[c]));\n console.log(result);\n}\n\n\nWhen this script is run, the following result is obtained.\n [\n [\"Period\",\"Jan-2022\",\"Feb-2022\",\"Mar-2022\",\"Apr-2022\",\"May-2022\",\"Jun-2022\",\"Jul-2022\",\"Aug-2022\",\"Sep-2022\",\"Oct-2022\",\"Nov-2022\",\"Dec-2022\"],\n [\"Value\",\"1726585\",\"1753123\",\"1768168\",\"1780890\",\"1793778\",\"1803791\",\"1817862\",\"1797771\",\"1800105\",\"1794949\",\"NA\",\"NA\"]\n ]\n\n\nThe URL of convertedUrl can be manually retrieved from the site. When you can use the manually retrieved URL, the script is simpler as follows.\n const res = UrlFetchApp.fetch(\"###URL###\");\n const ar = Utilities.parseCsv(res.getContentText());\n const idx = ar.findIndex(([a, b]) => !a && !b);\n const temp = ar.splice(idx + 1, ar.length);\n const result = temp[0].map((_, c) => temp.map(r => r[c]));\n console.log(result);\n\n\n\nIMPORTANT:\n\nThis sample script is for the current HTML of your URL of https://www.census.gov/econ/currentdata/?programCode=VIP&startYear=2022&endYear=2022&categories[]=AXXXX&dataType=T&geoLevel=US&adjusted=1¬Adjusted=0&errorData=0. When you change your URL, this script might not be able to be used. And, when the specification of the site is changed, this script might not be able to be used. Please be careful about this.\n\nReference:\n\nfetch(url, params)\n\n"
] |
[
0
] |
[] |
[] |
[
"cheerio",
"google_apps_script",
"javascript"
] |
stackoverflow_0074651119_cheerio_google_apps_script_javascript.txt
|
Q:
set vcpkg x-buildtrees-root option in manifest or in cmakepresets.json
I have a CMake project that uses vcpkg.json for vcpkg, and CMakePresets.json for setting the CMake options.
This is the vcpkg.json:
{
"name": "myproj",
"version": "1.0.0",
"dependencies": [
"boost",
"qt"
]
}
This is the CMakePresets.json:
{
"version": 3,
"cmakeMinimumRequired": {
"major": 3,
"minor": 22,
"patch": 1
},
"configurePresets": [
{
"name": "default",
"displayName": "Default Config",
"description": "Default config generator with ninja",
"generator": "Ninja",
"binaryDir": "${sourceDir}/build/${presetName}",
"hidden": true,
"cacheVariables": {
"CMAKE_TOOLCHAIN_FILE": "e:/lib/vcpkg/scripts/buildsystems/vcpkg.cmake",
"VCPKG_DEFAULT_TRIPLET": "x64-windows",
"CMAKE_EXPORT_COMPILE_COMMANDS": "TRUE"
},
"environment": {
}
},
{
"inherits": "default",
"name": "debug",
"displayName": "Debug",
"description": "Debug build.",
"cacheVariables": {
"CMAKE_BUILD_TYPE": "Debug"
}
},
{
"inherits": "default",
"name": "release",
"displayName": "Release",
"description": "Release build.",
"cacheVariables": {
"CMAKE_BUILD_TYPE": "Release"
}
}
],
"buildPresets": [
{
"name": "Debug",
"configurePreset": "debug"
},
{
"name": "Release",
"configurePreset": "release"
}
],
"testPresets": [
{
"name": "debugtest",
"configurePreset": "debug",
"output": {"outputOnFailure": true},
"execution": {"noTestsAction": "error", "stopOnFailure": true}
}
]
}
When I open the project folder with Visual Studio 2022, it starts to build the vcpkg libraries, and everything goes well until it builds qtwebengine, which returns an error:
1> [CMake] Installing 376/432 qtwebengine:x64-windows...
1> [CMake] Building qtwebengine[core,default-features,geolocation,spellchecker,webchannel]:x64-windows...
1> [CMake] -- Using cached pypa-get-pip-38e54e5de07c66e875c11a1ebbdb938854625dd8.tar.gz.
1> [CMake] -- Cleaning sources at E:/lib/vcpkg/buildtrees/qtwebengine/src/8854625dd8-861bd167bd.clean. Use --editable to skip cleaning for the packages you specify.
1> [CMake] -- Extracting source E:/lib/vcpkg/downloads/pypa-get-pip-38e54e5de07c66e875c11a1ebbdb938854625dd8.tar.gz
1> [CMake] -- Using source at E:/lib/vcpkg/buildtrees/qtwebengine/src/8854625dd8-861bd167bd.clean
1> [CMake] -- Setting up python virtual environmnent...
1> [CMake] -- Installing python packages: html5lib
1> [CMake] -- Setting up python virtual environmnent...finished.
1> [CMake] CMake Warning at ports/qtwebengine/portfile.cmake:85 (message):
1> [CMake] Buildtree path 'E:/lib/vcpkg/buildtrees/qtwebengine' is too long.
1> [CMake]
1> [CMake] Consider passing --x-buildtrees-root=<shortpath> to vcpkg!
1> [CMake]
1> [CMake] Trying to use 'E:/lib/vcpkg/buildtrees/qtwebengine/../tmp'
1> [CMake] Call Stack (most recent call first):
1> [CMake] scripts/ports.cmake:147 (include)
1> [CMake]
1> [CMake]
1> [CMake] CMake Error at ports/qtwebengine/portfile.cmake:90 (message):
1> [CMake] Buildtree path is too long. Build will fail! Pass
1> [CMake] --x-buildtrees-root=<shortpath> to vcpkg!
1> [CMake] Call Stack (most recent call first):
1> [CMake] scripts/ports.cmake:147 (include)
1> [CMake] error: building qtwebengine:x64-windows failed with: BUILD_FAILED
1> [CMake] error: Please ensure you're using the latest port files with `git pull` and `vcpkg update`.
Basically I need to set the --x-buildtrees-root=<shortpath> option when building the library with vcpkg. I can do it manually, but how can I make this option be passed automatically when I build the dependencies with Visual Studio? How do I update my configuration files?
A:
The variable VCPKG_INSTALL_OPTIONS is meant for passing further options to vcpkg install. So just set it in your preset.
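For example, in the hidden default preset it could look like this (a sketch; the short path E:/bt is an arbitrary choice, and --x-buildtrees-root is still an experimental vcpkg option):
"cacheVariables": {
    "CMAKE_TOOLCHAIN_FILE": "e:/lib/vcpkg/scripts/buildsystems/vcpkg.cmake",
    "VCPKG_DEFAULT_TRIPLET": "x64-windows",
    "VCPKG_INSTALL_OPTIONS": "--x-buildtrees-root=E:/bt",
    "CMAKE_EXPORT_COMPILE_COMMANDS": "TRUE"
}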
|
set vcpkg x-buildtrees-root option in manifest or in cmakepresets.json
|
I have a CMake project that uses vcpkg.json for vcpkg, and CMakePresets.json for setting the CMake options.
This is the vcpkg.json:
{
"name": "myproj",
"version": "1.0.0",
"dependencies": [
"boost",
"qt"
]
}
This is the CMakePresets.json:
{
"version": 3,
"cmakeMinimumRequired": {
"major": 3,
"minor": 22,
"patch": 1
},
"configurePresets": [
{
"name": "default",
"displayName": "Default Config",
"description": "Default config generator with ninja",
"generator": "Ninja",
"binaryDir": "${sourceDir}/build/${presetName}",
"hidden": true,
"cacheVariables": {
"CMAKE_TOOLCHAIN_FILE": "e:/lib/vcpkg/scripts/buildsystems/vcpkg.cmake",
"VCPKG_DEFAULT_TRIPLET": "x64-windows",
"CMAKE_EXPORT_COMPILE_COMMANDS": "TRUE"
},
"environment": {
}
},
{
"inherits": "default",
"name": "debug",
"displayName": "Debug",
"description": "Debug build.",
"cacheVariables": {
"CMAKE_BUILD_TYPE": "Debug"
}
},
{
"inherits": "default",
"name": "release",
"displayName": "Release",
"description": "Release build.",
"cacheVariables": {
"CMAKE_BUILD_TYPE": "Release"
}
}
],
"buildPresets": [
{
"name": "Debug",
"configurePreset": "debug"
},
{
"name": "Release",
"configurePreset": "release"
}
],
"testPresets": [
{
"name": "debugtest",
"configurePreset": "debug",
"output": {"outputOnFailure": true},
"execution": {"noTestsAction": "error", "stopOnFailure": true}
}
]
}
When I open the project folder with Visual Studio 2022, it starts to build the vcpkg libraries, and everything goes well until it builds qtwebengine, which returns an error:
1> [CMake] Installing 376/432 qtwebengine:x64-windows...
1> [CMake] Building qtwebengine[core,default-features,geolocation,spellchecker,webchannel]:x64-windows...
1> [CMake] -- Using cached pypa-get-pip-38e54e5de07c66e875c11a1ebbdb938854625dd8.tar.gz.
1> [CMake] -- Cleaning sources at E:/lib/vcpkg/buildtrees/qtwebengine/src/8854625dd8-861bd167bd.clean. Use --editable to skip cleaning for the packages you specify.
1> [CMake] -- Extracting source E:/lib/vcpkg/downloads/pypa-get-pip-38e54e5de07c66e875c11a1ebbdb938854625dd8.tar.gz
1> [CMake] -- Using source at E:/lib/vcpkg/buildtrees/qtwebengine/src/8854625dd8-861bd167bd.clean
1> [CMake] -- Setting up python virtual environmnent...
1> [CMake] -- Installing python packages: html5lib
1> [CMake] -- Setting up python virtual environmnent...finished.
1> [CMake] CMake Warning at ports/qtwebengine/portfile.cmake:85 (message):
1> [CMake] Buildtree path 'E:/lib/vcpkg/buildtrees/qtwebengine' is too long.
1> [CMake]
1> [CMake] Consider passing --x-buildtrees-root=<shortpath> to vcpkg!
1> [CMake]
1> [CMake] Trying to use 'E:/lib/vcpkg/buildtrees/qtwebengine/../tmp'
1> [CMake] Call Stack (most recent call first):
1> [CMake] scripts/ports.cmake:147 (include)
1> [CMake]
1> [CMake]
1> [CMake] CMake Error at ports/qtwebengine/portfile.cmake:90 (message):
1> [CMake] Buildtree path is too long. Build will fail! Pass
1> [CMake] --x-buildtrees-root=<shortpath> to vcpkg!
1> [CMake] Call Stack (most recent call first):
1> [CMake] scripts/ports.cmake:147 (include)
1> [CMake] error: building qtwebengine:x64-windows failed with: BUILD_FAILED
1> [CMake] error: Please ensure you're using the latest port files with `git pull` and `vcpkg update`.
Basically I need to set the --x-buildtrees-root=<shortpath> option when building the library with vcpkg. I can do it manually, but how can I make this option be passed automatically when I build the dependencies with Visual Studio? How do I update my configuration files?
|
[
"The variable VCPKG_INSTALL_OPTIONS is meant for passing further options to vcpkg install. So just set it in your preset.\n"
] |
[
1
] |
[] |
[] |
[
"c++",
"cmake",
"qt",
"qtwebengine",
"vcpkg"
] |
stackoverflow_0074656830_c++_cmake_qt_qtwebengine_vcpkg.txt
|
Q:
Should I make every method and instance variable in my code static?
I tried to write code that finds a Nash equilibrium in a given matrix.
I kept getting errors saying I can't call a non-static method from a static method, so I turned every method and instance variable static. Is that a problem?
There are tons of logic errors in my code and it gives the wrong answer. Could it be because everything is static, or is it only a logic error?
import java.util.ArrayList;
import java.util.Scanner;
public class Nash
{
public static String nes;
public static String str;
public static void main(String[] args)
{
Scanner scan = new Scanner(System.in);
System.out.println("Please enter the amount of strategies for each player");
int stratA = scan.nextInt();
int stratB = scan.nextInt();
String[][] utilities = new String[stratA][stratB];
System.out.println("Please enter the utilities");
for(int row = 0; row<stratA; row++)
for(int column = 0; column<stratB; column++)
utilities[row][column] = scan.next();
// Creates a 2D array with given utilities
if (nashExists(stratA, stratB, utilities) == true)
System.out.println(nes);
else
System.out.println("No NE found");
// Prints the results
}
public static boolean nashExists(int strA, int strB, String[][] util)
{
int[][] movesA = new int[strA][strB];
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
movesA[row][column] = Integer.parseInt(util[row][column].substring(0,1));
int[][] movesB = new int[strA][strB];
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
movesA[row][column] = Integer.parseInt(util[row][column].substring(2,3));
// Creates a 2d integer array for utilites of every strategy of A and B
ArrayList<String> aNE = new ArrayList<String>();
ArrayList<String> bNE = new ArrayList<String>();
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
if (nashExistsA(row, column, movesA) == true)
aNE.add((row+1) + "," + (column+1));
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
if (nashExistsB(row, column, movesB) == true)
bNE.add((row+1) + "," + (column+1));
// Checks if there are NE for one of players
if (compareArrayLists(aNE, bNE) == true)
return true;
else
return false;
}
// Checks if there are any matchs between both players NE's
public static boolean nashExistsA(int r, int c, int[][] a)
{
int max = a[r][c];
for (int i = 0; i<a.length; i++)
if (max < a[i][c])
max = a[i][c];
if (a[r][c] == max)
return true;
else
return false;
}
public static boolean nashExistsB(int r, int c, int[][] b)
{
int max = b[r][c];
for (int i = 0; i<b[0].length; i++)
if (max < b[r][i])
max = b[r][i];
if (b[r][c] == max)
return true;
else
return false;
}
public static boolean compareArrayLists(ArrayList<String> aN, ArrayList<String> bN)
{
for (int i=0; i<aN.size(); i++)
{
String potNE = aN.get(i);
if (bN.indexOf(potNE) >= 0)
str += "(" + potNE + ") ";
}
nes = str;
if (str.length()>0)
return true;
else
return false;
}
}
A:
Turning members (methods and fields) into static is the classic mistake that novices tend to make when learning their first object-oriented language.
Don't do this.
I have seen this happening twice in workplaces where we hired fresh college graduates who had practically no programming experience. Predictably, after struggling with it for a while, the colleague would come to one of the older guys asking for help, and the help invariably was "lose static everywhere".
The more things you turn into static, the less object-oriented you are; if you turn everything static, then you are not object-oriented at all; you might as well be programming in BASIC or in COBOL.
When you become more familiar with the language and you start doing more advanced stuff, you will discover legitimate uses for static, which are very rare. When you come across such a situation, you will know it. Until then, stick with the rule that says:
Generally, avoid static like the plague.
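A minimal sketch of the non-static direction (run is just an illustrative name, not from the original code): keep main static, since the JVM requires it, and immediately hand off to an instance:
public class Nash {
    private String nes; // instance state instead of static fields

    public static void main(String[] args) {
        new Nash().run(); // hop from the static entry point to an instance
    }

    private void run() {
        // instance methods can call each other freely, no 'static' needed
    }
}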
|
Should I make every method and instance variable in my code static?
|
I tried to write code that finds a Nash equilibrium in a given matrix.
I kept getting errors saying I can't call a non-static method from a static method, so I turned every method and instance variable static. Is that a problem?
There are tons of logic errors in my code and it gives the wrong answer. Could it be because everything is static, or is it only a logic error?
import java.util.ArrayList;
import java.util.Scanner;
public class Nash
{
public static String nes;
public static String str;
public static void main(String[] args)
{
Scanner scan = new Scanner(System.in);
System.out.println("Please enter the amount of strategies for each player");
int stratA = scan.nextInt();
int stratB = scan.nextInt();
String[][] utilities = new String[stratA][stratB];
System.out.println("Please enter the utilities");
for(int row = 0; row<stratA; row++)
for(int column = 0; column<stratB; column++)
utilities[row][column] = scan.next();
// Creates a 2D array with given utilities
if (nashExists(stratA, stratB, utilities) == true)
System.out.println(nes);
else
System.out.println("No NE found");
// Prints the results
}
public static boolean nashExists(int strA, int strB, String[][] util)
{
int[][] movesA = new int[strA][strB];
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
movesA[row][column] = Integer.parseInt(util[row][column].substring(0,1));
int[][] movesB = new int[strA][strB];
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
movesA[row][column] = Integer.parseInt(util[row][column].substring(2,3));
// Creates a 2d integer array for utilites of every strategy of A and B
ArrayList<String> aNE = new ArrayList<String>();
ArrayList<String> bNE = new ArrayList<String>();
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
if (nashExistsA(row, column, movesA) == true)
aNE.add((row+1) + "," + (column+1));
for(int row = 0; row<strA; row++)
for(int column = 0; column<strB; column++)
if (nashExistsB(row, column, movesB) == true)
bNE.add((row+1) + "," + (column+1));
// Checks if there are NE for one of players
if (compareArrayLists(aNE, bNE) == true)
return true;
else
return false;
}
// Checks if there are any matchs between both players NE's
public static boolean nashExistsA(int r, int c, int[][] a)
{
int max = a[r][c];
for (int i = 0; i<a.length; i++)
if (max < a[i][c])
max = a[i][c];
if (a[r][c] == max)
return true;
else
return false;
}
public static boolean nashExistsB(int r, int c, int[][] b)
{
int max = b[r][c];
for (int i = 0; i<b[0].length; i++)
if (max < b[r][i])
max = b[r][i];
if (b[r][c] == max)
return true;
else
return false;
}
public static boolean compareArrayLists(ArrayList<String> aN, ArrayList<String> bN)
{
for (int i=0; i<aN.size(); i++)
{
String potNE = aN.get(i);
if (bN.indexOf(potNE) >= 0)
str += "(" + potNE + ") ";
}
nes = str;
if (str.length()>0)
return true;
else
return false;
}
}
|
[
"Turning members (methods and fields) into static is the classic mistake that novices tend to make when learning their first object-oriented language.\nDon't do this.\nI have seen this happening twice in workplaces where we hired fresh college graduates who had practically no programming experience. Predictably, after struggling with it for a while, the colleague would come to one of the older guys asking for help, and the help invariably was \"lose static everywhere\".\nThe more things you turn into static, the less object-oriented you are; if you turn everything static, then you are not object-oriented at all; you might as well be programming in BASIC or in COBOL.\nWhen you become more familiar with the language and you start doing more advanced stuff, you will discover legitimate uses for static, which are very rare. When you come across such a situation, you will know it. Until then, stick with the rule that says:\n\nGenerally, avoid static like the plague.\n\n"
] |
[
2
] |
[] |
[] |
[
"java",
"static"
] |
stackoverflow_0074666153_java_static.txt
|
Q:
Python read in file: ERROR: line contains NULL byte
I would like to parse a .ubx file (my input file). This file contains many different NMEA sentences as well as raw receiver data. The output file should contain only information from GGA sentences. This works fine as long as the .ubx file does not contain any raw messages. However, if it contains raw data, I get the following error:
Traceback (most recent call last):
File "C:...myParser.py", line 25, in
for row in reader:
Error: line contains NULL byte
My code looks like this:
import csv
from datetime import datetime
import math

# adapt this to your file
INPUT_FILENAME = 'Rover.ubx'
OUTPUT_FILENAME = 'out2.csv'

# open the input file in read mode
with open(INPUT_FILENAME, 'r') as input_file:
    # open the output file in write mode
    with open(OUTPUT_FILENAME, 'wt') as output_file:
        # create a csv reader object from the input file (nmea files are basically csv)
        reader = csv.reader(input_file)
        # create a csv writer object for the output file
        writer = csv.writer(output_file, delimiter=',', lineterminator='\n')
        # write the header line to the csv file
        writer.writerow(['Time','Longitude','Latitude','Altitude','Quality','Number of Sat.','HDOP','Geoid seperation','diffAge'])
        # iterate over all the rows in the nmea file
        for row in reader:
            if row[0].startswith('$GNGGA'):
                time = row[1]
                # merge the time and date columns into one Python datetime object (usually more convenient than having both separately)
                date_and_time = datetime.strptime(time, '%H%M%S.%f')
                date_and_time = date_and_time.strftime('%H:%M:%S.%f')[:-6]
                writer.writerow([date_and_time])
My .ubx file looks like this:
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.30,0.70,1.10*10
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.30,0.70,1.10*16
$GPGSV,4,1,13,02,08,040,17,04,,,47,05,18,071,44,09,02,348,24*49
$GPGSV,4,2,13,12,03,118,24,16,12,298,36,20,15,118,30,21,44,179,51*74
$GPGSV,4,3,13,23,06,324,35,25,37,121,47,26,40,299,48,29,60,061,49*73
$GPGSV,4,4,13,31,52,239,51*42
$GLGSV,3,1,10,65,07,076,24,70,01,085,,71,04,342,34,72,13,029,35*64
$GLGSV,3,2,10,78,35,164,41,79,75,214,48,80,34,322,46,81,79,269,49*64
$GLGSV,3,3,10,82,28,235,52,88,39,043,43*6D
$GNGLL,4951.69412,N,00839.03672,E,124610.00,A,D*71
$GNGST,124610.00,12,,,,0.010,0.010,0.010*4B
$GNZDA,124610.00,03,07,2016,00,00*79
µb< ¸½¸Abð½ . SB éF é v.¥ # 1 f =•Iè ,
Ïÿÿ£Ëÿÿd¡ ¬M 0+ùÿÿ³øÿÿµj #ª ² -K*
,¨ , éºJU /) ++ f 5 .lG NL C8G /{; „> é óK 3 — Bòl . "¿ 2 bm¡
4âH ÐM X cRˆ 35 »7 Óo‡ž "*ßÿÿØÜÿÿUhQ`
3ŒðÿÿÂïÿÿþþûù ÂÈÿÿñÅÿÿJX ES
$²I uM N:w (YÃÿÿV¿ÿÿ> =ìî 1¥éÿÿèÿÿmk³m /?ÔÿÿÒÿÿšz+Ú Ïÿÿ6ÍÿÿêwÇ\ ? ]? ˜B Aÿƒ y µbÐD‹lçtæ@p3,}ßœŒ-vAh
¿M"A‚UE ôû JQý
'wA´üát¸jžAÀ‚"Å
)DÂï–ŽtAöÙüñÅ›A|$Å ôû/ Ìcd§ÇørA†áãì˜AØY–Ä ôû1 /Áƒ´zsAc5+_’ô™AìéNÅ ôû( ¶y(,wvAFøÈV§ƒA˜ÝwE ôû$ _S R‰wAhÙ]‘ÑëžAÇ9Å vwAòܧsAŒöƒd§Ò™AÜOÄ ôû3 kœÕ}vA;D.ž‡žAÒûàÄ @ˆ" ϬŸ ntAfˆÞ3ךA~Y2E ôû3 :GVtAæ93l)ÆšAß yE ôû4 Uþy.TwA<âƒ' ¦žAhmëC ôû" ¯4Çï ›wAþ‰Ì½6ŸAŠû¶D ~~xI]tA<ÞÿrÁšAmHE ôû/ ÖÆ@ÈgŸsAXnþ‚†4šA'0tE ôû. ·ÈO:’
sA¢B†i™Aë%
E ôû/ >Þ,À8vA°‚9êœA>ÇD ôû, ø(¼+çŠuAÆOÁ לAÈΆD
ôû# ¨Ä-_c¯qAuÓ?]> —AÐкà ôû0 ÆUV¨ØZsA]ðÛñß™AÛ'Å ôû, ™mv7žqAYÐ:›Ä‘—AdWxD ôû1 ûö>%vA}„
ëV˜A.êbE
AÝ$GNRMC,124611.00,A,4951.69413,N,00839.03672,E,0.009,,030716,,,D*62
$GNVTG,,T,,M,0.009,N,0.016,K,D*36
$GNGNS,124611.00,4951.69413,N,00839.03672,E,RR,15,0.70,162.5,47.6,1.0,0000*42
$GNGGA,124611.00,4951.69413,N,00839.03672,E,4,12,0.70,162.5,M,47.6,M,1.0,0000*6A
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.31,0.70,1.10*11
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.31,0.70,1.10*17
$GPGSV,4,1,13,02,08,040,18,04,,,47,05,18,071,44,09,02,348,21*43
$GPGSV,4,2,13,12,03,118,24,16,
I already searched for similar problems. However, I was not able to find a solution which works for me.
I ended up with code like that:
import csv

def unfussy_reader(csv_reader):
    while True:
        try:
            yield next(csv_reader)
        except csv.Error:
            # log the problem or whatever
            print("Problem with some row")
            continue

if __name__ == '__main__':
    #
    # Generate malformed csv file for
    # demonstration purposes
    #
    with open("temp.csv", "w") as fout:
        fout.write("abc,def\nghi\x00,klm\n123,456")
    #
    # Open the malformed file for reading, fire up a
    # conventional CSV reader over it, wrap that reader
    # in our "unfussy" generator and enumerate over that
    # generator.
    #
    with open("Rover.ubx") as fin:
        reader = unfussy_reader(csv.reader(fin))
        for n, row in enumerate(reader):
            fout.write(row[0])
However, I was not able to simply write a file containing all the rows read in with the unfussy_reader wrapper using the above code.
Would be glad if you could help me.
Here is an image of how the .ubx file looks in Notepad++.
Thanks!
A:
I am not quite sure, but your file looks binary. You should try to open it as such:
with open(INPUT_FILENAME, 'rb') as input_file:
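For example, a minimal sketch of that idea (the file name and sentence prefix are taken from the question; the decode step is an assumption about how to separate text lines from raw UBX frames):
with open('Rover.ubx', 'rb') as input_file:
    for raw in input_file:
        # raw UBX frames are binary; skip any line that is not plain ASCII text
        try:
            line = raw.decode('ascii').strip()
        except UnicodeDecodeError:
            continue
        if line.startswith('$GNGGA'):
            print(line)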
A:
It seems like you did not open the file with the correct encoding,
so the raw messages cannot be read correctly.
If it is encoded as UTF-8, you need to open the file with the encoding option:
with open(INPUT_FILENAME, 'r', newline='', encoding='utf8') as input_file
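If you want to stay with the csv module, another sketch (opening with errors='ignore' and stripping NUL characters are assumptions; the csv error in the question is literally about NUL bytes):
import csv

with open('Rover.ubx', 'r', errors='ignore') as input_file:
    # remove NUL characters before the csv module sees them
    # (csv.reader raises "line contains NULL byte" otherwise)
    cleaned = (line.replace('\0', '') for line in input_file)
    for row in csv.reader(cleaned):
        if row and row[0].startswith('$GNGGA'):
            print(row)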
A:
Hey, if anyone else has this problem reading NMEA sentences out of u-blox .ubx files, this Python code worked for me:
def read_in():
with open('GNGGA.txt', 'w') as GNGGA:
with open('GNRMC.txt','w') as GNRMC:
with open('rover.ubx', 'rb') as f:
for line in f:
#print line
if line.startswith('$GNGGA'):
#print line
GNGGA.write(line)
if line.startswith('$GNRMC'):
GNRMC.write(line)
read_in()
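Note that the snippet above opens the output files in text mode while reading rover.ubx in binary; on Python 3 the startswith() comparisons then need bytes literals and the output files need binary mode too. A sketch of the adjusted version (same file names as above):
def read_in():
    with open('GNGGA.txt', 'wb') as gngga, \
         open('GNRMC.txt', 'wb') as gnrmc, \
         open('rover.ubx', 'rb') as f:
        for line in f:
            # in binary mode each line is a bytes object, so compare with bytes
            if line.startswith(b'$GNGGA'):
                gngga.write(line)
            if line.startswith(b'$GNRMC'):
                gnrmc.write(line)

read_in()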
A:
You could also use the gnssdump command line utility which is installed with the PyGPSClient and pygnssutils Python packages.
e.g.
gnssdump filename=Rover.ubx msgfilter=GNGGA
See gnssdump -h for help.
Alternatively if you want a simple Python script you could use the pyubx2 Python package, e.g.
from pyubx2 import UBXReader
with open("Rover.ubx", "rb") as stream:
ubr = UBXReader(stream)
for (_, parsed_data) in ubr.iterate():
if parsed_data.identity in ("GNGGA", "GNRMC"):
print(parsed_data)
|
Python read in file: ERROR: line contains NULL byte
|
I would like to parse a .ubx file (= my input file). This file contains many different NMEA sentences as well as raw receiver data. The output file should contain only information from GGA sentences. This works fine as long as the .ubx file does not contain any raw messages. However, if it contains raw data
I get the following error:
Traceback (most recent call last):
File "C:...myParser.py", line 25, in
for row in reader:
Error: line contains NULL byte
My code looks like this:
import csv
from datetime import datetime
import math
# adapt this to your file
INPUT_FILENAME = 'Rover.ubx'
OUTPUT_FILENAME = 'out2.csv'
# open the input file in read mode
with open(INPUT_FILENAME, 'r') as input_file:
# open the output file in write mode
with open(OUTPUT_FILENAME, 'wt') as output_file:
# create a csv reader object from the input file (nmea files are basically csv)
reader = csv.reader(input_file)
# create a csv writer object for the output file
writer = csv.writer(output_file, delimiter=',', lineterminator='\n')
# write the header line to the csv file
writer.writerow(['Time','Longitude','Latitude','Altitude','Quality','Number of Sat.','HDOP','Geoid seperation','diffAge'])
# iterate over all the rows in the nmea file
for row in reader:
if row[0].startswith('$GNGGA'):
time = row[1]
# merge the time and date columns into one Python datetime object (usually more convenient than having both separately)
date_and_time = datetime.strptime(time, '%H%M%S.%f')
date_and_time = date_and_time.strftime('%H:%M:%S.%f')[:-6] #
writer.writerow([date_and_time])
My .ubx file looks like this:
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.30,0.70,1.10*10
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.30,0.70,1.10*16
$GPGSV,4,1,13,02,08,040,17,04,,,47,05,18,071,44,09,02,348,24*49
$GPGSV,4,2,13,12,03,118,24,16,12,298,36,20,15,118,30,21,44,179,51*74
$GPGSV,4,3,13,23,06,324,35,25,37,121,47,26,40,299,48,29,60,061,49*73
$GPGSV,4,4,13,31,52,239,51*42
$GLGSV,3,1,10,65,07,076,24,70,01,085,,71,04,342,34,72,13,029,35*64
$GLGSV,3,2,10,78,35,164,41,79,75,214,48,80,34,322,46,81,79,269,49*64
$GLGSV,3,3,10,82,28,235,52,88,39,043,43*6D
$GNGLL,4951.69412,N,00839.03672,E,124610.00,A,D*71
$GNGST,124610.00,12,,,,0.010,0.010,0.010*4B
$GNZDA,124610.00,03,07,2016,00,00*79
µb< ¸½¸Abð½ . SB éF é v.¥ # 1 f =•Iè ,
Ïÿÿ£Ëÿÿd¡ ¬M 0+ùÿÿ³øÿÿµj #ª ² -K*
,¨ , éºJU /) ++ f 5 .lG NL C8G /{; „> é óK 3 — Bòl . "¿ 2 bm¡
4âH ÐM X cRˆ 35 »7 Óo‡ž "*ßÿÿØÜÿÿUhQ`
3ŒðÿÿÂïÿÿþþûù ÂÈÿÿñÅÿÿJX ES
$²I uM N:w (YÃÿÿV¿ÿÿ> =ìî 1¥éÿÿèÿÿmk³m /?ÔÿÿÒÿÿšz+Ú Ïÿÿ6ÍÿÿêwÇ\ ? ]? ˜B Aÿƒ y µbÐD‹lçtæ@p3,}ßœŒ-vAh
¿M"A‚UE ôû JQý
'wA´üát¸jžAÀ‚"Å
)DÂï–ŽtAöÙüñÅ›A|$Å ôû/ Ìcd§ÇørA†áãì˜AØY–Ä ôû1 /Áƒ´zsAc5+_’ô™AìéNÅ ôû( ¶y(,wvAFøÈV§ƒA˜ÝwE ôû$ _S R‰wAhÙ]‘ÑëžAÇ9Å vwAòܧsAŒöƒd§Ò™AÜOÄ ôû3 kœÕ}vA;D.ž‡žAÒûàÄ @ˆ" ϬŸ ntAfˆÞ3ךA~Y2E ôû3 :GVtAæ93l)ÆšAß yE ôû4 Uþy.TwA<âƒ' ¦žAhmëC ôû" ¯4Çï ›wAþ‰Ì½6ŸAŠû¶D ~~xI]tA<ÞÿrÁšAmHE ôû/ ÖÆ@ÈgŸsAXnþ‚†4šA'0tE ôû. ·ÈO:’
sA¢B†i™Aë%
E ôû/ >Þ,À8vA°‚9êœA>ÇD ôû, ø(¼+çŠuAÆOÁ לAÈΆD
ôû# ¨Ä-_c¯qAuÓ?]> —AÐкà ôû0 ÆUV¨ØZsA]ðÛñß™AÛ'Å ôû, ™mv7žqAYÐ:›Ä‘—AdWxD ôû1 ûö>%vA}„
ëV˜A.êbE
AÝ$GNRMC,124611.00,A,4951.69413,N,00839.03672,E,0.009,,030716,,,D*62
$GNVTG,,T,,M,0.009,N,0.016,K,D*36
$GNGNS,124611.00,4951.69413,N,00839.03672,E,RR,15,0.70,162.5,47.6,1.0,0000*42
$GNGGA,124611.00,4951.69413,N,00839.03672,E,4,12,0.70,162.5,M,47.6,M,1.0,0000*6A
$GNGSA,A,3,16,25,29,20,31,26,05,21,,,,,1.31,0.70,1.10*11
$GNGSA,A,3,88,79,78,81,82,80,72,,,,,,1.31,0.70,1.10*17
$GPGSV,4,1,13,02,08,040,18,04,,,47,05,18,071,44,09,02,348,21*43
$GPGSV,4,2,13,12,03,118,24,16,
I already searched for similar problems; however, I was not able to find a solution that works for me.
I ended up with code like that:
import csv
def unfussy_reader(csv_reader):
while True:
try:
yield next(csv_reader)
except csv.Error:
# log the problem or whatever
print("Problem with some row")
continue
if __name__ == '__main__':
#
# Generate malformed csv file for
# demonstration purposes
#
with open("temp.csv", "w") as fout:
fout.write("abc,def\nghi\x00,klm\n123,456")
#
# Open the malformed file for reading, fire up a
# conventional CSV reader over it, wrap that reader
# in our "unfussy" generator and enumerate over that
# generator.
#
with open("Rover.ubx") as fin:
reader = unfussy_reader(csv.reader(fin))
for n, row in enumerate(reader):
fout.write(row[0])
However, I was not able to simply write a file containing all the rows read in with the unfussy_reader wrapper using the above code.
Would be glad if you could help me.
Here is an image of how the .ubx file looks in Notepad++.
Thanks!
|
[
"I am not quite sure but your file looks pretty binary. You should try to open it as such\nwith open(INPUT_FILENAME, 'rb') as input_file:\n\n",
"It seems like you did not open the file with correct coding format.\nSo the raw message cannot be read correctly.\nIf it is encoded as UTF8, you need to open the file with coding option:\nwith open(INPUT_FILENAME, 'r', newline='', encoding='utf8') as input_file\n\n",
"Hey if anyone else has this proglem to read in NMEA sentences of uBlox .ubx files\nthis pyhton code worked for me:\ndef read_in():\nwith open('GNGGA.txt', 'w') as GNGGA:\n with open('GNRMC.txt','w') as GNRMC:\n with open('rover.ubx', 'rb') as f:\n for line in f:\n #print line\n if line.startswith('$GNGGA'):\n #print line\n GNGGA.write(line)\n if line.startswith('$GNRMC'):\n GNRMC.write(line)\n\nread_in()\n",
"You could also use the gnssdump command line utility which is installed with the PyGPSClient and pygnssutils Python packages.\ne.g.\ngnssdump filename=Rover.ubx msgfilter=GNGGA\n\nSee gnssdump -h for help.\nAlternatively if you want a simple Python script you could use the pyubx2 Python package, e.g.\nfrom pyubx2 import UBXReader\n\nwith open(\"Rover.ubx\", \"rb\") as stream:\n\n ubr = UBXReader(stream)\n for (_, parsed_data) in ubr.iterate():\n if parsed_data.identity in (\"GNGGA\", \"GNRMC\"):\n print(parsed_data)\n\n"
] |
[
1,
0,
0,
0
] |
[] |
[] |
[
"nmea",
"parsing",
"python"
] |
stackoverflow_0038179492_nmea_parsing_python.txt
|
Q:
Save & load best model in AutoTS python
After fitting an AutoTS model on some time series data, how can I save and load the best trained model? The AutoTS object has export_template() and import_template() functions to save the best model, but loading the best model from this template requires re-fitting. How can such a solution be used in production? My code:
from autots import AutoTS
model = AutoTS(
frequency='infer',
prediction_interval=0.9,
ensemble=None,
model_list="fast", # "superfast", "default", "fast_parallel"
transformer_list="fast", # "superfast",
drop_most_recent=1,
max_generations=4,
num_validations=2,
validation_method="backwards")
model.fit(df_day,date_col='xyz',value_col='abc')
model.export_template("unique_user_1", models='best', n=1, max_per_model_class=3)
Now, in some new instance, when I do
model = model.import_template('unique_user_1.csv',method='only')
The model required retraining.
A:
The major issue with your code is that you have named your export template 'unique_user_1' without an extension. Try saving it as a csv file named 'unique_user_1.csv'.
Once you are done training your model, write the following lines:
model.export_template(
"unique_user_1.csv",
models="best",
max_per_model_class=1,
include_results=True,
)
To load the template & reuse it
model = model.import_template(
"unique_user_1.csv",
method="only",
enforce_model_list=True,)
model.fit(data)
prediction = model.predict(forecast_length=15)
|
Save & load best model in AutoTS python
|
After fitting an AutoTS model on some time series data, how can I save and load the best trained model? The AutoTS object has export_template() and import_template() functions to save the best model, but loading the best model from this template requires re-fitting. How can such a solution be used in production? My code:
from autots import AutoTS
model = AutoTS(
frequency='infer',
prediction_interval=0.9,
ensemble=None,
model_list="fast", # "superfast", "default", "fast_parallel"
transformer_list="fast", # "superfast",
drop_most_recent=1,
max_generations=4,
num_validations=2,
validation_method="backwards")
model.fit(df_day,date_col='xyz',value_col='abc')
model.export_template("unique_user_1", models='best', n=1, max_per_model_class=3)
Now, in some new instance, when I do
model = model.import_template('unique_user_1.csv',method='only')
The model required retraining.
|
[
"The major issue with your code is that you have named your export_template as 'unique_user_1' without an extension. Try saving it as csv file with 'unique_user_1.csv'\nOnce you feel you are done with training your model. Write the following lines\nmodel.export_template(\n\"unique_user_1.csv\",\nmodels=\"best\",\nmax_per_model_class=1,\ninclude_results=True,\n\n)\nTo load the template & reuse it\nmodel = model.import_template(\n\"unique_user_1.csv\",\nmethod=\"only\",\nenforce_model_list=True,)\nmodel.fit(data)\nprediction = model.predict(forecast_length=15)\n\n"
] |
[
0
] |
[] |
[] |
[
"data_science",
"forecasting",
"machine_learning",
"python",
"time_series"
] |
stackoverflow_0072123229_data_science_forecasting_machine_learning_python_time_series.txt
|
Q:
Json form schema use in react native
Is there any way I can use a JSON form schema with a custom renderer (using JSON schema to render React Native UI elements) in React Native?
I have seen a couple of React Native-specific packages that are inspired by JSON Forms, but couldn't get those working as per the requirement.
I am also looking to use it with the yup form validation package.
Thanks.
A:
I think you can try react-jsonschema-form and json-schema-form-for-react-native; you can also use the yup form validation package with these packages to validate the form data.
Example of react-jsonschema-form package usage with a custom renderer and yup validation:
import React from 'react';
import { View, TextInput } from 'react-native';
import { Form, Field } from 'react-jsonschema-form';
import * as yup from 'yup';
const schema = {
type: 'object',
properties: {
firstName: { type: 'string', title: 'First Name' },
lastName: { type: 'string', title: 'Last Name' },
}
};
const uiSchema = {
firstName: {
'ui:placeholder': 'Enter your first name',
},
lastName: {
'ui:placeholder': 'Enter your last name',
},
};
const CustomInput = ({ type, value, onChange }) => (
<TextInput
value={value}
onChangeText={text => onChange(text)}
/>
);
const CustomForm = () => (
<Form
schema={schema}
uiSchema={uiSchema}
validate={yup.object().shape(schema)}
FieldTemplate={CustomInput}
/>
);
Documentation: https://github.com/mozilla-services/react-jsonschema-form.
|
Json form schema use in react native
|
Is there any way I can use a JSON form schema with a custom renderer (using JSON schema to render React Native UI elements) in React Native?
I have seen a couple of React Native-specific packages that are inspired by JSON Forms, but couldn't get those working as per the requirement.
I am also looking to use it with the yup form validation package.
Thanks.
|
[
"I think you can try react-jsonschema-form and json-schema-form-for-react-native + yeah, you can also use the yup form validation package with these packages to validate the form data.\nExample of react-jsonschema-form package usage with a custom renderer and yup validation:\nimport React from 'react';\nimport { View } from 'react-native';\nimport { Form, Field } from 'react-jsonschema-form';\nimport * as yup from 'yup';\n\nconst schema = {\n type: 'object',\n properties: {\n firstName: { type: 'string', title: 'First Name' },\n lastName: { type: 'string', title: 'Last Name' },\n }\n};\n\nconst uiSchema = {\n firstName: {\n 'ui:placeholder': 'Enter your first name',\n },\n lastName: {\n 'ui:placeholder': 'Enter your last name',\n },\n};\n\nconst CustomInput = ({ type, value, onChange }) => (\n <TextInput\n value={value}\n onChangeText={text => onChange(text)}\n />\n);\n\nconst CustomForm = () => (\n <Form\n schema={schema}\n uiSchema={uiSchema}\n validate={yup.object().shape(schema)}\n FieldTemplate={CustomInput}\n />\n);\n\nDocumentation: https://github.com/mozilla-services/react-jsonschema-form.\n"
] |
[
0
] |
[] |
[] |
[
"forms",
"jsonforms",
"react_native"
] |
stackoverflow_0074529130_forms_jsonforms_react_native.txt
|
Q:
pandas apply subtractions on columns function when indexes are not equal, based on alignment in another columns
I have two dataframes:
df1 =
C0 C1. C2.
4 AB. 1. 2
5 AC. 7 8
6 AD. 9. 9
7 AE. 2. 6
8 AG 8. 9
df2 =
C0 C1. C2
8 AB 0. 1
9 AE. 6. 3
10 AD. 1. 2
I want to apply a subtraction between these two dataframes, such that when the value of column C0 is the same I get the subtraction, and when it is not, a bool column has the value False. Notice that the current indices are not aligned.
So new df1 should be:
df1 =
C0 C1. C2. diff_C1 match
4 AB. 1. 2. 1. True
5 AC. 7 8. 0. False
6 AD. 9. 9. 8. True
7 AE. 2. 6. -4. True
8 AG 8. 9. 0 False
What is the best way to do it?
A:
A possible solution, based on pandas.DataFrame.merge:
(df1.merge(df2.iloc[:,:-1], on='C0', suffixes=['', 'y'], how='left')
.rename({'C1.y': 'diff_C1'}, axis=1)
.assign(diff_C1 = lambda x: x['C1.'].sub(x['diff_C1']))
.assign(match = lambda x: x['diff_C1'].notna())
.fillna(0))
Output:
C0 C1. C2. diff_C1 match
0 AB. 1.0 2 1.0 True
1 AC. 7.0 8 0.0 False
2 AD. 9.0 9 8.0 True
3 AE. 2.0 6 -4.0 True
4 AG. 8.0 9 0.0 False
A:
You can try merging the dataframes using pandas.DataFrame.merge on column C0 with how='left', then flag the matches, as shown below:
df = (df1.merge(df2, how='left', on='C0')
         .assign(match=lambda x: x['C1_y'].notna())
         .fillna(0))

Then subtract the C1 columns, i.e. C1_x and C1_y:
df['C1_diff'] = df['C1_x'] - df['C1_y']
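As a variation, merge's indicator flag can produce the match column directly. A sketch with the question's data (C0 values lightly normalized, since the post mixes 'AB' and 'AB.'):
import pandas as pd

df1 = pd.DataFrame({'C0': ['AB.', 'AC.', 'AD.', 'AE.', 'AG'],
                    'C1.': [1, 7, 9, 2, 8]})
df2 = pd.DataFrame({'C0': ['AB.', 'AE.', 'AD.'],
                    'C1.': [0, 6, 1]})

merged = df1.merge(df2, on='C0', how='left', suffixes=('', '_y'), indicator=True)
merged['match'] = merged['_merge'].eq('both')            # True where C0 matched
merged['diff_C1'] = (merged['C1.'] - merged['C1._y']).fillna(0)
merged = merged.drop(columns=['C1._y', '_merge'])
print(merged)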
|
pandas apply subtractions on columns function when indexes are not equal, based on alignment in another columns
|
I have two dataframes:
df1 =
C0 C1. C2.
4 AB. 1. 2
5 AC. 7 8
6 AD. 9. 9
7 AE. 2. 6
8 AG 8. 9
df2 =
C0 C1. C2
8 AB 0. 1
9 AE. 6. 3
10 AD. 1. 2
I want to apply a subtraction between these two dataframes, such that when the value of column C0 is the same I get the subtraction, and when it is not, a bool column has the value False. Notice that the current indices are not aligned.
So new df1 should be:
df1 =
C0 C1. C2. diff_C1 match
4 AB. 1. 2. 1. True
5 AC. 7 8. 0. False
6 AD. 9. 9. 8. True
7 AE. 2. 6. -4. True
8 AG 8. 9. 0 False
What is the best way to do it?
|
[
"A possible solution, based on pandas.DataFrame.merge:\n(df1.merge(df2.iloc[:,:-1], on='C0', suffixes=['', 'y'], how='left')\n .rename({'C1.y': 'diff_C1'}, axis=1)\n .assign(diff_C1 = lambda x: x['C1.'].sub(x['diff_C1']))\n .assign(match = lambda x: x['diff_C1'].notna())\n .fillna(0))\n\nOutput:\n C0 C1. C2. diff_C1 match\n0 AB. 1.0 2 1.0 True\n1 AC. 7.0 8 0.0 False\n2 AD. 9.0 9 8.0 True\n3 AE. 2.0 6 -4.0 True\n4 AG. 8.0 9 0.0 False\n\n",
"You can try merging the columns using pandas.DataFrame.merge on column C0 and how as left as shown below\ndf1.merge(df2, how='left', on='C0')\n .assign(match=lambda x: x['C1_y'].notna())\n .fillna(0)\n\nOutput:\n\nthen subtract the C1 columns i.e. C1_x and C1_y\ndf['C1_diff'] = df['C1_x'] - df['C1_y']\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"data_munging",
"data_science",
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074666280_data_munging_data_science_dataframe_pandas_python.txt
|
Q:
ImproperlyConfigured AUTH_USER_MODEL refers to model 'core.User' that has not been installed
I am calling this method in my core app - models.py,
from django.contrib.auth import get_user_model
User = get_user_model()
I am getting error,
Exception has occurred: ImproperlyConfigured (note: full exception trace is shown but execution is paused at: <module>)
AUTH_USER_MODEL refers to model 'core.User' that has not been installed
debugger points to this line
A:
I found the problem: I had pasted the following code at module level inside the core app's models.py, where it runs before the app registry is ready:
User = get_user_model()
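For reference, a minimal sketch of the usual workaround: reference the user model lazily instead of resolving it at import time (the Profile model here is just a hypothetical example):
from django.conf import settings
from django.db import models

class Profile(models.Model):
    # settings.AUTH_USER_MODEL is a string, resolved only after the
    # app registry is fully populated, so it is safe at import time
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)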
|
ImproperlyConfigured AUTH_USER_MODEL refers to model 'core.User' that has not been installed
|
I am calling this method in my core app - models.py,
from django.contrib.auth import get_user_model
User = get_user_model()
I am getting error,
Exception has occurred: ImproperlyConfigured (note: full exception trace is shown but execution is paused at: <module>)
AUTH_USER_MODEL refers to model 'core.User' that has not been installed
debugger points to this line
|
[
"I found the problem,\nUser = get_user_model()\n\nI had pasted follwing code inside the core app models.py\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_models",
"python",
"python_3.x"
] |
stackoverflow_0074666310_django_django_models_python_python_3.x.txt
|
Q:
Match with Django import_export with multiple fields
I would like to import a CSV in Django. The issue occurs when trying to import based on the attributes. Here is my code:
class Event(models.Model):
id = models.BigAutoField(primary_key=True)
amount = models.ForeignKey(Amount, on_delete=models.CASCADE)
value = models.FloatField()
space = models.ForeignKey(Space, on_delete=models.RESTRICT)
time = models.ForeignKey(Time, on_delete=models.RESTRICT)
class Meta:
managed = True
db_table = "event"
class Space(models.Model):
objects = SpaceManager()
id = models.BigAutoField(primary_key=True)
code = models.CharField(max_length=100)
type = models.ForeignKey(SpaceType, on_delete=models.RESTRICT)
space_date = models.DateField(blank=True, null=True)
def natural_key(self):
return self.code # + self.type + self.source_date
def __str__(self):
return f"{self.name}"
class Meta:
managed = True
db_table = "space"
class Time(models.Model):
objects = TimeManager()
id = models.BigAutoField(primary_key=True)
type = models.ForeignKey(TimeType, on_delete=models.RESTRICT)
startdate = models.DateTimeField()
enddate = models.DateTimeField()
def natural_key(self):
return self.name
def __str__(self):
return f"{self.name}"
class Meta:
managed = True
db_table = "time"
Now, I create the resource that should find the right objects, but it seems it does not enter into ForeignKeyWidget(s) at all:
class AmountForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row=None, **kwargs):
logger.critical("<<<<< {AmountForeignKeyWidget} <<<<<<<")
name_upper = value.upper()
amount = Amount.objects.get_by_natural_key(name=name_upper)
return amount
class SpaceForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
logger.critical("<<<<< {SpaceForeignKeyWidget} <<<<<<<")
space_code = row["space_code"]
space_type = SpatialDimensionType.objects.get_by_natural_key(row["space_type"])
try:
space_date = datetime.strptime(row["space_date"], "%Y%m%d")
except ValueError:
space_date = None
space = Space.objects.get(
code=space_code, type=space_type, source_date=space_date
)
return space
class TimeForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
logger.critical("<<<<< {TimeForeignKeyWidget} <<<<<<<")
time_type = TimeType.objects.get_by_natural_key(row["time_type"])
time_date = parse_datetime(row["time_date"])
time, _ = Time.objects.get_or_create(
type=time_type, startdate=time_date, defaults={...}
)
return time
class EventResource(ModelResource):
amount = Field(
column_name="amount",
attribute="amount",
widget=AmountForeignKeyWidget(Amount),
)
space = Field(
# column_name="space_code",
attribute="space",
widget=SpaceForeignKeyWidget(Space),
)
time = Field(
attribute="time",
widget=TimeForeignKeyWidget(Time),
)
def before_import_row(self, row, row_number=None, **kwargs):
logger.error(f">>>> before_import_row() >>>>>>")
time_date = datetime.strptime(row["time_date"], "%Y%m%d").date()
time_type = TimeType.objects.get_by_natural_key(row["time_type"])
Time.objects.get_or_create(
type=time_type, startdate=time_date,
defaults={
"name": str(time_type) + str(time_date),
"type": time_type,
"startdate": time_date,
"enddate": time_date + timedelta(days=1),
},
)
class Meta:
model = Event
I added some loggers, but only the log in AmountForeignKeyWidget gets printed. The main question is: how do I search for objects in Space by the attributes (space_code, space_type, space_date), and in Time search and create by (time_date, time_type)?
A lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used.
A:
The main question is: How to search for objects in Space by attributes (space_code,space_type,space_date) and in Time search and create by (time_date,time_type)
It looks like you are searching for these objects correctly, but it might not be being called. Often with import-export you will save yourself a lot of time if you set up your debugger and step through the code.
It could be that there isn't a 'space' or a 'time' column in your source csv. If there are no such fields, then the import process will silently skip this declaration. If you need to create objects if they don't exist, it's probably best to use before_import_row() for this, as you do in your example. Ensure that you use get_or_create() so that re-runs of the import are handled correctly.
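For completeness, a minimal sketch of that pattern for the Space relation too (model, manager and column names taken from the question; the defaults dict is a hypothetical placeholder):
def before_import_row(self, row, row_number=None, **kwargs):
    space_type = SpatialDimensionType.objects.get_by_natural_key(row["space_type"])
    # get_or_create keeps re-runs of the same import idempotent
    Space.objects.get_or_create(
        code=row["space_code"],
        type=space_type,
        defaults={"space_date": None},
    )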
Update
I believe the use case you have is that you need to link relations (Time, Space) to an Event instance during import, but there is no single field which identifies the relations. Instead, they are defined by a combination of fields.
This use case can be handled by import-export but it requires overriding the correct functions. We need to create relations if they don't exist, and then link the created relation instances to the model instance. Therefore we need to find a method in the code base which takes both the instance and the row as params. Unfortunately this is not as well defined as it could be in the code base (before_save_instance() would be a good candidate), but there is a method called import_obj() which we can use.
def import_obj(self, obj, data, dry_run, **kwargs):
    # 'obj' is the object instance
    # 'data' is the row data
    # go ahead and create the relation objects
    time_type = TimeType.objects.get_by_natural_key(data["time_type"])
    time_date = parse_datetime(data["time_date"])
    # get_or_create returns an (instance, created) tuple, so unpack it
    obj.time, _ = Time.objects.get_or_create(
        type=time_type, startdate=time_date, defaults={...}
    )
    # other relation creations omitted...
    super().import_obj(obj, data, dry_run, **kwargs)
A lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used?
As above, if there is no 'space' or 'time' column in the source data, then they will never be called.
It shouldn't make a difference but your clean() method declaration does not define row as a kwarg in SpaceForeignKeyWidget and TimeForeignKeyWidget. Change the clean() definition to:
def clean(self, value, row=None, **kwargs):
# your implementation here
I can't see that this will fix it but maybe when running in your context it is an issue.
Note that there are some changes you can make to improve your code.
For AmountForeignKeyWidget, if you only need to look up by one value, you can change your resource declaration to this:
class EventResource(ModelResource):
amount = Field(
column_name="amount",
attribute="amount",
widget=ForeignKeyWidget(Amount, field="name__iexact"),
)
You don't need any extra logic, and the lookup will be case-insensitive.
A:
I managed to solve all the issues and make proper imports. Following is the code I used:
class EventResource(ModelResource):
amount = Field(
column_name="amount",
attribute="amount",
widget=ForeignKeyWidget(Amount, field="name__iexact"),
)
space_code = Field(
attribute="space",
widget=SpaceForeignKeyWidget(Space),
)
time_date = Field(
attribute="time",
widget=TimeForeignKeyWidget(Time),
)
class Meta:
model = Event
For the amount field I don't need to make a derived widget, since it uses only one variable in the CSV. For the two others, the implementation follows. I noticed that the widgets for the two other variables were not called, and the reason was that the field names did not exist as columns in my CSV file. Once I renamed them to column names that exist in the CSV, they were called.
class SpaceForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
space_code = row["spacial_code"]
space_type = SpaceDimensionType.objects.get(type=row["space_type"])
try:
space_date = datetime.strptime(row["space_date"], "%Y%m%d")
except ValueError:
space_date = None
space = SpaceDimension.objects.get(
code=space_code, type=space_type, source_date=space_date
)
return space
class TimeForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
time_type = TimeDimensionType.objects.get(type=row["time_type"])
delta = T_TYPES[time_type]
start_date = datetime.strptime(row["time_date"], "%Y%m%d").date()
end_date = start_date + timedelta(days=delta)
time, created = TimeDimension.objects.get_or_create(
type=time_type,
startdate=start_date,
enddate=start_date + timedelta(days=delta),
defaults={
"name": f"{time_type}: {start_date}-{end_date}",
"type": time_type,
"startdate": start_date,
"enddate": end_date,
},
)
return time
SpaceForeignKeyWidget only checks whether the record exists and returns the object, while TimeForeignKeyWidget creates the record if it does not exist and returns it. This way there is no need to use before_import_row(), and all the logic is localized to these two widgets.
|
Match with Django import_export with multiple fields
|
I would like to import a CSV in Django. The issue occurs when trying to import based on the attributes. Here is my code:
class Event(models.Model):
id = models.BigAutoField(primary_key=True)
amount = models.ForeignKey(Amount, on_delete=models.CASCADE)
value = models.FloatField()
space = models.ForeignKey(Space, on_delete=models.RESTRICT)
time = models.ForeignKey(Time, on_delete=models.RESTRICT)
class Meta:
managed = True
db_table = "event"
class Space(models.Model):
objects = SpaceManager()
id = models.BigAutoField(primary_key=True)
code = models.CharField(max_length=100)
type = models.ForeignKey(SpaceType, on_delete=models.RESTRICT)
space_date = models.DateField(blank=True, null=True)
def natural_key(self):
return self.code # + self.type + self.source_date
def __str__(self):
return f"{self.name}"
class Meta:
managed = True
db_table = "space"
class Time(models.Model):
objects = TimeManager()
id = models.BigAutoField(primary_key=True)
type = models.ForeignKey(TimeType, on_delete=models.RESTRICT)
startdate = models.DateTimeField()
enddate = models.DateTimeField()
def natural_key(self):
return self.name
def __str__(self):
return f"{self.name}"
class Meta:
managed = True
db_table = "time"
Now, I create the resource that should find the right objects, but it seems it does not enter into ForeignKeyWidget(s) at all:
class AmountForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row=None, **kwargs):
logger.critical("<<<<< {AmountForeignKeyWidget} <<<<<<<")
name_upper = value.upper()
amount = Amount.objects.get_by_natural_key(name=name_upper)
return amount
class SpaceForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
logger.critical("<<<<< {SpaceForeignKeyWidget} <<<<<<<")
space_code = row["space_code"]
space_type = SpatialDimensionType.objects.get_by_natural_key(row["space_type"])
try:
space_date = datetime.strptime(row["space_date"], "%Y%m%d")
except ValueError:
space_date = None
space = Space.objects.get(
code=space_code, type=space_type, source_date=space_date
)
return space
class TimeForeignKeyWidget(ForeignKeyWidget):
def clean(self, value, row, **kwargs):
logger.critical("<<<<< {TimeForeignKeyWidget} <<<<<<<")
time_type = TimeType.objects.get_by_natural_key(row["time_type"])
time_date = parse_datetime(row["time_date"])
time, _ = Time.objects.get_or_create(
type=time_type, startdate=time_date, defaults={...}
)
return time
class EventResource(ModelResource):
amount = Field(
column_name="amount",
attribute="amount",
widget=AmountForeignKeyWidget(Amount),
)
space = Field(
# column_name="space_code",
attribute="space",
widget=SpaceForeignKeyWidget(Space),
)
time = Field(
attribute="time",
widget=TimeForeignKeyWidget(Time),
)
def before_import_row(self, row, row_number=None, **kwargs):
logger.error(f">>>> before_import_row() >>>>>>")
time_date = datetime.strptime(row["time_date"], "%Y%m%d").date()
time_type = TimeType.objects.get_by_natural_key(row["time_type"])
Time.objects.get_or_create(
type=time_type, startdate=time_date,
defaults={
"name": str(time_type) + str(time_date),
"type": time_type,
"startdate": time_date,
"enddate": time_date + timedelta(days=1),
},
)
class Meta:
model = Event
I added some loggers, but only the log in AmountForeignKeyWidget gets printed. The main question is: how do I search for objects in Space by the attributes (space_code, space_type, space_date), and in Time search and create by (time_date, time_type)?
A lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used.
|
[
"\nThe main question is: How to search for objects in Space by attributes (space_code,space_type,space_date) and in Time search and create by (time_date,time_type)\n\nIt looks like you are searching for these objects correctly, but it might not be being called. Often with import-export you will save yourself a lot of time if you setup your debugger and step through the code.\nIt could be that there isn't a 'space' or a 'time' column in your source csv. If there are no such fields, then the import process will silently skip this declaration. If you need to create objects if they don't exist, it's probably best to use before_import_row() for this, as you do in your example. Ensure that you use get_or_create() so that re-runs of the import are handled correctly.\nUpdate\nI believe the use case you have is that you need to link relations (Time, Space) to an Event instance during import, but there is no single field which identifies the relations. Instead, they are defined by a combination of fields.\nThis use case can be handled by import-export but it requires overriding the correct functions. We need to create relations if they don't exist, and then link the created relation instances to the model instance. Therefore we need to find a method in the code base which takes both the instance and the row as params. Unfortunately this is not as well defined as it could be in the code base (before_save_instance() would be a good candidate), but there is an method called import_obj() which we can use.\ndef import_obj(self, obj, data, dry_run, **kwargs):\n # 'obj' is the object instance\n # 'data' is the row data\n # go ahead and create the relation objects\n time_type = TimeType.objects.get_by_natural_key(row[\"time_type\"])\n time_date = parse_datetime(row[\"time_date\"])\n obj.time = Time.objects.get_or_create(\n type=time_type, startdate=time_date), defaults={...}\n )\n # other relation creations omitted...\n super().import_obj(obj, data, dry_run, **kwargs)\n\n\nA lesser question is why SpaceForeignKeyWidget and TimeForeignKeyWidget are not used?\n\nAs above, if there is no 'space' or 'time' column in the source data, then they will never be called.\nIt shouldn't make a difference but your clean() method declaration does not define row as a kwarg in SpaceForeignKeyWidget and TimeForeignKeyWidget. Change the clean() definition to:\ndef clean(self, value, row=None, **kwargs):\n # your implementation here\n\nI can't see that this will fix it but maybe when running in your context it is an issue.\nNote that there are some changes you can make to improve your code.\nFor AmountForeignKeyWidget, if you only need to look up by one value, you can change your resource declaration to this:\nclass EventResource(ModelResource):\n amount = Field(\n column_name=\"amount\",\n attribute=\"amount\",\n widget=ForeignKeyWidget(Amount, field=\"name__iexact\"),\n )\n\nYou don't need any extra logic, and the lookup will be case-insensitive.\n",
"I managed to solve all the issues and make proper imports. Following is the code I used:\nclass EventResource(ModelResource):\n amount = Field(\n column_name=\"amount\",\n attribute=\"amount\",\n widget=ForeignKeyWidget(Amount, field=\"name__iexact\"),\n )\n space_code = Field(\n attribute=\"space\",\n widget=SpaceForeignKeyWidget(Space),\n )\n time_date = Field(\n attribute=\"time\",\n widget=TimeForeignKeyWidget(Time),\n )\n\n class Meta:\n model = Event\n\nFor the amount field I don't need to make a derivative Widget, since it is using only one variable in CSV. For the two others, implementation follows. I noticed that the widgets for the two other variables were not called and the reason is the variable names were non-existent in my CSV file. When I renamed them to the column names existing in the CSV they have been called.\nclass SpaceForeignKeyWidget(ForeignKeyWidget):\n def clean(self, value, row, **kwargs):\n space_code = row[\"spacial_code\"]\n space_type = SpaceDimensionType.objects.get(type=row[\"space_type\"])\n try:\n space_date = datetime.strptime(row[\"space_date\"], \"%Y%m%d\")\n except ValueError:\n space_date = None\n\n space = SpaceDimension.objects.get(\n code=space_code, type=space_type, source_date=space_date\n )\n return space\n\n\nclass TimeForeignKeyWidget(ForeignKeyWidget):\n def clean(self, value, row, **kwargs):\n time_type = TimeDimensionType.objects.get(type=row[\"time_type\"])\n delta = T_TYPES[time_type]\n\n start_date = datetime.strptime(row[\"time_date\"], \"%Y%m%d\").date()\n end_date = start_date + timedelta(days=delta)\n time, created = TimeDimension.objects.get_or_create(\n type=time_type,\n startdate=start_date,\n enddate=start_date + timedelta(days=delta),\n defaults={\n \"name\": f\"{time_type}: {start_date}-{end_date}\",\n \"type\": time_type,\n \"startdate\": start_date,\n \"enddate\": end_date,\n },\n )\n return temporal\n\n\nSpaceForeignKeyWidget only searches it the record is existing and returns the object and TimeForeignKeyWidget creates if non-existing and returns the record. This way no need to use before_import_row() and all the logic is localized to this two widgets.\n"
] |
[
1,
0
] |
[] |
[] |
[
"django",
"django_import_export",
"python"
] |
stackoverflow_0074647054_django_django_import_export_python.txt
|
Q:
Use cases of NoLoggers
I have noticed that a logger that logs nothing but only suppresses logs (NoLogger) actually exists, but I am not aware of its use cases (except maybe unit tests).
Do you have any experience regarding NoLoggers?
Many thanks in advance.
Understanding NoLoggers
A:
The use case is pretty simple: If you don't want any log to be created/written - that's basically it.
When do you want this? Basically, if you want to avoid any I/O-operation or to keep your server's console clear. Or simply not to log anything on the customer's system since it might expose internal stuff.
That's just to name a few examples.
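As an illustration, Python's standard library ships a no-op handler for exactly the library use case; a minimal sketch (the logger name "mylib" is arbitrary):
import logging

# library code attaches a no-op handler so importing the library never
# emits log output (or "no handler" warnings) unless the app opts in
logging.getLogger("mylib").addHandler(logging.NullHandler())

log = logging.getLogger("mylib")
log.info("this goes nowhere unless the application configures logging")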
|
Use cases of NoLoggers
|
I have noticed that a logger that logs nothing but only suppresses logs (NoLogger) actually exists, but I am not aware of its use cases (except maybe unit tests).
Do you have any experience regarding NoLoggers?
Many thanks in advance.
Understanding NoLoggers
|
[
"The use case is pretty simple: If you don't want any log to be created/written - that's basically it.\nWhen do you want this? Basically, if you want to avoid any I/O-operation or to keep your server's console clear. Or simply not to log anything on the customer's system since it might expose internal stuff.\nThat's just to name a few examples.\n"
] |
[
0
] |
[] |
[] |
[
"logging"
] |
stackoverflow_0074666462_logging.txt
|
Q:
Disable "explicit" import suggestions in VSCode
Is there an option to disable "explicit" import suggestions in VSCode, as shown in the picture below? I find these visually displeasing because they add extra unnecessary lines. These are Haskell imports.
A:
Problem solved per answer by the latest OpenAI ChatGPT bot. Incredible.
Yes, there is an option to disable explicit import suggestions in
Visual Studio Code for Haskell code. To do this, you can follow these
steps:
Open the Visual Studio Code settings by clicking on the gear icon in
the bottom left corner of the editor, or by pressing Ctrl + , on your
keyboard.
In the settings window, search for "Haskell" in the search box, and
click on the "Haskell" option that appears in the search results.
In the Haskell settings, scroll down until you see the "Imports"
section, and then uncheck the box next to "Suggest explicit import
module names".
Close the settings window, and the explicit import suggestions should
no longer appear in your Haskell code.
|
Disable "explicit" import suggestions in VSCode
|
Is there an option to disable "explicit" import suggestions in VSCode, as shown in the picture below? I find these visually displeasing because they add extra unnecessary lines. These are Haskell imports.
|
[
"Problem solved per answer by the latest OpenAI ChatGPT bot. Incredible.\n\nYes, there is an option to disable explicit import suggestions in\nVisual Studio Code for Haskell code. To do this, you can follow these\nsteps:\nOpen the Visual Studio Code settings by clicking on the gear icon in\nthe bottom left corner of the editor, or by pressing Ctrl + , on your\nkeyboard.\nIn the settings window, search for \"Haskell\" in the search box, and\nclick on the \"Haskell\" option that appears in the search results.\nIn the Haskell settings, scroll down until you see the \"Imports\"\nsection, and then uncheck the box next to \"Suggest explicit import\nmodule names\".\nClose the settings window, and the explicit import suggestions should\nno longer appear in your Haskell code.\n\n"
] |
[
0
] |
[] |
[] |
[
"visual_studio_code"
] |
stackoverflow_0073613408_visual_studio_code.txt
|
Q:
Coding Style - Clean Architecture and Helpers
I'm learning golang and decided to give it a go (pun mildly intended) with a side project that I had in mind for some time (though question is probably language agnostic).
I decided to learn more about the clean architecture along the way because I learnt the hard way about having a bad architecture in the current project I'm dealing with in my day job.
Here is a simplified layout of relevant part of my project:
.
└── app
├── error_codes
│ └── error_codes.go
├── interfaces
│ └── interfaces.go
├── models
│ └── repo
│ └── repo.go
├── managers
│ └── vcs_managers
│ └── git_manager
│ └── git_manager.go
└── helpers
└── helpers.go
Now error_codes, interfaces and models are all in the innermost circle of the architecture.
Here is where I'm getting confused. On my Repo model, I have some functions that do stuff, and doing that stuff requires some helper methods from the helpers package. But as per the requirements of clean architecture, I cannot refer to the helpers package from the Repo model, because of the direction of the dependency.
I feel like I'm missing something really fundamental. What am I missing in this picture?
|
Coding Style - Clean Architecture and Helpers
|
I'm learning golang and decided to give it a go (pun mildly intended) with a side project that I had in mind for some time (though question is probably language agnostic).
I decided to learn more about the clean architecture along the way because I learnt the hard way about having a bad architecture in the current project I'm dealing with in my day job.
Here is a simplified layout of relevant part of my project:
.
└── app
├── error_codes
│ └── error_codes.go
├── interfaces
│ └── interfaces.go
├── models
│ └── repo
│ └── repo.go
├── managers
│ └── vcs_managers
│ └── git_manager
│ └── git_manager.go
└── helpers
└── helpers.go
Now error_codes, interfaces and models are all in the innermost circle of the architecture.
Here is where I'm getting confused. On my Repo model, I have some functions that do stuff, and doing that stuff requires some helper methods from the helpers package. But as per the requirements of clean architecture, I cannot refer to the helpers package from the Repo model, because of the direction of the dependency.
I feel like I'm missing something really fundamental. What am I missing in this picture?
|
[] |
[] |
[
"The clean architecture states that the innermost layer, which in your example is the \"models\" package, should not have any dependencies on the outer layers. This means that your Repo model should not be able to import or use any helpers or functions from the \"helpers\" package.\nOne way to solve this issue is to move the helper methods that are required by your Repo model into a new package that is closer to the innermost layer. For example, you could create a \"repo_helpers\" package inside the \"models\" package and move the helper methods there. Your Repo model can then import and use these helpers without breaking the clean architecture.\nAlternatively, you could also consider refactoring your Repo model so that it does not depend on any external helper methods, and instead contains all the necessary logic to perform the desired actions. This would allow you to keep the \"helpers\" package in its current location, but it may require more work to implement.\n"
] |
[
-1
] |
[
"clean_architecture"
] |
stackoverflow_0074666490_clean_architecture.txt
|
Q:
Flutter Hive: type 'List' is not a subtype of type 'List?' in type cast
Following problem:
I have Hive in my project and I save lists of objects there. When I store something while using the app and want to get the data from Hive (still the same session), everything is fine and I get the data I previously stored in Hive. When I look in my documents folder, there is also a .hive file where my data is stored. But after I close the app and want to get the data from Hive, it tells me:
" type 'Unhandled exception:
type 'List<dynamic>' is not a subtype of type 'List<Bookingday>?' in type cast
#0 BoxImpl.get (package:hive/src/box/box_impl.dart:44:26)
#1 BookingDAO.Eval ()
#2 BookingDAO.getStoredWeek (package:workplace/utils/booking_dao.dart:23:36)
#3 _ReservationsState.initState (package:workplace/pages/reservations.dart:44:30)
I can't understand such behaviour. Why does it work well when I store and get the data in the same session, but after restarting the app it says the list is of type dynamic?
Can it have something to do with how I open and close Hive?
my Method:
Box<List<Bookingday>> boxList = Hive.box<List<Bookingday>>(bookingDayBoxName);
List<Bookingday> getStoredWeek(DateTime firstJan, DateTime date) {
String key = getCalenderWeek(firstJan, date);
try {
List<Bookingday>? bookList = boxList.get(key);
if (bookList != null) {
bookingdays = bookList;
return bookList;
} else {
return List.generate(
getWeek(dateNow).length,
(index) => Bookingday(
day: dateNow,
parkingSlotReserved: false,
capacityCounter: 0,
maxCapacity: 4));
}
} catch (e) {
if (e is TypeError) {}
}
return bookingdays;
}
A:
Try to cast
Box<List<Bookingday>> boxList = Hive.box<List<Bookingday>>(bookingDayBoxName);
List<Bookingday> getStoredWeek(DateTime firstJan, DateTime date) {
String key = getCalenderWeek(firstJan, date);
try {
List<Bookingday>? bookList = boxList.get(key);
if (bookList != null) {
bookingdays = bookList!;
return bookList;
} else {
return List.generate(
getWeek(dateNow).length,
(index) => Bookingday(
day: dateNow,
parkingSlotReserved: false,
capacityCounter: 0,
maxCapacity: 4)).cast<Bookingday>();
}
} catch (e) {
if (e is TypeError) {}
}
return bookingdays;
}
|
Flutter Hive: type 'List' is not a subtype of type 'List?' in type cast
|
Following problem:
I have Hive in my project and I save lists of objects there. When I store something while using the app and want to get the data from Hive (still the same session), everything is fine and I get the data I previously stored in Hive. When I look in my documents folder, there is also a .hive file where my data is stored. But after I close the app and want to get the data from Hive, it tells me:
" type 'Unhandled exception:
type 'List<dynamic>' is not a subtype of type 'List<Bookingday>?' in type cast
#0 BoxImpl.get (package:hive/src/box/box_impl.dart:44:26)
#1 BookingDAO.Eval ()
#2 BookingDAO.getStoredWeek (package:workplace/utils/booking_dao.dart:23:36)
#3 _ReservationsState.initState (package:workplace/pages/reservations.dart:44:30)
I can't understand such behaviour. Why does it work well when I store and get the data in the same session, but after restarting the app it says the list is of type dynamic?
Can it have something to do with how I open and close Hive?
my Method:
Box<List<Bookingday>> boxList = Hive.box<List<Bookingday>>(bookingDayBoxName);
List<Bookingday> getStoredWeek(DateTime firstJan, DateTime date) {
String key = getCalenderWeek(firstJan, date);
try {
List<Bookingday>? bookList = boxList.get(key);
if (bookList != null) {
bookingdays = bookList;
return bookList;
} else {
return List.generate(
getWeek(dateNow).length,
(index) => Bookingday(
day: dateNow,
parkingSlotReserved: false,
capacityCounter: 0,
maxCapacity: 4));
}
} catch (e) {
if (e is TypeError) {}
}
return bookingdays;
}
|
[
"Try to cast\nBox<List<Bookingday>> boxList = Hive.box<List<Bookingday>>(bookingDayBoxName);\n\n List<Bookingday> getStoredWeek(DateTime firstJan, DateTime date) {\n String key = getCalenderWeek(firstJan, date);\n try {\n List<Bookingday>? bookList = boxList.get(key);\n if (bookList != null) {\n bookingdays = bookList!;\n return bookList;\n } else {\n return List.generate(\n getWeek(dateNow).length,\n (index) => Bookingday(\n day: dateNow,\n parkingSlotReserved: false,\n capacityCounter: 0,\n maxCapacity: 4)).cast<Bookingday>();\n }\n } catch (e) {\n if (e is TypeError) {}\n }\n return bookingdays;\n } \n\n"
] |
[
0
] |
[] |
[] |
[
"dart",
"flutter",
"flutter_hive",
"hive",
"local_storage"
] |
stackoverflow_0074666279_dart_flutter_flutter_hive_hive_local_storage.txt
|
Q:
malloc and free on a Stack in C
I'm trying to write code that dynamically stores the coordinates of points on a stack and then prints (and frees) them back:
#include <stdio.h>
#include <stdlib.h>
struct point{
float x;
float y;
float z;
}; typedef struct point POINT;
struct stackPoint{
POINT myPoint;
struct stackPoint *next;
}; typedef struct stackPoint STACKPOINT;
static STACKPOINT *stacktop = NULL;
void printStackElement(POINT aPoint){
printf(" x:%f \t y:%f \t z:%f\n", aPoint.x, aPoint.y, aPoint.z );
}
void push(POINT pushPoint){
STACKPOINT *newElem = malloc(sizeof(STACKPOINT));
stacktop = stacktop +1;
newElem->myPoint = pushPoint;
stacktop = newElem;
}
POINT pop(){
POINT b = stacktop->myPoint;
free(stacktop);
stacktop = stacktop -1;
return b;
}
int isEmpty(){
if(stacktop == NULL){
return 1;
}
return 0;
}
POINT readPoint(){
POINT a;
printf("Please enter your x-Coordinate: ");
scanf(" %f", &a.x);
printf("Please enter your y-Coordinate: ");
scanf(" %f", &a.y);
printf("Please enter your z-Coordinate: ");
scanf(" %f", &a.z);
return a;
}
int main(){
char quit = 0;
while(quit !=1 ){
printf("\n\n enter 'p' to enter another Point or 'q' to quit: " );
scanf(" %s", &quit);
switch(quit){
case 'p':
push(readPoint());
break;
case 'q':
quit = 1;
break;
default:
break;
}
}
while(isEmpty() == 0){
printStackElement(pop());
}
}
It prints the last entry, but before printing the second-to-last entry an error message appears saying that the "pointer being freed was not allocated".
I tried running it without the free() call, but then it just prints the first line and infinite lines of just 0's.
I also tried using the *stacktop pointer as a non-static pointer instead of the *newElem pointer, but that also didn't work.
A:
It is supposed to be a linked list. Our professor just gave us this exercise and never even mentioned a linked list in any way or form. Thank you very much, it works now!
I changed the push function to:
STACKPOINT *newElem = malloc(sizeof(STACKPOINT));
newElem->myPoint = pushPoint;
newElem->next = stacktop;
stacktop = newElem;
and the pop function to:
POINT b = stacktop->myPoint;
STACKPOINT *old = stacktop;
stacktop = stacktop->next; /* read next before freeing, to avoid use-after-free */
free(old);
return b;
|
malloc and free on a Stack in C
|
I'm trying to write code that dynamically stores the coordinates of points on a stack and then prints (and frees) them back:
#include <stdio.h>
#include <stdlib.h>
struct point{
float x;
float y;
float z;
}; typedef struct point POINT;
struct stackPoint{
POINT myPoint;
struct stackPoint *next;
}; typedef struct stackPoint STACKPOINT;
static STACKPOINT *stacktop = NULL;
void printStackElement(POINT aPoint){
printf(" x:%f \t y:%f \t z:%f\n", aPoint.x, aPoint.y, aPoint.z );
}
void push(POINT pushPoint){
STACKPOINT *newElem = malloc(sizeof(STACKPOINT));
stacktop = stacktop +1;
newElem->myPoint = pushPoint;
stacktop = newElem;
}
POINT pop(){
POINT b = stacktop->myPoint;
free(stacktop);
stacktop = stacktop -1;
return b;
}
int isEmpty(){
if(stacktop == NULL){
return 1;
}
return 0;
}
POINT readPoint(){
POINT a;
printf("Please enter your x-Coordinate: ");
scanf(" %f", &a.x);
printf("Please enter your y-Coordinate: ");
scanf(" %f", &a.y);
printf("Please enter your z-Coordinate: ");
scanf(" %f", &a.z);
return a;
}
int main(){
char quit = 0;
while(quit !=1 ){
printf("\n\n enter 'p' to enter another Point or 'q' to quit: " );
scanf(" %s", &quit);
switch(quit){
case 'p':
push(readPoint());
break;
case 'q':
quit = 1;
break;
default:
break;
}
}
while(isEmpty() == 0){
printStackElement(pop());
}
}
It prints the last entry, but before printing the second-to-last entry an error message appears saying that the "pointer being freed was not allocated".
I tried running it without the free() call, but then it just prints the first line and infinite lines of just 0's.
I also tried using the *stacktop pointer as a non-static pointer instead of the *newElem pointer, but that also didn't work.
|
[
"It is supposed to be a linked list. Our professor just gave us this exercise and never even mentioned a linked list in any way or form.. Thank you very much, it works now!\nI changed the push function to:\nSTACKPOINT *newElem = malloc(sizeof(STACKPOINT));\n\nnewElem->myPoint = pushPoint;\n\nnewElem->next = stacktop;\n\nstacktop = newElem;\n\nand the pop function to:\nPOINT b = stacktop->myPoint;\n\nfree(stacktop);\n\nstacktop = stacktop->next;\n\nreturn b;\n\n"
] |
[
0
] |
[] |
[] |
[
"c",
"free",
"malloc",
"stack"
] |
stackoverflow_0074666036_c_free_malloc_stack.txt
|
Q:
Is it possible to restrict the Type passed to struct by the parent class?
I would like to get a compile error for any Type that does not derive from a certain parent class. If you know of such a possibility, please let me know.
using System;
class Program
{
static void Main(string[] args)
{
var objectA = new TypeReference(typeof(TargetSubClass));
// I want to make a compile error if the parent class of Type is not TargetClass.
var objectB = new TypeReference(typeof(NotTargetClass));
}
}
public readonly struct TypeReference
{
public readonly string TypeName;
public readonly Type Type;
public TypeReference(Type type)
{
Type = type;
TypeName = Type.FullName;
}
}
public class TargetClass{}
public class TargetSubClass : TargetClass{}
public class NotTargetClass{}
At run time I can just throw, but I want to make it a compile error, like a generic where constraint.
using System;
public readonly struct TypeReference
{
public readonly string TypeName;
public readonly Type Type;
public TypeReference(Type type)
{
// confirmation of Type
if (!typeof(TargetClass).IsAssignableFrom(type))
{
throw new ArgumentException("Type is not a TargetClass.");
}
Type = type;
TypeName = Type.FullName;
}
}
A:
You could create a generic factory method with appropriate constraint:
public readonly struct TypeReference
{
public readonly string TypeName;
public readonly Type Type;
private TypeReference(Type type)
{
Type = type;
TypeName = Type.FullName;
}
public static TypeReference Create<T>() where T : TargetClass
{
return new TypeReference(typeof(T));
}
}
var objectA = TypeReference.Create<TargetSubClass>();
// this produces a compile error
var objectB = TypeReference.Create<NotTargetClass>();
|
Is it possible to restrict the Type passed to struct by the parent class?
|
I would like to get a compile error for any Type that does not derive from a certain parent class. If you know of such a possibility, please let me know.
using System;
class Program
{
static void Main(string[] args)
{
var objectA = new TypeReference(typeof(TargetSubClass));
// I want to make a compile error if the parent class of Type is not TargetClass.
var objectB = new TypeReference(typeof(NotTargetClass));
}
}
public readonly struct TypeReference
{
public readonly string TypeName;
public readonly Type Type;
public TypeReference(Type type)
{
Type = type;
TypeName = Type.FullName;
}
}
public class TargetClass{}
public class TargetSubClass : TargetClass{}
public class NotTargetClass{}
At run time I can just throw, but I want to make it a compile error, like a generic where constraint.
using System;
public readonly struct TypeReference
{
public readonly string TypeName;
public readonly Type Type;
public TypeReference(Type type)
{
// confirmation of Type
if (!typeof(TargetClass).IsAssignableFrom(type))
{
throw new ArgumentException("Type is not a TargetClass.");
}
Type = type;
TypeName = Type.FullName;
}
}
|
[
"You could create a generic factory method with appropriate constraint:\npublic readonly struct TypeReference\n{\n public readonly string TypeName;\n public readonly Type Type;\n \n private TypeReference(Type type)\n {\n Type = type;\n TypeName = Type.FullName;\n }\n \n public static TypeReference Create<T>() where T : TargetClass\n {\n return new TypeReference(typeof(T));\n }\n}\nvar objectA = TypeReference.Create<TargetSubClass>();\n// this produces a compile error\nvar objectB = TypeReference.Create<NotTargetClass>();\n\n"
] |
[
1
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0074666450_c#.txt
|
Q:
CSS Text margins vh
I was working on a project and started working on texts when I realized that setting the margin-top: 20vh property in CSS does not work. Why? Please help me.
#text1 {
margin-top: 26vh;
}
#text2 {
float: right;
margin-right: 15mm;
margin-top: 22vh;
}
#text3 {
float: left;
margin-top: 37vh;
}
<section class="imagesUnderHeader" id="image2" style="background-image: url(Images/Loop.gif)">
<div class="progressionBar" id="progressionBar"></div>
<div class="dots" id="firstParagraph"></div>
<div class="dots" id="secondParagraph"></div>
<div class="dots" id="thirdParagraph"></div>
<div class="paragraphBar" id="bar1"></div>
<div class="paragraphBar" id="bar2"></div>
<div class="paragraphBar" id="bar3"></div>
<h1 class="title adjustPadding goUpText" id="title">Some Text</h1><br>
<p class="subtitle adjustPadding popUpText" id="text1">Subtitle 1</p><br>
<p class="subtitle adjustPadding popUpText" id="text2">Subtitle 2</p><br>
<p class="subtitle adjustPadding popUpText" id="text3">Subtitle 3, <a> with a link, </a> <a>Another one</a><br> <a>and another one</a></p>
</section>
If you need any information please ask and I will provide additional details; thank you so much for the help.
A:
Here I made a demo stripping away some details from your code that were just adding noise for the sake of better showing what's going on.
As the mdn docs says:
https://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/Values_and_units
1vh = 1% of the viewport's height
I included that quote just to avoid any ambiguity but it was pretty clear already.
In this demo I used a single custom CSS property to decide the margin-top value, applied it to all three #text elements, and also used it to size the ::before element, which serves as a placeholder highlighting the margin like a ruler.
The value of that variable is 10vh, i.e. 10% of the viewport's height.
As you can see running the demo at full screen, if you resize the window, those distances will indeed change as expected.
:root {
--margin-top: 10vh;
}
* {
margin: 0;
  padding: 0;
}
#text1, #text2, #text3 {
margin-top: var(--margin-top);
}
.title {
border: solid 1px;
}
.subtitle {
position: relative;
border: solid 1px blue;
}
.subtitle::before {
content: '';
position: absolute;
height: var(--margin-top);
border: solid 1px red;
width: 0px;
top: calc(-1 * var(--margin-top) - 1px); /*here it's doing -1px to account for border*/
left: 5px;
}
<section class="imagesUnderHeader" id="image2">
<h1 class="title adjustPadding goUpText" id="title">Some Text</h1>
<p class="subtitle adjustPadding popUpText" id="text1">Subtitle 1</p>
<p class="subtitle adjustPadding popUpText" id="text2">Subtitle 2</p>
<p class="subtitle adjustPadding popUpText" id="text3">
Subtitle 3,
<a> with a link, </a>
<a>Another one</a>
<br>
<a>and another one</a>
</p>
</section>
Anyway, the whole truth is also that I removed the <br>s, which were adding vertical spaces hard to tell apart from margins, and, more importantly, I removed the float left and right, because floats affect positioning in relation to the document flow.
So for the sake of completeness this is the same exact demo with those styles added to show the difference:
:root {
--margin-top: 10vh;
}
* {
margin: 0;
  padding: 0;
}
#text1, #text2, #text3 {
margin-top: var(--margin-top);
}
#text2{
float: left;
}
#text3{
float: right;
}
.title {
border: solid 1px;
}
.subtitle {
position: relative;
border: solid 1px blue;
}
.subtitle::before {
content: '';
position: absolute;
height: var(--margin-top);
border: solid 1px red;
width: 0px;
top: calc(-1 * var(--margin-top) - 1px); /*here it's doing -1px to account for border*/
left: 5px;
}
<section class="imagesUnderHeader" id="image2">
<h1 class="title adjustPadding goUpText" id="title">Some Text</h1>
<p class="subtitle adjustPadding popUpText" id="text1">Subtitle 1</p>
<p class="subtitle adjustPadding popUpText" id="text2">Subtitle 2</p>
<p class="subtitle adjustPadding popUpText" id="text3">
Subtitle 3,
<a> with a link, </a>
<a>Another one</a>
<br>
<a>and another one</a>
</p>
</section>
So, to make it short: to me the margin is applied correctly, so I'm not sure I'm answering the exact issue you are encountering. As an added consideration, maybe you are fighting against margin collapsing; in case that's an option, here is the related info from MDN:
https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Box_Model/Mastering_margin_collapsing
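To illustrate that last point, here is a minimal, self-contained sketch of margin collapsing (the class names are made up for this example): the child's margin-top collapses through the parent and moves the parent down instead of creating space inside it.
<div class="parent">
  <p class="child">Some text</p>
</div>

.child {
  margin-top: 20vh; /* collapses through .parent by default */
}

.parent {
  /* any one of these creates a new block formatting context
     (or a barrier between the margins) and prevents the collapse: */
  overflow: hidden;
  /* padding-top: 1px; */
  /* display: flow-root; */
}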
|
CSS Text margins vh
|
I was working on a project and started working on texts when I realized that the margin-top: 20vh property in CSS does not work. Why? Please help me.
#text1 {
margin-top: 26vh;
}
#text2 {
float: right;
margin-right: 15mm;
margin-top: 22vh;
}
#text3 {
float: left;
margin-top: 37vh;
}
<section class="imagesUnderHeader" id="image2" style="background-image: url(Images/Loop.gif)">
<div class="progressionBar" id="progressionBar"></div>
<div class="dots" id="firstParagraph"></div>
<div class="dots" id="secondParagraph"></div>
<div class="dots" id="thirdParagraph"></div>
<div class="paragraphBar" id="bar1"></div>
<div class="paragraphBar" id="bar2"></div>
<div class="paragraphBar" id="bar3"></div>
<h1 class="title adjustPadding goUpText" id="title">Some Text</h1><br>
<p class="subtitle adjustPadding popUpText" id="text1">Subtitle 1</p><br>
<p class="subtitle adjustPadding popUpText" id="text2">Subtitle 2</p><br>
<p class="subtitle adjustPadding popUpText" id="text3">Subtitle 3, <a> with a link, </a> <a>Another one</a><br> <a>and another one</a></p>
</section>
If you need any information, please ask and I will provide it. Thank you so much for the help.
|
[
"Here I made a demo stripping away some details from your code that were just adding noise for the sake of better showing what's going on.\nAs the mdn docs says:\nhttps://developer.mozilla.org/en-US/docs/Learn/CSS/Building_blocks/Values_and_units\n\n1vh = 1% of the viewport's height\n\nI included that quote just to avoid any ambiguity but it was pretty clear already.\nIn this demo I just used one single custom css property to decide the margin-top value and used that value to set the margin-top of all the 3 #text elements and to size the ::before element used as a placeholder to highlight the margin like it was a ruler.\nThe value of that variable is going to be 10vh so 10% of viewport's height.\nAs you can see running the demo at full screen, if you resize the window, those distances will indeed change as expected.\n\n\n:root {\n --margin-top: 10vh;\n}\n\n* {\n margin: 0;\n padding; 0;\n}\n\n#text1, #text2, #text3 {\n margin-top: var(--margin-top);\n}\n\n.title {\n border: solid 1px;\n}\n\n.subtitle {\n position: relative;\n border: solid 1px blue;\n}\n\n.subtitle::before {\n content: '';\n position: absolute;\n height: var(--margin-top);\n border: solid 1px red;\n width: 0px;\n top: calc(-1 * var(--margin-top) - 1px); /*here it's doing -1px to account for border*/\n left: 5px;\n}\n<section class=\"imagesUnderHeader\" id=\"image2\">\n <h1 class=\"title adjustPadding goUpText\" id=\"title\">Some Text</h1>\n <p class=\"subtitle adjustPadding popUpText\" id=\"text1\">Subtitle 1</p>\n <p class=\"subtitle adjustPadding popUpText\" id=\"text2\">Subtitle 2</p>\n <p class=\"subtitle adjustPadding popUpText\" id=\"text3\">\n Subtitle 3,\n <a> with a link, </a>\n <a>Another one</a>\n <br>\n <a>and another one</a>\n </p>\n</section>\n\n\n\nAnyway the whole truth is also that I removed the <br>s that were going to add vertical spaces hard to discern from margins and more importantly I also removed the float left and right because they affect the positioning in relation to the document flow.\nSo for the sake of completeness this is the same exact demo with those styles added to show the difference:\n\n\n:root {\n --margin-top: 10vh;\n}\n\n* {\n margin: 0;\n padding; 0;\n}\n\n#text1, #text2, #text3 {\n margin-top: var(--margin-top);\n}\n\n#text2{\n float: left;\n}\n\n#text3{\n float: right;\n}\n\n.title {\n border: solid 1px;\n}\n\n.subtitle {\n position: relative;\n border: solid 1px blue;\n}\n\n.subtitle::before {\n content: '';\n position: absolute;\n height: var(--margin-top);\n border: solid 1px red;\n width: 0px;\n top: calc(-1 * var(--margin-top) - 1px); /*here it's doing -1px to account for border*/\n left: 5px;\n}\n<section class=\"imagesUnderHeader\" id=\"image2\">\n <h1 class=\"title adjustPadding goUpText\" id=\"title\">Some Text</h1>\n <p class=\"subtitle adjustPadding popUpText\" id=\"text1\">Subtitle 1</p>\n <p class=\"subtitle adjustPadding popUpText\" id=\"text2\">Subtitle 2</p>\n <p class=\"subtitle adjustPadding popUpText\" id=\"text3\">\n Subtitle 3,\n <a> with a link, </a>\n <a>Another one</a>\n <br>\n <a>and another one</a>\n </p>\n</section>\n\n\n\nSo to make it short, to me the margin is correctly applied. So I'm not sure I'm answering to the exact issue you are encountering. As an added consideration maybe you are fighting against the margin collapsing and in case that's an option, here is the related info from mdn:\nhttps://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Box_Model/Mastering_margin_collapsing\n"
] |
[
0
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0074666218_css_html.txt
|
Q:
Gmail SMTP works in SpringBoot but not on Tomee Server
I am using exactly the same settings for a javax Mail session, which work like a charm in a Spring Boot app but fail on a TomEE server.
On SpringBoot's application.properties, I have the following settings (which works):
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=<my gmail>
spring.mail.password=<App PW generated on Google>
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
And here are the settings in TomEE's conf/tomee.xml:
<Resource id="mail/bjm" type="javax.mail.Session">
mail.smtp.host=smtp.gmail.com
mail.smtp.starttls.enable=true
mail.smtp.port=587
mail.transport.protocol=smtp
mail.smtp.auth=true
mail.smtp.user=<my gmail>
password=<App PW generated on Google>
</Resource>
On Tomee, when I run the application, I get the following error message:
failure (javax.mail.AuthenticationFailedException: null)
I am puzzled about what I am doing wrong in tomee.xml, because I have followed the guidelines from here: https://tomee.apache.org/master/docs/configuring-javamail.html
How can I fix the issue?
A:
The sample you state you are following, Declaring a JavaMail Resource, isn't really what you are following, since you have changed it.
It is for use with the IMAP SASL XOAUTH2 mechanism configuration; XOAUTH2 is a form of OAuth2 that is supported by Google's IMAP server.
Rather than following the sample, you have changed it to use a username and an app password. This is not XOAUTH2.
Offending lines of code:
spring.mail.username=<my gmail>
spring.mail.password=<App PW generated on Google>
You are getting javax.mail.AuthenticationFailedException because you have not added a valid access token as shown in the sample; no such token appears anywhere in your code.
store.connect("imap.gmail.com", "<username>@gmail.com", "<YourAccesToken>");
To be 100% clear: an app password is not an access token. An access token must be created through the standard OAuth2 mechanism of requesting authorization from the user, with a mail scope.
|
Gmail SMTP works in SpringBoot but not on Tomee Server
|
I am using exactly the same settings for a javax Mail session, which work like a charm in a Spring Boot app but fail on a TomEE server.
On SpringBoot's application.properties, I have the following settings (which works):
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=<my gmail>
spring.mail.password=<App PW generated on Google>
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
And here are the settings in TomEE's conf/tomee.xml:
<Resource id="mail/bjm" type="javax.mail.Session">
mail.smtp.host=smtp.gmail.com
mail.smtp.starttls.enable=true
mail.smtp.port=587
mail.transport.protocol=smtp
mail.smtp.auth=true
mail.smtp.user=<my gmail>
password=<App PW generated on Google>
</Resource>
On Tomee, when I run the application, I get the following error message:
failure (javax.mail.AuthenticationFailedException: null)
I am puzzled about what I am doing wrong in tomee.xml, because I have followed the guidelines from here: https://tomee.apache.org/master/docs/configuring-javamail.html
How can I fix the issue?
|
[
"The sample you \"state\" you are following Declaring a JavaMail Resource which you aren't really since you have changed it.\nIs for use with MAP SASL XOAUTH2 mechanism configuration Xoauth2 is a form of oauth2 that is supported by googles imap server.\nrather then following the sample you have changed it to use user name and an apps password. This is not Xoauth2.\nOffending lines of code:\nspring.mail.username=<my gmail>\nspring.mail.password=<App PW generated on Google>\n\nYou are getting a fail for javax.mail.AuthenticationFailedException because you have not added a valid access token as shown in the sample, which you have not shown in your code.\nstore.connect(\"imap.gmail.com\", \"<username>@gmail.com\", \"<YourAccesToken>\");\n\nTo be 100% clear an apps password is not an access token an access token must be created by standard Oauth2 mechanizes of requesting authorization of the user, with a mail scope.\n"
] |
[
0
] |
[] |
[] |
[
"gmail",
"imap",
"oauth",
"spring_boot",
"tomee_8"
] |
stackoverflow_0074651253_gmail_imap_oauth_spring_boot_tomee_8.txt
|
Q:
How to display (Google) Maps on .Net Maui
I'm playing around with .Net Maui. I'd like to add a map to my demo app. Unfortunately it seems that the map control has not been migrated yet. Also it seems that the promised implementation of the control has been removed from the roadmap for RC.
Also existing projects like this one: https://github.com/amay077/Xamarin.Forms.GoogleMaps
doesn't support .Net Maui...
Has anybody already included a map in a .NET MAUI project and could give me a hint?
Thx!
A:
To use Google or Apple Maps in .NET MAUI while it is not yet in the framework, make use of your own handler. You can find a detailed blog post on our website.
But in general, you have to do the following steps:
Create a view that represents your map
public class MapView : View, IMapView
{ }
This is the control you use inside your ContentPage.
Create the platform-independent handler-implementation to render your view
To render your view, MAUI needs a platform-independent entry point.
partial class MapHandler
{
public static IPropertyMapper<MapView, MapHandler> MapMapper = new PropertyMapper<MapView, MapHandler>(ViewMapper)
{ };
public MapHandler() : base(MapMapper)
{ }
}
That needs to be registered in your MauiProgram.cs
.ConfigureMauiHandlers(handlers =>
{
handlers.AddHandler(typeof(MapHandlerDemo.Maps.Map),typeof(MapHandler));
})
Create the platform-specific handler-implementation
A handler tells MAUI how to render your control. So you need a handler for each platform you want to support.
The iOS handler is, in comparison to Android, simpler and shorter to implement.
public partial class MapHandler : ViewHandler<MapView, MKMapView>
{
public MapHandler(IPropertyMapper mapper, CommandMapper commandMapper = null) : base(mapper, commandMapper)
{ }
protected override MKMapView CreatePlatformView()
{
return new MKMapView(CoreGraphics.CGRect.Empty);
}
protected override void ConnectHandler(MKMapView PlatformView)
{ }
protected override void DisconnectHandler(MKMapView PlatformView)
{
// Clean-up the native view to reduce memory leaks and memory usage
if (PlatformView.Delegate != null)
{
PlatformView.Delegate.Dispose();
PlatformView.Delegate = null;
}
PlatformView.RemoveFromSuperview();
}
}
Next step would be to implement your Android handler.
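For orientation, here is a rough sketch of what that Android handler could look like, mirroring the iOS one above. It assumes the Google Maps MapView binding from the Google Play Services packages; the lifecycle wiring shown here is simplified and is my assumption, not part of the original answer:
public partial class MapHandler : ViewHandler<MapView, Android.Gms.Maps.MapView>
{
    public MapHandler(IPropertyMapper mapper, CommandMapper commandMapper = null) : base(mapper, commandMapper)
    { }

    protected override Android.Gms.Maps.MapView CreatePlatformView()
    {
        // Context is available on ViewHandler when targeting Android
        var mapView = new Android.Gms.Maps.MapView(Context);
        // The native map manages its own lifecycle; in a real app you
        // should also forward OnResume/OnPause/OnDestroy from the host activity
        mapView.OnCreate(null);
        mapView.OnResume();
        return mapView;
    }

    protected override void DisconnectHandler(Android.Gms.Maps.MapView platformView)
    {
        platformView.OnPause();
        platformView.OnDestroy();
        base.DisconnectHandler(platformView);
    }
}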
A:
I am also missing it. In an earlier roadmap the MAUI team announced it for February 2022 and MAUI version 12, but meanwhile we have the end of March and MAUI 14, and still no progress on the map control.
But Xamarin, the predecessor, still has it. Btw. that is what prevents me from moving to MAUI.
A:
Try the GoogleApi package by Michael Vivet on NuGet. It is compatible with Net5. I have downloaded the source and added Net6 to the dll, it works perfectly.
A:
I have ported Xamarin.Forms.GoogleMaps to .NET MAUI. Feel free to contact me if you have any issues: https://www.nuget.org/packages/Onion.Maui.GoogleMaps/
A:
With .net 7.0 there has been added the control Microsoft.Maui.Controls.Maps
Documentation: Map
Simple demo Maui app: MapDemo
|
How to display (Google) Maps on .Net Maui
|
I'm playing around with .Net Maui. I'd like to add a map to my demo app. Unfortunately it seems that the map control has not been migrated yet. Also it seems that the promised implementation of the control has been removed from the roadmap for RC.
Also existing projects like this one: https://github.com/amay077/Xamarin.Forms.GoogleMaps
doesn't support .Net Maui...
Has anybody already included a map in a .NET MAUI project and could give me a hint?
Thx!
|
[
"To use Google or Apple Maps in .NET MAUI, while it is not yet in the Framework , make use of your own handler. You can find a detailed blog post on our website.\nBut in general, you have to do the following steps:\n\nCreate a view that represents your map\n\n public class MapView : View, IMapView\n { }\n\nThis is the control you use inside your ContentPage.\n\nCreate the platform-independent handler-implementation to render your view\n\nTo render your view, MAUI needs a platform-independent entry point.\n partial class MapHandler\n {\n public static IPropertyMapper<MapView, MapHandler> MapMapper = new PropertyMapper<MapView, MapHandler>(ViewMapper)\n { };\n\n public MapHandler() : base(MapMapper)\n { }\n }\n\nThat needs to be registered in your MauiProgram.cs\n.ConfigureMauiHandlers(handlers =>\n{\n handlers.AddHandler(typeof(MapHandlerDemo.Maps.Map),typeof(MapHandler));\n})\n\n\nCreate the platform-specific handler-implementation\n\nA handler tells MAUI how to render your control. So you need a handler for each platform you want to support.\nThe iOS handler is, in comparison to Android, simpler and shorter to implement.\n public partial class MapHandler : ViewHandler<MapView, MKMapView>\n {\n public MapHandler(IPropertyMapper mapper, CommandMapper commandMapper = null) : base(mapper, commandMapper)\n { }\n\n protected override MKMapView CreatePlatformView()\n {\n return new MKMapView(CoreGraphics.CGRect.Empty);\n }\n\n protected override void ConnectHandler(MKMapView PlatformView)\n { }\n\n protected override void DisconnectHandler(MKMapView PlatformView)\n {\n // Clean-up the native view to reduce memory leaks and memory usage\n if (PlatformView.Delegate != null)\n {\n PlatformView.Delegate.Dispose();\n PlatformView.Delegate = null;\n }\n\n PlatformView.RemoveFromSuperview();\n }\n }\n\nNext step would be to implement your Android handler.\n",
"I also missing it. In an earlier roadmap of the MAUI team it was announced for February/2022 and MAUI Version 12, but meanwhile we have end of March and MAUI 14, but no progress in the map control.\nBut Xamarin, the predecessor, still have it. Btw. that prevents me to move to MAUI.\n",
"Try the GoogleApi package by Michael Vivet on NuGet. It is compatible with Net5. I have downloaded the source and added Net6 to the dll, it works perfectly.\n",
"i have ported Xamarin.Forms.GoogleMaps to .NET MAUI. Feel free to contact me if u have any issues: https://www.nuget.org/packages/Onion.Maui.GoogleMaps/\n",
"With .net 7.0 there has been added the control Microsoft.Maui.Controls.Maps\nDocumentation: Map\nSimple demo Maui app: MapDemo\n"
] |
[
6,
3,
1,
0,
0
] |
[
"Try like this please:\n<WebView Source=\"https://embed.windy.com\" />\n\n\nCheck please also\n"
] |
[
-2
] |
[
".net",
"dictionary",
"maui"
] |
stackoverflow_0070976168_.net_dictionary_maui.txt
|
Q:
html how to put filter:drop-shadow over background: linear-gradient
Hello,
I have a question. I made a navbar on the left side with a filter: drop-shadow, and then I put an image at the top of the page with a linear gradient at the bottom. The problem is that the filter: drop-shadow is rendered under the linear gradient, and I want to ask if anyone knows how I can put the drop shadow from the navbar over the linear gradient from the image. I also added a screenshot to the post. I am really new to HTML, so sorry if I write anything wrong in the post.
Navbar left code:
.navbar {
height: 100%;
background-color: rgb(27, 27, 27);
width: 60px;
left: 0;
overflow-x: hidden;
padding-top: 20px;
position: fixed;
filter: drop-shadow(0 0 0.75rem crimson);
top: 0;
}
Picture gradient:
.filmbanner .img1 {
width: 1890px;
}
.content1 {
background: linear-gradient(180deg, rgba(13,29,49,0) 0%, rgba(13,29,49,0.5690476874343487) 31%, rgba(13,29,49,0.6894958667060574) 63%, rgba(13,29,49,1) 100%);
position: relative;
margin-top: -200px;
height: 200px;
left: 50px;
}
What I tried:
Putting the image rules at the top of the stylesheet and the navbar CSS at the bottom.
A:
You could try something like this:
<div class="container">
<div class="navbar">
<!-- navbar content here -->
</div>
<div class="filmbanner">
<div class="img1">
<!-- image here -->
</div>
<div class="content1">
<!-- content here -->
</div>
</div>
</div>
.container {
filter: drop-shadow(0 0 0.75rem crimson);
}
.navbar {
height: 100%;
background-color: rgb(27, 27, 27);
width: 60px;
left: 0;
overflow-x: hidden;
padding-top: 20px;
position: fixed;
top: 0;
}
.filmbanner .img1 {
width: 1890px;
}
.content1 {
  background: linear-gradient(180deg, rgba(13,29,49,0) 0%, rgba(13,29,49,0.5690476874343487) 31%, rgba(13,29,49,0.6894958667060574) 63%, rgba(13,29,49,1) 100%);
  position: relative;
  margin-top: -200px;
  height: 200px;
  left: 50px;
}
|
html how to put filter:drop-shadow over background: linear-gradient
|
Hello,
I have a question. I made a navbar on the left side with a filter: drop-shadow, and then I put an image at the top of the page with a linear gradient at the bottom. The problem is that the filter: drop-shadow is rendered under the linear gradient, and I want to ask if anyone knows how I can put the drop shadow from the navbar over the linear gradient from the image. I also added a screenshot to the post. I am really new to HTML, so sorry if I write anything wrong in the post.
Navbar left code:
.navbar {
height: 100%;
background-color: rgb(27, 27, 27);
width: 60px;
left: 0;
overflow-x: hidden;
padding-top: 20px;
position: fixed;
filter: drop-shadow(0 0 0.75rem crimson);
top: 0;
}
Picture gradient:
.filmbanner .img1 {
width: 1890px;
}
.content1 {
background: linear-gradient(180deg, rgba(13,29,49,0) 0%, rgba(13,29,49,0.5690476874343487) 31%, rgba(13,29,49,0.6894958667060574) 63%, rgba(13,29,49,1) 100%);
position: relative;
margin-top: -200px;
height: 200px;
left: 50px;
}
What I tried:
Putting the image rules at the top of the stylesheet and the navbar CSS at the bottom.
|
[
"You could try something like this:\n<div class=\"container\">\n <div class=\"navbar\">\n <!-- navbar content here -->\n </div>\n <div class=\"filmbanner\">\n <div class=\"img1\">\n <!-- image here -->\n </div>\n <div class=\"content1\">\n <!-- content here -->\n </div>\n </div>\n</div>\n\n.container {\n filter: drop-shadow(0 0 0.75rem crimson);\n}\n\n.navbar {\n height: 100%;\n background-color: rgb(27, 27, 27);\n width: 60px;\n left: 0;\n overflow-x: hidden;\n padding-top: 20px;\n position: fixed;\n top: 0; \n}\n\n.filmbanner .img1 {\n width: 1890px;\n}\n\n.content1 {\n background: linear-gradient(180deg, rgba(13,29,49,0) 0%, rgba(13,29,49,0.5690476874343487) 31%, rgba(13,\n\n\n"
] |
[
0
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0074666485_css_html.txt
|
Q:
How to use ASCII codes to create uppercase strings in C programming language
I have a function in C as follows
char *ft_strupcase(char *str)
{
int i;
i = 0;
while (str[i] != '\0')
{
if (str[i] >= 'a' && str[i] <= 'z')
{
str[i] -= 32;
}
i++;
}
return (str);
}
which converts lowercase letters in the string to upper case. I want to achieve the same but using ASCII values instead. My code is
char *ft_strupcase(char *str)
{
int index;
index = 0;
while (str[index] != '\0')
{
if (str[index] < 97 && str[index] < 122)
{
str[index] = str[index] - str[32];
}
++index;
}
return str;
}
which gives me an error
Bad permissions for mapped region at address 0x400657at 0x40057B: ft_strupcase
which I don't understand. According to my understanding, if the char in the string is "a" then my code should convert it to "A", as 97 - 32 = 65, which is the ASCII code for A. I am lost as to what I am doing wrong. Any guidance is highly appreciated.
A:
Ensure str does not point to a string literal
char *bad = "Test";
ft_strupcase(bad); // UB
char good[] = "Test";
ft_strupcase(good); // OK
Wrong compare
str[index] < 97 && str[index] < 122 never matches a lowercase letter; the test should be str[index] >= 97 && str[index] <= 122.
Wrong offset
str[32] is not the difference between lower and upper case ASCII characters. That is some element of array str[]. The offset you want is the plain constant 32.
Avoid naked magic numbers
//if (str[index] < 97 && str[index] < 122) {
// str[index] = str[index] - str[32];
//}
#define ASCIIA 65
#define ASCIIa 97
#define ASCIIz 122
if (str[index] >= ASCIIa && str[index] <= ASCIIz) {
str[index] = str[index] - ASCIIa + ASCIIA;
}
It really makes more sense to use 'a' instead of ASCIIa or 97. Same for the other constants.
The only reason for not using 'a' is to make the source code portable to non-ASCII source code environments (rare these days) yet still handle ASCII input.
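Putting those fixes together, a corrected version of the function could look like this (a sketch; like the original, it assumes ASCII input):
char *ft_strupcase(char *str)
{
    int index;

    index = 0;
    while (str[index] != '\0')
    {
        /* lowercase ASCII letters are 97 ('a') through 122 ('z') */
        if (str[index] >= 97 && str[index] <= 122)
        {
            /* 32 is the constant distance between cases in ASCII */
            str[index] = str[index] - 32;
        }
        ++index;
    }
    return str;
}
Remember to call it on a writable buffer (char s[] = "Test";), never on a string literal.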
|
How to use ASCII codes to create uppercase strings in C programming language
|
I have a function in C as follows
char *ft_strupcase(char *str)
{
int i;
i = 0;
while (str[i] != '\0')
{
if (str[i] >= 'a' && str[i] <= 'z')
{
str[i] -= 32;
}
i++;
}
return (str);
}
which converts lowercase letters in the string to upper case. I want to achieve the same but using ASCII values instead. My code is
char *ft_strupcase(char *str)
{
int index;
index = 0;
while (str[index] != '\0')
{
if (str[index] < 97 && str[index] < 122)
{
str[index] = str[index] - str[32];
}
++index;
}
return str;
}
which gives me an error
Bad permissions for mapped region at address 0x400657at 0x40057B: ft_strupcase
which I don't understand. According to my understanding, if the char in the string is "a" then my code should convert it to "A", as 97 - 32 = 65, which is the ASCII code for A. I am lost as to what I am doing wrong. Any guidance is highly appreciated.
|
[
"Insure str does not point to a string literal\nchar *bad = \"Test\"; \nft_strupcase(bad); // UB\n\nchar good[] = \"Test\"; \nft_strupcase(good); // OK\n\nWrong compare\nWrong offset\nstr[32] is not the difference between lower and upper case ASCII characters. That is some element of array str[].\nAvoid naked magic numbers\n\n //if (str[index] < 97 && str[index] < 122) {\n // str[index] = str[index] - str[32];\n //}\n\n #define ASCIIA 65\n #define ASCIIa 97\n #define ASCIIz 122\n if (str[index] >= ASCIIa && str[index] <= ASCIIz) {\n str[index] = str[index] - ASCIIa + ASCIIA;\n }\n\n\nIt really makes more sense to use 'a' instead of ASCIIa or 97. Same for the other constants.\nThe only reason for not using 'a' is to make the source code portable to non-ASCII source code environments (rare these days) yet still handle ASCII input.\n"
] |
[
0
] |
[] |
[] |
[
"c"
] |
stackoverflow_0074666337_c.txt
|
Q:
Nextjs export gives Cannot find module for page
Hi I just started playing around with nextjs to see if it fits my use case. I wanted to export the site with some dynamic routes.
My pages folder structure is like below
page
locales
[locale]
[slug].js
When I run next develop I can access the page at http://localhost:3000/locales/de-DE/summer-dress-f.
So now I'm trying to export the page with next.config.js like
module.exports = {
exportPathMap: function() {
return {
"/locales/de-DE/summer-dress-f": {
page: "/locales",
query: { locale: "de-DE", slug: "summer-dress-f" }
}
};
}
};
next build runs fine but when I run next export I get the error
Error: Cannot find module for page: /locales
at pageNotFoundError (/Users/bmathew/Desktop/workspace/next-demo/node_modules/next-server/dist/server/require.js:13:17)
Any ideas what I am missing here?
A:
Running npm install seems to fix this.
A:
Finally figured it out. The pathmap should look like
module.exports = {
exportPathMap: function() {
return {
"/locales/de-DE/summer-dress-f": {
page: "/locales/[locale]/[slug]",
query: { locale: "de-DE", slug: "summer-dress-f" }
}
};
}
};
A:
Page component names should be unique.
I had about.tsx with name AboutPage and faqs.tsx also with name AboutPage; renaming the component in faqs.tsx to be unique fixed it :)
A:
I just hit a similar error, and I had simply forgotten to run next build before next export!
A:
I had this vague error message when I had capitals in the file name WIP-sidebar.js.
A:
In my case, running:
npm i --save --legacy-peer-deps
fixed the issue.
A:
In my case, I solved a similar issue by deleting 'node_modules' by running rm -rf node_modules and installed the packages again.
|
Nextjs export gives Cannot find module for page
|
Hi I just started playing around with nextjs to see if it fits my use case. I wanted to export the site with some dynamic routes.
My pages folder structure is like below
page
locales
[locale]
[slug].js
When I run next develop I can access the page at http://localhost:3000/locales/de-DE/summer-dress-f.
So now I'm trying to export the page with next.config.js like
module.exports = {
exportPathMap: function() {
return {
"/locales/de-DE/summer-dress-f": {
page: "/locales",
query: { locale: "de-DE", slug: "summer-dress-f" }
}
};
}
};
next build runs fine but when I run next export I get the error
Error: Cannot find module for page: /locales
at pageNotFoundError (/Users/bmathew/Desktop/workspace/next-demo/node_modules/next-server/dist/server/require.js:13:17)
Any ideas what I am missing here?
|
[
"Running npm install seems to fix this.\n",
"Finally figured it out. The pathmap should look like\nmodule.exports = {\n exportPathMap: function() {\n return {\n \"/locales/de-DE/summer-dress-f\": {\n page: \"/locales/[locale]/[slug]\",\n query: { locale: \"de-DE\", slug: \"summer-dress-f\" }\n }\n };\n }\n};\n\n",
"Page component naming should be unique.\nSo I had about.tsx with name: AboutPage and faqs.tsx with name: AboutPage as well, amending faqs.tsx to be unique fixed it :)\n",
"I just hit a similar error, and I had simply forgotten to run next build before next export!\n",
"I had this vague error message when I had capitals in the file name WIP-sidebar.js.\n",
"In my case, running:\nnpm i --save --legacy-peer-deps\n\nfixed the issue.\n",
"In my case, I solved a similar issue by deleting 'node_modules' by running rm -rf node_modules and installed the packages again.\n"
] |
[
9,
4,
4,
2,
0,
0,
0
] |
[] |
[] |
[
"next.js"
] |
stackoverflow_0057004513_next.js.txt
|
Q:
How to use @ControllerAdvice for catching exceptions from Service classes?
I have some Service classes which contain multiple methods that throw errors. An example of a method that throws an error:
public Optional<Item> getItemById(Long itemId) throws Exception {
return Optional.of(itemRepository.findById(itemId).
orElseThrow(() -> new Exception("Item with that id doesn't exist")));
}
Should I catch errors in the @ControllerAdvice annotated class?
How should I do it?
A:
The controller marked with @ControllerAdvice will intercept any exception thrown in the stack called when a request arrives. Whether you should catch errors with ControllerAdvice is up to you, but it allows you to customize the behaviour once an exception is thrown. To do it you should create a class like this:
@ControllerAdvice
public class GlobalExceptionHandler {
@ExceptionHandler({ Exception.class, MyCustomException.class }) //Which exceptions should this method intercept
public final ResponseEntity<ApiError> handleException(Exception ex){
return new ResponseEntity<>(body, HttpStatus.NOT_FOUND); //Or any HTTP error you want to return
}
}
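For completeness, here is a minimal sketch of how this ties into the service from the question. ItemNotFoundException and the ApiError shape are hypothetical names introduced for this example, not part of the original answer:
// Hypothetical unchecked exception thrown by the service layer
public class ItemNotFoundException extends RuntimeException {
    public ItemNotFoundException(String message) {
        super(message);
    }
}

// Hypothetical error payload sent back to the client
public class ApiError {
    private final String message;

    public ApiError(String message) { this.message = message; }

    public String getMessage() { return message; }
}

// The service method now throws the specific exception
public Optional<Item> getItemById(Long itemId) {
    return Optional.of(itemRepository.findById(itemId)
            .orElseThrow(() -> new ItemNotFoundException("Item with that id doesn't exist")));
}

// And a dedicated handler method inside the @ControllerAdvice class
@ExceptionHandler(ItemNotFoundException.class)
public ResponseEntity<ApiError> handleItemNotFound(ItemNotFoundException ex) {
    return new ResponseEntity<>(new ApiError(ex.getMessage()), HttpStatus.NOT_FOUND);
}
Because ItemNotFoundException is unchecked, the service signature no longer needs throws Exception, and the advice maps it to a 404 in one place.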
|
How to use @ControllerAdvice for catching exceptions from Service classes?
|
I have some Service classes which contain multiple methods that throw errors. An example of a method that throws an error:
public Optional<Item> getItemById(Long itemId) throws Exception {
return Optional.of(itemRepository.findById(itemId).
orElseThrow(() -> new Exception("Item with that id doesn't exist")));
}
Should I catch errors in the @ControllerAdvice annotated class?
How should I do it?
|
[
"The controller marked with @ControllerAdvice will intercept any exception thrown in the stack called when a request arrives. If the question is if you should catch errors with ControllerAdvice, is up to you, but it allows you to customize the behaviour once a exception is thrown. To do it you should create a class like this:\n@ControllerAdvice\npublic class GlobalExceptionHandler {\n\n @ExceptionHandler({ Exception.class, MyCustomException.class }) //Which exceptions should this method intercept\n public final ResponseEntity<ApiError> handleException(Exception ex){\n return new ResponseEntity<>(body, HttpStatus.NOT_FOUND); //Or any HTTP error you want to return\n }\n\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"error_handling",
"java",
"spring",
"spring_boot"
] |
stackoverflow_0074666291_error_handling_java_spring_spring_boot.txt
|
Q:
How to fix ''http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.'
I am currently working on a React app, but when I run yarn start I keep getting this issue: Access to XMLHttpRequest at 'https://google.com/' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
'web-preferences': {
'web-security': false
}
but the Moesif Origin & CORS Changer extension on Chrome helped me bypass it. However, I am trying to fix it without an extension.
const electron = require("electron");
const app = electron.app;
const BrowserWindow = electron.BrowserWindow;
const path = require("path");
const isDev = require("electron-is-dev");
let mainWindow;
let createWindow=()=> {
mainWindow = new BrowserWindow({ width: 900, height: 680 });
mainWindow.loadURL(
isDev
? "http://localhost:3000"
: `file://${path.join(__dirname, "../build/index.html")}`
);
mainWindow.on("closed", () => (mainWindow = null));
}
app.on("ready", createWindow);
app.on("window-all-closed", () => {
if (process.platform !== "darwin") {
app.quit();
}
});
app.on("activate", () => {
if (mainWindow === null) {
createWindow();
}
});
I am expecting to bypass this issue without the Moesif Origin & CORS Changer extension on Chrome.
A:
I faced the same problem using Express.js, and it's basically the same; here's the code I've used to deal with CORS:
const express = require('express')
const app = express()
// Defining CORS
app.use(function(req, res, next) {
res.setHeader(
"Access-Control-Allow-Headers",
"X-Requested-With,content-type"
);
res.setHeader("Access-Control-Allow-Origin", "*");
res.setHeader(
"Access-Control-Allow-Methods",
"GET, POST, OPTIONS, PUT, PATCH, DELETE"
);
res.setHeader("Access-Control-Allow-Credentials", true);
next();
});
hope this helps
A:
run npm i --save cors on server, then
app.options("*", cors());
app.use(cors());
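If allowing every origin is too permissive, the cors package also accepts options; here is a sketch restricting it to the dev origin from the question (the origin value is illustrative):
const cors = require("cors");

app.use(cors({
  origin: "http://localhost:3000", // only allow the React dev server
  credentials: true // needed if you send cookies or auth headers
}));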
|
How to fix ''http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.'
|
I am currently working on a React app, but when I run yarn start I keep getting this issue: Access to XMLHttpRequest at 'https://google.com/' from origin 'http://localhost:3000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
'web-preferences': {
'web-security': false
}
but the Moesif Origin & CORS Changer extension on Chrome helped me bypass it. However, I am trying to fix it without an extension.
const electron = require("electron");
const app = electron.app;
const BrowserWindow = electron.BrowserWindow;
const path = require("path");
const isDev = require("electron-is-dev");
let mainWindow;
let createWindow=()=> {
mainWindow = new BrowserWindow({ width: 900, height: 680 });
mainWindow.loadURL(
isDev
? "http://localhost:3000"
: `file://${path.join(__dirname, "../build/index.html")}`
);
mainWindow.on("closed", () => (mainWindow = null));
}
app.on("ready", createWindow);
app.on("window-all-closed", () => {
if (process.platform !== "darwin") {
app.quit();
}
});
app.on("activate", () => {
if (mainWindow === null) {
createWindow();
}
});
I am expecting to bypass this issue without the Moesif Origin & CORS Changer extension on Chrome.
|
[
"I faced the same problem using expressjs and it's basically the same, here's the code I've used to deal wit CORS\nconst express = require('express')\n\nconst app = express()\n\n// Defining CORS\napp.use(function(req, res, next) {\n res.setHeader(\n \"Access-Control-Allow-Headers\",\n \"X-Requested-With,content-type\"\n );\n res.setHeader(\"Access-Control-Allow-Origin\", \"*\");\n res.setHeader(\n \"Access-Control-Allow-Methods\",\n \"GET, POST, OPTIONS, PUT, PATCH, DELETE\"\n );\n res.setHeader(\"Access-Control-Allow-Credentials\", true);\n next();\n});\n\nhope this helps\n",
"run npm i --save cors on server, then\napp.options(\"*\", cors());\napp.use(cors());\n"
] |
[
2,
0
] |
[] |
[] |
[
"electron",
"javascript",
"node.js",
"reactjs",
"typescript"
] |
stackoverflow_0058740144_electron_javascript_node.js_reactjs_typescript.txt
|
Q:
Why is my terminal on java showing -1 as one of the values in array?
In my java program, I am trying to display a table of two single arrays in descending order. I have managed to display it in both ascending and descending order. However, there is an additional array element -1 in my terminal. The -1 can be seen in the picture attached below.
-1 element inside array
This is my attempt so far:
import java.util.*;
public class Q2_Frequency {
public static void main(String[] args) {
int sum = 0, mean, temp;
Scanner input = new Scanner(System.in);
System.out.println("Please enter the number of days: "); //take input from user
int n = input.nextInt();
int a[] = new int[n];
int b[] = new int[n];
int c = 0;
System.out.println("Please enter number of trucks using a road over the " + n + " day period: ");
for (int i = 0; i < n; i++) { //input into array
a[i] = input.nextInt();
sum = sum + a[i];
}
mean = sum / n;
System.out.println("The mean is: " + mean); // calculate mean of n day period
System.out.println("Sorted in ascending order");
System.out.println("Input\tFrequency");//print table in ascending order
for(int i = 0 ; i < n ; i++)
{
for(int j = i + 1 ; j < n ; j++)
{
if (a[i] > a[j])
{
temp = a[i];
a[i] = a[j];
a[j] = temp;
}
}
}
for (int i = 0; i < n; i++) {
c = 1;
if (a[i] != 1) {
for (int j = i + 1; j < n; j++) {
if (a[i] == a[j]) {
c = c + 1;
a[j] = -1;
}
}
b[i] = c;
}
}
for (int i = 0; i < n; i++) {
if (a[i] != -1)
{
System.out.println(a[i] + "\t\t\t"+ b[i]);
}
}
System.out.println("Sorted in descending order");
System.out.println("Input\tFrequency");//print table in ascending order
for(int i = 0 ; i < n ; i++)
{
for(int j = i + 1 ; j < n ; j++)
{
if (a[i] < a[j])
{
temp = a[i];
a[i] = a[j];
a[j] = temp;
}
}
System.out.println(a[i] + "\t\t\t"+ b[i]);
}
ArrayLength(a);
}
private static void ArrayLength(int []array) // method to count number of inputs
{
if (array==null)
{
System.out.println("Number of input is 0.");
}
else
{
int arrayLength = array.length;
System.out.println("Number of input is: "+arrayLength);
}
}
}
Does anyone have an idea why the -1 appears only in the descending order?
A:
I refactored your code a bit; this fixes the issues you are having. The -1 values you saw are the sentinels your frequency-counting loop writes into a[j] for duplicates: the ascending print skips them with if (a[i] != -1), but your descending print does not, and the b frequencies are no longer aligned after re-sorting only a.
public static void main(String[] args) {
int sum = 0, mean = 0;
Scanner input = new Scanner(System.in);
System.out.println("Please enter the number of days: ");
int n = input.nextInt();
int[] a = new int[n];
int[] b = new int[n];
System.out.println("Please enter number of trucks using a road over the " + n + " day period: ");
for (int i = 0; i < n; i++) {
a[i] = input.nextInt();
sum += a[i];
}
if (n != 0) mean = sum / n;
System.out.println("The mean is: " + mean);
printHeaders("Sorted in ascending order");
for (int i = 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
sort(a[i] > a[j], a, b, i, j);
}
}
calculateFrequency(n, a, b);
printValues(n, a, b);
printHeaders("Sorted in descending order");
for (int i = 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
sort(a[i] < a[j], a, b, i, j);
}
}
printValues(n, a, b);
ArrayLength(a);
}
private static void calculateFrequency(int n, int[] a, int[] b) {
for (int i = 0; i < n; i++) {
int c = 1;
for (int j = i + 1; j < n; j++) {
if (a[i] != a[j]) continue;
c = c + 1;
a[j] = -1;
}
b[i] = c;
}
}
private static void printValues(int n, int[] a, int[] b) {
for (int i = 0; i < n; i++) {
if (a[i] != -1) System.out.println(a[i] + "\t\t\t" + b[i]);
}
}
private static void sort(boolean statement, int[] a, int[] b, int i, int j) {
if (!statement) return;
int temp = a[i];
a[i] = a[j];
a[j] = temp;
temp = b[i];
b[i] = b[j];
b[j] = temp;
}
private static void printHeaders(String stringOrder) {
System.out.println(stringOrder);
System.out.println("Input\tFrequency");
}
private static void ArrayLength(int[] array) {
System.out.println("Number of input is: " + array.length);
}
This method sorts the a and b arrays at the same time:
private static void sort(boolean statement, int[] a, int[] b, int i, int j) {
if (!statement) return;
int temp = a[i];
a[i] = a[j];
a[j] = temp;
temp = b[i];
b[i] = b[j];
b[j] = temp;
}
This one is for printing the values of the two arrays:
private static void printValues(int n, int[] a, int[] b) {
for (int i = 0; i < n; i++) {
if (a[i] != -1) System.out.println(a[i] + "\t\t\t" + b[i]);
}
}
|
Why is my terminal on java showing -1 as one of the values in array?
|
In my java program, I am trying to display a table of two single arrays in descending order. I have managed to display it in both ascending and descending order. However, there is an additional array element -1 in my terminal. The -1 can be seen in the picture attached below.
-1 element inside array
This is my attempt so far:
import java.util.*;
public class Q2_Frequency {
public static void main(String[] args) {
int sum = 0, mean, temp;
Scanner input = new Scanner(System.in);
System.out.println("Please enter the number of days: "); //take input from user
int n = input.nextInt();
int a[] = new int[n];
int b[] = new int[n];
int c = 0;
System.out.println("Please enter number of trucks using a road over the " + n + " day period: ");
for (int i = 0; i < n; i++) { //input into array
a[i] = input.nextInt();
sum = sum + a[i];
}
mean = sum / n;
System.out.println("The mean is: " + mean); // calculate mean of n day period
System.out.println("Sorted in ascending order");
System.out.println("Input\tFrequency");//print table in ascending order
for(int i = 0 ; i < n ; i++)
{
for(int j = i + 1 ; j < n ; j++)
{
if (a[i] > a[j])
{
temp = a[i];
a[i] = a[j];
a[j] = temp;
}
}
}
for (int i = 0; i < n; i++) {
c = 1;
if (a[i] != 1) {
for (int j = i + 1; j < n; j++) {
if (a[i] == a[j]) {
c = c + 1;
a[j] = -1;
}
}
b[i] = c;
}
}
for (int i = 0; i < n; i++) {
if (a[i] != -1)
{
System.out.println(a[i] + "\t\t\t"+ b[i]);
}
}
System.out.println("Sorted in descending order");
System.out.println("Input\tFrequency");//print table in ascending order
for(int i = 0 ; i < n ; i++)
{
for(int j = i + 1 ; j < n ; j++)
{
if (a[i] < a[j])
{
temp = a[i];
a[i] = a[j];
a[j] = temp;
}
}
System.out.println(a[i] + "\t\t\t"+ b[i]);
}
ArrayLength(a);
}
private static void ArrayLength(int []array) // method to count number of inputs
{
if (array==null)
{
System.out.println("Number of input is 0.");
}
else
{
int arrayLength = array.length;
System.out.println("Number of input is: "+arrayLength);
}
}
}
Does anyone have an idea why the -1 appears only in the descending order?
|
[
"I refactor your code a bit but this fixes the issues you are having\npublic static void main(String[] args) {\n int sum = 0, mean = 0;\n Scanner input = new Scanner(System.in);\n System.out.println(\"Please enter the number of days: \");\n int n = input.nextInt();\n int[] a = new int[n];\n int[] b = new int[n];\n System.out.println(\"Please enter number of trucks using a road over the \" + n + \" day period: \");\n for (int i = 0; i < n; i++) {\n a[i] = input.nextInt();\n sum += a[i];\n }\n if (n != 0) mean = sum / n;\n System.out.println(\"The mean is: \" + mean);\n printHeaders(\"Sorted in ascending order\");\n for (int i = 0; i < n; i++) {\n for (int j = i + 1; j < n; j++) {\n sort(a[i] > a[j], a, b, i, j);\n }\n }\n calculateFrequency(n, a, b);\n printValues(n, a, b);\n printHeaders(\"Sorted in descending order\");\n for (int i = 0; i < n; i++) {\n for (int j = i + 1; j < n; j++) {\n sort(a[i] < a[j], a, b, i, j);\n }\n }\n printValues(n, a, b);\n ArrayLength(a);\n}\nprivate static void calculateFrequency(int n, int[] a, int[] b) {\n for (int i = 0; i < n; i++) {\n int c = 1;\n for (int j = i + 1; j < n; j++) {\n if (a[i] != a[j]) continue;\n c = c + 1;\n a[j] = -1;\n }\n b[i] = c;\n }\n}\nprivate static void printValues(int n, int[] a, int[] b) {\n for (int i = 0; i < n; i++) {\n if (a[i] != -1) System.out.println(a[i] + \"\\t\\t\\t\" + b[i]);\n }\n}\nprivate static void sort(boolean statement, int[] a, int[] b, int i, int j) {\n if (!statement) return;\n int temp = a[i];\n a[i] = a[j];\n a[j] = temp;\n\n temp = b[i];\n b[i] = b[j];\n b[j] = temp;\n}\nprivate static void printHeaders(String stringOrder) {\n System.out.println(stringOrder);\n System.out.println(\"Input\\tFrequency\");\n}\nprivate static void ArrayLength(int[] array) {\n System.out.println(\"Number of input is: \" + array.length);\n}\n\nthis method is for sorthing the a and b array at the same time\nprivate static void sort(boolean statement, int[] a, int[] b, int i, int j) {\n if (!statement) return;\n int temp = a[i];\n a[i] = a[j];\n a[j] = temp;\n\n temp = b[i];\n b[i] = b[j];\n b[j] = temp;\n}\n\nthis one if for print the values of the 2 arrays\nprivate static void printValues(int n, int[] a, int[] b) {\n for (int i = 0; i < n; i++) {\n if (a[i] != -1) System.out.println(a[i] + \"\\t\\t\\t\" + b[i]);\n }\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"element",
"frequency",
"java",
"sorting"
] |
stackoverflow_0074665590_arrays_element_frequency_java_sorting.txt
|
Q:
pyTelegramBotAPI inline google search engine
@bot.inline_handler(func=lambda query: len(query.query) > 0)
def query_text(query):
sleep(6)
text=query.query
html=requests.get(f'https://google.com/search?q={text}')
# print(html.status_code)
open('index.html','w', encoding='utf-8').write(html.text)
soup=BeautifulSoup(html.text, 'html.parser').find_all('div',{"class":"***********"})
for i in soup:
fk.append(types.InlineQueryResultArticle(id=str(len(fk)), title=f"{i.find('h3').get_text()}",description=f"{i.find('div',{'class':'**********'}).get_text()}",input_message_content=types.InputTextMessageContent(message_text=i.find('a').get('href').replace('/url?q=','https://google.com/url?q=')),hide_url=True,url=i.find('a').get('href').replace('/url?q=','https://google.com/url?q='),thumb_url='https://w7.pngwing.com/pngs/338/520/png-transparent-g-suite-google-play-google-logo-google-text-logo-cloud-computing.png', thumb_width=30, thumb_height=30))
print(i.find('a').get('href').replace('/url?q=','')+'\n')
sleep(2)
bot.answer_inline_query(query.id, fk)
When I write @bot google request,
the bot handles it as g, go, goo, ..., google (one query per keystroke).
What is causing the error
"A request to the Telegram API was unsuccessful. Error code: 400. Description: Bad Request: query is too old and response timeout expired or query ID is invalid"?
How can I add a timeout to the text input so that the bot doesn't respond to every letter?
A:
I think the error resides in the way you parse the data. It takes at least 8 seconds (based on the sleeps) just to get to the answer method. Telegram inline queries have only a few seconds until they are considered old, so it is better to process the data after you call bot.answer_inline_query() and then send it to the user using bot.send_message().
I am not certain how it works with async code though.
If you find another solution, please let me know :)
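As an illustration of the debounce idea the question asks about, here is a minimal sketch. It assumes pyTelegramBotAPI (telebot); the latest_query dict, the 0.5 s delay, and the build_results helper are illustrative choices, not library features:
import time

latest_query = {}  # user id -> most recent query text

@bot.inline_handler(func=lambda query: len(query.query) > 0)
def query_text(query):
    user_id = query.from_user.id
    latest_query[user_id] = query.query
    time.sleep(0.5)  # wait briefly; more keystrokes may arrive
    if latest_query[user_id] != query.query:
        return  # a newer keystroke superseded this query, skip it
    results = build_results(query.query)  # hypothetical search helper
    bot.answer_inline_query(query.id, results, cache_time=1)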
|
pyTelegramBotAPI inline google search engine
|
@bot.inline_handler(func=lambda query: len(query.query) > 0)
def query_text(query):
sleep(6)
text=query.query
html=requests.get(f'https://google.com/search?q={text}')
# print(html.status_code)
open('index.html','w', encoding='utf-8').write(html.text)
soup=BeautifulSoup(html.text, 'html.parser').find_all('div',{"class":"***********"})
for i in soup:
fk.append(types.InlineQueryResultArticle(id=str(len(fk)), title=f"{i.find('h3').get_text()}",description=f"{i.find('div',{'class':'**********'}).get_text()}",input_message_content=types.InputTextMessageContent(message_text=i.find('a').get('href').replace('/url?q=','https://google.com/url?q=')),hide_url=True,url=i.find('a').get('href').replace('/url?q=','https://google.com/url?q='),thumb_url='https://w7.pngwing.com/pngs/338/520/png-transparent-g-suite-google-play-google-logo-google-text-logo-cloud-computing.png', thumb_width=30, thumb_height=30))
print(i.find('a').get('href').replace('/url?q=','')+'\n')
sleep(2)
bot.answer_inline_query(query.id, fk)
When I write @bot google request,
the bot handles it as g, go, goo, ..., google (one query per keystroke).
What is causing the error
"A request to the Telegram API was unsuccessful. Error code: 400. Description: Bad Request: query is too old and response timeout expired or query ID is invalid"?
How can I add a timeout to the text input so that the bot doesn't respond to every letter?
|
[
"I think, the error resides in your way of parsing data. It takes at least 8 seconds (based on sleeps) just to get to the answer method. Telegram inline queries have very few seconds until they are considered old, so, it is better to process data after you call bot.answer_inline_query() and then send it to user using bot.send_message()\nI am not certain how it works with async code though.\nIf you find another solution, please let me know :)\n"
] |
[
0
] |
[] |
[] |
[
"inline",
"py_telegram_bot_api",
"python_3.x",
"telebot"
] |
stackoverflow_0071895711_inline_py_telegram_bot_api_python_3.x_telebot.txt
|
Q:
Hide vertices from plot.igraph conditional on vertex attribute without deleting them
I have an igraph plot that is geographically laid out based on its latitude and longitude coordinates. I now want to hide certain points from one time period, while preserving the layout of the graph. I would therefore not like to delete the vertices from the network, but merely make them invisible in this particular plot rendering, conditional on a vertex attribute. Furthermore, the color attribute is already set to capture another variable, so I cannot use that to hide the points.
My plot is generated according to the following code:
lo <- layout.norm(as.matrix(g[, c("longitude","latitude")]))
plot.igraph(g, layout=lo, vertex.label=NA,rescale=T, vertex.size = 4)
The time attribute is a numerical variable stored in V(g)$period
Is there code I can put within the plot.igraph function to hide vertices for which V(g)$period == 1?
A:
Update.
Building upon Szabolcs's answer.
library(igraph)
## reproducible example
g <- make_graph("Zachary")
V(g)$name <- V(g)
set.seed(10)
lyt <- layout_with_drl(g)
V(g)$x <- lyt[,1]
V(g)$y <- lyt[,2]
plot(g)
del_vs <- c(4, 8, 9, 19, 24, 33)
dev.new(); plot(g - del_vs, main = paste("Zachary minus", toString(del_vs)))
Try invisible ink, e.g. draw the hidden objects in the background color.
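Applied to the original question, that could look roughly like this (a sketch assuming the g, lo and V(g)$period from the question; the size and frame values are illustrative):
hide <- V(g)$period == 1
plot.igraph(
  g, layout = lo, rescale = TRUE, vertex.label = NA,
  vertex.size = ifelse(hide, 0, 4),
  vertex.color = ifelse(hide, NA, V(g)$color),
  vertex.frame.color = ifelse(hide, NA, "black")
)
## Note: edges incident to the hidden vertices are still drawn;
## give those edges color NA as well if they should disappear too.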
Or try this.
library(igraph)
## reproducible example.
g <- make_graph("Zachary")
V(g)$name <- V(g)
set.seed(10)
lyt <- layout_with_drl(g)
plot(g, layout=lyt)
## delete vertices and preserve layout.
del_vs <- c(9, 19, 24, 33)
g2 <- g - del_vs
g2$main <- paste("Zachary minus", toString(del_vs))
g2$layout <- matrix(lyt[-del_vs,], ncol=2)
dev.new(); plot(g2)
See also:
Looking to save coordinates/layout to make temporal networks in Igraph with DRL
A:
You can store the coordinates in the x and y vertex attributes. Then they will be used by plot automatically, and they will be preserved when you delete vertices.
For example:
g<-make_ring(4)
V(g)$x <- c(0,0,1,1)
V(g)$y <- c(0,1,0,1)
plot(g)
plot(delete_vertices(g,1))
|
Hide vertices from plot.igraph conditional on vertex attribute without deleting them
|
I have an igraph plot that is geographically laid out based on its latitude and longitude coordinates. I now want to hide certain points from one time period, while preserving the layout of the graph. I would therefore not like to delete the vertices from the network, but merely make them invisible in this particular plot rendering, conditional on a vertex attribute. Furthermore, the color attribute is already set to capture another variable, so I cannot use that to hide the points.
My plot is generated according to the following code:
lo <- layout.norm(as.matrix(g[, c("longitude","latitude")]))
plot.igraph(g, layout=lo, vertex.label=NA,rescale=T, vertex.size = 4)
The time attribute is a numerical variable stored in V(g)$period
Is there code I can put within the plot.igraph function to hide vertices for which V(g)$period == 1?
|
[
"Update.\nBuilding upon Szabolcs's answer.\nlibrary(igraph)\n## reproducible example\ng <- make_graph(\"Zachary\")\nV(g)$name <- V(g)\nset.seed(10)\nlyt <- layout_with_drl(g)\nV(g)$x <- lyt[,1]\nV(g)$y <- lyt[,2]\nplot(g)\ndel_vs <- c(4, 8, 9, 19, 24, 33)\ndev.new(); plot(g - del_vs, main = paste(\"Zachary minus\", toString(del_vs)))\n\n\nTry invisible inkt, e.g. print hidden objects in background color.\nOr try this.\nlibrary(igraph)\n## reproducible example.\ng <- make_graph(\"Zachary\")\nV(g)$name <- V(g)\nset.seed(10)\nlyt <- layout_with_drl(g)\nplot(g, layout=lyt)\n\n## delete vertices and preserve layout.\ndel_vs <- c(9, 19, 24, 33)\ng2 <- g - del_vs\ng2$main <- paste(\"Zachary minus\", toString(del_vs))\ng2$layout <- matrix(lyt[-del_vs,], ncol=2)\ndev.new(); plot(g2)\n\nSee also:\nLooking to save coordinates/layout to make temporal networks in Igraph with DRL\n.\n",
"You can store the coordinates in the x and y vertex attributes. Then they will be used by plot automatically, and they will be preserved when you delete vertices.\nFor example:\ng<-make_ring(4)\nV(g)$x <- c(0,0,1,1)\nV(g)$y <- c(0,1,0,1)\n\nplot(g)\n\n\nplot(delete_vertices(g,1))\n\n\n"
] |
[
2,
2
] |
[] |
[] |
[
"igraph",
"r"
] |
stackoverflow_0074660087_igraph_r.txt
|
Q:
How to determine the majority of appearances of a list in list of lists. (Python)
I am trying to determine the majority in a list of lists for a project I am working on. My problem is that the code will run in an environment that does not allow me to use packages. Can someone refer me to an algorithm that does what I am asking, or let me know about a way to do it with pre-built functions in Python that don't require outside packages? Thank you for your time.
Example:
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
Output:
["hello", 1]
A:
You can actually use a dictionary to save the lists as keys and use the values as counts. Then you can take the maximum count to get your result.
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
# Make a dictionary:
dic = {}
# Loop over every item in the data
for item in data:
# Convert to tuple, since a list is unhashable:
entry = tuple(item)
# Add one to the count
    # dic.get() gets the value of the entry in the dictionary
# if this exists. Else, it sets the value to 0.
dic[entry] = dic.get(entry, 0) + 1
# Get the maximum argument by using a lambda function
# on the items in the dictionary. Get the key by taking index 0.
result = max(dic.items(), key = lambda x: x[1])[0]
You might want to convert the tuple back to a list by
result = list(result)
A:
Here is one possible solution using the built-in Counter class from the collections module in Python:
from collections import Counter
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
# Create a list of all the elements in the sublists
elements = [element[0] for element in data]
# Use Counter to count the occurrences of each element
c = Counter(elements)
# Get the most common element
most_common_element = c.most_common(1)[0][0]
# Get the value of the most common element from the original data
for element in data:
if element[0] == most_common_element:
value = element[1]
break
# Print the result
print([most_common_element, value])
A:
Try this:
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
for i in data:
if data.count(i) == max(data.count(i) for i in data):
res = i
print(res)
Or this:
res = [i for i in data if data.count(i) == max(data.count(i) for i in data)][0]
print(res)
Output:
['hello', 1]
|
How to determine the majority of appearances of a list in list of lists. (Python)
|
I am trying to determine the majority in a list of lists for a project I am working on. My problem is that the code will run in an environment that does not allow me to use packages. Can someone refer me to an algorithm that does what I am asking, or let me know about a way to do it with pre-built functions in Python that don't require outside packages? Thank you for your time.
Example:
data = [ ["hello", 1], ["hello", 1], ["hello", 1], ["other", 32] ]
Output:
["hello", 1]
|
[
"You can actually use a dictionairy to save the lists as keys and use the values as count. Then you can take the maximum count, to get your result.\ndata = [ [\"hello\", 1], [\"hello\", 1], [\"hello\", 1], [\"other\", 32] ]\n\n# Make a dictionary:\ndic = {}\n\n# Loop over every item in the data\nfor item in data:\n\n # Convert to tuple, since a list is unhashable:\n entry = tuple(item)\n\n # Add one to the count\n # dic.get() gets the value of the entry in the dictionairy\n # if this exists. Else, it sets the value to 0.\n dic[entry] = dic.get(entry, 0) + 1\n\n# Get the maximum argument by using a lambda function \n# on the items in the dictionary. Get the key by taking index 0.\nresult = max(dic.items(), key = lambda x: x[1])[0]\n \n\nYou might want to convert the tuple back to a list by\nresult = list(result)\n\n",
"Here is one possible solution using the built-in Counter class from the collections module in Python:\nfrom collections import Counter\n\ndata = [ [\"hello\", 1], [\"hello\", 1], [\"hello\", 1], [\"other\", 32] ]\n\n# Create a list of all the elements in the sublists\nelements = [element[0] for element in data]\n\n# Use Counter to count the occurrences of each element\nc = Counter(elements)\n\n# Get the most common element\nmost_common_element = c.most_common(1)[0][0]\n\n# Get the value of the most common element from the original data\nfor element in data:\n if element[0] == most_common_element:\n value = element[1]\n break\n\n# Print the result\nprint([most_common_element, value])\n\n",
"Try this:\ndata = [ [\"hello\", 1], [\"hello\", 1], [\"hello\", 1], [\"other\", 32] ]\n\nfor i in data:\n if data.count(i) == max(data.count(i) for i in data):\n res = i\n\nprint(res)\n\nOr this:\nres = [i for i in data if data.count(i) == max(data.count(i) for i in data)][0]\nprint(res)\n\nOutput:\n['hello', 1]\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074665675_python.txt
|
Q:
Using MapKit causes Publishing changes from within view updates is not allowed, this will cause undefined behavior
Using Swift5.7, XCode14.0, iOS16.0,
I get very strange error messages and warnings in my Xcode console when trying to get a MapKit example to work.
Here is the log:
2022-11-01 17:26:51.756834+0100 myApp[3999:834036] Metal API Validation Enabled
2022-11-01 17:26:52.139973+0100 myApp[3999:834036] [PipelineLibrary] Mapping the pipeline data cache failed, errno 22
2022-11-01 17:26:52.192482+0100 myApp[3999:834036] [core] "Error returned from daemon: Error Domain=com.apple.accounts Code=7 "(null)""
2022-11-01 17:26:53.884031+0100 myApp[3999:834036] [SwiftUI] Publishing changes from within view updates is not allowed, this will cause undefined behavior.
2022-11-01 17:26:53.900265+0100 myApp[3999:834036] [SwiftUI] Publishing changes from within view updates is not allowed, this will cause undefined behavior.
It seems that in SwiftUI, there has been a change in how Published variables in combination with Bindings are handled.
The core issue, I think, is very nicely described here.
And I assume that Apple has not finished the transition to this new SwiftUI4 behaviour in their own APIs.
Or is there any way I can make the Publishing changes warning go away?
See my entire Code here below:
//
// MyView.swift
// myApp
//
import SwiftUI
import MapKit
struct MyView: View {
@State private var showMap = false
@State private var region = MKCoordinateRegion(
center: CLLocationCoordinate2D(
latitude: 37.8879948,
longitude: 4.1237047
),
span: MKCoordinateSpan(
latitudeDelta: 0.05,
longitudeDelta: 0.05
)
)
@State private var locations: [Location] = [Location(name: "Test", description: "", latitude: 37.8879948, longitude: 4.1237047)]
@State private var isLoading = false
var body: some View {
Map(coordinateRegion: $region,
annotationItems: locations,
annotationContent: { location in
MapAnnotation(
coordinate: CLLocationCoordinate2D(latitude: location.latitude, longitude: location.longitude)
) {
VStack {
Image("THPin")
.resizable()
.scaledToFit()
.frame(width: 44, height: 44)
ZStack {
Text(location.name)
.padding(5)
.font(.subheadline)
.background(.white.opacity(0.5), in: Capsule())
}
}
}
}
)
}
}
A:
The same problem here! I found that if you replace MapAnnotation with MapMarker, the problem disappears. The problem is most likely in the library itself.
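For reference, a minimal sketch of that swap, reusing the region and locations state from the question (note that MapMarker takes a coordinate and an optional tint, but no custom view content, so the custom pin image and label are lost):
Map(coordinateRegion: $region,
    annotationItems: locations,
    annotationContent: { location in
        // MapMarker renders the system pin instead of a custom view
        MapMarker(
            coordinate: CLLocationCoordinate2D(latitude: location.latitude, longitude: location.longitude),
            tint: .red
        )
    }
)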
|
Using MapKit causes Publishing changes from within view updates is not allowed, this will cause undefined behavior
|
Using Swift5.7, XCode14.0, iOS16.0,
I get very strange error messages and warnings in my Xcode console when trying to get a MapKit example to work.
Here is the log:
2022-11-01 17:26:51.756834+0100 myApp[3999:834036] Metal API Validation Enabled
2022-11-01 17:26:52.139973+0100 myApp[3999:834036] [PipelineLibrary] Mapping the pipeline data cache failed, errno 22
2022-11-01 17:26:52.192482+0100 myApp[3999:834036] [core] "Error returned from daemon: Error Domain=com.apple.accounts Code=7 "(null)""
2022-11-01 17:26:53.884031+0100 myApp[3999:834036] [SwiftUI] Publishing changes from within view updates is not allowed, this will cause undefined behavior.
2022-11-01 17:26:53.900265+0100 myApp[3999:834036] [SwiftUI] Publishing changes from within view updates is not allowed, this will cause undefined behavior.
It seems that in SwiftUI, there has been a change in how Published variables in combination with Bindings are handled.
The core issue, I think, is very nicely described here.
And I assume that Apple has not finished the transition to this new SwiftUI4 behaviour in their own APIs.
Or is there any way I can make the Publishing changes warning go away?
See my entire Code here below:
//
// MyView.swift
// myApp
//
import SwiftUI
import MapKit
struct MyView: View {
@State private var showMap = false
@State private var region = MKCoordinateRegion(
center: CLLocationCoordinate2D(
latitude: 37.8879948,
longitude: 4.1237047
),
span: MKCoordinateSpan(
latitudeDelta: 0.05,
longitudeDelta: 0.05
)
)
@State private var locations: [Location] = [Location(name: "Test", description: "", latitude: 37.8879948, longitude: 4.1237047)]
@State private var isLoading = false
var body: some View {
Map(coordinateRegion: $region,
annotationItems: locations,
annotationContent: { location in
MapAnnotation(
coordinate: CLLocationCoordinate2D(latitude: location.latitude, longitude: location.longitude)
) {
VStack {
Image("THPin")
.resizable()
.scaledToFit()
.frame(width: 44, height: 44)
ZStack {
Text(location.name)
.padding(5)
.font(.subheadline)
.background(.white.opacity(0.5), in: Capsule())
}
}
}
}
)
}
}
|
[
"The same problem! I found that if you replace MapAnnotation with MapMarker the problem disappears. The problem is most likely in the library itself\n"
] |
[
0
] |
[] |
[] |
[
"annotations",
"mapkit",
"swift",
"swiftui"
] |
stackoverflow_0074278985_annotations_mapkit_swift_swiftui.txt
|
Q:
pine script send an alert when the lines bumping into each other in stochastic indicator
I want to send an alert when two lines of the stochastic indicator bumping each other.
I wrote an alert condition but it doesn't give any alerts.
//@version=5
indicator(title="Stochastic", shorttitle="Stoch", format=format.price, precision=2, timeframe="", timeframe_gaps=true)
periodK = input.int(14, title="%K Length", minval=1)
smoothK = input.int(1, title="%K Smoothing", minval=1)
periodD = input.int(3, title="%D Smoothing", minval=1)
k = ta.sma(ta.stoch(close, high, low, periodK), smoothK)
d = ta.sma(k, periodD)
plot(k, title="%K", color=#2962FF)
plot(d, title="%D", color=#FF6D00)
// My alert condition
alertcondition(k == d, 'Collision happened', 'Collision happened')
h0 = hline(80, "Upper Band", color=#787B86)
hline(50, "Middle Band", color=color.new(#787B86, 50))
h1 = hline(20, "Lower Band", color=#787B86)
fill(h0, h1, color=color.rgb(33, 150, 243, 90), title="Background")
A:
It is very unlikely that your condition k == d is ever met exactly.
You should test whether the values just crossed:
justcrossed = false
if (k > d and k[1] < d[1]) or (k < d and k[1] > d[1])
    justcrossed := true
alertcondition(justcrossed, 'Collision happened', 'Collision happened')
Also, don't forget to activate your alert on your chart to create it (see https://www.tradingview.com/pine-script-reference/v5/#fun_alertcondition)
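A more compact variant of the same test (a sketch, assuming the same k and d series as above) uses Pine's built-in cross detection:
// ta.cross() is true on any bar where the two series cross in either direction
justcrossed = ta.cross(k, d)
alertcondition(justcrossed, 'Collision happened', 'Collision happened')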
|
pine script send an alert when the lines bumping into each other in stochastic indicator
|
I want to send an alert when two lines of the stochastic indicator bumping each other.
I wrote an alert condition but it doesn't give any alerts.
//@version=5
indicator(title="Stochastic", shorttitle="Stoch", format=format.price, precision=2, timeframe="", timeframe_gaps=true)
periodK = input.int(14, title="%K Length", minval=1)
smoothK = input.int(1, title="%K Smoothing", minval=1)
periodD = input.int(3, title="%D Smoothing", minval=1)
k = ta.sma(ta.stoch(close, high, low, periodK), smoothK)
d = ta.sma(k, periodD)
plot(k, title="%K", color=#2962FF)
plot(d, title="%D", color=#FF6D00)
// My alert condition
alertcondition(k == d, 'Collision happened', 'Collision happened')
h0 = hline(80, "Upper Band", color=#787B86)
hline(50, "Middle Band", color=color.new(#787B86, 50))
h1 = hline(20, "Lower Band", color=#787B86)
fill(h0, h1, color=color.rgb(33, 150, 243, 90), title="Background")
|
[
"It is very unlikely that your condition is met : k == d \nYou should test if the value just crossed : \njustcrossed = false\nif (k > d and k[1] < d[1]) or (k < d and k[1] > d[1]\n justcrossed := true\nalertcondition(justcrossed, 'Collision happened', 'Collision happened')\n\nAlso, don't forget to activate your alert on your chart to create it (see https://www.tradingview.com/pine-script-reference/v5/#fun_alertcondition)\n"
] |
[
0
] |
[] |
[] |
[
"pine_script",
"trading",
"tradingview_api"
] |
stackoverflow_0074666067_pine_script_trading_tradingview_api.txt
|
Q:
how to display items with category name as title
Hi, I have multiple items with different categories. I want to display all items with their category name as a title in the header. My code displays each item, but the category title repeats with each item.
I want the category title only once at the top, followed by its list of items.
My code is this:
{
formsList.map((item, index,temp=0) => {
if(temp!==item.cat_id)
{
temp = item?.cat_id;
return (
<div className="custom-control custom-radio mb-3">
<div className="form-control-label"> {item.category_title}</div>
<input
className="custom-control-input"
id= {item.id}
name= {item.cat_id}
type="radio"
/>
<label className="custom-control-label" htmlFor={item.id}>
{item.form_title} {temp}
</label>
</div>
)
}
return (
<div className="custom-control custom-radio mb-3">
<input
className="custom-control-input"
id= {item.id}
name= {item.cat_id}
type="radio"
/>
<label className="custom-control-label" htmlFor={item.id}>
{item.form_title}
</label>
</div>
)
})
}
My Json array is like this.
{"forms":
[
{"id":1,"category_title":"Individual Tax Return","cat_id":1,
"form_title":"Single},
{"id":2,"category_title":"Individual Tax Return","cat_id":1,
"form_title":"Married Filing Separately"},
{"id":3,"category_title":"Business Type", "cat_id":2,
"form_title":"SoleProprietorships"},
{"id":4,"category_title":"Business Type","cat_id":2,
"form_title":" Partnership"}
]
}
I want to display this one like as below
//////////////////
Individual Tax Return
Single
Married Filing Separately
Business Type
SoleProprietorships
Partnership
/////////////////////////
Please check and help, thanks.
A:
One way to solve this problem is to store the previous category ID in a variable, and only render the category title when the current category ID is different from the previous one. Here is an example of how you can do this:
{
let prevCatId = null;
formsList.map((item, index) => {
    if (prevCatId !== item.cat_id) {
      prevCatId = item.cat_id;

      // Render the category title together with the item itself,
      // so the first item of each category is not dropped
      return (
        <div key={item.id}>
          <div className="form-control-label">{item.category_title}</div>
          <div className="custom-control custom-radio mb-3">
            <input
              className="custom-control-input"
              id={item.id}
              name={item.cat_id}
              type="radio"
            />
            <label className="custom-control-label" htmlFor={item.id}>
              {item.form_title}
            </label>
          </div>
        </div>
      );
    }
return (
<div className="custom-control custom-radio mb-3">
<input
className="custom-control-input"
id={item.id}
name={item.cat_id}
type="radio"
/>
<label className="custom-control-label" htmlFor={item.id}>
{item.form_title}
</label>
</div>
);
});
}
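A more robust pattern, as a sketch, is to group the items by cat_id first and then render one title per group; groupByCat below is a hypothetical helper, and the JSX goes inside the component's render:
const groupByCat = (list) =>
  list.reduce((acc, item) => {
    // Collect items under their category id
    (acc[item.cat_id] = acc[item.cat_id] || []).push(item);
    return acc;
  }, {});

{Object.values(groupByCat(formsList)).map((group) => (
  <div key={group[0].cat_id}>
    <div className="form-control-label">{group[0].category_title}</div>
    {group.map((item) => (
      <div className="custom-control custom-radio mb-3" key={item.id}>
        <input className="custom-control-input" id={item.id} name={item.cat_id} type="radio" />
        <label className="custom-control-label" htmlFor={item.id}>{item.form_title}</label>
      </div>
    ))}
  </div>
))}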
A:
Please try this
json part
const recipes = [{
id: 716429,
title: "Pasta with Garlic, Scallions, Cauliflower & Breadcrumbs",
image: "http://ovuets.com/uploads/716429-312x231.jpg>",
dishTypes: [
"lunch",
"main course",
"main dish",
"dinner"
],
recipe: {
// recipe data
}
}]
function part
export default function Recipes() {
  return (
    <div>
      {recipes.map((recipe) => {
        return <div key={recipe.id}>
          <h1>{recipe.title}</h1>
          <img src={recipe.image} alt="recipe image" />
          {recipe.dishTypes.map((type, index) => {
            return <span key={index}>{type}</span>
          })}
        </div>
      })}
    </div>
  )
}
|
how to display items with category name as title
|
Hi, I have multiple items with different categories. I want to display all items with their category name as a title in the header. My code displays each item, but the category title repeats with each item.
I want the category title only once at the top, followed by its list of items.
My code is this:
{
formsList.map((item, index,temp=0) => {
if(temp!==item.cat_id)
{
temp = item?.cat_id;
return (
<div className="custom-control custom-radio mb-3">
<div className="form-control-label"> {item.category_title}</div>
<input
className="custom-control-input"
id= {item.id}
name= {item.cat_id}
type="radio"
/>
<label className="custom-control-label" htmlFor={item.id}>
{item.form_title} {temp}
</label>
</div>
)
}
return (
<div className="custom-control custom-radio mb-3">
<input
className="custom-control-input"
id= {item.id}
name= {item.cat_id}
type="radio"
/>
<label className="custom-control-label" htmlFor={item.id}>
{item.form_title}
</label>
</div>
)
})
}
My Json array is like this.
{"forms":
[
{"id":1,"category_title":"Individual Tax Return","cat_id":1,
"form_title":"Single},
{"id":2,"category_title":"Individual Tax Return","cat_id":1,
"form_title":"Married Filing Separately"},
{"id":3,"category_title":"Business Type", "cat_id":2,
"form_title":"SoleProprietorships"},
{"id":4,"category_title":"Business Type","cat_id":2,
"form_title":" Partnership"}
]
}
I want to display this one like as below
//////////////////
Individual Tax Return
Single
Married Filing Separately
Business Type
SoleProprietorships
Partnership
/////////////////////////
Please check and help, thanks.
|
[
"One way to solve this problem is to store the previous category ID in a variable, and only render the category title when the current category ID is different from the previous one. Here is an example of how you can do this:\n{\n let prevCatId = null;\n\n formsList.map((item, index) => {\n if (prevCatId !== item.cat_id) {\n prevCatId = item.cat_id;\n\n return (\n <div className=\"form-control-label\">{item.category_title}</div>\n );\n }\n\n return (\n <div className=\"custom-control custom-radio mb-3\">\n <input\n className=\"custom-control-input\"\n id={item.id}\n name={item.cat_id}\n type=\"radio\"\n />\n <label className=\"custom-control-label\" htmlFor={item.id}>\n {item.form_title}\n </label>\n </div>\n );\n });\n}\n\n",
"Please try this\njson part\nconst recipes = [{\n id: 716429,\n title: \"Pasta with Garlic, Scallions, Cauliflower & Breadcrumbs\",\n image: \"http://ovuets.com/uploads/716429-312x231.jpg>\",\n dishTypes: [\n \"lunch\",\n \"main course\",\n \"main dish\",\n \"dinner\"\n ],\n recipe: {\n // recipe data\n }\n}]\n\nfunction part\nexport default function Recipes() {\nreturn (\n<div>\n {recipes.map((recipe) => {\n return <div key={recipe.id}>\n <h1>{recipe.title}</h1>\n <img src={recipe.image} alt=\"recipe image\" />\n {recipe.dishTypes.map((type, index) => {\n return <span key={index}>{type}</span>\n })}\n </div>\n })}\n</div>\n)}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"reactjs"
] |
stackoverflow_0074666064_reactjs.txt
|
Q:
What is meant by ‘define model class’ in pytorch documentation?
On the pytorch documentation page about saving and loading models, it says that when loading a saved model, # Model class must be defined somewhere https://pytorch.org/tutorials/beginner/saving_loading_models.html#:~:text=%23%20Model%20class%20must%20be%20defined%20somewhere
Maybe my question is silly, but what does class in this context refer to? Thanks in advance.
Earlier on the page, the 'loading-of-a-model' process is described as follows:
Load:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
A:
You need to define the model class as, for example, explained here. Re-using the example from the linked website as a random example, a class for TheModelClass could be defined as follows:
class TheModelClass(torch.nn.Module):
def __init__(self):
super(TheModelClass, self).__init__()
self.linear1 = torch.nn.Linear(100, 200)
self.activation = torch.nn.ReLU()
self.linear2 = torch.nn.Linear(200, 10)
self.softmax = torch.nn.Softmax()
def forward(self, x):
x = self.linear1(x)
x = self.activation(x)
x = self.linear2(x)
x = self.softmax(x)
return x
A:
The class in that context refers to the class of the model you’re trying to load with torch.load. The class must be defined because that function will construct the model object using the model class name stored in PATH. Thus, the construction will fail if the class with that name is not defined somewhere before torch.load is executed. This process is similar to how pickle loads a .pkl file (in fact I think torch.load uses pickle by default).
Note that the model class definition is not needed if you save and load the model’s state dict (the recommended way) because state dicts are Python dicts with strings as keys and torch.Tensor as values. Dicts and strings are built-ins so they’re always defined, and torch.Tensor is always defined whenever you import torch to use torch.load.
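For completeness, a minimal sketch of the recommended state-dict round trip (assuming TheModelClass is defined as in the first answer and PATH points to a checkpoint file):
import torch

model = TheModelClass()
# Save only the learned parameters (the recommended way)
torch.save(model.state_dict(), PATH)

# Later, possibly in another script: the class must be defined/importable here
model = TheModelClass()
model.load_state_dict(torch.load(PATH))
model.eval()  # switch dropout/batch-norm layers to inference mode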
|
What is meant by ‘define model class’ in pytorch documentation?
|
On the pytorch documentation page about saving and loading models, it says that when loading a saved model, # Model class must be defined somewhere https://pytorch.org/tutorials/beginner/saving_loading_models.html#:~:text=%23%20Model%20class%20must%20be%20defined%20somewhere
Maybe my question is silly, but what does class in this context refer to? Thanks in advance.
Earlier on the page, the 'loading-of-a-model' process is described as follows:
Load:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
|
[
"You need to define the model class as, for example, explained here. Re-using the example from the linked website as a random example, a class for TheModelClass could be defined as follows:\nclass TheModelClass(torch.nn.Module):\n\n def __init__(self):\n super(TheModelClass, self).__init__()\n\n self.linear1 = torch.nn.Linear(100, 200)\n self.activation = torch.nn.ReLU()\n self.linear2 = torch.nn.Linear(200, 10)\n self.softmax = torch.nn.Softmax()\n\n def forward(self, x):\n x = self.linear1(x)\n x = self.activation(x)\n x = self.linear2(x)\n x = self.softmax(x)\n return x\n\n",
"The class in that context refers to the class of the model you’re trying to load with torch.load. The class must be defined because that function will construct the model object using the model class name stored in PATH. Thus, the construction will fail if the class with that name is not defined somewhere before torch.load is executed. This process is similar to how pickle loads a .pkl file (in fact I think torch.load uses pickle by default).\nNote that the model class definition is not needed if you save and load the model’s state dict (the recommended way) because state dicts are Python dicts with strings as keys and torch.Tensor as values. Dicts and strings are built-ins so they’re always defined, and torch.Tensor is always defined whenever you import torch to use torch.load.\n"
] |
[
0,
0
] |
[] |
[] |
[
"nlp",
"python",
"pytorch"
] |
stackoverflow_0073339264_nlp_python_pytorch.txt
|
Q:
Convert android vector drawable XML to SVG
How can I convert my android vector drawable to SVG?
Please don't mark this as a duplicate question. I have already tried those methods, but they didn't work. I tried the https://shapeshifter.design/ website; it is actually good, but it gives me wrong input and output.
Suppose I have a vector
<vector android:height="80dp" android:viewportHeight="512"
android:viewportWidth="512" android:width="80dp" xmlns:android="http://schemas.android.com/apk/res/android">
<path android:fillColor="@color/colorLightYellow" android:pathData="M150.561,144.549c-1.315,0 -2.647,-0.341 -3.86,-1.06L52.164,87.532c-3.609,-2.136 -4.803,-6.793 -2.667,-10.402c2.137,-3.608 6.793,-4.802 10.402,-2.667l94.537,55.957c3.609,2.136 4.803,6.793 2.667,10.402C155.685,143.217 153.156,144.549 150.561,144.549z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M150.568,144.548H47.842c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h102.727c4.194,0 7.593,3.399 7.593,7.593S154.762,144.548 150.568,144.548z"/>
<path android:fillColor="@color/colorLightOrange" android:pathData="M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077H118.183c-19.059,0 -30.275,21.406 -19.426,37.077l51.811,74.838L15.836,335.833C5.516,351.066 0,369.043 0,387.443l0,0c0,50.82 41.198,92.018 92.017,92.018h174.495c50.82,0 92.017,-41.198 92.017,-92.018l0,0C358.529,369.043 353.013,351.066 342.693,335.833z"/>
<path android:fillColor="@color/colorLightOrange" android:pathData="M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077h-22.144c17.303,0 27.486,21.406 17.637,37.077l-47.038,74.838L311.12,335.833c9.369,15.233 14.377,33.211 14.377,51.61c0,50.82 -37.402,92.018 -83.539,92.018h24.555c50.82,0 92.017,-41.198 92.017,-92.018C358.529,369.043 353.013,351.066 342.693,335.833z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M214.129,144.548h-71.883c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h71.883c4.194,0 7.593,3.399 7.593,7.593S218.323,144.548 214.129,144.548z"/>
<path android:fillColor="#FCAB29" android:pathData="M393.083,249.127c-65.571,0 -118.917,53.346 -118.917,118.917c0,65.57 53.346,118.916 118.917,118.916S512,433.614 512,368.044C512,302.473 458.654,249.127 393.083,249.127z"/>
<path android:fillColor="#DD8D19" android:pathData="M458.128,268.543c22.753,21.675 36.953,52.25 36.953,86.081c0,65.57 -53.346,118.916 -118.917,118.916c-23.991,0 -46.341,-7.148 -65.045,-19.417c21.347,20.336 50.223,32.836 81.964,32.836C458.654,486.96 512,433.614 512,368.044C512,326.464 490.544,289.807 458.128,268.543z"/>
<path android:fillColor="#F2DF33" android:pathData="M393.08,368.04m-80.17,0a80.17,80.17 0,1 1,160.34 0a80.17,80.17 0,1 1,-160.34 0"/>
<path android:fillColor="#FCAB29" android:pathData="M403.037,360.544h-19.908c-5.535,0 -10.038,-4.503 -10.038,-10.038s4.503,-10.038 10.038,-10.038h29.192c4.142,0 7.5,-3.357 7.5,-7.5s-3.358,-7.5 -7.5,-7.5h-11.738v-7.827c0,-4.143 -3.358,-7.5 -7.5,-7.5s-7.5,3.357 -7.5,7.5v7.827h-2.454c-13.806,0 -25.038,11.232 -25.038,25.038s11.232,25.038 25.038,25.038h19.908c5.535,0 10.038,4.503 10.038,10.037c0,5.535 -4.503,10.038 -10.038,10.038h-29.192c-4.142,0 -7.5,3.357 -7.5,7.5s3.358,7.5 7.5,7.5h11.739v7.827c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-7.827h2.454c13.806,0 25.038,-11.232 25.038,-25.038S416.843,360.544 403.037,360.544z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M368.669,144.262l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 7.306,3.759 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C372.429,151.568 371.904,146.85 368.669,144.262z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M462.959,104.039l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 7.306,3.758 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C466.718,111.346 466.193,106.627 462.959,104.039z"/>
</vector>
then this website shows me this:
but my actual vector is this:
The website doesn't show the knapsack or those two arrows, and after exporting it shows only the coins.
I need to turn this vector into a PNG; that is why I am trying to convert it to SVG first and then to PNG. I tried a few more websites, but those show as deprecated.
A:
I have converted it without any program. Here is the SVG for you:
<svg xmlns="http://www.w3.org/2000/svg" width="80" height="80" viewBox="0 0 512 512">
<path fill="#fcab29" d="M150.561,144.549c-1.315,0 -2.647,-0.341 -3.86,-1.06L52.164,87.532c-3.609,-2.136 -4.803,-6.793 -2.667,-10.402c2.137,-3.608 6.793,-4.802 10.402,-2.667l94.537,55.957c3.609,2.136 4.803,6.793 2.667,10.402C155.685,143.217 153.156,144.549 150.561,144.549z"/>
<path fill="#fcab29" d="M150.568,144.548H47.842c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h102.727c4.194,0 7.593,3.399 7.593,7.593S154.762,144.548 150.568,144.548z"/>
<path fill="#ed664c" d="M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077H118.183c-19.059,0 -30.275,21.406 -19.426,37.077l51.811,74.838L15.836,335.833C5.516,351.066 0,369.043 0,387.443l0,0c0,50.82 41.198,92.018 92.017,92.018h174.495c50.82,0 92.017,-41.198 92.017,-92.018l0,0C358.529,369.043 353.013,351.066 342.693,335.833z"/>
<path fill="#ed664c" d="M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077h-22.144c17.303,0 27.486,21.406 17.637,37.077l-47.038,74.838L311.12,335.833c9.369,15.233 14.377,33.211 14.377,51.61c0,50.82 -37.402,92.018 -83.539,92.018h24.555c50.82,0 92.017,-41.198 92.017,-92.018C358.529,369.043 353.013,351.066 342.693,335.833z"/>
<path fill="#fcab29" d="M214.129,144.548h-71.883c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h71.883c4.194,0 7.593,3.399 7.593,7.593S218.323,144.548 214.129,144.548z"/>
<path fill="#FCAB29" d="M393.083,249.127c-65.571,0 -118.917,53.346 -118.917,118.917c0,65.57 53.346,118.916 118.917,118.916S512,433.614 512,368.044C512,302.473 458.654,249.127 393.083,249.127z"/>
<path fill="#DD8D19" d="M458.128,268.543c22.753,21.675 36.953,52.25 36.953,86.081c0,65.57 -53.346,118.916 -118.917,118.916c-23.991,0 -46.341,-7.148 -65.045,-19.417c21.347,20.336 50.223,32.836 81.964,32.836C458.654,486.96 512,433.614 512,368.044C512,326.464 490.544,289.807 458.128,268.543z"/>
<path fill="#F2DF33" d="M393.08,368.04m-80.17,0a80.17,80.17 0,1 1,160.34 0a80.17,80.17 0,1 1,-160.34 0"/>
<path fill="#FCAB29" d="M403.037,360.544h-19.908c-5.535,0 -10.038,-4.503 -10.038,-10.038s4.503,-10.038 10.038,-10.038h29.192c4.142,0 7.5,-3.357 7.5,-7.5s-3.358,-7.5 -7.5,-7.5h-11.738v-7.827c0,-4.143 -3.358,-7.5 -7.5,-7.5s-7.5,3.357 -7.5,7.5v7.827h-2.454c-13.806,0 -25.038,11.232 -25.038,25.038s11.232,25.038 25.038,25.038h19.908c5.535,0 10.038,4.503 10.038,10.037c0,5.535 -4.503,10.038 -10.038,10.038h-29.192c-4.142,0 -7.5,3.357 -7.5,7.5s3.358,7.5 7.5,7.5h11.739v7.827c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-7.827h2.454c13.806,0 25.038,-11.232 25.038,-25.038S416.843,360.544 403.037,360.544z"/>
<path fill="#fcab29" d="M368.669,144.262l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 7.306,3.759 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C372.429,151.568 371.904,146.85 368.669,144.262z"/>
<path fill="#fcab29" d="M462.959,104.039l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 7.306,3.758 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C466.718,111.346 466.193,106.627 462.959,104.039z"/>
</svg>
I can also say why you had bad luck on the converting site:
https://shapeshifter.design
It is because your code contains color values that the converter cannot resolve, like @color/colorLightYellow. If you change android:fillColor="@color/colorLightYellow" to android:fillColor="#fcab29" and android:fillColor="@color/colorLightOrange" to android:fillColor="#ed664c" everywhere in your code, then you will be able to convert your Android vector drawable image into SVG on this site without any mistakes.
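If you prefer to keep using a converter, you can inline the color references first with a small script; here is a sketch in Python, with drawable.xml as a hypothetical file name:
# Inline the Android color resources so converters can resolve the fills
with open("drawable.xml") as f:   # hypothetical input file
    xml = f.read()
xml = xml.replace("@color/colorLightYellow", "#fcab29")
xml = xml.replace("@color/colorLightOrange", "#ed664c")
with open("drawable_inlined.xml", "w") as f:
    f.write(xml)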
A:
You can use the https://shapeshifter.design/
Import the vector and use the export button.
A:
Someone created this https://vd.floo.app/ - very simple and easy to use. But I think the problem is caused by the use of the Android resource link @color/colorLightYellow, because none of the converters know what color it actually is.
|
Convert android vector drawable XML to SVG
|
How can I convert my android vector drawable to SVG?
Please don't mark this as a duplicate question. I have already tried those methods, but they didn't work. I tried the https://shapeshifter.design/ website; it is actually good, but it gives me wrong input and output.
Suppose I have a vector
<vector android:height="80dp" android:viewportHeight="512"
android:viewportWidth="512" android:width="80dp" xmlns:android="http://schemas.android.com/apk/res/android">
<path android:fillColor="@color/colorLightYellow" android:pathData="M150.561,144.549c-1.315,0 -2.647,-0.341 -3.86,-1.06L52.164,87.532c-3.609,-2.136 -4.803,-6.793 -2.667,-10.402c2.137,-3.608 6.793,-4.802 10.402,-2.667l94.537,55.957c3.609,2.136 4.803,6.793 2.667,10.402C155.685,143.217 153.156,144.549 150.561,144.549z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M150.568,144.548H47.842c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h102.727c4.194,0 7.593,3.399 7.593,7.593S154.762,144.548 150.568,144.548z"/>
<path android:fillColor="@color/colorLightOrange" android:pathData="M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077H118.183c-19.059,0 -30.275,21.406 -19.426,37.077l51.811,74.838L15.836,335.833C5.516,351.066 0,369.043 0,387.443l0,0c0,50.82 41.198,92.018 92.017,92.018h174.495c50.82,0 92.017,-41.198 92.017,-92.018l0,0C358.529,369.043 353.013,351.066 342.693,335.833z"/>
<path android:fillColor="@color/colorLightOrange" android:pathData="M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077h-22.144c17.303,0 27.486,21.406 17.637,37.077l-47.038,74.838L311.12,335.833c9.369,15.233 14.377,33.211 14.377,51.61c0,50.82 -37.402,92.018 -83.539,92.018h24.555c50.82,0 92.017,-41.198 92.017,-92.018C358.529,369.043 353.013,351.066 342.693,335.833z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M214.129,144.548h-71.883c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h71.883c4.194,0 7.593,3.399 7.593,7.593S218.323,144.548 214.129,144.548z"/>
<path android:fillColor="#FCAB29" android:pathData="M393.083,249.127c-65.571,0 -118.917,53.346 -118.917,118.917c0,65.57 53.346,118.916 118.917,118.916S512,433.614 512,368.044C512,302.473 458.654,249.127 393.083,249.127z"/>
<path android:fillColor="#DD8D19" android:pathData="M458.128,268.543c22.753,21.675 36.953,52.25 36.953,86.081c0,65.57 -53.346,118.916 -118.917,118.916c-23.991,0 -46.341,-7.148 -65.045,-19.417c21.347,20.336 50.223,32.836 81.964,32.836C458.654,486.96 512,433.614 512,368.044C512,326.464 490.544,289.807 458.128,268.543z"/>
<path android:fillColor="#F2DF33" android:pathData="M393.08,368.04m-80.17,0a80.17,80.17 0,1 1,160.34 0a80.17,80.17 0,1 1,-160.34 0"/>
<path android:fillColor="#FCAB29" android:pathData="M403.037,360.544h-19.908c-5.535,0 -10.038,-4.503 -10.038,-10.038s4.503,-10.038 10.038,-10.038h29.192c4.142,0 7.5,-3.357 7.5,-7.5s-3.358,-7.5 -7.5,-7.5h-11.738v-7.827c0,-4.143 -3.358,-7.5 -7.5,-7.5s-7.5,3.357 -7.5,7.5v7.827h-2.454c-13.806,0 -25.038,11.232 -25.038,25.038s11.232,25.038 25.038,25.038h19.908c5.535,0 10.038,4.503 10.038,10.037c0,5.535 -4.503,10.038 -10.038,10.038h-29.192c-4.142,0 -7.5,3.357 -7.5,7.5s3.358,7.5 7.5,7.5h11.739v7.827c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-7.827h2.454c13.806,0 25.038,-11.232 25.038,-25.038S416.843,360.544 403.037,360.544z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M368.669,144.262l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 7.306,3.759 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C372.429,151.568 371.904,146.85 368.669,144.262z"/>
<path android:fillColor="@color/colorLightYellow" android:pathData="M462.959,104.039l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 7.306,3.758 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C466.718,111.346 466.193,106.627 462.959,104.039z"/>
</vector>
then this website shows me this:
but my actual vector is this:
The website doesn't show the knapsack or those two arrows, and after exporting it shows only the coins.
I need to turn this vector into a PNG; that is why I am trying to convert it to SVG first and then to PNG. I tried a few more websites, but those show as deprecated.
|
[
"I have converted it without of any programm. Here is the SVG for you:\n\n\n<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"80\" height=\"80\" viewBox=\"0 0 512 512\">\n<path fill=\"#fcab29\" d=\"M150.561,144.549c-1.315,0 -2.647,-0.341 -3.86,-1.06L52.164,87.532c-3.609,-2.136 -4.803,-6.793 -2.667,-10.402c2.137,-3.608 6.793,-4.802 10.402,-2.667l94.537,55.957c3.609,2.136 4.803,6.793 2.667,10.402C155.685,143.217 153.156,144.549 150.561,144.549z\"/>\n<path fill=\"#fcab29\" d=\"M150.568,144.548H47.842c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h102.727c4.194,0 7.593,3.399 7.593,7.593S154.762,144.548 150.568,144.548z\"/>\n<path fill=\"#ed664c\" d=\"M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077H118.183c-19.059,0 -30.275,21.406 -19.426,37.077l51.811,74.838L15.836,335.833C5.516,351.066 0,369.043 0,387.443l0,0c0,50.82 41.198,92.018 92.017,92.018h174.495c50.82,0 92.017,-41.198 92.017,-92.018l0,0C358.529,369.043 353.013,351.066 342.693,335.833z\"/>\n<path fill=\"#ed664c\" d=\"M342.693,335.833L207.961,136.955l51.811,-74.838c10.849,-15.671 -0.367,-37.077 -19.426,-37.077h-22.144c17.303,0 27.486,21.406 17.637,37.077l-47.038,74.838L311.12,335.833c9.369,15.233 14.377,33.211 14.377,51.61c0,50.82 -37.402,92.018 -83.539,92.018h24.555c50.82,0 92.017,-41.198 92.017,-92.018C358.529,369.043 353.013,351.066 342.693,335.833z\"/>\n<path fill=\"#fcab29\" d=\"M214.129,144.548h-71.883c-4.194,0 -7.593,-3.399 -7.593,-7.593s3.4,-7.593 7.593,-7.593h71.883c4.194,0 7.593,3.399 7.593,7.593S218.323,144.548 214.129,144.548z\"/>\n<path fill=\"#FCAB29\" d=\"M393.083,249.127c-65.571,0 -118.917,53.346 -118.917,118.917c0,65.57 53.346,118.916 118.917,118.916S512,433.614 512,368.044C512,302.473 458.654,249.127 393.083,249.127z\"/>\n<path fill=\"#DD8D19\" d=\"M458.128,268.543c22.753,21.675 36.953,52.25 36.953,86.081c0,65.57 -53.346,118.916 -118.917,118.916c-23.991,0 -46.341,-7.148 -65.045,-19.417c21.347,20.336 50.223,32.836 81.964,32.836C458.654,486.96 512,433.614 512,368.044C512,326.464 490.544,289.807 458.128,268.543z\"/>\n<path fill=\"#F2DF33\" d=\"M393.08,368.04m-80.17,0a80.17,80.17 0,1 1,160.34 0a80.17,80.17 0,1 1,-160.34 0\"/>\n<path fill=\"#FCAB29\" d=\"M403.037,360.544h-19.908c-5.535,0 -10.038,-4.503 -10.038,-10.038s4.503,-10.038 10.038,-10.038h29.192c4.142,0 7.5,-3.357 7.5,-7.5s-3.358,-7.5 -7.5,-7.5h-11.738v-7.827c0,-4.143 -3.358,-7.5 -7.5,-7.5s-7.5,3.357 -7.5,7.5v7.827h-2.454c-13.806,0 -25.038,11.232 -25.038,25.038s11.232,25.038 25.038,25.038h19.908c5.535,0 10.038,4.503 10.038,10.037c0,5.535 -4.503,10.038 -10.038,10.038h-29.192c-4.142,0 -7.5,3.357 -7.5,7.5s3.358,7.5 7.5,7.5h11.739v7.827c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-7.827h2.454c13.806,0 25.038,-11.232 25.038,-25.038S416.843,360.544 403.037,360.544z\"/>\n<path fill=\"#fcab29\" d=\"M368.669,144.262l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 
7.306,3.759 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C372.429,151.568 371.904,146.85 368.669,144.262z\"/>\n<path fill=\"#fcab29\" d=\"M462.959,104.039l-18.046,-14.437c-0.019,-0.016 -0.042,-0.025 -0.061,-0.041c-0.315,-0.248 -0.648,-0.473 -1.001,-0.668c-0.007,-0.003 -0.013,-0.008 -0.02,-0.012c-0.339,-0.186 -0.696,-0.339 -1.064,-0.472c-0.05,-0.018 -0.1,-0.038 -0.15,-0.055c-0.347,-0.116 -0.704,-0.207 -1.071,-0.272c-0.065,-0.011 -0.129,-0.02 -0.193,-0.029c-0.368,-0.056 -0.741,-0.093 -1.124,-0.093s-0.756,0.038 -1.124,0.093c-0.065,0.01 -0.129,0.018 -0.193,0.029c-0.367,0.065 -0.725,0.156 -1.071,0.272c-0.051,0.017 -0.1,0.037 -0.15,0.055c-0.368,0.132 -0.725,0.286 -1.064,0.472c-0.007,0.004 -0.013,0.009 -0.02,0.012c-0.353,0.195 -0.686,0.421 -1.001,0.668c-0.02,0.016 -0.042,0.025 -0.061,0.041l-18.046,14.437c-3.234,2.588 -3.759,7.307 -1.171,10.542c2.587,3.233 7.306,3.758 10.542,1.171l5.861,-4.688v68.76c0,4.143 3.358,7.5 7.5,7.5s7.5,-3.357 7.5,-7.5v-68.76l5.861,4.688c1.383,1.106 3.037,1.644 4.68,1.644c2.2,0 4.38,-0.963 5.861,-2.814C466.718,111.346 466.193,106.627 462.959,104.039z\"/>\n</svg>\n\n\n\nI can say also why you had bad luck on the converting site:\nhttps://shapeshifter.design\nIt is because you have in your code not convertable color values like @color/colorLightYellow. If you change android:fillColor=\"@color/colorLightYellow\" to android:fillColor=\"#fcab29\" and android:fillColor=\"@color/colorLightOrange\" to android:fillColor=\"#ed664c\" overall in your code then you will be able to convert your Android vector drawable image into SVG on this site without any mistakes.\n",
"You can use the https://shapeshifter.design/\nImport the vector and use export button\n\n",
"Someone created this https://vd.floo.app/ - very simple and easy to use, but I think that problem is caused by usage of Android resource link @color/colorLightYellow, bcz none of converters know about what the color it is)\n"
] |
[
26,
12,
0
] |
[] |
[] |
[
"android",
"android_drawable",
"android_vectordrawable",
"png",
"svg"
] |
stackoverflow_0062540139_android_android_drawable_android_vectordrawable_png_svg.txt
|
Q:
Why am I getting this error: ERROR [internal] load metadata
I am a Docker noob and am trying to run the make dev-services script, declared in the skaffold.yml file (I exchanged image and sha names with xxx):
- name: dev-services
build:
tagPolicy:
inputDigest: {}
local:
push: false
useBuildkit: true
artifacts:
- image: gcr.io/xxx/service-base
context: .
- image: gcr.io/xxx/api
context: server/api/
requires:
- image: gcr.io/xxx/service-base
alias: service_base
- image: gcr.io/xxx/media
context: server/media/app
requires:
- image: gcr.io/xxx/service-base
alias: service_base
deploy:
kustomize:
paths:
- ./k8s/local
- ./server/api/k8s/development
- ./server/media/k8s/development
When I run it, I get this error:
Building [gcr.io/xxx/media]...
[+] Building 2.8s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.14 1.2s
=> ERROR [internal] load metadata for gcr.io/xxx/service-base:xxx 2.6s
------
> [internal] load metadata for gcr.io/xxx/service-base:xxx:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests xxx]: 401 Unauthorized
Building [gcr.io/xxx/api]...
Canceled build for gcr.io/xxx/api
exit status 1. Docker build ran into internal error. Please retry.
If this keeps happening, please open an issue..
make: *** [dev-services] Error 1
Anyone know what might be the problem here?
Might it be the google container registry?
I'm using Minikube. Is there a Minikube - or Docker - registry that I could try? If so, what would I need to change in the skaffold.yaml file?
Thanks a lot in advance :)
A:
The error:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests xxx]: 401 Unauthorized
indicates that Docker was unable to get authorization for one of your GCR repositories. Docker will normally get this information from your gcloud settings. There's a couple of reasons why this may fail:
You haven't configured Docker for accessing GCR. See the GCR documentation for how to configure access; a one-line sketch follows this list.
Your account doesn't have permission to access GCR. See the GCR documentation on configuring access control.
Your login details have expired or been revoked. Use gcloud auth login to re-login.
You have multiple accounts, and you're using the wrong account. Try gcloud auth list to see your current accounts. You can use gcloud config set account xxx to set the active account, or set the environment CLOUDSDK_CORE_ACCOUNT to set an account for the duration of a session.
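For the first point, the usual one-liner is shown below; it registers gcloud as a Docker credential helper for gcr.io:
gcloud auth configure-docker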
A:
For anyone else coming here from Windows: in your Docker Desktop settings, uncheck "Use Docker Compose V2". This worked for me. I unchecked it and the build worked; I checked it again to confirm that this was the issue, and the build indeed failed until I unchecked it again.
A:
Run sudo chown -R [User] $(pwd).
Then run your container with root permission (i.e. with sudo).
For example:
sudo docker-compose build --no-cache ContainerName
|
Why am I getting this error: ERROR [internal] load metadata
|
I am a Docker noob and am trying to run the make dev-services script, declared in the skaffold.yml file (I exchanged image and sha names with xxx):
- name: dev-services
build:
tagPolicy:
inputDigest: {}
local:
push: false
useBuildkit: true
artifacts:
- image: gcr.io/xxx/service-base
context: .
- image: gcr.io/xxx/api
context: server/api/
requires:
- image: gcr.io/xxx/service-base
alias: service_base
- image: gcr.io/xxx/media
context: server/media/app
requires:
- image: gcr.io/xxx/service-base
alias: service_base
deploy:
kustomize:
paths:
- ./k8s/local
- ./server/api/k8s/development
- ./server/media/k8s/development
When I run it, I get this error:
Building [gcr.io/xxx/media]...
[+] Building 2.8s (4/4) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 37B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.14 1.2s
=> ERROR [internal] load metadata for gcr.io/xxx/service-base:xxx 2.6s
------
> [internal] load metadata for gcr.io/xxx/service-base:xxx:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests xxx]: 401 Unauthorized
Building [gcr.io/xxx/api]...
Canceled build for gcr.io/xxx/api
exit status 1. Docker build ran into internal error. Please retry.
If this keeps happening, please open an issue..
make: *** [dev-services] Error 1
Anyone know what might be the problem here?
Might it be the google container registry?
I'm using Minikube. Is there a Minikube - or Docker - registry that I could try? If so, what would I need to change in the skaffold.yaml file?
Thanks a lot in advance :)
|
[
"The error:\nfailed to solve with frontend dockerfile.v0: failed to create LLB definition: unexpected status code [manifests xxx]: 401 Unauthorized\nindicates that Docker was unable to get authorization for one of your GCR repositories. Docker will normally get this information from your gcloud settings. There's a couple of reasons why this may fail:\n\nYou haven't configured Docker for accessing GCR. See the GCR documentation for how to configure access.\nYour account doesn't have permission to access GCR. See the GCR documentation on configuring access control.\nYour login details have expired or been revoked. Use gcloud auth login to re-login.\nYou have multiple accounts, and you're using the wrong account. Try gcloud auth list to see your current accounts. You can use gcloud config set account xxx to set the active account, or set the environment CLOUDSDK_CORE_ACCOUNT to set an account for the duration of a session.\n\n",
"for anyone else coming here from windows OS in your docker desktop settings, uncheck the Use Docker Compose V2 this worked for me, i uncheck it works, i checked to try again and make sure that was the issue and yes it was the issue didn't work , until i uncheck again\n",
"run sudo chown -R [User] $(pwd)\nThen run your container with root permission (i.e: sudo)\nEX:\nsudo docker-compose build --no-cache ContainerName\n\n"
] |
[
2,
2,
0
] |
[
"I was also facing the same error, the reason was a typo in the base image, you can check if you have any typo in the dockerfile\n",
"check your internet connection, I was having the same issues got resolved by checking the internet connectivity\n"
] |
[
-1,
-2
] |
[
"docker",
"skaffold"
] |
stackoverflow_0070288986_docker_skaffold.txt
|
Q:
Cannot install python 3.10.0 on m1 Apple silicon - ld: symbol(s) not found for architecture x86_64
I am trying to get python 3.10.0 installed on my Apple M1 Silicon.
Installing via the asdf version manager.
3.7.9 and 3.9.4 work without any issues but installing 3.10.0 causes the following error:
Last 10 log lines:
"_libintl_textdomain", referenced from:
__locale_textdomain in libpython3.10.a(_localemodule.o)
__locale_textdomain in libpython3.10.a(_localemodule.o)
ld: symbol(s) not found for architecture x86_64
ld: symbol(s) not found for architecture x86_64
clang: clangerror: linker command failed with exit code 1 (use -v to see invocation)
: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Programs/_testembed] Error 1
make: *** Waiting for unfinished jobs....
make: *** [python.exe] Error 1
cmake version 3.22.0
Apple clang version 13.0.0 (clang-1300.0.29.3)
Target: x86_64-apple-darwin21.1.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
What I tried:
export ARCHFLAGS="-arch arm64"
and all the suggestions from
Can't install Python 3.10.0 with pyenv on MacOS
Thank you so much in advance, it's driving me nuts :-)
A:
First install gettext:
brew install gettext
Then export flags:
export LDFLAGS="-L/opt/homebrew/lib"; export CPPFLAGS="-I/opt/homebrew/include"
Finally, install Python:
pyenv install 3.10.0
It worked for me. I found it here https://github.com/pyenv/pyenv/issues/1877#issuecomment-962514298
A:
This worked for me:
arch -x86_64 pyenv install 3.10.4
|
Cannot install python 3.10.0 on m1 Apple silicon - ld: symbol(s) not found for architecture x86_64
|
I am trying to get python 3.10.0 installed on my Apple M1 Silicon.
Installing via the asdf version manager.
3.7.9 and 3.9.4 work without any issues but installing 3.10.0 causes the following error:
Last 10 log lines:
"_libintl_textdomain", referenced from:
__locale_textdomain in libpython3.10.a(_localemodule.o)
__locale_textdomain in libpython3.10.a(_localemodule.o)
ld: symbol(s) not found for architecture x86_64
ld: symbol(s) not found for architecture x86_64
clang: clangerror: linker command failed with exit code 1 (use -v to see invocation)
: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Programs/_testembed] Error 1
make: *** Waiting for unfinished jobs....
make: *** [python.exe] Error 1
cmake version 3.22.0
Apple clang version 13.0.0 (clang-1300.0.29.3)
Target: x86_64-apple-darwin21.1.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
What I tried:
export ARCHFLAGS="-arch arm64"
and all the suggestions from
Can't install Python 3.10.0 with pyenv on MacOS
Thank you so much in advance, it's driving me nuts :-)
|
[
"\nFirst install gettext:\n\nbrew install gettext\n\n\nThen export flags:\n\nexport LDFLAGS=\"-L/opt/homebrew/lib\"; export CPPFLAGS=\"-I/opt/homebrew/include\"\n\n\nFinaly install python:\n\npyenv install 3.10.0\n\nIt worked for me. I found it here https://github.com/pyenv/pyenv/issues/1877#issuecomment-962514298\n",
"This worked for me:\narch -x86_64 pyenv install 3.10.4\n\n"
] |
[
22,
0
] |
[] |
[] |
[
"apple_m1",
"arm",
"python_3.x"
] |
stackoverflow_0070152525_apple_m1_arm_python_3.x.txt
|
Q:
Concatenate columns of Pandas dataframe into a new column of lists with only non-zero values
I have a Pandas dataframe that looks like:
mwe5a = pd.DataFrame({'a': [0.1, 0.0],
'b': [0.0, 0.2],
'c': [0.3, 0.0]
}
)
mwe5a
a b c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
My desired output is:
mwe5b
output_column
[0.1, 0.3]
[0.2]
How do I do that?
After that, I'd like to sort the order of a column in another Pandas dataframe based on those values, from largest value to least.
mwe7a = pd.DataFrame({'items': [ ['item1', 'item2'],
['item3']
]})
['item1', 'item2']
['item3']
which should then look like
mwe7b
['item2', 'item1']
['item3']
UPDATE:
I updated the MWE dataframes to be less confusing. So to review, I can get the following to work:
token_uniqueness_sparse = pd.DataFrame({'token_a': [0.1, 0.0],
'token_b': [0.0, 0.2],
'token c': [0.3, 0.0]
}
)
token_uniqueness_sparse
token_a token_b token c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
sf_fake = pd.DataFrame({'items': [ ['token_a', 'token_c'],
['token_b']],
'rcol': [1,2]
})
sf_fake
items rcol
0 [token_a, token_c] 1
1 [token_b] 2
token_uniqueness_dense = (token_uniqueness_sparse
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
token_uniqueness_dense
output_column
0 [0.1, 0.3]
1 [0.2]
(sf_fake.apply(lambda x: sorted(x['items'], key=lambda y: token_uniqueness_dense.loc[x.name,
'output_column'][x['items'].index(y)], reverse=True), axis=1))
So I know the solution works. But when I attempt to apply it to my actual dataframes and not the toy ones above, I get the following error:
Input In [76], in <lambda>(x)
----> 1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
Input In [76], in <lambda>.<locals>.<lambda>(y)
1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
----> 2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
IndexError: list index out of range
Any ideas what to check for?
A:
A possible solution:
mwe5b = (mwe5a
.apply(lambda x: list(x[x.ne(0)].sort_values(ascending=False)), axis=1)
.to_frame('output_column'))
Output:
output_column
0 [0.3, 0.1]
1 [0.2]
EDIT
To accomplish the goal the OP wants with mwe7a, I offer the following solution:
(mwe7a.apply(lambda x: sorted(x['items'], key=lambda y: mwe5b.loc[x.name,
'output_column'][x['items'].index(y)], reverse=True), axis=1))
To get mwe5b without sorting, as needed for getting mwe7a:
mwe5b = (mwe5a
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
Output:
0 [item2, item1]
1 [item3]
|
Concatenate columns of Pandas dataframe into a new column of lists with only non-zero values
|
I have a Pandas dataframe that looks like:
mwe5a = pd.DataFrame({'a': [0.1, 0.0],
'b': [0.0, 0.2],
'c': [0.3, 0.0]
}
)
mwe5a
a b c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
My desired output is:
mwe5b
output_column
[0.1, 0.3]
[0.2]
How do I do that?
After that, I'd like to sort the order of a column in another Pandas dataframe based on those values, from largest value to least.
mwe7a = pd.DataFrame({'items': [ ['item1', 'item2'],
['item3']
]})
['item1', 'item2']
['item3']
which should then look like
mwe7b
['item2', 'item1']
['item3']
UPDATE:
I updated the MWE dataframes to be less confusing. So to review, I can get the following to work:
token_uniqueness_sparse = pd.DataFrame({'token_a': [0.1, 0.0],
'token_b': [0.0, 0.2],
'token c': [0.3, 0.0]
}
)
token_uniqueness_sparse
token_a token_b token c
0 0.1 0.0 0.3
1 0.0 0.2 0.0
sf_fake = pd.DataFrame({'items': [ ['token_a', 'token_c'],
['token_b']],
'rcol': [1,2]
})
sf_fake
items rcol
0 [token_a, token_c] 1
1 [token_b] 2
token_uniqueness_dense = (token_uniqueness_sparse
.apply(lambda x: list(x[x.ne(0)]), axis=1)
.to_frame('output_column'))
token_uniqueness_dense
output_column
0 [0.1, 0.3]
1 [0.2]
(sf_fake.apply(lambda x: sorted(x['items'], key=lambda y: token_uniqueness_dense.loc[x.name,
'output_column'][x['items'].index(y)], reverse=True), axis=1))
So I know the solution works. But when I attempt to apply it to my actual dataframes and not the toy ones above, I get the following error:
Input In [76], in <lambda>(x)
----> 1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
Input In [76], in <lambda>.<locals>.<lambda>(y)
1 (forbes_df.apply(lambda x: sorted(x['tokenized_company_name'],
----> 2 key=lambda y: tfidf_df_dense.loc[x.name,
3 'output_column'][x['tokenized_company_name'].index(y)], reverse=True), axis=1))
IndexError: list index out of range
Any ideas what to check for?
|
[
"A possible solution:\nmwe5b = (mwe5a\n .apply(lambda x: list(x[x.ne(0)].sort_values(ascending=False)), axis=1)\n .to_frame('output_column'))\n\nOutput:\n output_column\n0 [0.3, 0.1]\n1 [0.2]\n\nEDIT\nTo accomplish the goal the OP wants with mwe7a, I offer the following solution:\n(mwe7a.apply(lambda x: sorted(x['items'], key=lambda y: mwe5b.loc[x.name,\n 'output_column'][x['items'].index(y)], reverse=True), axis=1))\n\nTo get mwe5b without sorting, as needed for getting mwe7a:\nmwe5b = (mwe5a\n .apply(lambda x: list(x[x.ne(0)]), axis=1)\n .to_frame('output_column'))\n\nOutput:\n0 [item2, item1]\n1 [item3]\n\n"
] |
[
2
] |
[] |
[] |
[
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074666489_pandas_python_python_3.x.txt
|
Q:
Does Cloud Firestore save strings with newline \n characters (multiline)?
Apparently Cloud Firestore console does not display newline characters inside strings. Is there a way to inspect them?
This string saved is actually:
QUESTION
Can I be sure that the newlines are there, even though they're not visible on the Firestore Console?
A:
Strings are stored unmodified, but various parts of the Firestore console show the newline character in different ways. Also see my previous answer on Firebase Firestore new line command and Doug's answer here: New Line Command (\n) Not Working With Firebase Firestore Database Strings.
Since the behavior is confusing to you, please file a bug report. But rest assured: your newline characters are stored and read correctly.
A:
The method I have been using is to save the text in URL encoded format and decode it in your app or website. This method is good since URL encoding converts newlines, spaces and tabs to characters which are safe for transmitting over Internet.
A:
Replace loaded string '\\n' => '\n' in program and you can add '\n' in the database
text.replaceAll('\\n', '\n');
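For example, in a Flutter app this could look like the sketch below (assuming cloud_firestore is set up; 'notes', 'docId' and 'body' are hypothetical names):
import 'package:cloud_firestore/cloud_firestore.dart';

Future<String> loadBody() async {
  final snap = await FirebaseFirestore.instance
      .collection('notes')
      .doc('docId')
      .get();
  final raw = (snap.data()?['body'] as String?) ?? '';
  // turn literal "\n" sequences (as typed in the console) into real newlines
  return raw.replaceAll(r'\n', '\n');
}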
|
Does Cloud Firestore save strings with newline \n characters (multiline)?
|
Apparently Cloud Firestore console does not display newline characters inside strings. Is there a way to inspect them?
This string saved is actually:
QUESTION
Can I be sure that the newlines are there, even though they're not visible on the Firestore Console?
|
[
"Strings are stored unmodified, but various parts of the Firestore console show the newline character in different ways. Also see my previous answer on Firebase Firestore new line command and Doug's answer here: New Line Command (\\n) Not Working With Firebase Firestore Database Strings.\nSince the behavior is confusing to you, please file a bug report. But rest assured: your newline characters are stored and read correctly.\n",
"The method I have been using is to save the text in URL encoded format and decode it in your app or website. This method is good since URL encoding converts newlines, spaces and tabs to characters which are safe for transmitting over Internet.\n",
"Replace loaded string '\\\\n' => '\\n' in program and you can add '\\n' in the database\ntext.replaceAll('\\\\n', '\\n');\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"firebase",
"google_cloud_firestore"
] |
stackoverflow_0056681520_firebase_google_cloud_firestore.txt
|
Q:
filter df2 by using date column of df1 ..if exist between two date columns and update df1
df1:
FMS_ID   date         code1
18866    2022-01-01   3103
18866    2022-01-22   3103
18867    2022-10-23   3103
18867    2022-06-04   3103

df2:
FMS_ID   Fdate        Tdate        code2
18866    2021-01-01   2022-01-21   1126
18866    2022-01-22   2022-11-01   8102
18867    2022-05-03   2022-08-01   3101
18867    2022-09-04   2022-11-01   1150
I want to take the code from df2 and update the code in df1, matching on FMS_ID and requiring that date falls between Fdate and Tdate. Many thanks.
output table: df1
FMS_ID   date         code1
18866    2022-01-01   1126
18866    2022-01-22   8102
18867    2022-10-23   1150
18867    2022-06-04   3101
A:
Since each of the FMS IDs you gave as the key appears twice, the merge will create a row for every combination; FMS_ID is not a unique key. I think you are aware of that. If that is acceptable, you can merge the two DataFrames with merge and then check the dates.
final = df.merge(df2,how='left',on=['FMS_ID'])
final=final[(final['date'].ge(final['Fdate']) & (final['date'].le(final['Tdate'])))].drop(['code1','Fdate','Tdate'],axis=1)
#or
final=final[(final['date'] >= final['Fdate']) & (final['date'] <= final['Tdate'])].drop(['code1','Fdate','Tdate'],axis=1)
Output:
FMS_ID date code2
0 18866 2022-01-01 1126
3 18866 2022-01-22 8102
5 18867 2022-10-23 1150
6 18867 2022-06-04 3101
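To actually write the matched codes back into df1, as the question asks, one more step works (a sketch, reusing the names above and assuming the date, Fdate and Tdate columns are datetime-typed):
final = df.merge(df2, how='left', on='FMS_ID')
final = final[final['date'].between(final['Fdate'], final['Tdate'])]
df1_updated = (final[['FMS_ID', 'date', 'code2']]
               .rename(columns={'code2': 'code1'})   # code2 becomes the new code1
               .reset_index(drop=True))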
|
filter df2 by using date column of df1 ..if exist between two date columns and update df1
|
df1:
FMS_ID   date         code1
18866    2022-01-01   3103
18866    2022-01-22   3103
18867    2022-10-23   3103
18867    2022-06-04   3103

df2:
FMS_ID   Fdate        Tdate        code2
18866    2021-01-01   2022-01-21   1126
18866    2022-01-22   2022-11-01   8102
18867    2022-05-03   2022-08-01   3101
18867    2022-09-04   2022-11-01   1150
I want to take the code from df2 and update the code in df1, matching on FMS_ID and requiring that date falls between Fdate and Tdate. Many thanks.
output table: df1
FMS_ID   date         code1
18866    2022-01-01   1126
18866    2022-01-22   8102
18867    2022-10-23   1150
18867    2022-06-04   3101
|
[
"Since there are two of the FMS IDs you gave as the key, a new row will be created for each combination. So FMS_ID is not unique key. I think you are aware of that. If these are ok, you can merge these two df's with merge and check the dates.\nfinal = df.merge(df2,how='left',on=['FMS_ID'])\nfinal=final[(final['date'].ge(final['Fdate']) & (final['date'].le(final['Tdate'])))].drop(['code1','Fdate','Tdate'],axis=1)\n\n#or\n\nfinal=final[(final['date'] >= final['Fdate']) & (final['date'] <= final['Tdate'])].drop(['code1','Fdate','Tdate'],axis=1)\n\nOutput:\n FMS_ID date code2\n0 18866 2022-01-01 1126\n3 18866 2022-01-22 8102\n5 18867 2022-10-23 1150\n6 18867 2022-06-04 3101\n\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"date",
"pandas",
"range"
] |
stackoverflow_0074666061_dataframe_date_pandas_range.txt
|
Q:
Webpack dev server cache clearing
I can't get webpack dev server to work properly. I think the issue is the compiled code it makes in memory is not clearing. I can't work out where I'm going wrong.
My config file is:
var path = require('path');
const MiniCssExtractPlugin = require("mini-css-extract-plugin");
const HtmlWebpackPlugin = require('html-webpack-plugin');
module.exports = {
entry: ['babel-polyfill', './src/js/index.js'],
output: {
path: path.join(__dirname, 'dist'),
publicPath: "/",
filename: 'js/index.js'
},
devServer: {
contentBase: '/dist'
},
module: {
rules: [
{
test: /\.js$/,
use: ['babel-loader']
},
{
test: /\.scss$/,
use: [ MiniCssExtractPlugin.loader, 'css-loader', 'sass-loader']
}
]
},
plugins: [
new MiniCssExtractPlugin({
filename: 'css/styles.css',
}),
new HtmlWebpackPlugin({
inject: false,
hash: true,
template: './src/index.html',
filename: 'index.html'
})
]
}
And my scripts:
"scripts": {
"dev": "webpack --mode development",
"build": "webpack --mode production",
"start": "webpack-dev-server --mode development --open"
},
What I want is for webpack dev server to allow me to live reload as I work, then use build to actually compile my code.
The problem is, as soon as I use dev or build, and my dist file is made, webpack dev server stops working - even if I delete the dist file. I simply don't know how to get it to work. Would really appreciate any help.
Thanks, R
A:
Maybe you are not running the webpack bundler (as opposed to webpack-dev-server) in watch mode.
use watch mode
{
watch: true
}
and install concurrently package
npm i -D concurrently
update your start script
{
"scripts": {
"dev": "webpack --mode development",
"build": "webpack --mode production",
"dev:server": "webpack-dev-server",
"start": "concurrently \"npm:dev\" \"npm:dev:server\""
}
A:
It sounds like you might be running into caching issues with Webpack. One solution to this is to add a filename property to your output configuration object that includes a unique hash, like this:
output: {
path: path.join(__dirname, 'dist'),
publicPath: "/",
filename: 'js/index.[hash].js'
},
This will cause Webpack to generate a unique filename for your bundled JavaScript file on each build, which will prevent caching issues.
Another potential solution is to add the --no-cache flag when running webpack-dev-server, like this:
"scripts": {
"dev": "webpack --mode development",
"build": "webpack --mode production",
"start": "webpack-dev-server --mode development --open --no-cache"
},
This will tell webpack-dev-server not to cache the files it generates.
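Separately, if stale files in dist/ are interfering with builds, a common approach (a sketch, assuming npm i -D clean-webpack-plugin, shown with its webpack-4-era API) is to wipe the output folder on every build:
const { CleanWebpackPlugin } = require('clean-webpack-plugin');

module.exports = {
  // ...the rest of your existing config...
  plugins: [
    new CleanWebpackPlugin(), // deletes everything in output.path before each build
  ],
};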
|
Webpack dev server cache clearing
|
I can't get webpack dev server to work properly. I think the issue is the compiled code it makes in memory is not clearing. I can't work out where I'm going wrong.
My config file is:
var path = require('path');
const MiniCssExtractPlugin = require("mini-css-extract-plugin");
const HtmlWebpackPlugin = require('html-webpack-plugin');
module.exports = {
entry: ['babel-polyfill', './src/js/index.js'],
output: {
path: path.join(__dirname, 'dist'),
publicPath: "/",
filename: 'js/index.js'
},
devServer: {
contentBase: '/dist'
},
module: {
rules: [
{
test: /\.js$/,
use: ['babel-loader']
},
{
test: /\.scss$/,
use: [ MiniCssExtractPlugin.loader, 'css-loader', 'sass-loader']
}
]
},
plugins: [
new MiniCssExtractPlugin({
filename: 'css/styles.css',
}),
new HtmlWebpackPlugin({
inject: false,
hash: true,
template: './src/index.html',
filename: 'index.html'
})
]
}
And my scripts:
"scripts": {
"dev": "webpack --mode development",
"build": "webpack --mode production",
"start": "webpack-dev-server --mode development --open"
},
What I want is for webpack dev server to allow me to live reload as I work, then use build to actually compile my code.
The problem is, as soon as I use dev or build, and my dist file is made, webpack dev server stops working - even if I delete the dist file. I simply don't know how to get it to work. Would really appreciate any help.
Thanks, R
|
[
"Maybe you are not running the webpack bundler(not webpack-dev-server) in watch mode \nuse watch mode\n\n{\n watch: true\n}\n\nand install concurrently package\nnpm i -D concurrently\nupdate your start script\n {\n \"scripts\": {\n \"dev\": \"webpack --mode development\",\n \"build\": \"webpack --mode production\",\n \"dev:server\": \"webpack-dev-server\",\n \"start\": \"concurrently \\\"npm:dev\\\" \\\"npm:dev:server\\\"\"\n }\n\n",
"It sounds like you might be running into caching issues with Webpack. One solution to this is to add a filename property to your output configuration object that includes a unique hash, like this:\noutput: {\n path: path.join(__dirname, 'dist'),\n publicPath: \"/\",\n filename: 'js/index.[hash].js'\n},\n\nThis will cause Webpack to generate a unique filename for your bundled JavaScript file on each build, which will prevent caching issues.\nAnother potential solution is to add the --no-cache flag when running webpack-dev-server, like this:\n\"scripts\": {\n \"dev\": \"webpack --mode development\",\n \"build\": \"webpack --mode production\",\n \"start\": \"webpack-dev-server --mode development --open --no-cache\"\n},\n\nThis will tell webpack-dev-server not to cache the files it generates.\n"
] |
[
0,
0
] |
[] |
[] |
[
"webpack",
"webpack_4",
"webpack_dev_server"
] |
stackoverflow_0051502815_webpack_webpack_4_webpack_dev_server.txt
|
Q:
Alphabet Layers In Python
How do I multiply layers without awkwardly repeating elif lines? I cannot get += 1 working. Or perhaps a different string approach? I'm certainly new to Python.
layer = int(input("Give a number between 2 and 26: "))
table_size = layer + layer - 1
ts = table_size
center = (ts // 2)
for row in range(ts):
for col in range(ts):
if row == col == (center):
print("A", end="")
elif (row > center or col > center \
or row < center or col < center) \
and row < center + 2 and row > center - 2 \
and col < center + 2 and col > center - 2 :
print("B", end="")
elif (row > center+1 or col > center+1 \
or row < center-1 or col < center-1) \
and row < center+3 and row > center-3 \
and col < center+3 and col > center-3 :
print(chr(67), end="")
else:
print(" ", end="")
print()
CCCCC
CBBBC
CBABC
CBBBC
CCCCC
A:
You can use numpy to prepare the indexing of the alphabet, and then use the prepared indexes to get your final string. This is how:
# Get your number of layers
N = int(input("Give a number between 2 and 26: "))
assert 2<=N<=26, 'Wrong number'
# INDEX PREPARATION WITH NP
import numpy as np
len_vec = np.arange(N)
horiz_vec = np.concatenate([np.flip(len_vec[1:]), len_vec])
rep_mat = np.tile(horiz_vec, [ 2*N-1, 1])
idx_mat = np.maximum(rep_mat, rep_mat.T)
# STRING CREATION: join elements in row with '', and rows with newline '\n'
from string import ascii_uppercase # 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
final_string = '\n'.join(''.join([ascii_uppercase[i] for i in row]) for row in idx_mat)
# PRINTING THE STRING
print(final_string)
An example with N=3:
#> len_vec
array([0, 1, 2])
#> horiz_vec
array([2, 1, 0, 1, 2])
#> rep_mat
array([[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 0, 1, 2]])
#> idx_mat
array([[2, 2, 2, 2, 2],
[2, 1, 1, 1, 2],
[2, 1, 0, 1, 2],
[2, 1, 1, 1, 2],
[2, 2, 2, 2, 2]])
#> print(final_string)
CCCCC
CBBBC
CBABC
CBBBC
CCCCC
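The same indexing idea can also be written without numpy; a sketch using the Chebyshev distance of each cell from the center:
# the letter at (row, col) is 'A' offset by max(|row - c|, |col - c|)
N = int(input("Give a number between 2 and 26: "))
size = 2 * N - 1
c = size // 2
for row in range(size):
    print(''.join(chr(ord('A') + max(abs(row - c), abs(col - c)))
                  for col in range(size)))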
A:
This is an example with a regular python list:
from string import ascii_uppercase
result = []
# Get your number of layers
N = int(input("Give a number between 2 and 26: "))
assert 2<=N<=26, 'Wrong number'
for i in range(N):
# update existing rows
for j, string in enumerate(result):
result[j] = ascii_uppercase[i] + string + ascii_uppercase[i]
# add top and bottom row
result.append((2*i+1)*ascii_uppercase[i])
if i != 0:
result.insert(0, (2*i+1)*ascii_uppercase[i])
# print result
for line in result:
print(line)
A:
layer = int(input("Give a number between 2 and 26: "))
table_size = layer + layer - 1
ts = table_size
center = (ts // 2)
counter=0
print(center)
for row in range(ts):
for col in range(ts):
if row<=center and ts-counter>col:
outcome=65+center-min(row,col)
elif row <=center and col>=ts-counter :
outcome=65+col-center
elif row>center and ts-counter>col:
outcome=65+center-min(row,col)
elif row >center and col<counter :
outcome=65+row-center
elif row >center and col>=counter :
outcome=65+row-center+(col-counter)
print(chr(outcome), end="")
counter=counter+1
print()
A:
user_input = int(input("Layers: "))
center = 25
layer = user_input - 1
counter = 0
import string
string_x = ""
alphabet = 26
list_of_letters = [True]
while alphabet != (-1):
string_x = string_x + string.ascii_uppercase[alphabet-1]*alphabet
string_y = string_x[::-1]
string_y = string_y[1:len(string_y)]
alphabet = alphabet - 1
string_z = string_x + string_y
list_of_letters.append(string_z)
string_x = string_x[0:26-alphabet]
dictionary = { }
variable = 0
for number in range(1,27):
dictionary[number] = 24 - variable
variable = variable + 1
differential = user_input - dictionary[user_input]
counter = user_input - differential + 2
helper_variable = counter
while counter != 26:
print(list_of_letters[counter][center-layer:center+user_input])
counter = counter + 1
while counter != helper_variable - 1:
print(list_of_letters[counter][center-layer:center+user_input])
counter = counter - 1
You can make this box of letters by creating a list with elements ranging from 'ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ' to 'ZYXWVUTSRQPONMLKJIHGFEDCBABCDEFGHIJKLMNOPQRSTUVWXYZ'. Then find a way to reference and print these strings exactly as many times as you want, considering that 'A' will be at index 25 and each layer adds neighbouring letters. Do it in both directions using a while loop and helper variables.
|
Alphabet Layers In Python
|
How do I multiply layers without awkwardly repeating elif lines? I cannot get += 1 working. Or perhaps a different string approach? I'm certainly new to Python.
layer = int(input("Give a number between 2 and 26: "))
table_size = layer + layer - 1
ts = table_size
center = (ts // 2)
for row in range(ts):
for col in range(ts):
if row == col == (center):
print("A", end="")
elif (row > center or col > center \
or row < center or col < center) \
and row < center + 2 and row > center - 2 \
and col < center + 2 and col > center - 2 :
print("B", end="")
elif (row > center+1 or col > center+1 \
or row < center-1 or col < center-1) \
and row < center+3 and row > center-3 \
and col < center+3 and col > center-3 :
print(chr(67), end="")
else:
print(" ", end="")
print()
CCCCC
CBBBC
CBABC
CBBBC
CCCCC
|
[
"You can resort to numpy to prepare the indexation of the alphabet, and then use the prepared indexes to get your final string. This is how:\n# Get your number of layers\nN = int(input(\"Give a number between 2 and 26: \"))\nassert 2<=N<=26, 'Wrong number'\n\n# INDEX PREPARATION WITH NP\nimport numpy as np\nlen_vec = np.arange(N) \nhoriz_vec = np.concatenate([np.flip(len_vec[1:]), len_vec]) \nrep_mat = np.tile(horiz_vec, [ 2*N-1, 1])\nidx_mat = np.maximum(rep_mat, rep_mat.T)\n\n# STRING CREATION: join elements in row with '', and rows with newline '\\n'\nfrom string import ascii_uppercase # 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'\nfinal_string = '\\n'.join(''.join([ascii_uppercase[i] for i in row]) for row in idx_mat)\n\n# PRINTING THE STRING\nprint(final_string)\n\nAn example with N=3:\n#> len_vec\narray([0, 1, 2])\n#> horiz_vec\narray([2, 1, 0, 1, 2])\n#> rep_mat\narray([[2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 0, 1, 2]])\n#> idx_mat\narray([[2, 2, 2, 2, 2],\n [2, 1, 1, 1, 2],\n [2, 1, 0, 1, 2],\n [2, 1, 1, 1, 2],\n [2, 2, 2, 2, 2]])\n#> print(final_string)\nCCCCC\nCBBBC\nCBABC\nCBBBC\nCCCCC\n\n",
"This is an example with a regular python list:\nfrom string import ascii_uppercase\n\nresult = []\n\n# Get your number of layers\nN = int(input(\"Give a number between 2 and 26: \"))\nassert 2<=N<=26, 'Wrong number'\n\nfor i in range(N):\n # update existing rows\n for j, string in enumerate(result):\n result[j] = ascii_uppercase[i] + string + ascii_uppercase[i]\n\n # add top and bottom row\n result.append((2*i+1)*ascii_uppercase[i])\n if i != 0:\n result.insert(0, (2*i+1)*ascii_uppercase[i])\n \n# print result\nfor line in result:\n print(line)\n\n",
" layer = int(input(\"Give a number between 2 and 26: \"))\ntable_size = layer + layer - 1\nts = table_size\ncenter = (ts // 2)\ncounter=0\nprint(center)\nfor row in range(ts):\n for col in range(ts):\n if row<=center and ts-counter>col:\n outcome=65+center-min(row,col)\n elif row <=center and col>=ts-counter :\n outcome=65+col-center \n elif row>center and ts-counter>col:\n outcome=65+center-min(row,col) \n elif row >center and col<counter : \n outcome=65+row-center\n elif row >center and col>=counter : \n outcome=65+row-center+(col-counter) \n \n print(chr(outcome), end=\"\")\n counter=counter+1 \n \n print()\n\n",
"user_input = int(input(\"Layers: \"))\ncenter = 25\nlayer = user_input - 1\ncounter = 0\n\nimport string\nstring_x = \"\"\nalphabet = 26\nlist_of_letters = [True]\nwhile alphabet != (-1):\n string_x = string_x + string.ascii_uppercase[alphabet-1]*alphabet\n string_y = string_x[::-1]\n string_y = string_y[1:len(string_y)]\n alphabet = alphabet - 1\n string_z = string_x + string_y \n list_of_letters.append(string_z)\n string_x = string_x[0:26-alphabet]\n\ndictionary = { }\nvariable = 0\nfor number in range(1,27):\n dictionary[number] = 24 - variable\n variable = variable + 1\n\ndifferential = user_input - dictionary[user_input]\ncounter = user_input - differential + 2\nhelper_variable = counter\n\nwhile counter != 26:\n print(list_of_letters[counter][center-layer:center+user_input])\n counter = counter + 1\nwhile counter != helper_variable - 1:\n print(list_of_letters[counter][center-layer:center+user_input])\n counter = counter - 1\n\nYou can make this box of letters by creating a list with elements from 'ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ' to 'ZYXWVUTSRQPONMLKJIHGFEDCBABCDEFGHIJKLMNOPQRSTUVWXYZ'. And then find a way to reference and print these strings exactly as many times as you want considering that 'A' will be 25th and you add layers with neighbouring letters. Do it in both directions using while-loop and helper variables.\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"alphabet",
"design_patterns",
"layer",
"loops",
"python"
] |
stackoverflow_0067938383_alphabet_design_patterns_layer_loops_python.txt
|
Q:
Wait for condition before strategy starts trading in PineScript
I want to wait until a condition is met before the strategy begins trading.
I'm trying to write code that waits for the 15 SMA to cross below the 12 SMA (the start-trading condition); once that happens, the strategy can begin trading, but not before.
How can I do this?
--
I tried to use a switch, but this did not work
--
The code so far:
//@version=5
strategy("My script", overlay=true)
fifteensma = ta.sma(close, 15)
twelvesma = ta.sma(close, 12)
sevensma = ta.sma(close, 7)
fourteensma = ta.sma(close, 14)
starttrading = (fifteensma[0] < twelvesma[0] ) and (fifteensma[1] > twelvesma[1] )
longcondition = (sevensma[0] > fourteensma[0] ) and (sevensma[1] < fourteensma[1] )
starttradingsignalsoccurred = 0.0
//Wait untill a start trading signal occurs
if (starttrading)
//when a starttrading signal happens add 1 to the starttrading signal variable
starttradingsignalsoccurred := starttradingsignalsoccurred[1] + 1
if starttradingsignalsoccurred > 0
if ( longcondition)
//Enter Trade Long
strategy.entry("Long", strategy.long, qty=10)
plot(fifteensma, color=color.red)
plot(twelvesma, color=color.green)
plot(sevensma, color=color.orange)
plot(fourteensma, color=color.blue)
A:
I think you are mixing up 'starttrading' and 'starttradingsignalsoccurred'. Declaring the counter with var makes it initialize once and persist across bars, instead of being reset to 0 on every bar.
Try:
var starttradingsignalsoccurred = 0
if (starttrading)
starttradingsignalsoccurred := starttradingsignalsoccurred + 1
if starttradingsignalsoccurred > 0
if longcondition
....
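Put together, the corrected section might read like this (a sketch, assuming the rest of the script is unchanged):
var int starttradingsignalsoccurred = 0

if starttrading
    starttradingsignalsoccurred += 1

if starttradingsignalsoccurred > 0 and longcondition
    strategy.entry("Long", strategy.long, qty=10)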
|
Wait for condition before strategy starts trading in PineScript
|
I want to wait until a condition is met before the strategy begins trading.
I'm trying to write code that waits for the 15 SMA to cross below the 12 SMA (the start-trading condition); once that happens, the strategy can begin trading, but not before.
How can I do this?
--
I tried to use a switch, but this did not work
--
The code so far:
//@version=5
strategy("My script", overlay=true)
fifteensma = ta.sma(close, 15)
twelvesma = ta.sma(close, 12)
sevensma = ta.sma(close, 7)
fourteensma = ta.sma(close, 14)
starttrading = (fifteensma[0] < twelvesma[0] ) and (fifteensma[1] > twelvesma[1] )
longcondition = (sevensma[0] > fourteensma[0] ) and (sevensma[1] < fourteensma[1] )
starttradingsignalsoccurred = 0.0
//Wait untill a start trading signal occurs
if (starttrading)
//when a starttrading signal happens add 1 to the starttrading signal variable
starttradingsignalsoccurred := starttradingsignalsoccurred[1] + 1
if starttradingsignalsoccurred > 0
if ( longcondition)
//Enter Trade Long
strategy.entry("Long", strategy.long, qty=10)
plot(fifteensma, color=color.red)
plot(twelvesma, color=color.green)
plot(sevensma, color=color.orange)
plot(fourteensma, color=color.blue)
|
[
"I think you mess a bit with 'starttrading' and 'starttradingsignalsoccurred'.\nTry : \nvar starttradingsignalsoccurred = 0\n\nif (starttrading)\n starttradingsignalsoccurred := starttradingsignalsoccurred + 1\n\nif starttradingsignalsoccurred > 0\n if longcondition\n ....\n\n"
] |
[
0
] |
[] |
[] |
[
"pine_script",
"pine_script_v4",
"pinescript_v5"
] |
stackoverflow_0074664702_pine_script_pine_script_v4_pinescript_v5.txt
|
Q:
How do I prevent tsc from including browser types in the compilation
I have a script that uses ts-node:
#!/usr/bin/env ts-node
const top: number[] = [];
but tsc complains:
top3.ts(3,7): error TS2451: Cannot redeclare block-scoped variable 'top'.
because apparently top is a global variable in browsers.
I've installed @types/node and my tsconfig.json reads:
{
"compilerOptions": {
"noImplicitAny": true,
"target": "es6",
"types": ["node"],
}
}
so I can refer to node builtins like process.
How do I configure tsc so that it does not include browser builtins, but only pure ECMAScript + node.js builtins?
A:
To prevent tsc from including browser types in the compilation, you can use the "lib" option in your tsconfig.json file. This option allows you to specify the library files that should be included in the compilation.
To only include pure ECMAScript and node.js builtins, you can set the "lib" option to ["es6"]. Leaving "dom" out of the list excludes the browser-specific types (such as the top global), while the existing "types": ["node"] entry keeps the node.js builtins available.
Here is an example of how your tsconfig.json file would look with this configuration:
{
"compilerOptions": {
"noImplicitAny": true,
"target": "es6",
"types": ["node"],
"lib": ["es6", "dom", "node"]
}
}
With this configuration, tsc will only include the ECMAScript and node.js builtin types in the compilation, and will not include any browser-specific types. This should resolve the error you are seeing with the variable 'top' being redeclared.
A:
Indeed, it is the "dom" option in the "lib" configuration that pulls in the browser-specific types causing the error with the variable 'top' being redeclared.
To prevent this error, you can either remove the "dom" option from the "lib" configuration, or you can rename the variable 'top' to something else that does not conflict with any browser-specific types.
Here is an example of how your tsconfig.json file would look without the "dom" option in the "lib" configuration:
{
"compilerOptions": {
"noImplicitAny": true,
"target": "es6",
"types": ["node"],
"lib": ["es6", "node"]
}
}
Alternatively, you can rename the variable 'top' to something else that does not conflict with any browser-specific types. For example, you could rename it to 'topNumbers' or 'topList':
const topNumbers: number[] = [];
or
const topList: number[] = [];
Either of these solutions should prevent the error from occurring.
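To confirm which lib files actually end up in the compilation, tsc can print its resolved configuration (available since TypeScript 3.2):
npx tsc --showConfig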
|
How do I prevent tsc from including browser types in the compilation
|
I have a script that uses ts-node:
#!/usr/bin/env ts-node
const top: number[] = [];
but tsc complains:
top3.ts(3,7): error TS2451: Cannot redeclare block-scoped variable 'top'.
because apparently top is a global variable in browsers.
I've installed @types/node and my tsconfig.json reads:
{
"compilerOptions": {
"noImplicitAny": true,
"target": "es6",
"types": ["node"],
}
}
so I can refer to node builtins like process.
How do I configure tsc so that it does not include browser builtins, but only pure ECMAScript + node.js builtins?
|
[
"To prevent tsc from including browser types in the compilation, you can use the \"lib\" option in your tsconfig.json file. This option allows you to specify the library files that should be included in the compilation.\nTo only include pure ECMAScript and node.js builtins, you can set the \"lib\" option to [\"es6\", \"dom\", \"node\"]. This will exclude any browser-specific types from the compilation.\nHere is an example of how your tsconfig.json file would look with this configuration:\n{\n \"compilerOptions\": {\n \"noImplicitAny\": true,\n \"target\": \"es6\",\n \"types\": [\"node\"],\n \"lib\": [\"es6\", \"dom\", \"node\"]\n }\n}\n\nWith this configuration, tsc will only include ECMAScript, node.js, and dom builtin types in the compilation, and will not include any browser-specific types. This should resolve the error you are seeing with the variable 'top' being redeclared.\n",
"Yes, you are correct. The \"dom\" option in the \"lib\" configuration includes the browser-specific types that are causing the error with the variable 'top' being redeclared.\nTo prevent this error, you can either remove the \"dom\" option from the \"lib\" configuration, or you can rename the variable 'top' to something else that does not conflict with any browser-specific types.\nHere is an example of how your tsconfig.json file would look without the \"dom\" option in the \"lib\" configuration:\n{\n \"compilerOptions\": {\n \"noImplicitAny\": true,\n \"target\": \"es6\",\n \"types\": [\"node\"],\n \"lib\": [\"es6\", \"node\"]\n }\n}\n\nAlternatively, you can rename the variable 'top' to something else that does not conflict with any browser-specific types. For example, you could rename it to 'topNumbers' or 'topList':\nconst topNumbers: number[] = [];\nor\nconst topList: number[] = [];\nEither of these solutions should prevent the error from occurring.\n"
] |
[
2,
2
] |
[] |
[] |
[
"node.js",
"typescript_typings"
] |
stackoverflow_0074666511_node.js_typescript_typings.txt
|
Q:
Print a list without all the empty elements of the list
I would like to know if there is a faster way to print all the non-empty elements of a string list in Java.
Currently, this is my code and it works, but I would like to know if there is another, shorter way to do it, meaning without creating a "cloned" list from which all the empty elements have been removed (as we must not edit the original list "strings").
List<String> strings = Arrays.asList("abc", "", "bc", "efg", "abcd", "", "jkl");
//get count of empty string
int countEmptyStr = (int) strings.stream().filter(string -> string.isEmpty()).count();
System.out.println("Number of empty strings:" + countEmptyStr );
//get count of no empty string
int countNoEmptyStr = (int) strings.stream().filter(string -> !string.isEmpty()).count();
System.out.println("Number of no-empty strings:" + countNoEmptyStr );
//print only no empty string from the list
List<String> stringsRmvd = new ArrayList<String>(strings);
stringsRmvd.removeAll(Arrays.asList("", null));
System.out.println("Print only no empty string from the list:" + stringsRmvd);
And we get in the output (as expected):
Number of empty strings:2
Number of no-empty strings:5
Print only no empty string from the list:[abc, bc, efg, abcd, jkl]
A:
You're already using filter, why not use that?
filter(string -> !string.isEmpty()).toList()
For example (not tested):
System.out.println( "Print only non-empty string from the list:"
+ strings.stream()
.filter(string -> !string.isEmpty())
.toList() );
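Note that Stream.toList() requires Java 16 or newer; on earlier versions the equivalent (a sketch) uses a collector:
import java.util.List;
import java.util.stream.Collectors;

List<String> nonEmpty = strings.stream()
        .filter(s -> !s.isEmpty())
        .collect(Collectors.toList());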
A:
Filter out empty strings and print the remaining elements:
List<String> strings = Arrays.asList("abc", "", "bc", "efg", "abcd", "", "jkl");
strings.stream()
.filter(str -> !str.isEmpty()) // retain the string that matches the predicate (i.e. not empty)
.forEach(System.out::println);
Output:
abc
bc
efg
abcd
jkl
|
Print a list without all the empty elements of the list
|
I would like to know if there is a faster way to print all the non-empty elements of a string list in Java.
Currently, this is my code and it works, but I would like to know if there is another, shorter way to do it, meaning without creating a "cloned" list from which all the empty elements have been removed (as we must not edit the original list "strings").
List<String> strings = Arrays.asList("abc", "", "bc", "efg", "abcd", "", "jkl");
//get count of empty string
int countEmptyStr = (int) strings.stream().filter(string -> string.isEmpty()).count();
System.out.println("Number of empty strings:" + countEmptyStr );
//get count of no empty string
int countNoEmptyStr = (int) strings.stream().filter(string -> !string.isEmpty()).count();
System.out.println("Number of no-empty strings:" + countNoEmptyStr );
//print only no empty string from the list
List<String> stringsRmvd = new ArrayList<String>(strings);
stringsRmvd.removeAll(Arrays.asList("", null));
System.out.println("Print only no empty string from the list:" + stringsRmvd);
And we get in the output (as expected):
Number of empty strings:2
Number of no-empty strings:5
Print only no empty string from the list:[abc, bc, efg, abcd, jkl]
|
[
"You're already using filter, why not use that?\nfilter(string -> !string.isEmpty()).toList()\n\nFor example (not tested):\nSystem.out.println( \"Print only non-empty string from the list:\" \n + strings.stream()\n .filter(string -> !string.isEmpty())\n .toList() );\n\n",
"Filter out empty strings and print the remained elements:\nList<String> strings = Arrays.asList(\"abc\", \"\", \"bc\", \"efg\", \"abcd\", \"\", \"jkl\");\n \nstrings.stream()\n .filter(str -> !str.isEmpty()) // retain the string that matches the predicate (i.e. not empty)\n .forEach(System.out::println);\n\nOutput:\nabc\nbc\nefg\nabcd\njkl\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"coding_efficiency",
"java",
"list",
"performance",
"string"
] |
stackoverflow_0074666548_coding_efficiency_java_list_performance_string.txt
|
Q:
Segmentation fault when creating and insert function
New to C here. I am writing an insert function that inserts a value into an array at a given position.
For example, here is what I have tried:
#include <stdio.h>
#include <stdlib.h>
int insert(int A[], int N, int P, int KEY){
int i = N - 1;
while(i >= P){
A[i+1] = A[i];
i += 1;
}
A[P] = KEY;
N = N+1;
return *A;
}
int main(void){
int arr[5] = { 1, 2, 3, 4, 5 };
size_t n = sizeof(arr)/sizeof(arr[0]);
int p = 3;
int K = 2;
int result;
result = insert(arr, n, p, K);
printf("Insert values: %d", result);
return 0;
}
However, I get the following error:
zsh: segmentation fault ./insert
A:
Accessing out of bounds memory:
The problem is in the while loop.
A[i + 1] = A[i];
is incorrect. Array indices start at 0 in C. Your array consists of 5 elements, so the last int is element [4]. You declared i to be (N - 1), which is correct, but then A[i + 1] becomes A[5], which is out of bounds and results in undefined behaviour. Worse, the loop body does i += 1 instead of i -= 1, so i only grows and each iteration writes further past the end of the array.
The memory not allocated should not be read.
As an aside, you can use:
i++;
N++;
as shorthand for:
i = i + 1;
N = N + 1;
A:
Maybe you are not aware of what a segmentation fault is and when it occurs. Let's start with the segmentation fault.
Segmentation fault:
A segmentation fault occurs when your program attempts to access an area of memory that it is not allowed to access. In other words, when your program tries to access memory that is beyond the limits that the operating system allocated for your program.
Segmentation faults are mostly caused by pointers that are −
Used before being properly initialized.
Used after the memory they point to has been reallocated or freed.
Used in an indexed array where the index is outside of the array bounds.
Now back to your problem, here you have used i = N - 1 which is 4. Then in the while loop, you are trying to access A[i+1] or A[5] which is outside of the array bounds. Thus you are getting segmentation faults.
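Putting both answers together, a corrected version might look like the sketch below. Note that the loop now decrements i, and the caller must reserve room for the extra element:
#include <stdio.h>

/* a sketch, assuming the caller guarantees capacity for at least N + 1 ints */
int insert(int A[], int N, int P, int KEY) {
    for (int i = N - 1; i >= P; i--)  /* shift right, highest index first */
        A[i + 1] = A[i];
    A[P] = KEY;
    return N + 1;                     /* the new logical length */
}

int main(void) {
    int arr[6] = {1, 2, 3, 4, 5};     /* size 6: room for one extra element */
    int n = insert(arr, 5, 3, 2);
    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]);        /* prints: 1 2 3 2 4 5 */
    return 0;
}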
|
Segmentation fault when creating and insert function
|
New to C here. I am writing an insert function that inserts a value into an array at a given position.
For example, here is what I have tried:
#include <stdio.h>
#include <stdlib.h>
int insert(int A[], int N, int P, int KEY){
int i = N - 1;
while(i >= P){
A[i+1] = A[i];
i += 1;
}
A[P] = KEY;
N = N+1;
return *A;
}
int main(void){
int arr[5] = { 1, 2, 3, 4, 5 };
size_t n = sizeof(arr)/sizeof(arr[0]);
int p = 3;
int K = 2;
int result;
result = insert(arr, n, p, K);
printf("Insert values: %d", result);
return 0;
}
However, I get the following error:
zsh: segmentation fault ./insert
|
[
"Accessing out of bounds memory:\nThe statement is the while loop.\nA[i + 1] = A[i];\n\nis incorrect. Arrays indices start at 0 in C. Your array consists of 5 elements. The 5th int is the element [4]. You declared i to be (N - 1), which is correct, but then A[i + 1] becomes A[5] which is out of bounds, and results in undefined behaviour.\nThe memory not allocated should not be read.\nAs an aside, you can use:\ni++; \nN++; \n\nas shorthand for:\ni = i + 1;\nN = N + 1;\n\n",
"Maybe you are not aware of what is segmentation fault and when does it occur. Lets start with the segmentation fault.\nSegmentation fault:\nA segmentation fault occurs when your program attempts to access an area of memory that it is not allowed to access. In other words, when your program tries to access memory that is beyond the limits that the operating system allocated for your program.\nSegmentation faults are mostly caused by pointers that are −\n\nUsed to being properly initialized.\nUsed after the memory they point to has been reallocated or freed.\nUsed in an indexed array where the index is outside of the array bounds.\n\nNow back to your problem, here you have used i = N - 1 which is 4. Then in the while loop, you are trying to access A[i+1] or A[5] which is outside of the array bounds. Thus you are getting segmentation faults.\n"
] |
[
0,
0
] |
[] |
[] |
[
"c"
] |
stackoverflow_0074666392_c.txt
|
Q:
JSONAPI Complex attributes
Is this sample in a correct format based on the JSON API specification? In other words, can attributes contain an array?
{
"meta": {
},
"links": {
"self": ""
},
"jsonapi": {
"version": "",
"meta": {
}
},
"data": {
"type": "typeof(class)",
"id": "string",
"attributes": [
{
"item1": "Value1",
"item2": "Value2",
"item3": "Value3"
}
],
"links": {
"self": ""
}
}
}
Even after reading the specification (link), I am not sure. If it is correct, how can I deserialize it? I am using the JSONAPISerializer package in C#.
A:
1. Visit https://json2csharp.com/
2. Paste your JSON. In Property Settings select Use Pascal Case.
3. Copy the generated class and paste it into your project.
4. Go to the NuGet package manager and install the Newtonsoft.Json package.
5. Use it like: var myDeserializedClass = JsonConvert.DeserializeObject<Root>(myJsonResponse);
A:
Why aren't you sure? Here's a quote from JSON API specification:
Attributes may contain any valid JSON value, including complex data structures involving JSON objects and arrays.
Class System.Text.Json.JsonSerializer can deserialize a JSON array into a C# IEnumerable<T>. So you may create an object that has a property Attributes of type IEnumerable<T> and deserialize like this:
using System.IO;
using System.Text.Json;
// ...
string json = File.ReadAllText("YourJsonDocumentPath");
YourEntityDescribedInJsonDocument obj = JsonSerializer.Deserialize<YourEntityDescribedInJsonDocument>(json, new JsonSerializerOptions());
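For the sample above, a sketch of matching C# types (the property names here are illustrative, and PropertyNameCaseInsensitive handles the lowercase JSON keys):
using System.Collections.Generic;
using System.Text.Json;

public class DataNode
{
    public string Type { get; set; }
    public string Id { get; set; }
    public List<Dictionary<string, string>> Attributes { get; set; }  // the array of attribute objects
}

public class Root
{
    public DataNode Data { get; set; }
}

// usage:
// var root = JsonSerializer.Deserialize<Root>(json,
//     new JsonSerializerOptions { PropertyNameCaseInsensitive = true });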
|
JSONAPI Complex attributes
|
Is this sample in a correct format based on the JSON API specification? In other words, can attributes contain an array?
{
"meta": {
},
"links": {
"self": ""
},
"jsonapi": {
"version": "",
"meta": {
}
},
"data": {
"type": "typeof(class)",
"id": "string",
"attributes": [
{
"item1": "Value1",
"item2": "Value2",
"item3": "Value3"
}
],
"links": {
"self": ""
}
}
}
Even after reading the specification (link), I am not sure. If it is correct, how can I deserialize it? I am using the JSONAPISerializer package in C#.
|
[
"\nVisit https://json2csharp.com/\nPaste your JSON. In Property Settings select Use Pascal Case.\nCopy class and paste into your project.\nGoto NuGet package manager and install newtonsoft json package.\nUse like, var myDeserializedClass = JsonConvert.DeserializeObject<Root(myJsonResponse);\n\n",
"Why aren't you sure? Here's a quote from JSON API specification:\n\nAttributes may contain any valid JSON value, including complex data structures involving JSON objects and arrays.\n\nClass System.Text.Json.JsonSerializer can deserialize a JSON array into a C# IEnumerable<T>. So you may create an object that has a property Attributes of type IEnumerable<T> and deserialize like this:\nusing System.IO;\nusing System.Text.Json;\n\n// ...\n\nstring json = File.ReadAllText(\"YourJsonDocumentPath\");\n\nYourEntityDescribedInJsonDocument obj = JsonSerializer.Deserialize<YourEntityDescribedInJsonDocument>(json, new JsonSerializerOptions());\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"c#",
"json_api",
"jsonapi_serialize"
] |
stackoverflow_0074666428_c#_json_api_jsonapi_serialize.txt
|
Q:
Redirect visitor to the same url that he/she write without database
I have two websites, let's say example1.com and example2.com.
If a visitor writes any number in the URL, for example example1.com/123456, I want it to be redirected to example2.com/123456, so only the domain is replaced, not the number the visitor wrote.
I need this to happen using an .htaccess file or some other simple method, without a database or saving any data, because I only have HTML pages, and it is hard for me because I am still learning.
A:
All you need is a redirection on the protocol level. That is possible with all the usual HTTP servers; the exact solution depends on which HTTP server you are using to serve the domain "example1.com".
Since you tagged your question .htaccess and mod-rewrite I assume that you are using the apache http server. If so you can implement a rule like that one:
RewriteEngine on
RewriteRule ^/?(\d+)$ https://example2.com%{REQUEST_URI} [R=301,L]
It will redirect all requests to example1.com that use a path that consists only of digits to the second domain while preserving the requested path. An external redirection will get performed using a http status 301 ("moved permanently") as a response to the first request to example1.com.
If both domains are served by the same http server and you do not have setup separate virtual hosts you probably have to add a condition to that to make sure the rule only gets applied to requests to the first domain:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^(?:www\.)?example1\.com$
RewriteRule ^/?(\d+)$ https://example2.com%{REQUEST_URI} [R=301,L]
Preferably such a general redirection rule should get implemented in the central http server's host configuration responsible for serving example1.com. If you do not have access to that you can indeed use a distributed configuration file instead, often called ".htaccess". If so you need to enable that feature beforehand using the AllowOverride directive. The configuration file needs to be located in the hosts top level DOCUMENT_ROOT folder and it needs to be readable for the http server process.
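Once the rule is deployed, a quick check from a shell confirms the behaviour (the exact response line depends on your server):
curl -I "https://example1.com/123456"
# expected among the response headers:
# HTTP/1.1 301 Moved Permanently
# Location: https://example2.com/123456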
|
Redirect visitor to the same url that he/she write without database
|
I have two websites, let's say example1.com and example2.com.
If a visitor writes any number in the URL, for example example1.com/123456, I want it to be redirected to example2.com/123456, so only the domain is replaced, not the number the visitor wrote.
I need this to happen using an .htaccess file or some other simple method, without a database or saving any data, because I only have HTML pages, and it is hard for me because I am still learning.
|
[
"All you need is a redirection on protocol level. That is possible with all usual http server's, the exact solution depends on which http server you are using to serve the domain \"example1.com\".\nSince you tagged your question .htaccess and mod-rewrite I assume that you are using the apache http server. If so you can implement a rule like that one:\nRewriteEngine on\nRewriteRule ^/?(\\d+)$ https://example2.com%{REQUEST_URI} [R=301,L]\n\nIt will redirect all requests to example1.com that use a path that consists only of digits to the second domain while preserving the requested path. An external redirection will get performed using a http status 301 (\"moved permanently\") as a response to the first request to example1.com.\nIf both domains are served by the same http server and you do not have setup separate virtual hosts you probably have to add a condition to that to make sure the rule only gets applied to requests to the first domain:\nRewriteEngine on\nRewriteCond %{HTTP_HOST} ^(?:www.)?\\.example1\\.com$\nRewriteRule ^/?(\\d+)$ https://example2.com%{REQUEST_URI} [R=301,L]\n\nPreferably such a general redirection rule should get implemented in the central http server's host configuration responsible for serving example1.com. If you do not have access to that you can indeed use a distributed configuration file instead, often called \".htaccess\". If so you need to enable that feature beforehand using the AllowOverride directive. The configuration file needs to be located in the hosts top level DOCUMENT_ROOT folder and it needs to be readable for the http server process.\n"
] |
[
1
] |
[] |
[] |
[
".htaccess",
"mod_rewrite"
] |
stackoverflow_0074666341_.htaccess_mod_rewrite.txt
|
Q:
Xcode does not include changes in rebuild of cordova app
I am experiencing the problem that Xcode does not incorporate any changes made to HTML/CSS/JS files when rebuilding the app for iOS
Right now I am deleting the whole platforms/ios folder and rerunning cordova platform add ios every time. This can't be the intended way of testing cordova apps. What is a good workflow for testing cordova apps on an iOS device?
A:
Well, the recommended workflow for testing Cordova apps is not using Xcode at all, just use the Cordova CLI to run your apps. But the truth is that running from the CLI might be slower than using Xcode.
What you need to copy the changes from www to the Xcode project is to run cordova prepare ios before running from Xcode. You can do it manually or create a Xcode build script to run it for you.
To add a build script, on Xcode select your project target, go to Build Phases, click the + button and select New Build Script phase.
You can try to just add cordova prepare ios and this might work.
If you get a cordova command not found, then you also need to add Cordova path to your PATH. To do it, open a terminal and type which cordova, you'll get the Cordova path, something like /Users/davidnathan/.nvm/versions/node/v4.4.7/bin/cordova.
Now add that path without the cordova part to your build script before the cordova prepare ios, something like
PATH=/Users/davidnathan/.nvm/versions/node/v4.4.7/bin/:$PATH && cordova prepare ios
Move the build script to be over the existing "Copy www directory"
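Put together, the build script body might look like this sketch (the nvm path and the relative cd are assumptions; adjust them to your machine and project layout):
export PATH="/Users/davidnathan/.nvm/versions/node/v4.4.7/bin:$PATH"
cd "$PROJECT_DIR/../.."   # from platforms/ios up to the Cordova project root
cordova prepare ios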
A:
Try ionic cordova prepare ios.
|
Xcode does not include changes in rebuild of cordova app
|
I am experiencing the problem that Xcode does not incorporate any changes made to HTML/CSS/JS files when rebuilding the app for iOS
Right now I am deleting the whole platforms/ios folder and rerunning cordova platform add ios every time. This can't be the intended way of testing cordova apps. What is a good workflow for testing cordova apps on an iOS device?
|
[
"Well, the recommended workflow for testing Cordova apps is not using Xcode at all, just use the Cordova CLI to run your apps. But the truth is that running from the CLI might be slower than using Xcode.\nWhat you need to copy the changes from www to the Xcode project is to run cordova prepare ios before running from Xcode. You can do it manually or create a Xcode build script to run it for you.\nTo add a build script, on Xcode select your project target, go to Build Phases, click the + button and select New Build Script phase.\nYou can try to just add cordova prepare ios and this might work.\nIf you get a cordova command not found, then you also need to add Cordova path to your PATH. To do it, open a terminal and type which cordova, you'll get the Cordova path, something like /Users/davidnathan/.nvm/versions/node/v4.4.7/bin/cordova.\nNow add that path without the cordova part to your build script before the cordova prepare ios, something like\nPATH=/Users/davidnathan/.nvm/versions/node/v4.4.7/bin/:$PATH && cordova prepare ios\nMove the build script to be over the existing \"Copy www directory\"\n",
"Try ionic cordova prepare ios.\n"
] |
[
5,
0
] |
[] |
[] |
[
"cordova",
"ios",
"xcode"
] |
stackoverflow_0044485897_cordova_ios_xcode.txt
|
Q:
How to delete a model using php artisan?
Is there a command to safely delete a model in Laravel 5? To create a model we use
php artisan make:model modelname
And that will create a model under app folder, and also a migration in database/migrations
But what I can't find is how to delete a model...
A:
Deleting a model: just delete the model under App/ or whatever other folder.
Deleting a migration: if you have migrated it (meaning the database has suffered changes) you have two choices:
The "project starting"/ugly way is to migrate:rollback until the migration is undone (if it was the last migration you did, one rollback is enough, if not, you're gonna have to rollback a couple of times) then delete the migration file (the one inside the database/migrations folder. Important thing here: the migration's class will still be autoloader by composer. So you have to remove the migration class loading from vendor/composer/autoload_classmap.php. Maybe composer dumpautoload will work, it didn't for me though. If you have no important data in the DB and you can wipe it, delete the migration file, composer dumpautoload then run php artisan migrate:refresh. This will rollback every migration then migrate everything back in.
The "this is in production and I messed up" way: create another migration where the up method is dropping the first migration's table, down is creating it (basically the up method from the first migration). Leave the two migration files in there, don't remove them.
If you haven't migrated it, just delete the migration file, composer dumpautoload and if you have some class/file not found error, check if vendor/composer/autoload_classmap.php has the class of the file you just removed and delete the row there.
A:
No command, just do it manually and it's safe:
Delete the model first (if you don't need the model any longer)
Delete the migration from ...database/migrations folder
If you have already migrated, i.e. if you have already run php artisan migrate, log into your phpMyAdmin or SQL console (whichever the case is) and, in your database, delete the table created by the migration
Still within your database, in the migrations table, locate the row with that migration file name and delete the row.
Works for me, hope it helps!
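If you prefer raw SQL over phpMyAdmin, steps 3 and 4 amount to something like this (the table and migration names are hypothetical; use your own):
DROP TABLE IF EXISTS your_table;
DELETE FROM migrations
 WHERE migration = '2015_05_19_000000_create_your_table_table';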
A:
Search in vendor/composer/autoload_classmap.php: press Ctrl+F, type the model name, then delete the model's path entry (you may need to make the folder editable first).
A:
The problem can also arise when your database name is different from the one defined in .env file.
DB_DATABASE=laravel
By default, database structure in .env sets database name as laravel. You can replace laravel with the name of your database.
A:
Here is what I've created for my project to remove controller and model
app/Console/Commands/RemoveController.php
<?php
namespace App\Console\Commands;
use Illuminate\Console\Command;
class RemoveController extends Command
{
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'remove:controller {name}';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Remove the controller class';
/**
* Create a new command instance.
*
* @return void
*/
public function __construct()
{
parent::__construct();
}
/**
* Execute the console command.
*
* @return mixed
*/
public function handle():void
{
$controllerName = $this->argument('name').'.php';
$controllerPath = base_path('app/Http/Controllers/').$controllerName;
if(file_exists($controllerPath)){
unlink($controllerPath);
$this->line('Controller removed successfully.');
}else{
$this->line('No controller found.');
}
}
}
app/Console/Commands/RemoveModel.php
<?php
namespace App\Console\Commands;
use Illuminate\Console\Command;
class RemoveModel extends Command
{
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'remove:model {name}';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Remove the model class';
/**
* Create a new command instance.
*
* @return void
*/
public function __construct()
{
parent::__construct();
}
/**
* Execute the console command.
*
* @return mixed
*/
public function handle():void
{
$modelName = $this->argument('name').'.php';
$modelPath = base_path('app/').$modelName;
if(file_exists($modelPath)){
unlink($modelPath);
$this->line('Model removed successfully.');
}else{
            $this->line('No model found.');
}
}
}
I Hope this helps someone
A:
There is no artisan command for it. You have to do it manually.
Delete your model from the Models directory
Path : app\Models\yourmodel.php
Next, delete your migration file from the migrations folder
Path : database\migrations\yourmigrationfile.php
Note: If you have already migrated, you should also delete the table from your database. You can log into your phpMyAdmin panel and do it there.
A:
I had this problem. Adding the table name to the model file solved it for me:
class Company extends Model
{
    public $table = 'table_name';
}
|
How to delete a model using php artisan?
|
Is there a command to safely delete a model in Laravel 5? To create a model we use
php artisan make:model modelname
And that will create a model under app folder, and also a migration in database/migrations
But what I can't find is how to delete a model...
|
[
"Deleting a model: just delete the model under App/ or whatever other folder.\nDeleting a migration: if you have migrated it (meaning the database has suffered changes) you have two choices: \nThe \"project starting\"/ugly way is to migrate:rollback until the migration is undone (if it was the last migration you did, one rollback is enough, if not, you're gonna have to rollback a couple of times) then delete the migration file (the one inside the database/migrations folder. Important thing here: the migration's class will still be autoloader by composer. So you have to remove the migration class loading from vendor/composer/autoload_classmap.php. Maybe composer dumpautoload will work, it didn't for me though. If you have no important data in the DB and you can wipe it, delete the migration file, composer dumpautoload then run php artisan migrate:refresh. This will rollback every migration then migrate everything back in.\nThe \"this is in production and I messed up\" way: create another migration where the up method is dropping the first migration's table, down is creating it (basically the up method from the first migration). Leave the two migration files in there, don't remove them.\nIf you haven't migrated it, just delete the migration file, composer dumpautoload and if you have some class/file not found error, check if vendor/composer/autoload_classmap.php has the class of the file you just removed and delete the row there.\n",
"No command, just do it manually and its safe\n\nDelete the model first (if you don't) need the model any longer\nDelete the migration from ...database/migrations folder\nIf you have already migrated i.e if you have already run php artisan migrate, log into your phpmyadmin or SQL(whichever the case is) and in your database, delete the table created by the migration\nStill within your database, in the migrations table, locate the row with that migration file name and delete the row.\n\nWorks for me, hope it helps!\n",
"search in vendor/composer/autoload_classmap.php \nCtrl+F write modelname \ndelete allow edit this folder and delete model path\n",
"The problem can also arise when your database name is different from the one defined in .env file.\nDB_DATABASE=laravel\n\nBy default, database structure in .env sets database name as laravel. You can replace laravel with the name of your database.\n",
"Here is what I've created for my project to remove controller and model\n\napp/Console/Commands/RemoveController.php\n\n<?php\n\nnamespace App\\Console\\Commands;\n\nuse Illuminate\\Console\\Command;\n\nclass RemoveController extends Command\n{\n /**\n * The name and signature of the console command.\n *\n * @var string\n */\n protected $signature = 'remove:controller {name}';\n\n /**\n * The console command description.\n *\n * @var string\n */\n protected $description = 'Remove the controller class';\n\n /**\n * Create a new command instance.\n *\n * @return void\n */\n public function __construct()\n {\n parent::__construct();\n }\n\n /**\n * Execute the console command.\n *\n * @return mixed\n */\n public function handle():void\n {\n $controllerName = $this->argument('name').'.php';\n $controllerPath = base_path('app/Http/Controllers/').$controllerName;\n if(file_exists($controllerPath)){\n unlink($controllerPath);\n $this->line('Controller removed successfully.');\n }else{\n $this->line('No controller found.');\n }\n }\n}\n\n\napp/Console/Commands/RemoveModel.php\n\n<?php\n\nnamespace App\\Console\\Commands;\n\nuse Illuminate\\Console\\Command;\n\nclass RemoveModel extends Command\n{\n /**\n * The name and signature of the console command.\n *\n * @var string\n */\n protected $signature = 'remove:model {name}';\n\n /**\n * The console command description.\n *\n * @var string\n */\n protected $description = 'Remove the model class';\n\n /**\n * Create a new command instance.\n *\n * @return void\n */\n public function __construct()\n {\n parent::__construct();\n }\n\n /**\n * Execute the console command.\n *\n * @return mixed\n */\n public function handle():void\n {\n $modelName = $this->argument('name').'.php';\n $modelPath = base_path('app/').$modelName;\n if(file_exists($modelPath)){\n unlink($modelPath);\n $this->line('Model removed successfully.');\n }else{\n $this->line('No controller found.');\n }\n }\n}\n\nI Hope this helps someone\n",
"There is no any artisan command do it.You want to do it manually.\n\nYou want to delete your model from the Models directory\nPath : app\\Models\\yourmodel.php\n\nIn the next step you want to delete your migration file from migration folder\nPath : database\\migrations\\yourmigrationfile.php\n\n\nNote: Already, If you have migrated, you should want to delete table from your database.you can log into your phpmyadmin panel and you can to do it.\n",
"I had this problem. with add table name in model file, my problem was solved.\nclass Company extends Model\n{\npublic $table = 'table_name';\n\n}\n"
] |
[
54,
7,
0,
0,
0,
0,
0
] |
[
"You can delete model in App folder if you see this error (Model Already Exists!)\n\n"
] |
[
-1
] |
[
"laravel_5",
"model",
"php"
] |
stackoverflow_0030517098_laravel_5_model_php.txt
|
Q:
PyDev and Django: how to restart dev server?
I'm new to Django. I think I'm making a simple mistake.
I launched the dev server with Pydev:
RClick on project >> Django >> Custom
command >> runserver
The server came up, and everything was great. But now I'm trying to stop it, and can't figure out how. I stopped the process in the PyDev console, and closed Eclipse, but web pages are still being served from http://127.0.0.1:8000.
I launched and quit the server from the command line normally:
python manage.py runserver
But the server is still up. What am I doing wrong here?
A:
By default, the runserver command runs in autoreload mode, which runs in a separate process. This means that PyDev doesn't know how to stop it, and doesn't display its output in the console window.
If you run the command runserver --noreload instead, the auto-reloader will be disabled. Then you can see the console output and stop the server normally. However, this means that changes to your Python files won't be effective until you manually restart the server.
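For example, as the custom command in PyDev (or from a terminal):
python manage.py runserver --noreload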
A:
Run the project: 1. Right-click on the project (not a subfolder) 2. Run As > PyDev: Django
Terminate: 1. Click Terminate in the console window
The server is now down.
A:
I usually run it from the console. Running from PyDev adds unnecessary confusion and doesn't bring any benefit unless you need PyDev's GUI interactive debugging.
A:
Edit: Latest PyDev versions (since PyDev 3.4.1) no longer need any workaround:
i.e.: PyDev will properly kill subprocesses on a kill process operation and when debugging even with regular reloading on, PyDev will attach the debugger to the child processes.
Old answer (for PyDev versions older than 3.4.1):
Unfortunately, that's expected, as PyDev will simply kill the parent process (i.e.: as if instead of ctrl+C you kill the parent process in the task manager).
The solution would be editing Django itself so that the child process polls the parent process to know it's still alive and exit if it's not... see: How to make child process die after parent exits? for a reference.
After a quick look it seems related to django/utils/autoreload.py and the way it starts up things -- so, it'd be needed to start a thread that keeps seeing if the parent is alive and if it's not it kills the child process -- I've reported that as a bug in Django itself: https://code.djangoproject.com/ticket/16982
Note: as a workaround for PyDev, you can make Django allocate a new console (out of PyDev) while still running from PyDev (so, until a proper solution is available from Django, the patch below can be used to make the Django autoreload allocate a new console -- where you can properly use Ctrl+C).
Index: django/utils/autoreload.py
===================================================================
--- django/utils/autoreload.py (revision 16923)
+++ django/utils/autoreload.py (working copy)
@@ -98,11 +98,14 @@
def restart_with_reloader():
while True:
args = [sys.executable] + ['-W%s' % o for o in sys.warnoptions] + sys.argv
- if sys.platform == "win32":
- args = ['"%s"' % arg for arg in args]
new_environ = os.environ.copy()
new_environ["RUN_MAIN"] = 'true'
- exit_code = os.spawnve(os.P_WAIT, sys.executable, args, new_environ)
+
+ import subprocess
+ popen = subprocess.Popen(args, env=new_environ, creationflags=subprocess.CREATE_NEW_CONSOLE)
+ exit_code = popen.wait()
if exit_code != 3:
return exit_code
A:
Solution: create an interpreter error in some project file. This will cause the server to crash. Server can then be restarted as normal.
A:
If you operate on Windows using the CMD: Quit the server with CTRL+BREAK.
python manage.py runserver localhost:8000
A:
You can quit by pressing the Ctrl+Pause keys. Note that the Pause key might be called Break, and on some laptops it is produced with the combination Fn+F12. Hope this helps.
A:
Run sudo lsof -i:8000 to find the process listening on port 8000,
then run kill -9 <PID>, which should kill the process running that server.
Then you can run python manage.py runserver on that port again.
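If your lsof supports the -t flag (terse output, PIDs only), the two steps can be combined into one line:
kill -9 $(sudo lsof -t -i:8000)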
|
PyDev and Django: how to restart dev server?
|
I'm new to Django. I think I'm making a simple mistake.
I launched the dev server with Pydev:
RClick on project >> Django >> Custom
command >> runserver
The server came up, and everything was great. But now I'm trying to stop it, and can't figure out how. I stopped the process in the PyDev console, and closed Eclipse, but web pages are still being served from http://127.0.0.1:8000.
I launched and quit the server from the command line normally:
python manage.py runserver
But the server is still up. What am I doing wrong here?
|
[
"By default, the runserver command runs in autoreload mode, which runs in a separate process. This means that PyDev doesn't know how to stop it, and doesn't display its output in the console window.\nIf you run the command runserver --noreload instead, the auto-reloader will be disabled. Then you can see the console output and stop the server normally. However, this means that changes to your Python files won't be effective until you manually restart the server.\n",
"Run the project 1. Right click on the project (not subfolders) 2. Run As > Pydev:Django\nTerminate 1. Click terminate in console window\nThe server is down\n",
"I usually run it from console. Running from PyDev adds unnecessary confusion, and doesn't bring any benefit until you happen to use PyDev's GUI interactive debugging.\n",
"Edit: Latest PyDev versions (since PyDev 3.4.1) no longer need any workaround:\ni.e.: PyDev will properly kill subprocesses on a kill process operation and when debugging even with regular reloading on, PyDev will attach the debugger to the child processes.\n\nOld answer (for PyDev versions older than 3.4.1):\nUnfortunately, that's expected, as PyDev will simply kill the parent process (i.e.: as if instead of ctrl+C you kill the parent process in the task manager).\nThe solution would be editing Django itself so that the child process polls the parent process to know it's still alive and exit if it's not... see: How to make child process die after parent exits? for a reference.\nAfter a quick look it seems related to django/utils/autoreload.py and the way it starts up things -- so, it'd be needed to start a thread that keeps seeing if the parent is alive and if it's not it kills the child process -- I've reported that as a bug in Django itself: https://code.djangoproject.com/ticket/16982\nNote: as a workaround for PyDev, you can make Django allocate a new console (out of PyDev) while still running from PyDev (so, until a proper solution is available from Django, the patch below can be used to make the Django autoreload allocate a new console -- where you can properly use Ctrl+C).\nIndex: django/utils/autoreload.py\n===================================================================\n--- django/utils/autoreload.py (revision 16923)\n+++ django/utils/autoreload.py (working copy)\n@@ -98,11 +98,14 @@\n def restart_with_reloader():\n while True:\n args = [sys.executable] + ['-W%s' % o for o in sys.warnoptions] + sys.argv\n- if sys.platform == \"win32\":\n- args = ['\"%s\"' % arg for arg in args]\n new_environ = os.environ.copy()\n new_environ[\"RUN_MAIN\"] = 'true'\n- exit_code = os.spawnve(os.P_WAIT, sys.executable, args, new_environ)\n+\n+ import subprocess\n+ popen = subprocess.Popen(args, env=new_environ, creationflags=subprocess.CREATE_NEW_CONSOLE)\n+ exit_code = popen.wait()\n if exit_code != 3:\n return exit_code\n\n",
"Solution: create an interpreter error in some project file. This will cause the server to crash. Server can then be restarted as normal.\n",
"If you operate on Windows using the CMD: Quit the server with CTRL+BREAK.\npython manage.py runserver localhost:8000\n\n",
"you can quit by clicking Ctrl+ Pause keys. Note that the Pause key might be called Break and in some laptops it is made using the combination Fn + F12. Hope this might helps.\n",
"run sudo lsof -i:8000\nthen run kill -9 #PID should work to kill the processes running that server.\nthen you can python manage.py server on that port again\n"
] |
[
14,
5,
4,
3,
2,
1,
0,
0
] |
[] |
[] |
[
"devserver",
"django",
"eclipse",
"pydev",
"python"
] |
stackoverflow_0002746512_devserver_django_eclipse_pydev_python.txt
|
Q:
Define a `globalThis` symbol index type definition
Through the typings/global.d.ts I can define a type through declare module global { }. However, I am unaware how I would be able to define a symbol index through this syntax.
Imagine the following code, which does compile in JavaScript, but complains in strict TS (Playground link):
globalThis[Symbol.for('internal.fake')] = { loader: true };
console.log(globalThis[Symbol.for('internal.fake')]);
The following TS error can be seen:
Element implicitly has an 'any' type because expression of type
'symbol' can't be used to index type 'typeof globalThis'.(7053)
Would it be possible to define this in a typescript friendly way, without having to cheat using any assertions?
A:
Yes, it is possible to define this in a TypeScript-friendly way without using any assertions. You can use the keyof operator and the typeof operator to create a type that represents the symbol index type of the globalThis object.
Here is an example:
declare module global {
type GlobalSymbolIndex = keyof typeof globalThis;
}
globalThis[Symbol.for("internal.fake")] = { loader: true };
console.log(globalThis[Symbol.for("internal.fake")]);
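If the declaration above does not actually silence error TS7053 in your setup, one TypeScript 4.4+ approach is to widen globalThis with a symbol index signature. This is a sketch; it still uses a type assertion, but a much narrower one than any:
// Widen globalThis so arbitrary symbol keys are allowed (typed as unknown).
type SymbolIndexable = typeof globalThis & Record<symbol, unknown>;
const g = globalThis as SymbolIndexable;

g[Symbol.for('internal.fake')] = { loader: true };
console.log(g[Symbol.for('internal.fake')]);
Reads come back as unknown, so you still narrow them before use, which keeps the escape hatch contained.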
|
Define a `globalThis` symbol index type definition
|
Through the typings/global.d.ts I can define a type through declare module global { }. However, I am unaware how I would be able to define a symbol index through this syntax.
Imagine the following code, which does compile in JavaScript, but complains in strict TS (Playground link):
globalThis[Symbol.for('internal.fake')] = { loader: true };
console.log(globalThis[Symbol.for('internal.fake')]);
The following TS error can be seen:
Element implicitly has an 'any' type because expression of type
'symbol' can't be used to index type 'typeof globalThis'.(7053)
Would it be possible to define this in a typescript friendly way, without having to cheat using any assertions?
|
[
"Yes, it is possible to define this in a TypeScript-friendly way without using any assertions. You can use the keyof operator and the typeof operator to create a type that represents the symbol index type of the globalThis object.\nHere is an example:\ndeclare module global {\n type GlobalSymbolIndex = keyof typeof globalThis;\n}\n\nglobalThis[Symbol.for(\"internal.fake\")] = { loader: true };\nconsole.log(globalThis[Symbol.for(\"internal.fake\")]);\n\n"
] |
[
0
] |
[] |
[] |
[
"typescript"
] |
stackoverflow_0074464132_typescript.txt
|
Q:
Is it possible to use property in the src path
I would like to use @store.Name in the src="../Pandora.jpg".
At the moment my code is:
<img src="../Pandora.jpg" class="card-img-top" alt="@store.Name Logo">
but I want something like:
<img src="../@store.Name.jpg" class="card-img-top" alt="@store.Name Logo">
A:
It is possible with Razor syntax. Note that you need to use @() to append the string and avoid .jpg being treated as a nested property.
<img src="../@(store.Name).jpg" class="card-img-top" alt="@store.Name Logo">
Demo @ .NET Fiddle
A:
It is not possible to use a property in the src path like that. You would need to use a string concatenation or interpolation syntax to combine the property with the rest of the src path.
For example, using string concatenation inside an explicit Razor expression:
<img src="@("../" + store.Name + ".jpg")" class="card-img-top" alt="@store.Name Logo">
Or using C# string interpolation:
<img src="@($"../{store.Name}.jpg")" class="card-img-top" alt="@store.Name Logo">
You may also need to ensure that the property value is properly formatted for use in a file path, such as removing any spaces or special characters.
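For instance, a hypothetical sanitizing step (assuming spaces in store.Name should become underscores; the replacement rule is an assumption, not something from the question):
<img src="@($"../{store.Name.Replace(' ', '_')}.jpg")" class="card-img-top" alt="@store.Name Logo">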
|
Is it possible to use property in the src path
|
I would like to use @store.Name in the src="../Pandora.jpg".
At the moment my code is:
<img src="../Pandora.jpg" class="card-img-top" alt="@store.Name Logo">
but I want something like:
<img src="../@store.Name.jpg" class="card-img-top" alt="@store.Name Logo">
|
[
"It is possible with Razor syntax. Make sure that you need to use @() to append the string and avoid .jpg treat as the nested property.\n<img src=\"../@(store.Name).jpg\" class=\"card-img-top\" alt=\"@store.Name Logo\">\n\nDemo @ .NET Fiddle\n",
"It is not possible to use a property in the src path like that. You would need to use a string concatenation or interpolation syntax to combine the property with the rest of the src path.\nFor example, using string concatenation:\n<img src=\"../' + @store.Name + '.jpg\" class=\"card-img-top\" alt=\"@store.Name Logo\">\n\nOr using interpolation syntax:\n<img src=\"../${@store.Name}.jpg\" class=\"card-img-top\" alt=\"@store.Name Logo\">\n\nYou may also need to ensure that the property value is properly formatted for use in a file path, such as removing any spaces or special characters.\n"
] |
[
2,
0
] |
[] |
[] |
[
"c#",
"razor"
] |
stackoverflow_0074666415_c#_razor.txt
|
Q:
Hi I am new to python programming. I have written the following code but I keep getting this error. Can anyone help me at all please?
count = 1
total = 0
average = 0
array = []
while input("Enter q to quit or any other key to continue: ") != "q":
numlist = input('Enter number\n')
array.append(numlist)
try:
count = count + 1
total = total + float(numlist)
except:
count = count - 1
print('Enter a valid number')
continue
average = float(total) / float(count)
array.sort()
mid = len(array) // 2
res = (array[mid] + array[~mid]) / 2
print('Avg:', average)
print("The median is : ", res)
I get this following error:
Traceback (most recent call last):
File "<string>", line 22, in <module>
TypeError: unsupported operand type(s) for /: 'str' and 'int'
I was expecting to get 'enter a valid number' when the user enters anything but number.
A:
The input function returns a string even if you actually type a number:
https://docs.python.org/3/library/functions.html#input
You need to convert that string to a number before appending it to the array, for instance:
array.append(float(numlist))
but it should be inside the try/except block so your validation checks still work.
In this case you will be storing only actual numbers, not everything that has been typed.
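Putting that together, a minimal corrected sketch of the loop (converting before appending, so only valid numbers reach the list; count also starts at 0, since the original count = 1 overcounts by one):
count = 0
total = 0
array = []

while input("Enter q to quit or any other key to continue: ") != "q":
    numlist = input('Enter number\n')
    try:
        num = float(numlist)  # convert first; invalid input raises ValueError
    except ValueError:
        print('Enter a valid number')
        continue
    array.append(num)
    count = count + 1
    total = total + num

if array:  # guard against no input at all
    average = total / count
    array.sort()
    mid = len(array) // 2
    res = (array[mid] + array[~mid]) / 2
    print('Avg:', average)
    print("The median is : ", res)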
|
Hi I am new to python programming. I have written the following code but I keep getting this error. Can anyone help me at all please?
|
count = 1
total = 0
average = 0
array = []
while input("Enter q to quit or any other key to continue: ") != "q":
numlist = input('Enter number\n')
array.append(numlist)
try:
count = count + 1
total = total + float(numlist)
except:
count = count - 1
print('Enter a valid number')
continue
average = float(total) / float(count)
array.sort()
mid = len(array) // 2
res = (array[mid] + array[~mid]) / 2
print('Avg:', average)
print("The median is : ", res)
I get this following error:
Traceback (most recent call last):
File "<string>", line 22, in <module>
TypeError: unsupported operand type(s) for /: 'str' and 'int'
I was expecting to get 'enter a valid number' when the user enters anything but number.
|
[
"An input function is returning a string even though you actually type a number:\nhttps://docs.python.org/3/library/functions.html#input\nYou need to convert that string to number before appending to array, for instance:\narray.append(float(numlist))\n\nbut it should be in try / except block so your validation checks also work.\nIn this case you will be indexing only actual numbers, not everything that has been typed.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074666567_python.txt
|
Q:
how can i make media query work with min-with?
@charset "UTF-8";
/* import the basis style page */
@import url("body.css");
/* why is this not working? */
/* import alternative style 500px */
@media (min-width: 500px){
@import url("screen_layout_small.css");
}
The screen_layout_small.css file contains :
@charset "UTF-8";
body {
background-color: red;
}
The url "screen_layout_small.css" works when it is not in a @media (works when it is not in a responsive command ?)
I tryed to load it when width >= 500px but it doesn't work.
By the way it does not mather if I use min-with or max-with, the file does not load in the @media.
A:
I think you are using a wrong syntax. It should be:
@import url|string list-of-mediaqueries;
So in your case it should be:
@import "screen_layout_small.css" screen and (min-width: 500px);
A:
I think the mistake is in the syntax. Try the code that I mentioned below
@charset "UTF-8";
/* inport the basis style page */
@import url("body.css");
/* why is this not working???? */
/* import alternative style 500px*/
@media screen and (min-width: 500px){
@import url("screen_layout_small.css");
}
A:
@charset "UTF-8";
/* inport the basis style page */
@import url("body.css");
/* import alternative style 500px*/
@import "screen_layout_small.css" screen and (max-width: 500px);
This is the working code, thanks to Maria Romano.
|
how can i make media query work with min-with?
|
@charset "UTF-8";
/* import the basis style page */
@import url("body.css");
/* why is this not working? */
/* import alternative style 500px */
@media (min-width: 500px){
@import url("screen_layout_small.css");
}
The screen_layout_small.css file contains :
@charset "UTF-8";
body {
background-color: red;
}
The url "screen_layout_small.css" works when it is not in a @media (works when it is not in a responsive command ?)
I tryed to load it when width >= 500px but it doesn't work.
By the way it does not mather if I use min-with or max-with, the file does not load in the @media.
|
[
"I think you are using a wrong syntax. It should be:\n@import url|string list-of-mediaqueries;\n\nSo in your case it should be:\n@import \"screen_layout_small.css\" screen and (min-width: 500px);\n\n",
"I think the mistake is in the syntax. Try the code that I mentioned below\n@charset \"UTF-8\";\n/* inport the basis style page */\n@import url(\"body.css\");\n/* why is this not working???? */\n/* import alternative style 500px*/\n@media screen and (min-width: 500px){\n@import url(\"screen_layout_small.css\");\n}\n\n",
"@charset \"UTF-8\";\n/* inport the basis style page */\n@import url(\"body.css\");\n/* import alternative style 500px*/\n@import \"screen_layout_small.css\" screen and (max-width: 500px);\n\nthis is the working code thanks to maria romano\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"css",
"import",
"media",
"url"
] |
stackoverflow_0074625531_css_import_media_url.txt
|
Q:
Is Azure Notifications Hub a deceased service?
Two years ago, I barely managed to make this service work.
The documentation seems to be very dated, Microsoft isn't actively pushing this and the sample code Microsoft supplies is no longer using supported libraries.
Has anyone got access to or can they supply a valid sample Android/IOS sample code that uses the Azure notification Hub?
|
Is Azure Notifications Hub a deceased service?
|
Two years ago, I barely managed to make this service work.
The documentation seems to be very dated, Microsoft isn't actively pushing this and the sample code Microsoft supplies is no longer using supported libraries.
Has anyone got access to or can they supply a valid sample Android/IOS sample code that uses the Azure notification Hub?
|
[] |
[] |
[
"Azure Notification Hubs is still an active service offered by Microsoft Azure. While the documentation and sample code may be out of date, you can still use the service to send push notifications to your Android and iOS applications. You can find updated documentation and tutorials on how to use Azure Notification Hubs on the Microsoft Azure website. Additionally, you can reach out to Microsoft for support if you have any specific questions or issues with using the service.\n"
] |
[
-1
] |
[
"azure_notificationhub"
] |
stackoverflow_0074666619_azure_notificationhub.txt
|
Q:
Assigned a complex value in cupy RawKernel
I am a beginner learning how to use the GPU for parallel computation with Python and CuPy. I would like to implement code that simulates some physics problems requiring complex numbers, but I don't know how to manage them. Although there are examples in CuPy's official documentation, they only mention including the complex.cuh library and how to declare a complex variable. I can't find any example of how to assign a complex number correctly, nor how to call functions from the complex.cuh library to do calculations.
I am stuck on line 11 of this code. I want to make a complex number value equal to x[tId_x] + j*y[tId_y], where j is the imaginary unit. I tried several ways and none works, so I left this one here.
import cupy as cp
import time
add_kernel = cp.RawKernel(r'''
#include <cupy/complex.cuh>
extern "C" __global__
void test(double* x, double* y, complex<float>* z){
int tId_x = blockDim.x*blockIdx.x + threadIdx.x;
int tId_y = blockDim.y*blockIdx.y + threadIdx.y;
complex<float>* value = complex(x[tId_x],y[tId_y]);
z[tId_x*blockDim.y*gridDim.y+tId_y] = value;
}''',"test")
x = cp.random.rand(1,8,4096,dtype = cp.float32)
y = cp.random.rand(1,8,4096,dtype = cp.float32)
z = cp.zeros((4096,4096), dtype = cp.complex64)
t1 = time.time()
add_kernel((128,128),(32,32),(x,y,z))
print(time.time()-t1)
What is the proper way to assign a complex number in the RawKernel?
Thank you for answering this question!
A:
@plaeonix, thank you very much for your hint. I find out the answer.
This line:
complex<float>* value = complex(x[tId_x],y[tId_y])
should be replaced to:
complex<float> value = complex<float>(x[tId_x],y[tId_y])
Then the assignment of a complex number works.
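One more detail worth checking, since the arrays are created with dtype = cp.float32: the kernel declares its inputs as double*, which would misread the float32 buffers. A sketch of a matching signature, assuming the data stays float32:
extern "C" __global__
void test(const float* x, const float* y, complex<float>* z){
    int tId_x = blockDim.x*blockIdx.x + threadIdx.x;
    int tId_y = blockDim.y*blockIdx.y + threadIdx.y;
    complex<float> value = complex<float>(x[tId_x], y[tId_y]);
    z[tId_x*blockDim.y*gridDim.y + tId_y] = value;
}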
|
Assigned a complex value in cupy RawKernel
|
I am a beginner learning how to use the GPU for parallel computation with Python and CuPy. I would like to implement code that simulates some physics problems requiring complex numbers, but I don't know how to manage them. Although there are examples in CuPy's official documentation, they only mention including the complex.cuh library and how to declare a complex variable. I can't find any example of how to assign a complex number correctly, nor how to call functions from the complex.cuh library to do calculations.
I am stuck on line 11 of this code. I want to make a complex number value equal to x[tId_x] + j*y[tId_y], where j is the imaginary unit. I tried several ways and none works, so I left this one here.
import cupy as cp
import time
add_kernel = cp.RawKernel(r'''
#include <cupy/complex.cuh>
extern "C" __global__
void test(double* x, double* y, complex<float>* z){
int tId_x = blockDim.x*blockIdx.x + threadIdx.x;
int tId_y = blockDim.y*blockIdx.y + threadIdx.y;
complex<float>* value = complex(x[tId_x],y[tId_y]);
z[tId_x*blockDim.y*gridDim.y+tId_y] = value;
}''',"test")
x = cp.random.rand(1,8,4096,dtype = cp.float32)
y = cp.random.rand(1,8,4096,dtype = cp.float32)
z = cp.zeros((4096,4096), dtype = cp.complex64)
t1 = time.time()
add_kernel((128,128),(32,32),(x,y,z))
print(time.time()-t1)
What is the proper way to assign a complex number in the RawKernel?
Thank you for answering this question!
|
[
"@plaeonix, thank you very much for your hint. I find out the answer.\nThis line:\ncomplex<float>* value = complex(x[tId_x],y[tId_y])\nshould be replaced to:\ncomplex<float> value = complex<float>(x[tId_x],y[tId_y])\nThen the assignment of a complex number works.\n"
] |
[
1
] |
[] |
[] |
[
"cuda",
"cupy",
"python"
] |
stackoverflow_0074654285_cuda_cupy_python.txt
|
Q:
Dynamic class method parameter in typescript
I'm implementing an EventBus in TypeScript and want the event parameter of the emit method to be dynamically typed. How can I do that?
interface IEventBusListener {
(...params: any[]): void
}
class EventBus {
constructor(private listeners: Record<string | symbol, IEventBusListener[]> = {}) { }
on(event: string | symbol, callback: IEventBusListener) {
if (!this.listeners[event]) {
this.listeners[event] = [];
}
this.listeners[event].push(callback);
}
off(event: string | symbol, callback: IEventBusListener) {
if (!this.listeners[event]) {
throw new Error(`Нет события: ${event.toString()}`);
}
this.listeners[event] = this.listeners[event].filter(
listener => listener !== callback
);
}
emit<T extends keyof typeof this.listeners>(event: T, ...args: any[]) {
if (!this.listeners[event]) {
throw new Event(`Нет события: ${event.toString()}`);
}
this.listeners[event].forEach(listener => {
listener(...args);
});
}
}
I expect autocomplete and type checking, but it doesn't work.
A:
You should define a map relating each event name to its callback parameters. Like this:
class EventEmitter<T extends Record<string, any[]>> {
on<E extends keyof T>(eventName: E, callback: (...args: T[E]) => void) {
// ...
}
}
type ChildMap = {
'foo': [number]
'baz': [string]
}
class Child extends EventEmitter<ChildMap> {
}
const child = new Child();
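For the emit side the question asks about, the same map can type the arguments. A sketch extending the idea above (the listener storage shown here is an assumption, not part of the original answer):
class TypedEventBus<T extends Record<string, any[]>> {
    private listeners: Partial<{ [E in keyof T]: ((...args: T[E]) => void)[] }> = {};

    on<E extends keyof T>(eventName: E, callback: (...args: T[E]) => void) {
        const list = this.listeners[eventName] ?? (this.listeners[eventName] = []);
        list.push(callback);
    }

    emit<E extends keyof T>(eventName: E, ...args: T[E]) {
        this.listeners[eventName]?.forEach(listener => listener(...args));
    }
}

const bus = new TypedEventBus<ChildMap>();
bus.on('foo', n => console.log(n + 1)); // n is inferred as number
bus.emit('foo', 42);                    // OK
// bus.emit('foo', 'x');                // compile-time error: string is not number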
Or use eventemitter3 library
|
Dynamic class method parameter in typescript
|
I'm implementing an EventBus in TypeScript and want the event parameter of the emit method to be dynamically typed. How can I do that?
interface IEventBusListener {
(...params: any[]): void
}
class EventBus {
constructor(private listeners: Record<string | symbol, IEventBusListener[]> = {}) { }
on(event: string | symbol, callback: IEventBusListener) {
if (!this.listeners[event]) {
this.listeners[event] = [];
}
this.listeners[event].push(callback);
}
off(event: string | symbol, callback: IEventBusListener) {
if (!this.listeners[event]) {
throw new Error(`Нет события: ${event.toString()}`);
}
this.listeners[event] = this.listeners[event].filter(
listener => listener !== callback
);
}
emit<T extends keyof typeof this.listeners>(event: T, ...args: any[]) {
if (!this.listeners[event]) {
throw new Event(`Нет события: ${event.toString()}`);
}
this.listeners[event].forEach(listener => {
listener(...args);
});
}
}
I expect autocomplete and type checking, but it doesn't work.
|
[
"You should define the relations map between event name and callback parameters. Like this:\nclass EventEmitter<T extends Record<string, any[]>> {\n on<E extends keyof T>(eventName: E, callback: (...args: T[E]) => void) {\n // ...\n }\n}\n\ntype ChildMap = {\n 'foo': [number]\n 'baz': [string]\n}\nclass Child extends EventEmitter<ChildMap> {\n\n}\n\nconst child = new Child();\n\nOr use eventemitter3 library\n"
] |
[
0
] |
[] |
[] |
[
"typescript"
] |
stackoverflow_0074666507_typescript.txt
|
Q:
Lodash merge including undefined values
I'm trying to use Lodash to merge object A into object B, but the trouble I am having is that object A has some undefined values and I want these to be copied over to object B.
Lodash docs for _.merge() says:
"Recursively merges own enumerable properties of the source object(s), that don't resolve to undefined into the destination object."
Is there another function that can do this, or can it be easily overwritten?
EDIT A:
Sample input:
A = {
name: "Bob Smith",
job: "Racing Driver",
address: undefined
}
B = {
name: "Bob Smith",
job: "Web Developer",
address: "1 Regent Street, London",
phone: "0800 800 80"
}
Expected Output
B = {
name: "Bob Smith",
job: "Racing Driver",
address: undefined,
phone: "0800 800 80"
}
EDIT B:
Just to confirm, it needs to be a "deep" merge; objects may contain nested objects.
A:
Easiest would be to use a third-party package for this: https://github.com/unclechu/node-deep-extend, whose goal is only deep merging and nothing else.
A:
_.assign/_.extend will do that:
_.assign(B, A);
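Note that _.assign copies own enumerable properties even when their value is undefined, but it is a shallow copy, so for the nested case from EDIT B it replaces whole top-level values rather than merging them. With the sample input:
_.assign({}, B, A);
// => { name: "Bob Smith", job: "Racing Driver", address: undefined, phone: "0800 800 80" }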
A:
Thanks to @Bergi - https://stackoverflow.com/a/22581862/1828637 - assign will keep undefined. I wrote a customizer here using assignWith for a deep merge, as I use this in redux/react (assignWith, isPlainObject, and isEmpty are lodash imports):
function keepUnchangedRefsOnly(objValue, srcValue) {
if (objValue === undefined) { // do i need this?
return srcValue;
} else if (isPlainObject(objValue)) {
return assignWith({}, objValue, srcValue, keepUnchangedRefsOnly);
} else if (Array.isArray(objValue)) {
if (isEmpty(objValue) && !isEmpty(srcValue))return [...srcValue];
else if (!isEmpty(objValue) && isEmpty(srcValue)) return objValue;
else if (isEmpty(objValue) && isEmpty(srcValue)) return objValue; // both empty
else return [ ...objValue, ...srcValue ];
}
}
Usage like this - https://stackoverflow.com/a/49437903/1828637
A:
I've met this issue too. Just try replacing undefined with null instead.
Example:
const a = { something: 'has value' };
const b = { something: undefined };
const c = { something: null };
console.log(_.merge({}, a, b))
console.log(_.merge({}, a, c))
<script src="https://cdn.jsdelivr.net/npm/[email protected]/lodash.min.js"></script>
A:
FWIW, here is a small function I use as a lodash replacement:
function merge(dst: any, src: any, stack = new Map<any, any>()) {
if (dst === src || dst == null || src == null)
return src
if (stack.has(src))
return stack.get(src)
const dstTag = Object.prototype.toString.call(dst)
const srcTag = Object.prototype.toString.call(src)
if (dstTag !== srcTag || (dstTag !== '[object Object]' && dstTag !== '[object Array]'))
return src
stack.set(src, dst)
Object.keys(src).forEach(key =>
dst[key] = merge(dst[key], src[key], stack)
)
stack.delete(src)
return dst
}
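For instance, under these rules nested plain objects are merged and undefined source values are preserved:
const merged = merge({ a: { b: 1 } }, { a: { c: undefined } });
// merged is { a: { b: 1, c: undefined } }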
It fits my needs so far but, if you find anything wrong with it, tell me!
|
Lodash merge including undefined values
|
I'm trying to use Lodash to merge object A into object B, but the trouble I am having is that object A has some undefined values and I want these to be copied over to object B.
Lodash docs for _.merge() says:
"Recursively merges own enumerable properties of the source object(s), that don't resolve to undefined into the destination object."
Is there another function that can do this, or can it be easily overwritten?
EDIT A:
Sample input:
A = {
name: "Bob Smith",
job: "Racing Driver",
address: undefined
}
B = {
name: "Bob Smith",
job: "Web Developer",
address: "1 Regent Street, London",
phone: "0800 800 80"
}
Expected Output
B = {
name: "Bob Smith",
job: "Racing Driver",
address: undefined,
phone: "0800 800 80"
}
EDIT B:
Just to confirm, it needs to be a "deep" merge; objects may contain nested objects.
|
[
"Easiest would be to use 3rd package for this https://github.com/unclechu/node-deep-extend which goal is only deep merging and nothing else. \n",
"_.assign/_.extend will do that:\n_.assign(B, A);\n\n",
"Thanks to @Bergi - https://stackoverflow.com/a/22581862/1828637 - assign will keep undefined. I did a customizer here using assignWith for deep merge with customizer, as i use this in redux/react:\nfunction keepUnchangedRefsOnly(objValue, srcValue) {\n if (objValue === undefined) { // do i need this?\n return srcValue;\n } else if (isPlainObject(objValue)) {\n return assignWith({}, objValue, srcValue, keepUnchangedRefsOnly);\n } else if (Array.isArray(objValue)) {\n if (isEmpty(objValue) && !isEmpty(srcValue))return [...srcValue];\n else if (!isEmpty(objValue) && isEmpty(srcValue)) return objValue;\n else if (isEmpty(objValue) && isEmpty(srcValue)) return objValue; // both empty\n else return [ ...objValue, ...srcValue ];\n }\n}\n\nUsage like this - https://stackoverflow.com/a/49437903/1828637\n",
"I've met this issue like you. Just try replace undefined with null instead.\nExample:\n\n\nconst a = { something: 'has value' };\nconst b = { something: undefined };\nconst c = { something: null };\n\nconsole.log(_.merge({}, a, b))\nconsole.log(_.merge({}, a, c))\n<script src=\"https://cdn.jsdelivr.net/npm/[email protected]/lodash.min.js\"></script>\n\n\n\n",
"FWIW, here is a small function I use as a lodash replacement:\nfunction merge(dst: any, src: any, stack = new Map<any, any>()) {\n if (dst === src || dst == null || src == null)\n return src\n \n if (stack.has(src))\n return stack.get(src)\n \n const dstTag = Object.prototype.toString.call(dst)\n const srcTag = Object.prototype.toString.call(src)\n if (dstTag !== srcTag || (dstTag !== '[object Object]' && dstTag !== '[object Array]'))\n return src\n\n stack.set(src, dst)\n Object.keys(src).forEach(key =>\n dst[key] = merge(dst[key], src[key], stack)\n )\n stack.delete(src)\n \n return dst\n}\n\nIt fits my needs so far but, if you find anything wrong with it, tell me!\n"
] |
[
3,
1,
0,
0,
0
] |
[
"Check this out. It is a small gist containing a deep extend lodash method which you can use within your own application. This will also allow you to get the functionality you need without adding another dependency in your app (since you're already using lodash)\n"
] |
[
-1
] |
[
"javascript",
"lodash"
] |
stackoverflow_0022581220_javascript_lodash.txt
|
Q:
use provider in two screens
I have one problem with provider. I don't want to create a global provider; I mean a provider for a particular screen. And I want to use the same provider in two screens without creating another instance.
for example.
my provider (for particular screen)
class ServiceScreenProvider with ChangeNotifier {
final BuildContext _context;
ServiceScreenProvider(this._context);
}
my first screen (for input)
class ServiceScreen extends StatefulWidget {
const ServiceScreen({Key? key}) : super(key: key);
@override
_ServiceScreenState createState() => _ServiceScreenState();
}
class _ServiceScreenState extends State<ServiceScreen> {
@override
Widget build(BuildContext context) {
return ChangeNotifierProvider<ServiceScreenProvider>(
create: (ctx) => ServiceScreenProvider(ctx),
child: Column(
children: const [
ServiceLogo(),
FormWidget(),
],
),
);
}
}
Here I am creating the provider for the first time.
Now I want to use this provider in another screen
class ServiceDetailScreen extends StatefulWidget {
const ServiceDetailScreen({Key? key}) : super(key: key);
@override
_ServiceDetailScreenState createState() => _ServiceDetailScreenState();
}
class _ServiceDetailScreenState extends State<ServiceDetailScreen> {
@override
Widget build(BuildContext context) {
return Column(
children: const [
ServiceLogo(),
FormWidget(),
],
);
}
}
I am passing the context in the constructor of ServiceDetailScreen while opening the new ServiceDetailScreen, and I am using the ServiceScreenProvider.
Is there any other way to get the instance of ServiceScreenProvider without passing the context in the constructor?
thank you.
A:
The Provider.of method allows you to access the provider instance from any widget below the ChangeNotifierProvider in the widget tree:
class ServiceDetailScreen extends StatefulWidget {
const ServiceDetailScreen({Key? key}) : super(key: key);
@override
_ServiceDetailScreenState createState() => _ServiceDetailScreenState();
}
class _ServiceDetailScreenState extends State<ServiceDetailScreen> {
@override
Widget build(BuildContext context) {
return Column(
children: [
ServiceLogo(),
FormWidget(),
// Access the provider instance using the `Provider.of` method
ElevatedButton( // RaisedButton is deprecated; ElevatedButton is its replacement
child: const Text('Use provider'),
onPressed: () {
// listen: false is required when reading a provider outside of build
var provider = Provider.of<ServiceScreenProvider>(context, listen: false);
// Use the provider instance here
},
),
],
);
}
}
|
use provider in two screens
|
I have one problem with provider. I don't want to create a global provider; I mean a provider for a particular screen. And I want to use the same provider in two screens without creating another instance.
for example.
my provider (for particular screen)
class ServiceScreenProvider with ChangeNotifier {
final BuildContext _context;
ServiceScreenProvider(this._context);
}
my first screen (for input)
class ServiceScreen extends StatefulWidget {
const ServiceScreen({Key? key}) : super(key: key);
@override
_ServiceScreenState createState() => _ServiceScreenState();
}
class _ServiceScreenState extends State<ServiceScreen> {
@override
Widget build(BuildContext context) {
return ChangeNotifierProvider<ServiceScreenProvider>(
create: (ctx) => ServiceScreenProvider(ctx),
child: Column(
children: const [
ServiceLogo(),
FormWidget(),
],
),
);
}
}
Here I am creating the provider for the first time.
Now I want to use this provider in another screen
class ServiceDetailScreen extends StatefulWidget {
const ServiceDetailScreen({Key? key}) : super(key: key);
@override
_ServiceDetailScreenState createState() => _ServiceDetailScreenState();
}
class _ServiceDetailScreenState extends State<ServiceDetailScreen> {
@override
Widget build(BuildContext context) {
return Column(
children: const [
ServiceLogo(),
FormWidget(),
],
);
}
}
I am passing the context in the constructor of ServiceDetailScreen while opening the new ServiceDetailScreen, and I am using the ServiceScreenProvider.
Is there any other way to get the instance of ServiceScreenProvider without passing the context in the constructor?
thank you.
|
[
"Provider.of method allows you to access the provider instance from anywhere in your widget tree:\nclass ServiceDetailScreen extends StatefulWidget {\n const ServiceDetailScreen({Key? key}) : super(key: key);\n\n @override\n _ServiceDetailScreenState createState() => _ServiceDetailScreenState();\n}\n\nclass _ServiceDetailScreenState extends State<ServiceDetailScreen> {\n @override\n Widget build(BuildContext context) {\n return Column(\n children: [\n ServiceLogo(),\n FormWidget(),\n // Access the provider instance using the `Provider.of` method\n RaisedButton(\n onPressed: () {\n var provider = Provider.of<ServiceScreenProvider>(context);\n // Use the provider instance here\n },\n ),\n ],\n );\n }\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"flutter",
"flutter_provider"
] |
stackoverflow_0074666618_flutter_flutter_provider.txt
|
Q:
How to process inline CSV in powershell script?
I'm trying to avoid the extremely verbose hash maps and arrays, as commonly used in PowerShell. Why? Because I have 100's of lines, and it just doesn't make any sense to have to wrap every single line in @{name='foo'; id='bar'} etc., when all I need is a CSV type of array.
$header = @('name', 'id', 'type', 'loc')
$mycsv = @(
# name, id, type, loc
'Brave', 'Brave.Brave', 1, 'winget'
'Adobe Acrobat (64-bit)', '{AC76BA86-1033-1033-7760-BC15014EA700}', 2, ''
'GitHub CLI', 'GitHub.cli', 3, 'C:\portable'
)
# Do some magic here to set the CSV / hash headers so I can use them as shown below
Foreach ($app in $mycsv) {
Write-Host "App Name: $app.name"
Write-Host "App Type: $app.type"
Write-Host "App id : $app.id"
Write-Host "App Loc : $app.type"
Write-Host ("-"*40)
}
I'm sure you see where I am going.
So how can I process the inline CSV line-by-line using the header names?
Expected output:
App Name: Brave
App Type: 1
App id : Brave.Brave
App Loc : winget
----------------------------------------
...
UPDATE: 2022-12-03
The ultimate solution is the following very brief and non-verbose code:
$my = @'
name,id,type,loc
Brave, Brave.Brave,1,winget
"Adobe Acrobat (64-bit)",{AC76BA86-1033-1033-7760-BC15014EA700},2,
GitHub CLI,GitHub.cli,,C:\portable
'@
ConvertFrom-Csv $my | % {
Write-Host "App Name: $($_.name)"
Write-Host "App Type: $($_.type)"
Write-Host "App id : $($_.id)"
Write-Host "App Loc : $($_.loc)"
Write-Host $("-"*40)
}
A:
You can use an in-memory, i.e. string representation of CSV data using a here-string and parse it into objects with ConvertFrom-Csv:
# This creates objects ([pscustomobject] instances) with properties
# named for the fields in the header line (the first line), i.e:
# .name, .id. .type, and .loc
# NOTE:
# * The whitespace around the fields is purely for *readability*.
# * If any field values contain "," themselves, enclose them in "..."
$mycsv =
@'
name, id, type, loc
Brave, Brave.Brave, 1, winget
Adobe Acrobat (64-bit), {AC76BA86-1033-1033-7760-BC15014EA700}, 2,
GitHub CLI, GitHub.cli, 3, C:\portable
'@ | ConvertFrom-Csv
$mycsv | Format-List then provides the desired output (without Format-List, you'd get implicit Format-Table formatting, because the objects have no more than 4 properties).
As an aside: Format-List in essence provides the for-display formatting you've attempted with your loop of Write-Host calls; if you really need the latter approach, note that, as pointed out in Walter Mitty's answer, you need to enclose property-access expressions such as $_.name in $(...) in order to expand as such inside an expandable (double-quoted) PowerShell string ("...") - see this answer for a systematic overview of the syntax of PowerShell's expandable strings (string interpolation).
Note:
This approach is convenient:
It allows you to omit quoting, unless needed, namely only if a field value happens to contain , itself.
Use "..." (double-quoting) around field values that themselves contain , ('...', i.e. single-quoting does not have syntactic meaning in CSV data, and any ' characters are retained verbatim).
Should such a field additionally contain " chars., escape them as ""
It allows you to use incidental whitespace for more readable formatting, as shown above.
You may also use a separator other than , (e.g., |) in the input and pass it to ConvertFrom-Csv via the -Delimiter parameter.
Note: CSV data is in general untyped, which means that ConvertFrom-Csv (as well as Import-Csv) creates objects whose properties are all strings ([string]-typed).
Optional reading: A custom CSV notation that enables creation of typed properties:
Convenience function ConvertFrom-CsvTyped (source code below) overcomes the limitation of ConvertFrom-Csv invariably creating only string-typed properties, by enabling a custom header notation that supports preceding each column name in the header line with a type literal; e.g. [int] ID (see this answer for a systematic overview of PowerShell's type literals, which can refer to any .NET type).
This enables you to create (non-string) typed properties from the input CSV, as long as the target type's values can be represented as numbers or string literals, which includes:
Numeric types ([int], [long], [double], [decimal], ...)
Date and time-related types [datetime], [datetimeoffset], and [timespan]
[bool] (use 0 and 1 as the column values)
To test whether a given type can be used, cast it from a sample number or string, e.g.: [timespan] '01:00' or [byte] 0x40
Examples - note the type literals preceding the 2nd and third column names, [int] and [datetime] :
@'
Name, [int] ID, [datetime] Timestamp
Forty-two, 0x2a, 1970-01-01
Forty-three, 0x2b, 1970-01-02
'@ | ConvertFrom-CsvTyped
Output - note how the hex. numbers were recognized as such (and formatted as decimals by default), and how the data strings were recognized as [datetime] instances:
Name ID Timestamp
---- -- ---------
Forty-two 42 1/1/1970 12:00:00 AM
Forty-three 43 1/2/1970 12:00:00 AM
Adding -AsSourceCode to the call above allows you to output the parsed objects as a PowerShell source code string, namely as an array of [pscustomobject] literals:
@'
Name, [int] ID, [datetime] Timestamp
Forty-two, 0x2a, 1970-01-01
Forty-three, 0x2b, 1970-01-02
'@ | ConvertFrom-CsvTyped -AsSourceCode
Output - note that if you were to use this in a script or as input to Invoke-Expression (for testing only), you'd get the same objects and for-display output as above:
@(
[pscustomobject] @{ Name = 'Forty-two'; ID = [int] 0x2a; Timestamp = [datetime] '1970-01-01' }
[pscustomobject] @{ Name = 'Forty-three'; ID = [int] 0x2b; Timestamp = [datetime] '1970-01-02' }
)
ConvertFrom-CsvTyped source code:
function ConvertFrom-CsvTyped {
<#
.SYNOPSIS
Converts CSV data to objects with typed properties;
.DESCRIPTION
This command enhances ConvertFrom-Csv as follows:
* Header fields (column names) may be preceded by type literals in order
to specify a type for the properties of the resulting objects, e.g. "[int] Id"
* With -AsSourceCode, the data can be transformed to an array of
[pscustomobject] literals.
.PARAMETER Delimiter
The single-character delimiter (separator) that separates the column values.
"," is the (culture-invariant) default.
.PARAMETER AsSourceCode
Instead of outputting the parsed CSV data as objects, output them as
as source-code representations in the form of an array of [pscustomobject] literals.
.EXAMPLE
"Name, [int] ID, [datetime] Timestamp`nForty-two, 0x40, 1970-01-01Z" | ConvertFrom-CsvTyped
Parses the CSV input into an object with typed properties, resulting in the following for-display output:
Name ID Timestamp
---- -- ---------
Forty-two 64 12/31/1969 7:00:00 PM
.EXAMPLE
"Name, [int] ID, [datetime] Timestamp`nForty-two, 0x40, 1970-01-01Z" | ConvertFrom-CsvTyped -AsSourceCode
Transforms the CSV input into an equivalent source-code representation, expressed
as an array of [pscustomobject] literals:
@(
[pscustomobject] @{ Name = 'Forty-two'; ID = [int] 0x40; Timestamp = [datetime] '1970-01-01Z' }
)
#>
[CmdletBinding(PositionalBinding = $false)]
param(
[Parameter(Mandatory, ValueFromPipeline)]
[string[]] $InputObject,
[char] $Delimiter = ',',
[switch] $AsSourceCode
)
begin {
$allLines = ''
}
process {
if (-not $allLines) {
$allLines = $InputObject -join "`n"
}
else {
$allLines += "`n" + ($InputObject -join "`n")
}
}
end {
$header, $dataLines = $allLines -split '\r?\n'
# Parse the header line in order to derive the column (property) names.
$colNames = ($header, $header | ConvertFrom-Csv -ErrorAction Stop -Delimiter $Delimiter)[0].psobject.Properties.Name
[string[]] $colTypeNames = , 'string' * $colNames.Count
[type[]] $colTypes = , $null * $colNames.Count
$mustReType = $false; $mustRebuildHeader = $false
if (-not $dataLines) { throw "No data found after the header line; input must be valid CSV data." }
foreach ($i in 0..($colNames.Count - 1)) {
if ($colNames[$i] -match '^\[([^]]+)\]\s*(.*)$') {
if ('' -eq $Matches[2]) { throw "Missing column name after type specifier '[$($Matches[1])]'" }
if ($Matches[1] -notin 'string', 'System.String') {
$mustReType = $true
$colTypeNames[$i] = $Matches[1]
try {
$colTypes[$i] = [type] $Matches[1]
}
catch { throw }
}
$mustRebuildHeader = $true
$colNames[$i] = $Matches[2]
}
}
if ($mustRebuildHeader) {
$header = $(foreach ($colName in $colNames) { if ($colName -match [regex]::Escape($Delimiter)) { '"{0}"' -f $colName.Replace('"', '""') } else { $colName } }) -join $Delimiter
}
if ($AsSourceCode) {
# Note: To make the output suitable for direct piping to Invoke-Expression (which is helpful for testing),
# a *single* string must be output.
(& {
"@("
& { $header; $dataLines } | ConvertFrom-Csv -Delimiter $Delimiter | ForEach-Object {
@"
[pscustomobject] @{ $(
$(foreach ($i in 0..($colNames.Count-1)) {
if (($propName = $colNames[$i]) -match '\W') {
$propName = "'{0}'" -f $propName.Replace("'", "''")
}
$isString = $colTypes[$i] -in $null, [string]
$cast = if (-not $isString) { '[{0}] ' -f $colTypeNames[$i] }
$value = $_.($colNames[$i])
if ($colTypes[$i] -in [bool] -and ($value -as [int]) -notin 0, 1) { Write-Warning "'$value' is interpreted as `$true - use 0 or 1 to represent [bool] values." }
if ($isString -or $null -eq ($value -as [double])) { $value = "'{0}'" -f $(if ($null -ne $value) { $value.Replace("'", "''") }) }
'{0} = {1}{2}' -f $colNames[$i], $cast, $value
}) -join '; ') }
"@
}
")"
}) -join "`n"
}
else {
if (-not $mustReType) {
# No type-casting needed - just pass the data through to ConvertFrom-Csv
& { $header; $dataLines } | ConvertFrom-Csv -ErrorAction Stop -Delimiter $Delimiter
}
else {
# Construct a class with typed properties matching the CSV input dynamically
$i = 0
@"
class __ConvertFromCsvTypedHelper {
$(
$(foreach ($i in 0..($colNames.Count-1)) {
' [{0}] ${{{1}}}' -f $colTypeNames[$i], $colNames[$i]
}) -join "`n"
)
}
"@ | Invoke-Expression
# Pass the data through to ConvertFrom-Csv and cast the results to the helper type.
try {
[__ConvertFromCsvTypedHelper[]] (& { $header; $dataLines } | ConvertFrom-Csv -ErrorAction Stop -Delimiter $Delimiter)
}
catch { $_ }
}
}
}
}
A:
To process inline CSV in a PowerShell script, you can use the ConvertFrom-Csv cmdlet to convert the CSV data into objects with properties that you can use in your script. Here is an example of how you could use this cmdlet to process the CSV data in your script:
$header = @('name', 'id', 'type', 'loc')
$mycsv = @(
    # name, id, type, loc
    'Brave,Brave.Brave,1,winget'
    'Adobe Acrobat (64-bit),{AC76BA86-1033-1033-7760-BC15014EA700},2,'
    'GitHub CLI,GitHub.cli,3,C:\portable'
)
# Convert the CSV data into objects with properties
$apps = $mycsv | ConvertFrom-Csv -Header $header
Foreach ($app in $apps) {
Write-Host "App Name: $($app.name)"
Write-Host "App Type: $($app.type)"
Write-Host "App id : $($app.id)"
Write-Host "App Loc : $($app.loc)"
Write-Host ("-"*40)
}
This script uses the ConvertFrom-Csv cmdlet to convert the inline CSV data into objects with properties that match the values in the $header variable. It then uses a foreach loop to iterate over the objects in the $apps variable and prints the values of the properties for each object.
Note: In this example, the -Header parameter is needed precisely because the CSV data has no header row of its own; without it, ConvertFrom-Csv would treat the first data line as the header. Also note that ConvertFrom-Csv expects each pipeline input to be one complete CSV line, which is why each row above is written as a single comma-separated string. If you include the header as the first line, you can omit -Header entirely. For example:
$mycsv = @(
    'name,id,type,loc'
    'Brave,Brave.Brave,1,winget'
    'Adobe Acrobat (64-bit),{AC76BA86-1033-1033-7760-BC15014EA700},2,'
    'GitHub CLI,GitHub.cli,3,C:\portable'
)

# Convert the CSV data into objects with properties
$apps = $mycsv | ConvertFrom-Csv
This uses the ConvertFrom-Csv cmdlet to convert the inline CSV data into objects whose property names come from the first line. The same foreach loop then works unchanged.
A:
Here are a few techniques that might help you use data in CSV format.
I've changed your input a little. Instead of defining a separate header, I've included the header record as the first line of the CSV data; that's what ConvertFrom-CSV expects. I also changed single quotes into double quotes. And I omitted one field completely.
The first output shows what happens if you feed the output of ConvertFrom-CSV to format-List. I don't recommend that you do this if your plan is to use the data in variables. format-list is suitable for display, but not further processing.
The second output mimics your sample output. The here string contains various subexpressions, each of which can access the current data via the automatic variable $_.
Last, I show you the members of the pipeline stream. Note the four properties that got their names from your field names.
$mycsv = @"
name, id, type, loc
"Brave", "Brave.Brave", 1, "winget"
"Adobe Acrobat (64-bit)", "{AC76BA86-1033-1033-7760-BC15014EA700}", 2,
"GitHub CLI", "GitHub.cli", 3, "C:\portable"
"@
ConvertFrom-CSV $mycsv | Format-List
ConvertFrom-Csv $mycsv | % {@"
App Name: $($_.name)
App Type: $($_.type)
App id : $($_.id)
App Loc : $($_.loc)
$("-"*40)
"@
}
ConvertFrom-CSV $mycsv | gm
|
How to process inline CSV in powershell script?
|
I'm trying to avoid the extremely verbose hash maps and arrays, as commonly used in PowerShell. Why? Because I have 100's of lines, and it just doesn't make any sense to have to wrap every single line in @{name='foo'; id='bar'} etc., when all I need is a CSV type of array.
$header = @('name', 'id', 'type', 'loc')
$mycsv = @(
# name, id, type, loc
'Brave', 'Brave.Brave', 1, 'winget'
'Adobe Acrobat (64-bit)', '{AC76BA86-1033-1033-7760-BC15014EA700}', 2, ''
'GitHub CLI', 'GitHub.cli', 3, 'C:\portable'
)
# Do some magic here to set the CSV / hash headers so I can use them as shown below
Foreach ($app in $mycsv) {
Write-Host "App Name: $app.name"
Write-Host "App Type: $app.type"
Write-Host "App id : $app.id"
Write-Host "App Loc : $app.type"
Write-Host ("-"*40)
}
I'm sure you see where I am going.
So how can I process the inline CSV line-by-line using the header names?
Expected output:
App Name: Brave
App Type: 1
App id : Brave.Brave
App Loc : winget
----------------------------------------
...
UPDATE: 2022-12-03
The ultimate solution is the following very brief and non-verbose code:
$my = @'
name,id,type,loc
Brave, Brave.Brave,1,winget
"Adobe Acrobat (64-bit)",{AC76BA86-1033-1033-7760-BC15014EA700},2,
GitHub CLI,GitHub.cli,,C:\portable
'@
ConvertFrom-Csv $my | % {
Write-Host "App Name: $($_.name)"
Write-Host "App Type: $($_.type)"
Write-Host "App id : $($_.id)"
Write-Host "App Loc : $($_.loc)"
Write-Host $("-"*40)
}
|
[
"\nYou can use an in-memory, i.e. string representation of CSV data using a here-string and parse it into objects with ConvertFrom-Csv:\n# This creates objects ([pscustomobject] instances) with properties\n# named for the fields in the header line (the first line), i.e: \n# .name, .id. .type, and .loc\n# NOTE: \n# * The whitespace around the fields is purely for *readability*.\n# * If any field values contain \",\" themselves, enclose them in \"...\"\n$mycsv =\n@'\n name, id, type, loc\n Brave, Brave.Brave, 1, winget\n Adobe Acrobat (64-bit), {AC76BA86-1033-1033-7760-BC15014EA700}, 2,\n GitHub CLI, GitHub.cli, 3, C:\\portable\n'@ | ConvertFrom-Csv\n\n$mycsv | Format-List then provides the desired output (without Format-List, you'd get implicit Format-Table formatting, because the objects have no more than 4 properties).\n\nAs an aside: Format-List in essence provides the for-display formatting you've attempted with your loop of Write-Host calls; if you really need the latter approach, note that, as pointed out in Walter Mitty's answer, you need to enclose property-access expressions such as $_.name in $(...) in order to expand as such inside an expandable (double-quoted) PowerShell string (\"...\") - see this answer for a systematic overview of the syntax of PowerShell's expandable strings (string interpolation).\n\nNote:\n\nThis approach is convenient:\n\nIt allows you to omit quoting, unless needed, namely only if a field value happens to contain , itself.\n\nUse \"...\" (double-quoting) around field values that themselves contain , ('...', i.e. single-quoting does not have syntactic meaning in CSV data, and any ' characters are retained verbatim).\n\nShould such a field additionally contain \" chars., escape them as \"\"\n\n\n\n\nIt allows you to use incidental whitespace for more readable formatting, as shown above.\n\n\n\nYou may also use a separator other than , (e.g., |) in the input and pass it to ConvertFrom-Csv via the -Delimiter parameter.\n\nNote: CSV data is in general untyped, which means that ConvertFrom-Csv (as well as Import-Csv) creates objects whose properties are all strings ([string]-typed).\n\n\n\nOptional reading: A custom CSV notation that enables creation of typed properties:\nConvenience function ConvertFrom-CsvTyped (source code below) overcomes the limitation of ConvertFrom-Csv invariably creating only string-typed properties, by enabling a custom header notation that supports preceding each column name in the header line with a type literal; e.g. [int] ID (see this answer for a systematic overview of PowerShell's type literals, which can refer to any .NET type).\nThis enables you to create (non-string) typed properties from the input CSV, as long as the target type's values can be represented as numbers or string literals, which includes:\n\nNumeric types ([int], [long], [double], [decimal], ...)\nDate and time-related types [datetime], [datetimeoffset], and [timespan]\n[bool] (use 0 and 1 as the column values)\nTo test whether a given type can be used, cast it from a sample number or string, e.g.: [timespan] '01:00' or [byte] 0x40\n\nExamples - note the type literals preceding the 2nd and third column names, [int] and [datetime] :\n@'\n Name, [int] ID, [datetime] Timestamp\n Forty-two, 0x2a, 1970-01-01\n Forty-three, 0x2b, 1970-01-02\n'@ | ConvertFrom-CsvTyped\n\nOutput - note how the hex. 
numbers were recognized as such (and formatted as decimals by default), and how the data strings were recognized as [datetime] instances:\nName ID Timestamp\n---- -- ---------\nForty-two 42 1/1/1970 12:00:00 AM\nForty-three 43 1/2/1970 12:00:00 AM\n\nAdding -AsSourceCode to the call above allows you to output the parsed objects as a PowerShell source code string, namely as an array of [pscustomobject] literals:\n@'\n Name, [int] ID, [datetime] Timestamp\n Forty-two, 0x2a, 1970-01-01\n Forty-three, 0x2b, 1970-01-02\n'@ | ConvertFrom-CsvTyped -AsSourceCode\n\nOutput - note that if you were to use this in a script or as input to Invoke-Expression (for testing only), you'd get the same objects and for-display output as above:\n@(\n [pscustomobject] @{ Name = 'Forty-two'; ID = [int] 0x2a; Timestamp = [datetime] '1970-01-01' }\n [pscustomobject] @{ Name = 'Forty-three'; ID = [int] 0x2b; Timestamp = [datetime] '1970-01-02' }\n)\n\n\nConvertFrom-CsvTyped source code:\nfunction ConvertFrom-CsvTyped {\n <#\n.SYNOPSIS\n Converts CSV data to objects with typed properties;\n.DESCRIPTION\n This command enhances ConvertFrom-Csv as follows:\n * Header fields (column names) may be preceded by type literals in order\n to specify a type for the properties of the resulting objects, e.g. \"[int] Id\"\n * With -AsSourceCode, the data can be transformed to an array of \n [pscustomobject] literals.\n\n.PARAMETER Delimiter\n The single-character delimiter (separator) that separates the column values.\n \",\" is the (culture-invariant) default.\n\n.PARAMETER AsSourceCode\n Instead of outputting the parsed CSV data as objects, output them as\n as source-code representations in the form of an array of [pscustomobject] literals.\n\n.EXAMPLE\n \"Name, [int] ID, [datetime] Timestamp`nForty-two, 0x40, 1970-01-01Z\" | ConvertFrom-CsvTyped\n \n Parses the CSV input into an object with typed properties, resulting in the following for-display output:\n Name ID Timestamp\n ---- -- ---------\n Forty-two 64 12/31/1969 7:00:00 PM \n\n .EXAMPLE\n \"Name, [int] ID, [datetime] Timestamp`nForty-two, 0x40, 1970-01-01Z\" | ConvertFrom-CsvTyped -AsSourceCode\n \n Transforms the CSV input into an equivalent source-code representation, expressed\n as an array of [pscustomobject] literals:\n @(\n [pscustomobject] @{ Name = 'Forty-two'; ID = [int] 0x40; Timestamp = [datetime] '1970-01-01Z' }\n )\n#>\n\n [CmdletBinding(PositionalBinding = $false)]\n param(\n [Parameter(Mandatory, ValueFromPipeline)]\n [string[]] $InputObject,\n [char] $Delimiter = ',',\n [switch] $AsSourceCode\n )\n begin {\n $allLines = ''\n }\n process {\n if (-not $allLines) {\n $allLines = $InputObject -join \"`n\"\n }\n else {\n $allLines += \"`n\" + ($InputObject -join \"`n\")\n }\n }\n end {\n\n $header, $dataLines = $allLines -split '\\r?\\n'\n\n # Parse the header line in order to derive the column (property) names.\n $colNames = ($header, $header | ConvertFrom-Csv -ErrorAction Stop -Delimiter $Delimiter)[0].psobject.Properties.Name\n [string[]] $colTypeNames = , 'string' * $colNames.Count\n [type[]] $colTypes = , $null * $colNames.Count\n $mustReType = $false; $mustRebuildHeader = $false\n\n if (-not $dataLines) { throw \"No data found after the header line; input must be valid CSV data.\" }\n\n foreach ($i in 0..($colNames.Count - 1)) {\n if ($colNames[$i] -match '^\\[([^]]+)\\]\\s*(.*)$') {\n if ('' -eq $Matches[2]) { throw \"Missing column name after type specifier '[$($Matches[1])]'\" }\n if ($Matches[1] -notin 'string', 'System.String') {\n $mustReType = 
$true\n $colTypeNames[$i] = $Matches[1]\n try {\n $colTypes[$i] = [type] $Matches[1]\n }\n catch { throw }\n }\n $mustRebuildHeader = $true\n $colNames[$i] = $Matches[2]\n }\n }\n if ($mustRebuildHeader) {\n $header = $(foreach ($colName in $colNames) { if ($colName -match [regex]::Escape($Delimiter)) { '\"{0}\"' -f $colName.Replace('\"', '\"\"') } else { $colName } }) -join $Delimiter\n }\n\n if ($AsSourceCode) {\n # Note: To make the output suitable for direct piping to Invoke-Expression (which is helpful for testing),\n # a *single* string must be output.\n (& {\n \"@(\"\n & { $header; $dataLines } | ConvertFrom-Csv -Delimiter $Delimiter | ForEach-Object {\n @\"\n [pscustomobject] @{ $(\n $(foreach ($i in 0..($colNames.Count-1)) {\n if (($propName = $colNames[$i]) -match '\\W') {\n $propName = \"'{0}'\" -f $propName.Replace(\"'\", \"''\")\n }\n $isString = $colTypes[$i] -in $null, [string]\n $cast = if (-not $isString) { '[{0}] ' -f $colTypeNames[$i] }\n $value = $_.($colNames[$i])\n if ($colTypes[$i] -in [bool] -and ($value -as [int]) -notin 0, 1) { Write-Warning \"'$value' is interpreted as `$true - use 0 or 1 to represent [bool] values.\" }\n if ($isString -or $null -eq ($value -as [double])) { $value = \"'{0}'\" -f $(if ($null -ne $value) { $value.Replace(\"'\", \"''\") }) }\n '{0} = {1}{2}' -f $colNames[$i], $cast, $value\n }) -join '; ') }\n\"@\n }\n \")\"\n }) -join \"`n\"\n }\n else {\n if (-not $mustReType) {\n # No type-casting needed - just pass the data through to ConvertFrom-Csv\n & { $header; $dataLines } | ConvertFrom-Csv -ErrorAction Stop -Delimiter $Delimiter\n }\n else {\n # Construct a class with typed properties matching the CSV input dynamically\n $i = 0\n @\"\nclass __ConvertFromCsvTypedHelper {\n$(\n $(foreach ($i in 0..($colNames.Count-1)) {\n ' [{0}] ${{{1}}}' -f $colTypeNames[$i], $colNames[$i]\n }) -join \"`n\"\n)\n}\n\"@ | Invoke-Expression\n\n # Pass the data through to ConvertFrom-Csv and cast the results to the helper type.\n try {\n [__ConvertFromCsvTypedHelper[]] (& { $header; $dataLines } | ConvertFrom-Csv -ErrorAction Stop -Delimiter $Delimiter)\n }\n catch { $_ }\n }\n }\n }\n}\n\n",
"To process inline CSV in a PowerShell script, you can use the ConvertFrom-Csv cmdlet to convert the CSV data into objects with properties that you can use in your script. Here is an example of how you could use this cmdlet to process the CSV data in your script:\n$header = @('name', 'id', 'type', 'loc')\n\n$mycsv = @(\n # name, id, type, loc\n 'Brave', 'Brave.Brave', 1, 'winget'\n 'Adobe Acrobat (64-bit)', '{AC76BA86-1033-1033-7760-BC15014EA700}', 2, ''\n 'GitHub CLI', 'GitHub.cli', 3, 'C:\\portable'\n)\n\n# Convert the CSV data into objects with properties\n$apps = $mycsv | ConvertFrom-Csv -Header $header\n\nForeach ($app in $apps) {\n Write-Host \"App Name: $($app.name)\"\n Write-Host \"App Type: $($app.type)\"\n Write-Host \"App id : $($app.id)\"\n Write-Host \"App Loc : $($app.loc)\"\n Write-Host (\"-\"*40)\n}\n\nThis script uses the ConvertFrom-Csv cmdlet to convert the inline CSV data into objects with properties that match the values in the $header variable. It then uses a foreach loop to iterate over the objects in the $apps variable and prints the values of the properties for each object.\nNote: In this example, the ConvertFrom-Csv cmdlet assumes that the first row of the CSV data contains the headers, which is why we need to specify the -Header parameter when calling the cmdlet. If your CSV data does not have headers, you can specify the property names using the -Property parameter instead. For example:\n$mycsv = @(\n # name, id, type, loc\n 'Brave', 'Brave.Brave', 1, 'winget'\n 'Adobe Acrobat (64-bit)', '{AC76BA86-1033-1033-7760-BC15014EA700}', 2, ''\n 'GitHub CLI', 'GitHub.cli', 3, 'C:\\portable'\n)\n\n# Convert the CSV data into objects with properties\n$apps = $mycsv | ConvertFrom-Csv -Property @('name', 'id', 'type', 'loc')\n\nThis script uses the ConvertFrom-Csv cmdlet to convert the inline CSV data into objects with properties that match the values specified in the -Property parameter. It then uses a foreach loop to iterate over the objects in the $apps variable and prints the values of the properties for each object.\n",
"Here are a few techniques that might help you use data in CSV format.\nI've changed your input a little. Instead of defining a separate header, I've included the header record as the first line of the CSV data. that's what ConvertFrom-CSV expects. I also changed single quotes into double quotes. And I omitted one field completely.\nThe first output shows what happens if you feed the output of ConvertFrom-CSV to format-List. I don't recommend that you do this if your plan is to use the data in variables. format-list is suitable for display, but not further processing.\nThe second output mimics your sample output. The here string contains various subexpressions, each of which can access the current data via the automatic variable $_.\nLast, I show you the members of the pipeline stream. Note the four properties that got their names from your field names.\n$mycsv = @\"\nname, id, type, loc\n\"Brave\", \"Brave.Brave\", 1, \"winget\"\n\"Adobe Acrobat (64-bit)\", \"{AC76BA86-1033-1033-7760-BC15014EA700}\", 2,\n\"GitHub CLI\", \"GitHub.cli\", 3, \"C:\\portable\"\n\"@\n\nConvertFrom-CSV $mycsv | Format-List\n\nConvertFrom-Csv $mycsv | % {@\"\nApp Name: $($_.name)\nApp Type: $($_.type)\nApp id : $($_.id)\nApp Loc : $($_.loc)\n$(\"-\"*40)\n\"@\n}\n\nConvertFrom-CSV $mycsv | gm\n\n"
] |
[
2,
1,
1
] |
[] |
[] |
[
"arrays",
"csv",
"powershell"
] |
stackoverflow_0074663556_arrays_csv_powershell.txt
|
Q:
Express.js routes not found after separating them into their respective files
I have multiple Express routes that I have separated into their own respective files. I have exported the Express router from each of these files, imported them into my index.js file, and set the following:
app.use('/', routes1)
app.use('/', routes2)
app.use('/', routes3)
app.use('/', routes4)
After trying to visit some of the routes in these files, I get a 404 error. I do not want to append anything after the '/'; all I want is to neatly separate the routes into their own files.
I would like to be able to access all routes after the '/', similar to below:
app.use('/', routes1, routes2, routes3, routes4)
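For reference, a minimal sketch of what one such route file is assumed to look like (the file name routes1.js and the /one path are illustrative, not the actual code):
// routes1.js - exports an express.Router instance
const express = require('express');
const router = express.Router();

// This path is relative to the mount point passed to app.use() in index.js
router.get('/one', (req, res) => {
  res.send('Hello from routes1');
});

// Export the router itself; omitting this line is a common cause of 404s
module.exports = router;

Mounting several routers at '/' should work as long as no two routers define the same path and each file actually exports its router.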
|
Express.js routes not found after separating them into their respective files
|
I have multiple Express routes that I have separated into their own respective files. I have exported the Express router from each of these files, imported them into my index.js file, and set the following:
app.use('/', routes1)
app.use('/', routes2)
app.use('/', routes3)
app.use('/', routes4)
After trying to visit some of the routes in these files, I get a 404 error. I do not want to append anything after the '/'; all I want is to neatly separate the routes into their own files.
I would like to be able to access all routes after the '/', similar to below:
app.use('/', routes1, routes2, routes3, routes4)
|
[] |
[] |
[
"You can try using the express.Router() method to create a new router instance for each file, and then use the app.use() method to mount the router onto the desired path.\nFor example, in your routes1 file:\nconst express = require('express');\nconst router = express.Router();\n\nrouter.get('/', (req, res) => {\n res.send('Hello from routes1!');\n});\n\nmodule.exports = router;\n\nThen, in your index.js file, you can import the router and mount it onto the desired path:\nconst express = require('express');\nconst app = express();\nconst routes1 = require('./routes1');\nconst routes2 = require('./routes2');\nconst routes3 = require('./routes3');\nconst routes4 = require('./routes4');\n\napp.use('/routes1', routes1);\napp.use('/routes2', routes2);\napp.use('/routes3', routes3);\napp.use('/routes4', routes4);\n\napp.listen(3000, () => {\n console.log('Server listening on port 3000');\n});\n\nNow, when you visit http://localhost:3000/routes1, you should see the response from the routes1 file. Similarly, you can access the routes from the other files by visiting the corresponding path, i.e. http://localhost:3000/routes2, http://localhost:3000/routes3, etc.\n"
] |
[
-1
] |
[
"express",
"javascript"
] |
stackoverflow_0074666620_express_javascript.txt
|