content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137 chars)
---|---|---|---|---|---|---|---|---|
Q:
Why " NumExpr defaulting to 8 threads. " warning message shown in python?
I am trying to use the lux library in Python to get visualization recommendations. It shows a warning like "NumExpr defaulting to 8 threads".
import pandas as pd
import numpy as np
import opendatasets as od
pip install lux-api
import lux
import matplotlib
And then:
link = "https://www.kaggle.com/noordeen/insurance-premium-prediction"
od.download(link)
df = pd.read_csv("./insurance-premium-prediction/insurance.csv")
But everything is working fine. Is there any problem, or should I ignore it?
The warning looks like this (screenshot omitted):
A:
This is not really something to worry about in most cases. The warning comes from this function; here is the most important part:
...
    env_configured = False
    n_cores = detect_number_of_cores()
    if 'NUMEXPR_MAX_THREADS' in os.environ:
        # The user has configured NumExpr in the expected way, so suppress logs.
        env_configured = True
        n_cores = MAX_THREADS
...
    if 'NUMEXPR_NUM_THREADS' in os.environ:
        requested_threads = int(os.environ['NUMEXPR_NUM_THREADS'])
    elif 'OMP_NUM_THREADS' in os.environ:
        requested_threads = int(os.environ['OMP_NUM_THREADS'])
    else:
        requested_threads = n_cores

    if not env_configured:
        log.info('NumExpr defaulting to %d threads.' % n_cores)
So if neither NUMEXPR_MAX_THREADS nor NUMEXPR_NUM_THREADS nor OMP_NUM_THREADS is set, NumExpr uses as many threads as there are cores (even though the documentation says "at most 8", that is not what I see in the code).
You might want to use a different number of threads, e.g. more when really huge matrices are computed and the extra parallelism pays off, or fewer when it brings no improvement. Set the environment variables either in the shell or prior to importing numexpr, e.g.
import os
os.environ['NUMEXPR_MAX_THREADS'] = '4'
os.environ['NUMEXPR_NUM_THREADS'] = '2'
import numexpr as ne
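For completeness, a quick way to check that the setting takes effect; this is only a sketch and assumes the environment variables are set before numexpr is imported anywhere in the process (the array and expression are arbitrary examples):
import os

# Must happen before the first import of numexpr in the process.
os.environ['NUMEXPR_MAX_THREADS'] = '4'
os.environ['NUMEXPR_NUM_THREADS'] = '2'

import numexpr as ne
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
# With the variables set, no "NumExpr defaulting to N threads." line should be logged.
print(ne.evaluate('a * 2 + 1')[:5])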
| Why " NumExpr defaulting to 8 threads. " warning message shown in python? | I am trying to use the lux library in python to get visualization recommendations. It shows warnings like NumExpr defaulting to 8 threads..
import pandas as pd
import numpy as np
import opendatasets as od
pip install lux-api
import lux
import matplotlib
And then:
link = "https://www.kaggle.com/noordeen/insurance-premium-prediction"
od.download(link)
df = pd.read_csv("./insurance-premium-prediction/insurance.csv")
But, everything is working fine. Is there any problem or should I ignore it?
Warning shows like this:
| [
"This is not really something to worry about in most cases. The warning comes from this function, here the most important part:\n...\n env_configured = False\n n_cores = detect_number_of_cores()\n if 'NUMEXPR_MAX_THREADS' in os.environ:\n # The user has configured NumExpr in the expected way, so suppress logs.\n env_configured = True\n n_cores = MAX_THREADS\n...\n if 'NUMEXPR_NUM_THREADS' in os.environ:\n requested_threads = int(os.environ['NUMEXPR_NUM_THREADS'])\n elif 'OMP_NUM_THREADS' in os.environ:\n requested_threads = int(os.environ['OMP_NUM_THREADS'])\n else:\n requested_threads = n_cores\n if not env_configured:\n log.info('NumExpr defaulting to %d threads.'%n_cores)\n\nSo if neither NUMEXPR_MAX_THREADS nor NUMEXPR_NUM_THREADS nor OMP_NUM_THREADS are set, NumExpr uses so many threads as there are cores (even if the documentation says \"at most 8\", yet this is not what I see in the code).\nYou might want to use another number of threads, e.g. while really huge matrices are calculated and one could profit from it or to use less threads, because there is no improvement. Set the environment variables either in the shell or prior to importing numexpr, e.g.\nimport os\nos.environ['NUMEXPR_MAX_THREADS'] = '4'\nos.environ['NUMEXPR_NUM_THREADS'] = '2'\nimport numexpr as ne \n\n"
] | [
0
] | [] | [] | [
"numexpr",
"numpy",
"python",
"warnings"
] | stackoverflow_0071248521_numexpr_numpy_python_warnings.txt |
Q:
How to receive a number at each step and continue until zero is entered; the program should then print each entered number as many times as its value
Here I have this code:
n = int(input())
tmp = n
while tmp > 0:
    print(n)
    tmp -= 1
But for example, when I have:
3
2
1
0
As entered nums, it just prints:
3
3
3
But I need to print:
3
3
3
2
2
1
Here I have this code:
n = int(input())
tmp = n
while tmp > 0:
    print(n)
    tmp -= 1
But there is a problem, because it just prints:
3
3
3
Instead of:
3
3
3
2
2
1
A:
You need to print(tmp) instead of print(n); this will print 3 2 1.
To get 3 3 3 2 2 1 you need to change your code more:
n = int(input())
tmp = n
while tmp > 0:
    for _ in range(tmp):
        print(tmp)
    tmp -= 1
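For example, assuming an input of 3, the corrected loop above should print:
3
3
3
2
2
1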
A:
You need to add an if statement: when tmp reaches 0, subtract 1 from n and assign tmp back to n:
n = int(input('input : '))
tmp = n
while tmp > 0:
    print(n)
    tmp -= 1
    if tmp == 0:
        n -= 1
        tmp = n
Output:
# input : 3
# 3
# 3
# 3
# 2
# 2
# 1
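If, as the title suggests, several numbers are entered one at a time until a zero appears (e.g. 3, 2, 1, 0), a minimal sketch of the outer reading loop could look like this (this loop is an assumption based on the question's example, not part of either answer above):

while True:
    n = int(input())
    if n == 0:          # stop once zero is entered
        break
    for _ in range(n):  # print the entered number n times
        print(n)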
| How to receive a num at each step and continue until zero is entered; then this program should print each entered num as its own num | Here I have this code:
N = int(input())
Tmp=n
While tmp>0 :
Print(n)
Tmp-=1
But for ex: when I have:
3
2
1
0
As entered nums, it just prints:
3
3
3
But I need to print:
3
3
3
2
2
1
Here I have this code:
N = int(input())
Tmp=n
While tmp>0 :
Print(n)
Tmp-=1
But there is a problem!bc it just prints
3
3
3
Instead of:
3
3
3
2
2
1
| [
"you need to print(tmp) instead of print(n)\nthis will print 3 2 1.\nTo get 3 3 3 2 2 1 you need to change your code more:\nn = int(input())\n\ntmp=n\n\nwhile tmp > 0:\n for _ in range(tmp)\n print(tmp)\n tmp -= 1\n\n\n",
"You need to add if statement when tmp equals 0 then subtract n by 1 and assign tmp back to n:\nn = int(input('input : '))\ntmp = n\nwhile tmp > 0:\n print(n)\n tmp -= 1\n if tmp == 0:\n n -= 1\n tmp = n\n\nOutput:\n# input : 3\n# 3\n# 3\n# 3\n# 2\n# 2\n# 1\n\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074655549_python.txt |
Q:
Spreading out shift assignments in constraint solver (ortools)
I used the Google OR-Tools Employee Scheduling script (thanks, by the way) to make an on-call scheduler. Everything works fine and it is doing what it is supposed to. It makes sure each person works about the same number of "shifts" (two-week periods), it lets certain shifts be requested, and I added a constraint so that it won't let someone work shifts back to back.
Everyone gets the same number of shifts (as far as possible):
df['Name'].value_counts()
Out[42]:
Jeff 7
Bubba 7
Sarah 6
Scott 6
Name: Name, dtype: int64
One thing I notice is that it will use up a person as much as it can before moving on to the next. E.g. it will go 1-2-1-2-1-3...3-4-3-2-3-4, as opposed to 1-2-3-4-1-2-3-4...
print(df)
Name Date Shift
0 Sarah 01-09-2022 On Call
1 Scott 01-23-2022 On Call
2 Sarah 02-06-2022 On Call
3 Scott 02-20-2022 On Call
4 Sarah 03-06-2022 On Call
5 Jeff 03-20-2022 On Call
6 Sarah 04-03-2022 On Call
7 Jeff 04-17-2022 On Call
8 Sarah 05-01-2022 On Call
9 Jeff 05-15-2022 On Call
10 Sarah 05-29-2022 On Call
11 Jeff 06-12-2022 On Call
12 Bubba 06-26-2022 On Call
13 Jeff 07-10-2022 On Call
14 Bubba 07-24-2022 On Call
15 Jeff 08-07-2022 On Call
16 Scott 08-21-2022 On Call
17 Bubba 09-04-2022 On Call
18 Jeff 09-18-2022 On Call
19 Bubba 10-02-2022 On Call
20 Scott 10-16-2022 On Call
21 Bubba 10-30-2022 On Call
22 Scott 11-13-2022 On Call
23 Bubba 11-27-2022 On Call
24 Scott 12-11-2022 On Call
25 Bubba 12-25-2022 On Call
(See how it kind of burns up one person--like Sarah at the start. I would expect Scott to be a lot like this, because of the -1 in request to not be on call during a large stretch--but a more even spread amongst everyone else would be ideal).
So I have two questions:
Is there a way to make it distribute the people more evenly?
Also, can I add another level of constraint in here by identifying certain time periods that contain holidays and then equally distribute those as well (kind of like a shift inside a shift)?
Here is my script:
# %% imports
import pandas as pd
from ortools.sat.python import cp_model
# %% Data for the model
num_employees = 4
num_shifts = 1
num_oncall_shifts = 26
all_employees = range(num_employees)
all_shifts = range(num_shifts)
all_oncall_shifts = range(num_oncall_shifts)
dict_shift_name = {0: 'On Call'}
dict_emp_name = {0: 'Bubba', 1: 'Scott', 2: 'Jeff', 3: 'Sarah'}
dict_dates = {
0: '01-09-2022',
1: '01-23-2022',
2: '02-06-2022',
3: '02-20-2022',
4: '03-06-2022',
5: '03-20-2022',
6: '04-03-2022',
7: '04-17-2022',
8: '05-01-2022',
9: '05-15-2022',
10: '05-29-2022',
11: '06-12-2022',
12: '06-26-2022',
13: '07-10-2022',
14: '07-24-2022',
15: '08-07-2022',
16: '08-21-2022',
17: '09-04-2022',
18: '09-18-2022',
19: '10-02-2022',
20: '10-16-2022',
21: '10-30-2022',
22: '11-13-2022',
23: '11-27-2022',
24: '12-11-2022',
25: '12-25-2022'
}
shift_requests = [
[
#Employee 0 Bubba
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0], [0], [0], [0], [0], [0], [0], [0], [0], [0], [0], [0],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[0], [0], [0], [0], [-4], [0], [0], [0], [0], [0], [0], [0],
#12/11 12/25
[0], [0]
],
[
#Employee 1 Scott
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[-1],[-1],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[-1],[-1],[-1],[-1],[-1],[-1],[-1],[0],[0],[0],[0],[0],
#12/11 12/25
[0],[0]
],
[
#Employee 2 Jeff
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[0],[0],[0],[0],[-2],[0],[0],[0],[0],[0],[0],[0],
#12/11 12/25
[0],[0]
],
[
#Employee 3 Sarah
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[0],[0],[0],[0],[-3],[0],[0],[0],[0],[0],[0],[0],
#12/11 12/25
[0],[0]
],
]
# dataframe
df = pd.DataFrame(columns=['Name', 'Date', 'Shift'])
# %% Create the Model
model = cp_model.CpModel()
# %% Create the variables
# Shift variables. Creates shift variables.
# shifts[(n, d, s)]: employee 'n' works shift 's' on on-call period 'd'.
shifts = {}
for n in all_employees:
    for d in all_oncall_shifts:
        for s in all_shifts:
            shifts[(n, d, s)] = model.NewBoolVar('shift_n%id%is%i' % (n, d, s))
# %% Add constraints
# Each shift is assigned to exactly one employee.
for d in all_oncall_shifts:
    for s in all_shifts:
        model.AddExactlyOne(shifts[(n, d, s)] for n in all_employees)

# Each employee works at most one shift per on-call period.
for n in all_employees:
    for d in all_oncall_shifts:
        model.AddAtMostOne(shifts[(n, d, s)] for s in all_shifts)
# Try to distribute the shifts evenly, so that each employee works
# min_shifts_per_employee shifts. If this is not possible, because the total
# number of shifts is not divisible by the number of employees, some employees
# will be assigned one more shift.
min_shifts_per_employee = (num_shifts * num_oncall_shifts) // num_employees
if num_shifts * num_oncall_shifts % num_employees == 0:
    max_shifts_per_employee = min_shifts_per_employee
else:
    max_shifts_per_employee = min_shifts_per_employee + 1

for n in all_employees:
    num_shifts_worked = 0
    for d in all_oncall_shifts:
        for s in all_shifts:
            num_shifts_worked += shifts[(n, d, s)]
    model.Add(min_shifts_per_employee <= num_shifts_worked)
    model.Add(num_shifts_worked <= max_shifts_per_employee)
# "penalize" working shift back to back
for d in all_employees:
for b in all_oncall_shifts[:-1]:
for r in all_shifts:
for r1 in all_shifts:
model.AddImplication(shifts[(d, b, r)], shifts[(d, b+1, r1)].Not())
# %% Objective
model.Maximize(
sum(shift_requests[n][d][s] * shifts[(n, d, s)] for n in all_employees
for d in all_oncall_shifts for s in all_shifts))
# %% Solve
# Creates the solver and solve.
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.OPTIMAL:
    print('Solution:')
    for d in all_oncall_shifts:
        print('On Call Starts: ', dict_dates[d])
        for n in all_employees:
            for s in all_shifts:
                if solver.Value(shifts[(n, d, s)]) == 1:
                    if shift_requests[n][d][s] == 1:
                        print(dict_emp_name[n], ' is ', dict_shift_name[s], '(requested).')
                    else:
                        print(dict_emp_name[n], ' is ', dict_shift_name[s],
                              '(not requested).')
                    list_append = [dict_emp_name[n], dict_dates[d], dict_shift_name[s]]
                    df.loc[len(df)] = list_append
    print()
    print(f'Number of shift requests met = {solver.ObjectiveValue()}',
          f'(out of {num_employees * min_shifts_per_employee})')
else:
    print('No optimal solution found !')
# %% Stats
print('\nStatistics')
print(' - conflicts: %i' % solver.NumConflicts())
print(' - branches : %i' % solver.NumBranches())
print(' - wall time: %f s' % solver.WallTime())
I get that it is difficult to program "fairness" into something, so any help would be greatly appreciated.
*** edit--added examples ***
A:
There seem to be frequent complaints about lack of documentation, but there is some available on the Google OR-Tools site at https://developers.google.com/optimization .
I learned a lot from the user manual from the old Google OR-Tools ConstraintSolver, to be found at https://www.scribd.com/document/482135694/user-manual-A4-v-0-5-2-pdf or Google "or-tools user_manual_A4.v.0.5.2 Nikolaj van Omme Laurent Perron Vincent Furnon". Even if you're using the new solver, a lot of concepts from that document still apply.
The Google team maintains a list of recommended literature at https://developers.google.com/optimization/support/resources
To do your two-level optimization:
Define a new variable firstObjective and constrain it to be equal to the expression of your optimization value:
# here I simply use 1000 as the horizon for whatever maximum value would be plausible
firstObjective = model.NewIntVar(0, 1000, "Objective1")
model.Add(firstObjective == sum(shift_requests[n][d][s] * shifts[(n, d, s)] for n in all_employees
                                for d in all_oncall_shifts for s in all_shifts))
model.Maximize(firstObjective)
Then after solving, retrieve the value for firstObjective, and save it in a new Python integer bestFirstObjective (not an IntVar).
Now in a new model2, add all the original variables and constraints, but instead of calling Maximize on the first objective, do this:
model2.Add(firstObjective == bestFirstObjective)
In this new model, construct the variables and constraints you need for the subsidiary objective, and then add:
model2.Maximize(secondObjective)
and solve.
As Laurent Perron mentioned in Multiple objective functions with binary variables Google OR-tools, you can add the optimized solution to the original model as a hint during the second solution, that should speed things up. See https://developers.google.com/optimization/reference/python/sat/python/cp_model#addhint.
I'm not sure if the second model is really needed, it might suffice to reuse the first one, it depends on whether calling Maximize a second time replaces the original objective or confuses the model, I simply don't know which is the case.
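Putting the two phases together, a rough, untested skeleton of the solve sequence might look like this. It assumes, for simplicity, that both objectives and the helper variables are built on the same model, and that calling Maximize a second time replaces the objective, which, as noted above, is left open:

solver = cp_model.CpSolver()

# Phase 1: maximize the number of fulfilled shift requests (status checks omitted).
model.Maximize(firstObjective)
solver.Solve(model)
bestFirstObjective = int(solver.Value(firstObjective))  # plain Python int, not an IntVar

# Phase 2: pin the first objective at its optimum, then optimize the spread.
model.Add(firstObjective == bestFirstObjective)
model.Maximize(secondObjective)
solver.Solve(model)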
In your last comment it sounds like you need more help to construct a measure for "evenly distributed", something like trying to maximize the lowest interval between two consecutive shifts of any employee. The challenge here is to compute the length of this interval from the Booleans that describe whether a shift is worked or not.
Let's assume that the number of days between successive shifts should be "evenly distributed". This would be the case if the shortest break between any two shifts for all the employees would be as large as possible.
One approach would be to first construct a Boolean which is true if an employee worked on day d (pseudo-code here, I haven't actually tested anything):
# For each employee e and day d:
workedOnDay[e, d] = model2.NewBoolVar("Worked on day xxx")
model2.AddMaxEquality(workedOnDay[e, d], {shifts[(e, d, s)] for s in all_shifts})
Create a new IntVar offDaysCounter for each employee/day. If the employee worked on the previous day (d-1), constrain its value to be 0. If the employee did not work on the previous day, constrain it to be the value of the previous day for that employee plus 1. This could be done something like this:
On the first day there is no "previous day" so constrain offDaysCounter to be an arbitrary large number without condition.
# For each employee e
offDaysCounter[e, 0] = model.NewIntVar(num_oncall_shifts, num_oncall_shifts, "OffDaysCounter xxx")
On the subsequent days:
# For each employee e and day d (except the first day)
offDaysCounter[e, d] = model.NewIntVar(0, 2 * num_oncall_shifts, "OffDaysCounter xxx")
model2.Add(offDaysCounter[e, d] == 0).OnlyEnforceIf(workedOnDay[e, (d - 1)])
model2.Add(offDaysCounter[e, d] == offDaysCounter[e, (d - 1)] + 1).OnlyEnforceIf(workedOnDay[e, (d - 1)].Not())
The variable will count up on the off days and be reset to 0 on the worked days; its value just before the next worked day is the length of the break.
Now we can compute the actual number of off days between shifts. Create a new IntVar offDays for each employee/day. If the employee did not work on a day, set its value to be equal to offDays for the following day. If the employee worked on a day, set its value to be equal to offDaysCounter for the same day. This variable will now have a value on each day equal to the break between the last and next worked day relative to this day.
# For each employee e and day d (except the last day):
offDays[e, d] = model.NewIntVar(0, 2 * num_oncall_shifts, "OffDays xxx")
model.Add(offDays[e, d] == offDays[e, (d + 1)]).OnlyEnforceIf(workedOnDay[e, d].Not())
model.Add(offDays[e, d] == offDaysCounter[e, d]).OnlyEnforceIf(workedOnDay[e, d])
On the last day, there is no following day, so set the value of offDays to an arbitrary large number (like the total number of days) if the day was not worked, otherwise to offDaysCounter[e, d].
The result could look something like this for a given employee:
day             0   1   2   3   4   5   6   7   8
workedOnDay     1   0   0   1   0   1   0   0   0
offDaysCounter  18  0   1   2   0   1   0   1   2
offDays         18  2   2   2   1   1   18  18  18
Now we can construct the objective: maximize the minimum interval between any employee's shifts:
secondObjective = model.NewIntVar(0, 2 * num_oncall_shifts, "second objective")
model.AddMinEquality(secondObjective, [offDays[e, d] for d in all_oncall_shifts for e in all_employees])
model.Maximize(secondObjective)
As I said, I haven't actually tested anything...
| Spreading out shift assignments in constraint solver (ortools) | I used the Google OR-Tools Employee Scheduling script (thanks by the way) to make a on-call scheduler. Everything works fine and it is doing what it is supposed to. It makes sure each person works about the same amount of "shifts" (two week periods), it lets certain shifts be requested and I added a constraint where it won't let someone work shifts back to back.
Everyone gets the same amount of shifts (as much as possible):
df['Name'].value_counts()
Out[42]:
Jeff 7
Bubba 7
Sarah 6
Scott 6
Name: Name, dtype: int64
One thing I notice is that it will use up a person as much as it can before moving on to the next. E.g it will go 1-2-1-2-1-3...3-4-3-2-3-4. As opposed to 1-2-3-4-1-2-3-4...
print(df)
Name Date Shift
0 Sarah 01-09-2022 On Call
1 Scott 01-23-2022 On Call
2 Sarah 02-06-2022 On Call
3 Scott 02-20-2022 On Call
4 Sarah 03-06-2022 On Call
5 Jeff 03-20-2022 On Call
6 Sarah 04-03-2022 On Call
7 Jeff 04-17-2022 On Call
8 Sarah 05-01-2022 On Call
9 Jeff 05-15-2022 On Call
10 Sarah 05-29-2022 On Call
11 Jeff 06-12-2022 On Call
12 Bubba 06-26-2022 On Call
13 Jeff 07-10-2022 On Call
14 Bubba 07-24-2022 On Call
15 Jeff 08-07-2022 On Call
16 Scott 08-21-2022 On Call
17 Bubba 09-04-2022 On Call
18 Jeff 09-18-2022 On Call
19 Bubba 10-02-2022 On Call
20 Scott 10-16-2022 On Call
21 Bubba 10-30-2022 On Call
22 Scott 11-13-2022 On Call
23 Bubba 11-27-2022 On Call
24 Scott 12-11-2022 On Call
25 Bubba 12-25-2022 On Call
(See how it kind of burns up one person--like Sarah at the start. I would expect Scott to be a lot like this, because of the -1 in request to not be on call during a large stretch--but a more even spread amongst everyone else would be ideal).
So I have two questions:
Is there a way to make it distribute the people more evenly?
Also, can I add another level of constraint in here by identifying certain time periods that contain holidays and then equally distribute those as well (kind of like a shift inside a shift)?
Here is my script:
# %% imports
import pandas as pd
from ortools.sat.python import cp_model
# %% Data for the model
num_employees = 4
num_shifts = 1
num_oncall_shifts = 26
all_employees = range(num_employees)
all_shifts = range(num_shifts)
all_oncall_shifts = range(num_oncall_shifts)
dict_shift_name = {0: 'On Call'}
dict_emp_name = {0: 'Bubba', 1: 'Scott', 2: 'Jeff', 3: 'Sarah'}
dict_dates = {
0: '01-09-2022',
1: '01-23-2022',
2: '02-06-2022',
3: '02-20-2022',
4: '03-06-2022',
5: '03-20-2022',
6: '04-03-2022',
7: '04-17-2022',
8: '05-01-2022',
9: '05-15-2022',
10: '05-29-2022',
11: '06-12-2022',
12: '06-26-2022',
13: '07-10-2022',
14: '07-24-2022',
15: '08-07-2022',
16: '08-21-2022',
17: '09-04-2022',
18: '09-18-2022',
19: '10-02-2022',
20: '10-16-2022',
21: '10-30-2022',
22: '11-13-2022',
23: '11-27-2022',
24: '12-11-2022',
25: '12-25-2022'
}
shift_requests = [
[
#Employee 0 Bubba
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0], [0], [0], [0], [0], [0], [0], [0], [0], [0], [0], [0],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[0], [0], [0], [0], [-4], [0], [0], [0], [0], [0], [0], [0],
#12/11 12/25
[0], [0]
],
[
#Employee 1 Scott
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[-1],[-1],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[-1],[-1],[-1],[-1],[-1],[-1],[-1],[0],[0],[0],[0],[0],
#12/11 12/25
[0],[0]
],
[
#Employee 2 Jeff
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[0],[0],[0],[0],[-2],[0],[0],[0],[0],[0],[0],[0],
#12/11 12/25
[0],[0]
],
[
#Employee 3 Sarah
#1/09 1/23 2/06 2/20 3/06 3/20 4/03 4/17 5/01 5/15 5/29 6/12
[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],[0],
#6/26 7/10 7/24 8/07 8/21 9/04 9/18 10/02 10/16 10/30 11/13 11/27
[0],[0],[0],[0],[-3],[0],[0],[0],[0],[0],[0],[0],
#12/11 12/25
[0],[0]
],
]
# dataframe
df = pd.DataFrame(columns=['Name', 'Date', 'Shift'])
# %% Create the Model
model = cp_model.CpModel()
# %% Create the variables
# Shift variables# Creates shift variables.
# shifts[(n, d, s)]: nurse 'n' works shift 's' on day 'd'.
shifts = {}
for n in all_employees:
for d in all_oncall_shifts:
for s in all_shifts:
shifts[(n, d, s)] = model.NewBoolVar('shift_n%id%is%i' % (n, d, s))
# %% Add constraints
# Each shift is assigned to exactly one employee in .
for d in all_oncall_shifts :
for s in all_shifts:
model.AddExactlyOne(shifts[(n, d, s)] for n in all_employees)
# Each employee works at most one shift per oncall_shifts.
for n in all_employees:
for d in all_oncall_shifts:
model.AddAtMostOne(shifts[(n, d, s)] for s in all_shifts)
# Try to distribute the shifts evenly, so that each employee works
# min_shifts_per_employee shifts. If this is not possible, because the total
# number of shifts is not divisible by the number of employee, some employees will
# be assigned one more shift.
min_shifts_per_employee = (num_shifts * num_oncall_shifts) // num_employees
if num_shifts * num_oncall_shifts % num_employees == 0:
max_shifts_per_employee = min_shifts_per_employee
else:
max_shifts_per_employee = min_shifts_per_employee + 1
for n in all_employees:
num_shifts_worked = 0
for d in all_oncall_shifts:
for s in all_shifts:
num_shifts_worked += shifts[(n, d, s)]
model.Add(min_shifts_per_employee <= num_shifts_worked)
model.Add(num_shifts_worked <= max_shifts_per_employee)
# "penalize" working shift back to back
for d in all_employees:
for b in all_oncall_shifts[:-1]:
for r in all_shifts:
for r1 in all_shifts:
model.AddImplication(shifts[(d, b, r)], shifts[(d, b+1, r1)].Not())
# %% Objective
model.Maximize(
sum(shift_requests[n][d][s] * shifts[(n, d, s)] for n in all_employees
for d in all_oncall_shifts for s in all_shifts))
# %% Solve
# Creates the solver and solve.
solver = cp_model.CpSolver()
status = solver.Solve(model)
if status == cp_model.OPTIMAL:
print('Solution:')
for d in all_oncall_shifts:
print('On Call Starts: ', dict_dates[d])
for n in all_employees:
for s in all_shifts:
if solver.Value(shifts[(n, d, s)]) == 1:
if shift_requests[n][d][s] == 1:
print(dict_emp_name[n], ' is ', dict_shift_name[s], '(requested).')
else:
print(dict_emp_name[n], ' is ', dict_shift_name[s],
'(not requested).')
list_append = [dict_emp_name[n], dict_dates[d], dict_shift_name[s]]
df.loc[len(df)] = list_append
print()
print(f'Number of shift requests met = {solver.ObjectiveValue()}',
f'(out of {num_employees * min_shifts_per_employee})')
else:
print('No optimal solution found !')
# %% Stats
print('\nStatistics')
print(' - conflicts: %i' % solver.NumConflicts())
print(' - branches : %i' % solver.NumBranches())
print(' - wall time: %f s' % solver.WallTime())
I get that it is difficult to program "fairness" into something, so any help would be greatly appreciated.
*** edit--added examples ***
| [
"There seem to be frequent complaints about lack of documentation, but there is some available on the Google OR-Tools site at https://developers.google.com/optimization .\nI learned a lot from the user manual from the old Google OR-Tools ConstraintSolver, to be found at https://www.scribd.com/document/482135694/user-manual-A4-v-0-5-2-pdf or Google \"or-tools user_manual_A4.v.0.5.2 Nikolaj van Omme Laurent Perron Vincent Furnon\". Even if you're using the new solver, a lot of concepts from that document still apply.\nThe Google team maintains a list of recommended literature at https://developers.google.com/optimization/support/resources\nTo do your two-level optimization:\nDefine a new variable bestFirstObjective and constrain it to be equal to the expression of your optimization value:\n# here I simply use 1000 as the horizon for whatever maximum value would be plausible\nfirstObjective = model.NewIntVar(0, 1000, \"Objective1\")\nmodel.Add(firstObjective == sum(shift_requests[n][d][s] * shifts[(n, d, s)] for n in all_employees\n for d in all_oncall_shifts for s in all_shifts))\nmodel.Maximize(bestFirstObjective)\n\nThen after solving, retrieve the value for firstObjective, and save it in a new Python integer bestFirstObjective (not an IntVar).\nNow in a new model2, add all the original variables and constraints, but instead of calling model.Maximize(bestFirstObjective) do this:\nmodel2.Add(firstObjective == bestFirstObjective)\n\nIn this new model, construct the variables and constraints you need for the subsidiary objective, and then add:\nmodel2.Maximize(secondObjective)\n\nand solve.\nAs Laurent Perron mentioned in Multiple objective functions with binary variables Google OR-tools, you can add the optimized solution to the original model as a hint during the second solution, that should speed things up. See https://developers.google.com/optimization/reference/python/sat/python/cp_model#addhint.\nI'm not sure if the second model is really needed, it might suffice to reuse the first one, it depends on whether calling Maximize a second time replaces the original objective or confuses the model, I simply don't know which is the case.\nIn your last comment it sounds like you need more help to construct a measure for \"evenly distributed\", something like trying to maximize the lowest interval between two consecutive shifts of any employee. The challenge here is to compute the length of this interval from the Booleans that describe whether a shift is worked or not.\nLet's assume that the number of days between successive shifts should be \"evenly distributed\". This would be the case if the shortest break between any two shifts for all the employees would be as large as possible.\nOne approach would be to first construct a Boolean which is true if an employee worked on day d (pseudo-code here, I haven't actually tested anything):\n# For each employee e and day d:\nworkedOnDay[e, d] = model2.NewBoolvar(\"Worked on day xxx\")\nmodel2.AddMaxEquality(workedOnDay[e, d], {shifts[(e, d, s)] for s in all_shifts})\n\nnew IntVar offDaysCounter for each employee/day. If the employee worked on the previous day (d-1) constrain its value to be 0. If the employee did not work on the previous day, constrain it to be the value of the previous day for that employee plus 1. 
This could be done something like this:\nOn the first day there is no \"previous day\" so constrain offDaysCounter to be an arbitrary large number without condition.\n# For each employee e \noffDaysCounter[e, 0] = model.NewIntVar(num_oncall_shifts, num_oncall_shifts, \"OffDaysCounter xxx\")\n\nOn the subsequent days:\n# For each employee e and day d (except the first day)\noffDaysCounter[e, d] = model.NewIntVar(0, 2 * num_oncall_shifts, \"OffDaysCounter xxx\")\nmodel2.Add(offDaysCounter[e, d] == 0).OnlyEnforceIf(workedOnDay[e, (d - 1)])\nmodel2.Add(offDaysCounter[e, d] == offDaysCounter[e, (d - 1)] + 1).OnlyEnforceIf(workedOnDay[e, (d - 1)].Not())\n\nThe variable will count up on the off days and be reset to 0 on the worked days; its value just before the next worked day is the length of the break.\nNow we can compute the actual number of off days between shifts. Create a new IntVar offDays for each employee/day. If the employee did not work on a day, set its value to be equal to offDays for the following day. If the employee worked on a day, set its value to be equal to offDaysCounter for the same day. This variable will now have a value on each day equal to the break between the last and next worked day relative to this day.\n# For each employee e and day d (except the last day):\noffDays[e, d] = model.NewIntVar(0, 2 * num_oncall_shifts, \"OffDays xxx\")\nmodel.Add(offDays[e, d] == offDays[e, (d + 1)]).OnlyEnforceIf(workedOnDays[e, d].Not())\nmodel.Add(offDays[e, d] == offDaysCounter[e, d]).OnlyEnforceIf(workedOnDays[e, d])\n\nOn the last day, there is no following day, so set the value of offDays to an arbitrary large number (like the total number of days), if the day was not worked, otherwise to the offDaysCounter[e, d]\nThe result could look something like this for a given employee:\n\n\n\n\nday\n0\n1\n2\n3\n4\n5\n6\n7\n8\n\n\n\n\nworkedOnDay\n1\n0\n0\n1\n0\n1\n0\n0\n0\n\n\noffDaysCounter\n18\n0\n1\n2\n0\n1\n0\n1\n2\n\n\noffDays\n18\n2\n2\n2\n1\n1\n18\n18\n18\n\n\n\n\nNow we can construct the objective: maximize the minimum interval between any employees shifts:\nsecondObjective = model.NewIntVar(0, 2 * num_oncall_shifts, \"second objective\")\nmodel.AddMinEquality(secondObjective, {offDays[e, d] for d in all_oncall_shifts, for e in all_employees})\nmodel.Maximize(secondObjective)\n\nAs I said, I haven't actually tested anything...\n"
] | [
1
] | [] | [] | [
"constraint_programming",
"cp_sat_solver",
"or_tools",
"python",
"python_3.x"
] | stackoverflow_0074627968_constraint_programming_cp_sat_solver_or_tools_python_python_3.x.txt |
Q:
What is the meaning of reset_states() and update_state() in tf.keras metrics?
I am checking very simple metrics objects in tensorflow.keras, such as BinaryAccuracy or AUC. They all have reset_states() and update_state() methods, but I found their documentation insufficient and unclear.
Can you explain what they mean?
A:
update_state measures the metric (mean, AUC, accuracy) and stores the running state in the object, so the value can later be retrieved with result:
import tensorflow as tf
mean_object = tf.metrics.Mean()
values = [1, 2, 3, 4, 5]
for ix, val in enumerate(values):
    mean_object.update_state(val)
    print(mean_object.result().numpy(), 'is the mean of', values[:ix+1])
1.0 is the mean of [1]
1.5 is the mean of [1, 2]
2.0 is the mean of [1, 2, 3]
2.5 is the mean of [1, 2, 3, 4]
3.0 is the mean of [1, 2, 3, 4, 5]
reset_states resets the metric to zero:
mean_object.reset_states()
mean_object.result().numpy()
0.0
I'm not sure I made it more clear than the documentation, it's pretty well explained in my opinion.
Calling the object, e.g., mean_object([1, 2, 3, 4]) will update the metric, and return the result.
import tensorflow as tf
mean_object = tf.metrics.Mean()
values = [1, 2, 3, 4, 5]
print(mean_object.result())
returned_mean = mean_object(values)
print(mean_object.result())
print(returned_mean)
tf.Tensor(0.0, shape=(), dtype=float32)
tf.Tensor(3.0, shape=(), dtype=float32)
tf.Tensor(3.0, shape=(), dtype=float32)
A:
I think by default the .result() method computes the metric over all the values that .update_state was given so far.
Here is an example:
import tensorflow as tf
m = tf.keras.metrics.Accuracy()
m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
result = m.result().numpy()
print(f"result: {result}")
m.update_state([[1], [2], [3], [4]], [[0], [0], [3], [4]])
result = m.result().numpy()
print(f"result: {result}")
Here the accuracy of the first set of values is 0.75 and the accuracy of the second set is 0.5, so the last call to result will print 0.625. The metric accumulates the number of correct predictions and the total count (5 correct out of 8 here), which in this case equals (0.75+0.5)/2 because both batches have the same size.
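If per-batch values are wanted instead of the running aggregate, reset_states() can be called between batches; a small sketch using the same metric as above:

import tensorflow as tf

m = tf.keras.metrics.Accuracy()

m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
print(m.result().numpy())  # 0.75 for the first batch

m.reset_states()           # clears the accumulated counts

m.update_state([[1], [2], [3], [4]], [[0], [0], [3], [4]])
print(m.result().numpy())  # 0.5 for the second batch alone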
| What is the meaning of reset_states() and update_state() in tf.keras metrics? | I am checking very simple metrics objects in tensorflow.keras such as BinaryAccuracy or AUC. They all have reset_states() and update_state() arguments, but I found their documentation insufficient and unclear.
Can you explain what they mean?
| [
"update_state measures the metrics (mean, auc, accuracy), and stores them in the object, so it can later be retrieved with result:\nimport tensorflow as tf\n\nmean_object = tf.metrics.Mean()\n\nvalues = [1, 2, 3, 4, 5]\n\nfor ix, val in enumerate(values):\n mean_object.update_state(val)\n print(mean_object.result().numpy(), 'is the mean of', values[:ix+1])\n\n1.0 is the mean of [1]\n1.5 is the mean of [1, 2]\n2.0 is the mean of [1, 2, 3]\n2.5 is the mean of [1, 2, 3, 4]\n3.0 is the mean of [1, 2, 3, 4, 5]\n\nreset_states resets the metric to zero:\nmean_object.reset_states()\nmean_object.result().numpy()\n\n0.0\n\nI'm not sure I made it more clear than the documentation, it's pretty well explained in my opinion.\nCalling the object, e.g., mean_object([1, 2, 3, 4]) will update the metric, and return the result.\nimport tensorflow as tf\n\nmean_object = tf.metrics.Mean()\n\nvalues = [1, 2, 3, 4, 5]\n\nprint(mean_object.result())\nreturned_mean = mean_object(values)\nprint(mean_object.result())\nprint(returned_mean)\n\ntf.Tensor(0.0, shape=(), dtype=float32)\ntf.Tensor(3.0, shape=(), dtype=float32)\ntf.Tensor(3.0, shape=(), dtype=float32)\n\n",
"I think by default the .result() method computes the mean of all the values that the .update_states was given.\nHere is an example:\nimport tensorflow as tf\n\nm = tf.keras.metrics.Accuracy()\nm.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])\nresult = m.result().numpy()\nprint(f\"result: {result}\")\nm.update_state([[1], [2], [3], [4]], [[0], [0], [3], [4]])\nresult = m.result().numpy()\nprint(f\"result: {result}\")\n\nHere the accuracy of the first set of values is 0.75 and the accuracy of the second set of values is 0.5. So the last call to the method \"result\" will print 0.625. This is equal to (0.75+0.5)/2.\n"
] | [
4,
0
] | [] | [] | [
"keras",
"metrics",
"python",
"tensorflow",
"tensorflow2.0"
] | stackoverflow_0065722580_keras_metrics_python_tensorflow_tensorflow2.0.txt |
Q:
Using Selenium and Python, but Print function not working
I am using Selenium to write a test script to purchase a number of items automatically. However, for some reason, when I ask Python to print a certain element's text, nothing appears in the console for me to assert that my script has selected the correct colour of the item.
# Driver select the first trainer option visible on the page
driver.find_element(By.XPATH,"//img[@title='Fresh Foam X 1080v12, M1080Z12']").click()
# Driver click on the orange version of these trainers
driver.find_element(By.XPATH,"//button[contains(@title,'M1080M12')]//span[contains(@class,'p-auto')]").click()
# Make sure you have the correct colour
trainerColour1 = driver.find_element(By.XPATH,"//span[@class='display-color-name color-name-mobile font-body regular pdp-update-event-triggerd']").text
print (trainerColour1) # For some reason its not printing this element on the log
#assert "apricot" in trainerColour1
At this point I expected "Vibrant orange with spring tide and vibrant apricot" to appear in the console log so that I could assert the word "apricot" to make sure the correct colour had been selected. However, nothing appears in the console.
A:
If the text is not on the element itself but on children of the element, change .text to .get_attribute("innerText").
Another solution could be to target the element you want to extract more precisely, but sometimes you need this approach because the text is spread across multiple child elements.
trainerColour1 = driver.find_element(By.XPATH,"//span[@class='display-color-name color-name-mobile font-body regular pdp-update-event-triggerd']").get_attribute("innerText")
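With the text retrieved that way, the assertion from the question should then work as intended, for example:

print(trainerColour1)
assert "apricot" in trainerColour1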
| Using Selenium and Python, but Print function not working | I am using selenium to write a test script to purchase a number of items automatically. However for some reason when I am asking python to print a certain elements text, nothing is appearing in the console for me to assert that my script has selected the correct colour of the item.
# Driver select the first trainer option visible on the page
driver.find_element(By.XPATH,"//img[@title='Fresh Foam X 1080v12, M1080Z12']").click()
# Driver click on the orange version of these trainers
driver.find_element(By.XPATH,"//button[contains(@title,'M1080M12')]//span[contains(@class,'p-auto')]").click()
# Make sure you have the correct colour
trainerColour1 = driver.find_element(By.XPATH,"//span[@class='display-color-name color-name-mobile font-body regular pdp-update-event-triggerd']").text
print (trainerColour1) # For some reason its not printing this element on the log
#assert "apricot" in trainerColour1
At this point I expected "Vibrant orange with spring tide and vibrant apricot" to appear on the console log and for me to assert the word "apricot" to make sure the correct colour had been selected. However nothing is appearing on the console.Console Result
| [
"If the text is not on the element and is on childrens of the element, change .text to .get_attribute(\"innerText\")\nOther solution could be accurate more on the element you want to extract, but sometimes you need this cause text is mixed on multiple elements\ntrainerColour1 = driver.find_element(By.XPATH,\"//span[@class='display-color-name color-name-mobile font-body regular pdp-update-event-triggerd']\").get_attribute(\"innerText\") \n\n"
] | [
0
] | [] | [] | [
"console",
"pycharm",
"python",
"selenium",
"selenium_webdriver"
] | stackoverflow_0074654178_console_pycharm_python_selenium_selenium_webdriver.txt |
Q:
Can I setup a simple job queue with celery on a plotly dashboard?
I have a dashboard very similar to this one-
import datetime
import dash
from dash import ctx, dcc, html
import plotly
from dash.dependencies import Input, Output
# pip install pyorbital
from pyorbital.orbital import Orbital
satellite = Orbital('TERRA')
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div(
html.Div([
html.H4('TERRA Satellite Live Feed'),
html.Div(id='live-update-text'),
dcc.Graph(id='live-update-graph'),
dcc.Interval(
id='interval-component',
interval=1*1000, # in milliseconds
n_intervals=0
)
])
)
# Multiple components can update every time the interval gets fired.
@app.callback(Output('live-update-graph', 'figure'),
              Input('live-update-graph', 'relayout'),
              Input('interval-component', 'n_intervals'))
def update_graph_live(relayout, n):
    if ctx.triggered_id == 'relayout':
        # * code that affects the y axis *
        return fig
    else:
        satellite = Orbital('TERRA')
        data = {
            'time': [],
            'Latitude': [],
            'Longitude': [],
            'Altitude': []
        }

        # Collect some data
        for i in range(180):
            time = datetime.datetime.now() - datetime.timedelta(seconds=i*20)
            lon, lat, alt = satellite.get_lonlatalt(
                time
            )
            data['Longitude'].append(lon)
            data['Latitude'].append(lat)
            data['Altitude'].append(alt)
            data['time'].append(time)

        # Create the graph with subplots
        fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2)
        fig['layout']['margin'] = {
            'l': 30, 'r': 10, 'b': 30, 't': 10
        }
        fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'}

        fig.append_trace({
            'x': data['time'],
            'y': data['Altitude'],
            'name': 'Altitude',
            'mode': 'lines+markers',
            'type': 'scatter'
        }, 1, 1)
        fig.append_trace({
            'x': data['Longitude'],
            'y': data['Latitude'],
            'text': data['time'],
            'name': 'Longitude vs Latitude',
            'mode': 'lines+markers',
            'type': 'scatter'
        }, 2, 1)

        return fig

if __name__ == '__main__':
    app.run_server(debug=True)
I want to setup a job queue. Right now, the "code that affects the y axis" part never runs because the interval component fires before it finishes processing. I want to setup logic that says "add every callback to a queue and then fire them one at a time in the order that they were called".
Two questions
1- Can I achieve this with celery?
2- If so, what does a small working example look like?
A:
Yes, you can achieve this with Celery. Celery is a task queue that allows you to schedule tasks to be executed at a later time. It is designed to be used in distributed systems and can be used to manage the execution of callbacks in your Dash application.
A small working example of using Celery with Dash would look something like this:
# Import Celery and create a Celery instance
from celery import Celery
celery_app = Celery('tasks', broker='redis://localhost:6379/0')
# Define the Celery task that will be added to the queue
# (named differently from the Dash callback below so that .delay()
#  resolves to the Celery task and not to the callback function)
@celery_app.task(name='update_graph_live')
def update_graph_live_task(relayout, n):
    if ctx.triggered_id == 'relayout':
        # * code that affects the y axis *
        return fig
    else:
        satellite = Orbital('TERRA')
        data = {'time': [], 'Latitude': [], 'Longitude': [], 'Altitude': []}

        # Collect some data
        for i in range(180):
            time = datetime.datetime.now() - datetime.timedelta(seconds=i*20)
            lon, lat, alt = satellite.get_lonlatalt(time)
            data['Longitude'].append(lon)
            data['Latitude'].append(lat)
            data['Altitude'].append(alt)
            data['time'].append(time)

        # Create the graph with subplots
        fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2)
        fig['layout']['margin'] = {'l': 30, 'r': 10, 'b': 30, 't': 10}
        fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'}

        fig.append_trace({'x': data['time'], 'y': data['Altitude'],
                          'name': 'Altitude', 'mode': 'lines+markers',
                          'type': 'scatter'}, 1, 1)
        fig.append_trace({'x': data['Longitude'], 'y': data['Latitude'],
                          'text': data['time'], 'name': 'Longitude vs Latitude',
                          'mode': 'lines+markers', 'type': 'scatter'}, 2, 1)

        return fig
# Update the callback to add the task to the queue
@app.callback(Output('live-update-graph', 'figure'),
              Input('live-update-graph', 'relayout'),
              Input('interval-component', 'n_intervals'))
def update_graph_live(relayout, n):
    # Add the task to the queue
    update_graph_live_task.delay(relayout, n)
    # Return an empty figure until the task is completed
    return {}

if __name__ == '__main__':
    app.run_server(debug=True)
In this example, the callback is updated to add the task to the queue using Celery's delay() method. The callback then returns an empty figure until the task is completed.
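Note that queued tasks are only executed if a Celery worker is running against the same broker (Redis in this example). The worker is started with the usual worker command; the exact -A target depends on which module holds the Celery instance:

celery -A <module_with_celery_app> worker --loglevel=info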
A:
Yes, you can use Celery to create a job queue for your Plotly Dash app. Celery is a task queue and distributed job queue that can be used to run long-running tasks in the background, such as data processing or API calls.
Here is a simple example of how you can use Celery to set up a job queue for your Dash app:
First, install Celery and the required dependencies:
pip install celery
pip install eventlet
Next, create a celery.py file that defines a Celery app and sets up the required configuration:
from celery import Celery
app = Celery(__name__)
app.config_from_object('celeryconfig')
Then, create a celeryconfig.py file that defines the broker and backend for the Celery app:
BROKER_URL = 'amqp://localhost:5672'
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
Next, modify your Dash app to use Celery to run the long-running tasks in the background. In your update_graph_live function, you can use the delay method to add a task to the Celery queue, and the get method to retrieve the result of the task when it is finished:
@app.callback(Output('live-update-graph', 'figure'),
              Input('live-update-graph', 'relayout'),
              Input('interval-component', 'n_intervals'))
def update_graph_live(relayout, n):
    if ctx.triggered_id == 'relayout':
        # Add the "code that affects the y axis" task to the Celery queue
        result = code_that_affects_the_y_axis.delay()

        # Wait for the task to finish and retrieve the result
        result = result.get()
        return result
    else:
        satellite = Orbital('TERRA')
        data = {
            'time': [],
            'Latitude': [],
            'Longitude': [],
            'Altitude': []
        }

        # Collect some data
        for i in range(180):
            time = datetime.datetime.now() - datetime.timedelta(seconds=i*20)
            lon, lat, alt = satellite.get_lonlatalt(
                time
            )
            data['Longitude'].append(lon)
            data['Latitude'].append(lat)
            data['Altitude'].append(alt)
            data['time'].append(time)
| Can I setup a simple job queue with celery on a plotly dashboard? | I have a dashboard very similar to this one-
import datetime
import dash
from dash import dcc, html
import plotly
from dash.dependencies import Input, Output
# pip install pyorbital
from pyorbital.orbital import Orbital
satellite = Orbital('TERRA')
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div(
html.Div([
html.H4('TERRA Satellite Live Feed'),
html.Div(id='live-update-text'),
dcc.Graph(id='live-update-graph'),
dcc.Interval(
id='interval-component',
interval=1*1000, # in milliseconds
n_intervals=0
)
])
)
# Multiple components can update everytime interval gets fired.
@app.callback(Output('live-update-graph', 'figure'),
Input('live-update-graph', 'relayout'),
Input('interval-component', 'n_intervals'))
def update_graph_live(relayout, n):
if ctx.triggered_id == 'relayout':
* code that affects the y axis *
return fig
else:
satellite = Orbital('TERRA')
data = {
'time': [],
'Latitude': [],
'Longitude': [],
'Altitude': []
}
# Collect some data
for i in range(180):
time = datetime.datetime.now() - datetime.timedelta(seconds=i*20)
lon, lat, alt = satellite.get_lonlatalt(
time
)
data['Longitude'].append(lon)
data['Latitude'].append(lat)
data['Altitude'].append(alt)
data['time'].append(time)
# Create the graph with subplots
fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2)
fig['layout']['margin'] = {
'l': 30, 'r': 10, 'b': 30, 't': 10
}
fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'}
fig.append_trace({
'x': data['time'],
'y': data['Altitude'],
'name': 'Altitude',
'mode': 'lines+markers',
'type': 'scatter'
}, 1, 1)
fig.append_trace({
'x': data['Longitude'],
'y': data['Latitude'],
'text': data['time'],
'name': 'Longitude vs Latitude',
'mode': 'lines+markers',
'type': 'scatter'
}, 2, 1)
return fig
if __name__ == '__main__':
app.run_server(debug=True)
I want to setup a job queue. Right now, the "code that affects the y axis" part never runs because the interval component fires before it finishes processing. I want to setup logic that says "add every callback to a queue and then fire them one at a time in the order that they were called".
Two questions
1- Can I achieve this with celery?
2- If so, what does a small working example look like?
| [
"Yes, you can achieve this with Celery. Celery is a task queue that allows you to schedule tasks to be executed at a later time. It is designed to be used in distributed systems and can be used to manage the execution of callbacks in your Dash application.\nA small working example of using Celery with Dash would look something like this:\n# Import Celery and create a Celery instance\nfrom celery import Celery\ncelery_app = Celery('tasks', broker='redis://localhost:6379/0')\n\n# Define the callback function that will be added to the queue\n@celery_app.task(name='update_graph_live')\ndef update_graph_live(relayout, n):\n if ctx.triggered_id == 'relayout':\n * code that affects the y axis * \n return fig \n else:\n satellite = Orbital('TERRA')\n data = {\n 'time': [],\n 'Latitude': [],\n 'Longitude': [],\n 'Altitude': []\n }\n\n # Collect some data \n for i in range(180): \n time = datetime.datetime.now() - datetime.timedelta(seconds=i*20) \n lon, lat, alt = satellite.get_lonlatalt( time ) data['Longitude'].append(lon) data['Latitude'].append(lat) data['Altitude'].append(alt) data['time'].append(time)\n\n # Create the graph with subplots fig = plotly.tools.make_subplots(rows=2, cols=1, vertical_spacing=0.2) fig['layout']['margin'] = { 'l': 30, 'r': 10, 'b': 30, 't': 10 } fig['layout']['legend'] = {'x': 0, 'y': 1, 'xanchor': 'left'}\n\n fig.append_trace({ 'x': data['time'], 'y': data['Altitude'], 'name': 'Altitude', 'mode': 'lines+markers', 'type': 'scatter' }, 1, 1) fig.append_trace({ 'x': data['Longitude'], 'y': data['Latitude'], 'text': data['time'], 'name': 'Longitude vs Latitude', 'mode': 'lines+markers', 'type': 'scatter' }, 2, 1)\n\n return fig\n\n# Update the callback to add the task to the queue\[email protected](Output('live-update-graph', 'figure'),\n Input('live-update-graph', 'relayout'),\n Input('interval-component', 'n_intervals'))\ndef update_graph_live(relayout, n):\n # Add the task to the queue update_graph_live.delay(relayout, n) # Return an empty figure until the task is completed return {}\n\nif __name__ == '__main__':\n app.run_server(debug=True)\n\nIn this example, the callback is updated to add the task to the queue using Celery's delay() method. The callback then returns an empty figure until the task is completed.\n",
"Yes, you can use Celery to create a job queue for your Plotly Dash app. Celery is a task queue and distributed job queue that can be used to run long-running tasks in the background, such as data processing or API calls.\nHere is a simple example of how you can use Celery to set up a job queue for your Dash app:\nFirst, install Celery and the required dependencies:\npip install celery\npip install eventlet\n\nNext, create a celery.py file that defines a Celery app and sets up the required configuration:\nfrom celery import Celery\n\napp = Celery(__name__)\napp.config_from_object('celeryconfig')\n\nThen, create a celeryconfig.py file that defines the broker and backend for the Celery app:\nBROKER_URL = 'amqp://localhost:5672'\nCELERY_RESULT_BACKEND = 'redis://localhost:6379'\n\nNext, modify your Dash app to use Celery to run the long-running tasks in the background. In your update_graph_live function, you can use the delay method to add a task to the Celery queue, and the get method to retrieve the result of the task when it is finished:\[email protected](Output('live-update-graph', 'figure'),\n Input('live-update-graph', 'relayout'),\n Input('interval-component', 'n_intervals'))\ndef update_graph_live(relayout, n):\n if ctx.triggered_id == 'relayout':\n # Add the \"code that affects the y axis\" task to the Celery queue\n result = code_that_affects_the_y_axis.delay()\n\n # Wait for the task to finish and retrieve the result\n result = result.get()\n return result\n else:\n satellite = Orbital('TERRA')\n data = {\n 'time': [],\n 'Latitude': [],\n 'Longitude': [],\n 'Altitude': []\n }\n\n # Collect some data\n for i in range(180):\n time = datetime.datetime.now() - datetime.timedelta(seconds=i*20)\n lon, lat, alt = satellite.get_lonlatalt(\n time\n )\n data['Longitude'].append(lon)\n data['Latitude'].append(lat)\n data['Altitude'].append(alt)\n data['time'].\n\n"
] | [
0,
0
] | [] | [] | [
"celery",
"plotly",
"plotly_dash",
"plotly_python",
"python"
] | stackoverflow_0074533185_celery_plotly_plotly_dash_plotly_python_python.txt |
Q:
How do I use selenium ChromeDriver to scroll the sidebar on Google maps to load more results?
I’ve run into a problem trying to use Selenium ChromeDriver to scroll down the sidebar of a google maps results page. I am trying to get to the 6th result down but the result does not fully load until you scroll down. Using the find_element_by_xpath method, I am successfully able to access results 1-5 and click into them individually, but when trying to use the actions.move_to_element(link).perform() method to scroll to the 6th element, it does not work and throws an error message.
The error that I get is:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:
However, I know this element exists because when I manually scroll and more results are loaded, the Xpath works correctly. What am I doing wrong? I’ve spent many hours trying to solve this and I haven’t been able to solve with the available content out there. I appreciate any help or insights you can offer, thank you!
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as soup
import time
from selenium import webdriver

PATH = r"C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://www.google.com/maps")
time.sleep(7)
page = soup(driver.page_source, 'html.parser')
#find the searchbar, enter search, and hit return
search = driver.find_element_by_id('searchboxinput')
search.send_keys("dentists in Austin Texas")
search.send_keys(Keys.RETURN)
driver.maximize_window()
time.sleep(7)
#I want to get the 6th result down but it requires a sidebar scroll to load
link = driver.find_element_by_xpath("//*[@id='pane']/div/div[1]/div/div/div[4]/div[1]/div[13]/div/a")
actions = ActionChains(driver)
actions.move_to_element(link).perform()
link.click()
time.sleep(5)
driver.back()
A:
I found a solution that works: target the element by XPath from Selenium's JavaScript interface. You then execute two commands in a single instruction (targeting and scrolling):
driver.execute_script("var el = document.evaluate('/html/body/jsl/div[3]/div[10]/div[8]/div/div[1]/div/div/div[4]/div[1]', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue; el.scroll(0, 5000);")
this is the only solution that worked for me
A:
The search results on the Google Maps page are located with the //div[contains(@aria-label,'dentists in Austin Texas')]//div[contains(@jsaction,'mouseover')] XPath.
So, to select the 6th element there you can do the following:
from selenium.webdriver.common.action_chains import ActionChains
results = driver.find_elements_by_xpath('//div[contains(@aria-label,"dentists in Austin Texas")]//div[contains(@jsaction,"mouseover")]')
ActionChains(driver).move_to_element(results[6]).click().perform()
A:
I was just implementing scrolling on the Google Maps sidebar and it's working on my side. Check this code please:
# selecting scroll body
driver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]').click()
#start scrolling your sidebar
html = driver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]')
html.send_keys(Keys.END)
also add the "KEYS" library
from selenium.webdriver.common.keys import Keys
I hope it would help you.
by the way I have implemented scrapping of google map with its available data and used above code to scroll. check if you have any problem, let me know then
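If the later results only appear after the panel has scrolled, it may also help to pause briefly (or use an explicit wait) before locating them. A small sketch combining this scroll with the locator from the question (the sleep length is arbitrary):

html.send_keys(Keys.END)   # scroll the results panel (variable from the code above)
time.sleep(3)              # give lazily loaded results time to appear
link = driver.find_element_by_xpath("//*[@id='pane']/div/div[1]/div/div/div[4]/div[1]/div[13]/div/a")
link.click()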
| How do I use selenium ChromeDriver to scroll the sidebar on Google maps to load more results? | I’ve run into a problem trying to use Selenium ChromeDriver to scroll down the sidebar of a google maps results page. I am trying to get to the 6th result down but the result does not fully load until you scroll down. Using the find_element_by_xpath method, I am successfully able to access results 1-5 and click into them individually, but when trying to use the actions.move_to_element(link).perform() method to scroll to the 6th element, it does not work and throws an error message.
The error that I get is:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element:
However, I know this element exists because when I manually scroll and more results are loaded, the Xpath works correctly. What am I doing wrong? I’ve spent many hours trying to solve this and I haven’t been able to solve with the available content out there. I appreciate any help or insights you can offer, thank you!
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup as soup
import time
PATH = "C:\Program Files (x86)\chromedriver.exe"
driver = webdriver.Chrome(PATH)
driver.get("https://www.google.com/maps")
time.sleep(7)
page = soup(driver.page_source, 'html.parser')
#find the searchbar, enter search, and hit return
search = driver.find_element_by_id('searchboxinput')
search.send_keys("dentists in Austin Texas")
search.send_keys(Keys.RETURN)
driver.maximize_window()
time.sleep(7)
#I want to get the 6th result down but it requires a sidebar scroll to load
link = driver.find_element_by_xpath("//*[@id='pane']/div/div[1]/div/div/div[4]/div[1]/div[13]/div/a")
actions.move_to_element(link).perform()
link.click()
time.sleep(5)
driver.back()```
| [
"I found a solution that works, it is to target the element in XPATH from the javascript interface of selenium. You must then execute two commands on an instruction (targeting and scroll)\ndriver.executeScript(\"var el = document.evaluate('/html/body/jsl/div[3]/div[10]/div[8]/div/div[1]/div/div/div[4]/div[1]', document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue; el.scroll(0, 5000);\");\n\nthis is the only solution that worked for me\n",
"The search results in the google map are located with //div[contains(@aria-label,'dentists in Austin Texas')]//div[contains(@jsaction,'mouseover')] XPath.\nSo, to select 6-th element there you can do the following\nfrom selenium.webdriver.common.action_chains import ActionChains\n\nresults = driver.find_elements_by_xpath('//div[contains(@aria-label,\"dentists in Austin Texas\")]//div[contains(@jsaction,\"mouseover\")]')\n\nActionChains(driver).move_to_element(results[6]).click(button).perform()\n\n",
"I was just implementing scrolling on google map sidebar, it's working on my side. check this code please\n# selecting scroll body\ndriver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]').click()\n\n\n#start scrolling your sidebar\nhtml = driver.find_element_by_xpath('/html/body/div[3]/div[9]/div[9]/div/div/div[1]/div[2]/div/div[1]/div/div/div[2]/div[1]')\nhtml.send_keys(Keys.END)\n\nalso add the \"KEYS\" library\nfrom selenium.webdriver.common.keys import Keys\n\nI hope it would help you.\nby the way I have implemented scrapping of google map with its available data and used above code to scroll. check if you have any problem, let me know then\n"
] | [
1,
0,
0
] | [] | [] | [
"html",
"python",
"scroll",
"selenium",
"selenium_webdriver"
] | stackoverflow_0067783868_html_python_scroll_selenium_selenium_webdriver.txt |
Q:
Distance from a point to a line
I have created a class "Point" and I want to calculate the shortest distance between a given point and a line (characterized by 2 other points); all points are known.
I tried to use this formula: |Ax+By+C| / sqrt(A^2+B^2), but I messed up and got more confused by the minute (mostly because of the math formulas :( )...
I did find some sites where people asked this question too, but it was either not for Python or it was for a 3D system, not 2D...
Below is my class :
class Point:
def __init__(self,initx,inity):
self.x = initx
self.y = inity
def getX(self):
return self.x
def getY(self):
return self.y
def __str__(self):
return "x=" + str(self.x) + ", y=" + str(self.y)
def distance_from_point(self,the_other_point):
dx = the_other_point.getX() - self.x
dy = the_other_point.getY() - self.y
def slope(self,other_point):
if self.x - other_point.getX() == 0 :
return 0
else:
panta = (self.y - other_point.getY())/ (self.x - other_point.getX())
return panta
Can someone help me write a separate function or a method that does what I want? I tried for 2 hours and I can't figure it out...
A:
You should be able to use this formula from the points directly. So, you'd have something like:
import math
class Point:
def distance_to_line(self, p1, p2):
x_diff = p2.x - p1.x
y_diff = p2.y - p1.y
num = abs(y_diff*self.x - x_diff*self.y + p2.x*p1.y - p2.y*p1.x)
den = math.sqrt(y_diff**2 + x_diff**2)
return num / den
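A quick usage sketch, assuming the method is added to the Point class from the question (so __init__ takes the x and y coordinates): the line runs through (0, 0) and (4, 0), and the point (2, 3) is at distance 3 from it.
p1, p2, q = Point(0, 0), Point(4, 0), Point(2, 3)
print(q.distance_to_line(p1, p2))  # 3.0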
A:
The distance formula between two points is Distance = sqrt((x2−x1)^2+(y2−y1)^2).
And the formula to calculate slope is slope = (y2 - y1) / (x2 - x1).
So below is a simple method to calculate the distance:
def distance_from_other_point(self, other_point):
    return math.sqrt( ( other_point.getX() - self.getX() )**2 + ( other_point.getY() - self.getY() )**2 )
def slope(self, other_point):
    return ( other_point.getY() - self.getY() )*1.0 / ( other_point.getX() - self.getX() )
In the second method, slope, I multiplied by 1.0 so that the result will be a float.
Note - I used the syntax of Python 2.7.6, though hopefully it will work in Python 3.x as well.
A:
You can install FastLine via pip and then use it in this way:
from FastLine import Line
# define a line by two points
l1 = Line(p1=(0,0), p2=(10,10))
# or define a line by slope and intercept
l2 = Line(m=0.5, b=-1)
# compute distance
d1 = l1.distance_to((20,50))
# returns 21.213203435596427
d2 = l2.distance_to((-15,17))
# returns 22.807893370497855
A:
Decompose Q-P0 on the direction of the line and on its perpendicular:
Q-P0 = α (P1-P0) + β k×(P1-P0)
where k×(P1-P0) is the vector multiplication between the z versor and the position vector.
Writing the above equation as a scalar system, we have
ΔX = α dx - β dy
ΔY = α dy + β dx
solving for α, β
(dx²+dy²) α = + ΔX dx + ΔY dy
(dx²+dy²) β = - ΔX dy + ΔY dx
Because (dx²+dy²) = |P1-P0|²
β |P1-P0| = distance = (ΔY dx - ΔX dy) / |P1-P0|
Of course, our result is equivalent to (P1-P0)×(Q-P0)/|P1-P0|.
Final remark: the distance of Q from the line through P0 and P1 is oriented (signed), so you may need the absolute value of the distance.
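A minimal sketch of this formula on the question's Point class; the method name is my own choice, and it assumes math has already been imported.
def signed_distance_to_line(self, p0, p1):
    dx = p1.getX() - p0.getX()
    dy = p1.getY() - p0.getY()
    dX = self.getX() - p0.getX()
    dY = self.getY() - p0.getY()
    # 2D cross product (P1-P0) x (Q-P0), divided by |P1-P0|
    return (dY * dx - dX * dy) / math.sqrt(dx**2 + dy**2)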
| Distance from a point to a line | I have created a class "Point" and i want to calculate the shortest distance between a given point and a line ( characterized by 2 other points ), all points are known.
I tried to use this formula : |Ax+By+C| / sqrt(A^2+B^2) , but i messed up and got more confused by the minute (mostly because of math formulas :( )...
I did find some sites where people asked this question too, but it either was not for Python or it was in a 3D system not 2D ...
Below is my class :
class Point:
def __init__(self,initx,inity):
self.x = initx
self.y = inity
def getX(self):
return self.x
def getY(self):
return self.y
def __str__(self):
return "x=" + str(self.x) + ", y=" + str(self.y)
def distance_from_point(self,the_other_point):
dx = the_other_point.getX() - self.x
dy = the_other_point.getY() - self.y
def slope(self,other_point):
if self.x - other_point.getX() == 0 :
return 0
else:
panta = (self.y - other_point.getY())/ (self.x - other_point.getX())
return panta
Can someone help me write a separate function or a method that does what i want ? I tried for 2 hours and I can't figure it out ...
| [
"You should be able to use this formula from the points directly. So, you'd have something like:\nimport math\n\nclass Point:\n def distance_to_line(self, p1, p2):\n x_diff = p2.x - p1.x\n y_diff = p2.y - p1.y\n num = abs(y_diff*self.x - x_diff*self.y + p2.x*p1.y - p2.y*p1.x)\n den = math.sqrt(y_diff**2 + x_diff**2)\n return num / den\n\n",
"The distance formula between two points is Distance =sqrt((x2−x1)^2+(y2−y1)^2).\nAnd the formula to calculate slope is slope = (y2 - y1) / (x2 - x1). \nso below is a simple method to calculate the distance\ndef distance_from_other_point(self, other_point):\n return math.sqrt( ( other_point.getX() - self.getX() )**2 + ( other_point.getY() - self.getY() )**2 )\n\ndef slope(self, otehr_point):\n return ( other_point.getY() - self.getY() )*1.0 / ( other_point.getX() - self.getX() )\n\nIn the second method, slope, I multiplied with 1.0 so that result will be in float.\nNote - I used the syntax of python 2.7.6 though hopefully, it will work in python 3.x as well.\n",
"You can install FastLine via pip and then use it in this way:\nfrom FastLine import Line\n# define a line by two points\nl1 = Line(p1=(0,0), p2=(10,10))\n# or define a line by slope and intercept\nl2 = Line(m=0.5, b=-1)\n\n# compute distance\nd1 = l1.distance_to((20,50))\n# returns 21.213203435596427\nd2 = l2.distance_to((-15,17))\n# returns 22.807893370497855\n\n",
"\nIt is\nQ-P0 = α (P1-P0) + β k×(P1-P0)\n\nwhere k×(P1-P0) is the vector multiplication between the z versor and the position vector.\nWriting the above equation as a scalar system, we have\nΔX = α dx - β dy\nΔY = α dy + β dx\n\nsolving for α, β\n(dx²+dy²) α = + ΔX dx + ΔY dy\n(dx²+dy²) β = - ΔX dy + ΔY dx\n\nBecause (dx²+dy²) = |P1-P0|²\n ΔY dx - ΔX dy\nβ |P1-P0| = distance = ---------------\n |P1-P0|\n\nOf course, our result is equivalent to (P1-P0)×(Q-P0)/|P1-P0|.\nFinal remark, the distance of Q from (P1-P0) is oriented, and maybe you need the absolute value of the distance.\n"
] | [
6,
0,
0,
0
] | [] | [] | [
"distance",
"line",
"point",
"python",
"python_3.x"
] | stackoverflow_0040970478_distance_line_point_python_python_3.x.txt |
Q:
Numpy/Scipy: Efficient Determinant of Gram Matrix
I need to compute the (log of the) determinant of the Gram matrix of a matrix A and I was wondering if there is a way to compute this efficiently and in a stable way in Numpy/Scipy.
import numpy as np
m, n = 100, 150
J = np.random.randn(m, n)
np.log(np.linalg.det(J.dot(J.T)))
is there some LAPACK routine or some math trick I could use to speed things up and make it more stable?
A:
For better numerical stability, I would suggest using slogdet, which is your main aim in any case. There may also be a very minimal gain if you use np.inner(J, J) instead of J.dot(J.T). For really speeding things up, I would recommend using jax.numpy.
import numpy as np
import jax
import jax.numpy as jnp
m, n = 100, 150
J = np.random.randn(m, n)
def a(J):
return np.log(np.linalg.det(J.dot(J.T)))
def b(J):
return np.linalg.slogdet(np.inner(J, J))[1]
def c(J):
return jnp.linalg.slogdet(jnp.inner(J, J))[1]
# jit + compile
d = jax.jit(c)
d(J)
# check correctness
print(np.allclose(a(J), b(J))) # True
print(np.allclose(a(J), c(J))) # True
print(np.allclose(a(J), d(J))) # True
Checking run times, on Google Colab:
%timeit -n 1000 -r 10 a(J)
# 240 µs ± 16.2 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)
%timeit -n 1000 -r 10 b(J)
# 227 µs ± 10.2 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)
J_dev = jax.device_put(J)
%timeit -n 1000 -r 10 c(J_dev).block_until_ready()
# 112 µs ± 4.46 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)
%timeit -n 1000 -r 10 d(J_dev).block_until_ready()
# 96.2 µs ± 4.23 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)
So roughly a ~2x speedup is possible this way.
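Since the question also asks for a LAPACK routine or math trick, one more hedged option: when J has full row rank, the Gram matrix J @ J.T is symmetric positive definite, so its log-determinant can be read off a Cholesky factor without computing a determinant at all.
def logdet_gram_cholesky(J):
    # log det(G) = 2 * sum(log(diag(L))) where G = L @ L.T
    G = J @ J.T
    L = np.linalg.cholesky(G)
    return 2.0 * np.sum(np.log(np.diag(L)))

print(np.allclose(a(J), logdet_gram_cholesky(J)))  # True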
| Numpy/Scipy: Efficient Determinant of Gram Matrix | I need to compute the (log of the) determinant of the Gram matrix of a matrix A and I was wondering if there is a way to compute this efficiently and in a stable way in Numpy/Scipy.
import numpy as np
m, n = 100, 150
J = np.random.randn(m, n)
np.log(np.det(J.dot(J.T)))
is there some LAPACK routine or some math trick I could use to speed things up and make it more stable?
| [
"For better numerical stability, I would suggest to use slogdet, which is your main aim in any case. There may also be a very minimal gain if you use np.inner(J, J) instead of J.dot(J.T). For really speeding things up, I would recommend using jax.numpy.\nimport numpy as np\nimport jax\nimport jax.numpy as jnp\n\nm, n = 100, 150\nJ = np.random.randn(m, n)\n\ndef a(J):\n return np.log(np.linalg.det(J.dot(J.T)))\n\ndef b(J):\n return np.linalg.slogdet(np.inner(J, J))[1]\n\ndef c(J):\n return jnp.linalg.slogdet(jnp.inner(J, J))[1]\n\n# jit + compile\nd = jax.jit(c)\nd(J)\n\n# check correctness\nprint(np.allclose(a(J), b(J))) # True\nprint(np.allclose(a(J), c(J))) # True\nprint(np.allclose(a(J), d(J))) # True\n\nChecking run times, on Google Colab:\n%timeit -n 1000 -r 10 a(J)\n# 240 µs ± 16.2 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)\n\n%timeit -n 1000 -r 10 b(J)\n# 227 µs ± 10.2 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)\n\nJ_dev = jax.device_put(J)\n\n%timeit -n 1000 -r 10 c(J_dev).block_until_ready()\n# 112 µs ± 4.46 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)\n\n%timeit -n 1000 -r 10 d(J_dev).block_until_ready()\n# 96.2 µs ± 4.23 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)\n\nSo rougly about ~2x speedup is possible this way.\n"
] | [
2
] | [] | [] | [
"determinants",
"numpy",
"python",
"scipy"
] | stackoverflow_0074654414_determinants_numpy_python_scipy.txt |
Q:
Telethon async and await mess
Here is my code as follows:
import time
import telethon
import asyncio
# Enter your API ID and API hash here
api_id = 13******
api_hash = '8ba0a***********************'
# Enter the name of the text file containing the messages
message_file = 'messages.txt'
async def main():
# Connect to the Telegram API using your API ID and API hash
client = telethon.TelegramClient('sender', api_id, api_hash)
# Read the messages from the text file
with open(message_file, 'r') as f:
messages = f.readlines()
# Send the messages and pause for one minute between each message
for message in messages:
await client.send_message('@Example_recipient', message.strip())
await time.sleep(60)
# Disconnect from the Telegram API
client.disconnect()
async def run() -> asyncio.coroutine:
await main()
asyncio.run(run())
Whenever I run it, it gives me this error:
File "/usr/local/lib/python3.10/dist-packages/telethon/network/mtprotosender.py", line 173, in send
raise ConnectionError('Cannot send requests while disconnected')
ConnectionError: Cannot send requests while disconnected
asyncio.run fixed a previous error which was:
RuntimeWarning: coroutine 'run' was never awaited
run()
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Now I am stuck with this new error that I can't seem to get around.
A:
Try changing your code to something like this:
import asyncio
import telethon
api_id = 13******
api_hash = '8ba0a***********************'
message_file = 'messages.txt'
# Better style would be to have these not initiated
# at the top-level and instead behind the if name == main
# guard, but lets keep it simple.
client = telethon.TelegramClient('sender', api_id, api_hash)
with open(message_file, 'r') as f:
messages = f.readlines()
async def main():
for message in messages:
await client.send_message('@Example_recipient', message.strip())
await asyncio.sleep(60)
if __name__ == "__main__":
# The context manager will release the resource
# (i.e. call client.disconnect) for us when the
# block exits.
with client:
client.loop.run_until_complete(main())
which is much closer to the example in the documentation. Also it's not clear why you have the 60 second wait, or why you are using asyncio if you want to wait between sending messages. It looks like you could just use telethon.sync and block and not have any of the asyncio boilerplate?
Update based on comments
with open(file) as handle:
messages = list(handle.readlines())
...
while len(messages):
first = messages.pop()
# check here before popping that the list
# isn't already empty
second = messages.pop()
await client.send_message(first)
await client.send_message(second)
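For completeness, a minimal sketch of the telethon.sync route mentioned above, assuming the same api_id, api_hash and messages variables: the sync wrapper makes client calls blocking, so a plain time.sleep works between messages.
import time
from telethon.sync import TelegramClient

with TelegramClient('sender', api_id, api_hash) as client:
    for message in messages:
        client.send_message('@Example_recipient', message.strip())
        time.sleep(60)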
| Telethon async and await mess | Here is my code as follows:
import time
import telethon
import asyncio
# Enter your API ID and API hash here
api_id = 13******
api_hash = '8ba0a***********************'
# Enter the name of the text file containing the messages
message_file = 'messages.txt'
async def main():
# Connect to the Telegram API using your API ID and API hash
client = telethon.TelegramClient('sender', api_id, api_hash)
# Read the messages from the text file
with open(message_file, 'r') as f:
messages = f.readlines()
# Send the messages and pause for one minute between each message
for message in messages:
await client.send_message('@Example_recipient', message.strip())
await time.sleep(60)
# Disconnect from the Telegram API
client.disconnect()
async def run() -> asyncio.coroutine:
await main()
asyncio.run(run())
Whenever I run it, it gives me this error:
File "/usr/local/lib/python3.10/dist-packages/telethon/network/mtprotosender.py", line 173, in send
raise ConnectionError('Cannot send requests while disconnected')
ConnectionError: Cannot send requests while disconnected
asyncio.run fixed a previous error which was:
RuntimeWarning: coroutine 'run' was never awaited
run()
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Now I am stuck with this new error that I can't seem to get around.
| [
"Try changing your code to something like this:\nimport asyncio\nimport telethon\n\napi_id = 13******\napi_hash = '8ba0a***********************'\nmessage_file = 'messages.txt'\n\n# Better style would be to have these not initiated\n# at the top-level and instead behind the if name == main\n# guard, but lets keep it simple.\nclient = telethon.TelegramClient('sender', api_id, api_hash)\nwith open(message_file, 'r') as f:\n messages = f.readlines()\n\nasync def main():\n for message in messages:\n await client.send_message('@Example_recipient', message.strip())\n await asyncio.sleep(60)\n\nif __name__ == \"__main__\":\n # The context manager will release the resource\n # (i.e. call client.disconnect) for us when the\n # block exits.\n with client:\n client.loop.run_until_complete(main())\n \n\nwhich is much closer to the example in the documentation. Also it's not clear why you have the 60 second wait, or why you are using asyncio if you want to wait between sending messages. It looks like you could just use telethon.sync and block and not have any of the asyncio boilerplate?\nUpdate based on comments\nwith open(file) as handle:\n messages = list(handle.readlines())\n\n...\n\n while len(messages):\n first = messages.pop()\n # check here before popping that the list\n # isn't already empty\n second = messages.pop()\n await client.send_message(first)\n await client.send_message(second)\n\n"
] | [
0
] | [] | [] | [
"async_await",
"python",
"telegram",
"telethon"
] | stackoverflow_0074655509_async_await_python_telegram_telethon.txt |
Q:
Assigning to a subset of a Dataframe (with a selection or other method) in python Polars
In Pandas I can do the following:
data = pd.DataFrame(
{
"era": ["01", "01", "02", "02", "03", "10"],
"pred1": [1, 2, 3, 4, 5,6],
"pred2": [2,4,5,6,7,8],
"pred3": [3,5,6,8,9,1],
"something_else": [5,4,3,67,5,4],
})
pred_cols = ["pred1", "pred2", "pred3"]
ERA_COL = "era"
DOWNSAMPLE_CROSS_VAL = 10
test_split = ['01', '02', '10']
test_split_index = data[ERA_COL].isin(test_split)
downsampled_train_split_index = train_split_index[test_split_index].index[::DOWNSAMPLE_CROSS_VAL]
data.loc[test_split_index, "pred1"] = somefunction()["another_column"]
How can I achieve the same in Polars? I tried to do some data.filter(****) = somefunction()["another_column"], but the filter output is not assignable with Polars.
A:
Let's see if I can help. It would appear that what you want to accomplish is to replace a subset/filtered portion of a column with values derived from one or more other columns.
For example, if you are attempting to accomplish this:
ERA_COL = "era"
test_split = ["01", "02", "10"]
test_split_index = data[ERA_COL].isin(test_split)
data.loc[test_split_index, "pred1"] = -2 * data["pred3"]
print(data)
>>> print(data)
era pred1 pred2 pred3 something_else
0 01 -6 2 3 5
1 01 -10 4 5 4
2 02 -12 5 6 3
3 02 -16 6 8 67
4 03 5 7 9 5
5 10 -2 8 1 4
We would accomplish the above in Polars using a when/then/otherwise expression:
(
pl.from_pandas(data)
.with_column(
pl.when(pl.col(ERA_COL).is_in(test_split))
.then(-2 * pl.col('pred3'))
.otherwise(pl.col('pred1'))
.alias('pred1')
)
)
shape: (6, 5)
┌─────┬───────┬───────┬───────┬────────────────┐
│ era ┆ pred1 ┆ pred2 ┆ pred3 ┆ something_else │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═══════╪═══════╪═══════╪════════════════╡
│ 01 ┆ -6 ┆ 2 ┆ 3 ┆ 5 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 01 ┆ -10 ┆ 4 ┆ 5 ┆ 4 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 02 ┆ -12 ┆ 5 ┆ 6 ┆ 3 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 02 ┆ -16 ┆ 6 ┆ 8 ┆ 67 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 03 ┆ 5 ┆ 7 ┆ 9 ┆ 5 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤
│ 10 ┆ -2 ┆ 8 ┆ 1 ┆ 4 │
└─────┴───────┴───────┴───────┴────────────────┘
Is this what you were looking to accomplish?
A:
A few things as general points.
polars syntax doesn't attempt to match that of pandas.
In polars, you can only assign a whole df, you can't assign part of a df.
polars doesn't use an internal index so there's no index to record
For your problem, assuming there isn't already a natural index, you'd want to make an explicit index.
pldata = pl.from_pandas(data).with_row_count(name='myindex')
then recording the index would be
test_split_index = pldata.filter(pl.col(ERA_COL).is_in(test_split)).select('myindex').to_series()
For your last bit on the final assignment, without knowing anything about somefunction my best guess is that you'd want to do that with a join.
Maybe something like:
pldata=pldata.join(
pl.from_pandas(some_function()['another_column']) \
.with_column(test_split_index.alias('myindex')),
on='myindex')
Your test_split_index in the pandas code is actually a boolean mask, not the index, whereas the above is the actual index, so take that with a grain of salt.
All that being said, copies are essentially free in polars, so rather than keeping track of index positions manually (as error-prone as that can be), you can just make 2 new dfs: under the hood they reference the same data rather than making a physical copy.
Something like:
testdata=pldata.filter(pl.col(ERA_COL).is_in(test_split))
traindata=pldata.filter(~pl.col(ERA_COL).is_in(test_split))
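Tying this back to the original pattern, a hedged sketch of the pandas .loc[mask, "pred1"] = ... assignment done with when/then/otherwise; it assumes somefunction() returns values aligned row-for-row with data, which is what the pandas code implies.
replacement = pl.Series('another_column', somefunction()['another_column'])

out = (
    pl.from_pandas(data)
    .with_column(replacement)
    .with_column(
        pl.when(pl.col(ERA_COL).is_in(test_split))
        .then(pl.col('another_column'))
        .otherwise(pl.col('pred1'))
        .alias('pred1')
    )
    .drop('another_column')
)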
| Assigning to a subset of a Dataframe (with a selection or other method) in python Polars | In Pandas I can do the following:
data = pd.DataFrame(
{
"era": ["01", "01", "02", "02", "03", "10"],
"pred1": [1, 2, 3, 4, 5,6],
"pred2": [2,4,5,6,7,8],
"pred3": [3,5,6,8,9,1],
"something_else": [5,4,3,67,5,4],
})
pred_cols = ["pred1", "pred2", "pred3"]
ERA_COL = "era"
DOWNSAMPLE_CROSS_VAL = 10
test_split = ['01', '02', '10']
test_split_index = data[ERA_COL].isin(test_split)
downsampled_train_split_index = train_split_index[test_split_index].index[::DOWNSAMPLE_CROSS_VAL]
data.loc[test_split_index, "pred1"] = somefunction()["another_column"]
How can I achieve the same in Polars? I tried to do some data.filter(****) = somefunction()["another_column"], but the filter output is not assignable with Polars.
| [
"Let's see if I can help. It would appear that what you want to accomplish is to replace a subset/filtered portion of a column with values derived from other one or more other columns.\nFor example, if you are attempting to accomplish this:\nERA_COL = \"era\"\n\ntest_split = [\"01\", \"02\", \"10\"]\ntest_split_index = data[ERA_COL].isin(test_split)\n\ndata.loc[test_split_index, \"pred1\"] = -2 * data[\"pred3\"]\nprint(data)\n\n>>> print(data)\n era pred1 pred2 pred3 something_else\n0 01 -6 2 3 5\n1 01 -10 4 5 4\n2 02 -12 5 6 3\n3 02 -16 6 8 67\n4 03 5 7 9 5\n5 10 -2 8 1 4\n\nWe would accomplish the above in Polars using a when/then/otherwise expression:\n(\n pl.from_pandas(data)\n .with_column(\n pl.when(pl.col(ERA_COL).is_in(test_split))\n .then(-2 * pl.col('pred3'))\n .otherwise(pl.col('pred1'))\n .alias('pred1')\n )\n)\n\nshape: (6, 5)\n┌─────┬───────┬───────┬───────┬────────────────┐\n│ era ┆ pred1 ┆ pred2 ┆ pred3 ┆ something_else │\n│ --- ┆ --- ┆ --- ┆ --- ┆ --- │\n│ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │\n╞═════╪═══════╪═══════╪═══════╪════════════════╡\n│ 01 ┆ -6 ┆ 2 ┆ 3 ┆ 5 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤\n│ 01 ┆ -10 ┆ 4 ┆ 5 ┆ 4 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤\n│ 02 ┆ -12 ┆ 5 ┆ 6 ┆ 3 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤\n│ 02 ┆ -16 ┆ 6 ┆ 8 ┆ 67 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤\n│ 03 ┆ 5 ┆ 7 ┆ 9 ┆ 5 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌╌┤\n│ 10 ┆ -2 ┆ 8 ┆ 1 ┆ 4 │\n└─────┴───────┴───────┴───────┴────────────────┘\n\nIs this what you were looking to accomplish?\n",
"A few things as general points.\n\npolars syntax doesn't attempt to match that of pandas.\n\nIn polars, you can only assign a whole df, you can't assign part of a df.\n\npolars doesn't use an internal index so there's no index to record\n\n\nFor your problem, assuming there isn't already a natural index, you'd want to make an explicit index.\npldata=pl.from_pandas(data).with_row_count(name='myindx)\n\nthen recording the index would be\ntest_split_index = pldata.filter(pl.col(ERA_COL).is_in(test_split)).select('myindx').to_series()\n\nFor your last bit on the final assignment, without knowing anything about somefunction my best guess is that you'd want to do that with a join.\nMaybe something like:\npldata=pldata.join(\n pl.from_pandas(some_function()['another_column']) \\\n .with_column(test_split_index.alias('myindex')),\n on='myindex')\n\nYour test_split_index is actually a bool not the index whereas the above is the actual index so take that with a grain of salt.\nAll that being said, polars has free copies of data so rather than keeping track of index positions manually (as error prone as that can be), you can just make 2 new dfs since, under the hood, it just references the data, it doesn't make a physical copy of it.\nSomething like:\ntestdata=pldata.filter(pl.col(ERA_COL).is_in(test_split))\n\ntraindata=pldata.filter(~pl.col(ERA_COL).is_in(test_split))\n\n"
] | [
0,
0
] | [] | [] | [
"python",
"python_polars"
] | stackoverflow_0074645846_python_python_polars.txt |
Q:
convert flatten dict to nested dict
I use this function to convert nested dict to flatten dict:
make_flatten_dict = lambda d, sep: pd.json_normalize(d, sep=sep).to_dict(orient='records')[0]
input:
d = {'a': 1,
'c': {'a': '#a_val', 'b': {'x': '#x_value', 'y' : '#y'}},
'd': [1, '#d_i1', 3]}
output:
{'a': 1, 'd': [1, '#d_i1', 3], 'c.a': '#a_val', 'c.b.x': '#x_value', 'c.b.y': '#y'}
How I can build input from the output?
A:
For each dotted key you need to build the tree: add a {} for each part except the last, then use the last part to assign the value
value = {'a': 1, 'd': [1, '#d_i1', 3], 'c.a': '#a_val', 'c.b.x': '#x_value', 'c.b.y': '#y'}
result = {}
for k, v in value.items():
tmp = result
*keys, last = k.split(".")
for key in keys:
tmp = tmp.setdefault(key, {})
tmp[last] = v
print(result)
# {'a': 1, 'd': [1, '#d_i1', 3], 'c': {'a': '#a_val', 'b': {'x': '#x_value', 'y': '#y'}}}
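Wrapped up as a reusable counterpart to make_flatten_dict (the function name and the sep parameter are my own choice), the same loop round-trips the example from the question:
def make_nested_dict(flat, sep='.'):
    result = {}
    for k, v in flat.items():
        tmp = result
        *keys, last = k.split(sep)
        for key in keys:
            tmp = tmp.setdefault(key, {})
        tmp[last] = v
    return result

assert make_nested_dict(value) == d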
| convert flatten dict to nested dict | I use this function to convert nested dict to flatten dict:
make_flatten_dict = lambda d, sep: pd.json_normalize(d, sep=sep).to_dict(orient='records')[0]
input:
d = {'a': 1,
'c': {'a': '#a_val', 'b': {'x': '#x_value', 'y' : '#y'}},
'd': [1, '#d_i1', 3]}
output:
{'a': 1, 'd': [1, '#d_i1', 3], 'c.a': '#a_val', 'c.b.x': '#x_value', 'c.b.y': '#y'}
How I can build input from the output?
| [
"For each multi-key you need to build the tree, add a {} for each one except the last, then use the last one to assign the value\nvalue = {'a': 1, 'd': [1, '#d_i1', 3], 'c.a': '#a_val', 'c.b.x': '#x_value', 'c.b.y': '#y'}\n\nresult = {}\nfor k, v in value.items():\n tmp = result\n *keys, last = k.split(\".\")\n for key in keys:\n tmp = tmp.setdefault(key, {})\n tmp[last] = v\n\nprint(result)\n# {'a': 1, 'd': [1, '#d_i1', 3], 'c': {'a': '#a_val', 'b': {'x': '#x_value', 'y': '#y'}}}\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074656144_python.txt |
Q:
Bitcoin Chart with log scale Python
I'm using Python (beginner) and I want to plot the Bitcoin price on a log scale, but instead of the log-style tick labels I want to see the plain prices.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from cryptocmd import CmcScraper
from math import e
from matplotlib.ticker import ScalarFormatter
# -------------IMPORT THE DATA----------------
btc_data = CmcScraper("BTC", "28-04-2012", "27-11-2022", True, True, "USD")
# Create a Dataframe
df = btc_data.get_dataframe()
#Set the index as Date instead of numerical value
df = df.set_index(pd.DatetimeIndex(df["Date"].values))
df
#Plot the Data
plt.style.use('fivethirtyeight')
plt.figure(figsize =(20, 10))
plt.title("Bitcoin Price", fontsize=18)
plt.yscale("log")
plt.plot(df["Close"])
plt.xlabel("Date", fontsize=15)
plt.ylabel("Price", fontsize=15)
plt.show()
My output
As you can see we have log scale price but I want to see "100 - 1 000 - 10 000" instead of "10^2 - 10^3 - 10^4" on the y axis.
Does anyone know how to solve this?
Have a nice day!
A:
Welcome to Stackoverflow!
You were getting there, the following code will yield what you want (I simply added some fake data + 1 line of code to your plotting code):
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
y = [10**x for x in np.arange(0, 5, 0.1)]
x = [x for x in np.linspace(2018, 2023, len(y))]
#Plot the Data
plt.style.use('fivethirtyeight')
plt.figure(figsize =(20, 10))
plt.title("Bitcoin Price", fontsize=18)
plt.yscale("log")
plt.plot(x, y)
plt.xlabel("Date", fontsize=15)
plt.ylabel("Price", fontsize=15)
plt.gca().get_yaxis().set_major_formatter(ticker.ScalarFormatter())
plt.show()
This generates the following figure:
The fundamental lines are these:
import matplotlib.ticker as ticker
plt.gca().get_yaxis().set_major_formatter(ticker.ScalarFormatter())
Explanation: plt.gca() gets the currently active axis object. This object is the one we want to adapt. And the actual thing we want to adapt is the way our ticks get formatted for our y axis. Hence the latter part: .get_yaxis().set_major_formatter(). Now, we only need to choose which formatter. I chose ScalarFormatter, which is the default for scalars. More info on your choices can be found here.
Hope this helps!
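If you also want the spaced thousands separators written in the question ("1 000", "10 000"), a FuncFormatter sketch along these lines should work; the space as separator is just an assumption taken from how the question writes the numbers.
plt.gca().yaxis.set_major_formatter(
    ticker.FuncFormatter(lambda y, _: f'{y:,.0f}'.replace(',', ' '))
)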
| Bitcoin Chart with log scale Python | I'm using Python (beginner) and I want to plot the Bitcoin price in log scale but without seeing the log price, I want to see the linear price.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from cryptocmd import CmcScraper
from math import e
from matplotlib.ticker import ScalarFormatter
# -------------IMPORT THE DATA----------------
btc_data = CmcScraper("BTC", "28-04-2012", "27-11-2022", True, True, "USD")
# Create a Dataframe
df = btc_data.get_dataframe()
#Set the index as Date instead of numerical value
df = df.set_index(pd.DatetimeIndex(df["Date"].values))
df
#Plot the Data
plt.style.use('fivethirtyeight')
plt.figure(figsize =(20, 10))
plt.title("Bitcoin Price", fontsize=18)
plt.yscale("log")
plt.plot(df["Close"])
plt.xlabel("Date", fontsize=15)
plt.ylabel("Price", fontsize=15)
plt.show()
My output
As you can see we have log scale price but I want to see "100 - 1 000 - 10 000" instead of "10^2 - 10^3 - 10^4" on the y axis.
Does anyone know how to solve this?
Have a nice day!
| [
"Welcome to Stackoverflow!\nYou were getting there, the following code will yield what you want (I simply added some fake data + 1 line of code to your plotting code):\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\n\ny = [10**x for x in np.arange(0, 5, 0.1)]\nx = [x for x in np.linspace(2018, 2023, len(y))]\n\n#Plot the Data \nplt.style.use('fivethirtyeight') \nplt.figure(figsize =(20, 10))\nplt.title(\"Bitcoin Price\", fontsize=18)\nplt.yscale(\"log\")\nplt.plot(x, y)\nplt.xlabel(\"Date\", fontsize=15)\nplt.ylabel(\"Price\", fontsize=15)\nplt.gca().get_yaxis().set_major_formatter(ticker.ScalarFormatter())\nplt.show()\n\nThis generates the following figure:\n\nThe fundamental lines are these:\nimport matplotlib.ticker as ticker\nplt.gca().get_yaxis().set_major_formatter(ticker.ScalarFormatter())\n\nExplanation: plt.gca() gets the currently active axis object. This object is the one we want to adapt. And the actual thing we want to adapt is the way our ticks get formatted for our y axis. Hence the latter part: .get_yaxis().set_major_formatter(). Now, we only need to choose which formatter. I chose ScalarFormatter, which is the default for scalars. More info on your choices can be found here.\nHope this helps!\n"
] | [
0
] | [] | [] | [
"bitcoin",
"price",
"python",
"scale"
] | stackoverflow_0074654327_bitcoin_price_python_scale.txt |
Q:
How to lookup in python between 2 dataframes with match mode -> an exact match or the next larger item?
I'd like to create a lookup (similar to excel for example) with match mode -> an exact match or the next larger item.
Let's say I have these 2 dataframes:
seed(1)
np.random.seed(1)
Wins_Range = np.arange(1,101,1)
Wins = pd.DataFrame({"Wins Needed": Wins_Range})
Wins
Wins Needed
0 1
1 2
2 3
3 4
4 5
... ...
95 96
96 97
97 98
98 99
99 100
And the second one:
Levels_Range = np.arange(1,101,1)
Levels = pd.DataFrame({"Level": Levels_Range})
Levels["Wins"]=np.random.choice([1,2,3,4,5],size=len(Levels), p=[0.2,0.2,0.2,0.2,0.2]).cumsum()
Levels
Level Wins
0 1 3
1 2 7
2 3 8
3 4 10
4 5 11
... ... ...
95 96 281
96 97 286
97 98 289
98 99 290
99 100 294
Now, I'd like to pull the level from Levels df to the Wins df when the condition is Wins Needed=Wins but as I said - the match mode will be an exact match or the next larger item.
BTW - the type of Levels["Wins"] is float and the type of Wins["Win"] is int if that matters.
I've tried to use the merge function but it doesn't work (I'm new at python) -
Wins.merge(Levels, on='Wins Needed', how='left')
Thanks in advance!
A:
You need a merge_asof:
out = pd.merge_asof(Wins, Levels, left_on='Wins Needed', right_on='Wins',
direction='forward')[['Wins Needed', 'Level']]
Or
Wins['Level'] = pd.merge_asof(Wins, Levels, left_on='Wins Needed', right_on='Wins',
direction='forward')['Level']
NB. the keys must be sorted for a merge_asof.
Output:
Wins Needed Level
0 1 1
1 2 1
2 3 1
3 4 2
4 5 2
.. ... ...
95 96 35
96 97 35
97 98 36
98 99 36
99 100 37
[100 rows x 2 columns]
If the values are not initially sorted:
Wins['Level'] = pd.merge_asof(Wins[['Wins Needed']].reset_index().sort_values(by='Wins Needed'),
Levels.sort_values(by='Wins'),
left_on='Wins Needed', right_on='Wins',
direction='forward').set_index('index')['Level']
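An equivalent sketch with numpy's searchsorted, which makes the "exact match or next larger" semantics explicit: side='left' returns the first position whose Wins value is >= Wins Needed. It assumes Levels is sorted by Wins and that every Wins Needed value is at most Levels['Wins'].max() (both hold in the example).
import numpy as np

idx = np.searchsorted(Levels['Wins'].to_numpy(), Wins['Wins Needed'].to_numpy(), side='left')
Wins['Level'] = Levels['Level'].to_numpy()[idx]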
| How to lookup in python between 2 dataframes with match mode -> an exact match or the next larger item? | I'd like to create a lookup (similar to excel for example) with match mode -> an exact match or the next larger item.
Let's say I have these 2 dataframes:
seed(1)
np.random.seed(1)
Wins_Range = np.arange(1,101,1)
Wins = pd.DataFrame({"Wins Needed": Wins_Range})
Wins
Wins Needed
0 1
1 2
2 3
3 4
4 5
... ...
95 96
96 97
97 98
98 99
99 100
And the second one:
Levels_Range = np.arange(1,101,1)
Levels = pd.DataFrame({"Level": Levels_Range})
Levels["Wins"]=np.random.choice([1,2,3,4,5],size=len(Levels), p=[0.2,0.2,0.2,0.2,0.2]).cumsum()
Levels
Level Wins
0 1 3
1 2 7
2 3 8
3 4 10
4 5 11
... ... ...
95 96 281
96 97 286
97 98 289
98 99 290
99 100 294
Now, I'd like to pull the level from Levels df to the Wins df when the condition is Wins Needed=Wins but as I said - the match mode will be an exact match or the next larger item.
BTW - the type of Levels["Wins"] is float and the type of Wins["Win"] is int if that matters.
I've tried to use the merge function but it doesn't work (I'm new at python) -
Wins.merge(Levels, on='Wins Needed', how='left')
Thanks in advance!
| [
"You need a merge_asof:\nout = pd.merge_asof(Wins, Levels, left_on='Wins Needed', right_on='Wins',\n direction='forward')[['Wins Needed', 'Level']]\n\nOr\nWins['Level'] = pd.merge_asof(Wins, Levels, left_on='Wins Needed', right_on='Wins',\n direction='forward')['Level']\n\nNB. the keys must be sorted for a merge_asof.\nOutput:\n Wins Needed Level\n0 1 1\n1 2 1\n2 3 1\n3 4 2\n4 5 2\n.. ... ...\n95 96 35\n96 97 35\n97 98 36\n98 99 36\n99 100 37\n\n[100 rows x 2 columns]\n\nIf the values are not initially sorted:\nWins['Level'] = pd.merge_asof(Wins[['Wins Needed']].reset_index().sort_values(by='Wins Needed'),\n Levels.sort_values(by='Wins'),\n left_on='Wins Needed', right_on='Wins',\n direction='forward').set_index('index')['Level']\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074656342_dataframe_pandas_python.txt |
Q:
I am trying to make a simple gradient descent algorithm in python, but it goes back up after passing through the lowest point
Problem
I am trying to build a simple gradient descent algorithm and plot it on a heatmap.
I assume there are better ways to do this, but I have to use this methodology.
My professor and I have very similar code but we cannot understand why mine behaves differently. Once the lowest point is reached, it should just turn around the lowest point. It works for my professor, but mine goes back up and stops in the middle of nowhere for some reason that we cannot figure out.
Let me be precise, it does not stop because the number of iterations (passed as a parameter of the function) is reached, but the gradient vector becomes very small in a random place as it should happen at the lowest point.
Here is my code :
from math import cos, sin, exp
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import derivative
def f(x, y):
return 4 * exp(-((x**2)/2 + (y**2)/2)) * sin(x*(y-1/2)) * cos(x/2 + y)
precision = 10e-5
# Partial derivative with respect to x
def ddx(f):
return lambda x, y: derivative(f, x, dx=precision, n=1, args=(y,))
# Partial derivative with respect to y
def ddy(f):
return lambda x, y: derivative(f, y, dx=precision, n=1, args=(x,))
# grad(f) returns the vector-valued function of two real variables: (x, y) -> ∇f(x, y)
def grad(f):
return lambda x, y: np.array([ddx(f)(x, y), ddy(f)(x, y)])
def display_heatmap_with_gradient_descent(f, a, b, c, d, n, x0, y0, iterations, h):
x = np.arange(a,b,n)
y = np.arange(c,d,n)
    X, Y = np.meshgrid(x, y) # prepare the mesh grid
    f_vect = np.vectorize(f) # turn f into a vectorized function
    Z = f_vect(X,Y) # compute the values
    # Draw the color map
fig, ax = plt.subplots()
colormap = ax.pcolormesh(X, Y, Z, cmap='YlGnBu')
fig.colorbar(colormap)
    # Gradient function for f
gradient = grad(f)
    # Add the gradient descent path
x = x0
y = y0
for i in range(iterations):
vecteur = -gradient(x, y)
new_x = x + h * vecteur[0]
new_y = y + h * vecteur[1]
print("Itération " + str(i) + " : (" + str(x) + ", " + str(y) + ") -> (" + str(new_x) + ", " + str(new_y) + ")")
print("\tVecteur : " + str(vecteur))
ax.plot([x, new_x], [y, new_y], color='black')
x = new_x
y = new_y
    # Show the stopping point
ax.plot([x], [y], marker='o', markersize=3, color="red")
plt.show()
display_heatmap_with_gradient_descent(f, -5, 5, -5, 5, 0.05, -0.36, -0.39, 100, 0.1)
Update
I do not know if this is the only problem, but I made some tests with my gradient function, and apparently there is something wrong with it.
For example, I tested the following code :
print(grad(lambda x, y : x**2 - y**2)(1., 1.))
And it gives me [2., 2.], but it should be [2., -2.] since ddx is 2x and ddy is -2y for f = x^2 - y^2.
So I guess this is where the problem lies but I do not understand what is wrong with my grad function.
In addition, if the problem is in the grad function, it is strange that my algorithm almost works and goes through the lowest point.
With such a fundamental problem, I would have expected it to do just about anything.
Track
Again, this might not be the only problem, but at least it should be part of it.
If I do grad(f)(x, y) where x=y, then my ddx and ddy called by grad will return the same functions. It explains why I get [2., 2.] instead of [2., -2.] for grad(lambda x, y : x**2 - y**2)(1., 1.).
How can I proceed to get different partial derivatives when x = y?
Let me know if you need any further explanation.
Thank you in advance.
A:
Solution
The problem was indeed with the grad function, or rather with the ddx and ddy functions: both asked scipy's derivative to differentiate with respect to the first argument of f, so they were effectively the same function, just evaluated with x and y swapped.
In other words, I wasn't computing the partial derivatives correctly, which caused my gradient to be wrong and my algorithm to misbehave.
Here is the part of the code that has been modified, the rest is unchanged:
def f_replace_y(f, y):
return lambda x: f(x, y)
def f_replace_x(f, x):
return lambda y: f(x, y)
# Partial derivative of f with respect to x
def ddx(f):
return lambda x, y : derivative(f_replace_y(f, y), x, dx=precision, n=1)
# Partial derivative of f with respect to y
def ddy(f):
return lambda x, y : derivative(f_replace_x(f, x), y, dx=precision, n=1)
However, I still wonder how my previous code could pass through the lowest point given that my partial derivatives were wrong.
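For anyone who wants to avoid scipy.misc.derivative altogether (it is deprecated in recent SciPy releases), an equivalent sketch with explicit central differences gives the same corrected behaviour; it reuses the precision constant and the grad function from the question.
def ddx(f):
    return lambda x, y: (f(x + precision, y) - f(x - precision, y)) / (2 * precision)

def ddy(f):
    return lambda x, y: (f(x, y + precision) - f(x, y - precision)) / (2 * precision)

print(grad(lambda x, y: x**2 - y**2)(1., 1.))  # approximately [ 2. -2.]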
| I am trying to make a simple gradient descent algorithm in python, but it goes back up after passing through the lowest point | Problem
I am trying to build a simple gradient descent algorithm and plot it on a heatmap.
I assume there are better ways to do this, but I have to use this methodology.
My professor and I have very similar code but we cannot understand why mine behaves differently. Once the lowest point is reached, it should just turn around the lowest point. It works for my professor, but mine goes back up and stops in the middle of nowhere for some reason that we cannot figure out.
Let me be precise, it does not stop because the number of iterations (passed as a parameter of the function) is reached, but the gradient vector becomes very small in a random place as it should happen at the lowest point.
Here is my code :
from math import cos, sin, exp
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import derivative
def f(x, y):
return 4 * exp(-((x**2)/2 + (y**2)/2)) * sin(x*(y-1/2)) * cos(x/2 + y)
precision = 10e-5
# Derivee par rapport a x
def ddx(f):
return lambda x, y: derivative(f, x, dx=precision, n=1, args=(y,))
# Derivee par rapport a y
def ddy(f):
return lambda x, y: derivative(f, y, dx=precision, n=1, args=(x,))
# grad(f) retourne la fonction vectorielle de deux variables réelles : (x, y) -> ∇(x, y)
def grad(f):
return lambda x, y: np.array([ddx(f)(x, y), ddy(f)(x, y)])
def display_heatmap_with_gradient_descent(f, a, b, c, d, n, x0, y0, iterations, h):
x = np.arange(a,b,n)
y = np.arange(c,d,n)
X, Y = np.meshgrid(x, y) # préparation du maillage
f_vect = np.vectorize(f) # transformation de f en une fonction vectorielle
Z = f_vect(X,Y) # calcul des images
# Trace la carte de couleur
fig, ax = plt.subplots()
colormap = ax.pcolormesh(X, Y, Z, cmap='YlGnBu')
fig.colorbar(colormap)
# Fonction gradient pour f
gradient = grad(f)
# Ajoute la descente de gradient
x = x0
y = y0
for i in range(iterations):
vecteur = -gradient(x, y)
new_x = x + h * vecteur[0]
new_y = y + h * vecteur[1]
print("Itération " + str(i) + " : (" + str(x) + ", " + str(y) + ") -> (" + str(new_x) + ", " + str(new_y) + ")")
print("\tVecteur : " + str(vecteur))
ax.plot([x, new_x], [y, new_y], color='black')
x = new_x
y = new_y
# Affiche le point d'arrêt
ax.plot([x], [y], marker='o', markersize=3, color="red")
plt.show()
display_heatmap_with_gradient_descent(f, -5, 5, -5, 5, 0.05, -0.36, -0.39, 100, 0.1)
Update
I do not know if this is the only problem, but I made some tests with my gradient function, and apparently there is something wrong with it.
For example, I tested the following code :
print(grad(lambda x, y : x**2 - y**2)(1., 1.))
And it gives me [2., 2.], but it should be [2., -2.] since ddx is 2x and ddy is -2x for f = x^2 - y^2.
So I guess this is where the problem lies but I do not understand what is wrong with my grad function.
In addition, if the problem is in the grad function, it is strange that my algorithm almost works and goes through the lowest point.
With such a core problem, I would assume it would do anything.
Track
Again, this might not be the only problem, but at least it should be part of it.
If I do grad(f)(x, y) where x=y, then my ddx and ddy called by grad will return the same functions. It explains why I get [2., 2.] instead of [2., -2.] for grad(lambda x, y : x**2 - y**2)(1., 1.).
How can I process to get different partial derivatives with x = y ?
Let me know if you need any further explanation.
Thank you in advance.
| [
"Solution\nThe problem was in effect with the grad function or rather with the ddx and ddy functions. These two functions were actually the same, except I was swapping x and y.\nIn other words, I wasn't computing the partial derivatives correctly which caused my gradient to be wrong and my algorithm to not work properly.\nHere is the part of the code that has been modified, the rest is unchanged:\ndef f_replace_y(f, y):\n return lambda x: f(x, y)\n\ndef f_replace_x(f, x):\n return lambda y: f(x, y)\n\n# Dérivée partielle de f par rapport à x\ndef ddx(f):\n return lambda x, y : derivative(f_replace_y(f, y), x, dx=precision, n=1)\n\n# Dériavée partielle de f par rapport à y\ndef ddy(f):\n return lambda x, y : derivative(f_replace_x(f, x), y, dx=precision, n=1)\n\nHowever, I still wonder how my previous code could pass through the lowest point given that my partial derivatives were wrong.\n"
] | [
0
] | [] | [] | [
"algorithm",
"gradient",
"gradient_descent",
"math",
"python"
] | stackoverflow_0074646385_algorithm_gradient_gradient_descent_math_python.txt |
Q:
Does the preprocessing of one algorithm change the conditions of the experiment?
As an example,
We have two algorithms that utilize the same dataset and the same train and test data:
1 - uses k-NN and returns the accuracy;
2 -applies preprocessing before k-NN and adds a few more things, before returning the accuracy.
Although the preprocessing "is a part of" algorithm number 2, I've been told that we cannot compare these two methods because the experiment's conditions have changed as a result of the preprocessing.
Given that the preprocessing is exclusive to algorithm no. 2, I believe that the conditions have not been altered.
Which statement is the correct one?
A:
It depends what you are comparing.
if you compare the two methods "with preprocessing allowed", then you don't include the preprocessing in the measurement; and in principle you should test several (identical) queries;
if you compare "with no preprocessing allowed", then include everything in the measurement.
| Does the preprocessing of one algorithm change the conditions of the experiment? | As an example,
We have two algorithms that utilize the same dataset and the same train and test data:
1 - uses k-NN and returns the accuracy;
2 -applies preprocessing before k-NN and adds a few more things, before returning the accuracy.
Although the preprocessing "is a part of" algorithm number 2, I've been told that we cannot compare these two methods because the experiment's conditions have changed as a result of the preprocessing.
Given that the preprocessing is only exclusive to algorithm no. 2, I believe that the circumstances have not been altered.
Which statement is the correct one?
| [
"It depends what you are comparing.\n\nif you compare the two methods \"with preprocessing allowed\", then you don't include the preprocessing in the experiment; and in principle you should test several (identical) queries;\n\nif you compare \"with no preprocessing allowed\", then include everything in the measurement.\n\n\n"
] | [
1
] | [] | [] | [
"algorithm",
"comparison",
"machine_learning",
"python",
"theory"
] | stackoverflow_0074656128_algorithm_comparison_machine_learning_python_theory.txt |
Q:
Pytest mock fastapi.Depends upon direct function call
How can we mock the fastapi.Depends function in a pytest? It works if we access the function via starlette.testclient.TestClient (see 1st example below). It fails if we call the method directly (see 2nd example).
We know that we can override the Depends with app.dependency_overrides[get_user] = ... but same here, works when accessed via fastapi, fails when method is accessed directly.
We also tried mocking Depends but had no luck.
from fastapi import Depends, FastAPI
from starlette.testclient import TestClient
app = FastAPI()
def get_user():
return "me"
@app.get("/")
def demo(user_id=Depends(get_user)):
return {"user_id": user_id}
# works
def test_fastapi():
client = TestClient(app)
result = client.get("/")
assert result.json()["user_id"] == "me"
# fails: as result is {'user_id': Depends(get_user)}
def test_method_call():
result = demo()
assert result["user_id"] == "me"
Please note that this is a very simplified example. In the real code the method that fails is far deeper in the hierarchy. So we cannot change the get_user method or the corresponding call easily.
A:
I am not sure if this is the correct approach to handle this, but I had a similar issue when trying to test a deeply nested function that was using Depends and thus was not willing to use the override approach.
My "mock" was pretty simple: instead of trying to mock fastapi.Depends I just passed in my desired return value to replace the default parameter value.
So in your case something like:
def test_method_call():
result = demo(user_id="me")
assert result["user_id"] == "me"
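The underlying reason, as far as I understand it: Depends(...) objects are only resolved by FastAPI's request handling, so a direct call just receives the Depends instance as the default value. Another option for a direct-call test is therefore to resolve the dependency by hand:
def test_method_call_resolving_dependency():
    result = demo(user_id=get_user())
    assert result["user_id"] == "me"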
| Pytest mock fastapi.Depends upon direct function call | How can we mock the fastapi.Depends function in a pytest? It works if we access the function via starlette.testclient.TestClient (see 1st example below). It fails if we call the method directly (see 2nd example).
We know that we can override the Depends with app.dependency_overrides[get_user] = ... but same here, works when accessed via fastapi, fails when method is accessed directly.
We also tried mocking Depends but had no luck.
from fastapi import Depends, FastAPI
from starlette.testclient import TestClient
app = FastAPI()
def get_user():
return "me"
@app.get("/")
def demo(user_id=Depends(get_user)):
return {"user_id": user_id}
# works
def test_fastapi():
client = TestClient(app)
result = client.get("/")
assert result.json()["user_id"] == "me"
# fails: as result is {'user_id': Depends(get_user)}
def test_method_call():
result = demo()
assert result["user_id"] == "me"
Please note that this is a very simplified example. In the real code the method that fails is far deeper in the hierarchy. So we cannot change the get_user method or the corresponding call easily.
| [
"I am not sure if this is the correct approach to handle this, but I had a similar issue when trying to test a deeply nested function that was using Depends and thus was not willing to use the override approach.\nMy \"mock\" was pretty simple, instead of trying to mock fastapi.Depends I just passed in my desired return value to replace the default parameter value.\nSo in your case something like:\ndef test_method_call():\n result = demo(user_id=\"me\")\n assert result[\"user_id\"] == \"me\"\n\n"
] | [
0
] | [] | [] | [
"fastapi",
"mocking",
"pytest",
"python"
] | stackoverflow_0074630659_fastapi_mocking_pytest_python.txt |
Q:
group rows based on partial strings from two columns and sum values
df = pd.DataFrame({'c1':['Ax','Ay','Bx','By'], 'c2':['Ay','Ax','By','Bx'], 'c3':[1,2,3,4]})
c1 c2 c3
0 Ax Ay 1
1 Ay Ax 2
2 Bx By 3
3 By Bx 4
I'd like to sum the c3 values by aggregating the same xy combinations from the c1 and c2 columns.
The expected output is
c1 c2 c3
0 x y 4 #[Ax Ay] + [Bx By]
1 y x 6 #[Ay Ax] + [By Bx]
A:
You can select the values in c1 and c2 without their first letters and aggregate with sum:
df = df.groupby([df.c1.str[1:], df.c2.str[1:]]).sum().reset_index()
print (df)
c1 c2 c3
0 x y 4
1 y x 6
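If the prefixes are not always a single character, a hedged variant is to group on the last character instead (this assumes the x/y marker is always the final character of the value):
out = (
    df.groupby([df.c1.str[-1], df.c2.str[-1]])['c3']
      .sum()
      .reset_index()
)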
| group rows based on partial strings from two columns and sum values | df = pd.DataFrame({'c1':['Ax','Ay','Bx','By'], 'c2':['Ay','Ax','By','Bx'], 'c3':[1,2,3,4]})
c1 c2 c3
0 Ax Ay 1
1 Ay Ax 2
2 Bx By 3
3 By Bx 4
I'd like to sum the c3 values by aggregating the same xy combinations from the c1 and c2 columns.
The expected output is
c1 c2 c3
0 x y 4 #[Ax Ay] + [Bx By]
1 y x 6 #[Ay Ax] + [By Bx]
| [
"You can select values in c1 and c2 without first letters and aggregate sum:\ndf = df.groupby([df.c1.str[1:], df.c2.str[1:]]).sum().reset_index()\nprint (df)\n c1 c2 c3\n0 x y 4\n1 y x 6\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074656475_pandas_python.txt |
Q:
Use config and wildcards in snakemake rule output
I have a list of fasta files that may be used, but some of them may not be, so I want to use snakemake to index a fasta only if needed.
I built a yaml file like this
# config.yaml
reference_genome:
fa1: "path/to/genome"
fa2: "..."
fa3: "..."
...
and I write a snakemake like this
configfile: "config.yaml"
rule all:
input:
expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac'])
rule index:
input:
#reference_genomeFile
ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome]
output:
expand('{reference_genome}.{type}', reference_genome={reference_genome}, type=['amb', 'ann', 'pac'])
log:
'log/rule_index_{reference_genome}.log'
shell:
"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1"
I hope snakemake can monitor the index files (amb, ann, pac), but this script raises the following error:
name 'reference_genome' is not defined
File "/public/...", line ..., in <module>
Update: based on @dariober's answer:
if we run with the following config.yaml
reference_genome:
fa1: "genome_1.fa"
fa2: "genome_2.fa"
fa3: "genome_3.fa"
I expect the output is
genome_1.fa.{amb, ann, pac}
genome_2.fa.{amb, ann, pac}
genome_3.fa.{amb, ann, pac}
If we use the following workaround
rule all:
input:
expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac'])
rule index:
input:
#reference_genomeFile
ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome]
output:
expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac'])
log:
'log/rule_index_{reference_genome}.log'
shell:
"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1"
we will get
$ snakemake -s snakemake_test.smk --configfile config.yaml
# for reference_name is fa1
[Fri Dec 2 17:56:29 2022]
rule index:
input: genome_1.fa
output: fa1.amb, fa1.ann, fa1.pac
log: log/rule_index_fa1.log
jobid: 1
wildcards: reference_genome=fa1
...
That's not my expected output:
the output is fa1.amb, fa1.ann, fa1.pac, but the output I want is genome_1.fa.amb, genome_1.fa.ann, genome_1.fa.pac
A:
My guess is that you want:
rule all:
input:
expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac'])
rule index:
input:
#reference_genomeFile
ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome]
output:
expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac'])
log:
'log/rule_index_{reference_genome}.log'
shell:
"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1"
snakemake -p -n -j 1 --configfile config.yaml
That is: Run rule index for each genome file, i.e., three times here. Each run of index generates the three index files. Note the use of double curly braces {{reference_genome}} to tell expand that this wildcard does not need to be expanded.
Example config.yaml:
reference_genome:
fa1: "genome_1.fa"
fa2: "genome_2.fa"
fa3: "genome_3.fa"
A:
Building upon dariober's answer and judging from your comments, I think this is what you are looking for?
configfile: "config.yaml"
rule all:
input:
expand(
"{reference_genome}.{type}",
reference_genome=list(config["reference_genome"].values()),
type=["amb", "ann", "pac"],
),
rule index:
input:
#reference_genomeFile
ref_genome="{reference_genome}",
output:
expand("{{reference_genome}}.{type}", type=["amb", "ann", "pac"]),
log:
"log/rule_index_{reference_genome}.log",
shell:
"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1"
with config.yaml
reference_genome:
fa1: "path/to/genome1"
fa2: "path/to/genome2"
fa3: "path/to/genome3"
I've modified the rule all to use the filepaths from config.yaml rather than the list ['fa1','fa2','fa3']. I've also removed the lambda wildcard from the input of rule index as it seems unnecessary.
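A further simplification worth sketching, assuming a Snakemake version recent enough to provide multiext (which expands a common prefix with several extensions); it replaces the expand with double curly braces:
rule index:
    input:
        ref_genome="{reference_genome}",
    output:
        multiext("{reference_genome}", ".amb", ".ann", ".pac"),
    log:
        "log/rule_index_{reference_genome}.log",
    shell:
        "bwa index -a bwtsw {input.ref_genome} > {log} 2>&1"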
| Use config and wildcards in snakemake rule output | I have a list of fasta may be used, but some of them may also not be used, so I hope to use snakemake to index fastq if need
I bulit a yaml file like this
# config.yaml
reference_genome:
fa1: "path/to/genome"
fa2: "..."
fa3: "..."
...
and I write a snakemake like this
configfile: "config.yaml"
rule all:
input:
expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac'])
rule index:
input:
#reference_genomeFile
ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome]
output:
expand('{reference_genome}.{type}', reference_genome={reference_genome}, type=['amb', 'ann', 'pac'])
log:
'log/rule_index_{reference_genome}.log'
shell:
"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1"
I hope snakemake can monitor the index file (amb, ann, pac), but this script will raise follow error:
name 'reference_genome' is not defined
File "/public/...", line ..., in <module>
update: base on @dariober's answer:
if we runing with following config.yaml
reference_genome:
fa1: "genome_1.fa"
fa2: "genome_2.fa"
fa3: "genome_3.fa"
I expect the output is
genome_1.fa.{amb, ann, pac}
genome_2.fa.{amb, ann, pac}
genome_3.fa.{amb, ann, pac}
If we use following workaround
rule all:
input:
expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac'])
rule index:
input:
#reference_genomeFile
ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome]
output:
expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac'])
log:
'log/rule_index_{reference_genome}.log'
shell:
"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1"
we will get
$ snakemake -s snakemake_test.smk --configfile config.yaml
# for reference_name is fa1
[Fri Dec 2 17:56:29 2022]
rule index:
input: genome_1.fa
output: fa1.amb, fa1.ann, fa1.pac
log: log/rule_index_fa1.log
jobid: 1
wildcards: reference_genome=fa1
...
Thats not my expected output
the output is fa1.amb, fa1.ann, fa1.pac, but I wanted output is genome_1.fa.amb, genome_1.fa.ann, genome_1.fa.pac
| [
"My guess is that you want:\nrule all:\n input:\n expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac'])\n\nrule index: \n input: \n #reference_genomeFile\n ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome]\n output: \n expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac'])\n log: \n 'log/rule_index_{reference_genome}.log'\n shell: \n \"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1\"\n\nsnakemake -p -n -j 1 --configfile config.yaml\n\nThat is: Run rule index for each genome file, i.e., three times here. Each run of index generates the three index files. Note the use of double curly braces {{reference_genome}} to tell expand that this wildcard does not need to be expanded.\n\nExample config.yaml:\nreference_genome:\n fa1: \"genome_1.fa\"\n fa2: \"genome_2.fa\"\n fa3: \"genome_3.fa\"\n\n",
"Building upon dariober's answer and judging from your comments, I think this is what you are looking for?\nconfigfile: \"config.yaml\"\n\n\nrule all:\n input:\n expand(\n \"{reference_genome}.{type}\",\n reference_genome=list(config[\"reference_genome\"].values()),\n type=[\"amb\", \"ann\", \"pac\"],\n ),\n\n\nrule index:\n input:\n #reference_genomeFile\n ref_genome=\"{reference_genome}\",\n output:\n expand(\"{{reference_genome}}.{type}\", type=[\"amb\", \"ann\", \"pac\"]),\n log:\n \"log/rule_index_{reference_genome}.log\",\n shell:\n \"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1\"\n\nwith config.yaml\nreference_genome:\n fa1: \"path/to/genome1\"\n fa2: \"path/to/genome2\"\n fa3: \"path/to/genome3\"\n\nI've modified the rule all to use the filepaths from config.yaml rather than the list ['fa1','fa2','fa3']. I've also removed the lambda wildcard from the input of rule index as it seems unnecessary.\n"
] | [
1,
1
] | [] | [] | [
"bioinformatics",
"pipeline",
"python",
"snakemake",
"wildcard"
] | stackoverflow_0074652332_bioinformatics_pipeline_python_snakemake_wildcard.txt |
Q:
How to parse a string of multiple jsons without separators in python?
Given a single-line string of multiple, arbitrarily nested JSON objects without separators, for example:
contents = r'{"payload":{"device":{"serial":213}}}{"payload":{"device":{"serial":123}}}'
How can contents be parsed into an array of dicts/jsons ? I tried
df = pd.read_json(contents, lines=True)
But only got a ValueError response:
ValueError: Unexpected character found when decoding array value (2)
A:
You can split the string, then parse each JSON string into a dictionary:
import json
contents = r'{"payload":{"device":{"serial":213}}}{"payload":{"device":{"serial":123}}}'
json_strings = contents.replace('}{', '}|{').split('|')
json_dicts = [json.loads(string) for string in json_strings]
Output:
[{'payload': {'device': {'serial': 213}}}, {'payload': {'device': {'serial': 123}}}]
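A more robust alternative (a sketch, not from the original answer) is json.JSONDecoder.raw_decode, which parses one object at a time and therefore does not rely on '}{' never appearing inside a string value:
import json

def parse_concatenated_json(text):
    decoder = json.JSONDecoder()
    pos, objects = 0, []
    while pos < len(text):
        obj, pos = decoder.raw_decode(text, pos)
        objects.append(obj)
        # skip any whitespace that may separate the objects
        while pos < len(text) and text[pos].isspace():
            pos += 1
    return objects

print(parse_concatenated_json(contents))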
| How to parse a string of multiple jsons without separators in python? | Given a single-lined string of multiple, arbitrary nested json-files without separators, like for example:
contents = r'{"payload":{"device":{"serial":213}}}{"payload":{"device":{"serial":123}}}'
How can contents be parsed into an array of dicts/jsons ? I tried
df = pd.read_json(contents, lines=True)
But only got a ValueError response:
ValueError: Unexpected character found when decoding array value (2)
| [
"You can split the string, then parse each JSON string into a dictionary:\nimport json\n\ncontents = r'{\"payload\":{\"device\":{\"serial\":213}}}{\"payload\":{\"device\":{\"serial\":123}}}'\n\njson_strings = contents.replace('}{', '}|{').split('|')\njson_dicts = [json.loads(string) for string in json_strings]\n\nOutput:\n[{'payload': {'device': {'serial': 213}}}, {'payload': {'device': {'serial': 123}}}]\n\n"
] | [
1
] | [] | [] | [
"amazon_web_services",
"arrays",
"json",
"ndjson",
"python"
] | stackoverflow_0074656450_amazon_web_services_arrays_json_ndjson_python.txt |
Q:
Calculate average temperature in reducer
I am trying to write code (reducer.py) that would calculate the average temperature based on NCDC weather data.
0057011060999991928010112004+67500+012067FM-12+001199999V0202001N012319999999N0500001N9+00281+99999102171ADDAY181999GF108991999999999999001001MD1710261+9999MW1801
0062011060999991928010206004+67500+012067FM-12+001199999V0201801N00931220001CN0200001N9+00281+99999100901ADDAA199002091AY121999GF101991999999017501999999MD1810461+9999
0108011060999991928010212004+67500+012067FM-12+001199999V0201601N009319999999N0100001N9+00111+99999100062ADDAY171999GF108991999011012501001001MD1810542+9999MW1681EQDQ01+000042SCOTLCQ02+100063APOSLPQ03+000542APC3
0087011060999991928010306004+67500+012067FM-12+001199999V0202001N022619999999N0100001N9+00501+99999098781ADDAA199001091AY161999GF108991999011004501001001MD1310061+9999MW1601EQDQ01+000042SCOTLC
0057011060999991928010312004+67500+012067FM-12+001199999V0202301N01541004501CN0040001N9+00001+99999098951ADDAY161999GF108991081061004501999999MD1210201+9999MW1601
#!/usr/bin/env python
import sys
(last_key, max_val) = (None, -sys.maxint)
for line in sys.stdin:
(key, val) = line.strip().split("\t")
if last_key and last_key != key:
print "%s\t%s" % (last_key, max_val)
(last_key, max_val) = (key, int(val))
else:
(last_key, max_val) = (key, max(max_val, int(val)))
if last_key:
print "%s\t%s" % (last_key, max_val)
A:
First of all, your shown data has no tabs, so it's not clear why you've shown code that splits lines on tabs and finds the max. Not an average.
To find an average, you'll need to collect all seen values into a list (values.append(int(val))), then you can from statistics import mean and call mean(values) at the end of the loop
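A minimal sketch of that per-key averaging (Python 3; it assumes the mapper emits tab-separated key/temperature pairs, already sorted by key as Hadoop streaming guarantees):
#!/usr/bin/env python3
import sys
from statistics import mean

current_key, values = None, []

for line in sys.stdin:
    key, val = line.strip().split("\t")
    if current_key is not None and key != current_key:
        # key changed: emit the average for the previous key
        print("%s\t%s" % (current_key, mean(values)))
        values = []
    current_key = key
    values.append(int(val))

if current_key is not None:
    print("%s\t%s" % (current_key, mean(values)))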
I'd highly suggest that you use mrjob or pyspark instead
| Calculate average temperature in reducer | I am trying to write a code that would calculate average temperature (reducer.py) based on ncdc weather.
0057011060999991928010112004+67500+012067FM-12+001199999V0202001N012319999999N0500001N9+00281+99999102171ADDAY181999GF108991999999999999001001MD1710261+9999MW1801
0062011060999991928010206004+67500+012067FM-12+001199999V0201801N00931220001CN0200001N9+00281+99999100901ADDAA199002091AY121999GF101991999999017501999999MD1810461+9999
0108011060999991928010212004+67500+012067FM-12+001199999V0201601N009319999999N0100001N9+00111+99999100062ADDAY171999GF108991999011012501001001MD1810542+9999MW1681EQDQ01+000042SCOTLCQ02+100063APOSLPQ03+000542APC3
0087011060999991928010306004+67500+012067FM-12+001199999V0202001N022619999999N0100001N9+00501+99999098781ADDAA199001091AY161999GF108991999011004501001001MD1310061+9999MW1601EQDQ01+000042SCOTLC
0057011060999991928010312004+67500+012067FM-12+001199999V0202301N01541004501CN0040001N9+00001+99999098951ADDAY161999GF108991081061004501999999MD1210201+9999MW1601
#!/usr/bin/env python
import sys
(last_key, max_val) = (None, -sys.maxint)
for line in sys.stdin:
(key, val) = line.strip().split("\t")
if last_key and last_key != key:
print "%s\t%s" % (last_key, max_val)
(last_key, max_val) = (key, int(val))
else:
(last_key, max_val) = (key, max(max_val, int(val)))
if last_key:
print "%s\t%s" % (last_key, max_val)
| [
"First of all, your shown data has no tabs, so it's not clear why you've shown code that splits lines on tabs and finds the max. Not an average.\nTo find an average, you'll need to collect all seen values into a list (values.append(int(val))), then you can from statistics import mean and call mean(values) at the end of the loop\nI'd highly suggest that you use mrjob or pyspark instead\n"
] | [
0
] | [] | [] | [
"hadoop",
"hadoop_streaming",
"mapreduce",
"python"
] | stackoverflow_0074651008_hadoop_hadoop_streaming_mapreduce_python.txt |
Q:
Display a list of values from an array on a grid drawn using pygame
I'm trying to create a small program that will draw a 6 x 6 grid. I have an array of values (36 elements) which I want to display in each box. I'm able to draw the grid using the code below, however I'm not able to figure out how to display the text from the array in each box. Later I want to check where the currently selected box is, determine if it has a particular value, and perform some action based on it.
`
matrix = ["1", "0", "0", "P", "2", "0",
"0", "0", "0", "0", "0", "0",
"0", "0", "0", "0", "0", "0",
"0", "0", "0", "0", "0", "0",
"0", "0", "0", "0", "4", "3"]
I want to show the values from matrix array in the grid
class Grid:
def __init__(self, width, height, rows, cols):
self.width = width
self.height = height
self.rows = rows
self.cols = cols
def draw(self, screen):
# draw the grid
for i in range(self.rows):
for j in range(self.cols):
pygame.draw.rect(screen, (255, 255, 255), (i * self.width, j * self.height, self.width, self.height), 1)
def show_grid():
pygame.init()
screen_width = 500
screen_height = 500
screen = pygame.display.set_mode((screen_width, screen_height))
pygame.display.set_caption("Text Adventure Game")
grid = Grid(50, 50, 6, 6)
player = Player(0, 0, 50, 50, (255, 127, 0))
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_UP:
player.move("up")
if event.key == pygame.K_DOWN:
player.move("down")
if event.key == pygame.K_LEFT:
player.move("left")
if event.key == pygame.K_RIGHT:
player.move("right")
screen.fill((0, 0, 0))
grid.draw(screen)
player.draw(screen)
pygame.display.update()
pygame.quit()
class Player:
global steps
steps = 0
def __init__(self, x, y, width, height, color):
self.x = x
self.y = y
self.width = width
self.height = height
self.color = color
def draw(self, screen):
pygame.draw.rect(screen, self.color, (self.x, self.y, self.width, self.height))
def move(self, direction):
if direction == "up":
if self.y > 0:
self.y -= self.height
steps = steps - 50
print(self.x, self.y)
elif direction == "down":
if self.y < 250:
self.y += self.height
print(self.x, self.y)
steps = steps + 50
elif direction == "left":
if self.x > 0:
self.x -= self.width
print(self.x, self.y)
elif direction == "right":
if self.x < 250:
self.x += self.width
print(self.x, self.y)
`
I want to populate the grid with the values from the matrix array and perform actions based on the values
A:
def print_matrix(matrix):
for i in range(len(matrix)):
if i % 6 == 0 and i != 0:
print('')
print(matrix[i], end = ' ')
print('')
print_matrix(matrix)
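That only prints to the console. To actually draw the values inside the pygame window, a sketch along these lines could be called after grid.draw(screen) in the main loop — it assumes one string per cell in row-major order and the 50-pixel cells from the question:
import pygame

def draw_values(screen, matrix, cell_size=50, cols=6):
    font = pygame.font.SysFont(None, 24)  # default system font, size 24
    for index, value in enumerate(matrix):
        row, col = divmod(index, cols)
        text = font.render(value, True, (255, 255, 255))
        rect = text.get_rect(center=(col * cell_size + cell_size // 2,
                                     row * cell_size + cell_size // 2))
        screen.blit(text, rect)

One way to react to the value under the player would then be to look up matrix[(player.y // 50) * 6 + (player.x // 50)] after each move.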
| Display a list of values from an array on a grid drawn using pygame | I'm trying to create a small program that will draw a 6 x 6 grid. I have an array of values (36 elements) which I want to display in each box. I'm able to draw the grid using the below code, however I'm not able to figure out how to display the text from the array in each box. Later I want to check where the current selected box is and determine if it has a particular value and perform some action based on it.
`
matrix = ["1", "0", "0", "P", "2", "0",
"0", "0", "0", "0", "0", "0",
"0", "0", "0", "0", "0", "0",
"0", "0", "0", "0", "0", "0",
"0", "0", "0", "0", "4", "3"]
I want to show the values from matrix array in the grid
class Grid:
def __init__(self, width, height, rows, cols):
self.width = width
self.height = height
self.rows = rows
self.cols = cols
def draw(self, screen):
# draw the grid
for i in range(self.rows):
for j in range(self.cols):
pygame.draw.rect(screen, (255, 255, 255), (i * self.width, j * self.height, self.width, self.height), 1)
def show_grid():
pygame.init()
screen_width = 500
screen_height = 500
screen = pygame.display.set_mode((screen_width, screen_height))
pygame.display.set_caption("Text Adventure Game")
grid = Grid(50, 50, 6, 6)
player = Player(0, 0, 50, 50, (255, 127, 0))
running = True
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_UP:
player.move("up")
if event.key == pygame.K_DOWN:
player.move("down")
if event.key == pygame.K_LEFT:
player.move("left")
if event.key == pygame.K_RIGHT:
player.move("right")
screen.fill((0, 0, 0))
grid.draw(screen)
player.draw(screen)
pygame.display.update()
pygame.quit()
class Player:
global steps
steps = 0
def __init__(self, x, y, width, height, color):
self.x = x
self.y = y
self.width = width
self.height = height
self.color = color
def draw(self, screen):
pygame.draw.rect(screen, self.color, (self.x, self.y, self.width, self.height))
def move(self, direction):
if direction == "up":
if self.y > 0:
self.y -= self.height
steps = steps - 50
print(self.x, self.y)
elif direction == "down":
if self.y < 250:
self.y += self.height
print(self.x, self.y)
steps = steps + 50
elif direction == "left":
if self.x > 0:
self.x -= self.width
print(self.x, self.y)
elif direction == "right":
if self.x < 250:
self.x += self.width
print(self.x, self.y)
`
I want to populate the grid with the values from the matrix array and perform actions based on the values
| [
"def print_matrix(matrix):\n for i in range(len(matrix)):\n if i % 6 == 0 and i != 0:\n print('')\n print(matrix[i], end = ' ')\n print('')\n\nprint_matrix(matrix)\n\n"
] | [
0
] | [] | [] | [
"pygame",
"python"
] | stackoverflow_0074642010_pygame_python.txt |
Q:
Spherical Graph Layout in Python
Objective
Display a 3D Sphere graph structure based on input edges & nodes using VTK for visualisation. As for example shown in https://epfl-lts2.github.io/gspbox-html/doc/graphs/gsp_sphere.html
Target:
State of work
Input data as given factor
NetworkX for position calculation
Handover to VTK methods for 3D visualisation
Problem
3 years ago, I had achieved the visualisation as shown above. Unfortunately, I did a bit too much cleaning and I just realized that I don't have these methods anymore. It is somehow a force-directed graph on a sphere surface. Maybe similar to the "strong gravity" parameter in the 2D forceatlas. I have not found any 3D implementation of this yet.
I tried again with the following algorithms, but none of them has produced this layout, neither have parameter tuning of these algorithms (or did I miss an important one?):
NetworkX: Spherical, Spring, Shell, Kamada-Kawai, Fruchterman-Reingold (the 2D fruchterman-reingold in Gephi looks like it could come close to the target in a 3D version, yet gephi does not support 3D or did I overlook something?)
ForceAtlas2
Gephi (the 2D fruchterman-reingold looks like a circle, but this is not available in 3D, nor does the 3D Force Atlas produce valid Z-Coordinates (they are within a range of +1e-4 and -1e-4)
Researching for "spherical graph layout" has not brought me to any progress (only to this view which seems very similar https://observablehq.com/@fil/3d-graph-on-sphere ).
How can I achieve this spherical layout using python (or a third party which provides a positioning information)
Update: I made some progress and found the keywords non-euclidean, hyperbolic and spherical force-directed algorithms, however still not achieved anything yet. Or Non-Euclidean Riemann Embeddings (https://www2.cs.arizona.edu/~kobourov/riemann_embedders.pdf)
A:
Have you tried the python lib version of the GSPBOX?
If yes, why it does not work for you?
https://pygsp.readthedocs.io/en/stable/reference/graphs.html
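A rough sketch of what that could look like (treat this as an assumption about the pygsp API rather than a verified snippet — check the docs for the exact constructor arguments and sampling options):
from pygsp import graphs

G = graphs.Sphere()      # graph whose vertices are sampled on a sphere
print(G.coords.shape)    # (n_vertices, 3) coordinates you could hand over to VTK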
A:
To display a 3D Sphere graph structure using VTK, you can use the vtkSphereSource class to generate the sphere geometry, and the vtkGraphLayoutView class to visualize the graph.
Here is an example of how you can create a 3D Sphere graph structure in VTK:
First, import the required modules:
import vtk
Next, create a vtkSphereSource object and set the radius and resolution of the sphere:
sphere = vtk.vtkSphereSource()
sphere.SetRadius(1.0)
sphere.SetThetaResolution(32)
sphere.SetPhiResolution(32)
Then, create a vtkPolyData object and set the points and polygons of the sphere using the output of the vtkSphereSource object:
polydata = vtk.vtkPolyData()
polydata.SetPoints(sphere.GetOutput().GetPoints())
polydata.SetPolys(sphere.GetOutput().GetPolys())
Next, create a vtkPoints object and set the coordinates of the nodes in the graph:
points = vtk.vtkPoints()
points.InsertNextPoint(1.0, 0.0, 0.0)
points.InsertNextPoint(0.0, 1.0, 0.0)
points.InsertNextPoint(0.0, 0.0, 1.0)
Then, create a vtkCellArray object and add the edges of the graph to the vtkCellArray object:
cells = vtk.vtkCellArray()
cells.InsertNextCell(2)
cells.InsertCellPoint(0)
cells.InsertCellPoint(1)
cells.InsertNextCell(2)
cells.InsertCellPoint(0)
cells.InsertCellPoint(2)
Next, create a vtkPolyData object and set the points and edges of the graph using the vtkPoints and vtkCellArray objects:
graph = vtk.vtkPolyData()
graph.SetPoints(points)
graph.SetLines(cells)
Then, create a vtkGraphLayoutView object and add the vtkPolyData objects for the sphere and the graph to the vtkGraphLayoutView object:
view = vtk.vtkGraphLayoutView()
view.AddRepresentationFromInput(polydata)
view.
| Spherical Graph Layout in Python | Objective
Display a 3D Sphere graph structure based on input edges & nodes using VTK for visualisation. As for example shown in https://epfl-lts2.github.io/gspbox-html/doc/graphs/gsp_sphere.html
Target:
State of work
Input data as given factor
NetworkX for position calculation
Handover to VTK methods for 3D visualisation
Problem
3 years ago, I had achieved the visualisation as shown above. Unfortunately, I did a little bit of too much cleaning and I just realized, that I dont have these methods anymore. It is somehow a force-directed graph on a sphere surface. Maybe similar to the "strong gravity" parameter in the 2D forceatlas. I have not found any 3D implementation of this yet.
I tried again with the following algorithms, but none of them has produced this layout, neither have parameter tuning of these algorithms (or did I miss an important one?):
NetworkX: Spherical, Spring, Shell, Kamada Kawaii, Fruchterman-Reingold (the 2D fruchterman-reingold in Gephi looks like it could come close to the target in a 3D version, yet gephi does not support 3D or did I oversee something?)
ForceAtlas2
Gephi (the 2D fruchterman-reingold looks like a circle, but this is not available in 3D, nor does the 3D Force Atlas produce valid Z-Coordinates (they are within a range of +1e-4 and -1e-4)
Researching for "spherical graph layout" has not brought me to any progress (only to this view which seems very similar https://observablehq.com/@fil/3d-graph-on-sphere ).
How can I achieve this spherical layout using python (or a third party which provides a positioning information)
Update: I made some progress and found the keywords non-euclidean, hyperbolic and spherical force-directed algorithms, however still not achieved anything yet. Or Non-Euclidean Riemann Embeddings (https://www2.cs.arizona.edu/~kobourov/riemann_embedders.pdf)
| [
"Have you tried the python lib version of the GSPBOX?\nIf yes, why it does not work for you?\nhttps://pygsp.readthedocs.io/en/stable/reference/graphs.html\n",
"To display a 3D Sphere graph structure using VTK, you can use the vtkSphereSource class to generate the sphere geometry, and the vtkGraphLayoutView class to visualize the graph.\nHere is an example of how you can create a 3D Sphere graph structure in VTK:\nFirst, import the required modules:\nsphere = vtk.vtkSphereSource()\nsphere.SetRadius(1.0)\nsphere.SetThetaResolution(32)\nsphere.SetPhiResolution(32)\n\nNext, create a vtkSphereSource object and set the radius and resolution of the sphere:\nsphere = vtk.vtkSphereSource()\nsphere.SetRadius(1.0)\nsphere.SetThetaResolution(32)\nsphere.SetPhiResolution(32)\n\nThen, create a vtkPolyData object and set the points and polygons of the sphere using the output of the vtkSphereSource object:\n polydata = vtk.vtkPolyData()\npolydata.SetPoints(sphere.GetOutput().GetPoints())\npolydata.SetPolys(sphere.GetOutput().GetPolys())\n\nNext, create a vtkPoints object and set the coordinates of the nodes in the graph:\n\npoints = vtk.vtkPoints()\npoints.InsertNextPoint(1.0, 0.0, 0.0)\npoints.InsertNextPoint(0.0, 1.0, 0.0)\npoints.InsertNextPoint(0.0, 0.0, 1.0)\n\nThen, create a vtkCellArray object and add the edges of the graph to the vtkCellArray object:\ncells = vtk.vtkCellArray()\ncells.InsertNextCell(2)\ncells.InsertCellPoint(0)\ncells.InsertCellPoint(1)\ncells.InsertNextCell(2)\ncells.InsertCellPoint(0)\ncells.InsertCellPoint(2)\n\nNext, create a vtkPolyData object and set the points and edges of the graph using the vtkPoints and vtkCellArray objects:\ngraph = vtk.vtkPolyData()\n\ngraph.SetPoints(points)\ngraph.SetLines(cells)\nThen, create a vtkGraphLayoutView object and add the vtkPolyData objects for the sphere and the graph to the vtkGraphLayoutView object:\n view = vtk.vtkGraphLayoutView()\nview.AddRepresentationFromInput(polydata)\nview.\n\n"
] | [
0,
0
] | [] | [] | [
"3d",
"graph",
"python"
] | stackoverflow_0074164603_3d_graph_python.txt |
Q:
How to set this code to open at specific times of the day?
So basically, how do I set this to run at a specific time of the day?
import winsound
from win10toast import ToastNotifier
def timer (reminder,seconds):
notificator=ToastNotifier()
notificator=ToastNotifier("Reminder",f"""Alarm will go off in (seconds) Seconds.""",duration=20
notificator.show_toast(f"Reminder",reminder,duration=20)
#alarm
frequency=2500
duration=1000
winsound.Beep(frequency,duration)
if __name__=="__main__":
words=input("What shall i be reminded of: ")
sec=int(input("Enter seconds: "))
timer(words,sec)
Could this be it? I tried to write it, but it doesn't seem to work:
import time
local_time = float(input())
local_time = local_time * 60
time.sleep(local_time)
A:
Two possibilities from the top of my head:
[Linux] Use cron job https://help.ubuntu.com/community/CronHowto
[Any OS] Use scheduler https://schedule.readthedocs.io/en/stable/
[Windows] Scheduling a .py file on Task Scheduler in Windows 10
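For example, a minimal sketch with the schedule library (option 2), reusing the timer() function from the question — the time string and reminder text are placeholders:
import schedule
import time

def reminder_job():
    timer("Drink water", 5)  # calls the timer() defined in the question

schedule.every().day.at("09:30").do(reminder_job)  # run every day at 09:30

while True:
    schedule.run_pending()
    time.sleep(1)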
| How to set this code to open at specific times of the day? | SO basically how do i do to set this to run at a specific time of the day ?
import winsound
from win10toast import ToastNotifier
def timer (reminder,seconds):
notificator=ToastNotifier()
notificator=ToastNotifier("Reminder",f"""Alarm will go off in (seconds) Seconds.""",duration=20
notificator.show_toast(f"Reminder",reminder,duration=20)
#alarm
frequency=2500
duration=1000
winsound.Beep(frequency,duration)
if __name__=="__main__":
words=input("What shall i be reminded of: ")
sec=int(input("Enter seconds: "))
timer(words,sec)
Could this be ? as i tried to write it but doesn t seem to work
import time
local_time = float(input())
local_time = local_time * 60
time.sleep(local_time)
| [
"Two possibilities from the top of my head:\n\n[Linux] Use cron job https://help.ubuntu.com/community/CronHowto\n[Any OS] Use scheduler https://schedule.readthedocs.io/en/stable/\n[Windows] Scheduling a .py file on Task Scheduler in Windows 10\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074656493_python.txt |
Q:
Reverse complement from a file
The task is: Write a script (call it what you want) that that can analyze a fastafile (MySequences.fasta) by finding the reverse complement of the sequences. Using python.
from itertools import repeat
#opening file
filename = "MySequences.fasta"
file = open(filename, 'r')
#reading the file
for line in file:
line = line.strip()
if ">" in line:
header = line
elif (len(line) == 0):
continue
else:
seq = line
#reverse complement
def reverse_complement(seq):
compline = ''
for n in seq:
if n == 'A':
compline += 'T'
elif n == 'T':
compline += 'A'
elif n == 'C':
compline += 'G'
elif n == 'G':
compline += 'C'
return((compline)[::-1])
#run each line
for line in file:
rc = reverse_complement(seq)
print(rc)
A:
You run your function in the wrong place.
To run your function for each iterator, run the function there.
#reading the file
for line in file:
line = line.strip()
if ">" in line:
header = line
elif (len(line) == 0):
continue
else:
seq = line
#run function for each line, each time.
rc = reverse_complement(seq)
print(rc)
In your previous code, the loop over the file completes, but each line is never passed to the function as the loop runs. After the loop finishes, only the last sequence line is still assigned to seq, so only that line is passed to the function at the end. This is why your code prints only one line.
The solution.
from itertools import repeat
#reverse complement
def reverse_complement(seq):
compline = ''
for n in seq:
if n == 'A':
compline += 'T'
elif n == 'T':
compline += 'A'
elif n == 'C':
compline += 'G'
elif n == 'G':
compline += 'C'
return((compline)[::-1])
#opening file
filename = "MySequences.fasta"
file = open(filename, 'r')
#reading the file
for line in file:
line = line.strip()
if ">" in line:
header = line
elif (len(line) == 0):
continue
else:
seq = line
#run each line
rc = reverse_complement(seq)
print(rc)
Also, this is your other mistake.
You pass seq as input instead of line.
But even if you fix this, the code won't work, for the same reason explained above.
for line in file:
rc = reverse_complement(seq)
print(rc)
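As a side note, the character-by-character loop can be written more compactly with str.maketrans; this sketch behaves the same for A/C/G/T sequences:
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    return seq.translate(COMPLEMENT)[::-1]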
| Reverse complement from a file | The task is: Write a script (call it what you want) that that can analyze a fastafile (MySequences.fasta) by finding the reverse complement of the sequences. Using python.
from itertools import repeat
#opening file
filename = "MySequences.fasta"
file = open(filename, 'r')
#reading the file
for line in file:
line = line.strip()
if ">" in line:
header = line
elif (len(line) == 0):
continue
else:
seq = line
#reverse complement
def reverse_complement(seq):
compline = ''
for n in seq:
if n == 'A':
compline += 'T'
elif n == 'T':
compline += 'A'
elif n == 'C':
compline += 'G'
elif n == 'G':
compline += 'C'
return((compline)[::-1])
#run each line
for line in file:
rc = reverse_complement(seq)
print(rc)
| [
"You run your function in the wrong place.\nTo run your function for each iterator, run the function there.\n#reading the file\n\nfor line in file:\n line = line.strip()\n if \">\" in line:\n header = line\n elif (len(line) == 0):\n continue\n else:\n seq = line\n #run function for each line, each time.\n rc = reverse_complement(seq)\n print(rc)\n\nIn your previous code, all iteration is successful. But you didn't put the line to the function to run each time. In your previous code, after all, iterations, only the last line is assigned. Therefore you put the last line to the function at the end. This is why your code prints only one line.\nThe solution.\nfrom itertools import repeat\n\n#reverse complement\n\ndef reverse_complement(seq):\n compline = ''\n for n in seq:\n if n == 'A':\n compline += 'T'\n elif n == 'T':\n compline += 'A'\n elif n == 'C':\n compline += 'G'\n elif n == 'G':\n compline += 'C'\n return((compline)[::-1])\n\n\n#opening file\n\nfilename = \"MySequences.fasta\"\nfile = open(filename, 'r')\n\n\n#reading the file\n\nfor line in file:\n line = line.strip()\n if \">\" in line:\n header = line\n elif (len(line) == 0):\n continue\n else:\n seq = line\n #run each line\n rc = reverse_complement(seq)\n print(rc) \n\nAlso, this is your other mistake.\nYou put seq as input instead line.\nBut even if you fix this, this code won't work for the same reason I told you before.\nfor line in file:\n rc = reverse_complement(seq)\n print(rc) \n\n"
] | [
0
] | [] | [] | [
"bioinformatics",
"python"
] | stackoverflow_0074656373_bioinformatics_python.txt |
Q:
Using a square matrix with Networkx but keep getting Adjacency matrix not square
So I'm using Networkx to plot a cooc matrix. It works well with small samples but I keep getting this error when I run it with a big cooc matrix (reason why I can't share a minimal reproducible example):
Traceback (most recent call last):
File "", line 113, in <module>
G = nx.from_pandas_adjacency(matrix)
File "", line 205, in from_pandas_adjacency
G = from_numpy_array(A, create_using=create_using)
File "", line 1357, in from_numpy_array
raise nx.NetworkXError(f"Adjacency matrix not square: nx,ny={A.shape}")
networkx.exception.NetworkXError: Adjacency matrix not square: nx,ny=(74, 76)
This is my code :
G = nx.from_pandas_adjacency(matrix)
# visualize it with pyvis
N = Network(height='100%', width='100%', bgcolor='#222222', font_color='white')
N.barnes_hut()
for n in G.nodes:
N.add_node(n)
for e in G.edges:
N.add_edge((e[0]), (e[1]))
And this is the output of my matrix:
Ali Sarah Josh Maura Mort ... Jasmine Lily Adam Ute
Ali 0 3 2 2 ... 0 0 1 0
Sarah 3 0 3 3 ... 0 0 1 0
Josh 2 3 0 4 ... 0 0 1 0
Maura Mort 2 3 4 0 ... 0 0 1 0
Shelly 0 0 0 0 ... 0 0 0 0
... ... ... ... ... ... ... ... ... ...
Nicol 0 0 0 0 ... 0 0 0 0
Jasmine 0 0 0 0 ... 0 0 0 0
Lily 0 0 0 0 ... 0 0 0 0
Adam 1 1 1 1 ... 0 0 0 0
Ute 0 0 0 0 ... 0 0 0 0
[74 rows x 74 columns]
Weirdly, it looks like my matrix is a square (74 x 74).
Any idea what might be the problem ?
A:
So I was able to fix my problem by first converting my matrix into a stack.
cooc_matrix = matrix(matrixLabel, texts)
matrix = pd.DataFrame(cooc_matrix.todense(), index=matrixLabel, columns=matrixLabel)
print(matrix)
#This fixed my problem
stw = matrix.stack()
stw = stw[stw >= 1].rename_axis(('source', 'target')).reset_index(name='weight')
print(stw)
G = nx.from_pandas_edgelist(stw, edge_attr=True)
A:
I got the same problem. I am using a square pandas data frame (as indicated by df.shape) but I am getting the error Adjacency matrix not square. Using stack() and from_pandas_edgelist() solved the problem for me.
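If you want to find out where the extra columns come from before switching to stack(), a quick sanity check (a sketch, not part of the original answers) right before calling nx.from_pandas_adjacency can help:
print(matrix.shape)                                 # should be (n, n)
print(matrix.index.equals(matrix.columns))          # should be True
print(set(matrix.columns) - set(matrix.index))      # labels that only exist as columns
print(matrix.columns[matrix.columns.duplicated()])  # duplicated column labels, if any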
| Using a square matrix with Networkx but keep getting Adjacency matrix not square | So I'm using Networkx to plot a cooc matrix. It works well with small samples but I keep getting this error when I run it with a big cooc matrix (reason why I can't share a minimum reproductible example):
Traceback (most recent call last):
File "", line 113, in <module>
G = nx.from_pandas_adjacency(matrix)
File "", line 205, in from_pandas_adjacency
G = from_numpy_array(A, create_using=create_using)
File "", line 1357, in from_numpy_array
raise nx.NetworkXError(f"Adjacency matrix not square: nx,ny={A.shape}")
networkx.exception.NetworkXError: Adjacency matrix not square: nx,ny=(74, 76)
This is my code :
G = nx.from_pandas_adjacency(matrix)
# visualize it with pyvis
N = Network(height='100%', width='100%', bgcolor='#222222', font_color='white')
N.barnes_hut()
for n in G.nodes:
N.add_node(n)
for e in G.edges:
N.add_edge((e[0]), (e[1]))
And this is the ouput of my matrix :
Ali Sarah Josh Maura Mort ... Jasmine Lily Adam Ute
Ali 0 3 2 2 ... 0 0 1 0
Sarah 3 0 3 3 ... 0 0 1 0
Josh 2 3 0 4 ... 0 0 1 0
Maura Mort 2 3 4 0 ... 0 0 1 0
Shelly 0 0 0 0 ... 0 0 0 0
... ... ... ... ... ... ... ... ... ...
Nicol 0 0 0 0 ... 0 0 0 0
Jasmine 0 0 0 0 ... 0 0 0 0
Lily 0 0 0 0 ... 0 0 0 0
Adam 1 1 1 1 ... 0 0 0 0
Ute 0 0 0 0 ... 0 0 0 0
[74 rows x 74 columns]
Weirdly, it looks like my matrix is a square (74 x 74).
Any idea what might be the problem ?
| [
"So I was able to fix my problem by first converting my matrix into a stack.\ncooc_matrix = matrix(matrixLabel, texts)\nmatrix = pd.DataFrame(cooc_matrix.todense(), index=matrixLabel, columns=matrixLabel)\nprint(matrix)\n\n#This fixed my problem\nstw = matrix.stack()\nstw = stw[stw >= 1].rename_axis(('source', 'target')).reset_index(name='weight')\nprint(stw)\n\nG = nx.from_pandas_edgelist(stw, edge_attr=True)\n\n",
"I got the same problem. I am using a square pandas data frame (as indicated by df.shape) but I am getting the error Adjacency matrix not square. Using stack() and from_pandas_edgelist() solved the problem for me.\n"
] | [
2,
0
] | [] | [] | [
"matrix",
"networkx",
"python"
] | stackoverflow_0069349516_matrix_networkx_python.txt |
Q:
What is the best way to verify an email address if it actually exist?
Is there any way in Python to verify whether an email address actually exists?
Or is there any platform offering such services?
For Example:
I have some emails
[email protected]
[email protected]
[email protected]
[email protected]
How can I be 100% sure which of the following emails really exist on the internet?
A:
There are several tutorials on the internet. Here's what I found...
First you need to check for the correct formatting and for this you can use regular expressions like this:
import re
addressToVerify ='[email protected]'
match = re.match('^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,4})$', addressToVerify)
if match == None:
print('Bad Syntax')
raise ValueError('Bad Syntax')
DNS
Next we need to get the MX record for the target domain, in order to start the email verification process. Note that you are allowed in the RFCs to have a mail server on your A record, but that's outside of the scope of this article and demo script.
import dns.resolver
records = dns.resolver.query('scottbrady91.com', 'MX')
mxRecord = records[0].exchange
mxRecord = str(mxRecord)
Python DNS Python doesn't have any inbuilt DNS components, so we've
pulled in the popular dnspython library. Any library that can resolve
an MX record from a domain name will work though.
Mailbox
Now that we have all the preflight information we need, we can now find out if the email address exists.
import socket
import smtplib
# Get local server hostname
host = socket.gethostname()
# SMTP lib setup (use debug level for full output)
server = smtplib.SMTP()
server.set_debuglevel(0)
# SMTP Conversation
server.connect(mxRecord)
server.helo(host)
server.mail('[email protected]')
code, message = server.rcpt(str(addressToVerify))
server.quit()
# Assume 250 as Success
if code == 250:
print('Success')
else:
print('Bad')
What we are doing here is the first three commands of an SMTP conversation for sending an email, stopping just before we send any data.
The actual SMTP commands issued are: HELO, MAIL FROM and RCPT TO. It is the response to RCPT TO that we are interested in. If the server sends back a 250, then that means we are good to send an email (the email address exists), otherwise the server will return a different status code (usually a 550), meaning the email address does not exist on that server.
And that's email verification!
source: https://www.scottbrady91.com/Email-Verification/Python-Email-Verification-Script
Here are some other alternatives from another website...
Let’s make it more sophisticated and assume we want the following criteria to be met for an account@domain email address:
Both consist of case-insensitive alphanumeric characters. Dashes,
periods, hyphens or underscores are also allowed
Both can only start and end with alphanumeric characters
Both contain no white spaces account has at least one character and
domain has at least two domain includes at least one period
And of course, there’s an ‘@’ symbol between them
 ^[a-z]([\w-]*[a-z]|[\w-.]*[a-z]{2,}|[a-z])*@[a-z]([\w-]*[a-z]|[\w-.]*[a-z]{2,}|[a-z]){4,}?\.[a-z]{2,}$
Validating emails with Python libraries
Another way of running sophisticated checks is with ready-to-use packages, and there are plenty to choose from. Here are several popular options:
email-validator 1.0.5
This library focuses strictly on the domain part of an email address, and checks if it’s in an [email protected] format.
As a bonus, the library also comes with a validation tool. It checks if the domain name can be resolved, thus giving you a good idea about its validity.
pylsEmail 1.3.2
This Python program to validate email addresses can also be used to validate both a domain and an email address with just one call. If a given address can’t be validated, the script will inform you of the most likely reasons.
py3-validate-email
This comprehensive library checks an email address for a proper structure, but it doesn’t just stop there.
It also verifies if the domain’s MX records exist (as in, is able to send/receive emails), whether a particular address on this domain exists, and also if it’s not blacklisted. This way, you avoid emailing blacklisted accounts, which would affect deliverability.
source: https://mailtrap.io/blog/python-validate-email/
These are just extracts from these websites, make sure to visit them and confirm this can be useful for your specific case, they also include much more methods and explain this in more detail, hope it helps.
A:
You can send a verification email with a code to the entered address to verify its existence. This can be done using any Python framework: Django, Flask, etc.
A:
You can use gmass.co: connect to your Google account, go to Settings, then API keys, create an API key and use it this way:
sent = requests.get('https://verify.gmass.co/verify?email='+email+'&key='+api,
headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36'}
)
if ("valid" in sent.content):
print(email+"valid")
else:
print(email+"invalid")
A:
There are many services that can validate your emails in bulk. If you are doing this for a marketing campaign, to avoid bounces and getting your campaign marked as spam, such services can check your email lists for you.
There are also online platforms for single addresses, but they won't let you check multiple emails at once.
Validating email addresses is easy using valid email checkers.
You can use smtplib, a Python library for sending emails, together with a Python regex. But a regex will only check whether the string looks like an email address, nothing more.
import re
regex = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'
def check(email):
if(re.fullmatch(regex, email)):
print("Valid Email")
else:
print("Invalid Email")
if __name__ == '__main__':
email = input("Enter Email:")
check(email)
This is not the actual solution to your question, but an example of matching email with regex.
Check out the services that offer emails validation in bulk.
I'd recommend EmailChecks
A:
The best method would be to send a verification mail with some code they need to submit. Not only is this simple, it also proves the email belongs to them.
A:
I believe there are three ways to approach this.
1. REGEX
Other thread participants have already suggested this. The problem with this approach is that it simply checks a string for the correct syntax and does not ensure that the email address exists.
2. Bulk Software
There are plenty of GUI tools that allow you to import lists that are then being validated. However, this usually does not fit the software development case.
3. Email Validation API
The previously mentioned GUI tools often offer an API that allows you to conduct real-time validation of your user's email addresses.
A few vendors that provide this:
emailvalidation.io
emailable.com
datavalidation.com
| What is the best way to verify an email address if it actually exist? | Is there any way to verify an email whether the email actually exist or no in python?
Or is thee any platform offering such services?
For Example:
I have some emails
[email protected]
[email protected]
[email protected]
[email protected]
How can I be 100% sure which of the following email really exist on the internet?
| [
"There are several tutorials on the internet. Here's what I found...\n\n\nFirst you need to check for the correct formatting and for this you can use regular expressions like this:\n import re\n \n addressToVerify ='[email protected]'\n match = re.match('^[_a-z0-9-]+(\\.[_a-z0-9-]+)*@[a-z0-9-]+(\\.[a-z0-9-]+)*(\\.[a-z]{2,4})$', addressToVerify)\n \n if match == None:\n print('Bad Syntax')\n raise ValueError('Bad Syntax')\n\nDNS\nNext we need to get the MX record for the target domain, in order to start the email verification process. Note that you are allowed in the RFCs to have a mail server on your A record, but that's outside of the scope of this article and demo script.\n import dns.resolver\n \n records = dns.resolver.query('scottbrady91.com', 'MX')\n mxRecord = records[0].exchange\n mxRecord = str(mxRecord)\n\nPython DNS Python doesn't have any inbuilt DNS components, so we've\npulled in the popular dnspython library. Any library that can resolve\nan MX record from a domain name will work though.\nMailbox\nNow that we have all the preflight information we need, we can now find out if the email address exists.\n import socket\n import smtplib\n \n # Get local server hostname\n host = socket.gethostname()\n \n # SMTP lib setup (use debug level for full output)\n server = smtplib.SMTP()\n server.set_debuglevel(0)\n \n # SMTP Conversation\n server.connect(mxRecord)\n server.helo(host)\n server.mail('[email protected]')\n code, message = server.rcpt(str(addressToVerify))\n server.quit()\n \n # Assume 250 as Success\n if code == 250:\n print('Success')\n else:\n print('Bad')\n\nWhat we are doing here is the first three commands of an SMTP conversation for sending an email, stopping just before we send any data.\nThe actual SMTP commands issued are: HELO, MAIL FROM and RCPT TO. It is the response to RCPT TO that we are interested in. If the server sends back a 250, then that means we are good to send an email (the email address exists), otherwise the server will return a different status code (usually a 550), meaning the email address does not exist on that server.\nAnd that's email verification!\n\nsource: https://www.scottbrady91.com/Email-Verification/Python-Email-Verification-Script\n\nHere are some other alternatives from another website...\n\nLet’s make it more sophisticated and assume we want the following criteria to be met for an account@domain email address:\n\nBoth consist of case-insensitive alphanumeric characters. Dashes,\nperiods, hyphens or underscores are also allowed\nBoth can only start and end with alphanumeric characters\nBoth contain no white spaces account has at least one character and\ndomain has at least two domain includes at least one period\nAnd of course, there’s an ‘@’ symbol between them\n\n ^[a-z]([w-]*[a-z]|[w-.]*[a-z]{2,}|[a-z])*@[a-z]([w-]*[a-z]|[w-.]*[a-z]{2,}|[a-z]){4,}?.[a-z]{2,}$\n\nValidating emails with Python libraries\nAnother way of running sophisticated checks is with ready-to-use packages, and there are plenty to choose from. Here are several popular options:\nemail-validator 1.0.5\nThis library focuses strictly on the domain part of an email address, and checks if it’s in an [email protected] format.\nAs a bonus, the library also comes with a validation tool. It checks if the domain name can be resolved, thus giving you a good idea about its validity.\npylsEmail 1.3.2\nThis Python program to validate email addresses can also be used to validate both a domain and an email address with just one call. 
If a given address can’t be validated, the script will inform you of the most likely reasons.\npy3-validate-email\nThis comprehensive library checks an email address for a proper structure, but it doesn’t just stop there.\nIt also verifies if the domain’s MX records exist (as in, is able to send/receive emails), whether a particular address on this domain exists, and also if it’s not blacklisted. This way, you avoid emailing blacklisted accounts, which would affect deliverability.\n\nsource: https://mailtrap.io/blog/python-validate-email/\nThese are just extracts from these websites, make sure to visit them and confirm this can be useful for your specific case, they also include much more methods and explain this in more detail, hope it helps.\n",
"You can send a verification email with a code to the entered email to verify its presence. This can be done using any python framework. Django, flask, etc.\n",
"You can use gmass.co, connect to your google account , go to settings , apikeys , then create an api key and use it this way :\nsent = requests.get('https://verify.gmass.co/verify?email='+email+'&key='+api,\n headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36'}\n )\nif (\"valid\" in sent.content):\n print(email+\"valid\")\nelse: \n print(email+\"invalid\")\n\n",
"There are many services offering such tools that can validate your emails in bulk. If you are doing this for a marketing campaign to avoid bounce rate and getting your campaign marked as spammed then there are services that can help you checking bulk emails for your marketing campaigns.\nHowever, there is some online platform too but you can't check multiple emails.\n\nValidating email addresses is easy using valid email checkers.\nYou can use SMTPlib, a python library used for sending emails and using python regex too. But It will only check for the email if it is an email or something else.\n\nimport re\n\nregex = r'\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b'\ndef check(email):\n\n if(re.fullmatch(regex, email)):\n print(\"Valid Email\")\n \n else:\n print(\"Invalid Email\")\n\nif __name__ == '__main__':\n email = input(\"Enter Email:\")\n check(email)\n\nThis is not the actual solution to your question, but an example of matching email with regex.\nCheck out the services that offer emails validation in bulk.\nI'd recommend EmailChecks\n",
"The best method would be to send a verification mail, with some code they need to submit. Not only this is simple, it also proves the email belongs to them\n",
"I believe there are three ways to approach this.\n1. REGEX\nOther thread participants have already suggested this. The problem with this approach is that it simply checks a string for the correct syntax and does not ensure that the email address exists.\n2. Bulk Software\nThere are plenty of GUI tools that allow you to import lists that are then being validated. However, this usually does not fit the software development case.\n3. Email Validation API\nThe previously mentioned GUI tools often offer an API that allows you to conduct real-time validation of your user's email addresses.\nA few vendors that provide this:\n\nemailvalidation.io\nemailable.com\ndatavalidation.com\n\n"
] | [
2,
1,
1,
1,
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0069412522_python.txt |
Q:
Flask - show all data form Mongodb in html template
I am using MongoDB as a database. I want to show all my data in the HTML template
python code:
from flask import Flask, render_template, request, url_for
from flask_pymongo import PyMongo
import os
app = Flask(__name__)
app.config['MONGO_DBNAME'] = 'flask_assignment'
app.config['MONGO_URI'] = 'mongodb://username:[email protected]:31698/db_name'
mongo = PyMongo(app)
@app.route('/index')
def index():
emp_list = mongo.db.employee_entry.find()
return render_template('index.html', emp_list = emp_list)
app.run(debug=True)
my HTML code:
{% for emp in emp_list %}
<tr>
<td>{{ emp['name'] }}</td>
<td>{{ emp['password'] }}</td>
<td>{{ emp['email'] }}</td>
</tr>
{% endfor %}
When I run the server it shows me nothing but a blank page...
A:
Maybe the issue is that emp_list is very large and it takes a long time to insert it into the template, so the page won't be shown.
You can limit the data to for example 10 documents, using:
emp_list = mongo.db.employee_entry.find().limit(10)
and see if it solves the problem.
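A sketch of the route with the limit applied; converting the cursor to a list also makes it easy to confirm in the console that documents are actually returned:
@app.route('/index')
def index():
    emp_list = list(mongo.db.employee_entry.find().limit(10))
    print(len(emp_list))  # check the server console for the number of documents
    return render_template('index.html', emp_list=emp_list)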
A:
OK, I'm sorry for that msg earlier
try this:
{% for key,value in emp_list %}
<tr>
<th scope="row">{{loop.index}}</th>
<td>{{value}}</td>
<td>{{value}}</td>
<td>{{value}}</td>
</tr>
{% endfor %}
where loop.index is used as an incrementing counter (you can look up its use in the Jinja2 docs).
I am 100% sure that just pasting it without editing it will not work.
But what I am actually trying to say is: try using the key/value form of for loops in Jinja2,
and edit this so that it fits your code.
| Flask - show all data form Mongodb in html template | I am using MongoDB as a database. I want to show all my data in the HTML template
python code:
from flask import Flask, render_template, request, url_for
from flask_pymongo import PyMongo
import os
app = Flask(__name__)
app.config['MONGO_DBNAME'] = 'flask_assignment'
app.config['MONGO_URI'] = 'mongodb://username:[email protected]:31698/db_name'
mongo = PyMongo(app)
@app.route('/index')
def index():
emp_list = mongo.db.employee_entry.find()
return render_template('index.html', emp_list = emp_list)
app.run(debug=True)
my HTML code:
{% for emp in emp_list %}
<tr>
<td>{{ emp['name'] }}</td>
<td>{{ emp['password'] }}</td>
<td>{{ emp['email'] }}</td>
</tr>
{% endfor %}
when I ran the server it shows me nothing blank page...
| [
"Maybe the issue is that the emp_list is very large, and it takes a long time to insert it in the template, see the page won't be shown. \nYou can limit the data to for example 10 documents, using:\nemp_list = mongo.db.employee_entry.find().limit(10)\n\nand see if it solves the problem.\n",
"OK, I'm sorry for that msg earlier\ntry this:\n{% for key,value in emp_list %}\n<tr>\n <th scope=\"row\">{{loop.index}}</th>\n <td>{{value}}</td>\n <td>{{value}}</td>\n <td>{{value}}</td>\n</tr>\n{% endfor %}\n\nwhere loop.index is used for incrementing you can search it's use\nI am 100% sure that just pasting it without editing it will not work.\nBut what i am actually trying to say is try using key,value function of for loops in python(jinja2)\ntry edit this so that it can fit in your code\n"
] | [
0,
0
] | [] | [] | [
"flask",
"mongodb",
"python"
] | stackoverflow_0048941101_flask_mongodb_python.txt |
Q:
How to make Django render URL dispatcher from HTML in Pandas column, instead of forwarding raw HTML?
I want to render a pandas dataframe in HTML, in which one column has URL-dispatched links to other pages. If I try to render this HTML, it keeps the raw HTML instead of converting the URLs:
utils.py
import pandas as pd
df = pd.DataFrame(["2022-007", "2022-008", "2022-111", "2022-222", "2022-555", "2022-151"], columns=["column_of_interest"])
df["column_of_interest"] = df['column_of_interest'].apply(lambda x: '''<a href="{{% url 'columndetails' {0} %}}">{0}</a>'''.format(x)
df_html = generate_html(df)
context={"df" : df_html}
def generate_html(dataframe: pd.DataFrame):
# get the table HTML from the dataframe
table_html = dataframe.to_html(table_id="table", escape=False)
# construct the complete HTML with jQuery Data tables
# You can disable paging or enable y scrolling on lines 20 and 21 respectively
html = f"""
{table_html}
<script src="https://code.jquery.com/jquery-3.6.0.slim.min.js" integrity="sha256-u7e5khyithlIdTpu22PHhENmPcRdFiHRjhAuHcs05RI=" crossorigin="anonymous"></script>
<script type="text/javascript" src="https://cdn.datatables.net/1.11.5/js/jquery.dataTables.min.js"></script>
<script>
$(document).ready( function () {{
$('#table').DataTable({{
// paging: false,
// scrollY: 400,
}});
}});
</script>
"""
# return the html
return html
views.py
def column(request):
context = get_context(request)
return render(request, "database/column.html", context)
def columndetails(request, column_of_interest):
return render(request, "/columndetails.html")
urls.py
urlpatterns = [
path('columndetails/<str:column_of_interest>/', views.labrequest_details, name="columndetails")]
toprocess.html
{% extends "database/layout.html" %}
{% load static %}
{% block body %}
<link href="https://cdn.datatables.net/1.11.5/css/jquery.dataTables.min.css" rel="stylesheet">
<br />
<div style="float: left;" class="container" id="labrequestoverview">
{{ df|safe }}
</div>
Everything looks normal and the HTML is rendered almost as it should be; however, the template tag is not being rendered by Django:
Request URL: http://127.0.0.1:8000/%7B%25%20url%20'columndetails'%202022-007%25%7D
The current path, {% url 'columndetails' 2022-007%}, didn’t match any of these.
Is it possible to have Django render this HTML as intended and not just forward it as raw HTML?
A:
In your view, you cannot use {% url '' %}.
To resolve a URL dynamically in your utils.py, use build_absolute_uri instead. You can also combine this with reverse() like so (note: you will have to pass your request object):
request.build_absolute_uri(reverse('columndetails', args=('2022-007', )))
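Applied to the dataframe column built in utils.py, that could look roughly like this (a sketch; it assumes the request object is passed down to the function that builds the column):
from django.urls import reverse

def add_links(df, request):
    df["column_of_interest"] = df["column_of_interest"].apply(
        lambda x: '<a href="{}">{}</a>'.format(
            request.build_absolute_uri(reverse("columndetails", args=(x,))), x))
    return df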
| How to make Django render URL dispatcher from HTML in Pandas column, instead of forwarding raw HTML? | I want to render a pandas dataframe in HTML, in which 1 column has URL dispatched links to other pages. If I try to render this HTML, it just keeps raw HTML, instead of converting the URLS:
utils.py
import pandas as pd
df = pd.DataFrame(["2022-007", "2022-008", "2022-111", "2022-222", "2022-555", "2022-151"], columns=["column_of_interest"])
df["column_of_interest"] = df['column_of_interest'].apply(lambda x: '''<a href="{{% url 'columndetails' {0} %}}">{0}</a>'''.format(x)
df_html = generate_html(df)
context={"df" : df_html}
def generate_html(dataframe: pd.DataFrame):
# get the table HTML from the dataframe
table_html = dataframe.to_html(table_id="table", escape=False)
# construct the complete HTML with jQuery Data tables
# You can disable paging or enable y scrolling on lines 20 and 21 respectively
html = f"""
{table_html}
<script src="https://code.jquery.com/jquery-3.6.0.slim.min.js" integrity="sha256-u7e5khyithlIdTpu22PHhENmPcRdFiHRjhAuHcs05RI=" crossorigin="anonymous"></script>
<script type="text/javascript" src="https://cdn.datatables.net/1.11.5/js/jquery.dataTables.min.js"></script>
<script>
$(document).ready( function () {{
$('#table').DataTable({{
// paging: false,
// scrollY: 400,
}});
}});
</script>
"""
# return the html
return html
views.py
def column(request):
context = get_context(request)
return render(request, "database/column.html", context)
def columndetails(request, column_of_interest):
return render(request, "/columndetails.html")
urls.py
urlpatterns = [
path('columndetails/<str:column_of_interest>/', views.labrequest_details, name="columndetails")]
toprocess.html
{% extends "database/layout.html" %}
{% load static %}
{% block body %}
<link href="https://cdn.datatables.net/1.11.5/css/jquery.dataTables.min.css" rel="stylesheet">
<br />
<div style="float: left;" class="container" id="labrequestoverview">
{{ df|safe }}
</div>
Everything shows normal, and the HTML is rendered almost as should, however the HTML is not being rendered by Django:
Request URL: http://127.0.0.1:8000/%7B%25%20url%20'columndetails'%202022-007%25%7D
The current path, {% url 'columndetails' 2022-007%}, didn’t match any of these.
Is it possible to have Django render this HTML as it intended and not just forward it as raw HTML?
| [
"In your view, you cannot use {% url '' %}.\nTo resolve a URL dynamically in your utils.py, use build_absolute_uri instead. You can also combine this with reverse() like so (note: you will have to pass your request object):\nrequest.build_absolute_uri(reverse('columndetails', args=('2022-007', )))\n\n"
] | [
1
] | [] | [] | [
"django",
"django_urls",
"html",
"python",
"url"
] | stackoverflow_0074656451_django_django_urls_html_python_url.txt |
Q:
Saving changes to a dataframe after editing in a GUI
I wrote code that extracts data from a CSV file and displays it in a GUI (when data is already present).
Now I need to find a way so that if I change or edit data in the GUI, the value is replaced in the CSV file as well.
This part here is for extracting the data from the file (which works great):
`
def updatetext(self):
"""adds information extracted from database already provided"""
df_subj = Content.extract_saved_data(self.date)
self.lineEditFirstDiagnosed.setText(str(df_subj["First_Diagnosed_preop"][0])) \
if str(df_subj["First_Diagnosed_preop"][0]) != 'nan' else self.lineEditFirstDiagnosed.setText('')
self.lineEditAdmNeurIndCheck.setText(str(df_subj['Admission_preop'][0])) \
if str(df_subj["Admission_preop"][0]) != 'nan' else self.lineEditAdmNeurIndCheck.setText('')
self.DismNeurIndCheckLabel.setText(str(df_subj['Dismissal_preop'][0])) \
if str(df_subj["Dismissal_preop"][0]) != 'nan' else self.DismNeurIndCheckLabel.setText('')
self.lineEditOutpatientContact.setText(str(df_subj['Outpat_Contact_preop'][0])) \
if str(df_subj["Outpat_Contact_preop"][0]) != 'nan' else self.lineEditOutpatientContact.setText('')
self.lineEditNChContact.setText(str(df_subj['nch_preop'][0])) \
if str(df_subj["nch_preop"][0]) != 'nan' else self.lineEditNChContact.setText('')
self.lineEditDBSconferenceDate.setText(str(df_subj['DBS_Conference_preop'][0])) \
if str(df_subj["DBS_Conference_preop"][0]) != 'nan' else self.lineEditDBSconferenceDate.setText('')
`
Now for updating changes i started writing this:
def onClickedSaveReturn(self):
"""closes GUI and returns to calling (main) GUI"""
df_subj = {k: [] for k in Content.extract_saved_data(self.date).keys()} # extract empty dictionary
df_subj["First_Diagnosed_preop"] = self.lineEditFirstDiagnosed.text()
df_subj['Admission_preop'] = self.lineEditAdmNeurIndCheck.text()
df_subj['Dismissal_preop'] = self.DismNeurIndCheckLabel.text()
df_subj['Outpat_Contact_preop'] = self.lineEditOutpatientContact.text()
df_subj['nch_preop'] = self.lineEditNChContact.text()
df_subj['DBS_Conference_preop'] = self.lineEditDBSconferenceDate.text()
df_subj["H&Y_preop"] = self.hy.text()
But I'm not sure how to actually achieve the updating/replacing.
GUI
This is what the GUI looks like. If I change the year now, for example to 1997, it should be updated in my csv file.
Hope someone can help me.
Thank you!!
Expecting to get updated data in my csv file.
A:
I don't know if I understand exactly what you need, but I think after the modifications you should use the to_csv() function to export the changes to the CSV file, and connect it for example with a Button click.
In case you cannot find the file after saving, note that the to_csv() function saves the file to the current working directory unless you give it a full path.
# in case you have a save Button in your GUI
self.save_Button.clicked.connect(self.SaveChanges)
def SaveChanges(self):
df_subj.to_csv("file_ame.csv", index=False)
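Note that in the onClickedSaveReturn method above, df_subj is built as a plain dict, which has no to_csv method, so it would need to be wrapped in a one-row DataFrame first. A minimal sketch; the file name and the exact set of fields are assumptions:
import pandas as pd

def SaveChanges(self):
    df_out = pd.DataFrame([{
        "First_Diagnosed_preop": self.lineEditFirstDiagnosed.text(),
        "Admission_preop": self.lineEditAdmNeurIndCheck.text(),
        # ... collect the remaining fields the same way ...
    }])
    df_out.to_csv("file_name.csv", index=False)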
| Saving changes to a dataframe after editing in a GUI | I wrote a code, that extracts data from a csv file and displays it in a GUI (when data is already present).
No i need to find a way, that if I change or edit Data in the GUI, the value should be replaced in the csv file as well.
This part here is for extracting the data form the file (which works great):
`
def updatetext(self):
"""adds information extracted from database already provided"""
df_subj = Content.extract_saved_data(self.date)
self.lineEditFirstDiagnosed.setText(str(df_subj["First_Diagnosed_preop"][0])) \
if str(df_subj["First_Diagnosed_preop"][0]) != 'nan' else self.lineEditFirstDiagnosed.setText('')
self.lineEditAdmNeurIndCheck.setText(str(df_subj['Admission_preop'][0])) \
if str(df_subj["Admission_preop"][0]) != 'nan' else self.lineEditAdmNeurIndCheck.setText('')
self.DismNeurIndCheckLabel.setText(str(df_subj['Dismissal_preop'][0])) \
if str(df_subj["Dismissal_preop"][0]) != 'nan' else self.DismNeurIndCheckLabel.setText('')
self.lineEditOutpatientContact.setText(str(df_subj['Outpat_Contact_preop'][0])) \
if str(df_subj["Outpat_Contact_preop"][0]) != 'nan' else self.lineEditOutpatientContact.setText('')
self.lineEditNChContact.setText(str(df_subj['nch_preop'][0])) \
if str(df_subj["nch_preop"][0]) != 'nan' else self.lineEditNChContact.setText('')
self.lineEditDBSconferenceDate.setText(str(df_subj['DBS_Conference_preop'][0])) \
if str(df_subj["DBS_Conference_preop"][0]) != 'nan' else self.lineEditDBSconferenceDate.setText('')
`
Now for updating changes i started writing this:
def onClickedSaveReturn(self):
"""closes GUI and returns to calling (main) GUI"""
df_subj = {k: [] for k in Content.extract_saved_data(self.date).keys()} # extract empty dictionary
df_subj["First_Diagnosed_preop"] = self.lineEditFirstDiagnosed.text()
df_subj['Admission_preop'] = self.lineEditAdmNeurIndCheck.text()
df_subj['Dismissal_preop'] = self.DismNeurIndCheckLabel.text()
df_subj['Outpat_Contact_preop'] = self.lineEditOutpatientContact.text()
df_subj['nch_preop'] = self.lineEditNChContact.text()
df_subj['DBS_Conference_preop'] = self.lineEditDBSconferenceDate.text()
df_subj["H&Y_preop"] = self.hy.text()
But im not sure how to actually achieve the updating/replacing.
GUI
This is what the GUI looks like. If i change the year now for example to 1997, it should be updated in my csv
Hope someone can help me.
Thank you!!
Expecting to get updated data in my csv file.
| [
"I don't know if I understand exactly what you need, but I think after the modifications you should use the to_csv() function to export the changes to the CSV file, and connect it for example with a Button click.\nIn case you can not find the file after saving, you should note that the to_csv() function usually saves the files in the root directory.\n# in case you have a save Button in your GUI\nself.save_Button.clicked.connect(self.SaveChanges)\n\ndef SaveChanges(self):\n df_subj.to_csv(\"file_ame.csv\", index=False)\n\n"
] | [
0
] | [] | [] | [
"csv",
"pyqt5",
"python"
] | stackoverflow_0074654646_csv_pyqt5_python.txt |
Q:
Python compute object property in separate task to improve performace
I wonder if it's possible to compute an object property in a separate background thread when it's initialized to speed up my computation. I have this example code:
class Element:
    def __init__(self):
        self.__area = -1  # cache the area value

    @property
    def area(self):
        if self.__area < 0:
            self.__area = 2  # <- Here I put 2 as an example, but it's a slow area algorithm computation
        return self.__area
Now if I want to compute the total area of N elements.
total_area = sum(el.area for el in elements)
This works fine for a low number of elements, but as the number of elements grows I need a way to process this in parallel. For my purposes, precomputing each area in parallel would be just as good as computing the total area in parallel.
A:
The problem with Python and parallel computing is the GIL (Global Interpreter Lock). The GIL prevents a single process from executing Python bytecode in multiple threads at the same time. So for true parallelism you would need to spawn new processes, which has quite some overhead. Furthermore, it is cumbersome to exchange data between the processes.
There is that multiprocessing package which helps with creating new processes: https://docs.python.org/3/library/multiprocessing.html
Still, in your case the question is how to split up the work. You could split the elements into subblocks and create a process for each subblock, which then gets calculated there. The messaging between the processes can be implemented using pipes, as also described in the link above. So you need to send a lot of objects to the new processes and then send the results back via the pipes. I am not sure whether the overhead of all that will make it faster in the end, or even slower.
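For illustration, a minimal sketch of that subblock idea using multiprocessing.Pool, which takes care of spawning the processes and piping the results back; it assumes the slow area computation can be moved into a top-level function so it can be pickled:
from multiprocessing import Pool

def compute_area(element_data):
    # stand-in for the slow area algorithm; must be a top-level function so it can be pickled
    return 2

if __name__ == "__main__":
    elements_data = list(range(1000))    # stand-in for the real per-element inputs
    with Pool() as pool:                 # one worker process per CPU core by default
        areas = pool.map(compute_area, elements_data, chunksize=100)
    total_area = sum(areas)
    print(total_area)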
| Python compute object property in separate task to improve performace | I wonder if it's possible to compute an object property in a separate background thread when it's initialized to speed up my computation. I have this example code:
class Element:
def __init__(self):
self.__area = -1 # cache the area value
@property
def area(self)
if self.__area < 0:
self.__area = 2 # <- Here I put 2 as example but its slow area algorithm computation
return self.__area
Now if I want to compute the total area of N elements.
total_area = sum(el.area for el in elements)
this works fine for a low number of elements but when the number of element increment I need a way to process this in parallel. I think it's the same for me to precompute the area rather than compute the total area in parallel.
| [
"The problem with Python and parallel computing is that there is that thing called GIL (Global Interpreter Lock). The GIL prevents a process to run multiple threads at the same time. So for that to work you would need to spawn a new process which has quiet some overhead. Furthermore it is cumbersome to exchange data between the processes.\nThere is that multiprocessing package which helps with creating new processes: https://docs.python.org/3/library/multiprocessing.html\nStill in your case the question is, how do you split up the work. You could split up the elements in subblocks and create a process for each subblock which gets calculated there. The messaging between the processes can be implemented using pipes, as also described in 1. So you need to send a lot of objects to the new processes and then send them back via the pipes. I am not sure if the overhead of all that will make it faster in the end or even slower.\n"
] | [
0
] | [] | [] | [
"background_process",
"parallel_processing",
"python",
"python_multithreading"
] | stackoverflow_0074656410_background_process_parallel_processing_python_python_multithreading.txt |
Q:
Hidden Friend in Python
I'm trying to create a "hidden friend" (Secret Santa) draw for my company.
In this logic, they will fill out a google forms form and, at the end of the week, I will download it to my computer as a csv file.
the data collected are: Full name, email address and desired gift.
The idea is to automate the draw and each member will receive a secret friend in their email, with an email address to present them with a virtual gift.
At the stage I'm at, I'm putting together the logic of the draw, but I'm not managing to get it right, because the draw doesn't make sense: one person is drawing two people, and it should only be one at a time.
import glob
import random
import csv
from itertools import permutations, combinations_with_replacement, combinations
all_list = []
for glob in glob.glob("random_friend/csv/*"):
file1 = open(glob, "r+")
reader = csv.reader(file1, delimiter=',')
for i in reader:
all_list.append(i)
all_list.pop(0)
perm = permutations(all_list)
gift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book'])
print(gift)
for i in perm:
name_one = i[1][1]
name_two = i[2][1]
mail_one = i[1][2]
mail_two = i[2][2]
print(f"""{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}""")
A:
It would be very helpful if you could attach some sample records from the input .csv file (anonymized if possible).
Without that, have you tried shuffling the original list instead of using the permutations?
import glob
import random
import csv
all_list = []
for glob in glob.glob("random_friend/csv/*"):
file1 = open(glob, "r+")
reader = csv.reader(file1, delimiter=',')
for i in reader:
all_list.append(i)
all_list.pop(0)
random.shuffle(all_list)
gift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book'])
for i in range(0, len(all_list), 2):
name_one = all_list[i][1]
name_two = all_list[i+1][1]
mail_one = all_list[i][2]
mail_two = all_list[i+1][2]
print(f"""{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}""")
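If the requirement is that every participant both gives and receives exactly one gift (rather than splitting the list into independent pairs), a common variant is to shuffle once and let each person draw the next person in the shuffled order, wrapping around at the end. A sketch, assuming each row is [timestamp, name, email, ...] as implied by the indexing above:
random.shuffle(all_list)

for i, giver in enumerate(all_list):
    receiver = all_list[(i + 1) % len(all_list)]    # next person in the cycle
    gift = random.choice(['chocolat', 'Squeeze', 'fridge magnet',
                          'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book'])
    print(f"{giver[1]} took {receiver[1]} and presents a {gift}, sent by e-mail to {receiver[2]}")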
| Hidden Friend in Python | I'm trying to create a hidden friend for my company.
In this logic, they will fill out a google forms form and, at the end of the week, I will download it to my computer as a csv file.
the data collected are: Full name, email address and desired gift.
The idea is to automate the draw and each member will receive a secret friend in their email, with an email address to present them with a virtual gift.
At the stage I'm at, I'm putting together the logic of the draw, but I'm not managing to develop. Because it's not making sense of the draw. One person is drawing two and it should only be one at a time.
import glob
import random
import csv
from itertools import permutations, combinations_with_replacement, combinations
all_list = []
for glob in glob.glob("random_friend/csv/*"):
file1 = open(glob, "r+")
reader = csv.reader(file1, delimiter=',')
for i in reader:
all_list.append(i)
all_list.pop(0)
perm = permutations(all_list)
gift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book'])
print(gift)
for i in perm:
name_one = i[1][1]
name_two = i[2][1]
mail_one = i[1][2]
mail_two = i[2][2]
print(f"""{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}""")
| [
"It would be very helpful if you could attach some sample records from the input .csv file (anonymized if possible).\nWithout that, have you tried shuffling the original list instead of using the permutations?\nimport glob\nimport random\nimport csv\n\nall_list = []\nfor glob in glob.glob(\"random_friend/csv/*\"):\n file1 = open(glob, \"r+\")\n reader = csv.reader(file1, delimiter=',')\n for i in reader:\n all_list.append(i)\n all_list.pop(0)\n\nrandom.shuffle(all_list)\n\ngift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book'])\n\nfor i in range(0, len(all_list), 2):\n name_one = all_list[i][1]\n name_two = all_list[i+1][1]\n mail_one = all_list[i][2]\n mail_two = all_list[i+1][2]\n\n print(f\"\"\"{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}\"\"\")\n\n"
] | [
1
] | [] | [] | [
"python",
"python_itertools",
"random"
] | stackoverflow_0074656583_python_python_itertools_random.txt |
Q:
How to mock a function which makes a mutation on an argument that is necessary for the caller function logic
I want to be able to mock a function that mutates an argument, where that mutation is relevant for the code to continue executing correctly.
Consider the following code:
def mutate_my_dict(mutable_dict):
if os.path.exists("a.txt"):
mutable_dict["new_key"] = "new_value"
return True
def function_under_test():
my_dict = {"key": "value"}
if mutate_my_dict(my_dict):
return my_dict["new_key"]
return "No Key"
def test_function_under_test():
with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock:
mutate_my_dict_mock.return_value = True
result = function_under_test()
assert result == "new_value"
Please understand, I know I can just mock os.path.exists in this case, but this is just an example. I intentionally want to mock the function and not the external module.
I also read the docs here:
https://docs.python.org/3/library/unittest.mock-examples.html#coping-with-mutable-arguments
But it doesn't seem to fit in my case.
This is the test I've written so far, but it obviously doesn't work since the key changes:
def test_function_under_test():
with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock:
mutate_my_dict_mock.return_value = True
result = function_under_test()
assert result == "new_value"
Thanks in advance for all of your time :)
A:
With the help of Peter i managed to come up with this final test:
def mock_mutate_my_dict(my_dict):
my_dict["new_key"] = "new_value"
return True
def test_function_under_test():
with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock:
mutate_my_dict_mock.side_effect = mock_mutate_my_dict
result = function_under_test()
assert result == "new_value"
How it works is that with a side_effect you can run a replacement function instead of the mocked one.
In that function you need to both apply the changes to the mutating arguments and return the value that should be returned.
| How to mock a function which makes a mutation on an argument that is necessary for the caller fuction logic | I want to be able to mock a function that mutates an argument, and that it's mutation is relevant in order for the code to continue executing correctly.
Consider the following code:
def mutate_my_dict(mutable_dict):
if os.path.exists("a.txt"):
mutable_dict["new_key"] = "new_value"
return True
def function_under_test():
my_dict = {"key": "value"}
if mutate_my_dict(my_dict):
return my_dict["new_key"]
return "No Key"
def test_function_under_test():
with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock:
mutate_my_dict_mock.return_value = True
result = function_under_test()
assert result == "new_value"
**Please understand i know i can just mock os.path.exists in this case but this is just an example. I intentionally want to mock the function and not the external module.
**
I also read the docs here:
https://docs.python.org/3/library/unittest.mock-examples.html#coping-with-mutable-arguments
But it doesn't seem to fit in my case.
This is the test i've written so far, but it obviously doesn't work since the key changes:
def test_function_under_test():
with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock:
mutate_my_dict_mock.return_value = True
result = function_under_test()
assert result == "new_value"
Thanks in advance for all of your time :)
| [
"With the help of Peter i managed to come up with this final test:\ndef mock_mutate_my_dict(my_dict):\n my_dict[\"new_key\"] = \"new_value\"\n return True\n\n\ndef test_function_under_test():\n with patch(\"stack_over_flow.mutate_my_dict\") as mutate_my_dict_mock:\n mutate_my_dict_mock.side_effect = mock_mutate_my_dict\n result = function_under_test()\n assert result == \"new_value\"\n\nHow it works is that with a side effect you can run a function instead of the intended function.\nIn this function you need to both change all of the mutating arguments and return the value returned.\n"
] | [
0
] | [] | [] | [
"pytest",
"python",
"python_3.x",
"python_unittest.mock",
"unit_testing"
] | stackoverflow_0074643203_pytest_python_python_3.x_python_unittest.mock_unit_testing.txt |
Q:
Display MongoDB Documents data on a Webpage using Python Flask
I wrote code using Python to display all the documents from MongoDB on a web page. However, on the web page I see the column names but no data.
On the command line, it does print all the data. Any help is greatly appreciated.
import pymongo
from pymongo import MongoClient
import datetime
import sys
from flask import Flask, render_template, request
import werkzeug
from flask_table import Table,Col
from bson.json_util import dumps
import json
app = Flask(__name__)
try:
client = pymongo.MongoClient("XXXX")
print("Connected to Avengers MongoClient Successfully from Project Script!!!")
except:
print("Connection to MongoClient Failed!!!")
db = client.avengers_hack_db
@app.route('/')
def Results():
try:
Project_List_Col = db.ppm_master_db_collection.find()#.limit(10)
for row in Project_List_Col:
print(row)
return render_template('Results.html',tasks=row)
except Exception as e:
return dumps({'error': str(e)})
if __name__ == '__main__':
app.run(debug = True)
The HTML (Results.html) Page is:
<html>
<body>
{% for task_id in tasks %}
<h3>{{task_id}}</h3>
{% endfor %}
</body>
</html>
A:
Removed the for loop and rewrote the code as below:
@app.route('/')
def Results():
try:
Project_List_Col = db.ppm_master_db_collection.find()
return render_template('Results.html',tasks=Project_List_Col)
except Exception as e:
return dumps({'error': str(e)})
if __name__ == '__main__':
app.run(debug = True)
Documents are displayed on the HTML Page as is.
(I will work on the formatting part; meanwhile, any pointers are greatly appreciated.)
A:
Try iterating over each document's key/value pairs in the template's for loop (key, value unpacking), and also remove the extra loop from the route in the Python app file, as the answer above from @Dinakar suggests.
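On the Python side, one rough sketch of that idea is to hand the template a plain list of dicts (dropping the raw ObjectId via a projection) so the template can simply loop over each document's items; the collection and template names here are the ones from the question:
@app.route('/')
def Results():
    try:
        # exclude _id so every remaining value renders cleanly
        docs = list(db.ppm_master_db_collection.find({}, {"_id": 0}))
        return render_template('Results.html', tasks=docs)
    except Exception as e:
        return dumps({'error': str(e)})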
| Display MongoDB Documents data on a Webpage using Python Flask | I wrote a code using Python and trying to display all the Documents from Mongodb on a web page. However, on webpage I see the Column names, but no data.
And on the command, it does print all the data. Any help is greatly appreciated.
import pymongo
from pymongo import MongoClient
import datetime
import sys
from flask import Flask, render_template, request
import werkzeug
from flask_table import Table,Col
from bson.json_util import dumps
import json
app = Flask(__name__)
try:
client = pymongo.MongoClient("XXXX")
print("Connected to Avengers MongoClient Successfully from Project
Script!!!")
except:
print("Connection to MongoClient Failed!!!")
db = client.avengers_hack_db
@app.route('/')
def Results():
try:
Project_List_Col = db.ppm_master_db_collection.find()#.limit(10)
for row in Project_List_Col:
print(row)
return render_template('Results.html',tasks=row)
except Exception as e:
return dumps({'error': str(e)})
if __name__ == '__main__':
app.run(debug = True)
The HTML (Results.html) Page is:
<html>
<body>
{% for task_id in tasks %}
<h3>{{task_id}}</h3>
{% endfor %}
</body>
</html>
| [
"Removed the for loop and rewrote the code as below:\[email protected]('/')\ndef Results():\n try:\n Project_List_Col = db.ppm_master_db_collection.find()\n return render_template('Results.html',tasks=Project_List_Col)\n except Exception as e:\n return dumps({'error': str(e)})\n\nif __name__ == '__main__': \n app.run(debug = True)\n\nDocuments are displayed on the HTML Page as is.\n(***Will work on the formatting part. Meanwhile any pointers are greatly appreciated.)\n",
"Try using key,value function of for loop and also remove that loop in the python app file from route like upper answer @Dinakar suggested\n"
] | [
0,
0
] | [] | [] | [
"flask",
"mongodb",
"python"
] | stackoverflow_0057637088_flask_mongodb_python.txt |
Q:
Python: Move files from multiple folders in different locations into one folder
I am able to move all files from one folder to another. I need help in order to move files to destination folder from multiple source folders.
import os
import shutil
source1 = "C:\\Users\\user\\OneDrive\\Desktop\\1\\"
source2 = "C:\\Users\\user\\OneDrive\\Desktop\\2\\"
destination = "C:\\Users\\user\\OneDrive\\Desktop\\Destination\\"
files = os.listdir(source1, source2)
for f in files:
shutil.move(source1 + f,source2 + f, destination + f)
print("Files Transferred")
I am getting error :
files = os.listdir(source1, source2)
TypeError: listdir() takes at most 1 argument (2 given)
A:
This is the line the interpreter is complaining about; you cannot pass two directories to the os.listdir function:
files = os.listdir(source1, source2)
You have to use a nested loop (or list comprehension) to do what you want, like so:
import os
import shutil

sources = [source1, source2, ..., sourceN]
files_to_move = []
for source in sources:
    current_source_files = [f"{source}{filename}" for filename in os.listdir(source)]
    files_to_move.extend(current_source_files)
for f in files_to_move:
    shutil.move(f, f"{destination}{f.split(os.sep)[-1]}")
For a "cleaner" solution it's worth looking at:
https://docs.python.org/3/library/os.path.html#module-os.path
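As one possible variant in that direction, a pathlib-based sketch of the same loop (functionally equivalent, just without the manual string concatenation; the folder names are the ones from the question):
from pathlib import Path
import shutil

sources = [Path(r"C:\Users\user\OneDrive\Desktop\1"),
           Path(r"C:\Users\user\OneDrive\Desktop\2")]
destination = Path(r"C:\Users\user\OneDrive\Desktop\Destination")

for source in sources:
    for f in source.iterdir():
        if f.is_file():                      # skip sub-directories
            shutil.move(str(f), str(destination / f.name))
print("Files Transferred")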
| Python: Move files from multiple folders in different locations into one folder | I am able to move all files from one folder to another. I need help in order to move files to destination folder from multiple source folders.
import os
import shutil
source1 = "C:\\Users\\user\\OneDrive\\Desktop\\1\\"
source2 = "C:\\Users\\user\\OneDrive\\Desktop\\2\\"
destination = "C:\\Users\\user\\OneDrive\\Desktop\\Destination\\"
files = os.listdir(source1, source2)
for f in files:
shutil.move(source1 + f,source2 + f, destination + f)
print("Files Transferred")
I am getting error :
files = os.listdir(source1, source2)
TypeError: listdir() takes at most 1 argument (2 given)
| [
"This is the line interpreter is complaining about, you cannot pass two directories to os.listdir function\nfiles = os.listdir(source1, source2)\n\nYou have to have a nested loop (or list comprehension) to do what you want so:\nimport os\nsources = [source1, source2, ..., sourceN]\nfiles_to_move = []\nfor source in sources:\n current_source_files =[f\"{source}{filename}\" for filename in os.listdir(source)]\n files_to_move.extend(current_source_files)\nfor f in files_to_move:\n shutil.move(f, f\"{destination}{f.split(os.sep)[-1]}\")\n\nFor \"cleaner\" solution it's worth to look at:\nhttps://docs.python.org/3/library/os.path.html#module-os.path\n"
] | [
1
] | [] | [] | [
"directory",
"file",
"python",
"shutil"
] | stackoverflow_0074656592_directory_file_python_shutil.txt |
Q:
Evaluating a Multiline Code Block (after an `if`) Without Indentation
Is it possible to make Python3 see an unindented code chunk as a code block? If so how?
This is more of a curiosity about how Python works. Typically, if you want to run a code chunk after an if statement, you need to indent what comes below:
if True:
x = 'hello'
print(x)
## hello
Is there a way to use the if and not indent the next 2 lines?
You can get it to work if the next line is a function call (not an assignment) and you wrap it with parenthesis as seen below:
if True:(
print('hello')
)
## hello
But it fails to work if you add in multiple lines or an assignment:
if True:(
print('hello')
print('hello2')
)
## File "<stdin>", line 3
## print('hello2')
## ^
## SyntaxError: invalid syntax
## >>> )
## File "<stdin>", line 1
## )
## ^
## SyntaxError: unmatched ')'
if True:(
x = 'hello'
)
## File "<stdin>", line 2
## x = 'hello'
## ^
## SyntaxError: invalid syntax
## >>> )
## File "<stdin>", line 1
## )
## ^
## SyntaxError: unmatched ')'
Is there a way to evaluate multiple lines after the if without indenting them? Perhaps something similar to the parenthesis trick I used for the simple print('hello'), but one that works for multiple lines and assignments?
A:
This code should work:
if True:(
x:='hello_x',
print('hello'),
print(x)
)
## hello
## hello_x
In your case, you are using a tuple to work around Python's indentation logic, so you need to separate each element with a comma. And since you are in a tuple, you need to use the walrus operator := to assign a value.
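For completeness: when the body consists only of simple statements, Python also allows putting them directly on the if line, separated by semicolons, with no tuple or walrus operator needed:
if True: x = 'hello'; print(x); print('hello2')
## hello
## hello2
This only works for simple statements (assignments, calls, and so on); nested compound statements such as another if or a for loop still require an indented block.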
| Evaluating a Multiline Code Block (after an `if`) Without Indentation | Is it possible to make Python3 see an unindented code chunk as a code block? If so how?
This is more of a curiousity of how Python works. Typically if you want to run a code chunk after an if statement you need to indent what comes below:
if True:
x = 'hello'
print(x)
## hello
Is there a way to use the if and not indent the next 2 lines?
You can get it to work if the next line is a function call (not an assignment) and you wrap it with parenthesis as seen below:
if True:(
print('hello')
)
## hello
But it fails to work if you add in multiple lines or an assignment:
if True:(
print('hello')
print('hello2')
)
## File "<stdin>", line 3
## print('hello2')
## ^
## SyntaxError: invalid syntax
## >>> )
## File "<stdin>", line 1
## )
## ^
## SyntaxError: unmatched ')'
if True:(
x = 'hello'
)
## File "<stdin>", line 2
## x = 'hello'
## ^
## SyntaxError: invalid syntax
## >>> )
## File "<stdin>", line 1
## )
## ^
## SyntaxError: unmatched ')'
Is there a way to evaluate the multiple lines after the if without indenting them? Perhaps similar to the parenthisis trick I used for the simple print('hello) but that works for multiple lines and assignments?
| [
"This code should work:\nif True:(\nx:='hello_x',\nprint('hello'),\nprint(x)\n)\n\n## hello\n## hello_x\n\nIn your case, you are using a tuple to break python's indentation logic, so you need to separate each element with a comma. And since you are in a tuple, you need to use the Walrus Operator := to assign a value.\n"
] | [
3
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074656650_python_python_3.x.txt |
Q:
Pandas development environment: pytest does not see changes after building edited .pyx file
Q: Why is pytest not seeing changes when I edit a .pyx file and build? What step am I missing?
I'm using Visuals Studio Code with remote containers as described at the end of this page.
If I add changes to pandas/_libs/tslibs/offsets.pyx, and then run
(pandas-dev) root@60017c489843:/workspaces/pandas# python setup.py build_ext -j 4
Compiling pandas/_libs/tslibs/offsets.pyx because it changed.
[1/1] Cythonizing pandas/_libs/tslibs/offsets.pyx
/opt/conda/envs/pandas-dev/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:108: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*. warnings.warn(msg, _BetaConfiguration)
# ... more output here without errors ...
my unit test fails because it does not test against my updated version of offsets.pyx. It points to a line (see below) where the error exists only in the old version of the file.
pandas/tests/tseries/offsets/test_offsets.py ........x......................................................F
.... more output here ...
E TypeError: __init__() got an unexpected keyword argument 'milliseconds'
pandas/_libs/tslibs/offsets.pyx:325: TypeError
Whatever change I add to cdef _determine_offset and build, pytest does not see the edits, therefore I assume I'm missing a compilation step somewhere.
Reproducible example
clone my pandas fork: git clone [email protected]:markopacak/pandas.git
git checkout bug-dateoffset-milliseconds
In your dev-environment (docker container or VS Code remote container) run:
conda activate pandas-dev
python setup.py build_ext -j 4
pytest pandas/tests/tseries/offsets/test_offsets.py::TestDateOffset
Assumes you have set-up a dev environment for pandas, ideally using remote-containers on VS Code like I did.
(pandas-dev) root@60017c489843:/workspaces/pandas# python --version
Python 3.8.15
A:
I'm pretty sure you need to install once the extensions are built (otherwise, where is the built extension, and how should python/pytest know where to look?). This is how my workflow looked some time ago (not sure it still applies, but it should be close enough):
python setup.py build_ext --inplace -j 4
python -m pip install -e . --no-build-isolation --no-use-pep517
...
pytest pandas/tests/xxxx/yyyy.py
Installing in development mode (-e) is the most convenient option in my opinion for development.
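To double-check that pytest is really picking up the in-tree build after the editable install (rather than some other pandas from site-packages), a quick sanity check from inside the environment is the following; the exact version string is an assumption, but dev builds usually contain "dev":
import pandas as pd

print(pd.__version__)   # for a dev build this usually contains "dev"
print(pd.__file__)      # should point into /workspaces/pandas, not site-packages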
| Pandas development environment: pytest does not see changes after building edited .pyx file | Q: Why is pytest not seeing changes when I edit a .pyx file and build? What step am I missing?
I'm using Visuals Studio Code with remote containers as described at the end of this page.
If I add changes to pandas/_libs/tslibs/offsets.pyx, and then run
(pandas-dev) root@60017c489843:/workspaces/pandas# python setup.py build_ext -j 4
Compiling pandas/_libs/tslibs/offsets.pyx because it changed.
[1/1] Cythonizing pandas/_libs/tslibs/offsets.pyx
/opt/conda/envs/pandas-dev/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:108: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*. warnings.warn(msg, _BetaConfiguration)
# ... more output here without errors ...
my unit-test fails because it does not test against my updated version of offsets.pxy. He points to a line (see below) where the error exists only in the old version of the file.
pandas/tests/tseries/offsets/test_offsets.py ........x......................................................F
.... more output here ...
E TypeError: __init__() got an unexpected keyword argument 'milliseconds'
pandas/_libs/tslibs/offsets.pyx:325: TypeError
Whatever change I add to cdef _determine_offset and build, pytest does not see the edits, therefore I assume I'm missing a compilation step somewhere.
Reproducible example
clone my pandas fork: git clone [email protected]:markopacak/pandas.git
git checkout bug-dateoffset-milliseconds
In your dev-environment (docker container or VS Code remote container) run:
conda activate pandas-dev
python setup.py build_ext -j 4
pytest pandas/tests/tseries/offsets/test_offsets.py::TestDateOffset
Assumes you have set-up a dev environment for pandas, ideally using remote-containers on VS Code like I did.
(pandas-dev) root@60017c489843:/workspaces/pandas# python --version
Python 3.8.15
| [
"I'm pretty sure you need to install once the extensions are built (otherwise where are the built extension and how python/pytest should know where to look?). This is how my workflow looked some time ago (not sure it still applies but should be close enough):\npython setup.py build_ext --inplace -j 4\npython -m pip install -e . --no-build-isolation --no-use-pep517\n\n...\n\npytest pandas/tests/xxxx/yyyy.py\n\nInstalling in development mode (-e) is the most convenient option in my opinion for development.\n"
] | [
1
] | [] | [] | [
"cython",
"pandas",
"pytest",
"python",
"visual_studio_code"
] | stackoverflow_0074656048_cython_pandas_pytest_python_visual_studio_code.txt |
Q:
Square Every Digit of a Number in Python?
Square Every Digit of a Number in Python?
If we run 9119 through the function, 811181 will come out, because 9² is 81 and 1² is 1.
I wrote this code, but it is not working.
def sq(num):
    words = num.split() # split the text
    for word in words: # for each word in the line:
        print(word**2) # print the word

num = 9119
sq(num)
A:
We can use list to split every character of a string; we can also use "end" in "print" to set the delimiter in the printout.
def sq(num):
words = list(str(num)) # split the text
for word in words: # for each word in the line:
print(int(word)**2, end="") # print the word
num = 9119
sq(num)
Alternatively
return ''.join(str(int(i)**2) for i in str(num))
A:
def sq(num):
z = ''.join(str(int(i)**2) for i in str(num))
return int(z)
A:
number=str(input("Enter the number :"))
def pc(number):
digits = list(number)
for j in digits:
print(int(j)**2,end="")
pc(number)
A:
We can also support input of negative numbers and zeros. Uses arithmetic operators (% and //) for fun.
def sq(num):
    num = abs(num)                        # Handle negative numbers
    output = str((num % 10)**2)           # Process rightmost digit
    while num > 9:                        # Stop once only one digit is left
        num //= 10                        # Remove rightmost digit
        output = str((num % 10)**2) + output  # Add squared digit to output
    print(output)
A:
Also you can try this variant:
def square_digits(num):
return int(''.join(str(int(i)**2) for i in str(num)))
| Square Every Digit of a Number in Python? | Square Every Digit of a Number in Python?
if we run 9119 through the function, 811181 will come out, because 92 is 81 and 12 is 1.
write a code but this not working.
def sq(num):
words = num.split() # split the text
for word in words: # for each word in the line:
print(word**2) # print the word
num = 9119
sq(num)
| [
"We can use list to split every character of a string, also we can use \"end\" in \"print\" to indicate the deliminter in the print out.\ndef sq(num):\n words = list(str(num)) # split the text\n for word in words: # for each word in the line:\n print(int(word)**2, end=\"\") # print the word\n\nnum = 9119\nsq(num)\n\nAlternatively\nreturn ''.join(str(int(i)**2) for i in str(num))\n\n",
"def sq(num):\n z = ''.join(str(int(i)**2) for i in str(num))\n return int(z)\n\n",
"number=str(input(\"Enter the number :\"))\n\ndef pc(number):\n\n digits = list(number)\n\n for j in digits:\n\n print(int(j)**2,end=\"\")\n \npc(number)\n\n",
"We can also support input of negative numbers and zeros. Uses arithmetic operators (% and //) for fun.\ndef sq(num):\n num = abs(num) #Handle negative numbers\n output = str((num % 10)**2) #Process rightmost digit \n\n while(num > 0): \n num //= 10 #Remove rightmost digit \n output = str((num % 10)**2) + output #Add squared digit to output\n print(output)\n\n",
"Also you can try this variant:\ndef square_digits(num):\nreturn int(''.join(str(int(i)**2) for i in str(num)))\n"
] | [
3,
2,
1,
0,
0
] | [
"def square_digits(num):\n num = str(num)\n result = ''\n for i in num:\n result += str(int(i)**2)\n return int(result)\nvar = square_digits(123)\nprint(var)\n\n"
] | [
-1
] | [
"numbers",
"python",
"python_3.x"
] | stackoverflow_0049604549_numbers_python_python_3.x.txt |
Q:
looping through a data frame Python
I have this data frame where I sliced columns from the original data frame:
Type 1 Attack
Grass 62
Grass 82
Dragon 100
Fire 52
Rock 100
I want to create each Pokemon's adjusted attack attribute against grass Pokemon based on 'Type 1', where:
the attack attribute is doubled if grass Pokemon are bad against that type
halved if they are good against that type
else remains the same.
I have tried looping through the data:
grass_attack = []
for value in df_["Type 1"]:
if value ==["Type 1 == Fire"] or value==["Type 1 == Flying"] or value==["Type 1 == Poison"] or value==["Type 1 == Bug"] or value==["Type1== Steel"] or value ==["Type 1 == Grass"] or value ==["Type 1 == Dragon"]:
result.append(df_["Attack"]/2)
elif value==["Type 1==Ground"] or value==["Type1== Ground"] or value==["Type 1 == Water"]:
grass_attack.append(df_["Attack"]*2)
else:
grass_attack.append(df_["Attack"])
df_["grass_attack"] = grass_attack
print(df_)
but I got some crazy results after this. How can I efficiently loop through a data frame's column in order to adjust another column?
or is there another way to do this?
A:
There are some issues with your code, as @azro pointed out in the comments, and there is no need for a loop here. You can simply use numpy.select to create a multi-conditional column.
Here is an example to give you the general logic :
df["Attack"] = df["Attack"].astype(int)
conditions = [df["Type 1"].eq("Grass"), df["Type 1"].isin(["Fire", "Rock"])]
choices = [df["Attack"].div(2), df["Attack"].mul(2)]
df["grass_attack"] = np.select(conditions, choices, default=df["Attack"]).astype(int)
# Output :
print(df)
Type 1 Attack grass_attack
0 Grass 62 31
1 Grass 82 41
2 Dragon 100 100
3 Fire 52 104
4 Rock 100 200
A:
You could use apply to do the necessary calculations. In the following code, the modify_Attack() function is used to calculate the Grass Attack values based on the Type1 and Attack values.
Type 1 values that are in the bad list will have their attack values halved.
Type 1 values that are in the good list will have their attack values doubled.
All other attack values will remain unchanged.
Here is the code:
import pandas as pd
# Create dataframe
df = pd.DataFrame({ 'Type 1': ['Grass', 'Grass', 'Dragon', 'Fire', 'Rock'],
'Attack': [62, 82, 100, 52, 100]})
# Function to modify the Attack value based on the Type 1 value
def modify_Attack(type_val, attack_val):
bad = ['Fire', 'Flying', 'Poison', 'Bug', 'Steel', 'Grass', 'Dragon']
good = ['Ground','Water']
result = attack_val # default value is unchanged
if type_val in bad:
result /= 2
elif type_val in good:
result *= 2
return result
# Create the Grass Attack column
df['Grass Attack'] = df.apply(lambda x: modify_Attack(x['Type 1'], x['Attack']), axis=1).astype(int)
# print the dataframe
print(df)
OUTPUT:
Type 1 Attack Grass Attack
0 Grass 62 31
1 Grass 82 41
2 Dragon 100 50
3 Fire 52 26
4 Rock 100 100
| looping through a data frame Python | I have this data frame where I sliced columns from the original data frame:
Type 1 Attack
Grass 62
Grass 82
Dragon 100
Fire 52
Rock 100
I want to create each Pokemon’s adjusted attack attribute against grass Pokemon based on ‘Type 1’ where;
the attack attribute is doubled if grass Pokemon are bad against that type
halved if they are good against that type
else remains the same.
I have looping through the data:
grass_attack = []
for value in df_["Type 1"]:
if value ==["Type 1 == Fire"] or value==["Type 1 == Flying"] or value==["Type 1 == Poison"] or value==["Type 1 == Bug"] or value==["Type1== Steel"] or value ==["Type 1 == Grass"] or value ==["Type 1 == Dragon"]:
result.append(df_["Attack"]/2)
elif value==["Type 1==Ground"] or value==["Type1== Ground"] or value==["Type 1 == Water"]:
grass_attack.append(df_["Attack"]*2)
else:
grass_attack.append(df_["Attack"])
df_["grass_attack"] = grass_attack
print(df_)
but I got some crazy results after this. How can I efficiently loop through a data frame's column in order to adjust another column?
or is there another way to do this?
| [
"There is some issues with your code as @azro pointed in the comments and there is no need for a loop here. You can simply use numpy.select to create a multi-conditionnal column.\nHere is an example to give you the general logic :\ndf[\"Attack\"] = df[\"Attack\"].astype(int)\n \nconditions = [df[\"Type 1\"].eq(\"Grass\"), df[\"Type 1\"].isin([\"Fire\", \"Rock\"])]\nchoices = [df[\"Attack\"].div(2), df[\"Attack\"].mul(2)]\n \ndf[\"grass_attack\"] = np.select(conditions, choices, default=df[\"Attack\"]).astype(int)\n\n# Output :\nprint(df)\n\n Type 1 Attack grass_attack\n0 Grass 62 31\n1 Grass 82 41\n2 Dragon 100 100\n3 Fire 52 104\n4 Rock 100 200\n\n",
"You could use apply to do the necessary calculations. In the following code, the modify_Attack() function is used to calculate the Grass Attack values based on the Type1 and Attack values.\n\nType 1 values that are in the bad list will have their attack values halved.\nType 1 values that are in the good list will have their attack values doubled.\nAll other attack values will remain unchanged.\n\nHere is the code:\nimport pandas as pd\n\n# Create dataframe\ndf = pd.DataFrame({ 'Type 1': ['Grass', 'Grass', 'Dragon', 'Fire', 'Rock'],\n 'Attack': [62, 82, 100, 52, 100]})\n\n\n# Function to modify the Attack value based on the Type 1 value\ndef modify_Attack(type_val, attack_val):\n bad = ['Fire', 'Flying', 'Poison', 'Bug', 'Steel', 'Grass', 'Dragon']\n good = ['Ground','Water']\n \n result = attack_val # default value is unchanged\n \n if type_val in bad:\n result /= 2\n\n elif type_val in good:\n result *= 2\n \n return result\n \n\n# Create the Grass Attack column\ndf['Grass Attack'] = df.apply(lambda x: modify_Attack(x['Type 1'], x['Attack']), axis=1).astype(int)\n\n# print the dataframe\nprint(df)\n\nOUTPUT:\n Type 1 Attack Grass Attack\n0 Grass 62 31\n1 Grass 82 41\n2 Dragon 100 50\n3 Fire 52 26\n4 Rock 100 100\n\n"
] | [
1,
1
] | [] | [] | [
"dataframe",
"loops",
"pandas",
"python"
] | stackoverflow_0074656291_dataframe_loops_pandas_python.txt |
Q:
yfinance Crypto symbol list
I am using yfinance in Python to get crypto symbol pair prices. It gives real-time data via its yf.download(tickers=tickers, period=period, interval=interval) function in a very nice format. I am wondering whether there is any function in yfinance to pull out all the supported crypto pairs without doing any web scraping for this.
A:
To my knowledge, Yahoo Finance uses CoinMarketCap to retrieve crypto market information.
CoinMarketCap offers the API you request here: (not free)
https://pro-api.coinmarketcap.com/v1/exchange/market-pairs/latest
I suggest you transfer to the Binance API. It includes the endpoint GET /api/v1/exchangeInfo as documented here:
https://github.com/binance/binance-spot-api-docs/blob/master/rest-api.md
Your direct endpoint would be https://api.binance.com/api/v1/exchangeInfo
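A minimal sketch of querying that endpoint directly with requests and pulling out the trading pairs (no API key is needed for this public endpoint):
import requests

resp = requests.get("https://api.binance.com/api/v1/exchangeInfo", timeout=10)
resp.raise_for_status()
symbols = [s["symbol"] for s in resp.json()["symbols"]]
print(len(symbols), symbols[:10])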
A:
You could use https://python-binance.readthedocs.io/en/latest/general.html#id4 :
from binance import Client

client = Client()
info = client.get_exchange_info()
symbols = [x['symbol'] for x in info['symbols']]
A:
"BTC-USD"works pretty well for me
A:
You can use the python-binance library:
from binance import Client
from tqdm.autonotebook import tqdm
import pandas as pd
import numpy as np
def get_binance_data(ticker, interval='4h', start='1 Jan 2018', end=None):
client = Client()
intervals = {
'15m': Client.KLINE_INTERVAL_15MINUTE,
'1h': Client.KLINE_INTERVAL_1HOUR,
'4h': Client.KLINE_INTERVAL_4HOUR,
'1d': Client.KLINE_INTERVAL_1DAY
}
interval = intervals.get(interval, '4h')
# print(f'Historical interval {interval}')
klines = client.get_historical_klines(symbol=ticker, interval=interval, start_str=start, end_str=end)
data = pd.DataFrame(klines)
data.columns = ['open_time','open', 'high', 'low', 'close', 'volume','close_time', 'qav','num_trades','taker_base_vol','taker_quote_vol', 'ignore']
data.index = [pd.to_datetime(x, unit='ms').strftime('%Y-%m-%d %H:%M:%S') for x in data.open_time]
usecols=['open', 'high', 'low', 'close', 'volume', 'qav','num_trades','taker_base_vol','taker_quote_vol']
data = data[usecols]
data = data.astype('float')
return data
client = Client()
exchange_info = client.get_exchange_info()
symbols=[s['symbol'] for s in exchange_info['symbols'] if s['status'] == 'TRADING']
ticker_list = symbols[:50]
# tiker_list = np.random.choice(symbols, size=50)
print('Number of crypto pairs: ', len(symbols))
print('First 50 pairs: ', *ticker_list)
# collect pair closes in one dataframe
coins = []
for ticker in tqdm(ticker_list):
try:
close_price = get_binance_data(ticker, interval='1d', start='1 Jan 2018', end='1 Jul 2022')['close'].to_dict()
info = {'name': ticker}
info.update(close_price)
coins.append(info)
except Exception as err:
print(err)
continue
coins = pd.DataFrame(coins)
# print(coins.head())
coins.head()
result:
If you need data for all pairs, it is better to use multithreading or asynchronous requests.
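For the multithreaded variant, a minimal sketch using concurrent.futures on top of the get_binance_data helper above (the worker count of 8 is an arbitrary choice):
from concurrent.futures import ThreadPoolExecutor

def fetch_one(ticker):
    close_price = get_binance_data(ticker, interval='1d', start='1 Jan 2018', end='1 Jul 2022')['close'].to_dict()
    return {'name': ticker, **close_price}

with ThreadPoolExecutor(max_workers=8) as executor:
    coins = pd.DataFrame(list(executor.map(fetch_one, ticker_list)))
print(coins.head())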
A:
You can get the crypto symbol list from Yahoo Finance only as "coin-USD" pairs:
import requests
import pandas as pd
from requests_html import HTMLSession
session = HTMLSession()
num_currencies=250
resp = session.get(f"https://finance.yahoo.com/crypto?offset=0&count={num_currencies}")
tables = pd.read_html(resp.html.raw_html)
df = tables[0].copy()
symbols_yf = df.Symbol.tolist()
print(symbols_yf[:15])
print(df.head(5))
result:
| yfinance Crypto symbol list | I am using yfinance in python to get crypto symbol pair prices. It gives real time data via its yf.download(tickers=tickers, period=period, interval=interval) function in a very nice format. I am wondering is there any function in yfinance to pull out all the supported crypto-pairs without doing any webscraping on this
| [
"To my knowledge YahooFinance uses CoinMarketCap to retrive crypto market information.\nCoinMarketCap offers the API you request here: (not free)\nhttps://pro-api.coinmarketcap.com/v1/exchange/market-pairs/latest\nI suggest you transfer to the Binance API. It includes the endpoint GET /api/v1/exchangeInfo as documented here:\nhttps://github.com/binance/binance-spot-api-docs/blob/master/rest-api.md\nYour direct endpoint would be https://api.binance.com/api/v1/exchangeInfo\n",
"You could use https://python-binance.readthedocs.io/en/latest/general.html#id4 :\nfrom binance import Client \n\ninfo = client.get_exchange_info()\nsymbols = [x['symbol'] for x in info['symbols']]\n\n",
"\"BTC-USD\"works pretty well for me\n",
"you can use python-binance library\nfrom binance import Client\nfrom tqdm.autonotebook import tqdm\nimport pandas as pd\nimport numpy as np\n\ndef get_binance_data(ticker, interval='4h', start='1 Jan 2018', end=None):\n client = Client()\n intervals = {\n '15m': Client.KLINE_INTERVAL_15MINUTE,\n '1h': Client.KLINE_INTERVAL_1HOUR, \n '4h': Client.KLINE_INTERVAL_4HOUR,\n '1d': Client.KLINE_INTERVAL_1DAY\n }\n interval = intervals.get(interval, '4h')\n# print(f'Historical interval {interval}')\n klines = client.get_historical_klines(symbol=ticker, interval=interval, start_str=start, end_str=end)\n data = pd.DataFrame(klines)\n data.columns = ['open_time','open', 'high', 'low', 'close', 'volume','close_time', 'qav','num_trades','taker_base_vol','taker_quote_vol', 'ignore']\n data.index = [pd.to_datetime(x, unit='ms').strftime('%Y-%m-%d %H:%M:%S') for x in data.open_time]\n usecols=['open', 'high', 'low', 'close', 'volume', 'qav','num_trades','taker_base_vol','taker_quote_vol']\n data = data[usecols]\n data = data.astype('float')\n return data\n\n\nclient = Client()\nexchange_info = client.get_exchange_info()\nsymbols=[s['symbol'] for s in exchange_info['symbols'] if s['status'] == 'TRADING']\nticker_list = symbols[:50]\n# tiker_list = np.random.choice(symbols, size=50)\nprint('Number of crypto pairs: ', len(symbols))\nprint('First 50 pairs: ', *ticker_list)\n\n# collect pair closes in one dataframe\ncoins = []\nfor ticker in tqdm(ticker_list):\n try:\n close_price = get_binance_data(ticker, interval='1d', start='1 Jan 2018', end='1 Jul 2022')['close'].to_dict()\n info = {'name': ticker}\n info.update(close_price)\n coins.append(info)\n except Exception as err:\n print(err)\n continue\n\ncoins = pd.DataFrame(coins)\n# print(coins.head())\ncoins.head()\n\nresult:\n\nif you need data of all pairs it is better use multithreading or asynchronous requests\n",
"you can get crypto symbol list from yahoo.finance only as \"coin-USD\"\nimport requests\nfrom requests_html import HTMLSession\nsession = HTMLSession()\nnum_currencies=250\nresp = session.get(f\"https://finance.yahoo.com/crypto?offset=0&count={num_currencies}\")\ntables = pd.read_html(resp.html.raw_html) \ndf = tables[0].copy()\nsymbols_yf = df.Symbol.tolist()\nprint(symbols_yf[:15])\nprint(df.head(5))\n\nresult:\n\n"
] | [
0,
0,
0,
0,
0
] | [] | [] | [
"cryptocurrency",
"python",
"yfinance"
] | stackoverflow_0067146805_cryptocurrency_python_yfinance.txt |
Q:
what is the fastest way to insert data into snowflake db table
I have multiple .csv.gz files (each greater than 10GB) that need to be parsed - multiple rows are read to create one row insertion. The approach I'm taking is as follows:
read .csv.gz file
save soon-to-be-inserted rows into a buffer
if there is enough data in the buffer, perform multirow insertion to database table
Now Snowflake limits the maximum number of expressions to 16,384. I've been running this for about a day, but the insertion speed is very slow. I am using sqlalchemy to do this:
url = "snowflake://<my snowflake url>"
engine = create_engine(url)
savedvalues = []
with pd.read_csv(datapath, header=0, chunksize=10**6) as reader:
for chunk in reader:
for index, row in chunk.iterrows():
"""
<parsing data>
"""
savedvalues.append(<parsed values>)
if(len(savedvalues) > 16384):
stmt = mytable.insert().values(savedvalues)
with engine.connect() as conn:
conn.execute(stmt)
savedvalues = []
Is there a faster way to insert data into snowflake database tables?
I'm looking into COPY INTO <table> operation but not sure if this is truly faster than what I'm doing right now.
Any suggestions would be much appreciated!
A:
Here is an article describing a Python multithreaded approach to bulk loading into Snowflake Zero to Snowflake: Multi-Threaded Bulk Loading with Python. Also note to optimize the number of parallel operations for a load, Snowflake recommends data files roughly 100-250 MB (or larger) in size compressed.
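For illustration, a rough sketch of what the staged-file route can look like with snowflake-connector-python; the connection parameters, table name, stage and file format here are placeholders rather than something taken from your code, and since COPY INTO loads the file as-is, the multi-row parsing would have to be done beforehand (into an intermediate file) or pushed into SQL:
import snowflake.connector

conn = snowflake.connector.connect(user="...", password="...", account="...",
                                   warehouse="...", database="...", schema="...")
cur = conn.cursor()
# upload the local file into the table's internal stage (already gzipped, so no extra compression)
cur.execute("PUT file:///path/to/parsed_data.csv.gz @%MYTABLE AUTO_COMPRESS=FALSE")
# bulk load it; Snowflake parallelises the load across the warehouse
cur.execute("""
    COPY INTO MYTABLE
    FROM @%MYTABLE
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1 FIELD_OPTIONALLY_ENCLOSED_BY = '"')
""")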
| what is the fastest way to insert data into snowflake db table | I have multiple .csv.gz files (each greater than 10GB) that need to be parsed - multiple rows are read to create one row insertion. The approach I'm taking is as follows:
read .csv.gz file
save soon-to-be-inserted rows into a buffer
if there is enough data in the buffer, perform multirow insertion to database table
Now snowflake limits maximum number of expressions to 16384. I've been running this for about a day but the speed at which it is inserting is very slow. I am using sqlalchemy to do this:
url = "snowflake://<my snowflake url>"
engine = create_engine(url)
savedvalues = []
with pd.read_csv(datapath, header=0, chunksize=10**6) as reader:
for chunk in reader:
for index, row in chunk.iterrows():
"""
<parsing data>
"""
savedvalues.append(<parsed values>)
if(len(savedvalues) > 16384):
stmt = mytable.insert().values(savedvalues)
with engine.connect() as conn:
conn.execute(stmt)
savedvalues = []
Is there a faster way to insert data into snowflake database tables?
I'm looking into COPY INTO <table> operation but not sure if this is truly faster than what I'm doing right now.
Any suggestions would be much appreciated!
| [
"Here is an article describing a Python multithreaded approach to bulk loading into Snowflake Zero to Snowflake: Multi-Threaded Bulk Loading with Python. Also note to optimize the number of parallel operations for a load, Snowflake recommends data files roughly 100-250 MB (or larger) in size compressed.\n"
] | [
0
] | [] | [] | [
"python",
"snowflake_cloud_data_platform"
] | stackoverflow_0074649452_python_snowflake_cloud_data_platform.txt |
Q:
how to efficiently and correctly overlay pngs taking into account transparency?
When I was trying to overlay one image over the other (one image had a transparent rounded-rectangle filling and the other was just a normal image), it looked either like this (just putting the yellow over the pink without taking the rounded corners into account at all) or like this (it looks just like the rounded rectangle without adding anything, and it even kept the transparency).
this is how it should look like:
here are the 2 example images: (pink.png) and (yellow.png)
here is the code used for this :
import cv2
import numpy as np
layer0 = cv2.imread(r'yellow.png', cv2.IMREAD_UNCHANGED)
h0, w0 = layer0.shape[:2]
layer4 = cv2.imread(r"pink.png", cv2.IMREAD_UNCHANGED)
#just a way to help the image look more transparent in the opencv imshow because imshow always ignores
# the transparency and pretends that the image has no alpha channel
for y in range(layer4.shape[0]):
for x in range(layer4.shape[1]):
if layer4[y,x][3]<255:
layer4[y,x][:] =0,0,0,0
# Create a new np array
shapes = np.zeros_like(layer4, np.uint8)
shapes = cv2.cvtColor(shapes, cv2.COLOR_BGR2BGRA)
#the start position of the yellow image on the pink
gridpos = (497,419)
shapes[gridpos[1]:gridpos[1]+h0, gridpos[0]:gridpos[0]+w0] = layer0
# Change this into bool to use it as mask
mask = shapes.astype(bool)
# We'll create a loop to change the alpha
# value i.e transparency of the overlay
for alpha in np.arange(0, 1.1, 0.1)[::-1]:
# Create a copy of the image to work with
bg_img = layer4.copy()
# Create the overlay
bg_img[mask] = cv2.addWeighted( bg_img,1-alpha, shapes, alpha, 0)[mask]
# print the alpha value on the image
cv2.putText(bg_img, f'Alpha: {round(alpha,1)}', (50, 200),
cv2.FONT_HERSHEY_PLAIN, 8, (200, 200, 200), 7)
# resize the image before displaying
bg_img = cv2.resize(bg_img, (700, 600))
cv2.imwrite("out.png", bg_img)
cv2.imshow('Final Overlay', bg_img)
cv2.waitKey(0)
you can test different alpha combinations by pressing a key on the keyboard
A:
It looks like you are setting the whole image as a mask; this is why the rounded corners have no effect at all from your pink background. I myself struggled a lot with this task as well and ended up using Pillow instead of OpenCV. I don't know whether it is more performant, but I got it running.
Here the code that works for your example:
from PIL import Image
# load images
background = Image.open(r"pink.png")
# load image and scale it to the same size as the background
foreground = Image.open(r"yellow.png").resize(background.size)
# split gives you the r, g, b and alpha channel of the image.
# For the mask we only need alpha channel, indexed at 3
mask = background.split()[3]
# we combine the two images and provide the mask that is applied to the foreground.
im = Image.composite(background, foreground, mask)
im.show()
If your background is not monochrome as in your example and you want to use a version where you paste your original image, you have to create an empty image with the same size as the background and then paste your foreground at the desired position (your gridpos), e.g. like this:
canvas = Image.new('RGBA', background.size)
canvas.paste(foreground, gridpos)
foreground = canvas
Hope this helps!
| how to efficiently and correctly overlay pngs taking into account transparency? | when i was trying to overlay one image over the other one image had a transparent rounded rectangle filling and the other was just a normal image it looked either like this ( just putting the yellow over the pink without taking into account the rounded corners at all) or like this (looks just like the rounded rectangle without adding anything even kept the transparency)
this is how it should look like:
here are the 2 example images: (pink.png) and (yellow.png)
here is the code used for this :
import cv2
import numpy as np
layer0 = cv2.imread(r'yellow.png', cv2.IMREAD_UNCHANGED)
h0, w0 = layer0.shape[:2]
layer4 = cv2.imread(r"pink.png", cv2.IMREAD_UNCHANGED)
#just a way to help the image look more transparent in the opencv imshow because imshow always ignores
# the transparency and pretends that the image has no alpha channel
for y in range(layer4.shape[0]):
for x in range(layer4.shape[1]):
if layer4[y,x][3]<255:
layer4[y,x][:] =0,0,0,0
# Create a new np array
shapes = np.zeros_like(layer4, np.uint8)
shapes = cv2.cvtColor(shapes, cv2.COLOR_BGR2BGRA)
#the start position of the yellow image on the pink
gridpos = (497,419)
shapes[gridpos[1]:gridpos[1]+h0, gridpos[0]:gridpos[0]+w0] = layer0
# Change this into bool to use it as mask
mask = shapes.astype(bool)
# We'll create a loop to change the alpha
# value i.e transparency of the overlay
for alpha in np.arange(0, 1.1, 0.1)[::-1]:
# Create a copy of the image to work with
bg_img = layer4.copy()
# Create the overlay
bg_img[mask] = cv2.addWeighted( bg_img,1-alpha, shapes, alpha, 0)[mask]
# print the alpha value on the image
cv2.putText(bg_img, f'Alpha: {round(alpha,1)}', (50, 200),
cv2.FONT_HERSHEY_PLAIN, 8, (200, 200, 200), 7)
# resize the image before displaying
bg_img = cv2.resize(bg_img, (700, 600))
cv2.imwrite("out.png", bg_img)
cv2.imshow('Final Overlay', bg_img)
cv2.waitKey(0)
you can test different alpha combinations by pressing a key on the keyboard
| [
"It looks like you are setting the whole image as a mask, this is why the rounded corners have no effect at all from your pink background. I myself was struggling a lot with this task aswell and ended up using pillow instead of OpenCV. I don't know if it is more performant, but I got it running.\nHere the code that works for your example:\nfrom PIL import Image\n\n# load images\nbackground = Image.open(r\"pink.png\")\n# load image and scale it to the same size as the background\nforeground = Image.open(r\"yellow.png\").resize(background.size)\n# split gives you the r, g, b and alpha channel of the image.\n# For the mask we only need alpha channel, indexed at 3\nmask = background.split()[3]\n\n# we combine the two images and provide the mask that is applied to the foreground.\nim = Image.composite(background, foreground, mask)\nim.show()\n\nIf your background is not monochrome as in your example, and you want to use the version, where you paste your original image, you have to create an empty image with the same size as the background, then paste your foreground to the position (your gridpos), e.g. like this:\ncanvas = Image.new('RGBA', background.size)\ncanvas.paste(foreground, gridpos)\nforeground = canvas\n\nHope this helps!\n"
] | [
1
] | [] | [] | [
"alpha_transparency",
"opencv",
"python"
] | stackoverflow_0074654663_alpha_transparency_opencv_python.txt |
Q:
Count nodes sharing row or column with at least one other node
I have a grid of nodes (represented by ones). I would like to quickly and simply (in a way that is both readable and fast) count the number of nodes that share a column or row with another node.
Here is my solution (can it be improved?):
grid=[[0,0,0,0],[1,1,1,1],[0,0,0,1],[0,0,1,1],[0,0,0,1]]
rowlen=len(grid)
collen=len(grid[0])
rd={i: 0 for i in range(rowlen)}
cd={i: 0 for i in range(collen)}
cl=[]
rl=[]
for (rowi, row) in enumerate(grid):
for (coli,x) in enumerate(row):
if x>0:
cd[coli]=cd[coli]+1
rd[rowi]=rd[rowi]+1
cl.append(coli)
rl.append(rowi)
coords=zip(rl,cl)
coordslist=list(coords)
bools=[int(bool(((rd[a]-1) or (cd[b]-1)))) for (a,b) in coordslist]
answer=sum(bools)
A:
Sometimes fast and readable are the same, but more often they depend on your input and wishes. For example, what you are doing is still considered fast for most people: less than a second for a grid of 2500x2500, or 6,250,000 entries.
Now for your question it is noticeable that if you know the row and column counts of 1's, you can quickly look up whether there is more than one 1 in a row or column.
Even though fast and readable are very personal, I will share with you my solutions to the problem.
Pure python
This solution uses only pure python that is available in the library.
import itertools
rows = [row.count(1) for row in grid]
cols = [col.count(1) for col in zip(*grid)]
n_rows, n_cols = len(grid), len(grid[0])
indices = itertools.product(range(n_rows), range(n_cols))
answer = sum(1 for irow, icol in indices if grid[irow][icol] == 1 and (rows[irow] > 1 or cols[icol] > 1))
For this I am using several small tricks or perks of the python language, which are the following:
row.count(1), which counts the number of times the element 1 occurs in a list.
zip(*grid), which is a reverse zipping, see this post from Mike Corcoran for more information.
itertools.product, which generates all indices for me (documentation)
sum(1 for each in val if <condition on each>), which is a generator expression.
For me, the above solution is very readable and concise. Now on the speed front, the solution is a bit faster than yours, but by how much, you might wonder? Well, if we go to a grid of 5000x5000, or 25,000,000 elements, the time difference is about 4 seconds (from 12s to 8s).
Well, that is kinda bad... All that trouble for 4 seconds on a grid size that would be considered enormous by most programmers and would probably only be used once. And if you have that many elements, you will probably not use a list anymore, but a numpy array (docs).
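Before moving on, a quick sanity check with the grid from your question; the pure python version should agree with your original code:
import itertools

grid = [[0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 1]]

rows = [row.count(1) for row in grid]
cols = [col.count(1) for col in zip(*grid)]
indices = itertools.product(range(len(grid)), range(len(grid[0])))

answer = sum(1 for irow, icol in indices if grid[irow][icol] == 1 and (rows[irow] > 1 or cols[icol] > 1))
print(answer)  # 8 -- every 1 in this grid shares its row or column with another 1
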
Pure Numpy
So how would the solution look if it were a numpy array?
import numpy as np
# Convert grid from `list` to a `np.ndarray`
arr = np.array(grid)
# Get to row and column totals
cols = arr.sum(axis=0)
rows = arr.sum(axis=1)
# Get all indices that are `1`, and combine the x and y coordinates by zip.
indices = zip(*np.where(arr == 1))
answer = sum(1 for irow, icol in indices if rows[irow] > 1 or cols[icol] > 1)
The above is even more readable than before; you just need to know a few numpy conventions and methods. Namely, that numpy uses row, col for indexing, instead of the traditional (col, row) or x, y indexing. Therefore axis=0 sums over the rows, which gives you the column totals. And np.where returns the indices where the condition inside the brackets is True.
Now this solution is slightly slower than the pure python solution, because it has to convert the list to an array. But if your data were already a numpy array, your solution and the pure python solution would take almost twice as long as the numpy solution.
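The same check for the numpy version:
import numpy as np

grid = [[0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 1]]
arr = np.array(grid)

cols = arr.sum(axis=0)
rows = arr.sum(axis=1)

indices = zip(*np.where(arr == 1))
answer = sum(1 for irow, icol in indices if rows[irow] > 1 or cols[icol] > 1)
print(answer)  # 8, matching the pure python version
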
Conclusion
Above are two alternative solutions to the same problem, but their readability and speed depend on your knowledge of the language and your needs.
If the grid is not going to be much larger than a 1000 elements, all solutions are equally quick: instantaneous from a user perspective.
And the readability is as good as your skills are.
| Count nodes sharing row or column with at least one other node | I have a grid of nodes (represented by ones). I would like to quickly and simply (in a way that is both readable and fast) count the number of nodes that share a column or row with another node.
Here is my solution (can it be improved?):
grid=[[0,0,0,0],[1,1,1,1],[0,0,0,1],[0,0,1,1],[0,0,0,1]]
rowlen=len(grid)
collen=len(grid[0])
rd={i: 0 for i in range(rowlen)}
cd={i: 0 for i in range(collen)}
cl=[]
rl=[]
for (rowi, row) in enumerate(grid):
for (coli,x) in enumerate(row):
if x>0:
cd[coli]=cd[coli]+1
rd[rowi]=rd[rowi]+1
cl.append(coli)
rl.append(rowi)
coords=zip(rl,cl)
coordslist=list(coords)
bools=[int(bool(((rd[a]-1) or (cd[b]-1)))) for (a,b) in coordslist]
answer=sum(bools)
| [
"Sometimes fast and readable are the same, but more often they depend on your input and wishes. For example, what you are doing is still considered fast for most people, less than a second for a grid of 2500x2500 or 6.250.000 entries.\nNow for your question it is noticable, that if you know the row and column counts of 1's, you can quickly look up if there are more than one 1 in a row or column.\nEventhough fast and readable are very personal, I will share with you my solutions to the problem.\nPure python\nThis solution uses only pure python that is available in the library.\nimport itertools\n\nrows = [row.count(1) for row in grid]\ncols = [col.count(1) for col in zip(*grid)]\n\nwidth, height = len(grid), len(grid[0])\nindices = itertools.product(range(width), range(height))\n\nanswer = sum(1 for irow, icol in indices if grid[irow][icol] == 1 and (rows[irow] > 1 or cols[icol] > 1))\n\nFor this I am using several small tricks or perks of the python language, which are the following:\n\nrow.count(1), which counts the number of times the element 1 occurs in a list.\nzip(*grid), which is a reverse zipping, see this post from Mike Corcoran for more information.\nitertools.product, which generates all indices for me (documentation)\nsum(1 for each in val if <condition on each>), which is a generator expression.\n\nFor me the above solution, is considered very readable and concise. Now on the speed front, the solution is a bit faster than yours, but how much you might wonder? Well if we go to a grid of 5000x5000 or 25.000.000 elements, the time difference is about 4 seconds (from 12s to 8s).\nWell that is kinda bad... All that trouble for 4 seconds on something that would be considered enormous for most programmers, and probably only be used once. And if you have that many elements, you will probably not use a list anymore, but a numpy array (docs).\nPure Numpy\nSo how would the solution look like if it was a numpy array?\nimport numpy as np\n\n# Convert grid from `list` to a `np.ndarray`\narr = np.array(grid)\n\n# Get to row and column totals\ncols = arr.sum(axis=0)\nrows = arr.sum(axis=1)\n\n# Get all indices that are `1`, and combine the x and y coordinates by zip.\nindices = zip(*np.where(arr == 1))\nanswer = sum(1 for irow, icol in indices if rows[irow] > 1 or cols[icol] > 1)\n\nThe above is even more readable than before, you just need to know a few numpy convetions and methods. Namely, that numpy uses row, col for indexing, instead of the traditional (col, row) or x, y indexing. Therefore axis=0, loops over the rows, which give you the column totals. And that np.where returns the indices of where the condition inside the brackets is True.\nNow the solution is slightly slower than the pure python solution, because it has to convert the list to an array, but if your data was already a numpy array, your solution and the pure python solution would take almost twice the amount of time compared to the numpy solution.\nConclusion\nAbove are two different alternative solutions that would solve the same problem, but the readability and speed are depending on your knowledge of the language and your needs.\nIf the grid is not going to be much langer than a 1000 elements, all solutions are equally quick, namely instantaneously from a user perspective.\nAnd the readability is as good as your skills are.\n"
] | [
1
] | [] | [] | [
"hash",
"list",
"optimization",
"python",
"python_3.x"
] | stackoverflow_0074647766_hash_list_optimization_python_python_3.x.txt |
Q:
What is a "Usage Error: --settings |--slot-settings" in the azure function CLI when setting the config?
I was able to create a function app successfully through the cli but when I go to publish/create the function code, I get the error in the title.
I will say I'm following this documentation to create a blob trigger function with prefect so that's why I have the two more config settings in my code - https://discourse.prefect.io/t/how-to-create-azure-blob-storage-event-driven-prefect-2-flows-with-azure-functions/1479
az functionapp config appsettings set `
--name PrefectBlobTrigger `
--resource-group my_resource_group`
--settings "BlobConnectionString=my_blob_connection_str PREFECT_API_URL=my_prefect_api_url PREFECT_API_KEY=my_prefect_api_key "
Usage Error: --settings |--slot-settings
Is it an issue in the way i'm formatting the script?
A:
You're spot on, it looks like just a formatting issue. Can you remove any back quotes and make everything into a single line?
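For example, something like this on one line should work (a sketch reusing the placeholder values from your command; note that each setting is passed as its own KEY=VALUE argument after --settings):
az functionapp config appsettings set --name PrefectBlobTrigger --resource-group my_resource_group --settings "BlobConnectionString=my_blob_connection_str" "PREFECT_API_URL=my_prefect_api_url" "PREFECT_API_KEY=my_prefect_api_key"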
| What is a "Usage Error: --settings |--slot-settings" in the azure function CLI when setting the config? | I was able to create a function app successfully through the cli but when I go to publish/create the function code, I get the error in the title.
I will say I'm following this documentation to create a blob trigger function with prefect so that's why I have the two more config settings in my code - https://discourse.prefect.io/t/how-to-create-azure-blob-storage-event-driven-prefect-2-flows-with-azure-functions/1479
az functionapp config appsettings set `
--name PrefectBlobTrigger `
--resource-group my_resource_group`
--settings "BlobConnectionString=my_blob_connection_str PREFECT_API_URL=my_prefect_api_url PREFECT_API_KEY=my_prefect_api_key "
Usage Error: --settings |--slot-settings
Is it an issue in the way i'm formatting the script?
| [
"You're spot on, it looks like just a formatting issue. Can you remove any back quotes and make everything into a single line?\n"
] | [
0
] | [] | [] | [
"azure_functions",
"prefect",
"python"
] | stackoverflow_0074647958_azure_functions_prefect_python.txt |
Q:
Scatter plot - how to do it
I would like to reproduce this plot in Python: (https://i.stack.imgur.com/6CRfn.png)
Any idea how to do this?
I tried to do a normal plt.scatter(), but I can't get the axes drawn through zero, for example.
A:
That's a very general question... Using plt.scatter() is certainly a good option. Then just add the two lines to the plot (e.g. using axhline and axvline).
Slightly adapting this example:
import numpy as np
import matplotlib.pyplot as plt
# don't show right and top axis
import matplotlib as mpl
mpl.rcParams['axes.spines.right'] = False
mpl.rcParams['axes.spines.top'] = False
# some random data
N = 50
x = np.random.randint(-10, high=11, size=N, dtype=int)
y = np.random.randint(-10, high=11, size=N, dtype=int)
colors = np.random.rand(N)
area = (30 * np.random.rand(N))**2 # 0 to 15 point radii
# creating a vertical and a horizontal line
plt.axvline(x=0, color='grey', alpha=0.75, linestyle='-')
plt.axhline(y=0, color='grey', alpha=0.75, linestyle='-')
# scatter plot
plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.show()
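If you want the spines themselves drawn through the origin instead of adding extra lines, matplotlib can also move them (a small sketch of that alternative):
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# move the left and bottom spines to x=0 and y=0 and hide the other two
ax.spines['left'].set_position('zero')
ax.spines['bottom'].set_position('zero')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)

ax.scatter([1, -2, 3, -4], [2, -1, -3, 4])
plt.show()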
| Scatter plot - how to do it | I would like to reproduce this plot in Python: (https://i.stack.imgur.com/6CRfn.png)
Any idea how to do this?
I tried to do a normal plt.scatter() but I can't draw this axes on the zero, for example.
| [
"That's a very general question... Using plt.scatter() is certainly a good option. Then just add the two lines to the plot (e.g. using axhline and axvline).\nSlightly adapting this example:\nimport numpy as np\nimport matplotlib.pyplot as plt\n# don't show right and top axis[![enter image description here][1]][1]\nimport matplotlib as mpl\nmpl.rcParams['axes.spines.right'] = False\nmpl.rcParams['axes.spines.top'] = False\n\n# some random data\nN = 50\nx = np.random.randint(-10, high=11, size=N, dtype=int)\ny = np.random.randint(-10, high=11, size=N, dtype=int)\ncolors = np.random.rand(N)\narea = (30 * np.random.rand(N))**2 # 0 to 15 point radii\n\n# creating a vertical and a horizontal line\nplt.axvline(x=0, color='grey', alpha=0.75, linestyle='-')\nplt.axhline(y=0, color='grey', alpha=0.75, linestyle='-')\n# scatter plot\nplt.scatter(x, y, s=area, c=colors, alpha=0.5)\n\nplt.show()\n\n\n"
] | [
0
] | [] | [] | [
"plot",
"python",
"scatter_plot"
] | stackoverflow_0074650089_plot_python_scatter_plot.txt |
Q:
Adding subscription address to chainlink vrf v2 inside of the contract
So I'm writing this lottery smart contract, which is pretty straightforward, and since I want to test this on the goerli test net, I want to be able to add the contract as a subscriber to my VRF every time it's deployed.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;
import "node_modules/@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol";
import "node_modules/@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol";
import "node_modules/@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol";
contract Lottery is VRFConsumerBaseV2 {
VRFCoordinatorV2Interface COORDINATOR;
address owner;
address[] buyers = [msg.sender];
address winner;
address vrfCoordinator = 0x2Ca8E0C643bDe4C2E08ab1fA0da3401AdAD7734D;
bytes32 keyHash;
uint32 callbackGasLimit = 5000000;
uint16 requestConfirmations = 3;
uint32 numWords = 1;
uint256[] public randomWords;
uint256 public requestId;
uint64 subscriptionId;
uint256 usdEntryFee;
uint256 startingTime;
uint256 endTime;
uint256 prizePool;
bool available;
AggregatorV3Interface public priceFeed;
modifier onlyOwner() {
require(msg.sender == owner);
_;
}
function enterLottery() public payable returns (bool success) {
buyers.push(msg.sender);
}
function getEntranceFee() public view returns (uint256) {
(, int256 price, , , ) = priceFeed.latestRoundData();
uint256 adjustedPrice = uint256(price) * 10**10; // 18 decimals
// $50, $2,000 / ETH
// 50/2,000
// 50 * 100000 / 2000
uint256 costToEnter = (usdEntryFee * 10**18) / adjustedPrice;
return costToEnter;
}
constructor(address _priceFeed, uint64 _subscriptionId)
VRFConsumerBaseV2(vrfCoordinator)
{
priceFeed = AggregatorV3Interface(_priceFeed);
COORDINATOR = VRFCoordinatorV2Interface(vrfCoordinator);
owner = msg.sender;
usdEntryFee = 50 * (10**18);
subscriptionId = _subscriptionId;
}
function requestRandomWords() public onlyOwner {
requestId = COORDINATOR.requestRandomWords(
keyHash,
subscriptionId,
requestConfirmations,
callbackGasLimit,
numWords
);
}
function addSubscription() public {
COORDINATOR.addConsumer(subscriptionId, address(this));
}
function fulfillRandomWords(
uint256, // requestId
uint256[] memory _randomWords
) internal override {
randomWords = _randomWords;
}
function startLottery(uint256 _endTime) public onlyOwner {
startingTime = block.timestamp;
available = true;
endTime = startingTime + _endTime;
}
function endLottery() public payable onlyOwner returns (uint256) {
require(
block.timestamp > endTime,
"Auction ending date not arrived yet!"
);
available = false;
uint256 randomIndex;
requestRandomWords();
randomIndex = randomWords[0] % buyers.length;
winner = buyers[randomIndex];
return (randomWords[0]);
}
function transferWinnings() public payable onlyOwner {
if (!payable(winner).send(prizePool)) {
revert("Transaction failed!");
}
}
}
I'm using brownie framework to deploy and test, and this is the test function I'm using.
from brownie import Lottery, accounts, config, network
from scripts.helpful_scripts import get_account
from web3 import Web3
def test_random_number():
account = get_account()
lottery = Lottery.deploy(config["networks"][network.show_active()]["eth_usd_price_feed"], 1563, {"from": account})
lottery.addSubscription({"from": account})
lottery.requestRandomWords({"from": account})
assert lottery.randomWords[0] != 0
I'm using the VRF admin wallet to do all of this but still it's not adding the contract to my subscriptions.
Also if anyone is familiar with how I could use the Chainlink VRF V2 mocks, any help would be appreciated.
A:
Using "subscriber" and "consumer" synonymously in regards to the contract using the subscription. Also, VRF v2 Mock can be found at Chainlink GitHub
You can add the contract as a subscriber upon deployment in two ways:
set the contract as a subscriber within the .py file you're using for deployment
initialize the contract as a subscriber within the .sol contract itself
In both methods you will need to do two things in the following order: create a subscription, and add the contract as a consumer to that subscription. Your code only adds a consumer but does not actually create a subscription.
1. Set contract as subscriber within the .py file:
source: Medium article
.py deployment file: --> assuming deploying mock
import time

# brownie exposes the local accounts and the project's compiled contracts as importable names
from brownie import accounts, VRFCoordinatorV2Mock, ExampleContract


def deploy_VRFTest():
    account = accounts[0]
    # deploy mock VRF coordinator ####################################################
    # VRFCoordinatorV2Mock.deploy(_baseFee, _gasPriceLink, {"from": account})
    vrf_contract = VRFCoordinatorV2Mock.deploy(25 * 10e15, 10e9, {"from": account})
    # create subscription & get subscription ID #######################################
    sub_id_tx = vrf_contract.createSubscription({"from": account})
    # add these waits so Web3 doesn't freak out and error, can also use time.sleep()
    sub_id_tx.wait(1)
    sub_id = sub_id_tx.events["SubscriptionCreated"]["subId"]  # get subscription ID
    # fund subscription ################################################################
    # must be greater than set _baseFee
    fund_amount_link = 30 * 10e15
    fund_vrf_tx = vrf_contract.fundSubscription(
        sub_id, fund_amount_link, {"from": account}
    )
    # request random words #############################################################
    # goerli key_hash, can use any for development
    key_hash = "0x79d3d8832d904592c0bf9818b621522c988bb8b0c05cdc3b15aea1b6e8db0c15"
    # blocks to wait before confirming random words (can be any #)
    min_request_confirm = 3
    gas_lim = 10e6  # 10e5 is recommended, added extra to be sure
    num_words = 1  # number of random words
    request_tx = vrf_contract.requestRandomWords(
        key_hash, sub_id, min_request_confirm, gas_lim, num_words, {"from": account}
    )
    request_tx.wait(1)
    request_id = request_tx.events["RandomWordsRequested"]["requestId"]
    # fulfill request ####################################################################
    # deploy "ExampleContract.sol"
    contract = ExampleContract.deploy(vrf_contract.address, {"from": account})
    # fulfill random words ###############################################################
    # vrf_contract.fulfillRandomWords(_requestId, _consumer, {"from": account})
    fulfill_tx = vrf_contract.fulfillRandomWords(
        request_id, contract.address, {"from": account}
    )
    fulfill_tx.wait(1)
    # return the random number ###########################################################
    success = fulfill_tx.events["RandomWordsFulfilled"]["success"]
    if success:
        random_word = contract.s_randomWords(0)
        print(f"random number is {random_word}")
    time.sleep(10)
.sol contract file:
pragma solidity >=0.6.0 <0.9.0;
import "@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol";
import "@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol";
contract ExampleContract is VRFConsumerBaseV2 {
constructor(address _VRFCoordinator) VRFConsumerBaseV2(_VRFCoordinator) {}
uint256[] public s_randomWords;
function fulfillRandomWords(uint256, uint256[] memory randomWords)
internal
override
{
s_randomWords = randomWords;
}
}
2. Set contract as subscriber within the .sol file:
source: Chainlink docs
the actual source has a few other functions but I removed them to condense
.sol file:
notice how the constructor for this file calls the createNewSubscription() function, which not only creates a new subscription but also adds the contract as a consumer to the subscription
pragma solidity ^0.8.7;
import "@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol";
import "@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol";
contract VRFTest is VRFConsumerBaseV2 {
VRFCoordinatorV2Interface COORDINATOR;
// Goerli coordinator
address vrfCoordinator = 0x2Ca8E0C643bDe4C2E08ab1fA0da3401AdAD7734D;
// The goerli gas lane to use, which specifies the maximum gas price to bump to.
bytes32 keyHash =
0x79d3d8832d904592c0bf9818b621522c988bb8b0c05cdc3b15aea1b6e8db0c15;
uint32 callbackGasLimit = 100000;
uint16 requestConfirmations = 3;
uint32 numWords = 2;
// Storage parameters
uint256[] public s_randomWords;
uint256 public s_requestId;
uint64 public s_subscriptionId;
address s_owner;
constructor() VRFConsumerBaseV2(vrfCoordinator) {
COORDINATOR = VRFCoordinatorV2Interface(vrfCoordinator);
s_owner = msg.sender;
//Create a new subscription when you deploy the contract.
createNewSubscription();
}
// Assumes the subscription is funded sufficiently.
function requestRandomWords() external onlyOwner {
// Will revert if subscription is not set and funded.
s_requestId = COORDINATOR.requestRandomWords(
keyHash,
s_subscriptionId,
requestConfirmations,
callbackGasLimit,
numWords
);
}
function fulfillRandomWords(
uint256, /* requestId */
uint256[] memory randomWords
) internal override {
s_randomWords = randomWords;
}
// Create a new subscription when the contract is initially deployed.
function createNewSubscription() private onlyOwner {
s_subscriptionId = COORDINATOR.createSubscription();
// Add this contract as a consumer of its own subscription.
COORDINATOR.addConsumer(s_subscriptionId, address(this));
}
function addConsumer(address consumerAddress) external onlyOwner {
// Add a consumer contract to the subscription.
COORDINATOR.addConsumer(s_subscriptionId, consumerAddress);
}
modifier onlyOwner() {
require(msg.sender == s_owner);
_;
}
}
| Adding subscription address to chainlink vrf v2 inside of the contract | So I'm writing this lottery smart contract which is pretty straight forward, and since I want to test this on the goerli test net, I want to be able to add the contract as a subscriber to my VRF every time it's deployed.
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.7;
import "node_modules/@chainlink/contracts/src/v0.8/interfaces/AggregatorV3Interface.sol";
import "node_modules/@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol";
import "node_modules/@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol";
contract Lottery is VRFConsumerBaseV2 {
VRFCoordinatorV2Interface COORDINATOR;
address owner;
address[] buyers = [msg.sender];
address winner;
address vrfCoordinator = 0x2Ca8E0C643bDe4C2E08ab1fA0da3401AdAD7734D;
bytes32 keyHash;
uint32 callbackGasLimit = 5000000;
uint16 requestConfirmations = 3;
uint32 numWords = 1;
uint256[] public randomWords;
uint256 public requestId;
uint64 subscriptionId;
uint256 usdEntryFee;
uint256 startingTime;
uint256 endTime;
uint256 prizePool;
bool available;
AggregatorV3Interface public priceFeed;
modifier onlyOwner() {
require(msg.sender == owner);
_;
}
function enterLottery() public payable returns (bool success) {
buyers.push(msg.sender);
}
function getEntranceFee() public view returns (uint256) {
(, int256 price, , , ) = priceFeed.latestRoundData();
uint256 adjustedPrice = uint256(price) * 10**10; // 18 decimals
// $50, $2,000 / ETH
// 50/2,000
// 50 * 100000 / 2000
uint256 costToEnter = (usdEntryFee * 10**18) / adjustedPrice;
return costToEnter;
}
constructor(address _priceFeed, uint64 _subscriptionId)
VRFConsumerBaseV2(vrfCoordinator)
{
priceFeed = AggregatorV3Interface(_priceFeed);
COORDINATOR = VRFCoordinatorV2Interface(vrfCoordinator);
owner = msg.sender;
usdEntryFee = 50 * (10**18);
subscriptionId = _subscriptionId;
}
function requestRandomWords() public onlyOwner {
requestId = COORDINATOR.requestRandomWords(
keyHash,
subscriptionId,
requestConfirmations,
callbackGasLimit,
numWords
);
}
function addSubscription() public {
COORDINATOR.addConsumer(subscriptionId, address(this));
}
function fulfillRandomWords(
uint256, // requestId
uint256[] memory _randomWords
) internal override {
randomWords = _randomWords;
}
function startLottery(uint256 _endTime) public onlyOwner {
startingTime = block.timestamp;
available = true;
endTime = startingTime + _endTime;
}
function endLottery() public payable onlyOwner returns (uint256) {
require(
block.timestamp > endTime,
"Auction ending date not arrived yet!"
);
available = false;
uint256 randomIndex;
requestRandomWords();
randomIndex = randomWords[0] % buyers.length;
winner = buyers[randomIndex];
return (randomWords[0]);
}
function transferWinnings() public payable onlyOwner {
if (!payable(winner).send(prizePool)) {
revert("Transaction failed!");
}
}
}
I'm using brownie framework to deploy and test, and this is the test function I'm using.
from brownie import Lottery, accounts, config, network
from scripts.helpful_scripts import get_account
from web3 import Web3
def test_random_number():
account = get_account()
lottery = Lottery.deploy(config["networks"][network.show_active()]["eth_usd_price_feed"], 1563, {"from": account})
lottery.addSubscription({"from": account})
lottery.requestRandomWords({"from": account})
assert lottery.randomWords[0] != 0
I'm using the VRF admin wallet to do all of this but still it's not adding the contract to my subscriptions.
Also if anyone is familiar with how I could use the Chainlink VRF V2 mocks, any help would be appreciated.
| [
"Using \"subscriber\" and \"consumer\" synonymously in regards to the contract using the subscription. Also, VRF v2 Mock can be found at Chainlink GitHub\nYou can add the contract as a subscriber upon deployment in two ways:\n\nset the contract as a subscriber within the .py file you're using for deployment\ninitialize the contract as a subscriber within the .sol contract itself\n\nIn both methods you will need to do two things in the following order: create a subscription, and add the contract as a consumer to that subscription. Your code only adds a consumer but does not actually create a subscription.\n1. Set contract as subscriber within the .py file:\n\nsource: Medium article\n.py deployment file: --> assuming deploying mock\n\ndef deploy_VRFTest():\naccount = accounts[0]\n# deploy mock VRF coordinator ####################################################\n# VRFCoordinatorV2Mock.deploy(_baseFee, _gasPriceLink, {\"from\": account})\nvrf_contract = VRFCoordinatorV2Mock.deploy(25 * 10e15, 10e9, {\"from\": account})\n# create subscription & get subscription ID #######################################\nsub_id_tx = vrf_contract.createSubscription({\"from\": account})\n# add these waits so Web3 doesn't freak out and error, can also use time.sleep()\nsub_id_tx.wait(1)\nsub_id = sub_id_tx.events[\"SubscriptionCreated\"][\"subId\"] # get subscription ID\n# fund subscription ################################################################\n# must be greater than set _baseFee\nfund_amount_link = 30 * 10e15\nfund_vrf_tx = vrf_contract.fundSubscription(\n sub_id, fund_amount_link, {\"from\": account}\n)\n# request random words #############################################################\n# goerli key_hash, can use any for development\nkey_hash = \"0x79d3d8832d904592c0bf9818b621522c988bb8b0c05cdc3b15aea1b6e8db0c15\"\n# blocks to wait before confirming random words (can be any #)\nmin_request_confirm = 3\ngas_lim = 10e6 # 10e5 is recommended, added extra to be sure\nnum_words = 1 # number of random words\nrequest_tx = vrf_contract.requestRandomWords(\n key_hash, sub_id, min_request_confirm, gas_lim, num_words, {\"from\": account}\n)\nrequest_tx.wait(1)\nrequest_id = request_tx.events[\"RandomWordsRequested\"][\"requestId\"]\n# fulfill request ####################################################################\n# deploy \"ExampleContract.sol\"\ncontract = ExampleContract.deploy(vrf_contract.address, {\"from\": account})\n# fulfill random words ###############################################################\n# vrf_contract.fulfillRandomWords(_requestId, _consumer, {\"from\": account})\nfulfill_tx = vrf_contract.fulfillRandomWords(\n request_id, contract.address, {\"from\": account}\n)\nfulfill_tx.wait(1)\n# return the random number ###########################################################\nsuccess = fulfill_tx.events[\"RandomWordsFulfilled\"][\"success\"]\nif success:\n random_word = contract.s_randomWords(0)\n print(f\"random number is {random_word}\")\ntime.sleep(10)\n\n\n.sol contract file:\npragma solidity >=0.6.0 <0.9.0;\nimport \"@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol\";\nimport \"@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol\";\n contract ExampleContract is VRFConsumerBaseV2 {\n constructor(address _VRFCoordinator) VRFConsumerBaseV2(_VRFCoordinator) {}\n uint256[] public s_randomWords;\n function fulfillRandomWords(uint256, uint256[] memory randomWords)\n internal\n override\n {\n s_randomWords = randomWords;\n }\n }\n\n\n\n2. 
Set contract as subscriber within the .sol file:\n\nsource: Chainlink docs\n\nthe actual source has a few other functions but I removed them to codense\n\n.sol file:\n\nnotice how the constructor for this file calls the createNewSubcription() function which not only creates a new subscription, but also adds the contract as a consumer to the subscription\npragma solidity ^0.8.7;\nimport \"@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol\";\nimport \"@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol\";\n contract VRFTest is VRFConsumerBaseV2 {\n VRFCoordinatorV2Interface COORDINATOR;\n\n // Goerli coordinator\n address vrfCoordinator = 0x2Ca8E0C643bDe4C2E08ab1fA0da3401AdAD7734D;\n // The goerli gas lane to use, which specifies the maximum gas price to bump to.\n bytes32 keyHash =\n 0x79d3d8832d904592c0bf9818b621522c988bb8b0c05cdc3b15aea1b6e8db0c15;\n uint32 callbackGasLimit = 100000;\n uint16 requestConfirmations = 3;\n uint32 numWords = 2;\n\n // Storage parameters\n uint256[] public s_randomWords;\n uint256 public s_requestId;\n uint64 public s_subscriptionId;\n address s_owner;\n\n constructor() VRFConsumerBaseV2(vrfCoordinator) {\n COORDINATOR = VRFCoordinatorV2Interface(vrfCoordinator);\n s_owner = msg.sender;\n //Create a new subscription when you deploy the contract.\n createNewSubscription();\n }\n\n // Assumes the subscription is funded sufficiently.\n function requestRandomWords() external onlyOwner {\n // Will revert if subscription is not set and funded.\n s_requestId = COORDINATOR.requestRandomWords(\n keyHash,\n s_subscriptionId,\n requestConfirmations,\n callbackGasLimit,\n numWords\n );\n }\n\n function fulfillRandomWords(\n uint256, /* requestId */\n uint256[] memory randomWords\n ) internal override {\n s_randomWords = randomWords;\n }\n\n // Create a new subscription when the contract is initially deployed.\n function createNewSubscription() private onlyOwner {\n s_subscriptionId = COORDINATOR.createSubscription();\n // Add this contract as a consumer of its own subscription.\n COORDINATOR.addConsumer(s_subscriptionId, address(this));\n }\n\n function addConsumer(address consumerAddress) external onlyOwner {\n // Add a consumer contract to the subscription.\n COORDINATOR.addConsumer(s_subscriptionId, consumerAddress);\n }\n\n modifier onlyOwner() {\n require(msg.sender == s_owner);\n _;\n }\n }\n\n\n\n"
] | [
0
] | [] | [] | [
"brownie",
"chainlink",
"python",
"solidity"
] | stackoverflow_0073788538_brownie_chainlink_python_solidity.txt |
Q:
AttributeError: 'str' object has no attribute 'numpy'
My command
Windows 11 PowerShell.
!pip install tensorflow-datasets
pip install tensorflow-datasets
# pip install tfds-nightly
import tensorflow_datasets as tfds
datasets = tfds.load("imdb_reviews")
train_set = tfds.load("imdb_reviews") # 25.000 reviews.
test_set = datasets["test"] # 25.000 reviews.
train_set, test_set = tfds.load("imdb_reviews", split=["train", "test"])
train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test"])
train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"])
train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]", "test[60%:]"])
train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]", "test[60%:]"], as_supervised = True)
for review, label in train_set.take(2):
print(review.numpy().decode("utf-8"))
print(label.numpy())
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
PS C:\Users\donhu> pip install tensorflow-datasets
Collecting tensorflow-datasets
Downloading tensorflow_datasets-4.7.0-py3-none-any.whl (4.7 MB)
|████████████████████████████████| 4.7 MB 2.2 MB/s
Requirement already satisfied: protobuf>=3.12.2 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (3.19.6)
Collecting tensorflow-metadata
Downloading tensorflow_metadata-1.11.0-py3-none-any.whl (52 kB)
|████████████████████████████████| 52 kB ...
Requirement already satisfied: numpy in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (1.23.4)
Requirement already satisfied: requests>=2.19.0 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (2.28.1)
Requirement already satisfied: six in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (1.16.0)
Requirement already satisfied: termcolor in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (2.1.0)
Collecting etils[epath]
Downloading etils-0.9.0-py3-none-any.whl (140 kB)
|████████████████████████████████| 140 kB ...
Collecting promise
Downloading promise-2.3.tar.gz (19 kB)
Requirement already satisfied: tqdm in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (4.64.1)
Collecting toml
Downloading toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting dill
Downloading dill-0.3.6-py3-none-any.whl (110 kB)
|████████████████████████████████| 110 kB 6.4 MB/s
Requirement already satisfied: absl-py in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (1.3.0)
Collecting googleapis-common-protos<2,>=1.52.0
Downloading googleapis_common_protos-1.57.0-py2.py3-none-any.whl (217 kB)
|████████████████████████████████| 217 kB 6.4 MB/s
Requirement already satisfied: certifi>=2017.4.17 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (2022.9.24)
Requirement already satisfied: charset-normalizer<3,>=2 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (1.26.12)Requirement already satisfied: typing_extensions; extra == "epath" in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from etils[epath]->tensorflow-datasets) (4.4.0)
Collecting importlib_resources; extra == "epath"
Downloading importlib_resources-5.10.0-py3-none-any.whl (34 kB)
Requirement already satisfied: zipp; extra == "epath" in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from etils[epath]->tensorflow-datasets) (3.10.0)
Requirement already satisfied: colorama; platform_system == "Windows" in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tqdm->tensorflow-datasets) (0.4.6)
Building wheels for collected packages: promise
Building wheel for promise (setup.py) ... done
Created wheel for promise: filename=promise-2.3-py3-none-any.whl size=21554 sha256=8d6db1312d74403cffbe332a56a0caeb292ff65701f5c958a2c836715275b299
Stored in directory: c:\users\donhu\appdata\local\pip\cache\wheels\e1\e8\83\ddea66100678d139b14bc87692ece57c6a2a937956d2532608
Successfully built promise
Installing collected packages: googleapis-common-protos, tensorflow-metadata, importlib-resources, etils, promise, toml, dill, tensorflow-datasets
Successfully installed dill-0.3.6 etils-0.9.0 googleapis-common-protos-1.57.0 importlib-resources-5.10.0 promise-2.3 tensorflow-datasets-4.7.0 tensorflow-metadata-1.11.0 toml-0.10.2
WARNING: You are using pip version 20.2.3; however, version 22.3.1 is available.
You should consider upgrading via the 'c:\users\donhu\appdata\local\programs\python\python39\python.exe -m pip install --upgrade pip' command.
PS C:\Users\donhu> python
Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow_datasets as tfds
>>> datasets = tfds.load("imdb_reviews")
Downloading and preparing dataset Unknown size (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\donhu\tensorflow_datasets\imdb_reviews\plain_text\1.0.0...
Dl Completed...: 0%| | 0/1 [00:11<?, ? url/s]
Dl Size...: 26%|███████████████████████████▌ | 21/80 [00:11<00:25, 2.27 MiB/s]
Dl Completed...: 0%| | 0/1 [00:12<?, ? url/s]
Dl Size...: 28%|████████████████████████████▉
Dl Completed...: 0%| | 0/1 [00:12<?, ? url/s]
Dl Size...: 29%|██████████████████████████████▏
Dl Completed...: 0%| | 0/1 [00:13<?, ? url/s]
Dl Size...: 30%|███████████████████████████████▌
Dl Completed...: 0%| | 0/1 [00:13<?, ? url/s]
Dl Size...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 80/80 [00:51<00:00, 1.55 MiB/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:51<00:00, 51.53s/ url]
Generating splits...: 0%| | 0/3 [00:00<?, ? splits/s]
Generating train examples...: 7479 examples [00:02, 6153.86 examples/s]
Dataset imdb_reviews downloaded and prepared to C:\Users\donhu\tensorflow_datasets\imdb_reviews\plain_text\1.0.0. Subsequent calls will reuse this data.
2022-12-01 19:48:16.580270: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-01 19:48:17.465275: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3994 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5
>>> train_set = tfds.load("imdb_reviews")
>>>
>>> test_set = datasets["test"]
>>> train_set, test_set = tfds.load("imdb_reviews", split=["train", "test"])
>>> for review, label in train_set.take(2):
... print(review.numpy().decode("utf-8"))
File "<stdin>", line 2
print(review.numpy().decode("utf-8"))
^
IndentationError: expected an indented block
>>> print(review.numpy().decode("utf-8"))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'review' is not defined
>>> Windows 11 PowerShell.
File "<stdin>", line 1
Windows 11 PowerShell.
IndentationError: unexpected indent
>>>
>>> !pip install tensorflow-datasets
File "<stdin>", line 1
!pip install tensorflow-datasets
^
SyntaxError: invalid syntax
>>> pip install tensorflow-datasets
File "<stdin>", line 1
pip install tensorflow-datasets
^
SyntaxError: invalid syntax
>>>
>>> # pip install tfds-nightly
>>>
>>> import tensorflow_datasets as tfds
>>> datasets = tfds.load("imdb_reviews")
>>>
>>> train_set = tfds.load("imdb_reviews") # 25.000 reviews.
>>> test_set = datasets["test"] # 25.000 reviews.
>>>
>>> train_set, test_set = tfds.load("imdb_reviews", split=["train", "test"])
>>> train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test"])
>>> train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"])
>>> train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"], "test[60%:]")
File "<stdin>", line 1
train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"], "test[60%:]")
^
SyntaxError: positional argument follows keyword argument
>>> train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]", "test[60%:]"])
>>> for review, label in train_set.take(2):
... print(review.numpy().decode("utf-8"))
... print(label.numpy())
...
2022-12-01 20:07:51.639683: W tensorflow/core/kernels/data/cache_dataset_ops.cc:856] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
AttributeError: 'str' object has no attribute 'numpy'
>>>
A:
You are trying to run Python code directly within PowerShell. But the PowerShell interpreter speaks only PowerShell and cannot natively interpret Python code.
You have to put the Python code in a Python file, e.g. my_code.py, and call/execute it with python my_code.py from within PowerShell. Now the Python interpreter is used to run the script. See How to run python code for details.
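For example (a minimal sketch, using the file name suggested above):
# my_code.py
import tensorflow_datasets as tfds

datasets = tfds.load("imdb_reviews")
print(datasets.keys())
Then run it from PowerShell with python my_code.py.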
A:
I've split the code block in your question into 3 distinct parts, one for each error.
While, yes, you should put Python code into an actual script, the REPL will work too, but pip install is not Python code. Neither is plain text like Windows 11 Powershell. If you're copying code off some blog/docs, then only paste the actual Python code...
Python cares about indentation: the name 'review' is not defined error appeared because the body of your for loop was not indented, so the loop never ran and the variable was never assigned.
'str' object has no attribute 'numpy' happens on the line print(review.numpy().decode("utf-8")) because review is a plain Python string, and strings don't have a .numpy() method. Your last tfds.load call left out as_supervised=True, so the dataset yields dictionaries, and unpacking for review, label in ... iterates over the dict keys ('text' and 'label'), which are strings.
I see that the video/image you've linked is from 2019... Tensorflow has had several new releases since then, so the API may have changed. You'll want to consult official documentation for the version you've installed
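A hedged sketch of the two ways to fix that last loop:
import tensorflow_datasets as tfds

# Option 1: load as (text, label) tuples, as in the first code block of the question
train_set, test_set, valid_set = tfds.load(
    "imdb_reviews:1.0.0",
    split=["train", "test[:60%]", "test[60%:]"],
    as_supervised=True,
)
for review, label in train_set.take(2):
    print(review.numpy().decode("utf-8"))
    print(label.numpy())

# Option 2: keep the default dict elements and index them by key instead
# (this is what the load without as_supervised=True returns)
# for example in train_set.take(2):
#     print(example["text"].numpy().decode("utf-8"))
#     print(example["label"].numpy())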
| AttributeError: 'str' object has no attribute 'numpy' | My command
Windows 11 PowerShell.
!pip install tensorflow-datasets
pip install tensorflow-datasets
# pip install tfds-nightly
import tensorflow_datasets as tfds
datasets = tfds.load("imdb_reviews")
train_set = tfds.load("imdb_reviews") # 25.000 reviews.
test_set = datasets["test"] # 25.000 reviews.
train_set, test_set = tfds.load("imdb_reviews", split=["train", "test"])
train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test"])
train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"])
train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]", "test[60%:]"])
train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]", "test[60%:]"], as_supervised = True)
for review, label in train_set.take(2):
print(review.numpy().decode("utf-8"))
print(label.numpy())
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.
Install the latest PowerShell for new features and improvements! https://aka.ms/PSWindows
PS C:\Users\donhu> pip install tensorflow-datasets
Collecting tensorflow-datasets
Downloading tensorflow_datasets-4.7.0-py3-none-any.whl (4.7 MB)
|████████████████████████████████| 4.7 MB 2.2 MB/s
Requirement already satisfied: protobuf>=3.12.2 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (3.19.6)
Collecting tensorflow-metadata
Downloading tensorflow_metadata-1.11.0-py3-none-any.whl (52 kB)
|████████████████████████████████| 52 kB ...
Requirement already satisfied: numpy in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (1.23.4)
Requirement already satisfied: requests>=2.19.0 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (2.28.1)
Requirement already satisfied: six in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (1.16.0)
Requirement already satisfied: termcolor in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (2.1.0)
Collecting etils[epath]
Downloading etils-0.9.0-py3-none-any.whl (140 kB)
|████████████████████████████████| 140 kB ...
Collecting promise
Downloading promise-2.3.tar.gz (19 kB)
Requirement already satisfied: tqdm in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (4.64.1)
Collecting toml
Downloading toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting dill
Downloading dill-0.3.6-py3-none-any.whl (110 kB)
|████████████████████████████████| 110 kB 6.4 MB/s
Requirement already satisfied: absl-py in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tensorflow-datasets) (1.3.0)
Collecting googleapis-common-protos<2,>=1.52.0
Downloading googleapis_common_protos-1.57.0-py2.py3-none-any.whl (217 kB)
|████████████████████████████████| 217 kB 6.4 MB/s
Requirement already satisfied: certifi>=2017.4.17 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (2022.9.24)
Requirement already satisfied: charset-normalizer<3,>=2 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from requests>=2.19.0->tensorflow-datasets) (1.26.12)Requirement already satisfied: typing_extensions; extra == "epath" in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from etils[epath]->tensorflow-datasets) (4.4.0)
Collecting importlib_resources; extra == "epath"
Downloading importlib_resources-5.10.0-py3-none-any.whl (34 kB)
Requirement already satisfied: zipp; extra == "epath" in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from etils[epath]->tensorflow-datasets) (3.10.0)
Requirement already satisfied: colorama; platform_system == "Windows" in c:\users\donhu\appdata\local\programs\python\python39\lib\site-packages (from tqdm->tensorflow-datasets) (0.4.6)
Building wheels for collected packages: promise
Building wheel for promise (setup.py) ... done
Created wheel for promise: filename=promise-2.3-py3-none-any.whl size=21554 sha256=8d6db1312d74403cffbe332a56a0caeb292ff65701f5c958a2c836715275b299
Stored in directory: c:\users\donhu\appdata\local\pip\cache\wheels\e1\e8\83\ddea66100678d139b14bc87692ece57c6a2a937956d2532608
Successfully built promise
Installing collected packages: googleapis-common-protos, tensorflow-metadata, importlib-resources, etils, promise, toml, dill, tensorflow-datasets
Successfully installed dill-0.3.6 etils-0.9.0 googleapis-common-protos-1.57.0 importlib-resources-5.10.0 promise-2.3 tensorflow-datasets-4.7.0 tensorflow-metadata-1.11.0 toml-0.10.2
WARNING: You are using pip version 20.2.3; however, version 22.3.1 is available.
You should consider upgrading via the 'c:\users\donhu\appdata\local\programs\python\python39\python.exe -m pip install --upgrade pip' command.
PS C:\Users\donhu> python
Python 3.9.0 (tags/v3.9.0:9cf6752, Oct 5 2020, 15:34:40) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow_datasets as tfds
>>> datasets = tfds.load("imdb_reviews")
Downloading and preparing dataset Unknown size (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\donhu\tensorflow_datasets\imdb_reviews\plain_text\1.0.0...
Dl Completed...: 0%| | 0/1 [00:11<?, ? url/s]
Dl Size...: 26%|███████████████████████████▌ | 21/80 [00:11<00:25, 2.27 MiB/s]
Dl Completed...: 0%| | 0/1 [00:12<?, ? url/s]
Dl Size...: 28%|████████████████████████████▉
Dl Completed...: 0%| | 0/1 [00:12<?, ? url/s]
Dl Size...: 29%|██████████████████████████████▏
Dl Completed...: 0%| | 0/1 [00:13<?, ? url/s]
Dl Size...: 30%|███████████████████████████████▌
Dl Completed...: 0%| | 0/1 [00:13<?, ? url/s]
Dl Size...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 80/80 [00:51<00:00, 1.55 MiB/s]
Dl Completed...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:51<00:00, 51.53s/ url]
Generating splits...: 0%| | 0/3 [00:00<?, ? splits/s]
Generating train examples...: 7479 examples [00:02, 6153.86 examples/s]
Dataset imdb_reviews downloaded and prepared to C:\Users\donhu\tensorflow_datasets\imdb_reviews\plain_text\1.0.0. Subsequent calls will reuse this data.
2022-12-01 19:48:16.580270: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-01 19:48:17.465275: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1616] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 3994 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1660 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5
>>> train_set = tfds.load("imdb_reviews")
>>>
>>> test_set = datasets["test"]
>>> train_set, test_set = tfds.load("imdb_reviews", split=["train", "test"])
>>> for review, label in train_set.take(2):
... print(review.numpy().decode("utf-8"))
File "<stdin>", line 2
print(review.numpy().decode("utf-8"))
^
IndentationError: expected an indented block
>>> print(review.numpy().decode("utf-8"))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'review' is not defined
>>> Windows 11 PowerShell.
File "<stdin>", line 1
Windows 11 PowerShell.
IndentationError: unexpected indent
>>>
>>> !pip install tensorflow-datasets
File "<stdin>", line 1
!pip install tensorflow-datasets
^
SyntaxError: invalid syntax
>>> pip install tensorflow-datasets
File "<stdin>", line 1
pip install tensorflow-datasets
^
SyntaxError: invalid syntax
>>>
>>> # pip install tfds-nightly
>>>
>>> import tensorflow_datasets as tfds
>>> datasets = tfds.load("imdb_reviews")
>>>
>>> train_set = tfds.load("imdb_reviews") # 25.000 reviews.
>>> test_set = datasets["test"] # 25.000 reviews.
>>>
>>> train_set, test_set = tfds.load("imdb_reviews", split=["train", "test"])
>>> train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test"])
>>> train_set, test_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"])
>>> train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"], "test[60%:]")
File "<stdin>", line 1
train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]"], "test[60%:]")
^
SyntaxError: positional argument follows keyword argument
>>> train_set, test_set, valid_set = tfds.load("imdb_reviews:1.0.0", split=["train", "test[:60%]", "test[60%:]"])
>>> for review, label in train_set.take(2):
... print(review.numpy().decode("utf-8"))
... print(label.numpy())
...
2022-12-01 20:07:51.639683: W tensorflow/core/kernels/data/cache_dataset_ops.cc:856] The calling iterator did not fully read the dataset being cached. In order to avoid unexpected truncation of the dataset, the partially cached contents of the dataset will be discarded. This can happen if you have an input pipeline similar to `dataset.cache().take(k).repeat()`. You should use `dataset.take(k).cache().repeat()` instead.
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
AttributeError: 'str' object has no attribute 'numpy'
>>>
| [
"You are trying to run python directly within powershell. But the powershell-interpreter speaks only powershell and cannot natively interprete python code.\nYou have to put the python code in a python file, e.g. my_code.py and call/execute it with python my_code.py from within powershell. Now the python interpreter is used to run the script. See How to run python code for details.\n",
"I've split the code block in your question into 3 distinct parts, one for each error.\nWhile, yes, you should put Python code into an actual script, the REPL will work too, but pip install is not Python code. Neither is plain text like Windows 11 Powershell. If you're copying code off some blog/docs, then only do actual Python code...\nPython cares about indentation, so reviews not defined was because your for loop was not indented where the variable was defined.\n'str' object has no attribute 'numpy' is on this line print(review.numpy().decode(\"utf-8\")) because strings don't have numpy functions... It should already be decoded, as well. So just print out the review text directly.\nI see that the video/image you've linked is from 2019... Tensorflow has had several new releases since then, so the API may have changed. You'll want to consult official documentation for the version you've installed\n"
] | [
1,
0
] | [] | [] | [
"python"
] | stackoverflow_0074642430_python.txt |
Q:
How to find the numbers in the thousands, hundreds, tens, and ones place in PYTHON for an input number? For example: 256 has 6 ones, 5 tens, etc
num = int(input("Please give me a number: "))
print(num)
thou = int((num // 1000))
print(thou)
hun = int((num // 100))
print(hun)
ten =int((num // 10))
print(ten)
one = int((num // 1))
print(one)
I tried this but it does not work and I'm stuck.
A:
You might want to try something like following:
def get_pos_nums(num):
pos_nums = []
while num != 0:
pos_nums.append(num % 10)
num = num // 10
return pos_nums
And call this method as following.
>>> get_pos_nums(9876)
[6, 7, 8, 9]
The 0th index will contain the units, 1st index will contain tens, 2nd index will contain hundreds and so on...
This function will fail with negative numbers. I leave the handling of negative numbers for you to figure out as an exercise.
A:
Like this?
a = str(input('Please give me a number: '))
for i in a[::-1]:
print(i)
Demo:
Please give me a number: 1324
4
2
3
1
So the first number is ones, next is tens, etc.
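If you also want the place names printed next to each digit, a small variation of the same idea (just a sketch):
a = str(input('Please give me a number: '))

names = ['ones', 'tens', 'hundreds', 'thousands']
for place, digit in enumerate(a[::-1]):
    label = names[place] if place < len(names) else f'10^{place}'
    print(digit, label)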
A:
num = 1234
thousands = num // 1000
hundreds = (num % 1000) // 100
tens = (num % 100) // 10
units = (num % 10)
print(thousands, hundreds, tens, units)
# expected output: 1 2 3 4
"//" in Python stands for integer division. It largely removes the fractional part from the floating-point number and returns an integer
For example:
4/3 = 1.333333
4//3 = 1
A:
You could try splitting the number using this function:
def get_place_values(n):
return [int(value) * 10**place for place, value in enumerate(str(n)[::-1])]
For example:
get_place_values(342)
>>> [2, 40, 300]
Next, you could write a helper function:
def get_place_val_to_word(n):
n_str = str(n)
num_to_word = {
"0": "ones",
"1": "tens",
"2": "hundreds",
"3": "thousands"
}
return f"{n_str[0]} {num_to_word[str(n_str.count('0'))]}"
Then you can combine the two like so:
def print_place_values(n):
for value in get_place_values(n):
print(get_place_val_to_word(value))
For example:
num = int(input("Please give me a number: "))
# User enters 342
print_place_values(num)
>>> 2 ones
4 tens
3 hundreds
A:
num=1234
digit_at_one_place=num%10
print(digit_at_one_place)
digits_at_tens_place=(num//10)%10
digits_at_hund_place=(num//100)%10
digits_at_thou_place=(num//1000)%10
print(digits_at_tens_place)
print(digits_at_hund_place)
print(digits_at_thou_place)
this does the job. it is simple to understand as well.
A:
Please note that I took inspiration from the above answer by 6pack kid to get this code. All I added was a way to get the exact place value instead of just getting the digits segregated.
num = int(input("Enter Number: "))
c = 1
pos_nums = []
while num != 0:
z = num % 10
pos_nums.append(z *c)
num = num // 10
c = c*10
print(pos_nums)
Once you run this code, for the input of 12345 this is what will be the output:
Enter Number: 12345
[5, 40, 300, 2000, 10000]
This helped me in getting an answer to what I needed.
A:
money = int(input("Enter amount: "))
thousand = int(money // 1000)
five_hundred = int(money % 1000 / 500)
two_hundred = int(money % 1000 % 500 / 200)
one_hundred = int(money % 1000 % 500 % 200 / 100)
fifty = int(money % 1000 % 500 % 200 % 100 / 50)
twenty = int(money % 1000 % 500 % 200 % 100 % 50 / 20)
ten = int(money % 1000 % 500 % 200 % 100 % 50 % 20 / 10)
five = int(money % 1000 % 500 % 200 % 100 % 50 % 20 % 10 / 5)
one = int(money % 1000 % 500 % 200 % 100 % 50 % 20 % 10 % 5 / 1)
if thousand >=1:
print ("P1000: " , thousand)
if five_hundred >= 1:
print ("P500: " , five_hundred)
if two_hundred >= 1:
print ("P200: " , two_hundred)
if one_hundred >= 1:
print ("P100: " , one_hundred)
if fifty >= 1:
print ("P50: " , fifty)
if twenty >= 1:
print ("P20: " , twenty)
if ten >= 1:
print ("P10: " , ten)
if five >= 1:
print ("P5: " , five)
if one >= 1:
print ("P1: " , one)
A:
Quickest way:
num = str(input("Please give me a number: "))
print([int(i) for i in num[::-1]])
A:
This will do it, doesn't use strings at all and handles any integer passed for col sensibly.
def tenscol(num: int, col: int):
ndigits = 1
while (num % (10**ndigits)) != num:
ndigits += 1
x = min(max(1, col), ndigits)
y = 10**max(0, x - 1)
return int(((num % 10**x) - (num % y)) / y)
usage:
print(tenscol(9785,-1))
print(tenscol(9785,1))
print(tenscol(9785,2))
print(tenscol(9785,3))
print(tenscol(9785,4))
print(tenscol(9785,99))
Output:
5
5
8
7
9
9
A:
def get_pos(num,unit):
return int(abs(num)/unit)%10
So for "ones" unit is 1 while for "tens", unit is 10 and so forth.
It can handle any digit and even negative numbers effectively.
So given the number 256, to get the digit in the tens position you do
get_pos(256,10)
>> 5
A:
I had to do this on many values of an array, and it's not always in base 10 (normal counting - your tens, hundreds, thousands, etc.). So the reference is slightly different: 1=1st place (1s), 2=2nd place (10s), 3=3rd place (100s), 4=4th place (1000s). So your vectorized solution:
import numpy as np
def get_place(array, place):
return (array/10**(place-1)%10).astype(int)
Works fast and also works on arrays in different bases.
A:
# method 1
num = 1234
while num>0:
print(num%10)
num//=10
# method 2
num = 1234
print('Ones Place',num%10)
print('tens place',(num//10)%10)
print("hundred's place",(num//100)%10)
print("Thousand's place ",(num//1000)%10)
| How to find the numbers in the thousands, hundreds, tens, and ones place in PYTHON for an input number? For example: 256 has 6 ones, 5 tens, etc | num = int(input("Please give me a number: "))
print(num)
thou = int((num // 1000))
print(thou)
hun = int((num // 100))
print(hun)
ten =int((num // 10))
print(ten)
one = int((num // 1))
print(one)
I tried this but it does not work and I'm stuck.
| [
"You might want to try something like following:\ndef get_pos_nums(num):\n pos_nums = []\n while num != 0:\n pos_nums.append(num % 10)\n num = num // 10\n return pos_nums\n\nAnd call this method as following.\n>>> get_pos_nums(9876)\n[6, 7, 8, 9]\n\nThe 0th index will contain the units, 1st index will contain tens, 2nd index will contain hundreds and so on...\nThis function will fail with negative numbers. I leave the handling of negative numbers for you to figure out as an exercise.\n",
"Like this?\na = str(input('Please give me a number: '))\n\nfor i in a[::-1]:\n print(i)\n\n\nDemo:\nPlease give me a number: 1324\n4\n2\n3\n1\n\nSo the first number is ones, next is tens, etc.\n",
"num = 1234\n\nthousands = num // 1000\nhundreds = (num % 1000) // 100\ntens = (num % 100) // 10\nunits = (num % 10)\n\nprint(thousands, hundreds, tens, units)\n# expected output: 1 2 3 4\n\n\"//\" in Python stands for integer division. It largely removes the fractional part from the floating-point number and returns an integer\nFor example:\n4/3 = 1.333333\n4//3 = 1\n\n",
"You could try splitting the number using this function:\ndef get_place_values(n):\n return [int(value) * 10**place for place, value in enumerate(str(n)[::-1])]\n\nFor example:\nget_place_values(342)\n>>> [2, 40, 300]\n\nNext, you could write a helper function:\ndef get_place_val_to_word(n):\n n_str = str(n)\n num_to_word = {\n \"0\": \"ones\",\n \"1\": \"tens\",\n \"2\": \"hundreds\",\n \"3\": \"thousands\"\n }\n return f\"{n_str[0]} {num_to_word[str(n_str.count('0'))]}\"\n\nThen you can combine the two like so:\ndef print_place_values(n):\n for value in get_place_values(n):\n print(get_place_val_to_word(value))\n\nFor example:\nnum = int(input(\"Please give me a number: \"))\n# User enters 342\nprint_place_values(num)\n>>> 2 ones\n4 tens\n3 hundreds\n\n",
"num=1234\ndigit_at_one_place=num%10\nprint(digit_at_one_place)\ndigits_at_tens_place=(num//10)%10\ndigits_at_hund_place=(num//100)%10\ndigits_at_thou_place=(num//1000)%10\nprint(digits_at_tens_place)\nprint(digits_at_hund_place)\nprint(digits_at_thou_place)\n\nthis does the job. it is simple to understand as well.\n",
"Please note that I took inspiration from the above answer by 6pack kid to get this code. All I added was a way to get the exact place value instead of just getting the digits segregated. \nnum = int(input(\"Enter Number: \"))\nc = 1\npos_nums = []\nwhile num != 0:\n z = num % 10\n pos_nums.append(z *c)\n num = num // 10\n c = c*10\nprint(pos_nums)\n\nOnce you run this code, for the input of 12345 this is what will be the output:\nEnter Number: 12345\n[5, 40, 300, 2000, 10000]\n\nThis helped me in getting an answer to what I needed.\n",
"money = int(input(\"Enter amount: \"))\nthousand = int(money // 1000)\nfive_hundred = int(money % 1000 / 500)\ntwo_hundred = int(money % 1000 % 500 / 200)\none_hundred = int(money % 1000 % 500 % 200 / 100)\nfifty = int(money % 1000 % 500 % 200 % 100 / 50)\ntwenty = int(money % 1000 % 500 % 200 % 100 % 50 / 20)\nten = int(money % 1000 % 500 % 200 % 100 % 50 % 20 / 10)\nfive = int(money % 1000 % 500 % 200 % 100 % 50 % 20 % 10 / 5)\none = int(money % 1000 % 500 % 200 % 100 % 50 % 20 % 10 % 5 / 1)\nif thousand >=1: \n print (\"P1000: \" , thousand)\nif five_hundred >= 1:\n print (\"P500: \" , five_hundred)\nif two_hundred >= 1:\n print (\"P200: \" , two_hundred)\nif one_hundred >= 1:\n print (\"P100: \" , one_hundred)\nif fifty >= 1:\n print (\"P50: \" , fifty)\nif twenty >= 1:\n print (\"P20: \" , twenty)\nif ten >= 1:\n print (\"P10: \" , ten)\nif five >= 1:\n print (\"P5: \" , five)\nif one >= 1:\n print (\"P1: \" , one)\n\n",
"Quickest way:\nnum = str(input(\"Please give me a number: \"))\nprint([int(i) for i in num[::-1]])\n\n",
"This will do it, doesn't use strings at all and handles any integer passed for col sensibly.\ndef tenscol(num: int, col: int):\n ndigits = 1\n while (num % (10**ndigits)) != num:\n ndigits += 1\n x = min(max(1, col), ndigits)\n y = 10**max(0, x - 1)\n return int(((num % 10**x) - (num % y)) / y)\n\nusage:\nprint(tenscol(9785,-1))\nprint(tenscol(9785,1))\nprint(tenscol(9785,2))\nprint(tenscol(9785,3))\nprint(tenscol(9785,4))\nprint(tenscol(9785,99))\n\nOutput:\n5\n5\n8\n7\n9\n9\n\n",
"def get_pos(num,unit):\n return int(abs(num)/unit)%10\n\nSo for \"ones\" unit is 1 while for \"tens\", unit is 10 and so forth.\nIt can handle any digit and even negative numbers effectively.\nSo given the number 256, to get the digit in the tens position you do\nget_pos(256,10)\n>> 5\n\n",
"I had to do this on many values of an array, and it's not always in base 10 (normal counting - your tens, hundreds, thousands, etc.). So the reference is slightly different: 1=1st place (1s), 2=2nd place (10s), 3=3rd place (100s), 4=4th place (1000s). So your vectorized solution:\nimport numpy as np\ndef get_place(array, place):\n return (array/10**(place-1)%10).astype(int)\n\nWorks fast and also works on arrays in different bases.\n",
"# method 1\n\nnum = 1234\nwhile num>0:\n print(num%10)\n num//=10\n\n# method 2\n\nnum = 1234\n\nprint('Ones Place',num%10)\nprint('tens place',(num//10)%10)\nprint(\"hundred's place\",(num//100)%10)\nprint(\"Thousand's place \",(num//1000)%10)\n\n"
] | [
14,
6,
4,
1,
1,
0,
0,
0,
0,
0,
0,
0
] | [
"In Python, you can try this method to print any position of a number.\nFor example, if you want to print the 10 the position of a number,\nMultiply the number position by 10, it will be 100,\nTake modulo of the input by 100 and then divide it by 10.\nNote: If the position get increased then the number of zeros in modulo and in divide also increases:\ninput = 1234\n\nprint(int(input % 100) / 10 )\n\nOutput:\n3\n\n",
"So I saw what another users answer was and I tried it out and it didn't quite work, Here's what I did to fix the problem. By the way I used this to find the tenth place of a number\n# Getting an input from the user\n\ninput = int(input())\n\n# Finding the tenth place of the number\n\nprint(int(input % 100) // 10)\n\n\n"
] | [
-1,
-1
] | [
"numbers",
"operators",
"python"
] | stackoverflow_0032752750_numbers_operators_python.txt |
Q:
Why does patch.contains_point() behave differently from patch.get_path().contains_point() when checking if points are within a polygon?
With matplotlib.patches, the patch.contains_point(xy) method seems to work differently from patch.get_path().contains_point(xy), at least after having added the patch to the axes. See difference True/True and True/False below. I can't find any documentation on this difference. Does anybody know? I also have difficulty seeing how contains_point() decides if the point is inside the path given the path's vertices are the unit rectangle in this case and not the rectangle I specified.
fig, ax = plt.subplots()
rect = patches.Rectangle([0.2, 0.3], 0.8, 0.5)
pnt = [0.4, 0.45] # point inside rect
print("Before adding patch to axes:")
print(rect.get_path().vertices)
print(rect.contains_point(pnt))
print(rect.get_path().contains_point(pnt))
print("After adding patch to axes")
ax.add_patch(rect)
print(rect.get_path().vertices)
print(rect.contains_point(pnt))
print(rect.get_path().contains_point(pnt))
plt.show()
Before adding patch to axes:
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]
[0. 0.]]
True
True
After adding patch to axes
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]
[0. 0.]]
False
True
A:
Although this question is old, I just faced the same issue and solved it.
The issue is that, after adding the patch to the axes, you need to give the coordinates/points in the display reference frame. This can be done with:
ax.transData.transform()
Apart from the import statements, I added one line to your code. Here is the code:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
fig, ax = plt.subplots()
rect = patches.Rectangle([0.2, 0.3], 0.8, 0.5)
pnt = [0.4, 0.45] # point inside rect
print("Before adding patch to axes:")
print(rect.get_path().vertices)
print(rect.contains_point(pnt))
print(rect.get_path().contains_point(pnt))
print("After adding patch to axes")
ax.add_patch(rect)
print(rect.get_path().vertices)
# added lines
pnt_in_display_coordinates = ax.transData.transform(pnt)
print(rect.contains_point(pnt_in_display_coordinates))
print(rect.get_path().contains_point(pnt))
plt.show()
Output:
Before adding patch to axes:
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]
[0. 0.]]
True
True
After adding patch to axes
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]
[0. 0.]]
True
True
For more: https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html
| Why does patch.contains_point() behave differently from patch.get_path().contains_point() when checking if points are within a polygon? | With matplotlib.patches, the patch.contains_point(xy) method seems to work differently from patch.get_path().contains_point(xy), at least after having added the patch to the axes. See difference True/True and True/False below. I can't find any documentation on this difference. Does anybody know? I also have difficulty seeing how contains_point() decides if the point is inside the path given the path's vertices are the unit rectangle in this case and not the rectangle I specified.
fig, ax = plt.subplots()
rect = patches.Rectangle([0.2, 0.3], 0.8, 0.5)
pnt = [0.4, 0.45] # point inside rect
print("Before adding patch to axes:")
print(rect.get_path().vertices)
print(rect.contains_point(pnt))
print(rect.get_path().contains_point(pnt))
print("After adding patch to axes")
ax.add_patch(rect)
print(rect.get_path().vertices)
print(rect.contains_point(pnt))
print(rect.get_path().contains_point(pnt))
plt.show()
Before adding patch to axes:
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]
[0. 0.]]
True
True
After adding patch to axes
[[0. 0.]
[1. 0.]
[1. 1.]
[0. 1.]
[0. 0.]]
False
True
| [
"Although this question is old, I just faced the same issue and solved it.\nThe issue is after adding patch to the axes, you need to give the coordinates/points in display reference frame. This can be performed with:\nax.transData.transform()\n\nI added one line to your code ignoring import statements. So the code goes here:\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\n\nfig, ax = plt.subplots()\nrect = patches.Rectangle([0.2, 0.3], 0.8, 0.5)\n\npnt = [0.4, 0.45] # point inside rect\nprint(\"Before adding patch to axes:\")\nprint(rect.get_path().vertices)\nprint(rect.contains_point(pnt))\nprint(rect.get_path().contains_point(pnt))\n\nprint(\"After adding patch to axes\")\nax.add_patch(rect)\nprint(rect.get_path().vertices)\n# added lines\npnt_in_display_coordiantes = ax.transData.transform(pnt)\nprint(rect.contains_point(pnt_in_display_coordiantes))\nprint(rect.get_path().contains_point(pnt))\n\nplt.show()\n\nOutput:\nBefore adding patch to axes:\n[[0. 0.]\n[1. 0.]\n[1. 1.]\n[0. 1.]\n[0. 0.]]\nTrue\nTrue\nAfter adding patch to axes\n[[0. 0.]\n[1. 0.]\n[1. 1.]\n[0. 1.]\n[0. 0.]]\nTrue\nTrue\n\nFor more: https://matplotlib.org/stable/tutorials/advanced/transforms_tutorial.html\n"
] | [
0
] | [] | [] | [
"matplotlib",
"patch",
"path",
"python"
] | stackoverflow_0064454891_matplotlib_patch_path_python.txt |
Q:
django.template.exceptions.TemplateSyntaxError: Invalid block tag. Did you forget to register or load this tag?
I have a view with context data, and its template extends base.html. Because I want that context data to be displayed in all templates that extend base.html (not only in the view that defines it), I am writing custom template tags that provide the context, but I get an error.
view with and without context data:
class HomeView(ListView):
model = Product
context_object_name='products'
template_name = 'main/home.html'
paginate_by = 25
class HomeView(ListView):
model = Product
context_object_name='products'
template_name = 'main/home.html'
paginate_by = 25
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
categories = Category.objects.all()
news = News.objects.all()
context.update({
'categories' : categories,
'news' : news,
})
return context
base.html with and without the custom tag
{% news %}
{% for new in news %}
<p>{{ new.title }}</p>
{% endfor %}
The custom tag file templatetags/news.py
from django import template
from support.models import News
register = template.Library()
@register.inclusion_tag('news.html', takes_context=True)
def news(context):
return {
'news': News.objects.order_by("-date_posted")[0:25],
}
The custom tag file templatetags/news.html
{% for new in news %}
<p>{{ new.title }}</p>
{% endfor %}
File structure:
Project
main
templates/main
base.html
templatetags
news.py
news.html
models.py
urls.py
views.py
...
project
settings.py
...
...
A:
Simple thing: you need to load the registered template tag library in the template where the tag is used (here that is base.html, where {% news %} appears).
Just load it at the top of that template:
{% load tag_name %} # replace tag_name with the name of the tag module (news in this case)
Note: Please ensure that the template tag setting is added in the settings.py file
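For reference, a minimal sketch of how the top of base.html could look once the library is loaded (assuming the tag module is named news, as in the question). Since the inclusion tag renders news.html itself, the extra for loop in base.html is no longer needed:
{% load news %}
{% news %}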
A:
You need to register the Python module containing your tag code under the TEMPLATES setting in settings.py.
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
str(BASE_DIR.joinpath('templates'))
],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
'libraries':{
'tagname': 'appname.news', # your template tag
}
},
},
]
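Given the file structure in the question (the app is main and the tag module lives in main/templatetags/news.py), the libraries entry would presumably need the full module path; this is a sketch, not verified against the questioner's project:
'libraries': {
    'news': 'main.templatetags.news',  # maps {% load news %} to that module
},
Alternatively, if main/templatetags/ contains an __init__.py file and the app is listed in INSTALLED_APPS, {% load news %} should work without any libraries entry at all (after restarting the development server).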
| django.template.exceptions.TemplateSyntaxError: Invalid block tag. Did you forget to register or load this tag? | I have a view that has context data and it extends base.html but as I want the context data to be displayed in all templates that extend from base.html and not only the view with the context data I am doing custom template tags with the context inside but I get an error.
view with and without context data:
class HomeView(ListView):
model = Product
context_object_name='products'
template_name = 'main/home.html'
paginate_by = 25
class HomeView(ListView):
model = Product
context_object_name='products'
template_name = 'main/home.html'
paginate_by = 25
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
categories = Category.objects.all()
news = News.objects.all()
context.update({
'categories' : categories,
'news' : news,
})
return context
base.html with and without the custom tag
{% news %}
{% for new in news %}
<p>{{ new.title }}</p>
{% endfor %}
The custom tag file templatetags/news.py
from django import template
from support.models import News
register = template.Library()
@register.inclusion_tag('news.html', takes_context=True)
def news(context):
return {
'news': News.objects.order_by("-date_posted")[0:25],
}
The custom tag file templatetags/news.html
{% for new in news %}
<p>{{ new.title }}</p>
{% endfor %}
File structure:
Project
main
templates/main
base.html
templatetags
news.py
news.html
models.py
urls.py
views.py
...
project
settings.py
...
...
| [
"Simple thing, you should load the template tag in news.html template which is registered.\nJust load the tag in news.html template:\n{% load tag_name %} #Add here tag name to load\n\nNote: Please ensure that template tag setting is added in settings.py file\n",
"You need to define the python code containing your tag codes in the TEMPLATES variable in settings.py.\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [\n str(BASE_DIR.joinpath('templates'))\n ],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n 'libraries':{\n 'tagname': 'appname.news', # your template tag\n\n }\n },\n },\n]\n\n"
] | [
0,
0
] | [] | [] | [
"django",
"django_templates",
"python"
] | stackoverflow_0074654931_django_django_templates_python.txt |
Q:
Printing Numbers in X Shape pattern in python in increasing to decreasing order
I am solving a pattern problem in Python. I need to print a pattern shaped like an X in which the numbers are filled in increasing order first and then, after reaching the middle number, in decreasing order.
Basically, what I did was find the area where the X will be displayed and fill the remaining matrix with blank spaces,
but it does not match my target pattern.
Output Pattern image
here is my approach:
n=int(input("Enter total rows"))
#n=5
for rows in range(n):
for cols in range(n):
if((rows == cols) or (rows+cols)==n-1 ):
print(rows,end="")
else:
print(" ",end="")
print()
what i am trying to do is:
left diagonal and Right diagonal numbers :0 1 2 1 0
but what i am getting is:
left diagonal and Right diagonal numbers :0 1 2 3 4
A:
You can print the min(rows, n - rows - 1) instead of rows -
n = 5
for rows in range(n):
for cols in range(n):
if((rows == cols) or (rows+cols)==n-1 ):
print(min(rows, n - rows - 1),end="")
else:
print(" ",end="")
print()
Output:
0 0
1 1
2
1 1
0 0
For n = 7 -
0 0
1 1
2 2
3
2 2
1 1
0 0
For n = 6 -
0 0
1 1
22
22
1 1
0 0
A:
This is a longer alternative, but it does work. However you will run into a weird issue when entering an even number.
n = int(input("Enter Total Rows: "))
matrix = []
# Append each row and column to the matrix.
for rows in range(n):
row = []
for cols in range(n):
row.append(" ")
matrix.append(row)
for i in range(n):
if i > n/2: # After we reach the center, start to subtract numbers instead of adding
matrix[i][i] = abs(i - (n - 1)) # Abs is so we can invert the negative number to a positive
else:
matrix[i][i] = i
if i > n/2: # Ditto, but now instead of top left to bottom right, it goes top right to bottom left
matrix[i][n - (i + 1)] = abs(i - (n - 1))
else:
matrix[i][n - (i + 1)] = i
# Print each row of the matrix
for i in range(len(matrix)):
print(*matrix[i])
| Printing Numbers in X Shape pattern in python in increasing to decreasing order | I am solving a pattern problem in python, i need to print a pattern in such a way it consists of X and the numbers are filled first in increasing order and then after reaching mid number, they go to decreasing order,
basically i did what, i find out the area where the X will display.,and fill the remaining matrix with blank spaces..,
but it is not according to my pattern..
Output Pattern image
here is my approach:
n=int(input("Enter total rows"))
#n=5
for rows in range(n):
for cols in range(n):
if((rows == cols) or (rows+cols)==n-1 ):
print(rows,end="")
else:
print(" ",end="")
print()
what i am trying to do is:
left diagonal and Right diagonal numbers :0 1 2 1 0
but what i am getting is:
left diagonal and Right diagonal numbers :0 1 2 3 4
| [
"You can print the min(rows, n - rows - 1) instead of rows -\nn = 5\nfor rows in range(n):\n for cols in range(n):\n if((rows == cols) or (rows+cols)==n-1 ):\n print(min(rows, n - rows - 1),end=\"\")\n else:\n print(\" \",end=\"\")\n print()\n\nOutput:\n0 0\n 1 1 \n 2 \n 1 1 \n0 0\n\nFor n = 7 -\n0 0\n 1 1 \n 2 2 \n 3 \n 2 2 \n 1 1 \n0 0\n\nFor n = 6 -\n0 0\n 1 1 \n 22 \n 22 \n 1 1 \n0 0\n\n",
"This is a longer alternative, but it does work. However you will run into a weird issue when entering an even number.\nn = int(input(\"Enter Total Rows: \"))\n\nmatrix = []\n\n# Append each row and column to the matrix.\nfor rows in range(n):\n row = []\n for cols in range(n):\n row.append(\" \")\n\n matrix.append(row)\n\nfor i in range(n):\n if i > n/2: # After we reach the center, start to subtract numbers instead of adding\n matrix[i][i] = abs(i - (n - 1)) # Abs is so we can invert the negative number to a positive\n else:\n matrix[i][i] = i\n\n if i > n/2: # Ditto, but now instead of top left to bottom right, it goes top right to bottom left\n matrix[i][n - (i + 1)] = abs(i - (n - 1))\n else:\n matrix[i][n - (i + 1)] = i\n\n# Print each row of the matrix\nfor i in range(len(matrix)):\n print(*matrix[i])\n\n"
] | [
0,
0
] | [] | [] | [
"matrix",
"python",
"python_3.x"
] | stackoverflow_0074656745_matrix_python_python_3.x.txt |
Q:
How can you create a sort of geometric sequence using numpy arrays?
def nthgeo_function(nth):
pow(2,nth)
#arithmetic sequence code
#make a list of values for n as n1,n2,n3... done
list_nth = list(range(1,1000))
nth_array = np.array(list_nth)
nthgeo = arr(np.typecodes, (nthgeo_function(nth) for nth in nth_array))
def m_function(nthgeo):
print(nthgeo%m)
m_function(nthgeo)
is my code.
I imported array as arr before this, and I am trying to get multiple outputs from multiple inputs, like a one-to-one function: 2^n, where n ranges from 1 to 1000, has mod m applied to it 1,000 times.
Currently I am just trying to get a geometric-sequence array for nthgeo, but the error message TypeError: 'module' object is not callable keeps showing up.
I don't really know... I started a day ago. I tried copying the format of
vals = array('i',[1,2,3,4,...1000]) *theoretically
newArr = array(vals.typecode, (anyfunction(a) for a in vals))
A:
It's not very clear what you are trying to do. Your line '[I] am trying to get multiple outputs from multiple inputs accordingly' sounds like you want to take an array as input and then perform your function on every element.
I am not exactly clear on what your function is, because it sounds like you want to calculate the nth power of 2, but it also sounds like you want to do some sort of modular arithmetic.
I'll assume it's the first case. Then, what you want is a function that's taking inputs, and as output giving 2 to the input power. There's a lot of ways you could do this, the simplest would be some sort of list comprehension, which is like a neat 1-line for loop that Python is a big fan of. It would look like this:
def nth_power(input_list: list[int]) -> list[int]:
return [2**n for n in input_list]
This gives you a list of numbers where each number is 2 raised to the power of the number in that place in the first list.
I think numpy arrays would also suit your needs well. Start now with a numpy array of the numbers you want, so presumably 0... 1000. This looks like:
input_list = np.arange(1000) # generates the np.array([0... 999])
Then, you can just pass this to np.power(), like this:
output_list = np.power(2, input_list)
Here, for a smaller input such as np.arange(10), this would give output_list as
array([ 1, 2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=int32)
Hope that helps, feel free to ask if you need some clarification:)
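Since the question also mentions applying mod m to each power, it may be worth adding (this goes beyond the answer above) that Python's built-in three-argument pow computes modular powers efficiently and avoids the overflow you would hit with fixed-width NumPy integers for values like 2**1000:
m = 97  # hypothetical modulus; the question never says what m is
mods = [pow(2, n, m) for n in range(1, 1001)]
print(mods[:10])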
| How can you create a sort of geometric sequence using numpy arrays? | def nthgeo_function(nth):
pow(2,nth)
#arithmetic sequence code
#make a list of values for n as n1,n2,n3... done
list_nth = list(range(1,1000))
nth_array = np.array(list_nth)
nthgeo = arr(np.typecodes, (nthgeo_function(nth) for nth in nth_array))
def m_function(nthgeo):
print(nthgeo%m)
m_function(nthgeo)
is my code.
I imported array as arr before, and am trying to get multiple outputs from multiple inputs accordingly, like a one to one function, where 2^n where n ranges from 1 to 1000 has mod m performed on it 1,000 times.
Currently i am just trying to get a geometric sequence array for nthgeo, but the error message TypeError: 'module' object is not callable keeps showing up.
I don't really know... I started a day ago. I tried copying the format of
vals = array('i',[1,2,3,4,...1000]) *theoretically
newArr = array(vals.typecode, (anyfunction(a) for a in vals))
| [
"It's not very clear what you are trying to do. Your line '[I] am trying to get multiple outputs from multiple inputs accordingly' sounds like you want to take an array as input and then perform your function on every element.\nI am not exactly clear on what your function is, because it sounds like you want to calculate the nth power of 2, but it also sounds like you want to do some sort of modular arithmetic.\nI'll assume it's the first case. Then, what you want is a function that's taking inputs, and as output giving 2 to the input power. There's a lot of ways you could do this, the simplest would be some sort of list comprehension, which is like a neat 1-line for loop that Python is a big fan of. It would look like this:\ndef nth_power(input_list: list[Int]) -> list[Int]:\n return [2**n for n in input_list]\n\nThis gives you a list of numbers where each number is 2 raised to the power of the number in that place in the first list.\nI think numpy arrays would also suit your needs well. Start now with a numpy array of the numbers you want, so presumably 0... 1000. This looks like:\ninput_list = np.arange(1000) # generates the np.array([0... 999])\n\nThen, you can just pass this to np.pow(), like this:\noutput_list = np.power(2, input_list)\n\nHere, this will give output_list as\narray([ 1, 2, 4, 8, 16, 32, 64, 128, 256, 512], dtype=int32)\n\nHope that helps, feel free to ask if you need some clarification:)\n"
] | [
0
] | [] | [] | [
"arrays",
"numpy_ndarray",
"python"
] | stackoverflow_0074656446_arrays_numpy_ndarray_python.txt |
Q:
Why does importing from random in python give me back unused import statement?
When I type for ex:
from random import shuffle
I get:
Unused import statement 'from random import shuffle'
in return and the letters go grey. Can anybody diagnose this?
I tried "from random import shuffle" and was expecting to be able to use shuffle
A:
Without seeing the rest of the code, I think the message is accurate: your import is unused. That means you imported it, but the shuffle function is never actually used anywhere in the script.
Try using it in your code:
from random import shuffle

numbers = [1, 2, 3, 4, 5]
shuffle(numbers)  # shuffle works in place and returns None, so don't assign its result

print(numbers)

This small script actually uses the imported function, so the warning should go away.
| Why does importing from random in python give me back unused import statement? | When I type for ex:
from random import shuffle
I get:
Unused import statement 'from random import shuffle'
in return and the letter go grey. Can anybody diagnose?
I tried "from random import shuffle" and was expecting to be able to use shuffle
| [
"Without seeing the rest of the code I think that the error description is correct. Your import is unused. That means that you imported it but in the actual script the shuffle function was never accessed.\nTry using it in your code:\nfrom random import shuffle\n\nmy_shuffled_number = shuffle([1, 2, 3, 4, 5])\n\nprint(my_shuffled_number)\n\nThis small script makes use of the import function.\n"
] | [
0
] | [] | [] | [
"import",
"python"
] | stackoverflow_0074656901_import_python.txt |
Q:
Percona Xtrabackup 8.0 failed and showed error "xtrabackup: Error: unknown argument: '/var/lib/mysql/data'"
I'm using Openstack Trove to create and manage backup mysql8 with Percona Xtrabackup 8.0.
When trying to create a backup for a DB, I encounter this problem: the backup fails because xtrabackup reports an unknown argument.
Everything else looks fine to me, so I can't understand what is wrong. Has anyone had this issue, or can anyone shed some light on it? Thank you so much! Here is the log:
2020-01-03 03:04:31.313 1259 DEBUG trove.guestagent.strategies.backup.mysql_impl [-] xtrabackup: recognized server arguments: --datadir=/var/lib/mysql/data --tmpdir=/var/tmp --innodb_data_file_path=ibdata1:10M:autoextend --innodb_buffer_pool_size=600M --innodb_file_per_table=1 --innodb_log_files_in_group=2 --innodb_log_file_size=50M --innodb_log_buffer_size=25M --open_files_limit=2048 --server-id=1975412137
xtrabackup: recognized client arguments: --port=3306 --socket=/var/run/mysqld/mysqld.sock --user=os_admin --password=* --host=127.0.0.1 --stream=xbstream --user=os_admin --password=* --host=127.0.0.1
xtrabackup: Error: unknown argument: '/var/lib/mysql/data'
check_process /opt/guest-agent-venv/lib/python3.5/site-packages/trove/guestagent/strategies/backup/mysql_impl.py:94
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.strategies.backup.mysql_impl [-] Xtrabackup did not complete successfully.
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent [-] Error saving backup: e6b8729e-7646-4e25-8e58-49e49eab81a4.: trove.guestagent.strategies.backup.base.BackupError
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent Traceback (most recent call last):
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent File "/opt/guest-agent-venv/lib/python3.5/site-packages/trove/guestagent/backup/backupagent.py", line 114, in stream_backup_to_storage
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent return meta
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent File "/opt/guest-agent-venv/lib/python3.5/site-packages/trove/guestagent/strategies/backup/base.py", line 96, in __exit__
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent raise BackupError
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent trove.guestagent.strategies.backup.base.BackupError
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent
A:
xtrabackup is not entirely helpful in only showing what it did recognise; it would help if it also showed the entire command line as received (especially if it turns out it read a defaults file, which would mean the problem could be entirely elsewhere).
In the meantime, you need to see exactly what trove/guestagent/strategies/backup/mysql_impl.py is doing at line 94.
I would hazard a guess that it's building up a command-line string, and somewhere (other than the --datadir=/var/lib/mysql/data which xtrabackup says it recognized) there is another occurrence of /var/lib/mysql/data. If you can print out the full command line before executing it, that should offer a clue.
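Not the actual Trove code, but as a generic illustration of that suggestion: log or print the assembled command string just before executing it, so a stray '/var/lib/mysql/data' argument becomes visible in the guest agent log.
import logging
import subprocess

log = logging.getLogger(__name__)

def run_backup(cmd: str) -> int:
    # Show exactly what will be executed; a duplicated datadir path
    # should stand out here.
    log.debug("backup command: %s", cmd)
    return subprocess.call(cmd, shell=True)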
| Percona Xtrabackup 8.0 failed and showed error "xtrabackup: Error: unknown argument: '/var/lib/mysql/data'" | I'm using Openstack Trove to create and manage backup mysql8 with Percona Xtrabackup 8.0.
When trying to create a backup for a DB, I encounter this problem: Backup failed because of Xtrabackup has an unknown argument.
Everything else looks fine to me, so I couldn't understand what wrong. Anyone had this issue or can shed some light on it? Thank you so much! Here is the log:
2020-01-03 03:04:31.313 1259 DEBUG trove.guestagent.strategies.backup.mysql_impl [-] xtrabackup: recognized server arguments: --datadir=/var/lib/mysql/data --tmpdir=/var/tmp --innodb_data_file_path=ibdata1:10M:autoextend --innodb_buffer_pool_size=600M --innodb_file_per_table=1 --innodb_log_files_in_group=2 --innodb_log_file_size=50M --innodb_log_buffer_size=25M --open_files_limit=2048 --server-id=1975412137
xtrabackup: recognized client arguments: --port=3306 --socket=/var/run/mysqld/mysqld.sock --user=os_admin --password=* --host=127.0.0.1 --stream=xbstream --user=os_admin --password=* --host=127.0.0.1
xtrabackup: Error: unknown argument: '/var/lib/mysql/data'
check_process /opt/guest-agent-venv/lib/python3.5/site-packages/trove/guestagent/strategies/backup/mysql_impl.py:94
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.strategies.backup.mysql_impl [-] Xtrabackup did not complete successfully.
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent [-] Error saving backup: e6b8729e-7646-4e25-8e58-49e49eab81a4.: trove.guestagent.strategies.backup.base.BackupError
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent Traceback (most recent call last):
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent File "/opt/guest-agent-venv/lib/python3.5/site-packages/trove/guestagent/backup/backupagent.py", line 114, in stream_backup_to_storage
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent return meta
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent File "/opt/guest-agent-venv/lib/python3.5/site-packages/trove/guestagent/strategies/backup/base.py", line 96, in __exit__
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent raise BackupError
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent trove.guestagent.strategies.backup.base.BackupError
2020-01-03 03:04:31.314 1259 ERROR trove.guestagent.backup.backupagent
| [
"xtrabackup is not entirely helpful showing what it did recognise, it would help if it also showed the entire command line as received (especially if it turns out it read a defaults file which means the problem could be entirely elsewhere).\nIn the meantime, you need to see exactly what trove/guestagent/strategies/backup/mysql_impl.py is doing at line 94.\nI would hazard guess that it's building up a command-line string and somewhere (other than the --datadir=/var/lib/mysql/data which xtrabackup says it recognized) is another occurrence of /var/lib/mysql/data. If you can print out the full command line before executing it should offer a clue.\n"
] | [
0
] | [] | [] | [
"backup",
"mysql",
"percona",
"python"
] | stackoverflow_0059572759_backup_mysql_percona_python.txt |
Q:
Append values to lists after raster sampling, in a loop
I have multiple rasters in a specific directory from which I need to extract band1 values (chlorophyll concentration) using a CSV containing the coordinates of the points of interest.
This is the CSV (read as GeoDataFrame):
point_id point_name latitude longitude geometry
0 1 'Forte dei Marmi' 10.2427 43.5703 POINT (10.24270 43.57030)
1 2 'La Spezia' 9.9030 44.0341 POINT (9.90300 44.03410)
2 3 'Orbetello' 11.2029 42.4488 POINT (11.20290 42.44880)
3 4 'Portoferraio' 10.3328 42.8080 POINT (10.33280 42.80800)
4 5 'Fregene' 12.1990 41.7080 POINT (12.19900 41.70800)
All the rasters I need to sample are in raster_dir = 'C:/sentinel_3_processing/'
My final goal is to have a dataframe with as many columns as there are rasters in the folder.
The sampling of all the rasters is working and the output is correct, but I need it shaped differently,
as I explained above.
The output I got is:
[[10.2427, 43.5703, 0.63],
[10.2427, 43.5703, 0.94],
[10.2427, 43.5703, 0.76],
[10.2427, 43.5703, 0.76],
[10.2427, 43.5703, 1.03],
[10.2427, 43.5703, 0.86],
[10.2427, 43.5703, 0.74],
[10.2427, 43.5703, 1.71],
[10.2427, 43.5703, 3.07],,
[...],
[12.199, 41.708, 0.96],
[12.199, 41.708, 0.89],
[12.199, 41.708, 1.29],
[12.199, 41.708, 0.24],
[12.199, 41.708, 1.59],
[12.199, 41.708, 1.78],
[12.199, 41.708, 0.39],
[12.199, 41.708, 1.54],
[12.199, 41.708, 1.62]]
But I need something like that:
[
[10.2427, 43.5703, 0.63, 0.94, 0.76, 0.76, 1.03, 0.86, 0.74, 1.71, 3.07],
[...],
[12.199, 41.708, 0.96, 0.89, 1.29, 0.24, 1.59, 1.78, 0.39, 1.54, 1.62]]
]
Now I'll show you the code I wrote:
L = [] # final list that contains the other lists
for p in csv_gdf['geometry']: # for all the point contained in the dataframe...
for files in os.listdir(raster_dir): #...and for all the rasters in that folder...
if files[-4:] == '.img': #...which extention is .img...
r = rio.open(raster_dir + '\\' + files) # open the raster
list_row = []
# read the raster band1 values at those coordinates...
x = p.xy[0][0]
y = p.xy[1][0]
row, col = r.index(x, y)
chl_value = r.read(1)[row, col]
# append to list_row the coordinates ad then the raster value.
list_row.append(p.xy[0][0])
list_row.append(p.xy[1][0])
list_row.append(round(float(chl_value), 2))
# then, append all the lists created in the loop to the final list
L.append(list_row)
Could you please help me? Every piece of advice is widely appreciated!
Thank you in advance!
Hope you guys are OK!
A:
Try this,
data = [[10.2427, 43.5703, 0.63],
[10.2427, 43.5703, 0.94],
[10.2427, 43.5703, 0.76],
[10.2427, 43.5703, 0.76],
[10.2427, 43.5703, 1.03],
[10.2427, 43.5703, 0.86],
[10.2427, 43.5703, 0.74],
[10.2427, 43.5703, 1.71],
[10.2427, 43.5703, 3.07],
[12.199, 41.708, 0.96],
[12.199, 41.708, 0.89],
[12.199, 41.708, 1.29],
[12.199, 41.708, 0.24],
[12.199, 41.708, 1.59],
[12.199, 41.708, 1.78],
[12.199, 41.708, 0.39],
[12.199, 41.708, 1.54],
[12.199, 41.708, 1.62]]
df = pd.DataFrame(data)
print(df.groupby([0, 1])[2].apply(list).reset_index().apply(lambda x: [x[0], x[1]]+x[2], axis=1).values.tolist())
Explanation:
Create dataframe out of your current output
groupby first two cols and get other elements as list
Restructure to get the expected output
O/P:
[[10.2427, 43.5703, 0.63, 0.94, 0.76, 0.76, 1.03, 0.86, 0.74, 1.71, 3.07], [12.199, 41.708, 0.96, 0.89, 1.29, 0.24, 1.59, 1.78, 0.39, 1.54, 1.62]]
Note: The above code is just to give you an idea, it can be further improved. If I get some free time, I will post that as well.
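As an alternative to reshaping afterwards (this is not part of the answer above), the loop from the question can also be restructured so that each point builds a single row that collects all raster values; it assumes os, rio (rasterio), raster_dir and csv_gdf exactly as defined in the question:
import os

L = []
for p in csv_gdf['geometry']:
    # one row per point: [x, y, value_raster_1, ..., value_raster_n]
    list_row = [p.xy[0][0], p.xy[1][0]]
    for files in os.listdir(raster_dir):
        if files.endswith('.img'):
            with rio.open(os.path.join(raster_dir, files)) as r:
                row, col = r.index(p.xy[0][0], p.xy[1][0])
                list_row.append(round(float(r.read(1)[row, col]), 2))
    L.append(list_row)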
| Append values to lists after raster sampling, in a loop | I have multiple rasters in a specific directory from which I need to extract band1 values (chlorophyll concentration) using a CSV containg the coordinates of the points of interest.
This is the CSV (read as GeoDataFrame):
point_id point_name latitude longitude geometry
0 1 'Forte dei Marmi' 10.2427 43.5703 POINT (10.24270 43.57030)
1 2 'La Spezia' 9.9030 44.0341 POINT (9.90300 44.03410)
2 3 'Orbetello' 11.2029 42.4488 POINT (11.20290 42.44880)
3 4 'Portoferraio' 10.3328 42.8080 POINT (10.33280 42.80800)
4 5 'Fregene' 12.1990 41.7080 POINT (12.19900 41.70800)
All the rasters I need to sample are in raster_dir = 'C:/sentinel_3_processing/'
My final purpose is to have a dataframe with as much columns as raster in the folder.
The samlpling of all the rasters is working, the output is correct but I need it to be different.
As I explained before.
The output I got is:
[[10.2427, 43.5703, 0.63],
[10.2427, 43.5703, 0.94],
[10.2427, 43.5703, 0.76],
[10.2427, 43.5703, 0.76],
[10.2427, 43.5703, 1.03],
[10.2427, 43.5703, 0.86],
[10.2427, 43.5703, 0.74],
[10.2427, 43.5703, 1.71],
[10.2427, 43.5703, 3.07],,
[...],
[12.199, 41.708, 0.96],
[12.199, 41.708, 0.89],
[12.199, 41.708, 1.29],
[12.199, 41.708, 0.24],
[12.199, 41.708, 1.59],
[12.199, 41.708, 1.78],
[12.199, 41.708, 0.39],
[12.199, 41.708, 1.54],
[12.199, 41.708, 1.62]]
But I need something like that:
[
[10.2427, 43.5703, 0.63, 0.94, 0.76, 0.76, 1.03, 0.86, 0.74, 1.71, 3.07],
[...],
[12.199, 41.708, 0.96, 0.89, 1.29, 0.24, 1.59, 1.78, 0.39, 1.54, 1.62]]
]
Now I'll show you the code I wrote:
L = [] # final list that contains the other lists
for p in csv_gdf['geometry']: # for all the point contained in the dataframe...
for files in os.listdir(raster_dir): #...and for all the rasters in that folder...
if files[-4:] == '.img': #...which extention is .img...
r = rio.open(raster_dir + '\\' + files) # open the raster
list_row = []
# read the raster band1 values at those coordinates...
x = p.xy[0][0]
y = p.xy[1][0]
row, col = r.index(x, y)
chl_value = r.read(1)[row, col]
# append to list_row the coordinates ad then the raster value.
list_row.append(p.xy[0][0])
list_row.append(p.xy[1][0])
list_row.append(round(float(chl_value), 2))
# then, append all the lists created in the loop to the final list
L.append(list_row)
Could you please help me? Every piece of advice is widely appreciated!
Thank you in advance!
Hope your guys are ok!
| [
"Try this,\ndata = [[10.2427, 43.5703, 0.63],\n [10.2427, 43.5703, 0.94],\n [10.2427, 43.5703, 0.76],\n [10.2427, 43.5703, 0.76],\n [10.2427, 43.5703, 1.03],\n [10.2427, 43.5703, 0.86],\n [10.2427, 43.5703, 0.74],\n [10.2427, 43.5703, 1.71],\n [10.2427, 43.5703, 3.07],\n [12.199, 41.708, 0.96],\n [12.199, 41.708, 0.89],\n [12.199, 41.708, 1.29],\n [12.199, 41.708, 0.24],\n [12.199, 41.708, 1.59],\n [12.199, 41.708, 1.78],\n [12.199, 41.708, 0.39],\n [12.199, 41.708, 1.54],\n [12.199, 41.708, 1.62]]\n \ndf = pd.DataFrame(data)\n\nprint(df.groupby([0, 1])[2].apply(list).reset_index().apply(lambda x: [x[0], x[1]]+x[2], axis=1).values.tolist())\n\nExplanation:\n\nCreate dataframe out of your current output\ngroupby first two cols and get other elements as list\nRestructure to get the expected output\n\nO/P:\n[[10.2427, 43.5703, 0.63, 0.94, 0.76, 0.76, 1.03, 0.86, 0.74, 1.71, 3.07], [12.199, 41.708, 0.96, 0.89, 1.29, 0.24, 1.59, 1.78, 0.39, 1.54, 1.62]]\n\nNote: The above code is just to give you an idea, it can be further improved. If I get some free time, I will post that as well.\n"
] | [
0
] | [] | [] | [
"dataframe",
"list",
"loops",
"pandas",
"python"
] | stackoverflow_0074656778_dataframe_list_loops_pandas_python.txt |
Q:
Python how do i get out of the while loop
I want to go back and forth between these two menu variables, but the session ends because of the break. Can you help me?
login = """
(1) # basic Python Learnig
(2) # JavaScrpit
(3) # SQL
(4) # C++
(0) # Exit.
"""
print(login)
python = """
(1) # -getting started python
(2) # -python syntax
(3) # -python comment lines
(4) # ---
(5) # --return to previous list
(0) # Exit.
"""
choice = 1
while choice == 1:
quest = ("What would you like to do ? " + name )
reply = input(" #")
if reply == "0" :
exit("..")
elif reply == "1" :
print(python)
elif reply == "2" :
print("JavaScrpit")
elif reply == "3" :
print("Sql")
elif reply == "4" :
print("C")
else :
print("Bad VOTE", name)
break
learnig = 1
while learnig == 1:
reply1 = input(" #")
if reply1 == "1" :
print("OKEY ?")
elif reply1 == "2" :
print("OKEY ?")
elif reply1 == "3" :
print("OKEY ?")
elif reply1 == "4" :
print("OKEY ?")
else :
print("Bad VOTE", name)
A:
You need a specific condition under which to execute the "break". Here, you are breaking at every iteration.
By looking at the code, I think you have to indent break one more level and put it under print("Bad VOTE", name). So only in the case of a bad vote does the loop break.
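A minimal sketch of the indentation the answer describes (it assumes python and name are defined earlier, as in the question's snippet):
choice = 1
while choice == 1:
    reply = input(" #")
    if reply == "0":
        exit("..")
    elif reply == "1":
        print(python)
    else:
        print("Bad VOTE", name)
        break  # now the loop only ends on a bad vote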
| Python how do i get out of the while loop | I want to go back and forth between these two variables, but it ends the session with break. Can you be my assistant?
login = """
(1) # basic Python Learnig
(2) # JavaScrpit
(3) # SQL
(4) # C++
(0) # Exit.
"""
print(login)
python = """
(1) # -getting started python
(2) # -python syntax
(3) # -python comment lines
(4) # ---
(5) # --return to previous list
(0) # Exit.
"""
choice = 1
while choice == 1:
quest = ("What would you like to do ? " + name )
reply = input(" #")
if reply == "0" :
exit("..")
elif reply == "1" :
print(python)
elif reply == "2" :
print("JavaScrpit")
elif reply == "3" :
print("Sql")
elif reply == "4" :
print("C")
else :
print("Bad VOTE", name)
break
learnig = 1
while learnig == 1:
reply1 = input(" #")
if reply1 == "1" :
print("OKEY ?")
elif reply1 == "2" :
print("OKEY ?")
elif reply1 == "3" :
print("OKEY ?")
elif reply1 == "4" :
print("OKEY ?")
else :
print("Bad VOTE", name)
| [
"You need a specific condition in which to put the \"break\". Here, at every iteration you are breaking.\nBy looking at the code, i think you have to indent break another time and put it under \"print(\"Bad VOTE\", name)\". So in the case of a bad vote, the loop brakes.\n"
] | [
0
] | [] | [] | [
"python",
"while_loop"
] | stackoverflow_0074656883_python_while_loop.txt |
Q:
python not recognized in Windows CMD even after adding to PATH
I'm trying to -learn to write and- run Python scripts on my Windows 7 64 bit machine. I installed Python in C:/Python34, and I added this to my Windows' PATH variable :
C:\Python34; C:\Python34\python.exe
(the second one is probably meaningless but I tried) and still I get this error in Windows command line :
C:\Users\me>python test.py
'python' is not recognized as an internal or external command,
operable program or batch file.
So how do I truly install Python on my Windows x64 machine ?
A:
This might be trivial, but have you tried closing your command line window and opening a new one? This is supposed to reload all the environment variables.
Try typing
echo %PATH%
into the command prompt and see if you can find your Python directory there.
Also, the second part of your addition to the PATH environment variable is indeed unnecessary.
A:
I had the same problem: python not being recognized, with python in the path, which was not truncated.
Following the comment of eryksun in yossim's answer:
Also, if you installed for all users you should have %SystemRoot%\py.exe, which is typically C:\Windows\py.exe. So without setting Python's directory in PATH you can simply run py to start Python; if 2.x is installed use py -3 since Python 2 is the default. – eryksun
I tried to use py instead of python and it worked.
Meaning:
python setup.py build -> does NOT work.
py setup.py build -> does work.
Hope it helps
A:
I was also having the same problem.
It turns out the path I added included '..\python.exe' at the end, which was not required. I only needed to add the directory that 'python.exe' is in (which in my case is the Anaconda distribution directory in the Users folder), similar to what we do when adding the JDK to the system's PATH variable.
Hope it helps!
A:
It wasn't working for me even after adding the path. What finally did the trick was changing the order of the listed paths in the PATH variable. I moved %USERPROFILE%\AppData\Local\Microsoft\WindowsApps down instead of having it as the first path listed there.
A:
Environment PATH Length Limitation is 1024 characters
If restarting your cmd window does not work you might have reached the character limit for PATH, which is a surprisingly short 1024 characters.
Note that the user interface will happily allow you to define a PATH that is way longer than 1024 characters, and will just truncate anything beyond this. Use
echo %PATH%
in your cmd window to see if the PATH being truncated.
Solution
Unfortunately, there is no good way to fix this besides removing something else from your PATH.
NOTE: Your PATH = SYSTEM_PATH + USER_PATH, so you need to make sure the combined is < 1024.
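If python itself is not recognised yet, the py launcher mentioned in another answer can still run a rough check of the combined PATH length (a small sketch, assuming py.exe is installed):
# Run as:  py -c "import os; print(len(os.environ['PATH']))"
import os
print(len(os.environ["PATH"]))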
A:
Also, make sure to leave no spaces after the semi-colon.
For example, this didn't work for me:
C:\Windows\system32; C:\Python27; C:\Python27\Scripts;
But, this did:
C:\Windows\system32;C:\Python27;C:\Python27\Scripts;
A:
I'm late to the game here, but I'd like to share my solution for future users. The previous answers were on the right track, but if you do not open the CMD as an administrator, then you will be thrown that same error. I know this seems trivial and obvious, but after spending the past 8 hours programming before attempting to install Django for the first time, you might be surprised at the stupid mistakes you might make.
A:
I faced the same problem even though my path contains only 400 characters.
Try updating the path from the command line (run as administrator).
Command to update path: setx path "%path%;c:\examplePath"
After this command I could see that paths that I configured earlier in environment variables got updated and working.
To check the configured paths: echo %PATH%
A:
I was facing a similar problem. What helped me is the where command.
C:\WINDOWS\system32>where python
C:\Users\xxxxxxx\AppData\Local\Microsoft\WindowsApps\python.exe
C:\Program Files (x86)\Microsoft Visual
Studio\Shared\Python39_86\python.exe
Updating the PATH variable to point to only one desired directory (basically I removed %USERPROFILE%\AppData\Local\Microsoft\WindowsApps from PATH) fixed my problem.
A:
I had the same issue with Python 2.7 on Windows 10 until I changed the file path in Environment Variables to the folder path, i.e., C:\Python27\python.exe didn't work but C:\Python27\ did work.
A:
I did everything:
Added Python to PATH
Uninstalled all the Pythons (both the python.org download and the Microsoft Store one) and reinstalled from python.org
Changed the order of PATH
Deleted %USERPROFILE%\AppData\Local\Microsoft\WindowsApps from PATH
But nothing worked. What worked for me was:
Settings > Application > App execution aliases. Then disable all the Pythons from here and it worked!
| python not recognized in Windows CMD even after adding to PATH | I'm trying to -learn to write and- run Python scripts on my Windows 7 64 bit machine. I installed Python in C:/Python34, and I added this to my Windows' PATH variable :
C:\Python34; C:\Python34\python.exe
(the second one is probably meaningless but I tried) and still I get this error in Windows command line :
C:\Users\me>python test.py
'python' is not recognized as an internal or external command,
operable program or batch file.
So how do I truly install Python on my Windows x64 machine ?
| [
"This might be trivial, but have you tried closing your command line window and opening a new one? This is supposed to reload all the environment variables.\nTry typing\necho %PATH%\n\ninto the command prompt and see if you can find your Python directory there.\nAlso, the second part of your addition to the PATH environment variable is indeed unnecessary.\n",
"I had the same problem: python not being recognized, with python in the path which was was not truncated. \nFollowing the comment of eryksun in yossim's answer:\n\nAlso, if you installed for all users you should have %SystemRoot%\\py.exe, which >is typically C:\\Windows\\py.exe. So without setting Python's directory in PATH >you can simply run py to start Python; if 2.x is installed use py -3 since >Python 2 is the default. – eryksun \n\nI tried to use py instead of python and it worked.\nMeaning: \npython setup.py build -> does NOT work.\npy setup.py build -> does work.\nHope it helps\n",
"I was also having the same problem.\nTurns out the path I added included '..\\python.exe' at the end, which as turns out was not required. I only needed to add the directory in which 'python.exe' is in (which in my case is the Anaconda's distribution directory in Users folder), similar to what we do when installing JDK in our system's PATH variable.\nHope it helps!\n",
"It wasn't working for me even after adding the path. What finally did the trick, was changing the order of listed paths in the PATH variable. I moved %USERPROFILE%\\AppData\\Local\\Microsoft\\WindowsApps down vs. having it the first path listed there.\n",
"Environment PATH Length Limitation is 1024 characters\nIf restarting your cmd window does not work you might have reached the character limit for PATH, which is a surprisingly short 1024 characters.\nNote that the user interface will happily allows you to define a PATH that is way longer than 1024, and will just truncate anything longer than this. Use\necho %PATH%\n\nin your cmd window to see if the PATH being truncated.\nSolution\nUnfortunately, there is no good way to fix this besides removing something else from your PATH.\n\nNOTE: Your PATH = SYSTEM_PATH + USER_PATH, so you need to make sure the combined is < 1024.\n",
"Also, make sure to leave no spaces after the semi-colon.\nFor example, this didn't work for me:\nC:\\Windows\\system32; C:\\Python27; C:\\Python27\\Scripts;\nBut, this did:\nC:\\Windows\\system32;C:\\Python27;C:\\Python27\\Scripts;\n",
"I'm late to the game here, but I'd like to share my solution for future users. The previous answers were on the right track, but if you do not open the CMD as an administrator, then you will be thrown that same error. I know this seems trivial and obvious, but after spending the past 8 hours programming before attempting to install Django for the first time, you might be surprised at the stupid mistakes you might make.\n",
"I have faced same problem even though my path contains 400 characters.\nTry to update the path from the command line(Run as administrator)\nCommand to update path: setx path \"%path%;c:\\examplePath\"\nAfter this command I could see that paths that I configured earlier in environment variables got updated and working.\nTo check the configured paths: echo %PATH%\n",
"I was facing similar porblem. What helped me is where command.\n\nC:\\WINDOWS\\system32>where python\nC:\\Users\\xxxxxxx\\AppData\\Local\\Microsoft\\WindowsApps\\python.exe\nC:\\Program Files (x86)\\Microsoft Visual\nStudio\\Shared\\Python39_86\\python.exe\n\nOn updating PATH variable to point to only one desired directory (basically I removed %USERPROFILE%\\AppData\\Local\\Microsoft\\WindowsApps from PATH) fixed my problem.\n",
"I had the same issue with Python 2.7 on Windows 10 until I changed the file path in Enviroment Variables to the folder path, ie C:\\Python27\\python.exe didn't work but C:\\Python27\\ did work.\n",
"I did everything:\n\nAdded Python to PATH\nUninstall all the Pythons - Both from downloaded python.org and Microsoft Store and reinstall from python.org\nChange the order of PATH\nDeleted %USERPROFILE%\\AppData\\Local\\Microsoft\\WindowsApps from PATH\n\nBut nothing worked. What worked for me was:\nSettings > Application > App execution aliases. Then disable all the Pyhtons from here and it worked!\n\n"
] | [
24,
22,
8,
6,
4,
2,
1,
1,
1,
0,
0
] | [
"For me, installing the 'Windows x86-64 executable installer' from the official python portal did the trick.\nPython interpreter was not initially recognized, while i had installed 32 bit python.\nUninstalled python 32 bit and installed 64 bit.\nSo, if you are on a x-64 processor, install 64bit python.\n",
"I tried it multiple times with the default installer option, the first one, (Python 3.7.3) with both 'add to environment variable' and 'all users' checked, though the latter was greyed out and couldn't be unchecked.\nIt failed to work for other users except for the user I installed it under until I uninstalled it and chose \"Custom Install\". It then clearly showed the install path being in the C:\\Program Files\\Python37 directory when it was failing to install it there the other way even though the 'All Users' option was checked.\n",
"Same thing was happening with me when i was trying to open the python immediately with CMD.\nThen I kept my in sleep mode and started CMD using these Key Windows_key+R, typed cmd and OK. Then the package of python worked perfectly.\n",
"\nUninstall python and pyqt\nThen go to pyqt setup and open installation but don't install. You will see a message box saying something like pyqt version built with python version 32bit/64bit. \nThen see python version bit and download that version from python.org from all release menu. \nThen first install python and then install pyqt. It will work like butter.\n\n",
"I spent sometime checking and rechecking the path and restarting to no avail.\nThe only thing that worked for me was to rename the executable C:\\Python34\\python.exe to C:\\Python34\\python34.exe. This way, calling typing python34 at the command line now works.\nOn windows it seems that when calling 'python', the system finds C:\\Python27 in the path before it finds C:\\Python34\nI'm not sure if this is the right way to do this, seems like a hack, but it seems to work OK.\n"
] | [
-1,
-1,
-1,
-1,
-2
] | [
"cmd",
"command_line",
"python",
"python_3.x",
"windows"
] | stackoverflow_0024186823_cmd_command_line_python_python_3.x_windows.txt |
Q:
How to convert YOLO format annotations to x1, y1, x2, y2 coordinates in Python?
I would like to know how to convert annotations in YOLO format (e.g., center_X, center_y, width, height = 0.069824, 0.123535, 0.104492, 0.120117) to x1, y1, x2, y2 coordinates?
A:
If I recall correctly:
x1 = (center_X-width/2)*image_width
x2 = (center_X+width/2)*image_width
y1 = (center_y-height/2)*image_height
y2 = (center_y+height/2)*image_height
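As an illustration (this snippet is not part of the original answer; the box values are the example numbers from the question and the 640x480 image size is an assumption), the formulas translate directly to Python:
def yolo_to_corners(cx, cy, w, h, image_width, image_height):
    # scale the normalized center/size back to pixels and offset by half the box size
    x1 = (cx - w / 2) * image_width
    y1 = (cy - h / 2) * image_height
    x2 = (cx + w / 2) * image_width
    y2 = (cy + h / 2) * image_height
    return x1, y1, x2, y2

print(yolo_to_corners(0.069824, 0.123535, 0.104492, 0.120117, 640, 480))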
A:
Given that the upper-left corner of the image is [0,0]: For the upper-left corner you have to do [x,y] = [center_X, center_Y] - 1/2 * [width, height] . For the bottom-right corner [x,y] = [center_X, center_Y] + 1/2 * [width, height] .
A:
def get_coord(label_file, img_width, img_height):
    lfile = open(label_file)
    coords = []
    all_coords = []
    for line in lfile:
        l = line.split(" ")
        # l[0] is the class id; l[1:5] are cx, cy, w, h in normalized YOLO format
        coords = list(map(float, l[1:5]))
        x1 = float(img_width) * (2.0 * coords[0] - coords[2]) / 2.0
        y1 = float(img_height) * (2.0 * coords[1] - coords[3]) / 2.0
        x2 = float(img_width) * (2.0 * coords[0] + coords[2]) / 2.0
        y2 = float(img_height) * (2.0 * coords[1] + coords[3]) / 2.0
        all_coords.append(list(map(int, [x1, y1, x2, y2])))
    lfile.close()
    return all_coords
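A hypothetical usage example (the label file name and image size are made up; each line of the label file is expected to look like "<class> <cx> <cy> <w> <h>"):
boxes = get_coord("image_0001.txt", 640, 480)
print(boxes)  # e.g. [[x1, y1, x2, y2], ...] as integer pixel coordinates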
| How to convert YOLO format annotations to x1, y1, x2, y2 coordinates in Python? | I would like to know how to convert annotations in YOLO format (e.g., center_X, center_y, width, height = 0.069824, 0.123535, 0.104492, 0.120117) to x1, y1, x2, y2 coordinates?
| [
"If I recall correctly:\nx1 = (center_X-width/2)*image_width\nx2 = (center_X+width/2)*image_width\ny1 = (center_y-height/2)*image_height\ny2 = (center_y+height/2)*image_height\n\n",
"Given that the upper-left corner of the image is [0,0]: For the upper-left corner you have to do [x,y] = [center_X, center_Y] - 1/2 * [width, height] . For the bottom-right corner [x,y] = [center_X, center_Y] + 1/2 * [width, height] .\n",
"def get_coord(label_file, img_width, img_height):\n\nlfile = open(label_file)\ncoords = []\nall_coords = []\n\nfor line in lfile:\n l = line.split(\" \")\n \n coords = list(map(float, list(map(float, l[1:5]))))\n x1 = float(img_width) * (2.0 * float(coords[0]) - float(coords[2])) / 2.0\n y1 = float(img_height) * (2.0 * float(coords[1]) - float(coords[3])) / 2.0\n x2 = float(img_width) * (2.0 * float(coords[0]) + float(coords[2])) / 2.0\n y2 = float(img_height) * (2.0 * float(coords[1]) + float(coords[3])) / 2.0\n tmp = [x1, y1, x2, y2]\n all_coords.append(list(map(int, tmp)))\nlfile.close()\nreturn all_coords\n\n"
] | [
2,
1,
0
] | [] | [] | [
"computer_vision",
"python",
"yolo"
] | stackoverflow_0066801530_computer_vision_python_yolo.txt |
Q:
Windows 11 pycocotools package installation error
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe' failed with exit code 2
I have installed C++ build tools and more.
Install Visual C++ 2015 Build Tools from https://go.microsoft.com/fwlink/?LinkId=691126 with default selection. This cannot be installed.
Thank you for solutions.
A:
I assume you have already installed Visual C++ Latest Build Tools
from Visual Studio 2022 Build Tools: https://aka.ms/vs/17/release/vs_buildtools.exe
Now, Check the Pycocotools/PythonAPI folder and modify the setup.py file
Change the following couple of lines as shown in the code below, then run the build command again.
Previously:
extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99']
Remove '-Wno-cpp', '-Wno-unused-function' from the ext_modules Extension in setup.py
Only keep '-std=c99', and it will look like this.
extra_compile_args=['-std=c99']
Now run this command once again.
python setup.py build_ext --inplace
This should successfully build pycocotools on Windows 11.
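For reference, a rough sketch of what the edited ext_modules entry in PythonAPI/setup.py might look like after the change (this is only an excerpt; the exact source paths and surrounding imports come from the file you already have and may differ between pycocotools versions):
ext_modules = [
    Extension(
        'pycocotools._mask',
        sources=['../common/maskApi.c', 'pycocotools/_mask.pyx'],
        include_dirs=[np.get_include(), '../common'],
        extra_compile_args=['-std=c99'],  # '-Wno-cpp' and '-Wno-unused-function' removed
    )
]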
| Windows 11 pycocotools package installation error | error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe' failed with exit code 2
I have installed C++ build tools and more.
Install Visual C++ 2015 Build Tools from https://go.microsoft.com/fwlink/?LinkId=691126 with default selection. This cannot be install.
Thank you for solutions.
| [
"I assume you have already installed Visual C++ Latest Build Tools\nfrom Visual Studio 2022 Build Tools: https://aka.ms/vs/17/release/vs_buildtools.exe\nNow, Check the Pycocotools/PythonAPI folder and modify the setup.py file\nchange these couple of lines as the following code and run this command again\nPreviously :\nextra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99']\nRemove '-Wno-cpp', '-Wno-unused-function' from the ext_modules Extension in setup.py\nOnly keep '-std=c99', and it will look like this.\nextra_compile_args=['-std=c99']\nNow run this command once again.\npython setup.py build_ext --inplace\nthis should successfully build the pycocotools in windows 11.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074596492_python.txt |
Q:
How to pick a random item from an input list?
I am making a program that asks how many players are playing, and then asks to input the names of those players. Then, I want it to print a random player, but I can't figure out how.
The code right now prints a random letter from the last name given, I think:
import random
player_numberCount = input("How many players are there: ")
player_number = int(player_numberCount)
for i in range(player_number):
ask_player = input("name the players: ")
print(random.choice(ask_player))
A:
You need to add each player name entered to a list. Here is a starting point of what you need in your code:
from random import choice
number_of_players = int(input("How many players are there: "))
players = []
for _ in range(number_of_players):
players.append(input("name the players: "))
print(choice(players))
A:
That loop reassigns the ask_player variable on each iteration, erasing the previous value.
Presumably you meant to save each value in a list:
players = []
for i in range(player_number):
players.append(input("Player name: "))
print(random.choice(players))
A:
The problem is that in each for-loop iteration, you are reassigning the ask_player var to a string. When you pass a string to random.choice(...), it picks a random letter of that string (since strings can be indexed like arrays). Just define an array before the loop and append on each iteration:
import random
player_numberCount = input("How many players are there: ")
player_number = int(player_numberCount)
players = []
for i in range(player_number):
players.append(input(f"name player {i + 1}: "))
print(random.choice(players))
A:
import random
player_number = int(input("How many players are there: "))
player_list = []
for _ in range(player_number):
ask_player = input("name the players: ")
player_list.append(ask_player)
print(player_list[random.randint(0, player_number - 1)])  # randint is inclusive at both ends
| How to pick a random item from an input list? | I am making a program that asks how many players are playing, and then asks to input the names of those players. Then, I want it to print a random player, but I can't figure it out how.
The code right now prints a random letter from the last name given, I think:
import random
player_numberCount = input("How many players are there: ")
player_number = int(player_numberCount)
for i in range(player_number):
ask_player = input("name the players: ")
print(random.choice(ask_player))
| [
"You need to add each player name entered to a list. Here is a starting point of what you need in your code:\nfrom random import choice\n\nnumber_of_players = int(input(\"How many players are there: \"))\nplayers = []\n\nfor _ in range(number_of_players):\n players.append(input(\"name the players: \"))\n\nprint(choice(players))\n\n",
"That loop reassigns the ask_player variable on each iteration, erasing the previous value.\nPresumably you meant to save each value in a list:\nplayers = []\nfor i in range(player_number):\n players.append(input(\"Player name: \"))\n\nprint(random.choice(players))\n\n",
"The problem is that in each for-loop iteration, you are reassigning the ask_player var to a string. When you pass a string to random.choice(...), it picks a random letter of that string (since strings can be indexed like arrays). Just define an array before the loop and append on each iteration:\nimport random\n\nplayer_numberCount = input(\"How many players are there: \")\nplayer_number = int(player_numberCount)\n\nplayers = []\nfor i in range(player_number):\n players.append(input(f\"name player {i + 1}: \"))\n\nprint(random.choice(players))\n\n",
"import random\n\nplayer_number = int(input(\"How many players are there: \"))\nplayer_list = []\n\n\nfor _ in range(player_number):\n ask_player = input(\"name the players: \")\n player_list.append(ask_player)\n\nprint(player_list[random.randint(0, player_number)])\n\n"
] | [
2,
0,
0,
0
] | [] | [] | [
"input",
"python"
] | stackoverflow_0074651157_input_python.txt |
Q:
AttributeError: type Object 'Widget' has no attribute '_ipython_display_'
In one of my Python test cases I have a mockcomm object, and in the mockcomm function I am using ipywidgets. I recently upgraded ipywidgets from version 7 to 8. The code works fine with ipywidgets version 7, but after upgrading I am facing the error below, which says the attribute is not defined on the object. Has anyone faced this error and can help me?
A:
According to this issue, from ipywidgets 8, widgets uses _repr_mimebundle_ instead of _ipython_display_.
So _widget_attrs['_ipython_display_'] will raise a KeyError. Using _repr_mimebundle_ as the key will solve the problem.
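For illustration only — assuming _widget_attrs is a dict in your own test code, as referenced above — a version-tolerant way to pick the key could look like this:
import ipywidgets

def display_hook_key():
    # ipywidgets >= 8 replaced _ipython_display_ with _repr_mimebundle_
    major = int(ipywidgets.__version__.split('.')[0])
    return '_repr_mimebundle_' if major >= 8 else '_ipython_display_'

# e.g. original_hook = _widget_attrs[display_hook_key()]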
| AttributeError: type Object 'Widget' has no attribute '_ipython_display_' | In one of my python test case I have mockcomm object and in the mockcomm function I am using ipywidgets. Recently Upgraded ipywidgets version from 7 to 8. The code is working fine in version 7 of ipywidgets but when upgraded I am facing the below error. Saying Object has not attribute defined. did any one faced the error can help me.
| [
"According to this issue, from ipywidgets 8, widgets uses _repr_mimebundle_ instead of _ipython_display_.\nSo _widget_attrs['_ipython_display_'] will raise KeyError. Use _repr_mimebundle_ as key will sovle problem.\n"
] | [
0
] | [] | [] | [
"ipywidgets",
"jupyter_notebook",
"python"
] | stackoverflow_0073578290_ipywidgets_jupyter_notebook_python.txt |
Q:
User Login Authentication using Django Model and form
I am trying to set up user authentication for the login page using forms, comparing the input to my database values, but it does not work. I also tried using the logic from this particular question, User Login Authentication using forms and Django, to solve my problem but it didn't help.
Models.py
from django.db import models
from django.contrib.auth.password_validation import validate_password
class student(models.Model):
first_name = models.CharField(max_length=150)
last_name = models.CharField(max_length=150)
matric_number = models.CharField(max_length=9)
email = models.EmailField(max_length=50)
password1 = models.CharField(max_length=255, validators=[validate_password])
password2 = models.CharField(max_length=255)
def __str__(self):
return (self.matric_number)
This view saves user info to database
def student(request):
if request.method == 'POST':
form = studentForm(request.POST)
if form.is_valid():
sign_up = form.save(commit=False)
#sign_up.password1 = make_password(form.cleaned_data['password1'])
#sign_up.password2 = make_password(form.cleaned_data['password2'])
sign_up.status = 1
sign_up.save()
user = form.cleaned_data.get('matric_number')
messages.success(request, "Account was created for "+str(user))
return redirect(signin)
else:
form = studentForm()
return render(request, 'Student.html',{
"form": form
})
This is the signin view
def signin(request):
if request.method == 'POST':
form = LoginForm(request.POST)
if form.is_valid():
username = form.cleaned_data.get('username')
password = form.cleaned_data.get('password')
try:
student = student.object.get(username=username, password=password)
return redirect(files)
except:
messages.success(request, "Error")
else:
form = LoginForm()
return render(request, "SignIn.html",{
"form":form
})
This is my form.py
class studentForm(forms.ModelForm):
class Meta:
model=student
fields="__all__"
widgets={
'first_name':forms.TextInput(attrs={'placeholder': 'Enter Your First Name'}),
'last_name':forms.TextInput(attrs={'placeholder': 'Enter Your Last Name'}),
'matric_number':forms.TextInput(attrs={'placeholder': 'Enter Your Matric Number'}),
'email':forms.EmailInput(attrs={'placeholder': '[email protected]'}),
'password1':forms.PasswordInput(attrs={'placeholder': 'Enter Your Preferred Password','id':'password'}),
'password2':forms.PasswordInput(attrs={'placeholder':'Confirm Your Password', 'id':'password1'})
}
def clean(self):
super(studentForm, self).clean()
password1 = self.cleaned_data.get('password1')
password2 = self.cleaned_data.get('password2')
matric_number = self.cleaned_data.get('matric_number')
email = self.cleaned_data.get('email')
try:
if password1 != password2:
self.errors[''] = self.error_class(["The two password fields must match"])
elif len(matric_number) != 9:
self.errors[''] = self.error_class(["You have entered an invalid matric number"])
elif len(matric_number) == 9:
matric_number = int(matric_number)
except ValueError:
self.errors[''] = self.error_class(["You have entered an invalid matric number"])
for instance in student.objects.all():
if instance.matric_number == str(matric_number):
self.errors[''] = self.error_class(["Matric number already exist"])
elif instance.email == email:
self.errors[''] = self.error_class(["E-mail address already exist"])
class LoginForm(forms.Form):
matric_number = forms.CharField(max_length=9, widget=forms.TextInput(attrs={'id': 'username', 'placeholder': 'Enter Your Staff Id Or Matric Number'}))
password1 = forms.CharField(max_length=9, widget=forms.PasswordInput(attrs={'id': 'password', 'placeholder':'Enter Your password'}))
A:
Stop reinventing the wheel. Also, class names are supposed to be named with PascalCase.
Use AbstractUser model:
from django.contrib.auth.models import AbstractUser
class Student(AbstractUser):
...
and in your main urls.py:
from django.contrib.auth import views as auth_views
urlpatterns = [
...
path('login/', auth_views.LoginView.as_view(), name='login'),
...
]
It is a much faster and SAFER way to create a new user.
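One detail worth spelling out (a sketch, not part of the original answer): when you subclass AbstractUser you also have to point Django at the custom model in settings.py; the app label below is an assumption, so adjust it to your own app:
# settings.py
AUTH_USER_MODEL = 'accounts.Student'  # '<app_label>.<ModelName>' of your custom user model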
A:
So I figured out how to solve my problem. By using the AbstractUser model, I was able to create a custom user and then create another model with a ForeignKey to the User model, therefore allowing me to tie every user to their profile.
Here is my models.py
from django.db import models
from django.contrib.auth.models import AbstractUser
# Create your models here.
class User(AbstractUser):
pass
def __str__(self):
return self.username
class UserProfile(models.Model):
"""
This is the one for model.py
"""
username = models.ForeignKey(User, on_delete=models.CASCADE, null=True, default="")
profile_picture = models.ImageField(blank=True, null=True, default="")
matricno = models.CharField(max_length=9, default="", primary_key=True)
email = models.EmailField(default="")
first_name = models.CharField(max_length=200, default="")
last_name = models.CharField(max_length=255, default="")
class Meta:
verbose_name_plural = "Users Profile"
def __str__(self):
return self.first_name+ " "+self.last_name
And here is my views.py
def signup(request):
if request.method == "POST":
form = Signup(request.POST)
if form.is_valid():
username = request.POST["username"]
email = request.POST["email"]
password = request.POST["password"]
password2 = request.POST["password2"]
user = User.objects.create_user(
username=username,
password=password,
email=email,
)
user.save()
login(request, user)
messages.success(request, "Account Created successfully for " + username)
return redirect(details)
else:
form = Signup()
return render(request, "accounts/register.html", {"form": form})
def details(request, username):
user = User.objects.get(username=username)
form = Details()
if request.method == "POST":
form = Details(request.POST, request.FILES)
if form.is_valid():
detail = form.save(commit=False)
detail.username = request.user
detail.save()
return redirect(success, pk=detail.pk)
else:
form = Details(initial={"matricno":request.user.username})
return render(request, "details.html", {"form":form})
And finally my forms.py, which I use for creating the signup form and performing validation.
class Signup(forms.Form):
username = forms.CharField(
max_length=9,
widget=forms.TextInput(attrs={"placeholder": "Enter Your Matric Number"}),
)
email = forms.EmailField(
max_length=255,
widget=forms.EmailInput(attrs={"placeholder": "Enter Your E-mail Address"}),
)
password = forms.CharField(
max_length=255,
widget=forms.PasswordInput(
attrs={"placeholder": "Enter Your Password", "id": "password"}
),
)
password2 = forms.CharField(
max_length=255,
widget=forms.PasswordInput(
attrs={"placeholder": "Confirm Your Password", "id": "password2"}
),
)
def clean(self):
super(Signup, self).clean()
password = self.cleaned_data.get("password")
password2 = self.cleaned_data.get("password2")
username = self.cleaned_data.get("username")
email = self.cleaned_data.get("email")
if password != password2:
self.errors[""] = self.error_class(["The two password fields must match"])
for instance in User.objects.all():
if instance.username == str(username):
self.errors[""] = self.error_class(["User already exist"])
elif instance.email == email:
self.errors[""] = self.error_class(["E-mail already in use"])
else:
pass
return self.cleaned_data
| User Login Authentication using Django Model and form | I am trying to setup user authentication for the login page using forms and comparing it to my database value but it does not work. I also tried using this particular questions User Login Authentication using forms and Django logic to solve my problem but it didn't help.
Models.py
from django.db import models
from django.contrib.auth.password_validation import validate_password
class student(models.Model):
first_name = models.CharField(max_length=150)
last_name = models.CharField(max_length=150)
matric_number = models.CharField(max_length=9)
email = models.EmailField(max_length=50)
password1 = models.CharField(max_length=255, validators=[validate_password])
password2 = models.CharField(max_length=255)
def __str__(self):
return (self.matric_number)
This view saves user info to database
def student(request):
if request.method == 'POST':
form = studentForm(request.POST)
if form.is_valid():
sign_up = form.save(commit=False)
#sign_up.password1 = make_password(form.cleaned_data['password1'])
#sign_up.password2 = make_password(form.cleaned_data['password2'])
sign_up.status = 1
sign_up.save()
user = form.cleaned_data.get('matric_number')
messages.success(request, "Account was created for "+str(user))
return redirect(signin)
else:
form = studentForm()
return render(request, 'Student.html',{
"form": form
})
This is the signin view
def signin(request):
if request.method == 'POST':
form = LoginForm(request.POST)
if form.is_valid():
username = form.cleaned_data.get('username')
password = form.cleaned_data.get('password')
try:
student = student.object.get(username=username, password=password)
return redirect(files)
except:
messages.success(request, "Error")
else:
form = LoginForm()
return render(request, "SignIn.html",{
"form":form
})
This is my form.py
class studentForm(forms.ModelForm):
class Meta:
model=student
fields="__all__"
widgets={
'first_name':forms.TextInput(attrs={'placeholder': 'Enter Your First Name'}),
'last_name':forms.TextInput(attrs={'placeholder': 'Enter Your Last Name'}),
'matric_number':forms.TextInput(attrs={'placeholder': 'Enter Your Matric Number'}),
'email':forms.EmailInput(attrs={'placeholder': '[email protected]'}),
'password1':forms.PasswordInput(attrs={'placeholder': 'Enter Your Preferred Password','id':'password'}),
'password2':forms.PasswordInput(attrs={'placeholder':'Confirm Your Password', 'id':'password1'})
}
def clean(self):
super(studentForm, self).clean()
password1 = self.cleaned_data.get('password1')
password2 = self.cleaned_data.get('password2')
matric_number = self.cleaned_data.get('matric_number')
email = self.cleaned_data.get('email')
try:
if password1 != password2:
self.errors[''] = self.error_class(["The two password fields must match"])
elif len(matric_number) != 9:
self.errors[''] = self.error_class(["You have entered an invalid matric number"])
elif len(matric_number) == 9:
matric_number = int(matric_number)
except ValueError:
self.errors[''] = self.error_class(["You have entered an invalid matric number"])
for instance in student.objects.all():
if instance.matric_number == str(matric_number):
self.errors[''] = self.error_class(["Matric number already exist"])
elif instance.email == email:
self.errors[''] = self.error_class(["E-mail address already exist"])
class LoginForm(forms.Form):
matric_number = forms.CharField(max_length=9, widget=forms.TextInput(attrs={'id': 'username', 'placeholder': 'Enter Your Staff Id Or Matric Number'}))
password1 = forms.CharField(max_length=9, widget=forms.PasswordInput(attrs={'id': 'password', 'placeholder':'Enter Your password'}))
| [
"Stop reinventing the wheel. Also, class names are supposed to be named with PascalCase.\nUse AbstractUser model:\nfrom django.contrib.auth.models import AbstractUser\n\nclass Student(AbstractUser):\n ...\n\nand in your main urls.py:\nfrom django.contrib.auth import views as auth_views\n\nurlpatterns = [\n ...\n path('login/', auth_views.LoginView.as_view(), name='login'),\n ...\n]\n\nIt is much faster and SAFER way to create new user.\n",
"So I figured out out how to solve my problem. By using the AbstractUser model,i was able to create a custom user and then create another model which i extended a ForeignKey on the User model therefore allowing me to tie every user to their profile.\nHere is my models.py\nfrom django.db import models\nfrom django.contrib.auth.models import AbstractUser\n\n\n# Create your models here.\nclass User(AbstractUser):\n pass\n def __str__(self):\n return self.username\n\nclass UserProfile(models.Model):\n \"\"\"\n This is the one for model.py\n \"\"\"\n username = models.ForeignKey(User, on_delete=models.CASCADE, null=True, default=\"\")\n profile_picture = models.ImageField(blank=True, null=True, default=\"\")\n matricno = models.CharField(max_length=9, default=\"\", primary_key=True)\n email = models.EmailField(default=\"\")\n first_name = models.CharField(max_length=200, default=\"\")\n last_name = models.CharField(max_length=255, default=\"\")\n\n class Meta:\n verbose_name_plural = \"Users Profile\"\n\n def __str__(self):\n return self.first_name+ \" \"+self.last_name\n\nAnd here is my views.py\ndef signup(request):\n if request.method == \"POST\":\n form = Signup(request.POST)\n if form.is_valid():\n username = request.POST[\"username\"]\n email = request.POST[\"email\"]\n password = request.POST[\"password\"]\n password2 = request.POST[\"password2\"]\n\n user = User.objects.create_user(\n username=username,\n password=password,\n email=email,\n )\n user.save()\n login(request, user)\n messages.success(request, \"Account Created successfully for \" + username)\n return redirect(details)\n else:\n form = Signup()\n return render(request, \"accounts/register.html\", {\"form\": form})\n\ndef details(request, username):\n user = User.objects.get(username=username)\n form = Details()\n if request.method == \"POST\":\n form = Details(request.POST, request.FILES)\n if form.is_valid():\n detail = form.save(commit=False)\n detail.username = request.user\n detail.save()\n return redirect(success, pk=detail.pk)\n else:\n form = Details(initial={\"matricno\":request.user.username})\n return render(request, \"details.html\", {\"form\":form})\n\nAnd finally my forms.py that i use in creating a signup form and perfoming validation\nclass Signup(forms.Form):\n username = forms.CharField(\n max_length=9,\n widget=forms.TextInput(attrs={\"placeholder\": \"Enter Your Matric Number\"}),\n )\n\n email = forms.EmailField(\n max_length=255,\n widget=forms.EmailInput(attrs={\"placeholder\": \"Enter Your E-mail Address\"}),\n )\n\n password = forms.CharField(\n max_length=255,\n widget=forms.PasswordInput(\n attrs={\"placeholder\": \"Enter Your Password\", \"id\": \"password\"}\n ),\n )\n\n password2 = forms.CharField(\n max_length=255,\n widget=forms.PasswordInput(\n attrs={\"placeholder\": \"Confirm Your Password\", \"id\": \"password2\"}\n ),\n )\n \n def clean(self):\n super(Signup, self).clean()\n password = self.cleaned_data.get(\"password\")\n password2 = self.cleaned_data.get(\"password2\")\n username = self.cleaned_data.get(\"username\")\n email = self.cleaned_data.get(\"email\")\n\n if password != password2:\n self.errors[\"\"] = self.error_class([\"The two password fields must match\"])\n\n for instance in User.objects.all():\n if instance.username == str(username):\n self.errors[\"\"] = self.error_class([\"User already exist\"])\n elif instance.email == email:\n self.errors[\"\"] = self.error_class([\"E-mail already in use\"])\n else:\n pass\n\n return self.cleaned_data\n\n"
] | [
0,
0
] | [] | [] | [
"authentication",
"django",
"django_forms",
"django_models",
"python"
] | stackoverflow_0073978305_authentication_django_django_forms_django_models_python.txt |
Q:
CUDA_HOME environment variable is not set
I have a working environment for using PyTorch deep learning with a GPU, and I ran into a problem when I tried using mmcv.ops.point_sample, which returned:
ModuleNotFoundError: No module named 'mmcv._ext'
I have read that you should actually use mmcv-full to solve it, but I got another error when I tried to install it:
pip install mmcv-full
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
Which seems logical enough, since I never installed CUDA on my Ubuntu machine (I am not the administrator), but it still ran deep learning training fine on models I built myself, and I'm guessing the package came with the minimal code required for running CUDA tensor operations.
So my main question is: where is CUDA installed when used through the pytorch package, and can I use that path as the value of the CUDA_HOME environment variable?
Additionally, if anyone knows some nice sources for gaining insight into the internals of CUDA with pytorch/tensorflow, I'd like to take a look (I have been reading the cudatoolkit documentation, which is cool, but it seems more targeted at C++ CUDA developers than at the internal workings between Python and the library).
A:
You can check the installation and the relevant paths with these commands:
which nvidia-smi
which nvcc
cat /usr/local/cuda/version.txt
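As a complementary sketch (assuming a reasonably recent PyTorch build), you can also ask PyTorch itself which CUDA root it would use and, if one exists, export it as CUDA_HOME before installing mmcv-full:
import torch
from torch.utils.cpp_extension import CUDA_HOME

print(torch.version.cuda)  # CUDA version the PyTorch wheel was built against
print(CUDA_HOME)           # CUDA root PyTorch would compile extensions against, or None

If CUDA_HOME prints None, the wheel only ships the CUDA runtime libraries, not a full toolkit with nvcc, so building mmcv-full would still require a toolkit install (or asking the administrator for one).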
| CUDA_HOME environment variable is not set | I have a working environment for using pytorch deep learning with gpu, and i ran into a problem when i tried using mmcv.ops.point_sample, which returned :
ModuleNotFoundError: No module named 'mmcv._ext'
I have read that you should actually use mmcv-full to solve it, but i got another error when i tried to install it:
pip install mmcv-full
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
Which seems logic enough since i never installed cuda on my ubuntu machine(i am not the administrator), but it still ran deep learning training fine on models i built myself, and i'm guessing the package came in with minimal code required for running cuda tensors operations.
So my main question is where is cuda installed when used through pytorch package, and can i use the same path as the environment variable for cuda_home?
Additionaly if anyone knows some nice sources for gaining insights on the internals of cuda with pytorch/tensorflow I'd like to take a look (I have been reading cudatoolkit documentation which is cool but this seems more targeted at c++ cuda developpers than the internal working between python and the library)
| [
"you can chek it and check the paths with these commands :\n\nwhich nvidia-smi\nwhich nvcc\ncat /usr/local/cuda/version.txt\n\n"
] | [
0
] | [] | [] | [
"python",
"pytorch"
] | stackoverflow_0074656874_python_pytorch.txt |
Q:
Python JSON serialize a Decimal object
I have a Decimal('3.9') as part of an object, and wish to encode this to a JSON string which should look like {'x': 3.9}. I don't care about precision on the client side, so a float is fine.
Is there a good way to serialize this? JSONDecoder doesn't accept Decimal objects, and converting to a float beforehand yields {'x': 3.8999999999999999} which is wrong, and will be a big waste of bandwidth.
A:
Simplejson 2.1 and higher has native support for Decimal type:
>>> json.dumps(Decimal('3.9'), use_decimal=True)
'3.9'
Note that use_decimal is True by default:
def dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,
allow_nan=True, cls=None, indent=None, separators=None,
encoding='utf-8', default=None, use_decimal=True,
namedtuple_as_object=True, tuple_as_array=True,
bigint_as_string=False, sort_keys=False, item_sort_key=None,
for_json=False, ignore_nan=False, **kw):
So:
>>> json.dumps(Decimal('3.9'))
'3.9'
Hopefully, this feature will be included in standard library.
A:
I would like to let everyone know that I tried Michał Marczyk's answer on my web server that was running Python 2.6.5 and it worked fine. However, I upgraded to Python 2.7 and it stopped working. I tried to think of some sort of way to encode Decimal objects and this is what I came up with:
import decimal
class DecimalEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, decimal.Decimal):
return str(o)
return super(DecimalEncoder, self).default(o)
Note that this will convert the decimal to its string representation (e.g., "1.2300") in order to (a) not lose significant digits and (b) prevent rounding errors.
This should hopefully help anyone who is having problems with Python 2.7. I tested it and it seems to work fine. If anyone notices any bugs in my solution or comes up with a better way, please let me know.
Usage example:
json.dumps({'x': decimal.Decimal('5.5')}, cls=DecimalEncoder)
A:
How about subclassing json.JSONEncoder?
class DecimalEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, decimal.Decimal):
# wanted a simple yield str(o) in the next line,
# but that would mean a yield on the line with super(...),
# which wouldn't work (see my comment below), so...
return (str(o) for o in [o])
return super(DecimalEncoder, self).default(o)
Then use it like so:
json.dumps({'x': decimal.Decimal('5.5')}, cls=DecimalEncoder)
A:
The native Django option is missing, so I'll add it for the next guy/gal who looks for it.
Starting with Django 1.7.x there is a built-in DjangoJSONEncoder that you can get from django.core.serializers.json.
import json
from django.core.serializers.json import DjangoJSONEncoder
from django.forms.models import model_to_dict
model_instance = YourModel.object.first()
model_dict = model_to_dict(model_instance)
json.dumps(model_dict, cls=DjangoJSONEncoder)
Presto!
A:
In my Flask app, which uses Python 2.7.11, Flask-SQLAlchemy (with 'db.Decimal' types), and Flask-Marshmallow (for 'instant' serializer and deserializer), I had this error every time I did a GET or POST. The serializer and deserializer failed to convert Decimal types into any JSON-identifiable format.
I did a "pip install simplejson", then
Just by adding
import simplejson as json
the serializer and deserializer start to purr again. I did nothing else...
Decimals are displayed in the '234.00' float format.
A:
I tried switching from simplejson to builtin json for GAE 2.7, and had issues with the decimal. If default returned str(o) there were quotes (because _iterencode calls _iterencode on the results of default), and float(o) would remove trailing 0.
If default returns an object of a class that inherits from float (or anything that calls repr without additional formatting) and has a custom __repr__ method, it seems to work like I want it to.
import json
from decimal import Decimal
class fakefloat(float):
def __init__(self, value):
self._value = value
def __repr__(self):
return str(self._value)
def defaultencode(o):
if isinstance(o, Decimal):
# Subclass float with custom repr?
return fakefloat(o)
raise TypeError(repr(o) + " is not JSON serializable")
json.dumps([10.20, "10.20", Decimal('10.20')], default=defaultencode)
'[10.2, "10.20", 10.20]'
A:
For Django users:
Recently came across TypeError: Decimal('2337.00') is not JSON serializable
while JSON encoding i.e. json.dumps(data)
Solution:
# converts Decimal, Datetime, UUIDs to str for Encoding
from django.core.serializers.json import DjangoJSONEncoder
json.dumps(response.data, cls=DjangoJSONEncoder)
But, now the Decimal value will be a string, now we can explicitly set the decimal/float value parser when decoding data, using parse_float option in json.loads:
import decimal
data = json.loads(data, parse_float=decimal.Decimal) # default is float(num_str)
A:
3.9 can not be exactly represented in IEEE floats, it will always come as 3.8999999999999999, e.g. try print repr(3.9), you can read more about it here:
http://en.wikipedia.org/wiki/Floating_point
http://docs.sun.com/source/806-3568/ncg_goldberg.html
So if you don't want float, only option you have to send it as string, and to allow automatic conversion of decimal objects to JSON, do something like this:
import decimal
from django.utils import simplejson
def json_encode_decimal(obj):
if isinstance(obj, decimal.Decimal):
return str(obj)
raise TypeError(repr(obj) + " is not JSON serializable")
d = decimal.Decimal('3.5')
print simplejson.dumps([d], default=json_encode_decimal)
A:
My $.02!
I extend a bunch of the JSON encoder since I am serializing tons of data for my web server. Here's some nice code. Note that it's easily extendable to pretty much any data format you feel like and will reproduce 3.9 as "thing": 3.9
JSONEncoder_olddefault = json.JSONEncoder.default
def JSONEncoder_newdefault(self, o):
if isinstance(o, UUID): return str(o)
if isinstance(o, datetime): return str(o)
if isinstance(o, time.struct_time): return datetime.fromtimestamp(time.mktime(o))
if isinstance(o, decimal.Decimal): return str(o)
return JSONEncoder_olddefault(self, o)
json.JSONEncoder.default = JSONEncoder_newdefault
Makes my life so much easier...
A:
For those who don't want to use a third-party library... An issue with Elias Zamaria's answer is that it converts to float, which can run into problems. For example:
>>> json.dumps({'x': Decimal('0.0000001')}, cls=DecimalEncoder)
'{"x": 1e-07}'
>>> json.dumps({'x': Decimal('100000000000.01734')}, cls=DecimalEncoder)
'{"x": 100000000000.01733}'
The JSONEncoder.encode() method lets you return the literal json content, unlike JSONEncoder.default(), which has you return a json compatible type (like float) that then gets encoded in the normal way. The problem with encode() is that it (normally) only works at the top level. But it's still usable, with a little extra work (python 3.x):
import json
from collections.abc import Mapping, Iterable
from decimal import Decimal
class DecimalEncoder(json.JSONEncoder):
def encode(self, obj):
if isinstance(obj, Mapping):
return '{' + ', '.join(f'{self.encode(k)}: {self.encode(v)}' for (k, v) in obj.items()) + '}'
if isinstance(obj, Iterable) and (not isinstance(obj, str)):
return '[' + ', '.join(map(self.encode, obj)) + ']'
if isinstance(obj, Decimal):
return f'{obj.normalize():f}' # using normalize() gets rid of trailing 0s, using ':f' prevents scientific notation
return super().encode(obj)
Which gives you:
>>> json.dumps({'x': Decimal('0.0000001')}, cls=DecimalEncoder)
'{"x": 0.0000001}'
>>> json.dumps({'x': Decimal('100000000000.01734')}, cls=DecimalEncoder)
'{"x": 100000000000.01734}'
A:
From the JSON Standard Document, as linked in json.org:
JSON is agnostic about the semantics of numbers. In any programming language, there can be a variety of
number types of various capacities and complements, fixed or floating, binary or decimal. That can make
interchange between different programming languages difficult. JSON instead offers only the representation of
numbers that humans use: a sequence of digits. All programming languages know how to make sense of digit
sequences even if they disagree on internal representations. That is enough to allow interchange.
So it's actually accurate to represent Decimals as numbers (rather than strings) in JSON. Below is a possible solution to the problem.
Define a custom JSON encoder:
import json
class CustomJsonEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, Decimal):
return float(obj)
return super(CustomJsonEncoder, self).default(obj)
Then use it when serializing your data:
json.dumps(data, cls=CustomJsonEncoder)
As noted from comments on the other answers, older versions of python might mess up the representation when converting to float, but that's not the case anymore.
To get the decimal back in Python:
Decimal(str(value))
This solution is hinted in Python 3.0 documentation on decimals:
To create a Decimal from a float, first convert it to a string.
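A small round-trip illustration of the above (a sketch; it uses the CustomJsonEncoder defined earlier in this answer and assumes Python 3, where repr(3.9) is '3.9'):
from decimal import Decimal
import json

payload = json.dumps({'x': Decimal('3.9')}, cls=CustomJsonEncoder)  # '{"x": 3.9}'
restored = Decimal(str(json.loads(payload)['x']))                   # Decimal('3.9')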
A:
This is what I have, extracted from our class
class CommonJSONEncoder(json.JSONEncoder):
"""
Common JSON Encoder
json.dumps(myString, cls=CommonJSONEncoder)
"""
def default(self, obj):
if isinstance(obj, decimal.Decimal):
return {'type{decimal}': str(obj)}
class CommonJSONDecoder(json.JSONDecoder):
"""
    Common JSON Decoder
    json.loads(myString, cls=CommonJSONDecoder)
"""
@classmethod
def object_hook(cls, obj):
for key in obj:
if isinstance(key, six.string_types):
if 'type{decimal}' == key:
try:
return decimal.Decimal(obj[key])
except:
pass
def __init__(self, **kwargs):
kwargs['object_hook'] = self.object_hook
super(CommonJSONDecoder, self).__init__(**kwargs)
Which passes unittest:
def test_encode_and_decode_decimal(self):
obj = Decimal('1.11')
result = json.dumps(obj, cls=CommonJSONEncoder)
self.assertTrue('type{decimal}' in result)
new_obj = json.loads(result, cls=CommonJSONDecoder)
self.assertEqual(new_obj, obj)
obj = {'test': Decimal('1.11')}
result = json.dumps(obj, cls=CommonJSONEncoder)
self.assertTrue('type{decimal}' in result)
new_obj = json.loads(result, cls=CommonJSONDecoder)
self.assertEqual(new_obj, obj)
obj = {'test': {'abc': Decimal('1.11')}}
result = json.dumps(obj, cls=CommonJSONEncoder)
self.assertTrue('type{decimal}' in result)
new_obj = json.loads(result, cls=CommonJSONDecoder)
self.assertEqual(new_obj, obj)
A:
You can create a custom JSON encoder as per your requirement.
import json
from datetime import datetime, date
from time import time, struct_time, mktime
import decimal
class CustomJSONEncoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, datetime):
return str(o)
if isinstance(o, date):
return str(o)
if isinstance(o, decimal.Decimal):
return float(o)
if isinstance(o, struct_time):
return datetime.fromtimestamp(mktime(o))
# Any other serializer if needed
return super(CustomJSONEncoder, self).default(o)
The Decoder can be called like this,
import json
from decimal import Decimal
json.dumps({'x': Decimal('3.9')}, cls=CustomJSONEncoder)
and the output will be:
>>'{"x": 3.9}'
A:
Based on stdOrgnlDave's answer I have defined this wrapper so that it can be called with optional kinds, and the encoder will work only for certain kinds inside your projects. I believe the work should be done inside your code rather than by using this "default" encoder, since "it is better explicit than implicit", but I understand using this will save some of your time. :-)
import time
import json
import decimal
from uuid import UUID
from datetime import datetime
def JSONEncoder_newdefault(kind=['uuid', 'datetime', 'time', 'decimal']):
'''
    JSON Encoder newdefault is a wrapper capable of encoding several kinds
    Use it anywhere in your code to make the full system work with these defaults:
JSONEncoder_newdefault() # for everything
JSONEncoder_newdefault(['decimal']) # only for Decimal
'''
JSONEncoder_olddefault = json.JSONEncoder.default
def JSONEncoder_wrapped(self, o):
'''
json.JSONEncoder.default = JSONEncoder_newdefault
'''
if ('uuid' in kind) and isinstance(o, uuid.UUID):
return str(o)
if ('datetime' in kind) and isinstance(o, datetime):
return str(o)
if ('time' in kind) and isinstance(o, time.struct_time):
return datetime.fromtimestamp(time.mktime(o))
if ('decimal' in kind) and isinstance(o, decimal.Decimal):
return str(o)
return JSONEncoder_olddefault(self, o)
json.JSONEncoder.default = JSONEncoder_wrapped
# Example
if __name__ == '__main__':
JSONEncoder_newdefault()
A:
If someone is still looking for the answer, it is most probable that you have a 'NaN' in the data you are trying to encode, because NaN is considered a float by Python.
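If that is the case, one option is simplejson's ignore_nan flag (visible in the dumps signature quoted in the first answer), which serializes NaN as null instead of emitting the non-standard NaN token:
import simplejson

simplejson.dumps({'x': float('nan')}, ignore_nan=True)  # '{"x": null}'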
A:
If you want to pass a dictionary containing decimals to the requests library (using the json keyword argument), you simply need to install simplejson:
$ pip3 install simplejson
$ python3
>>> import requests
>>> from decimal import Decimal
>>> # This won't error out:
>>> requests.post('https://www.google.com', json={'foo': Decimal('1.23')})
The reason of the problem is that requests uses simplejson only if it is present, and falls back to the built-in json if it is not installed.
A:
For anybody who wants a quick solution, here is how I removed Decimal from my queries in Django:
total_development_cost_var = process_assumption_objects.values('total_development_cost').aggregate(sum_dev = Sum('total_development_cost', output_field=FloatField()))
total_development_cost_var = list(total_development_cost_var.values())
Step 1: use output_field=FloatField() in your query
Step 2: wrap the result values in list(), e.g. list(total_development_cost_var.values())
Hope it helps
A:
This question is old, but there seems to be a better and much simpler solution in Python3 for most use-cases:
number = Decimal(0.55)
converted_number = float(number) # Returns: 0.55 (as type float)
You can just convert Decimal to float.
A:
My 2 cents for easy solution, if you're sure Decimal is the only bad guy on your json dumps method:
print(json.loads(json.dumps({
'a': Decimal(1230),
'b': Decimal(11111111123.22),
}, default=lambda x: eval(str(x)))))
>>> {'a': 1230, 'b': 11111111123.22}
The "smart" thing here is using default to convert Decimal to int or float, automatically, taking advantage of eval function: default=lambda x: eval(str(x))
But always be careful using eval on your code as it can lead to security issues ;)
A:
Decimal is not suitable to be converted through:
float due to precision problems
str due to openapi restrictions
We still need direct decimal to a number json serialisation.
Here is our extension of @tesdal 's fakefloat solution (closed in v3.5.2rc1).
It uses fakestr + monkeypatching to avoid quotation and "floatation" of decimals.
import json.encoder
from decimal import Decimal
def encode_fakestr(func):
def wrap(s):
if isinstance(s, fakestr):
return repr(s)
return func(s)
return wrap
json.encoder.encode_basestring = encode_fakestr(json.encoder.encode_basestring)
json.encoder.encode_basestring_ascii = encode_fakestr(json.encoder.encode_basestring_ascii)
class fakestr(str):
def __init__(self, value):
self._value = value
def __repr__(self):
return str(self._value)
class DecimalJsonEncoder(json.encoder.JSONEncoder):
def default(self, o):
if isinstance(o, Decimal):
return fakestr(o)
return super().default(o)
json.dumps([Decimal('1.1')], cls=DecimalJsonEncoder)
[1.1]
I don't understand why Python developers force us to use floats in places where they are not suitable.
A:
I will share what worked for me with flask 2.1.0
When I was creating the dictionary to be passed to jsonify, I used rounding:
json_dict['price'] = round(self.price, ndigits=2) if self.price else 0
This way I could return a D.DD number or 0 without using some global configuration. And this is nice because some Decimals need to keep more digits, like latitude and longitude coordinates.
return jsonify(json_dict)
| Python JSON serialize a Decimal object | I have a Decimal('3.9') as part of an object, and wish to encode this to a JSON string which should look like {'x': 3.9}. I don't care about precision on the client side, so a float is fine.
Is there a good way to serialize this? JSONDecoder doesn't accept Decimal objects, and converting to a float beforehand yields {'x': 3.8999999999999999} which is wrong, and will be a big waste of bandwidth.
| [
"Simplejson 2.1 and higher has native support for Decimal type:\n>>> json.dumps(Decimal('3.9'), use_decimal=True)\n'3.9'\n\nNote that use_decimal is True by default:\ndef dumps(obj, skipkeys=False, ensure_ascii=True, check_circular=True,\n allow_nan=True, cls=None, indent=None, separators=None,\n encoding='utf-8', default=None, use_decimal=True,\n namedtuple_as_object=True, tuple_as_array=True,\n bigint_as_string=False, sort_keys=False, item_sort_key=None,\n for_json=False, ignore_nan=False, **kw):\n\nSo:\n>>> json.dumps(Decimal('3.9'))\n'3.9'\n\nHopefully, this feature will be included in standard library.\n",
"I would like to let everyone know that I tried Michał Marczyk's answer on my web server that was running Python 2.6.5 and it worked fine. However, I upgraded to Python 2.7 and it stopped working. I tried to think of some sort of way to encode Decimal objects and this is what I came up with:\nimport decimal\n\nclass DecimalEncoder(json.JSONEncoder):\n def default(self, o):\n if isinstance(o, decimal.Decimal):\n return str(o)\n return super(DecimalEncoder, self).default(o)\n\nNote that this will convert the decimal to its string representation (e.g.; \"1.2300\") to a. not lose significant digits and b. prevent rounding errors.\nThis should hopefully help anyone who is having problems with Python 2.7. I tested it and it seems to work fine. If anyone notices any bugs in my solution or comes up with a better way, please let me know.\nUsage example:\njson.dumps({'x': decimal.Decimal('5.5')}, cls=DecimalEncoder)\n\n",
"How about subclassing json.JSONEncoder?\nclass DecimalEncoder(json.JSONEncoder):\n def default(self, o):\n if isinstance(o, decimal.Decimal):\n # wanted a simple yield str(o) in the next line,\n # but that would mean a yield on the line with super(...),\n # which wouldn't work (see my comment below), so...\n return (str(o) for o in [o])\n return super(DecimalEncoder, self).default(o)\n\nThen use it like so:\njson.dumps({'x': decimal.Decimal('5.5')}, cls=DecimalEncoder)\n\n",
"The native Django option is missing so I'll add it for the next guy/gall that looks for it.\nStarting on Django 1.7.x there is a built-in DjangoJSONEncoder that you can get it from django.core.serializers.json.\nimport json\nfrom django.core.serializers.json import DjangoJSONEncoder\nfrom django.forms.models import model_to_dict\n\nmodel_instance = YourModel.object.first()\nmodel_dict = model_to_dict(model_instance)\n\njson.dumps(model_dict, cls=DjangoJSONEncoder)\n\nPresto!\n",
"In my Flask app, Which uses python 2.7.11, flask alchemy(with 'db.decimal' types), and Flask Marshmallow ( for 'instant' serializer and deserializer), i had this error, every time i did a GET or POST. The serializer and deserializer, failed to convert Decimal types into any JSON identifiable format. \nI did a \"pip install simplejson\", then \nJust by adding\nimport simplejson as json\n\nthe serializer and deserializer starts to purr again. I did nothing else...\nDEciamls are displayed as '234.00' float format.\n",
"I tried switching from simplejson to builtin json for GAE 2.7, and had issues with the decimal. If default returned str(o) there were quotes (because _iterencode calls _iterencode on the results of default), and float(o) would remove trailing 0.\nIf default returns an object of a class that inherits from float (or anything that calls repr without additional formatting) and has a custom __repr__ method, it seems to work like I want it to.\nimport json\nfrom decimal import Decimal\n\nclass fakefloat(float):\n def __init__(self, value):\n self._value = value\n def __repr__(self):\n return str(self._value)\n\ndef defaultencode(o):\n if isinstance(o, Decimal):\n # Subclass float with custom repr?\n return fakefloat(o)\n raise TypeError(repr(o) + \" is not JSON serializable\")\n\njson.dumps([10.20, \"10.20\", Decimal('10.20')], default=defaultencode)\n'[10.2, \"10.20\", 10.20]'\n\n",
"For Django users:\nRecently came across TypeError: Decimal('2337.00') is not JSON serializable\nwhile JSON encoding i.e. json.dumps(data)\nSolution:\n# converts Decimal, Datetime, UUIDs to str for Encoding\nfrom django.core.serializers.json import DjangoJSONEncoder \n\njson.dumps(response.data, cls=DjangoJSONEncoder)\n\nBut, now the Decimal value will be a string, now we can explicitly set the decimal/float value parser when decoding data, using parse_float option in json.loads:\nimport decimal \n\ndata = json.loads(data, parse_float=decimal.Decimal) # default is float(num_str)\n\n",
"3.9 can not be exactly represented in IEEE floats, it will always come as 3.8999999999999999, e.g. try print repr(3.9), you can read more about it here: \nhttp://en.wikipedia.org/wiki/Floating_point\nhttp://docs.sun.com/source/806-3568/ncg_goldberg.html \nSo if you don't want float, only option you have to send it as string, and to allow automatic conversion of decimal objects to JSON, do something like this:\nimport decimal\nfrom django.utils import simplejson\n\ndef json_encode_decimal(obj):\n if isinstance(obj, decimal.Decimal):\n return str(obj)\n raise TypeError(repr(obj) + \" is not JSON serializable\")\n\nd = decimal.Decimal('3.5')\nprint simplejson.dumps([d], default=json_encode_decimal)\n\n",
"My $.02!\nI extend a bunch of the JSON encoder since I am serializing tons of data for my web server. Here's some nice code. Note that it's easily extendable to pretty much any data format you feel like and will reproduce 3.9 as \"thing\": 3.9\nJSONEncoder_olddefault = json.JSONEncoder.default\ndef JSONEncoder_newdefault(self, o):\n if isinstance(o, UUID): return str(o)\n if isinstance(o, datetime): return str(o)\n if isinstance(o, time.struct_time): return datetime.fromtimestamp(time.mktime(o))\n if isinstance(o, decimal.Decimal): return str(o)\n return JSONEncoder_olddefault(self, o)\njson.JSONEncoder.default = JSONEncoder_newdefault\n\nMakes my life so much easier...\n",
"For those who don't want to use a third-party library... An issue with Elias Zamaria's answer is that it converts to float, which can run into problems. For example:\n>>> json.dumps({'x': Decimal('0.0000001')}, cls=DecimalEncoder)\n'{\"x\": 1e-07}'\n>>> json.dumps({'x': Decimal('100000000000.01734')}, cls=DecimalEncoder)\n'{\"x\": 100000000000.01733}'\n\nThe JSONEncoder.encode() method lets you return the literal json content, unlike JSONEncoder.default(), which has you return a json compatible type (like float) that then gets encoded in the normal way. The problem with encode() is that it (normally) only works at the top level. But it's still usable, with a little extra work (python 3.x):\nimport json\nfrom collections.abc import Mapping, Iterable\nfrom decimal import Decimal\n\nclass DecimalEncoder(json.JSONEncoder):\n def encode(self, obj):\n if isinstance(obj, Mapping):\n return '{' + ', '.join(f'{self.encode(k)}: {self.encode(v)}' for (k, v) in obj.items()) + '}'\n if isinstance(obj, Iterable) and (not isinstance(obj, str)):\n return '[' + ', '.join(map(self.encode, obj)) + ']'\n if isinstance(obj, Decimal):\n return f'{obj.normalize():f}' # using normalize() gets rid of trailing 0s, using ':f' prevents scientific notation\n return super().encode(obj)\n\nWhich gives you:\n>>> json.dumps({'x': Decimal('0.0000001')}, cls=DecimalEncoder)\n'{\"x\": 0.0000001}'\n>>> json.dumps({'x': Decimal('100000000000.01734')}, cls=DecimalEncoder)\n'{\"x\": 100000000000.01734}'\n\n",
"From the JSON Standard Document, as linked in json.org:\n\nJSON is agnostic about the semantics of numbers. In any programming language, there can be a variety of\n number types of various capacities and complements, fixed or floating, binary or decimal. That can make\n interchange between different programming languages difficult. JSON instead offers only the representation of\n numbers that humans use: a sequence of digits. All programming languages know how to make sense of digit\n sequences even if they disagree on internal representations. That is enough to allow interchange.\n\nSo it's actually accurate to represent Decimals as numbers (rather than strings) in JSON. Bellow lies a possible solution to the problem.\nDefine a custom JSON encoder:\nimport json\n\n\nclass CustomJsonEncoder(json.JSONEncoder):\n\n def default(self, obj):\n if isinstance(obj, Decimal):\n return float(obj)\n return super(CustomJsonEncoder, self).default(obj)\n\nThen use it when serializing your data:\njson.dumps(data, cls=CustomJsonEncoder)\n\nAs noted from comments on the other answers, older versions of python might mess up the representation when converting to float, but that's not the case anymore.\nTo get the decimal back in Python:\nDecimal(str(value))\n\nThis solution is hinted in Python 3.0 documentation on decimals:\n\nTo create a Decimal from a float, first convert it to a string.\n\n",
"This is what I have, extracted from our class\nclass CommonJSONEncoder(json.JSONEncoder):\n\n \"\"\"\n Common JSON Encoder\n json.dumps(myString, cls=CommonJSONEncoder)\n \"\"\"\n\n def default(self, obj):\n\n if isinstance(obj, decimal.Decimal):\n return {'type{decimal}': str(obj)}\n\nclass CommonJSONDecoder(json.JSONDecoder):\n\n \"\"\"\n Common JSON Encoder\n json.loads(myString, cls=CommonJSONEncoder)\n \"\"\"\n\n @classmethod\n def object_hook(cls, obj):\n for key in obj:\n if isinstance(key, six.string_types):\n if 'type{decimal}' == key:\n try:\n return decimal.Decimal(obj[key])\n except:\n pass\n\n def __init__(self, **kwargs):\n kwargs['object_hook'] = self.object_hook\n super(CommonJSONDecoder, self).__init__(**kwargs)\n\nWhich passes unittest:\ndef test_encode_and_decode_decimal(self):\n obj = Decimal('1.11')\n result = json.dumps(obj, cls=CommonJSONEncoder)\n self.assertTrue('type{decimal}' in result)\n new_obj = json.loads(result, cls=CommonJSONDecoder)\n self.assertEqual(new_obj, obj)\n\n obj = {'test': Decimal('1.11')}\n result = json.dumps(obj, cls=CommonJSONEncoder)\n self.assertTrue('type{decimal}' in result)\n new_obj = json.loads(result, cls=CommonJSONDecoder)\n self.assertEqual(new_obj, obj)\n\n obj = {'test': {'abc': Decimal('1.11')}}\n result = json.dumps(obj, cls=CommonJSONEncoder)\n self.assertTrue('type{decimal}' in result)\n new_obj = json.loads(result, cls=CommonJSONDecoder)\n self.assertEqual(new_obj, obj)\n\n",
"You can create a custom JSON encoder as per your requirement.\nimport json\nfrom datetime import datetime, date\nfrom time import time, struct_time, mktime\nimport decimal\n\nclass CustomJSONEncoder(json.JSONEncoder):\n def default(self, o):\n if isinstance(o, datetime):\n return str(o)\n if isinstance(o, date):\n return str(o)\n if isinstance(o, decimal.Decimal):\n return float(o)\n if isinstance(o, struct_time):\n return datetime.fromtimestamp(mktime(o))\n # Any other serializer if needed\n return super(CustomJSONEncoder, self).default(o)\n\nThe Decoder can be called like this,\nimport json\nfrom decimal import Decimal\njson.dumps({'x': Decimal('3.9')}, cls=CustomJSONEncoder)\n\nand the output will be:\n>>'{\"x\": 3.9}'\n\n",
"Based on stdOrgnlDave answer I have defined this wrapper that it can be called with optional kinds so the encoder will work only for certain kinds inside your projects. I believe the work should be done inside your code and not to use this \"default\" encoder since \"it is better explicit than implicit\", but I understand using this will save some of your time. :-)\nimport time\nimport json\nimport decimal\nfrom uuid import UUID\nfrom datetime import datetime\n\ndef JSONEncoder_newdefault(kind=['uuid', 'datetime', 'time', 'decimal']):\n '''\n JSON Encoder newdfeault is a wrapper capable of encoding several kinds\n Use it anywhere on your code to make the full system to work with this defaults:\n JSONEncoder_newdefault() # for everything\n JSONEncoder_newdefault(['decimal']) # only for Decimal\n '''\n JSONEncoder_olddefault = json.JSONEncoder.default\n\n def JSONEncoder_wrapped(self, o):\n '''\n json.JSONEncoder.default = JSONEncoder_newdefault\n '''\n if ('uuid' in kind) and isinstance(o, uuid.UUID):\n return str(o)\n if ('datetime' in kind) and isinstance(o, datetime):\n return str(o)\n if ('time' in kind) and isinstance(o, time.struct_time):\n return datetime.fromtimestamp(time.mktime(o))\n if ('decimal' in kind) and isinstance(o, decimal.Decimal):\n return str(o)\n return JSONEncoder_olddefault(self, o)\n json.JSONEncoder.default = JSONEncoder_wrapped\n\n# Example\nif __name__ == '__main__':\n JSONEncoder_newdefault()\n\n",
"If someone is still looking for the answer, it is most probably you have a 'NaN' in your data that you are trying to encode. Because NaN is considered as float by Python.\n",
"If you want to pass a dictionary containing decimals to the requests library (using the json keyword argument), you simply need to install simplejson:\n$ pip3 install simplejson \n$ python3\n>>> import requests\n>>> from decimal import Decimal\n>>> # This won't error out:\n>>> requests.post('https://www.google.com', json={'foo': Decimal('1.23')})\n\nThe reason of the problem is that requests uses simplejson only if it is present, and falls back to the built-in json if it is not installed.\n",
"For anybody that wants a quick solution here is how I removed Decimal from my queries in Django\ntotal_development_cost_var = process_assumption_objects.values('total_development_cost').aggregate(sum_dev = Sum('total_development_cost', output_field=FloatField()))\ntotal_development_cost_var = list(total_development_cost_var.values())\n\n\nStep 1: use , output_field=FloatField() in you r query\nStep 2: use list eg list(total_development_cost_var.values())\n\nHope it helps\n",
"This question is old, but there seems to be a better and much simpler solution in Python3 for most use-cases:\nnumber = Decimal(0.55)\nconverted_number = float(number) # Returns: 0.55 (as type float)\n\nYou can just convert Decimal to float.\n",
"My 2 cents for easy solution, if you're sure Decimal is the only bad guy on your json dumps method:\nprint(json.loads(json.dumps({\n 'a': Decimal(1230),\n 'b': Decimal(11111111123.22),\n}, default=lambda x: eval(str(x)))))\n\n>>> {'a': 1230, 'b': 11111111123.22}\n\nThe \"smart\" thing here is using default to convert Decimal to int or float, automatically, taking advantage of eval function: default=lambda x: eval(str(x))\nBut always be careful using eval on your code as it can lead to security issues ;)\n",
"Decimal is not suitable to be converted through:\n\nfloat due to precision problems\nstr due to openapi restrictions\n\nWe still need direct decimal to a number json serialisation.\nHere is our extension of @tesdal 's fakefloat solution (closed in v3.5.2rc1).\nIt uses fakestr + monkeypatching to avoid quotation and \"floatation\" of decimals.\nimport json.encoder\nfrom decimal import Decimal\n\n\ndef encode_fakestr(func):\n def wrap(s):\n if isinstance(s, fakestr):\n return repr(s)\n return func(s)\n return wrap\n\n\njson.encoder.encode_basestring = encode_fakestr(json.encoder.encode_basestring)\njson.encoder.encode_basestring_ascii = encode_fakestr(json.encoder.encode_basestring_ascii)\n\n\nclass fakestr(str):\n def __init__(self, value):\n self._value = value\n def __repr__(self):\n return str(self._value)\n\n\nclass DecimalJsonEncoder(json.encoder.JSONEncoder):\n def default(self, o):\n if isinstance(o, Decimal):\n return fakestr(o)\n return super().default(o)\n\n\njson.dumps([Decimal('1.1')], cls=DecimalJsonEncoder)\n\n[1.1]\n\nI don't understand why python developers force us using floats in places where it is not suitable.\n",
"I will share what worked for me with flask 2.1.0\nWhen I was creating the dictionary which had to be used from jsonify I used rounding:\njson_dict['price'] = round(self.price, ndigits=2) if self.price else 0\n\nSo this way I could return D.DD number or 0 without using some global configuration. And this is nice because some Decimals has to be bigger, like latitude and longitude coordinates.\nreturn jsonify(json_dict)\n\n"
] | [
265,
239,
178,
64,
57,
38,
28,
14,
14,
13,
11,
7,
4,
2,
1,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"decimal",
"floating_point",
"json",
"python"
] | stackoverflow_0001960516_decimal_floating_point_json_python.txt |
Q:
File "/model.py", line 33, in forward x_out = torch.cat(x_out, 1) IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
I read previous answers but couldn't fix this.
Whenever I run the code, this error pops up at a different epoch: sometimes the execution runs into the 50s and then suddenly this error appears and the execution stops; at other times the error appears around epoch 16, and so on.
0it [00:00, ?it/s]/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py:1960: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
185it [00:07, 23.88it/s]
Traceback (most recent call last):
File "/content/drive/MyDrive/train.py", line 241, in <module>
train()
File "/content/drive/MyDrive/train.py", line 98, in train
text_aligned_match, image_aligned_match, pred_similarity_match = similarity_module(fixed_text, matched_image)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/model.py", line 106, in forward
text_encoding, image_encoding = self.encoding(text, image)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/model.py", line 70, in forward
text_encoding = self.shared_text_encoding(text)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/model.py", line 33, in forward
x_out = torch.cat(x_out, 1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
The line creating issue is
x_out = torch.cat(x_out, 1)
Code is:
import math
import random
from random import random, seed
import torch
import torch.nn as nn
from torch.distributions import Normal, Independent
from torch.nn.functional import softplus
#random.seed(825)
seed(825)
class FastCNN(nn.Module):
    # a CNN-based alternative approach of bert for text encoding
def __init__(self, channel=32, kernel_size=(1, 2, 4, 8)):
super(FastCNN, self).__init__()
self.fast_cnn = nn.ModuleList()
for kernel in kernel_size:
self.fast_cnn.append(
nn.Sequential(
nn.Conv1d(200, channel, kernel_size=kernel),
nn.BatchNorm1d(channel),
nn.ReLU(),
nn.AdaptiveMaxPool1d(1)
)
)
def forward(self, x):
x = x.permute(0, 2, 1)
x_out = []
for module in self.fast_cnn:
x_out.append(module(x).squeeze())
x_out = torch.cat(x_out, 1)
return x_out
class EncodingPart(nn.Module):
def __init__(
self,
cnn_channel=32,
cnn_kernel_size=(1, 2, 4, 8),
shared_image_dim=128,
shared_text_dim=128
):
super(EncodingPart, self).__init__()
self.shared_text_encoding = FastCNN(
channel=cnn_channel,
kernel_size=cnn_kernel_size
)
self.shared_text_linear = nn.Sequential(
nn.Linear(128, 64),
nn.BatchNorm1d(64),
nn.ReLU(),
nn.Dropout(),
nn.Linear(64, shared_text_dim),
nn.BatchNorm1d(shared_text_dim),
nn.ReLU()
)
self.shared_image = nn.Sequential(
nn.Linear(512, 256),
nn.BatchNorm1d(256),
nn.ReLU(),
nn.Dropout(),
nn.Linear(256, shared_image_dim),
nn.BatchNorm1d(shared_image_dim),
nn.ReLU()
)
def forward(self, text, image):
text_encoding = self.shared_text_encoding(text)
text_shared = self.shared_text_linear(text_encoding)
image_shared = self.shared_image(image)
return text_shared, image_shared
class SimilarityModule(nn.Module):
def __init__(self, shared_dim=128, sim_dim=64):
super(SimilarityModule, self).__init__()
self.encoding = EncodingPart()
self.text_aligner = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, sim_dim),
nn.BatchNorm1d(sim_dim),
nn.ReLU()
)
self.image_aligner = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, sim_dim),
nn.BatchNorm1d(sim_dim),
nn.ReLU()
)
self.sim_classifier_dim = sim_dim * 2
self.sim_classifier = nn.Sequential(
nn.BatchNorm1d(self.sim_classifier_dim),
nn.Linear(self.sim_classifier_dim, 64),
nn.BatchNorm1d(64),
nn.ReLU(),
nn.Linear(64, 2)
)
def forward(self, text, image):
text_encoding, image_encoding = self.encoding(text, image)
text_aligned = self.text_aligner(text_encoding)
image_aligned = self.image_aligner(image_encoding)
sim_feature = torch.cat([text_aligned, image_aligned], 1)
pred_similarity = self.sim_classifier(sim_feature)
return text_aligned, image_aligned, pred_similarity
class Encoder(nn.Module):
def __init__(self, z_dim=2):
super(Encoder, self).__init__()
self.z_dim = z_dim
# Vanilla MLP
self.net = nn.Sequential(
nn.Linear(64, 64),
nn.ReLU(True),
nn.Linear(64, z_dim * 2),
)
def forward(self, x):
# x = x.view(x.size(0), -1) # Flatten the input
params = self.net(x)
mu, sigma = params[:, :self.z_dim], params[:, self.z_dim:]
sigma = softplus(sigma) + 1e-7
return Independent(Normal(loc=mu, scale=sigma), 1)
class AmbiguityLearning(nn.Module):
def __init__(self):
super(AmbiguityLearning, self).__init__()
self.encoding = EncodingPart()
self.encoder_text = Encoder()
self.encoder_image = Encoder()
def forward(self, text_encoding, image_encoding):
# text_encoding, image_encoding = self.encoding(text, image)
p_z1_given_text = self.encoder_text(text_encoding)
p_z2_given_image = self.encoder_image(image_encoding)
z1 = p_z1_given_text.rsample()
z2 = p_z2_given_image.rsample()
kl_1_2 = p_z1_given_text.log_prob(z1) - p_z2_given_image.log_prob(z1)
kl_2_1 = p_z2_given_image.log_prob(z2) - p_z1_given_text.log_prob(z2)
skl = (kl_1_2 + kl_2_1)/ 2.
skl = nn.functional.sigmoid(skl)
return skl
class UnimodalDetection(nn.Module):
def __init__(self, shared_dim=128, prime_dim = 16):
super(UnimodalDetection, self).__init__()
self.text_uni = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, prime_dim),
nn.BatchNorm1d(prime_dim),
nn.ReLU()
)
self.image_uni = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, prime_dim),
nn.BatchNorm1d(prime_dim),
nn.ReLU()
)
def forward(self, text_encoding, image_encoding):
text_prime = self.text_uni(text_encoding)
image_prime = self.image_uni(image_encoding)
return text_prime, image_prime
class CrossModule4Batch(nn.Module):
def __init__(self, text_in_dim=64, image_in_dim=64, corre_out_dim=64):
super(CrossModule4Batch, self).__init__()
self.softmax = nn.Softmax(-1)
self.corre_dim = 64
self.pooling = nn.AdaptiveMaxPool1d(1)
self.c_specific_2 = nn.Sequential(
nn.Linear(self.corre_dim, corre_out_dim),
nn.BatchNorm1d(corre_out_dim),
nn.ReLU()
)
def forward(self, text, image):
text_in = text.unsqueeze(2)
image_in = image.unsqueeze(1)
corre_dim = text.shape[1]
similarity = torch.matmul(text_in, image_in) / math.sqrt(corre_dim)
correlation = self.softmax(similarity)
correlation_p = self.pooling(correlation).squeeze()
correlation_out = self.c_specific_2(correlation_p)
return correlation_out
class DetectionModule(nn.Module):
def __init__(self, feature_dim=64+16+16, h_dim=64):
super(DetectionModule, self).__init__()
self.encoding = EncodingPart()
self.ambiguity_module = AmbiguityLearning()
self.uni_repre = UnimodalDetection()
self.cross_module = CrossModule4Batch()
self.classifier_corre = nn.Sequential(
nn.Linear(feature_dim, h_dim),
nn.BatchNorm1d(h_dim),
nn.ReLU(),
# nn.Dropout(),
nn.Linear(h_dim, h_dim),
nn.BatchNorm1d(h_dim),
nn.ReLU(),
# nn.Dropout(),
nn.Linear(h_dim, 2)
)
def forward(self, text_raw, image_raw, text, image):
# text_encoding, image_encoding = self.encoding_module(text, image)
skl = self.ambiguity_module(text, image)
text_prime, image_prime = self.encoding(text_raw, image_raw)
text_prime, image_prime = self.uni_repre(text_prime, image_prime)
correlation = self.cross_module(text, image)
weight_uni = (1-skl).unsqueeze(1)
weight_corre = skl.unsqueeze(1)
text_final = weight_uni * text_prime
img_final = weight_uni * image_prime
corre_final = weight_corre * correlation
final_corre = torch.cat([text_final, img_final, corre_final], 1)
pre_label = self.classifier_corre(final_corre)
return pre_label
I am new to this domain, please suggest a fix.
A:
x_out has a single dimension after the squeeze operation (I assume, hard to say from your code). Try printing x_out.shape to check. In this case, the solution would be torch.cat(x_out,dim = 0).
You may save yourself some code comprehension headaches by not performing the squeeze operation until the end. For instance, if each output i in x_out has some dimension [1,d_i], it's straightforward to see that you want to concatenate along dimension 1. Then you can squeeze x_out at the very end.
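Building on that second suggestion, here is a minimal sketch (not the original author's code) of a forward method that squeezes only the pooled dimension, so the batch axis survives even when a batch happens to contain a single sample, which is one common way x_out ends up one-dimensional on only some iterations:
def forward(self, x):
    x = x.permute(0, 2, 1)
    x_out = []
    for module in self.fast_cnn:
        # module(x) has shape [batch, channel, 1]; squeezing only the last
        # dimension keeps [batch, channel] even for a batch of size 1
        x_out.append(module(x).squeeze(-1))
    # every element is 2-D here, so concatenating along dim 1 is always valid
    return torch.cat(x_out, dim=1)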
| File "/model.py", line 33, in forward x_out = torch.cat(x_out, 1) IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) | I read previous answers but couldn't fix this.
Whenever I run the code, this error pops up at a different epoch: sometimes the execution runs into the 50s and then suddenly this error appears and the execution stops; at other times the error appears around epoch 16, and so on.
0it [00:00, ?it/s]/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py:1960: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
185it [00:07, 23.88it/s]
Traceback (most recent call last):
File "/content/drive/MyDrive/train.py", line 241, in <module>
train()
File "/content/drive/MyDrive/train.py", line 98, in train
text_aligned_match, image_aligned_match, pred_similarity_match = similarity_module(fixed_text, matched_image)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/model.py", line 106, in forward
text_encoding, image_encoding = self.encoding(text, image)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/model.py", line 70, in forward
text_encoding = self.shared_text_encoding(text)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/content/drive/MyDrive/model.py", line 33, in forward
x_out = torch.cat(x_out, 1)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
The line creating issue is
x_out = torch.cat(x_out, 1)
Code is:
import math
import random
from random import random, seed
import torch
import torch.nn as nn
from torch.distributions import Normal, Independent
from torch.nn.functional import softplus
#random.seed(825)
seed(825)
class FastCNN(nn.Module):
    # a CNN-based alternative approach of bert for text encoding
def __init__(self, channel=32, kernel_size=(1, 2, 4, 8)):
super(FastCNN, self).__init__()
self.fast_cnn = nn.ModuleList()
for kernel in kernel_size:
self.fast_cnn.append(
nn.Sequential(
nn.Conv1d(200, channel, kernel_size=kernel),
nn.BatchNorm1d(channel),
nn.ReLU(),
nn.AdaptiveMaxPool1d(1)
)
)
def forward(self, x):
x = x.permute(0, 2, 1)
x_out = []
for module in self.fast_cnn:
x_out.append(module(x).squeeze())
x_out = torch.cat(x_out, 1)
return x_out
class EncodingPart(nn.Module):
def __init__(
self,
cnn_channel=32,
cnn_kernel_size=(1, 2, 4, 8),
shared_image_dim=128,
shared_text_dim=128
):
super(EncodingPart, self).__init__()
self.shared_text_encoding = FastCNN(
channel=cnn_channel,
kernel_size=cnn_kernel_size
)
self.shared_text_linear = nn.Sequential(
nn.Linear(128, 64),
nn.BatchNorm1d(64),
nn.ReLU(),
nn.Dropout(),
nn.Linear(64, shared_text_dim),
nn.BatchNorm1d(shared_text_dim),
nn.ReLU()
)
self.shared_image = nn.Sequential(
nn.Linear(512, 256),
nn.BatchNorm1d(256),
nn.ReLU(),
nn.Dropout(),
nn.Linear(256, shared_image_dim),
nn.BatchNorm1d(shared_image_dim),
nn.ReLU()
)
def forward(self, text, image):
text_encoding = self.shared_text_encoding(text)
text_shared = self.shared_text_linear(text_encoding)
image_shared = self.shared_image(image)
return text_shared, image_shared
class SimilarityModule(nn.Module):
def __init__(self, shared_dim=128, sim_dim=64):
super(SimilarityModule, self).__init__()
self.encoding = EncodingPart()
self.text_aligner = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, sim_dim),
nn.BatchNorm1d(sim_dim),
nn.ReLU()
)
self.image_aligner = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, sim_dim),
nn.BatchNorm1d(sim_dim),
nn.ReLU()
)
self.sim_classifier_dim = sim_dim * 2
self.sim_classifier = nn.Sequential(
nn.BatchNorm1d(self.sim_classifier_dim),
nn.Linear(self.sim_classifier_dim, 64),
nn.BatchNorm1d(64),
nn.ReLU(),
nn.Linear(64, 2)
)
def forward(self, text, image):
text_encoding, image_encoding = self.encoding(text, image)
text_aligned = self.text_aligner(text_encoding)
image_aligned = self.image_aligner(image_encoding)
sim_feature = torch.cat([text_aligned, image_aligned], 1)
pred_similarity = self.sim_classifier(sim_feature)
return text_aligned, image_aligned, pred_similarity
class Encoder(nn.Module):
def __init__(self, z_dim=2):
super(Encoder, self).__init__()
self.z_dim = z_dim
# Vanilla MLP
self.net = nn.Sequential(
nn.Linear(64, 64),
nn.ReLU(True),
nn.Linear(64, z_dim * 2),
)
def forward(self, x):
# x = x.view(x.size(0), -1) # Flatten the input
params = self.net(x)
mu, sigma = params[:, :self.z_dim], params[:, self.z_dim:]
sigma = softplus(sigma) + 1e-7
return Independent(Normal(loc=mu, scale=sigma), 1)
class AmbiguityLearning(nn.Module):
def __init__(self):
super(AmbiguityLearning, self).__init__()
self.encoding = EncodingPart()
self.encoder_text = Encoder()
self.encoder_image = Encoder()
def forward(self, text_encoding, image_encoding):
# text_encoding, image_encoding = self.encoding(text, image)
p_z1_given_text = self.encoder_text(text_encoding)
p_z2_given_image = self.encoder_image(image_encoding)
z1 = p_z1_given_text.rsample()
z2 = p_z2_given_image.rsample()
kl_1_2 = p_z1_given_text.log_prob(z1) - p_z2_given_image.log_prob(z1)
kl_2_1 = p_z2_given_image.log_prob(z2) - p_z1_given_text.log_prob(z2)
skl = (kl_1_2 + kl_2_1)/ 2.
skl = nn.functional.sigmoid(skl)
return skl
class UnimodalDetection(nn.Module):
def __init__(self, shared_dim=128, prime_dim = 16):
super(UnimodalDetection, self).__init__()
self.text_uni = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, prime_dim),
nn.BatchNorm1d(prime_dim),
nn.ReLU()
)
self.image_uni = nn.Sequential(
nn.Linear(shared_dim, shared_dim),
nn.BatchNorm1d(shared_dim),
nn.ReLU(),
nn.Linear(shared_dim, prime_dim),
nn.BatchNorm1d(prime_dim),
nn.ReLU()
)
def forward(self, text_encoding, image_encoding):
text_prime = self.text_uni(text_encoding)
image_prime = self.image_uni(image_encoding)
return text_prime, image_prime
class CrossModule4Batch(nn.Module):
def __init__(self, text_in_dim=64, image_in_dim=64, corre_out_dim=64):
super(CrossModule4Batch, self).__init__()
self.softmax = nn.Softmax(-1)
self.corre_dim = 64
self.pooling = nn.AdaptiveMaxPool1d(1)
self.c_specific_2 = nn.Sequential(
nn.Linear(self.corre_dim, corre_out_dim),
nn.BatchNorm1d(corre_out_dim),
nn.ReLU()
)
def forward(self, text, image):
text_in = text.unsqueeze(2)
image_in = image.unsqueeze(1)
corre_dim = text.shape[1]
similarity = torch.matmul(text_in, image_in) / math.sqrt(corre_dim)
correlation = self.softmax(similarity)
correlation_p = self.pooling(correlation).squeeze()
correlation_out = self.c_specific_2(correlation_p)
return correlation_out
class DetectionModule(nn.Module):
def __init__(self, feature_dim=64+16+16, h_dim=64):
super(DetectionModule, self).__init__()
self.encoding = EncodingPart()
self.ambiguity_module = AmbiguityLearning()
self.uni_repre = UnimodalDetection()
self.cross_module = CrossModule4Batch()
self.classifier_corre = nn.Sequential(
nn.Linear(feature_dim, h_dim),
nn.BatchNorm1d(h_dim),
nn.ReLU(),
# nn.Dropout(),
nn.Linear(h_dim, h_dim),
nn.BatchNorm1d(h_dim),
nn.ReLU(),
# nn.Dropout(),
nn.Linear(h_dim, 2)
)
def forward(self, text_raw, image_raw, text, image):
# text_encoding, image_encoding = self.encoding_module(text, image)
skl = self.ambiguity_module(text, image)
text_prime, image_prime = self.encoding(text_raw, image_raw)
text_prime, image_prime = self.uni_repre(text_prime, image_prime)
correlation = self.cross_module(text, image)
weight_uni = (1-skl).unsqueeze(1)
weight_corre = skl.unsqueeze(1)
text_final = weight_uni * text_prime
img_final = weight_uni * image_prime
corre_final = weight_corre * correlation
final_corre = torch.cat([text_final, img_final, corre_final], 1)
pre_label = self.classifier_corre(final_corre)
return pre_label
I am new to this domain, please suggest a fix.
| [
"x_out has a single dimension after the squeeze operation (I assume, hard to say from your code). Try printing x_out.shape to check. In this case, the solution would be torch.cat(x_out,dim = 0).\nYou may save yourself some code comprehension headaches by not performing the squeeze operation until the end. For instance, if each output i in x_out has some dimension [1,d_i], it's straightforward to see that you want to concatenate along dimension 1. Then you can squeeze x_out at the very end.\n"
] | [
0
] | [] | [] | [
"machine_learning",
"python",
"pytorch"
] | stackoverflow_0074656868_machine_learning_python_pytorch.txt |
Q:
code running fine on computer and on pydroid 3 it doesn't
So I have made a script that automatically turns my computer on when my phone connects to my wifi network. I downloaded Pydroid on another phone to run it non-stop, and it outputs:
ping [-aAbBdDfhLnOqrRUvV] [-c count] [-i interval] [-I interface]
[-m mark] [-M pmtudisc_option] [-l preload] [-p pattern] [-Q tos]
[-s packetsize] [-S sndbuf] [-t ttl] [-T timestamp_option]
[-w deadline] [-W timeout] [hop1 ...] destination
and the code is this that works perfectly on a computer:
import subprocess
from wakeonlan import send_magic_packet
IP_DEVICE = 'phonesip'
devices = {
'my_pc': {'mac': 'mymacadress', 'ip_address': 'myipadress'}
}
def wake_device(device_name):
if device_name in devices:
mac, ip = devices[device_name].values()
send_magic_packet(mac, ip_address=ip)
print('Magic Packet Sent')
else:
print('Device Not Found')
proc = subprocess.Popen(["ping", '-t', IP_DEVICE], stdout=subprocess.PIPE)
while True:
line = proc.stdout.readline()
if not line:
break
else:
None
try:
connected_ip = line.decode('utf-8').split()[2].replace(':', '')
if connected_ip == IP_DEVICE:
print('Device connected!')
wake_device('my_pc')
# Do whatever you want when the device connects here...
break
else:
print('Pinging device...')
except:
pass
A:
Try omitting the -t from ping when running the script on your phone.
ping works differently on your phone. On Windows you have to add -t for it to ping non-stop; on other operating systems that is the default behaviour.
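If the same script has to run on both systems, a small sketch of a cross-platform guard (only the flag handling is new here; the rest mirrors the question's code):
import platform
import subprocess

ping_cmd = ["ping", IP_DEVICE]
if platform.system() == "Windows":
    # only Windows needs -t to keep pinging until interrupted
    ping_cmd.insert(1, "-t")
proc = subprocess.Popen(ping_cmd, stdout=subprocess.PIPE)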
| code running fine on computer and on pydroid 3 it doesn't | So I have made a script that automatically turns my computer on when my phone connects to my wifi network. I downloaded Pydroid on another phone to run it non-stop, and it outputs:
ping [-aAbBdDfhLnOqrRUvV] [-c count] [-i interval] [-I interface]
[-m mark] [-M pmtudisc_option] [-l preload] [-p pattern] [-Q tos]
[-s packetsize] [-S sndbuf] [-t ttl] [-T timestamp_option]
[-w deadline] [-W timeout] [hop1 ...] destination
and the code is this that works perfectly on a computer:
import subprocess
from wakeonlan import send_magic_packet
IP_DEVICE = 'phonesip'
devices = {
'my_pc': {'mac': 'mymacadress', 'ip_address': 'myipadress'}
}
def wake_device(device_name):
if device_name in devices:
mac, ip = devices[device_name].values()
send_magic_packet(mac, ip_address=ip)
print('Magic Packet Sent')
else:
print('Device Not Found')
proc = subprocess.Popen(["ping", '-t', IP_DEVICE], stdout=subprocess.PIPE)
while True:
line = proc.stdout.readline()
if not line:
break
else:
None
try:
connected_ip = line.decode('utf-8').split()[2].replace(':', '')
if connected_ip == IP_DEVICE:
print('Device connected!')
wake_device('my_pc')
# Do whatever you want when the device connects here...
break
else:
print('Pinging device...')
except:
pass
| [
"Try omitting the -t from ping when running the script on your phone.\nping works differently on your phone. On windows you have to add -t for it to ping nonstop, on other operating systems that is the default behaviour.\n"
] | [
0
] | [] | [] | [
"pydroid",
"python"
] | stackoverflow_0074633484_pydroid_python.txt |
Q:
Prepare json file for GPT
I would like to create a dataset to use it for fine-tuning GPT3. As I read from the following site https://beta.openai.com/docs/guides/fine-tuning, the dataset should look like this
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
...
For this reason I am creating the dataset with the following way
import json
# Data to be written
dictionary = {
"prompt": "<text1>", "completion": "<text to be generated1>"}, {
"prompt": "<text2>", "completion": "<text to be generated2>"}
with open("sample2.json", "w") as outfile:
json.dump(dictionary, outfile)
However, when I am trying to load it, it looks like this which is not as we want
import json
# Opening JSON file
with open('sample2.json', 'r') as openfile:
# Reading from json file
json_object = json.load(openfile)
print(json_object)
print(type(json_object))
>> [{'prompt': '<text1>', 'completion': '<text to be generated1>'}, {'prompt': '<text2>', 'completion': '<text to be generated2>'}]
<class 'list'>
Could you please let me know how I can fix this problem?
A:
The expected format is JSON Lines: write a newline character (\n) after each JSON object, so each line is a standalone JSON document. (The jsonlines link somehow threw a "server not found" error for me.)
You can use either of these options:
write \n after each line:
import json
with open("sample2_op1.json", "w") as outfile:
for e_json in dictionary:
json.dump(e_json, outfile)
outfile.write('\n')
#read file, as it has \n, read line by line and load as json
with open("sample2_op1.json","r") as file:
for line in file:
print(json.loads(line),type(json.loads(line)))
use the jsonlines module, which also has a way to read the file back
install the module !pip install jsonlines
import jsonlines
#write to file
with jsonlines.open('sample2_op2.jsonl', 'w') as outfile:
outfile.write_all(dictionary)
#read the file
with jsonlines.open('sample2_op2.jsonl') as reader:
for obj in reader:
print(obj)
| Prepare json file for GPT | I would like to create a dataset to use it for fine-tuning GPT3. As I read from the following site https://beta.openai.com/docs/guides/fine-tuning, the dataset should look like this
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
{"prompt": "<prompt text>", "completion": "<ideal generated text>"}
...
For this reason I am creating the dataset with the following way
import json
# Data to be written
dictionary = {
"prompt": "<text1>", "completion": "<text to be generated1>"}, {
"prompt": "<text2>", "completion": "<text to be generated2>"}
with open("sample2.json", "w") as outfile:
json.dump(dictionary, outfile)
However, when I am trying to load it, it looks like this which is not as we want
import json
# Opening JSON file
with open('sample2.json', 'r') as openfile:
# Reading from json file
json_object = json.load(openfile)
print(json_object)
print(type(json_object))
>> [{'prompt': '<text1>', 'completion': '<text to be generated1>'}, {'prompt': '<text2>', 'completion': '<text to be generated2>'}]
<class 'list'>
Could you please let me know how I can fix this problem?
| [
"it's more like, writing \\n a new line character after each json. so each line is JSON. somehow the link jsonlines throw server not found error on me.\nyou can have these options:\n\nwrite \\n after each line:\n\nimport json\nwith open(\"sample2_op1.json\", \"w\") as outfile:\n for e_json in dictionary:\n json.dump(e_json, outfile)\n outfile.write('\\n')\n#read file, as it has \\n, read line by line and load as json\nwith open(\"sample2_op1.json\",\"r\") as file:\n for line in file:\n print(json.loads(line),type(json.loads(line)))\n\n\nwhich have way to read file too, its jsonlines\ninstall the module !pip install jsonlines\n\nimport jsonlines\n#write to file\nwith jsonlines.open('sample2_op2.jsonl', 'w') as outfile:\n outfile.write_all(dictionary)\n#read the file\nwith jsonlines.open('sample2_op2.jsonl') as reader:\n for obj in reader:\n print(obj)\n\n"
] | [
1
] | [] | [] | [
"gpt_3",
"nlp",
"python"
] | stackoverflow_0074656790_gpt_3_nlp_python.txt |
Q:
Data in Pandas becomes NaN when I add it to another data frame
I am trying to pull Worldwide data from a data set and add the column to another dataframe using pandas, but the data becomes NaN every time I run the code. The same code works when I try to pull US data.
Code:
fb_us_data = us_data[us_data.app_id == fb_key]
fb_ww_data = ww_data[ww_data.app_id == fb_key]
fb_intl_data = fb_us_data[["app_id","date"]]
fb_intl_data['total_users_ww'] = fb_us_data['total_users']
fb_intl_data["total_users_us"] = fb_us_data['total_users']
print(fb_intl_data)`
Output:
app_id date total_users_ww total_users_us
158515 55c530a702ac64f9c0002dff 2015-09-28 NaN 114609170
158516 55c530a702ac64f9c0002dff 2015-10-05 NaN 115642838
158517 55c530a702ac64f9c0002dff 2015-10-12 NaN 116414827
158518 55c530a702ac64f9c0002dff 2015-10-19 NaN 117005866
158519 55c530a702ac64f9c0002dff 2015-10-26 NaN 118743332
... ... ... ... ...
158885 55c530a702ac64f9c0002dff 2022-10-31 NaN 228981651
158886 55c530a702ac64f9c0002dff 2022-11-07 NaN 229721851
158887 55c530a702ac64f9c0002dff 2022-11-14 NaN 228069299
158888 55c530a702ac64f9c0002dff 2022-11-21 NaN 228729072
158889 55c530a702ac64f9c0002dff 2022-11-28 NaN 225447696
[375 rows x 4 columns]
A:
The problem here comes from the indexes; try resetting the index before adding, like this:
fb_us_data = us_data[us_data.app_id == fb_key].copy()
fb_ww_data = ww_data[ww_data.app_id == fb_key].copy()
fb_us_data.reset_index(drop=True, inplace=True)
fb_ww_data.reset_index(drop=True, inplace=True)
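After the reset, the assignments from the question line up row by row. A short sketch of how that part could then look (assuming the worldwide column is meant to come from fb_ww_data rather than fb_us_data):
fb_intl_data = fb_us_data[["app_id", "date"]].copy()
fb_intl_data["total_users_ww"] = fb_ww_data["total_users"]
fb_intl_data["total_users_us"] = fb_us_data["total_users"]
print(fb_intl_data)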
| Data in Pandas becomes NaN when I add it to another data frame | I am trying to pull Worldwide data from a data set and add the column to another dataframe using pandas, but the data becomes NaN every time I run the code. The same code works when I try to pull US data.
Code:
fb_us_data = us_data[us_data.app_id == fb_key]
fb_ww_data = ww_data[ww_data.app_id == fb_key]
fb_intl_data = fb_us_data[["app_id","date"]]
fb_intl_data['total_users_ww'] = fb_us_data['total_users']
fb_intl_data["total_users_us"] = fb_us_data['total_users']
print(fb_intl_data)`
Output:
app_id date total_users_ww total_users_us
158515 55c530a702ac64f9c0002dff 2015-09-28 NaN 114609170
158516 55c530a702ac64f9c0002dff 2015-10-05 NaN 115642838
158517 55c530a702ac64f9c0002dff 2015-10-12 NaN 116414827
158518 55c530a702ac64f9c0002dff 2015-10-19 NaN 117005866
158519 55c530a702ac64f9c0002dff 2015-10-26 NaN 118743332
... ... ... ... ...
158885 55c530a702ac64f9c0002dff 2022-10-31 NaN 228981651
158886 55c530a702ac64f9c0002dff 2022-11-07 NaN 229721851
158887 55c530a702ac64f9c0002dff 2022-11-14 NaN 228069299
158888 55c530a702ac64f9c0002dff 2022-11-21 NaN 228729072
158889 55c530a702ac64f9c0002dff 2022-11-28 NaN 225447696
[375 rows x 4 columns]
| [
"the problem here come from indexes try reseting index before adding like this:\nfb_us_data = us_data[us_data.app_id == fb_key].copy()\nfb_ww_data = ww_data[ww_data.app_id == fb_key].copy()\nfb_us_data.reset_index(drop=True, inplace=True)\nfb_ww_data.reset_index(drop=True, inplace=True)\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"typeerror"
] | stackoverflow_0074656491_dataframe_pandas_python_typeerror.txt |
Q:
Can I bulk geocode addresses from local OSM tile server to get lat/long?
I need to geocode a few million addresses within a single country. I know that paid geocode APIs charge for bulk geocoding and/or place limits for queries. I downloaded a map tile server to run within a docker but would like to know how to get address lat/long.
I used https://github.com/Overv/openstreetmap-tile-server to set up the openStreet map tile server.
Is there a better way of doing this? I am using python-
A:
You did set up a local tile server, not a geocoder. To perform local bulk geocoding you need a local geocoding server. For an OSM-based geocoder take a look at Nominatim.
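Once a local Nominatim instance is running, each address can be geocoded over HTTP and the lat/long read from the JSON response. A minimal sketch, assuming Nominatim is reachable at localhost:8080 (the host and port are illustrative, not part of the question):
import requests

NOMINATIM_URL = "http://localhost:8080/search"

def geocode(address):
    resp = requests.get(NOMINATIM_URL, params={"q": address, "format": "json", "limit": 1})
    results = resp.json()
    if results:
        return float(results[0]["lat"]), float(results[0]["lon"])
    return None  # address could not be resolved

print(geocode("10 Downing Street, London"))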
| Can I bulk geocode addresses from local OSM tile server to get lat/long? | I need to geocode a few million addresses within a single country. I know that paid geocode APIs charge for bulk geocoding and/or place limits for queries. I downloaded a map tile server to run within a docker but would like to know how to get address lat/long.
I used https://github.com/Overv/openstreetmap-tile-server to set up the openStreet map tile server.
Is there a better way of doing this? I am using python-
| [
"You did set up a local tile server, not a geocoder. To perform local bulk geocoding you need a local geocoding server. For an OSM-based geocoder take a look at Nominatim.\n"
] | [
0
] | [] | [] | [
"docker",
"geocode",
"local",
"openstreetmap",
"python"
] | stackoverflow_0074657183_docker_geocode_local_openstreetmap_python.txt |
Q:
Remove specific things from string in Python
I am struggling to remove some characters in a string. This is inside a loop. So if the string contains either of the below, then it needs to remove them and leave the rest behind.
Characters to remove:
"-"
"1)", "2)" etc
Here is the loop:
for i in item:
if i != "":
items[heading].append(i)
I am just wondering if there is any advice as to where I can look for help please :)
Here is what I have tried, without the desired results:
for i in item:
if i != "":
i = i.replace('-', '')
i = i[i.find(')'):]
items[title].append(i)
A:
Try this!
bl = ["-", "1)", "2)"]
string = "-2)Hello1) World!"
for item in bl:
string = string.replace(item, "")
print(string)
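Since the question lists "1)", "2)" etc., the numbered prefixes can be arbitrary digits; a regular expression covers that general case in one pass (a sketch using the same sample string as above):
import re

string = "-2)Hello1) World!"
# strip every hyphen and every run of digits followed by ")"
cleaned = re.sub(r"-|\d+\)", "", string)
print(cleaned)  # Hello World!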
| Remove specific things from string in Python | I am struggling to remove some characters in a string. This is inside a loop. So if the string contains either of the below, then it needs to remove them and leave the rest behind.
Characters to remove:
"-"
"1)", "2)" etc
Here is the loop:
for i in item:
if i != "":
items[heading].append(i)
I am just wondering if there is any advice as to where I can look for help please :)
Here is what I have tried, without the desired results:
for i in item:
if i != "":
i = i.replace('-', '')
i = i[i.find(')'):]
items[title].append(i)
| [
"Try this!\nbl = [\"-\", \"1)\", \"2)\"]\nstring = \"-2)Hello1) World!\"\n\nfor item in bl:\n string = string.replace(item, \"\")\n\nprint(string)\n\n"
] | [
-1
] | [] | [] | [
"python",
"string"
] | stackoverflow_0074657193_python_string.txt |
Q:
I use Django with PostgreSQL on docker compose, but django-test can't access database
I practice Writing your first Django app, part 5, it's django test section.
And my environment is:
Django 4.0
Python 3.9
Database PostgreSQL 14.2
Docker compose
In addition, the connection to PostgreSQL is configured via .pg_service.conf and .pgpass.
docker-compose.yml
version: "3.9"
services:
web:
build:
context: .
container_name: web
restart: unless-stopped
tty: true
working_dir: /opt/apps
volumes:
- .:/opt/apps/
- .pgpass:/root/.pgpass
- .pg_service.conf:/root/.pg_service.conf
entrypoint: "./entrypoint.sh"
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
depends_on:
- db
db:
image: postgres
container_name: db
environment:
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASSWORD}
ports:
- ${DB_PORT}:5432
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
networks:
django-tutorial:
compose .env file
DB_USER=postgres
DB_PASSWORD=postgres
DB_HOST=db
DB_PORT=5432
.pg_service.conf
[DB]
host=db
user=postgres
dbname=django_tutorial
port=5432
.pgpass
db:5432:postgres:postgres:postgres
When docker-compose up -d is executed with the above configuration and python manage.py migrate is run, the migration completes with no problems and the database is accessible.
However, when I try to run the test as in python manage.py test polls, I get the following error.
(The test is the same as the code in the link at the beginning of this article.)
Do I need any additional configuration for the test DB?
docker compose exec web python manage.py test polls ✔ 15:20:03
Found 1 test(s).
Creating test database for alias 'default'...
/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py:323: RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead.
warnings.warn(
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 225, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 203, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 318, in _nodb_cursor
with super()._nodb_cursor() as cursor:
File "/usr/local/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 656, in _nodb_cursor
with conn.cursor() as cursor:
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 284, in cursor
return self._cursor()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 260, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 225, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 203, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/apps/manage.py", line 22, in <module>
main()
File "/opt/apps/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/test.py", line 24, in run_from_argv
super().run_from_argv(argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 414, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 460, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/test.py", line 68, in handle
failures = test_runner.run_tests(test_labels)
File "/usr/local/lib/python3.9/site-packages/django/test/runner.py", line 1000, in run_tests
old_config = self.setup_databases(
File "/usr/local/lib/python3.9/site-packages/django/test/runner.py", line 898, in setup_databases
return _setup_databases(
File "/usr/local/lib/python3.9/site-packages/django/test/utils.py", line 220, in setup_databases
connection.creation.create_test_db(
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/creation.py", line 63, in create_test_db
self._create_test_db(verbosity, autoclobber, keepdb)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/creation.py", line 199, in _create_test_db
with self._nodb_cursor() as cursor:
File "/usr/local/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 344, in _nodb_cursor
with conn.cursor() as cursor:
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 284, in cursor
return self._cursor()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 260, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 224, in connect
conn_params = self.get_connection_params()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 162, in get_connection_params
raise ImproperlyConfigured(
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the NAME or OPTIONS['service'] value.
A:
Using a service name for testing purposes is not supported at the moment.
I suspect this is why python manage.py test does not work.
See the ticket here https://code.djangoproject.com/ticket/33685
If that is the cause of your issue, python manage.py runserver should work.
If you need to run tests, try the old-style explicit config.
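For reference, a minimal sketch of such an old-style DATABASES setting that spells out the connection instead of pointing at the service file (the values simply mirror the compose .env above and are assumptions, not a verified fix):
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "django_tutorial",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "db",
        "PORT": "5432",
    }
}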
| I use Django with PostgreSQL on docker compose, but django-test can't access database | I am working through Writing your first Django app, part 5, specifically its Django test section.
And my environment is:
Django 4.0
Python 3.9
Database PostgreSQL 14.2
Docker compose
In addition, the connection to PostgreSQL is configured via .pg_service.conf and .pgpass.
docker-compose.yml
version: "3.9"
services:
web:
build:
context: .
container_name: web
restart: unless-stopped
tty: true
working_dir: /opt/apps
volumes:
- .:/opt/apps/
- .pgpass:/root/.pgpass
- .pg_service.conf:/root/.pg_service.conf
entrypoint: "./entrypoint.sh"
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
depends_on:
- db
db:
image: postgres
container_name: db
environment:
POSTGRES_USER: ${DB_USER}
POSTGRES_PASSWORD: ${DB_PASSWORD}
ports:
- ${DB_PORT}:5432
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
postgres_data:
networks:
django-tutorial:
compose .env file
DB_USER=postgres
DB_PASSWORD=postgres
DB_HOST=db
DB_PORT=5432
.pg_service.conf
[DB]
host=db
user=postgres
dbname=django_tutorial
port=5432
.pgpass
db:5432:postgres:postgres:postgres
When docker-compose up -d is executed with the above configuration and python manage.py migrate is run, the migration completes with no problems and the database is accessible.
However, when I try to run the test as in python manage.py test polls, I get the following error.
(The test is the same as the code in the link at the beginning of this article.)
Do I need any additional configuration for the test DB?
docker compose exec web python manage.py test polls ✔ 15:20:03
Found 1 test(s).
Creating test database for alias 'default'...
/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py:323: RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead.
warnings.warn(
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 225, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 203, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 318, in _nodb_cursor
with super()._nodb_cursor() as cursor:
File "/usr/local/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 656, in _nodb_cursor
with conn.cursor() as cursor:
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 284, in cursor
return self._cursor()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 260, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 225, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 203, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/apps/manage.py", line 22, in <module>
main()
File "/opt/apps/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/test.py", line 24, in run_from_argv
super().run_from_argv(argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 414, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 460, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/test.py", line 68, in handle
failures = test_runner.run_tests(test_labels)
File "/usr/local/lib/python3.9/site-packages/django/test/runner.py", line 1000, in run_tests
old_config = self.setup_databases(
File "/usr/local/lib/python3.9/site-packages/django/test/runner.py", line 898, in setup_databases
return _setup_databases(
File "/usr/local/lib/python3.9/site-packages/django/test/utils.py", line 220, in setup_databases
connection.creation.create_test_db(
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/creation.py", line 63, in create_test_db
self._create_test_db(verbosity, autoclobber, keepdb)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/creation.py", line 199, in _create_test_db
with self._nodb_cursor() as cursor:
File "/usr/local/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 344, in _nodb_cursor
with conn.cursor() as cursor:
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 284, in cursor
return self._cursor()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 260, in _cursor
self.ensure_connection()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 244, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 224, in connect
conn_params = self.get_connection_params()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 162, in get_connection_params
raise ImproperlyConfigured(
django.core.exceptions.ImproperlyConfigured: settings.DATABASES is improperly configured. Please supply the NAME or OPTIONS['service'] value.
| [
"Using a service name for testing purposes is not supported at the moment.\nI suspect this is why python manage.py test does not work.\nSee the ticket here https://code.djangoproject.com/ticket/33685\nIf that is the cause of your issue, python manage.py runserver should work.\nIf you need to use tests, try the old config.\n"
] | [
0
] | [] | [] | [
"django",
"docker",
"postgresql",
"python",
"python_3.x"
] | stackoverflow_0072122960_django_docker_postgresql_python_python_3.x.txt |
Q:
I don't want means in my pivot table. I want both data points
I had a big dataframe with training data: for each day and person the kind of training, and some data points per training.
I pivot the table with this code
trainingload = trainingload.pivot_table(index=('Date', 'About'), columns='NHV Training', values=['Duration', 'sRPE Cardio', 'sRPE Biomechanical'])
It works fine for most data. But sometimes the same person had two trainings of the same kind on one day. In the new dataframe the values Duration, sRPE Cardio and sRPE Biomechanical are then aggregated as means, while I would like to keep both as separate rows.
I could not find anything about this problem so I'm stuck, who can help me out?
A:
One solution is to keep using pivot_table but pass a different aggregation function via aggfunc instead of the default mean (DataFrame.pivot itself does not take an aggfunc argument). For example, aggfunc=list collects the duplicate values instead of averaging them.
Here is an example:
trainingload = trainingload.pivot_table(index=('Date', 'About'), columns='NHV Training', values=['Duration', 'sRPE Cardio', 'sRPE Biomechanical'], aggfunc=list)
This will result in the Duration, sRPE Cardio and sRPE Biomechanical values for each person and each day being collected into lists rather than averaged.
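If the duplicates really must stay as separate rows rather than per-cell lists, one possible sketch is to number the repeated trainings first and include that counter in the index (the column name 'occurrence' is only an illustration):
trainingload['occurrence'] = trainingload.groupby(['Date', 'About', 'NHV Training']).cumcount()
trainingload = trainingload.pivot_table(index=['Date', 'About', 'occurrence'], columns='NHV Training', values=['Duration', 'sRPE Cardio', 'sRPE Biomechanical'])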
| I don't want means in my pivot table. I want both data points | I had a big dataframe with training data: for each day and person the kind of training, and some data points per training.
I pivot the table with this code
trainingload = trainingload.pivot_table(index=('Date', 'About'), columns='NHV Training', values=['Duration', 'sRPE Cardio', 'sRPE Biomechanical'])
It works fine for most data. But sometimes the same person had two trainings of the same kind on one day. In the new dataframe the values Duration, sRPE Cardio and sRPE Biomechanical are then aggregated as means, while I would like to keep both as separate rows.
I could not find anything about this problem so I'm stuck, who can help me out?
| [
"One solution would be to not use the pivot_table function and instead use the pivot function, which allows you to specify the aggregation function. Instead of using the default mean function, you can use the function that simply concatenates the values together, such as the sum function.\nHere is an example:\ntrainingload = trainingload.pivot(index=('Date', 'About'), columns='NHV Training', values=['Duration', 'sRPE Cardio', 'sRPE Biomechanical'], aggfunc=sum)\n\nThis will result in the values for Duration, sRPE Cardio, and sRPE Biomechanical being concatenated together for each person and each day, rather than being averaged.\n"
] | [
0
] | [] | [] | [
"dataframe",
"pivot",
"python"
] | stackoverflow_0074657259_dataframe_pivot_python.txt |
Q:
Match case statement with multiple 'or' conditions in each case
Is there a way to assess whether a case statement variable is inside a particular list? Consider the following scenario. We have three lists.
a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]
Then I want to check whether x is in each list. Something like that (of course this is a Syntax Error but I hope you get the point).
match x:
case in a:
return "132"
case in b:
return "564"
case in c:
return "798"
This can be easy with an if-else scenario. Nonetheless, focusing on match-case, if one has many lists, and big lists, it would be a mundane task to write them out like that:
match x:
case 1 | 2 | 3:
return "132"
case 4 | 5 | 6:
return "564"
case 7 | 8 | 9:
return "762"
Is there an easy way to check for multiple conditions for each case, without having to write them down?
I checked for duplicates, but I couldn't find them, I hope I don't miss something. Please be kind and let me know if there is a duplicate question.
A:
Cases accept a "guard" clause starting with Python 3.10, which you can use for this purpose:
match x:
    case w if w in a:
        ...  # this was the "case in a" in the question
    case w if w in b:
        ...  # this was the "case in b" in the question
    # ... and so on for the remaining lists
The w here captures the value of x; that capture is part of the pattern syntax, and it becomes more useful in some of the fancier cases listed on the linked whatsnew page.
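Putting it together, a complete runnable sketch for the three lists from the question (requires Python 3.10 or newer):
a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]

def classify(x):
    match x:
        case w if w in a:
            return "132"
        case w if w in b:
            return "564"
        case w if w in c:
            return "798"
        case _:
            return "no match"

print(classify(5))  # 564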
| Match case statement with multiple 'or' conditions in each case | Is there a way to assess whether a case statement variable is inside a particular list? Consider the following scenario. We have three lists.
a = [1, 2, 3]
b = [4, 5, 6]
c = [7, 8, 9]
Then I want to check whether x is in each list. Something like that (of course this is a Syntax Error but I hope you get the point).
match x:
case in a:
return "132"
case in b:
return "564"
case in c:
return "798"
This can be easy with an if-else scenario. Nonetheless, focusing on the match-case, if one has many lists. And big lists, it would be a mundane task to write them like that:
match x:
case 1 | 2 | 3:
return "132"
case 4 | 5 | 6:
return "564"
case 7 | 8 | 9:
return "762"
Is there an easy to way to check for multiple conditions for each case, without having to write them down?
I checked for duplicates, but I couldn't find them, I hope I don't miss something. Please be kind and let me know if there is a duplicate question.
| [
"As it seems cases accept a \"guard\" clause starting with Python 3.10, which you can use for this purpose:\nmatch x:\n case w if w in a:\n # this was the \"case in a\" in the question\n case w if w in b:\n # this was the \"case in b\" in the question\n ...\n\nthe w here actually captures the value of x, part of the syntax here too, but it's more useful in some other fancy cases listed on the linked whatsnew page.\n"
] | [
2
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074655787_python_python_3.x.txt |
Q:
Performing arithmetic calculations on all possible digit combinations in a list
I create data in a format like this:
initial_data = [
"518-2", '533-3', '534-0',
'000-3', '000-4']
I need to perform several operations (add, sub, div, mult, factorial, power_to, root) on the part before the hyphen to see if there's an equation which equals the part after the hyphen.
Like so:
#5182
-5 - 1 + 8 = 2 or 5*(-1) - 1 + 8 = 2
#000-3
number, solution, number_of_solutions
000-3,(0! + 0!) + 0! = 3,2
or
000-4,,0
or
533-3,5 - (3! / 3) = 3,5
Every digit in the part before the hyphen can have an opposite sign, so I found this:
def inverter(data):
inverted_data = [-x for x in data]
res = list(product(*zip(data, inverted_data)))
return res
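For example, for the digits of the first entry ("518") it returns every sign combination (shown here as a small self-contained check):
from itertools import product

def inverter(data):
    inverted_data = [-x for x in data]
    return list(product(*zip(data, inverted_data)))

print(inverter([5, 1, 8]))
# [(5, 1, 8), (5, 1, -8), (5, -1, 8), (5, -1, -8),
#  (-5, 1, 8), (-5, 1, -8), (-5, -1, 8), (-5, -1, -8)]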
I'm supposed to create a CSV file like in the example above but I haven't gotten to that part yet and that seems like the easiest part. What I have are several disparate parts that I can't connect in a sensible way:
import numpy as np
from itertools import product
from math import factorial
def plus(a, b):
return a + b
def minus(a, b):
return a - b
def mult(a, b):
return a * b
def div(a, b):
if b!=0:
if a%b==0:
return a//b
return np.nan
def the_factorial(a, b):
try:
return factorial(int(a))
except ValueError:
return np.nan
def power_to(a:int, b:int)->int:
try:
return int(a**b)
except ValueError:
return np.nan
def root(a:int, b:int)->int:
try:
return int(b**(1 / a))
except (TypeError, ZeroDivisionError, ValueError):
return np.nan
def combinations(nums, funcs):
"""Both arguments are lists"""
t = []
for i in range(len(nums)-1):
t.append(nums)
t.append(funcs)
t.append(nums)
return list(itertools.product(*t))
def solve(instance):
instance = list(instance)
for i in range(len(instance)//2):
b = instance.pop()
func = instance.pop()
a = instance.pop()
instance.append(func(a, b))
return instance[0]
def main():
try:
# a = [1, 3 ,4]
a = [int(-5), int(-1), int(8)]
func = [plus, minus, mult, div, the_factorial, power_to, root]
combs = combinations(a, func)
solutions = [solve(i) for i in combs]
for i, j in zip(combs, solutions):
print(i, j)
except ValueError:
#If there's too many combinations
return np.nan
I'm having trouble transforming the data from the initial_data to inverter to main which currently only works on one example and returns an ugly readout with a function object in the middle.
Thanks in advance.
A:
I think this will help you a lot (tweaks are on you), but it doesn't write to a CSV; I leave that for you to try. Just take into account that there are thousands of possible combinations and in some cases the results are really huge (see comments in main()).
I've added missing types in function declarations for clarity and successful linting (compatible with older Python versions).
Also, I think that the function combinations() is not needed so I removed it.
In my proposed code, the function solve() is the one doing the magic :)
Said all that, here's the full code:
import numpy as np
from itertools import product
from math import factorial
from typing import Union, Callable, Tuple, List, Set
def plus(a: int, b: int) -> int:
return a + b
def minus(a: int, b: int) -> int:
return a - b
def mult(a: int, b: int) -> int:
return a * b
def div(a: int, b: int) -> Union[int, float]:
try:
retval = int(a / b)
except (ValueError, ZeroDivisionError):
retval = np.nan
return retval
def the_factorial(a: int) -> Union[int, float]:
try:
return factorial(int(a))
except ValueError:
return np.nan
except OverflowError:
return np.inf
def power_to(a: int, b: int) -> Union[int, float]:
try:
return int(a ** b)
except (ValueError, ZeroDivisionError):
return np.nan
def root(a: int, b: int) -> Union[int, float]:
try:
return int(b ** (1 / a))
except (TypeError, ZeroDivisionError, ValueError):
return np.nan
def solve(values: Tuple[int, int, int], ops: List[Callable]) -> List[Tuple[str, int]]:
# Iterate over available functions.
combs = list()
for f in FACTORS:
# Get values to operate with.
x, y, z = values
sx, sy, sz = x, y, z
a, b, c = f
# Calculate the factorial for the values (if applicable).
if a == 1:
sx = f"{x}!"
x = the_factorial(x)
if b == 1:
sy = f"{y}!"
y = the_factorial(y)
if c == 1:
sz = f"{z}!"
z = the_factorial(z)
for ext_op in ops: # External operation.
for int_op in ops: # Internal operation.
# Create equations by grouping the first 2 elements, e.g.: ((x + y) * z).
eq_str = f"{ext_op.__name__}({int_op.__name__}({sx}, {sy}), {sz})"
eq_val = ext_op(int_op(x, y), z)
combs.append((eq_str, eq_val))
# Create equations by grouping the last 2 elements, e.g.: (x + (y * z)).
eq_str = f"{ext_op.__name__}({sx}, {int_op.__name__}({sy}, {sz}))"
eq_val = ext_op(x, int_op(y, z))
combs.append((eq_str, eq_val))
return combs
def inverter(data: List[int]) -> List[Tuple[int, int, int]]:
inverted_data = [-x for x in data]
res = list(product(*zip(data, inverted_data)))
return res
# Data to process.
INITIAL_DATA: List[str] = [
"518-2",
'533-3',
# '534-0',
# '000-3',
# '000-4'
]
# Available functions.
FUNCTIONS: List[Callable] = [ # the_factorial() removed, see solve().
plus,
minus,
mult,
div,
power_to,
root
]
# Get possible combinations to apply the factor operation.
FACTORS: Set[Tuple] = set(product([1, 0, 0], repeat=3))
def main():
cases = 0 # Count all possible cases (for each input value).
data = list() # List with all final data to be dumped in CSV.
print("number, solution, number_of_solutions")
# Iterate over all initial data.
for eq in INITIAL_DATA:
# Get values before and after the hyphen.
nums, res = eq.split('-')
res = int(res)
# Get combinations with inverted values.
combs = inverter([int(n) for n in list(nums)])
# Iterate over combinations and generate a list with their many possible solutions.
sol_cnt = 0 # Number of solutions (for each input value).
solutions = list() # List with all final data to be dumped in CSV.
for i in [solve(i, FUNCTIONS) for i in combs]:
for j in i:
str_repr, value = j
# Some values exceed the 4300 digits, hence the 'try-catch'.
# The function 'sys.set_int_max_str_digits()' may be used instead to increase the str() capabilites.
try:
str(value)
except ValueError:
value = np.inf
if value == res:
sol_cnt += 1
solutions.append((eq, str_repr, value))
cases += 1
# Iterate over all data gathered, and add number of solutions.
for i in range(len(solutions)):
eq, str_repr, value = solutions[i]
solutions[i] += (sol_cnt,)
print(f"{eq}, {str_repr} = {value}, {sol_cnt}")
data.extend(solutions)
# Print all the solutions for this input.
print(f"\nThese are the {sol_cnt} solutions for input {eq}:")
solutions = [s for s in solutions if (type(s[2]) is int and s[2] == res)]
for i in range(len(solutions)):
print(f" {i:4}. {solutions[i][1]}")
print()
print(f"\nTotal cases: {cases}")
And for the output, note that solutions are printed/formatted using the name of your functions, not mathematical operators. This is just an excerpt of the output generated for the first value in initial_data using factorials in the 1st and 3rd digits:
number, solution, number_of_solutions
518-2, plus(plus(5!, 1), 8!) = 40441, 12
518-2, plus(5!, plus(1, 8!)) = 40441, 12
518-2, plus(minus(5!, 1), 8!) = 40439, 12
518-2, plus(5!, minus(1, 8!)) = -40199, 12
518-2, plus(mult(5!, 1), 8!) = 40440, 12
518-2, plus(5!, mult(1, 8!)) = 40440, 12
518-2, plus(div(5!, 1), 8!) = 40440, 12
518-2, plus(5!, div(1, 8!)) = 120, 12
518-2, plus(power_to(5!, 1), 8!) = 40440, 12
518-2, plus(5!, power_to(1, 8!)) = 121, 12
518-2, plus(root(5!, 1), 8!) = 40321, 12
518-2, plus(5!, root(1, 8!)) = 40440, 12
...
These are the 12 solutions for input 518-2:
0. plus(minus(-5, 1!), 8)
1. minus(-5, minus(1!, 8))
2. plus(minus(-5, 1), 8)
3. minus(-5, minus(1, 8))
4. minus(-5, plus(1!, -8))
5. minus(minus(-5, 1!), -8)
6. minus(-5, plus(1, -8))
7. minus(minus(-5, 1), -8)
8. plus(plus(-5, -1), 8)
9. plus(-5, plus(-1, 8))
10. plus(-5, minus(-1, -8))
11. minus(plus(-5, -1), -8)
Total cases: 4608
Note that 4608 cases were processed just for the first value in initial_data, so I recommend you to try with this one first and then add the rest, as for some cases it could take a lot of processing time.
Also, I noticed that you are truncating the values in div() and root() so bear it in mind. You will see lots of nan and inf in the full output because there are huge values and conditions like div/0, so it's expected.
| Performing arithmetic calculations on all possible digit combinations in a list | I create data in a format like this:
initial_data = [
"518-2", '533-3', '534-0',
'000-3', '000-4']
I need to perform several operations (add, sub, div, mult, factorial, power_to, root) on the part before the hyphen to see if there's an equation which equals the part after the hyphen.
Like so:
#5182
-5 - 1 + 8 = 2 or 5*(-1) - 1 + 8 = 2
#000-3
number, solution, number_of_solutions
000-3,(0! + 0!) + 0! = 3,2
or
000-4,,0
or
533-3,5 - (3! / 3) = 3,5
Every digit in the part before the hyphen can have an opposite sign, so I found this:
def inverter(data):
inverted_data = [-x for x in data]
res = list(product(*zip(data, inverted_data)))
return res
I'm supposed to create a CSV file like in the example above but I haven't gotten to that part yet and that seems like the easiest part. What I have are several disparate parts that I can't connect in a sensible way:
import numpy as np
from itertools import product
from math import factorial
def plus(a, b):
return a + b
def minus(a, b):
return a - b
def mult(a, b):
return a * b
def div(a, b):
if b!=0:
if a%b==0:
return a//b
return np.nan
def the_factorial(a, b):
try:
return factorial(int(a))
except ValueError:
return np.nan
def power_to(a:int, b:int)->int:
try:
return int(a**b)
except ValueError:
return np.nan
def root(a:int, b:int)->int:
try:
return int(b**(1 / a))
except (TypeError, ZeroDivisionError, ValueError):
return np.nan
def combinations(nums, funcs):
"""Both arguments are lists"""
t = []
for i in range(len(nums)-1):
t.append(nums)
t.append(funcs)
t.append(nums)
return list(itertools.product(*t))
def solve(instance):
instance = list(instance)
for i in range(len(instance)//2):
b = instance.pop()
func = instance.pop()
a = instance.pop()
instance.append(func(a, b))
return instance[0]
def main():
try:
# a = [1, 3 ,4]
a = [int(-5), int(-1), int(8)]
func = [plus, minus, mult, div, the_factorial, power_to, root]
combs = combinations(a, func)
solutions = [solve(i) for i in combs]
for i, j in zip(combs, solutions):
print(i, j)
except ValueError:
#If there's too many combinations
return np.nan
I'm having trouble transforming the data from the initial_data to inverter to main which currently only works on one example and returns an ugly readout with a function object in the middle.
Thanks in advance.
| [
"I think this will help you a lot (tweaks are on you) but it doesn't write in a CSV, I leave that for you to try, just take into account that there are thousands of possible combinations and in some cases, the results are really huge (see comments in main()).\nI've added missing types in function declarations for clarity and successful linting (compatible with older Python versions).\nAlso, I think that the function combinations() is not needed so I removed it.\nIn my proposed code, the function solve() is the one doing the magic :)\nSaid all that, here's the full code:\nimport numpy as np\nfrom itertools import product\nfrom math import factorial\nfrom typing import Union, Callable, Tuple, List, Set\n\n\ndef plus(a: int, b: int) -> int:\n return a + b\n\n\ndef minus(a: int, b: int) -> int:\n return a - b\n\n\ndef mult(a: int, b: int) -> int:\n return a * b\n\n\ndef div(a: int, b: int) -> Union[int, float]:\n try:\n retval = int(a / b)\n except (ValueError, ZeroDivisionError):\n retval = np.nan\n return retval\n\n\ndef the_factorial(a: int) -> Union[int, float]:\n try:\n return factorial(int(a))\n except ValueError:\n return np.nan\n except OverflowError:\n return np.inf\n\n\ndef power_to(a: int, b: int) -> Union[int, float]:\n try:\n return int(a ** b)\n except (ValueError, ZeroDivisionError):\n return np.nan\n\n\ndef root(a: int, b: int) -> Union[int, float]:\n try:\n return int(b ** (1 / a))\n except (TypeError, ZeroDivisionError, ValueError):\n return np.nan\n\n\ndef solve(values: Tuple[int, int, int], ops: List[Callable]) -> list[Tuple[str, int]]:\n # Iterate over available functions.\n combs = list()\n for f in FACTORS:\n # Get values to operate with.\n x, y, z = values\n sx, sy, sz = x, y, z\n a, b, c = f\n # Calculate the factorial for the values (if applicable).\n if a == 1:\n sx = f\"{x}!\"\n x = the_factorial(x)\n if b == 1:\n sy = f\"{y}!\"\n y = the_factorial(y)\n if c == 1:\n sz = f\"{z}!\"\n z = the_factorial(z)\n for ext_op in ops: # External operation.\n for int_op in ops: # Internal operation.\n # Create equations by grouping the first 2 elements, e.g.: ((x + y) * z).\n eq_str = f\"{ext_op.__name__}({int_op.__name__}({sx}, {sy}), {sz})\"\n eq_val = ext_op(int_op(x, y), z)\n combs.append((eq_str, eq_val))\n # Create equations by grouping the last 2 elements, e.g.: (x + (y * z)).\n eq_str = f\"{ext_op.__name__}({sx}, {int_op.__name__}({sy}, {sz}))\"\n eq_val = ext_op(x, int_op(y, z))\n combs.append((eq_str, eq_val))\n return combs\n\n\ndef inverter(data: List[int]) -> List[Tuple[int, int, int]]:\n inverted_data = [-x for x in data]\n res = list(product(*zip(data, inverted_data)))\n return res\n\n\n# Data to process.\nINITIAL_DATA: List[str] = [\n \"518-2\",\n '533-3',\n # '534-0',\n # '000-3',\n # '000-4'\n]\n# Available functions.\nFUNCTIONS: List[Callable] = [ # the_factorial() removed, see solve().\n plus,\n minus,\n mult,\n div,\n power_to,\n root\n]\n# Get posible combinations to apply the factor operation.\nFACTORS: Set[Tuple] = set(product([1, 0, 0], repeat=3))\n\n\ndef main():\n cases = 0 # Count all possible cases (for each input value).\n data = list() # List with all final data to be dumped in CSV.\n print(\"number, solution, number_of_solutions\")\n # Iterate over all initial data.\n for eq in INITIAL_DATA:\n # Get values before and after the hyphen.\n nums, res = eq.split('-')\n res = int(res)\n # Get combinations with inverted values.\n combs = inverter([int(n) for n in list(nums)])\n # Iterate over combinations and generate a list with their many possible 
solutions.\n sol_cnt = 0 # Number of solutions (for each input value).\n solutions = list() # List with all final data to be dumped in CSV.\n for i in [solve(i, FUNCTIONS) for i in combs]:\n for j in i:\n str_repr, value = j\n # Some values exceed the 4300 digits, hence the 'try-catch'.\n # The function 'sys.set_int_max_str_digits()' may be used instead to increase the str() capabilites.\n try:\n str(value)\n except ValueError:\n value = np.inf\n if value == res:\n sol_cnt += 1\n solutions.append((eq, str_repr, value))\n cases += 1\n # Iterate over all data gathered, and add number of solutions.\n for i in range(len(solutions)):\n eq, str_repr, value = solutions[i]\n solutions[i] += (sol_cnt,)\n print(f\"{eq}, {str_repr} = {value}, {sol_cnt}\")\n data.extend(solutions)\n # Print all the solutions for this input.\n print(f\"\\nThese are the {sol_cnt} solutions for input {eq}:\")\n solutions = [s for s in solutions if (type(s[2]) is int and s[2] == res)]\n for i in range(len(solutions)):\n print(f\" {i:4}. {solutions[i][1]}\")\n print()\n print(f\"\\nTotal cases: {cases}\")\n\nAnd for the output, note that solutions are printed/formatted using the name of your functions, not mathematical operators. This is just an excerpt of the output generated for the first value in initial_data using factorials in the 1st and 3rd digits:\nnumber, solution, number_of_solutions\n518-2, plus(plus(5!, 1), 8!) = 40441, 12\n518-2, plus(5!, plus(1, 8!)) = 40441, 12 \n518-2, plus(minus(5!, 1), 8!) = 40439, 12 \n518-2, plus(5!, minus(1, 8!)) = -40199, 12 \n518-2, plus(mult(5!, 1), 8!) = 40440, 12 \n518-2, plus(5!, mult(1, 8!)) = 40440, 12 \n518-2, plus(div(5!, 1), 8!) = 40440, 12 \n518-2, plus(5!, div(1, 8!)) = 120, 12\n518-2, plus(power_to(5!, 1), 8!) = 40440, 12 \n518-2, plus(5!, power_to(1, 8!)) = 121, 12 \n518-2, plus(root(5!, 1), 8!) = 40321, 12 \n518-2, plus(5!, root(1, 8!)) = 40440, 12\n\n...\n\nThese are the 12 solutions for input 518-2:\n 0. plus(minus(-5, 1!), 8)\n 1. minus(-5, minus(1!, 8))\n 2. plus(minus(-5, 1), 8)\n 3. minus(-5, minus(1, 8))\n 4. minus(-5, plus(1!, -8))\n 5. minus(minus(-5, 1!), -8)\n 6. minus(-5, plus(1, -8))\n 7. minus(minus(-5, 1), -8)\n 8. plus(plus(-5, -1), 8)\n 9. plus(-5, plus(-1, 8))\n 10. plus(-5, minus(-1, -8))\n 11. minus(plus(-5, -1), -8)\n\nTotal cases: 4608\n\nNote that 4608 cases were processed just for the first value in initial_data, so I recommend you to try with this one first and then add the rest, as for some cases it could take a lot of processing time.\nAlso, I noticed that you are truncating the values in div() and root() so bear it in mind. You will see lots of nan and inf in the full output because there are huge values and conditions like div/0, so it's expected.\n"
] | [
0
] | [] | [] | [
"functools",
"numpy",
"python"
] | stackoverflow_0074648564_functools_numpy_python.txt |
Q:
How to change the palette-legend in seaborn pairplot
I've just learned that I can change axis-label font-size using sns.set_context. Is there an analogous way to change the content and size of the text in the 'palette-legend' on the right?
I'd like to enlarge the text and relabel the '0' and '1', which were used for matrix manipulation, back to descriptive text.
A:
You can use set_title() and set_text() to set the names of the legend title & labels. Similarly, use plt.setp() to change the font to the size you need it to be... an example is shown below.
import seaborn as sns
import matplotlib.pyplot as plt

penguins = sns.load_dataset("penguins")
g = sns.pairplot(penguins, hue="species")
g._legend.set_title("New Title") ## Change text of Title
new_labels = ['Label 1', 'Label 2', 'Label 3']
for t, l in zip(g._legend.texts, new_labels):
t.set_text(l) ## Change text of labels
plt.setp(g._legend.get_title(), fontsize=30) ## Set the Title font to 30
plt.setp(g._legend.get_texts(), fontsize=20) ## Set the label font to 20
plt.show()
| How to change the palette-legend in seaborn pairplot | I've just learned that I can change axis-label font-size using sns.set_context. Is there an analogous way to change the content and size of the text in the 'palette-legend' on the right?
I'd like to enlarge the text and relabel the '0' and '1', which were used for matrix manipulation, back to descriptive text.
| [
"You can use set_title() and set_text() to set the names of the legend title & labels. Similarly, use plt.setp() to change the font to the size you need it to be... an example is shown below.\npenguins = sns.load_dataset(\"penguins\")\ng=sns.pairplot(penguins, hue=\"species\")\n\ng._legend.set_title(\"New Title\") ## Change text of Title\nnew_labels = ['Label 1', 'Label 2', 'Label 3']\nfor t, l in zip(g._legend.texts, new_labels):\n t.set_text(l) ## Change text of labels\n \nplt.setp(g._legend.get_title(), fontsize=30) ## Set the Title font to 30\nplt.setp(g._legend.get_texts(), fontsize=20) ## Set the label font to 20\n\nplt.show()\n\n\n"
] | [
0
] | [] | [] | [
"matplotlib",
"python",
"seaborn"
] | stackoverflow_0074656704_matplotlib_python_seaborn.txt |
Q:
Code giving NameError: name 'x' is not defined
I am new to Python, and I am trying to make a numerical analysis model of differential equations.
import sympy as sympy
def picard_solver(y_0, x_0, rhs_expression, iteration_count:int = 5):
x, phi = sympy.symbols("x phi")
phi = x_0
for i in range(iteration_count + 1):
phi = y_0 + sympy.integrate(rhs_expression(x, phi), (x, x_0, x))
return phi
import numpy
import plotly.graph_objects as go
y_set = [picard_solver(1, 0, lambda x, y: x * y, i) for i in range(1, 6)]
x_grid = numpy.linspace(-2, 2, 1000)
y_picard = list()
for y in y_set:
y_picard.append(numpy.array([float(y.evalf(subs={x: x_i})) for x_i in x_grid]))
y_exact = numpy.exp((x_grid) * (x_grid) / 2)
fig = go.Figure()
for i, y_order in enumerate(y_picard):
fig.add_trace(go.Scatter(x=x_grid, y=y_order, name=f"Picard Order {i + 1}"))
# fig.add_trace(go.Scatter(x=x_grid, y=y_picard, name="Picard Solution"))
fig.add_trace(go.Scatter(x=x_grid, y=y_exact, name="Exact Solution"))
fig.show()
fig.write_html("picard_vs_exact.html")
But when I try to run it, I get NameError: name 'x' is not defined error, can someone help me?
I want a graph to be shown.
A:
The NameError happens because x is only defined inside picard_solver, so the name does not exist where you evaluate the expressions. I think you need to pass a string (or a sympy Symbol) as the key, i.e. y.evalf(subs={'x': x_i}), in that part of your code.
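A minimal sketch of that fix, re-creating the symbol by name in the plotting scope (the string-key variant above is the same idea); it assumes y_set and x_grid from the question are already defined:
import numpy
import sympy

x = sympy.symbols("x")  # the same symbol name picard_solver used internally
y_picard = []
for y in y_set:
    y_picard.append(numpy.array([float(y.evalf(subs={x: x_i})) for x_i in x_grid]))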
| Code giving NameError: name 'x' is not defined | I am new to Python, and I am trying to make a numerical analysis model of differential equations.
import sympy as sympy
def picard_solver(y_0, x_0, rhs_expression, iteration_count:int = 5):
x, phi = sympy.symbols("x phi")
phi = x_0
for i in range(iteration_count + 1):
phi = y_0 + sympy.integrate(rhs_expression(x, phi), (x, x_0, x))
return phi
import numpy
import plotly.graph_objects as go
y_set = [picard_solver(1, 0, lambda x, y: x * y, i) for i in range(1, 6)]
x_grid = numpy.linspace(-2, 2, 1000)
y_picard = list()
for y in y_set:
y_picard.append(numpy.array([float(y.evalf(subs={x: x_i})) for x_i in x_grid]))
y_exact = numpy.exp((x_grid) * (x_grid) / 2)
fig = go.Figure()
for i, y_order in enumerate(y_picard):
fig.add_trace(go.Scatter(x=x_grid, y=y_order, name=f"Picard Order {i + 1}"))
# fig.add_trace(go.Scatter(x=x_grid, y=y_picard, name="Picard Solution"))
fig.add_trace(go.Scatter(x=x_grid, y=y_exact, name="Exact Solution"))
fig.show()
fig.write_html("picard_vs_exact.html")
But when I try to run it, I get NameError: name 'x' is not defined error, can someone help me?
I want a graph to be shown.
| [
"I think that you need to pass a string in the y.evalf(subs={'x': x_i}) part of your code.\n"
] | [
1
] | [] | [] | [
"nameerror",
"python"
] | stackoverflow_0074656877_nameerror_python.txt |
Q:
Python, reduce list of string doesn't work with newline?
I am trying to combine a list of strings into a single string using the reduce function, but it doesn't give the result I want. I prefer to use reduce anyway; how do I fix this?
>> reduce(lambda x, y: x + y + "\n", ["dog", "cat"])
# this doesn't work
# dogcat
>> "\n".join(["dog", "cat"])
# this works
# dog
# cat
A:
The purpose of join is to put the separator between the elements. You get the same result with reduce by putting "\n" between x and y instead of after them:
reduce(lambda x, y: x + "\n" + y, ["dog", "cat"])
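For reference, a small self-contained check (on Python 3, reduce has to be imported from functools):
from functools import reduce

result = reduce(lambda x, y: x + "\n" + y, ["dog", "cat"])
print(result)
# dog
# cat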
A:
###################### METHOD 1 ######################
strings = ["This", "is", "a", "list", "of", "strings"]
# join the strings using lambda
joined = lambda strings: "\n".join(strings)
print(joined(strings))
###################### METHOD 2 ######################
mylist = ["a", "b", "c", "d", "e"]
# use list comprehension to join the list of strings
mystring = "\n".join([str(x) for x in mylist])
print(mystring)
strings = ["This", "is", "a", "list", "of", "strings"]
###################### METHOD 3 ######################
import functools
list_of_strings = ["a", "b", "c"]
print(functools.reduce(lambda x, y: x + "\n" + y, list_of_strings))
| Python, reduce list of string doesn't work with newline? | I try to combine a list of string to string using reduce function but it doesn't work. I prefer to use reduce function anyway how do I fix this?
>> reduce(lambda x, y: x + y + "\n", ["dog", "cat"])
# this doesn't work
# dogcat
>> "\n".join(["dog", "cat"])
# this works
# dog
# cat
| [
"The purpose of join, is to put the element between each\nreduce(lambda x, y: x + \"\\n\" + y, [\"dog\", \"cat\"])\n\n",
"###################### METHOD 1 ######################\n\nstrings = [\"This\", \"is\", \"a\", \"list\", \"of\", \"strings\"]\n\n# join the strings using lambda\njoined = lambda strings: \"\\n\".join(strings)\nprint(joined(strings))\n\n###################### METHOD 2 ######################\n\nmylist = [\"a\", \"b\", \"c\", \"d\", \"e\"]\n\n# use list comprehension to join the list of strings\nmystring = \"\\n\".join([str(x) for x in mylist])\nprint(mystring)\n\nstrings = [\"This\", \"is\", \"a\", \"list\", \"of\", \"strings\"]\n\n###################### METHOD 3 ######################\nimport functools\nlist_of_strings = [\"a\", \"b\", \"c\"]\nprint(functools.reduce(lambda x, y: x + \"\\n\" + y, list_of_strings))\n\n"
] | [
2,
0
] | [] | [] | [
"python"
] | stackoverflow_0074657363_python.txt |
Q:
pandas to_html: add attributes to table tag
I'm using the pandas to_html() method to build a table for my website. I want to add some attributes to the <table> tag; however I'm not sure how to do this.
my_table = Markup(df.to_html(classes="table"))
Which produces:
<table border="1" class="dataframe table">
I want to produce the following:
<table border="1" class="dataframe table" attribute="value" attribute2="value2">
A:
This can be achieved by manipulating the rendered HTML with a simple regular expression:
import re
import pandas as pd

df = pd.DataFrame(1, index=[1, 2], columns=list('AB'))
html = df.to_html(classes="table")
html = re.sub(
r'<table([^>]*)>',
r'<table\1 attribute="value" attribute2="value2">',
html
)
print(html.split('\n')[0])
<table border="1" class="dataframe table" attribute="value" attribute2="value2">
A:
For a pandas-only solution, you can use a Styler object:
import pandas as pd

df = pd.DataFrame(1, index=[1, 2], columns=list('AB'))
styled = df.style.set_table_attributes('foo="foo" bar="bar"')
print(styled.to_html())
Output:
<style type="text/css">
</style>
<table id="T_7d7d8" foo="foo" bar="bar">
<thead>
<tr>
<th class="blank level0" > </th>
<th id="T_7d7d8_level0_col0" class="col_heading level0 col0" >A</th>
<th id="T_7d7d8_level0_col1" class="col_heading level0 col1" >B</th>
</tr>
</thead>
<tbody>
<tr>
<th id="T_7d7d8_level0_row0" class="row_heading level0 row0" >1</th>
<td id="T_7d7d8_row0_col0" class="data row0 col0" >1</td>
<td id="T_7d7d8_row0_col1" class="data row0 col1" >1</td>
</tr>
<tr>
<th id="T_7d7d8_level0_row1" class="row_heading level0 row1" >2</th>
<td id="T_7d7d8_row1_col0" class="data row1 col0" >1</td>
<td id="T_7d7d8_row1_col1" class="data row1 col1" >1</td>
</tr>
</tbody>
</table>
| pandas to_html: add attributes to table tag | I'm using the pandas to_html() method to build a table for my website. I want to add some attributes to the <table> tag; however I'm not sure how to do this.
my_table = Markup(df.to_html(classes="table"))
Which produces:
<table border="1" class="dataframe table">
I want to produce the following:
<table border="1" class="dataframe table" attribute="value" attribute2="value2">
| [
"This can be achieved simply by manipulating the rendered html with a simple regular expression:\nimport re\n\ndf = pd.DataFrame(1, index=[1, 2], columns=list('AB'))\n\nhtml = df.to_html(classes=\"table\")\nhtml = re.sub(\n r'<table([^>]*)>',\n r'<table\\1 attribute=\"value\" attribute2=\"value2\">',\n html\n)\n\nprint(html.split('\\n')[0])\n\n<table border=\"1\" class=\"dataframe table\" attribute=\"value\" attribute2=\"value2\">\n\n",
"For a pandas-only solution, you can use a Styler object:\ndf = pd.DataFrame(1, index=[1, 2], columns=list('AB'))\nstyled = df.style.set_table_attributes('foo=\"foo\" bar=\"bar\"')\nprint(styled.to_html())\n\nOutput:\n<style type=\"text/css\">\n</style>\n<table id=\"T_7d7d8\" foo=\"foo\" bar=\"bar\">\n <thead>\n <tr>\n <th class=\"blank level0\" > </th>\n <th id=\"T_7d7d8_level0_col0\" class=\"col_heading level0 col0\" >A</th>\n <th id=\"T_7d7d8_level0_col1\" class=\"col_heading level0 col1\" >B</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th id=\"T_7d7d8_level0_row0\" class=\"row_heading level0 row0\" >1</th>\n <td id=\"T_7d7d8_row0_col0\" class=\"data row0 col0\" >1</td>\n <td id=\"T_7d7d8_row0_col1\" class=\"data row0 col1\" >1</td>\n </tr>\n <tr>\n <th id=\"T_7d7d8_level0_row1\" class=\"row_heading level0 row1\" >2</th>\n <td id=\"T_7d7d8_row1_col0\" class=\"data row1 col0\" >1</td>\n <td id=\"T_7d7d8_row1_col1\" class=\"data row1 col1\" >1</td>\n </tr>\n </tbody>\n</table>\n\n"
] | [
6,
0
] | [] | [] | [
"html",
"pandas",
"python"
] | stackoverflow_0043312995_html_pandas_python.txt |
Q:
Why does changing the kernel_initializer lead to NaN loss?
I am running an advantage actor-critic (A2C) reinforcement learning model, but when I change the kernel_initializer, it gives me an error where my state has value. Moreover, it works only when kernel_initializer=tf.zeros_initializer().
I have changed the model to this code, and I'm facing a different problem: repeating the same action. However, when I changed the kernel_initializer to tf.zeros_initializer(), it started to choose different actions.
state =[-103.91446672 -109. 7.93509779 0. 0.
1. ]
The model
class Actor:
"""The actor class"""
def __init__(self, sess, num_actions, observation_shape, config):
self._sess = sess
self._state = tf.placeholder(dtype=tf.float32, shape=observation_shape, name='state')
self._action = tf.placeholder(dtype=tf.int32, name='action')
self._target = tf.placeholder(dtype=tf.float32, name='target')
self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
self._output_layer = tf.layers.dense(inputs=self._hidden_layer, units=num_actions, kernel_initializer=tf.zeros_initializer())
self._action_probs = tf.squeeze(tf.nn.softmax(self._output_layer))
self._picked_action_prob = tf.gather(self._action_probs, self._action)
self._loss = -tf.log(self._picked_action_prob) * self._target
self._optimizer = tf.train.AdamOptimizer(learning_rate=config.learning_rate)
self._train_op = self._optimizer.minimize(self._loss)
def predict(self, s):
return self._sess.run(self._action_probs, {self._state: s})
def update(self, s, a, target):
self._sess.run(self._train_op, {self._state: s, self._action: a, self._target: target})
class Critic:
"""The critic class"""
def __init__(self, sess, observation_shape, config):
self._sess = sess
self._config = config
self._name = config.critic_name
self._observation_shape = observation_shape
self._build_model()
def _build_model(self):
with tf.variable_scope(self._name):
self._state = tf.placeholder(dtype=tf.float32, shape=self._observation_shape, name='state')
self._target = tf.placeholder(dtype=tf.float32, name='target')
self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
self._out = tf.layers.dense(inputs=self._hidden_layer, units=1, kernel_initializer=tf.zeros_initializer())
self._value_estimate = tf.squeeze(self._out)
self._loss = tf.squared_difference(self._out, self._target)
self._optimizer = tf.train.AdamOptimizer(learning_rate=self._config.learning_rate)
self._update_step = self._optimizer.minimize(self._loss)
def predict(self, s):
return self._sess.run(self._value_estimate, feed_dict={self._state: s})
def update(self, s, target):
self._sess.run(self._update_step, feed_dict={self._state: s, self._target: target})
The problem is that I need the learning process to be improved. So, I thought if I changed the kernel_initializer, it might improve, but it gave me this error message.
action = np.random.choice(np.arange(lenaction), p=action_prob)
File "mtrand.pyx", line 935, in numpy.random.mtrand.RandomState.choice
ValueError: probabilities contain NaN
Any Idea what causing this?
A:
Using a kernel_initializer of tf.zeros_initializer() for your dense layers in the actor and critic networks can lead to the issue you are experiencing, where the loss becomes NaN and the model repeats the same action. This is because using a kernel_initializer of tf.zeros_initializer() initializes all of the weights in the dense layers to zeros, which can prevent the network from learning.
In general, it is better to use a different kernel_initializer for your dense layers, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). These initializers initialize the weights with random values, which allows the network to learn and produce more diverse outputs.
To fix the issue with your model, you can try changing the kernel_initializer for your dense layers to a different value, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). This should allow your network to learn and avoid the issue where the loss becomes NaN and the model repeats the same action.
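As a sketch of that change, here is a hedged example mirroring the question's hidden and output layers under the same TF1-style API the question uses (the 6-dimensional state matches the question; units=4 for the output is a made-up action count, and glorot_uniform is just one reasonable choice):
import tensorflow as tf

state = tf.placeholder(dtype=tf.float32, shape=(6,), name='state')  # 6-dim state as in the question
hidden = tf.layers.dense(
    inputs=tf.expand_dims(state, 0),
    units=32,
    activation=tf.nn.relu,
    kernel_initializer=tf.glorot_uniform_initializer(),  # instead of tf.zeros_initializer()
)
logits = tf.layers.dense(
    inputs=hidden,
    units=4,  # hypothetical number of actions
    kernel_initializer=tf.glorot_uniform_initializer(),
)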
You can also try using a different optimizer, such as RMSProp or Adagrad, which may be better suited for this problem. Additionally, you can try adjusting the learning rate and other hyperparameters of the model to see if that improves its performance.
If the tf.zeros_initializer initializer is the only initializer that works for your network, but the performance is not good, there are several steps you can take to improve the performance of your network.
First, you can try fine-tuning the starting weights for your network. The tf.zeros_initializer initializer does not have any parameters to adjust, so you will need to use a different initializer and adjust its parameters to control the starting weights for your network.
For example, you can try using the tf.random_normal_initializer initializer, which will provide random starting weights for the network. You can adjust the mean and stddev parameters to control the distribution of the starting weights, and experiment with different values to see which provides the best performance for your network.
Alternatively, you can try adjusting other hyperparameters, such as the learning rate or the optimizer, to improve the performance of your network. For example, you can try using a different optimizer, such as the Adam optimizer or the RMSprop optimizer, to see if it provides better performance for your network.
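As a sketch, swapping the optimizer is a one-line change (RMSProp shown here as an arbitrary example with a made-up learning rate; in the question's code it would replace the tf.train.AdamOptimizer(...) lines):
import tensorflow as tf

# Hypothetical swap: RMSProp instead of Adam, with a made-up learning rate.
optimizer = tf.train.RMSPropOptimizer(learning_rate=1e-3)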
You can also try modifying the state, action, and reward definitions for your network to see if a different representation improves the performance of your network. For example, you can try using a different state representation, such as a different set of features or a different scaling or normalization method, to see if it improves the performance of your network.
Finally, you can try using more data or more complex network architectures to improve the performance of your network. For example, you can try using a larger dataset, or a deeper or wider network, to see if it provides better performance for your network. For more information, see the TensorFlow documentation on training and evaluating neural networks. https://www.tensorflow.org/guide/keras/train_and_evaluate
| Why does changing the kernel_initializer lead to NaN loss? | I am running an advantage actor-critic (A2C) reinforcement learning model, but when I change the kernel_initializer, it gives me an error where my state has value. Moreover, it works only when kernel_initializer=tf.zeros_initializer().
I have changed the model to this code, and I'm facing a different problem: repeating the same action. However, when I changed the kernel_initializer to tf.zeros_initializer(), it started to choose different actions.
state =[-103.91446672 -109. 7.93509779 0. 0.
1. ]
The model
class Actor:
"""The actor class"""
def __init__(self, sess, num_actions, observation_shape, config):
self._sess = sess
self._state = tf.placeholder(dtype=tf.float32, shape=observation_shape, name='state')
self._action = tf.placeholder(dtype=tf.int32, name='action')
self._target = tf.placeholder(dtype=tf.float32, name='target')
self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
self._output_layer = tf.layers.dense(inputs=self._hidden_layer, units=num_actions, kernel_initializer=tf.zeros_initializer())
self._action_probs = tf.squeeze(tf.nn.softmax(self._output_layer))
self._picked_action_prob = tf.gather(self._action_probs, self._action)
self._loss = -tf.log(self._picked_action_prob) * self._target
self._optimizer = tf.train.AdamOptimizer(learning_rate=config.learning_rate)
self._train_op = self._optimizer.minimize(self._loss)
def predict(self, s):
return self._sess.run(self._action_probs, {self._state: s})
def update(self, s, a, target):
self._sess.run(self._train_op, {self._state: s, self._action: a, self._target: target})
class Critic:
"""The critic class"""
def __init__(self, sess, observation_shape, config):
self._sess = sess
self._config = config
self._name = config.critic_name
self._observation_shape = observation_shape
self._build_model()
def _build_model(self):
with tf.variable_scope(self._name):
self._state = tf.placeholder(dtype=tf.float32, shape=self._observation_shape, name='state')
self._target = tf.placeholder(dtype=tf.float32, name='target')
self._hidden_layer = tf.layers.dense(inputs=tf.expand_dims(self._state, 0), units=32, activation=tf.nn.relu, kernel_initializer=tf.zeros_initializer())
self._out = tf.layers.dense(inputs=self._hidden_layer, units=1, kernel_initializer=tf.zeros_initializer())
self._value_estimate = tf.squeeze(self._out)
self._loss = tf.squared_difference(self._out, self._target)
self._optimizer = tf.train.AdamOptimizer(learning_rate=self._config.learning_rate)
self._update_step = self._optimizer.minimize(self._loss)
def predict(self, s):
return self._sess.run(self._value_estimate, feed_dict={self._state: s})
def update(self, s, target):
self._sess.run(self._update_step, feed_dict={self._state: s, self._target: target})
The problem is that I need the learning process to be improved. So, I thought if I changed the kernel_initializer, it might improve, but it gave me this error message.
action = np.random.choice(np.arange(lenaction), p=action_prob)
File "mtrand.pyx", line 935, in numpy.random.mtrand.RandomState.choice
ValueError: probabilities contain NaN
Any Idea what causing this?
| [
"Using a kernel_initializer of tf.zeros_initializer() for your dense layers in the actor and critic networks can lead to the issue you are experiencing, where the loss becomes NaN and the model repeats the same action. This is because using a kernel_initializer of tf.zeros_initializer() initializes all of the weights in the dense layers to zeros, which can prevent the network from learning.\nIn general, it is better to use a different kernel_initializer for your dense layers, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). These initializers initialize the weights with random values, which allows the network to learn and produce more diverse outputs.\nTo fix the issue with your model, you can try changing the kernel_initializer for your dense layers to a different value, such as tf.random_normal_initializer() or tf.glorot_uniform_initializer(). This should allow your network to learn and avoid the issue where the loss becomes NaN and the model repeats the same action.\nYou can also try using a different optimizer, such as RMSProp or Adagrad, which may be better suited for this problem. Additionally, you can try adjusting the learning rate and other hyperparameters of the model to see if that improves its performance.\nIf the tf.zeros_initializer initializer is the only initializer that works for your network, but the performance is not good, there are several steps you can take to improve the performance of your network.\nFirst, you can try adjusting the parameters of the tf.zeros_initializer initializer to fine-tune the starting weights for your network. The tf.zeros_initializer initializer does not have any parameters, so you will need to use a different initializer and adjust its parameters to control the starting weights for your network.\nFor example, you can try using the tf.random_normal_initializer initializer, which will provide random starting weights for the network. You can adjust the mean and stddev parameters to control the distribution of the starting weights, and experiment with different values to see which provides the best performance for your network.\nAlternatively, you can try adjusting other hyperparameters, such as the learning rate or the optimizer, to improve the performance of your network. For example, you can try using a different optimizer, such as the Adam optimizer or the RMSprop optimizer, to see if it provides better performance for your network.\nYou can also try modifying the state, action, and reward definitions for your network to see if a different representation improves the performance of your network. For example, you can try using a different state representation, such as a different set of features or a different scaling or normalization method, to see if it improves the performance of your network.\nFinally, you can try using more data or more complex network architectures to improve the performance of your network. For example, you can try using a larger dataset, or a deeper or wider network, to see if it provides better performance for your network. For more information, see the TensorFlow documentation on training and evaluating neural networks. https://www.tensorflow.org/guide/keras/train_and_evaluate\n"
] | [
0
] | [] | [] | [
"actor_critics",
"python",
"random",
"tensorflow"
] | stackoverflow_0074612124_actor_critics_python_random_tensorflow.txt |
Q:
Python unittests used in a project structure with multiple directories
I need to use the unittest Python library to execute tests for the 3 functions in the src/arithmetics.py file. Here is my project structure.
.
├── src
│ └── arithmetics.py
└── test
└── lcm
├── __init__.py
├── test_lcm_exception.py
└── test_lcm.py
src/arithmetics.py
def lcm(p, q):
p, q = abs(p), abs(q)
m = p * q
while True:
p %= q
if not p:
return m // q
q %= p
if not q:
return m // p
def lcm_better(p, q):
p, q = abs(p), abs(q)
m = p * q
h = p % q
while h != 0:
p = q
q = h
h = p % q
h = m / q
return h
def lcm_faulty(p, q):
r, m = 0, 0
r = p * q
while (r > p) and (r > q):
if (r % p == 0) and (r % q == 0):
m = r
r = r - 1
return m
test/lcm/test_lcm.py
import unittest
from src.arithmetics import *
class LcmTest(unittest.TestCase):
def test_lcm(self):
for X in range(1, 100):
self.assertTrue(0 == lcm(0, X))
self.assertTrue(X == lcm(X, X))
self.assertTrue(840 == lcm(60, 168))
def test_lcm_better(self):
for X in range(1, 100):
self.assertTrue(0 == lcm_better(0, X))
self.assertTrue(X == lcm_better(X, X))
self.assertTrue(840 == lcm_better(60, 168))
def test_lcm_faulty(self):
self.assertTrue(0 == lcm_faulty(0, 0))
for X in range(1, 100):
self.assertTrue(0 == lcm_faulty(X, 0))
self.assertTrue(0 == lcm_faulty(0, X))
self.assertTrue(840 == lcm_faulty(60, 168))
if __name__ == '__main__':
unittest.main()
test/lcm/test_lcm_exception.py
import unittest
from src.arithmetics import *
class LcmExceptionTest(unittest.TestCase):
def test_lcm_exception(self):
for X in range(0, 100):
self.assertTrue(0 == lcm(0, 0)) # ZeroDivisionError
self.assertTrue(0 == lcm(X, 0)) # ZeroDivisionError
def test_lcm_better_exception(self):
for X in range(0, 100):
self.assertTrue(0 == lcm_better(0, 0)) # ZeroDivisionError
self.assertTrue(0 == lcm_better(X, 0)) # ZeroDivisionError
def test_lcm_faulty_exception(self):
for X in range(1, 100):
self.assertTrue(X == lcm_faulty(X, X)) # ppcm(1, 1) != 1
if __name__ == '__main__':
unittest.main()
test/lcm/__init__.py is an empty file
To execute my tests, I tried this command :
python3 -m unittest discover
But the output is :
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I don't understand how I can run my tests...
Thanks for helping me!
A:
Some __init__.py files are missing
I think the problem is the missing __init__.py files in your subfolders. Try adding this empty file to all your subfolders as I show you below:
test_lcm
├── __init__.py
├── src
│   ├── __init__.py
│   └── arithmetics.py
└── test
    ├── __init__.py
    └── lcm
        ├── __init__.py
        ├── test_lcm_exception.py
        └── test_lcm.py
As you can see in my tree, I have created a folder test_lcm as the root of the tree. You have to move into it with the cd command.
So in the test_lcm folder execute:
# go to test_lcm folder
cd ~/test_lcm
# execute test
python3 -m unittest discover
The last part of the output is:
----------------------------------------------------------------------
Ran 6 tests in 0.002s
FAILED (failures=1, errors=2)
This shows that 6 tests were executed, with 1 failure and 2 errors (test_lcm_better_exception and test_lcm_exception fail).
| Python unittests used in a project structure with multiple directories | I need to use unittest python library to execute tests about the 3 functions in src/arithmetics.py file. Here is my project structure.
.
├── src
│ └── arithmetics.py
└── test
└── lcm
├── __init__.py
├── test_lcm_exception.py
└── test_lcm.py
src/arithmetics.py
def lcm(p, q):
p, q = abs(p), abs(q)
m = p * q
while True:
p %= q
if not p:
return m // q
q %= p
if not q:
return m // p
def lcm_better(p, q):
p, q = abs(p), abs(q)
m = p * q
h = p % q
while h != 0:
p = q
q = h
h = p % q
h = m / q
return h
def lcm_faulty(p, q):
r, m = 0, 0
r = p * q
while (r > p) and (r > q):
if (r % p == 0) and (r % q == 0):
m = r
r = r - 1
return m
test/lcm/test_lcm.py
import unittest
from src.arithmetics import *
class LcmTest(unittest.TestCase):
def test_lcm(self):
for X in range(1, 100):
self.assertTrue(0 == lcm(0, X))
self.assertTrue(X == lcm(X, X))
self.assertTrue(840 == lcm(60, 168))
def test_lcm_better(self):
for X in range(1, 100):
self.assertTrue(0 == lcm_better(0, X))
self.assertTrue(X == lcm_better(X, X))
self.assertTrue(840 == lcm_better(60, 168))
def test_lcm_faulty(self):
self.assertTrue(0 == lcm_faulty(0, 0))
for X in range(1, 100):
self.assertTrue(0 == lcm_faulty(X, 0))
self.assertTrue(0 == lcm_faulty(0, X))
self.assertTrue(840 == lcm_faulty(60, 168))
if __name__ == '__main__':
unittest.main()
test/lcm/test_lcm_exception.py
import unittest
from src.arithmetics import *
class LcmExceptionTest(unittest.TestCase):
def test_lcm_exception(self):
for X in range(0, 100):
self.assertTrue(0 == lcm(0, 0)) # ZeroDivisionError
self.assertTrue(0 == lcm(X, 0)) # ZeroDivisionError
def test_lcm_better_exception(self):
for X in range(0, 100):
self.assertTrue(0 == lcm_better(0, 0)) # ZeroDivisionError
self.assertTrue(0 == lcm_better(X, 0)) # ZeroDivisionError
def test_lcm_faulty_exception(self):
for X in range(1, 100):
self.assertTrue(X == lcm_faulty(X, X)) # ppcm(1, 1) != 1
if __name__ == '__main__':
unittest.main()
test/lcm/__init__.py is an empty file
To execute my tests, I tried this command :
python3 -m unittest discover
But the output is :
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
I don't understand how can I run my tests...
Thanks for helping me !
| [
"Some files init.py are missing\nI think that the problem is the missing of file __init__.py in your subfolders. Try to add this empty file in all your subfolders as I show you below:\ntest_lcm\n├── __init__.py\n├── src\n| └── __init__py\n│ └── arithmetics.py\n└── test\n └── __init__py\n └── lcm\n ├── __init__.py\n ├── test_lcm_exception.py\n └── test_lcm.py\n\nIf you see my tree folder I have create a folder test_lcm as root of the tree. You have to place inside it by cd command.\nSo in test_lcm folder execute:\n# go to test_lcm folder\ncd ~/test_lcm\n\n# execute test\npython3 -m unittest discover\n\nThe last part of the output is:\n----------------------------------------------------------------------\nRan 6 tests in 0.002s\n\nFAILED (failures=1, errors=2)\n\nThis show that are executed 6 tests with 2 errors (test_lcm_better_exception and test_lcm_exception fail).\n"
] | [
1
] | [] | [] | [
"python",
"python_3.x",
"python_unittest",
"unit_testing"
] | stackoverflow_0074655669_python_python_3.x_python_unittest_unit_testing.txt |
Q:
Issue when reading csv file from url using pandas.read_csv
I am trying to import a csv file from the following url
"https://www.marketwatch.com/games/stackoverflowq/download?view=holdings&pub=4JwsLs_Gm4kj&isDownload=true"
using the pandas read_csv function. However, I get the following error:
StopIteration:
The above exception was the direct cause of the following exception:
...
--> 386 raise EmptyDataError("No columns to parse from file") from err
388 line = self.names[:]
390 this_columns: list[Scalar | None] = []
EmptyDataError: No columns to parse from file
Downloading the csv manually and then reading it with pd.read_csv yields the expected output without issues. As I need to repeat this for multiple csvs, I would like to directly import the csvs without having to manually download them each time.
I have also tried this solution https://stackoverflow.com/questions/47243024/pandas-read-csv-on-dynamic-url-gives-emptydataerror-no-columns-to-parse-from-fi, which also resulted in the 'No columns to parse from file' error.
I could only find a link from the html and the button on the website, without a .csv ending:
<a href="/games/stackoverflowq/download?view=holdings&pub=4JwsLs_Gm4kj&isDownload=true" download="Holdings - Stack Overflowq.csv" rel="nofollow">Download</a>
Edit: Cleaned up the question in case somebody has a similar issue.
A:
The issue was indeed that the data could only be accessed after logging in.
I have managed to resolve it using Selenium and this answer.
from io import StringIO
import pandas as pd
import requests
from selenium import webdriver
# 'driver' is assumed to be a Selenium WebDriver instance that is already logged in;
# start a requests session that reuses that login
s = requests.Session()
selenium_user_agent = driver.execute_script("return navigator.userAgent;")
s.headers.update({"user-agent": selenium_user_agent})
#copy cookies from selenium driver
for cookie in driver.get_cookies():
s.cookies.set(cookie['name'], cookie['value'], domain=cookie['domain'])
#read csv
response = s.get(url)
if response.ok:
data = response.content.decode('utf8')
df = pd.read_csv(StringIO(data))
| Issue when reading csv file from url using pandas.read_csv | I am trying to import a csv file from the following url
"https://www.marketwatch.com/games/stackoverflowq/download?view=holdings&pub=4JwsLs_Gm4kj&isDownload=true"
using the pandas read_csv function. However, I get the following error:
StopIteration:
The above exception was the direct cause of the following exception:
...
--> 386 raise EmptyDataError("No columns to parse from file") from err
388 line = self.names[:]
390 this_columns: list[Scalar | None] = []
EmptyDataError: No columns to parse from file
Downloading the csv manually and then reading it with pd.read_csv yields the expected output without issues. As I need to repeat this for multiple csvs, I would like to directly import the csvs without having to manually download them each time.
I have also tried this solution https://stackoverflow.com/questions/47243024/pandas-read-csv-on-dynamic-url-gives-emptydataerror-no-columns-to-parse-from-fi[](https://www.stackoverflow.com/), which also resulted in the 'No columns to parse from file' error.
I could only find a link from the html and the button on the website, without a .csv ending:
<a href="/games/stackoverflowq/download?view=holdings&pub=4JwsLs_Gm4kj&isDownload=true" download="Holdings - Stack Overflowq.csv" rel="nofollow">Download</a>
Edit: Cleaned up the question in case somebody has a similar issue.
| [
"The issue was indeed that the data could only be accessed after logging in.\nI have managed to resolve it using Selenium and this answer.\nfrom io import StringIO \nimport pandas as pd\nimport requests\nfrom selenium import webdriver\n\n#start requests session with login from selenium driver\ns = requests.Session()\nselenium_user_agent = driver.execute_script(\"return navigator.userAgent;\")\ns.headers.update({\"user-agent\": selenium_user_agent})\n\n#copy cookies from selenium driver\nfor cookie in driver.get_cookies():\n s.cookies.set(cookie['name'], cookie['value'], domain=cookie['domain'])\n\n#read csv\nresponse = s.get(url)\nif response.ok:\n data = response.content.decode('utf8') \n df = pd.read_csv(StringIO(data))\n \n \n\n"
] | [
0
] | [] | [] | [
"csv",
"pandas",
"python"
] | stackoverflow_0074605550_csv_pandas_python.txt |
Q:
discord.py how to make a command have a cooldown?
I want to have this command on a cooldown for 30 seconds.
@client.command()
@commands.cooldown(1,30,commands.BucketType.user)
if message.content.startswith('!sg hunt')
await message.channel.send('You hunted a...')
@work.error
async def work_error(ctx, error):
if isinstance(error, commands.CommandOnCooldown):
await ctx.send(f'This command is on cooldown, you can use it in {round(error.retry_after, 2)} seconds')
tried this buckettype thing
A:
Weidong Zhu Robot!
You probably forgot to make a function definition. Also, a @client.command() callback receives a context, not a message, so don't forget about that either.
There's your quick fix
@client.command()
@commands.cooldown(1,30,commands.BucketType.user)
async def hunt_cmd(ctx):
    message = ctx.message
    if message.content.startswith('!sg hunt'):
        await message.channel.send('You hunted a...')
| discord.py how to make a command have a cooldown? | want to have this command on cooldown for 30 seconds
@client.command()
@commands.cooldown(1,30,commands.BucketType.user)
if message.content.startswith('!sg hunt')
await message.channel.send('You hunted a...')
@work.error
async def work_error(ctx, error):
if isinstance(error, commands.CommandOnCooldown):
await ctx.send(f'This command is on cooldown, you can use it in {round(error.retry_after, 2)} seconds')
tried this buckettype thing
| [
"Weidong Zhu Robot!\nYou probably forgot to make function definition. Also @client.command() returns context, not message, so don't forget about this too.\nThere's your quick fix\[email protected]()\[email protected](1,30,commands.BucketType.user)\nasync def hunt_cmd(ctx):\n message = ctx.message\n if message.content.startswith('!sg hunt')\n await message.channel.send('You hunted a...')```\n\n"
] | [
0
] | [] | [] | [
"bots",
"discord",
"discord.py",
"python"
] | stackoverflow_0074579743_bots_discord_discord.py_python.txt |
Q:
Open and close new tab with Selenium WebDriver in OS X
I'm using the Firefox Webdriver in Python 2.7 on Windows to simulate opening (Ctrl+t) and closing (Ctrl + w) a new tab.
Here's my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Firefox()
browser.get('https://www.google.com')
main_window = browser.current_window_handle
# open new tab
browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')
browser.get('https://www.yahoo.com')
# close tab
browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 'w')
How to achieve the same on a Mac?
Based on this comment one should use browser.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't') to open a new tab but I don't have a Mac to test it and what about the equivalent of Ctrl-w?
Thanks!
A:
There's nothing easier and clearer than just running JavaScript.
Open new tab:
driver.execute_script("window.open('');")
A:
open a new tab:
browser.get('http://www.google.com')
close a tab:
browser.close()
switch to a tab:
browser.switch_to.window(window_name)
A:
You can choose which window you want to close:
window_name = browser.window_handles[0]
Switch window:
browser.switch_to.window(window_name=window_name)
Then close it:
browser.close()
A:
Just to combine the answers above for someone still curious. The below is based on Python 2.7 and a driver in Chrome.
Open new tab by: driver.execute_script("window.open('"+URL+"', '__blank__');")
where URL is a string such as "http://www.google.com".
Close tab by:
driver.close() [Note, this also doubles as driver.quit() when you only have 1 tab open].
Navigate between tabs by: driver.switch_to_window(driver.window_handles[0])
and driver.switch_to_window(driver.window_handles[1]).
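Putting those pieces together, a rough end-to-end sketch (my combination of the steps above, assuming the same driver object used in this answer):
driver.execute_script("window.open('http://www.google.com', '_blank');")  # open a new tab
driver.switch_to_window(driver.window_handles[1])  # focus the new tab
driver.close()                                     # close it again
driver.switch_to_window(driver.window_handles[0])  # back to the original tab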
A:
Open new tab:
browser.execute_script("window.open('" + your_url + "', '_blank')")  # your_url is the address to load
Switch to new tab:
windows = browser.window_handles
browser.switch_to.window(windows[1])
A:
IMHO all the above answers didn't exactly solve the original problem of closing a tab in a window and not closing the entire window or opening a blank tab.
my solution:
browser.switch_to.window("tab1") #change the tab1 to the id of the tab you want to close
browser.execute_script("window.close('','_parent','');")
| Open and close new tab with Selenium WebDriver in OS X | I'm using the Firefox Webdriver in Python 2.7 on Windows to simulate opening (Ctrl+t) and closing (Ctrl + w) a new tab.
Here's my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Firefox()
browser.get('https://www.google.com')
main_window = browser.current_window_handle
# open new tab
browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 't')
browser.get('https://www.yahoo.com')
# close tab
browser.find_element_by_tag_name('body').send_keys(Keys.CONTROL + 'w')
How to achieve the same on a Mac?
Based on this comment one should use browser.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't') to open a new tab but I don't have a Mac to test it and what about the equivalent of Ctrl-w?
Thanks!
| [
"There's nothing easier and clearer than just running JavaScript.\nOpen new tab:\ndriver.execute_script(\"window.open('');\")\n",
"open a new tab:\nbrowser.get('http://www.google.com')\n\nclose a tab:\nbrowser.close()\n\nswitch to a tab:\nbrowser.swith_to_window(window_name)\n\n",
"You can choose which window you want to close:\nwindow_name = browser.window_handles[0]\n\nSwitch window:\nbrowser.switch_to.window(window_name=window_name)\n\nThen close it:\nbrowser.close()\n\n",
"Just to combine the answers above for someone still curious. The below is based on Python 2.7 and a driver in Chrome.\nOpen new tab by: driver.execute_script(\"window.open('\"+URL+\"', '__blank__');\")\nwhere URL is a string such as \"http://www.google.com\".\nClose tab by:\ndriver.close() [Note, this also doubles as driver.quit() when you only have 1 tab open].\nNavigate between tabs by: driver.switch_to_window(driver.window_handles[0])\nand driver.switch_to_window(driver.window_handles[1]).\n",
"Open new tab:\nbrowser.execute_script(\"window.open('\"+your url+\"', '_blank')\")\n\nSwitch to new tab:\nbrowser.switch_to.window(windows[1])\n\n",
"IMHO all the above answers didn't exactly solve the original problem of closing a tab in a window and not closing the entire window or opening a blank tab.\nmy solution:\nbrowser.switch_to.window(\"tab1\") #change the tab1 to the id of the tab you want to close\nbrowser.execute_script(\"window.close('','_parent','');\")\n\n"
] | [
14,
12,
11,
6,
2,
0
] | [] | [] | [
"macos",
"python",
"selenium"
] | stackoverflow_0025951968_macos_python_selenium.txt |
Q:
Min and max values of an array in Python
I want to calculate the minimum and maximum values of array A but I want to exclude all values less than 1e-12. I present the current and expected outputs.
import numpy as np
A=np.array([[9.49108487e-05],
[1.05634586e-19],
[5.68676707e-17],
[1.02453254e-06],
[2.48792902e-16],
[1.02453254e-06]])
Min=np.min(A)
Max=np.max(A)
print(Min,Max)
The current output is
1.05634586e-19 9.49108487e-05
The expected output is
1.02453254e-06 9.49108487e-05
A:
Slice with boolean indexing before getting the min/max:
B = A[A>1e-12]
Min = np.min(B)
Max = np.max(B)
print(Min, Max)
Output: 1.02453254e-06 9.49108487e-05
B: array([9.49108487e-05, 1.02453254e-06, 1.02453254e-06])
A:
You can just select the values of the array greater than 1e-12 first and obtain the min and max of that:
>>> A[A > 1e-12].min()
1.02453254e-06
>>> A[A > 1e-12].max()
9.49108487e-05
A:
arr = np.array([9.49108487e-05,1.05634586e-19,5.68676707e-17,1.02453254e-06,2.48792902e-16,1.02453254e-06])
mask = arr > 1e-12
Min = np.min(arr[mask])
Max = np.max(arr[mask])
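One caveat worth adding (my note, not from the answers): if no element exceeds the threshold, the filtered array is empty and np.min/np.max raise a ValueError, so a small guard can help:
B = A[A > 1e-12]
if B.size:  # only reduce if something survived the filter
    print(np.min(B), np.max(B))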
| Min and max values of an array in Python | I want to calculate the minimum and maximum values of array A but I want to exclude all values less than 1e-12. I present the current and expected outputs.
import numpy as np
A=np.array([[9.49108487e-05],
[1.05634586e-19],
[5.68676707e-17],
[1.02453254e-06],
[2.48792902e-16],
[1.02453254e-06]])
Min=np.min(A)
Max=np.max(A)
print(Min,Max)
The current output is
1.05634586e-19 9.49108487e-05
The expected output is
1.02453254e-06 9.49108487e-05
| [
"Slice with boolean indexing before getting the min/max:\nB = A[A>1e-12]\nMin = np.min(B)\nMax = np.max(B)\nprint(Min, Max)\n\nOutput: 1.02453254e-06 9.49108487e-05\nB: array([9.49108487e-05, 1.02453254e-06, 1.02453254e-06])\n",
"You can just select the values of the array greater than 1e-12 first and obtain the min and max of that:\n>>> A[A > 1e-12].min()\n1.02453254e-06\n>>> A[A > 1e-12].max()\n9.49108487e-05\n\n",
"arr = np.array([9.49108487e-05,1.05634586e-19,5.68676707e-17,1.02453254e-06,2.48792902e-16,1.02453254e-06]) \nmask = arr > 1e-12 \nMin = np.min(arr[mask]) \nMax = np.max(arr[mask])\n\n"
] | [
3,
2,
1
] | [] | [] | [
"numpy",
"python"
] | stackoverflow_0074657268_numpy_python.txt |
Q:
I keep getting name 'message' not defined in python even though I made it a global variable in the function and I'm calling it
Code(python):
import tkinter as tk
root = tk.Tk()
root.geometry("600x400")
message_var2 = tk.StringVar()
def page2(message):
print(f'test\n{message}')
def getInputtemp():
global message
message = message_var2.get()
message_var2.set("")
message_entryi = tk.Entry(root, textvariable=message_var2, font=('calibre', 10, 'normal'))
message_entryi.pack()
save_btn2 = tk.Button(root, text='Send', command=getInputtemp)
save_btn2.pack()
if message in ['1886', '2022']:
page2(message)
root.mainloop()
I want to use the variable 'message' outside of the function but it keeps giving me the not defined error
Even though I made it a global variable and I'm calling the function before trying to use it, I still get the error. Even though making it global and calling it has worked in the past with other things, it's not working here. Am I doing something wrong? Did I forget some small, tiny detail?
A:
The issue you have here is that your function getInputtemp is not getting fired. It only gets fired when the button save_btn2 is clicked. Also, the if statement where the error is occurring will only get fired once. To fix this, you can either do what @Tkirishima has suggested.
Or just move the if statement inside the getInputtemp function.
def getInputtemp():
#global message
#Then you would no longer need message as a global variable
message = message_var2.get()
message_var2.set("")
if message in ['1886', '2022']:
page2(message)
But if you do want the if statement outside the function (which I wouldn't recommend since, as stated earlier, it would execute once when you start the script and never again):
getInputtemp() #The function is called to create message as global variable
if message in ['1886', '2022']:
page2(message)
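For completeness, a minimal self-contained sketch that assembles the first suggestion with the widgets from the question (my assembly, not the original poster's code):
import tkinter as tk

root = tk.Tk()
root.geometry("600x400")
message_var2 = tk.StringVar()

def page2(message):
    print(f'test\n{message}')

def getInputtemp():
    message = message_var2.get()   # a local variable is enough now
    message_var2.set("")
    if message in ['1886', '2022']:
        page2(message)

message_entryi = tk.Entry(root, textvariable=message_var2, font=('calibre', 10, 'normal'))
message_entryi.pack()
save_btn2 = tk.Button(root, text='Send', command=getInputtemp)
save_btn2.pack()

root.mainloop()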
A:
The problem you have is the usage of the global keyword.
The thing is, you use global even though the value doesn't exist in the global scope yet.
If you want your program to work, the best thing to do is to define message at the start of the program as None; that way, message exists in the global scope with a value of None.
...
root.geometry("600x400")
message_var2 = tk.StringVar()
message = None # <<<<<<<<<<
def page2(message):
print(f'test\n{message}')
...
Related:
https://www.programiz.com/python-programming/global-keyword
https://en.wikipedia.org/wiki/Scope_(computer_science)
| I keep getting name 'message' not defined in python even though I made it a global variable in the function and I'm calling it | Code(python):
import tkinter as tk
root = tk.Tk()
root.geometry("600x400")
message_var2 = tk.StringVar()
def page2(message):
print(f'test\n{message}')
def getInputtemp():
global message
message = message_var2.get()
message_var2.set("")
message_entryi = tk.Entry(root, textvariable=message_var2, font=('calibre', 10, 'normal'))
message_entryi.pack()
save_btn2 = tk.Button(root, text='Send', command=getInputtemp)
save_btn2.pack()
if message in ['1886', '2022']:
page2(message)
root.mainloop()
I want to use the variable 'message' outside of the function but it keeps giving me the not defined error
Even though I made it a global variable and I'm calling the function before trying to use it I still get the error, Even though after making it global and calling it worked in the past with other things its not working here am I doing something wrong? Did I forget some small tiny detail?
| [
"The issue you have here is that your function getInputtemp is not getting fired. It only gets fired when button save_btn2 is clicked. Also, the if statement where the error is occuring will only get fired once. To fix this, you can either do what @Tkirishima have suggested.\nOr just move the if statement inside the getInputtemp function.\ndef getInputtemp():\n #global message \n #Then you would no longer need message as a global variable\n message = message_var2.get()\n message_var2.set(\"\")\n if message in ['1886', '2022']:\n page2(message)\n\nBut, if you do want the if statement outside the function (which I wouldn't recommend as stated earlier that it would execute when you start the script and never again):\ngetInputtemp() #The function is called to create message as global variable\nif message in ['1886', '2022']:\n page2(message)\n\n",
"The problem that you have, is the usage of the global keyword.\nThe thing is that, you use global even tho the value doesn't exists in the global scope.\nIf you want your program to work, the best thing to do is to define message at the start of the program as None, with that process, message exists in the global scope with a value of None.\n...\nroot.geometry(\"600x400\")\nmessage_var2 = tk.StringVar()\nmessage = None # <<<<<<<<<<\n\ndef page2(message):\n print(f'test\\n{message}')\n...\n\nRelated:\n\nhttps://www.programiz.com/python-programming/global-keyword\nhttps://en.wikipedia.org/wiki/Scope_(computer_science)\n\n"
] | [
2,
1
] | [] | [] | [
"function",
"global_variables",
"python",
"tkinter",
"variables"
] | stackoverflow_0074657402_function_global_variables_python_tkinter_variables.txt |
Q:
Idiomatic way to drop Pandas DataFrame column in an idempotent fashion (without settings errors="ignore")
Is there a more Pythonic or Pandas-idiomatic way to drop a DataFrame column without just setting errors="ignore"?
Suppose I have the following DataFrame:
import pandas as pd
from pandas import DataFrame
df_initial: DataFrame = pd.DataFrame([
{
"country": "DE",
"price": 1,
"quantity": 10
}
])
If I am unsure about when exactly a function that drops a column might be called (I am thinking in the context of a Jupyter Notebook), is there a way to do this that isn't just ignoring errors (like below)?
df_country_dropped = df_initial.drop("country", axis=1, errors="ignore")
Perhaps I'm being too pernickety, but I had hoped that there would be a more Pythonic way to deal with this than just ignoring a KeyError.
I realise it is possible to check for the existence of the column before dropping:
def drop_country_if_exists(df):
if "country" in df:
return df.drop("country", axis=1)
df_country_dropped = drop_country_if_exists(df_initial)
But I was hoping there might be a more elegant way!
A:
If you're trying to avoid the overhead of creating an additional function, I'd say that list comprehensions are a Pythonic way of achieving what you need.
An approach like this would be idempotent, and elegant enough:
df[[col for col in df.columns if col != 'country']]
The upside with this method is that it's easily extensible if you want to drop a list of columns, by changing the != to a not in and passing your list of column names.
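Another idempotent option along the same lines (my suggestion, not part of the answer above) keeps drop but only passes the columns that actually exist; the extra column name is purely illustrative:
cols_to_drop = ["country", "some_other_column"]  # "some_other_column" is a hypothetical example
df_country_dropped = df_initial.drop(columns=df_initial.columns.intersection(cols_to_drop))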
| Idiomatic way to drop Pandas DataFrame column in an idempotent fashion (without settings errors="ignore") | Is there a more Pythonic or Pandas-idiomatic way to drop a DataFrame column without just setting errors="ignore"?
Suppose I have the following DataFrame:
import pandas as pd
from pandas import DataFrame
df_initial: DataFrame = pd.DataFrame([
{
"country": "DE",
"price": 1,
"quantity": 10
}
])
If I am unsure about when exactly a function that drops a column might be called (I am thinking in the context of a Jupyter Notebook), is there a way to do this that isn't just ignoring errors (like below)?
df_country_dropped = df_initial.drop("country", axis=1, errors="ignore")
Perhaps I'm being too pernickety, but I had hoped that there would be a more Pythonic way to deal with this than just ignoring a KeyError.
I realise it is possible to check for the existence of the column before dropping:
def drop_country_if_exists(df):
if "country" in df:
return df.drop("country", axis=1)
df_country_dropped = drop_country_if_exists(df_initial)
But I was hoping there might be a more elegant way!
| [
"If you're trying to avoid the overhead of creating an additional function, I'd say that list comprehensions are a Pythonic way of achieving what you need.\nAn approach like this would be idempotent, and elegant enough:\ndf[[col for col in df.columns if col != 'country']]\n\nThe upside with this method is that it's easily extensible if you want to drop a list of columns, by changing the != to a not in and passing your list of column names.\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074655572_dataframe_pandas_python.txt |
Q:
Python AND operator on two boolean lists - how?
I have two boolean lists, e.g.,
x=[True,True,False,False]
y=[True,False,True,False]
I want to AND these lists together, with the expected output:
xy=[True,False,False,False]
I thought that expression x and y would work, but came to discover that it does not: in fact, (x and y) != (y and x)
Output of x and y: [True,False,True,False]
Output of y and x: [True,True,False,False]
Using list comprehension does have correct output. Whew!
xy = [x[i] and y[i] for i in range(len(x))]
Mind you I could not find any reference that told me the AND operator would work as I tried with x and y. But it's easy to try things in Python.
Can someone explain to me what is happening with x and y?
And here is a simple test program:
import random
random.seed()
n = 10
x = [random.random() > 0.5 for i in range(n)]
y = [random.random() > 0.5 for i in range(n)]
# Next two methods look sensible, but do not work
a = x and y
z = y and x
# Next: apparently only the list comprehension method is correct
xy = [x[i] and y[i] for i in range(n)]
print 'x : %s'%str(x)
print 'y : %s'%str(y)
print 'x and y : %s'%str(a)
print 'y and x : %s'%str(z)
print '[x and y]: %s'%str(xy)
A:
and simply returns either the first or the second operand, based on their truth value. If the first operand is considered false, it is returned, otherwise the other operand is returned.
Lists are considered true when not empty, so both lists are considered true. Their contents don't play a role here.
Because both lists are not empty, x and y simply returns the second list object; only if x was empty would it be returned instead:
>>> [True, False] and ['foo', 'bar']
['foo', 'bar']
>>> [] and ['foo', 'bar']
[]
See the Truth value testing section in the Python documentation:
Any object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false:
[...]
any empty sequence, for example, '', (), [].
[...]
All other values are considered true — so objects of many types are always true.
(emphasis mine), and the Boolean operations section right below that:
x and y
if x is false, then x, else y
This is a short-circuit operator, so it only evaluates the second argument if the first one is True.
You indeed need to test the values contained in the lists explicitly. You can do so with a list comprehension, as you discovered. You can rewrite it with the zip() function to pair up the values:
[a and b for a, b in zip(x, y)]
A:
You could use numpy:
>>> import numpy as np
>>> x=np.array([True,True,False,False])
>>> y=np.array([True,False,True,False])
>>> x & y
array([ True, False, False, False], dtype=bool)
Numpy allows numerical and logical operations on arrays such as:
>>> z=np.array([1,2,3,4])
>>> z+1
array([2, 3, 4, 5])
You can perform bitwise and with the & operator.
Instead of a list comprehension, you can use numpy to generate the boolean array directly like so:
>>> np.random.random(10)>.5
array([ True, True, True, False, False, True, True, False, False, False], dtype=bool)
A:
Here is a simple solution:
np.logical_and(x,y)
A:
and is not necessarily a Boolean operator; it returns one of its two arguments, regardless of their type. If the first argument is false-ish (False, numeric zero, or an empty string/container), it returns that argument. Otherwise, it returns the second argument.
In your case, both x and y are non-empty lists, so the first argument is always true-ish, meaning x and y returns y and y and x returns x.
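A quick way to see this in action, using the lists from the question (my illustration):
x = [True, True, False, False]
y = [True, False, True, False]
print((x and y) is y)  # True, `and` just hands back the second list object
print((y and x) is x)  # True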
A:
This should do what you want:
xy = [a and b for a, b in zip(x, y)]
The reason x and y returns y and y and x returns x is because boolean operators in python return the last value checked that determines the true-ness of the expression. Non-empty list's evaluate to True, and since and requires both operands to evaluate True, the last operand checked is the second operand. Contrast with x or y, which would return x because it doesn't need to check y to determine the true-ness of the expression.
A:
To generalize on the zip approach, use all and any for any number of lists.
all for AND:
[all(i) for i in zip(a, b, c)] # zip all lists
and any for OR:
[any(i) for i in zip(a, b, c)]
A:
Instead of using
[a and b for a, b in zip(x, y)]
one could just use the possibility of numpy to multiply bool-values:
(np.array(x)*np.array(y))
>> array([ True, False, False, False], dtype=bool)
Or do I overlook a special case?
A:
You can use the zip function
x=[True,True,False,False]
y=[True,False,True,False]
z=[a and b for a,b in zip(x,y)]
A:
In addition to what @Martijn Pieters has answered, I would just add the following code to explain and and or operations in action.
and returns the first falsy value encountered else the last evaluated argument.
Similarly or returns the first truthy value encountered else the last evaluated argument.
nl1 = [3,3,3,3,0,0,0,0]
nl2 = [2,2,0,0,2,2,0,0]
nl3 = [1,0,1,0,1,0,1,0]
and_list = [a and b and c for a,b,c in zip(nl1,nl2,nl3)]
or_list = [a or b or c for a,b,c in zip(nl1,nl2,nl3)]
Values are
and_list = [1, 0, 0, 0, 0, 0, 0, 0]
or_list = [3, 3, 3, 3, 2, 2, 1, 0]
A:
Thanks for the answer @Martijn Pieters and @Tony.
I dug into the timing of the various options we have for computing the AND of two lists, and I would like to share my results because I found them interesting.
Although I like the pythonic way [a and b for a,b in zip(x,y)] a lot, it turns out to be really slow.
I compared it with an integer product of arrays, (1*(array of bool)) * (1*(array of bool)), and that turns out to be more than 10x faster.
import time
import numpy as np
array_to_filter = np.linspace(1,1000000,1000000) # 1 million of integers :-)
value_limit = 100
cycles = 100
# METHOD #1: [a and b for a,b in zip(x,y) ]
t0=time.clock()
for jj in range(cycles):
x = array_to_filter<np.max(array_to_filter)-value_limit # filter the values > MAX-value_limit
y = array_to_filter>value_limit # filter the values < value_limit
z= [a and b for a,b in zip(x,y) ] # AND
filtered = array_to_filter[z]
print('METHOD #1 = %.2f s' % ( (time.clock()-t0)))
# METHOD 1*(array of bool) AND 1*(array of bool)
t0=time.clock()
for jj in range(cycles):
x = 1*(array_to_filter<np.max(array_to_filter)-value_limit) # filter the values > MAX-value_limit
y = 1*(array_to_filter>value_limit) # filter the values < value_limit
z = x*y # AND
z = z.astype(bool) # convert back to array of bool
filtered = array_to_filter[z]
print('METHOD #2 = %.2f s' % ( (time.clock()-t0)))
The results are
METHOD #1 = 15.36 s
METHOD #2 = 1.85 s
The speed is affected almost equally by the size of the array and by the number of cycles.
I hope this helps someone make their code faster. :-)
| Python AND operator on two boolean lists - how? | I have two boolean lists, e.g.,
x=[True,True,False,False]
y=[True,False,True,False]
I want to AND these lists together, with the expected output:
xy=[True,False,False,False]
I thought that expression x and y would work, but came to discover that it does not: in fact, (x and y) != (y and x)
Output of x and y: [True,False,True,False]
Output of y and x: [True,True,False,False]
Using list comprehension does have correct output. Whew!
xy = [x[i] and y[i] for i in range(len(x)]
Mind you I could not find any reference that told me the AND operator would work as I tried with x and y. But it's easy to try things in Python.
Can someone explain to me what is happening with x and y?
And here is a simple test program:
import random
random.seed()
n = 10
x = [random.random() > 0.5 for i in range(n)]
y = [random.random() > 0.5 for i in range(n)]
# Next two methods look sensible, but do not work
a = x and y
z = y and x
# Next: apparently only the list comprehension method is correct
xy = [x[i] and y[i] for i in range(n)]
print 'x : %s'%str(x)
print 'y : %s'%str(y)
print 'x and y : %s'%str(a)
print 'y and x : %s'%str(z)
print '[x and y]: %s'%str(xy)
| [
"and simply returns either the first or the second operand, based on their truth value. If the first operand is considered false, it is returned, otherwise the other operand is returned.\nLists are considered true when not empty, so both lists are considered true. Their contents don't play a role here.\nBecause both lists are not empty, x and y simply returns the second list object; only if x was empty would it be returned instead:\n>>> [True, False] and ['foo', 'bar']\n['foo', 'bar']\n>>> [] and ['foo', 'bar']\n[]\n\nSee the Truth value testing section in the Python documentation:\n\nAny object can be tested for truth value, for use in an if or while condition or as operand of the Boolean operations below. The following values are considered false:\n[...]\n\nany empty sequence, for example, '', (), [].\n\n[...]\nAll other values are considered true — so objects of many types are always true.\n\n(emphasis mine), and the Boolean operations section right below that:\n\nx and y\n if x is false, then x, else y\nThis is a short-circuit operator, so it only evaluates the second argument if the first one is True.\n\nYou indeed need to test the values contained in the lists explicitly. You can do so with a list comprehension, as you discovered. You can rewrite it with the zip() function to pair up the values:\n[a and b for a, b in zip(x, y)]\n\n",
"You could use numpy:\n>>> import numpy as np\n>>> x=np.array([True,True,False,False])\n>>> y=np.array([True,False,True,False])\n>>> x & y\narray([ True, False, False, False], dtype=bool)\n\nNumpy allows numerical and logical operations on arrays such as:\n>>> z=np.array([1,2,3,4])\n>>> z+1\narray([2, 3, 4, 5])\n\nYou can perform bitwise and with the & operator.\nInstead of a list comprehension, you can use numpy to generate the boolean array directly like so:\n>>> np.random.random(10)>.5\narray([ True, True, True, False, False, True, True, False, False, False], dtype=bool)\n\n",
"Here is a simple solution:\nnp.logical_and(x,y)\n\n",
"and is not necessarily a Boolean operator; it returns one of its two arguments, regardless of their type. If the first argument is false-ish (False, numeric zero, or an empty string/container), it returns that argument. Otherwise, it returns the second argument.\nIn your case, both x and y are non-empty lists, so the first argument is always true-ish, meaning x and y returns y and y and x returns x.\n",
"This should do what you want:\nxy = [a and b for a, b in zip(x, y)]\n\nThe reason x and y returns y and y and x returns x is because boolean operators in python return the last value checked that determines the true-ness of the expression. Non-empty list's evaluate to True, and since and requires both operands to evaluate True, the last operand checked is the second operand. Contrast with x or y, which would return x because it doesn't need to check y to determine the true-ness of the expression.\n",
"To generalize on the zip approach, use all and any for any number of lists.\nall for AND:\n[all(i) for i in zip(a, b, c)] # zip all lists\n\nand any for OR:\n[any(i) for i in zip(a, b, c)]\n\n",
"Instead of using\n[a and b for a, b in zip(x, y)]\n\none could just use the possibility of numpy to multiply bool-values:\n(np.array(x)*np.array(y))\n>> array([ True, False, False, False], dtype=bool)\n\nOr do I overlook a special case?\n",
"You can use the zip function\nx=[True,True,False,False]\ny=[True,False,True,False]\nz=[a and b for a,b in zip(x,y)]\n\n",
"In addition to what @Martijn Pieters has answered, I would just add the following code to explain and and or operations in action.\nand returns the first falsy value encountered else the last evaluated argument. \nSimilarly or returns the first truthy value encountered else the last evaluated argument. \nnl1 = [3,3,3,3,0,0,0,0]\nnl2 = [2,2,0,0,2,2,0,0]\nnl3 = [1,0,1,0,1,0,1,0]\nand_list = [a and b and c for a,b,c in zip(nl1,nl2,nl3)]\nor_list = [a or b or c for a,b,c in zip(nl1,nl2,nl3)]\n\nValues are\nand_list = [1, 0, 0, 0, 0, 0, 0, 0]\nor_list = [3, 3, 3, 3, 2, 2, 1, 0]\n",
"Thanks for the answer @Martijn Pieters and @Tony.\nI dig into the timing of the various options we have to make the AND of two lists and I would like to share my results, because I found them interesting.\nDespite liking a lot the pythonic way [a and b for a,b in zip(x,y) ], turns out really slow.\nI compare with a integer product of arrays (1*(array of bool)) * (1*(array of bool)) and it turns out to be more than 10x faster\nimport time\nimport numpy as np\narray_to_filter = np.linspace(1,1000000,1000000) # 1 million of integers :-)\nvalue_limit = 100\ncycles = 100\n\n# METHOD #1: [a and b for a,b in zip(x,y) ]\nt0=time.clock()\nfor jj in range(cycles):\n x = array_to_filter<np.max(array_to_filter)-value_limit # filter the values > MAX-value_limit\n y = array_to_filter>value_limit # filter the values < value_limit\n z= [a and b for a,b in zip(x,y) ] # AND\n filtered = array_to_filter[z]\nprint('METHOD #1 = %.2f s' % ( (time.clock()-t0)))\n\n\n\n# METHOD 1*(array of bool) AND 1*(array of bool)\nt0=time.clock()\nfor jj in range(cycles):\n x = 1*(array_to_filter<np.max(array_to_filter)-value_limit) # filter the values > MAX-value_limit\n y = 1*(array_to_filter>value_limit) # filter the values < value_limit\n z = x*y # AND\n z = z.astype(bool) # convert back to array of bool\n filtered = array_to_filter[z]\nprint('METHOD #2 = %.2f s' % ( (time.clock()-t0)))\n\nThe results are\nMETHOD #1 = 15.36 s\nMETHOD #2 = 1.85 s\n\nThe speed is almost affected equally by the size of the array or by the number of cycles.\nI hope I helped someone code to be faster. :-)\n"
] | [
75,
20,
12,
7,
4,
2,
1,
0,
0,
0
] | [
"The following works for me:\n([True,False,True]) and ([False,False,True])\n\noutput:\n[False, False, True]\n\n"
] | [
-1
] | [
"boolean",
"list",
"operator_keyword",
"python"
] | stackoverflow_0032192163_boolean_list_operator_keyword_python.txt |
Q:
How to get the index of a given string in a list in Python?
My list is like this; in this example the strings are 'a' and 'b'.
I want to return the indices of the strings 'a' and 'b', and then calculate how many times 'a' is repeated in list1:
list1=['a','a','b','a','a','b','a','a','b','a','b','a','a']
I want to return the position of every 'a' in list1.
The result should be like this:
a_position=[1,2,4,5,7,8,10,12,13]
And I want to calculate how many times 'a' is repeated in list1:
a_rep=9
A:
You could do below:
a_positions = [idx + 1 for idx, el in enumerate(list1) if el == 'a']
a_repetition = len(a_positions)
print(a_positions):
[1, 2, 4, 5, 7, 8, 10, 12, 13]
print(a_repetition):
9
If you need repetitions of each element you can also use collections.Counter
from collections import Counter
counter = Counter(list1)
print(counter['a']):
9
A:
If you want to get the indices and counts of all letters:
list1=['a','a','b','a','a','b','a','a','b','a','b','a','a']
pos = {}
for i,c in enumerate(list1, start=1): # 1-based indexing
pos.setdefault(c, []).append(i)
pos
# {'a': [1, 2, 4, 5, 7, 8, 10, 12, 13],
# 'b': [3, 6, 9, 11]}
counts = {k: len(v) for k,v in pos.items()}
# {'a': 9, 'b': 4}
| how to get index of a giving string in liste python? | my list is like this, in example the string is 'a' and 'b' ;
i want to return the index of string 'a' and for 'b' then i want to calculate how many time is 'a' repeated in the list1 :
list1=['a','a','b','a','a','b','a','a','b','a','b','a','a']
i want to return the order of evry 'a' in list1
the result should be like this :
a_position=[1,2,4,5,7,8,10,12,13]
and i want to calculate how many time 'a' is repeated in list1:
a_rep=9
| [
"You could do below:\na_positions = [idx + 1 for idx, el in enumerate(list1) if el == 'a']\na_repitition = len(a_positions)\n\nprint(a_positions):\n[1, 2, 4, 5, 7, 8, 10, 12, 13]\n\nprint(a_repitition):\n9\n\nIf you need repititions of each element you can also use collections.Counter\nfrom collections import Counter\ncounter = Counter(list1)\n\nprint(counter['a']):\n9\n\n",
"If you want to get the indices and counts of all letters:\nlist1=['a','a','b','a','a','b','a','a','b','a','b','a','a']\npos = {}\nfor i,c in enumerate(list1, start=1): # 1-based indexing\n pos.setdefault(c, []).append(i)\npos\n# {'a': [1, 2, 4, 5, 7, 8, 10, 12, 13],\n# 'b': [3, 6, 9, 11]}\n\ncounts = {k: len(v) for k,v in pos.items()}\n# {'a': 9, 'b': 4}\n\n"
] | [
2,
1
] | [] | [] | [
"indexing",
"python"
] | stackoverflow_0074657346_indexing_python.txt |
Q:
Concatenate all 2 dimensional values in a dictionary. (Output is Torch tensor)
I want to concatenate all 2 dimensional values in a dictionary.
The number of rows of these values is always the same.
D = {'a': [[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]],
'b': [[1, 1],
[1, 1],
[1, 1]],
'c': [[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2]]
}
And the output must be form of a torch tensor.
tensor([[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2]])
Any help would be appreciated!!
A:
import torch
print(torch.cat(tuple([torch.tensor(D[name]) for name in D.keys()]), dim=1))
Output:
tensor([[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2]])
A:
from itertools import chain
l = []
for i in range(len(D)):
t = [ D[k][i] for k in D ]
l.append( list(chain.from_iterable(t)) )
Output:
[[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2]]
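Note that the result above is still a plain nested list; one extra conversion (my addition) gives the tensor the question asks for:
import torch
print(torch.tensor(l))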
| Concatenate all 2 dimensional values in a dictionary. (Output is Torch tensor) | I want to concatenate all 2 dimensional values in a dictionary.
The number of rows of these values is always the same.
D = {'a': [[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]],
'b': [[1, 1],
[1, 1],
[1, 1]],
'c': [[2, 2, 2, 2],
[2, 2, 2, 2],
[2, 2, 2, 2]]
}
And the output must be form of a torch tensor.
tensor([[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],
[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2]])
Any help would be appreciated!!
| [
"import torch\nprint(torch.cat(tuple([torch.tensor(D[name]) for name in D.keys()]), dim=1))\n\nOutput:\ntensor([[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],\n [0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],\n [0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2]])\n\n",
"from itertools import chain \nl = []\nfor i in range(len(D)):\n t = [ D[k][i] for k in D ] \n l.append( list(chain.from_iterable(t)) )\n\nOutput:\n[[0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],\n [0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2],\n [0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2]]\n\n"
] | [
1,
0
] | [] | [] | [
"dictionary",
"python",
"torch"
] | stackoverflow_0074656455_dictionary_python_torch.txt |
Q:
Jenkins groovy pipeline using try catch and variable from python exit
I have a pipeline on jenkins that inside a stage it uses the try-catch framework to try to run a python script. once run, the python script either prints a good value or prints a bad value and exits, depending on the input. My goal is to later use this to make a test, so my requirement is that I need to be able to diferentiate if the python script succeeded or if it was terminated with exit('ERR_MESSAGE').
I have made it work if python runs to the end. However, if python ends with the exit command, the jenkinsfile correctly understands that and it follows to catch, but it does not store the text printed by the python script before, as I need.
Can you help? What am I doing wrong? Please see below the jenkinsfile stage
stage('Test branch') {
steps {
script {
test_results = 'position 1'
try {
test_results = sh (
script: "python3 \${WORKSPACE}/testingjenkinsexit.py notpass",
returnStdout: true
).trim()
echo "Test results in passed test: ${test_results}"
} catch (err) {
echo "Test results in failed test numb 1: " + test_results
echo "Test results in failed test numb 2: ${test_results}"
echo err.getMessage()
println err.dump()
}
}
}
}
In the code above, I am calling the script 'testingjenkinsexit.py' with input 'notpass', as this is the one where the python script will terminate with exit. If I use input 'pass', then it works correctly as python does not end with exit.
and the python script below
from sys import argv
def testingjenkins(desired_output):
#print relevant test results. If at least one test failed, stop execution
if desired_output == "'pass'":
print(desired_output)
else:
print('tests did not pass')
exit('Deployement interrupted by python.')
desired_output = "'" + str(argv[1]) + "'"
if __name__ == "__main__":
testingjenkins(desired_output)
Thank you very much for your help.
I used try-catch within the jenkinsfile to call a python script that prints values and might terminate with exit('MESSAGE') if the input is bad. I was expecting the try-catch to be able to deal with the python script ending with exit (which it does) and I was expecting that, in both a good execution and a bad execution (one that ends with exit), the try-catch would be able to store the messages printed by the python script (which it does not do).
A:
This is the expected behavior, I suppose. When the script exits with a non-zero exit code, the standard output will not be returned. If you want to get the output irrespective of the status, you can do something like this. The following will combine both STDOUT and STDERR and return them while exiting the script with exit code 0. This will not move the execution to the catch block, so you will have to add a condition and check the returned message.
test_results = sh (
script: "python3 \${WORKSPACE}/test.py notpass 2>&1 || echo \"status:\$?\"; exit 0",
returnStdout: true
)
# Output
Test results in passed test: tests did not pass
Deployement interrupted by python.
status:1
Another approach is to write the STDOUT to a file and read that in the catch block.
stage('Test branch') {
steps {
script {
test_results = 'position 1'
try {
test_results = sh (
script: "python3 \${WORKSPACE}/test.py notpass > output",
returnStdout: true
)
echo "Test results in passed test: ${test_results}"
} catch (err) {
output = readFile(file: 'output')
echo "Test results in failed test numb 1: " + output
echo "Test results in failed test numb 2: ${test_results}"
echo err.getMessage()
println err.dump()
}
}
}
}
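One more detail that may explain the behaviour (my addition, not part of the answer): when a Python script finishes with exit('some message'), CPython writes that message to stderr rather than stdout and sets the exit status to 1. That is why the text never shows up in the stdout captured by returnStdout: true unless stderr is redirected, as in the 2>&1 example above.
import sys
sys.exit('Deployement interrupted by python.')  # the message goes to stderr; the process exits with status 1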
| Jenkins groovy pipeline using try catch and variable from python exit | I have a pipeline on jenkins that inside a stage it uses the try-catch framework to try to run a python script. once run, the python script either prints a good value or prints a bad value and exits, depending on the input. My goal is to later use this to make a test, so my requirement is that I need to be able to diferentiate if the python script succeeded or if it was terminated with exit('ERR_MESSAGE').
I have made it work if python runs to the end. However, if python ends with the exit command, the jenkinsfile correctly understands that and it follows to catch, but it does not store the text printed by the python script before, as I need.
Can you help? What am I doing wrong? Please see below the jenkinsfile stage
stage('Test branch') {
steps {
script {
test_results = 'position 1'
try {
test_results = sh (
script: "python3 \${WORKSPACE}/testingjenkinsexit.py notpass",
returnStdout: true
).trim()
echo "Test results in passed test: ${test_results}"
} catch (err) {
echo "Test results in failed test numb 1: " + test_results
echo "Test results in failed test numb 2: ${test_results}"
echo err.getMessage()
println err.dump()
}
}
}
}
in the code abve, I am calling the script 'testingjenkinsexit.py' with input'notpass', as this is the one when the python script will terminate with exit. If I use input pass, then it works correctly as python does not end with exit.
and the python script below
from sys import argv
def testingjenkins(desired_output):
#print relevant test results. If at least one test failed, stop execution
if desired_output == "'pass'":
print(desired_output)
else:
print('tests did not pass')
exit('Deployement interrupted by python.')
desired_output = "'" + str(argv[1]) + "'"
if __name__ == "__main__":
testingjenkins(desired_output)
Thank you very much for your help.
I used try - catch within the jenkinsfile to call a python script that prints values and might terminate with exit('MESSAGE') if input is bad. I was expecting that the try-catch would be able to deal with the python ending with exit (what it does with success) and I was expecting that in both good execution and bad execution (that ends with exit) the try-catch would be able to store the messages printed by the python script (what it does not do).
| [
"This is the expected behavior I suppose. When the script exits with a non-zero exit code the StandardOut will not be returned. If you want to get the output irrespective of the status you can do something like this. The following will combine both STDOUT and STDERR and return while exiting the script with exit code 0. This will not move the execution to the catch block. So you will have to add a condition and check the returned message.\ntest_results = sh (\n script: \"python3 \\${WORKSPACE}/test.py notpass 2>&1 || echo \\\"status:\\$?\\\"; exit 0\",\n returnStdout: true\n )\n\n# Output\nTest results in passed test: tests did not pass\nDeployement interrupted by python.\nstatus:1\n\nAnother approach is to write the STDOUT to a file and read that in the catch block.\nstage('Test branch') {\n steps {\n script {\n test_results = 'position 1'\n try {\n test_results = sh (\n script: \"python3 \\${WORKSPACE}/test.py notpass > output\",\n returnStdout: true\n )\n echo \"Test results in passed test: ${test_results}\"\n } catch (err) {\n output = readFile(file: 'output')\n echo \"Test results in failed test numb 1: \" + output\n echo \"Test results in failed test numb 2: ${test_results}\"\n echo err.getMessage()\n println err.dump()\n }\n }\n }\n}\n\n"
] | [
0
] | [] | [] | [
"groovy",
"jenkins",
"jenkins_pipeline",
"python",
"try_catch"
] | stackoverflow_0074651092_groovy_jenkins_jenkins_pipeline_python_try_catch.txt |
Q:
Can you set a condition for start and end dates in a month period?
If I have a dataframe from which I get the total occurrence of a value per year-month period, is there a way to change the month's start and end date?
For example, let's take this:
import pandas as pd
data= {
'date':
[
'2022-01-10', '2022-01-24', '2022-02-08', '2022-02-23', '2022-03-10',
'2022-03-24', '2022-04-08', '2022-04-23', '2022-05-08', '2022-05-23',
'2022-06-06', '2022-06-21', '2022-07-06', '2022-07-21', '2022-08-05',
'2022-08-19', '2022-09-03', '2022-09-18', '2022-10-03', '2022-10-18',
'2022-11-01', '2022-11-16', '2022-12-01', '2022-12-16', '2022-12-31'
],
'status':
[
'no', 'yes', 'no', 'yes', 'no', 'yes', 'no', 'no', 'no', 'no',
'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no', 'no',
'no', 'yes', 'yes', 'yes', 'yes'
]
}
df= pd.DataFrame(data=data)
df.date = pd.to_datetime(df.date)
What I have now is this:
df['period'] = df.date.dt.strftime('%Y-%m') # <-- this creates the 'period' column
check_yes = df['status'] == 'yes'
total_yes_period = df.loc[check_yes]['period'].value_counts().sort_index() # <-- obtain total 'yes' count per period
However, this works when a month is taken as 'June', 'November' (i.e. first to last day). My question is, is there a way to change this to a different period? (e.g. a 'month' starts on the 10th and ends on the 9th of the next).
A:
An alternative approach is to make a dictionary or dataframe in a format like CustomMonth | StartMonth | StartDay | EndMonth | EndDay,
with one row for each of your custom months.
From there you can query your data against that table, for example along the lines of the sketch below.
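A minimal sketch of that lookup idea (my assembly, not the answerer's code; the date range is an assumption, and each custom month is labelled by the month in which it starts):
edges = pd.date_range('2022-01-01', '2023-02-01', freq='MS') + pd.Timedelta(days=9)  # the 10th of each month
labels = edges[:-1].strftime('%Y-%m')
df['custom_month'] = pd.cut(df.date, bins=edges, right=False, labels=labels)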
A:
As an option, I can offer this, provided there is data for each month. Let's transform the 'period' column and create an auxiliary one with days.
We check whether the day is greater than or equal to 10 and, if so, copy the data from 'period' to the 'start' column. Next, we fill in the gaps with the previous values, then create a 'finish' column from the 'start' column data + 30 days and change the day number to 9.
import pandas as pd
df['period'] = df.date.dt.strftime('%Y-%m')
df['period'] = pd.to_datetime(df['period'])
df['day'] = df['date'].dt.day
ind_st = df['day'] >= 10
df.loc[ind_st, 'start'] = df.loc[ind_st, 'period'] + pd.to_timedelta(9, unit='D')
df['start'] = df['start'].fillna(method="ffill")
df['finish'] = df['start'] + pd.to_timedelta(30, unit='D')
df['finish'] = df['finish'].apply(lambda dt: dt.replace(day=9))
print(df.loc[check_yes][['start', 'finish']].value_counts().sort_index())
Output
start finish
2022-01-10 2022-02-09 1
2022-02-10 2022-03-09 1
2022-03-10 2022-04-09 1
2022-05-10 2022-06-09 1
2022-06-10 2022-07-09 1
2022-08-10 2022-09-09 1
2022-11-10 2022-12-09 2
2022-12-10 2023-01-09 2
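If the custom month always runs from the 10th to the 9th of the next month, a shorter alternative (my suggestion, not from either answer) is to shift every date back 9 days and reuse an ordinary monthly period:
df['period'] = (df.date - pd.Timedelta(days=9)).dt.to_period('M')
total_yes_period = df.loc[df.status == 'yes', 'period'].value_counts().sort_index()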
| Can you set a condition for start and end dates in a month period? | If a have a dataframe from which I get the total ocurrence of a value per year-month period, is there a way to change the month's start and end date?
For example, let's take this:
import pandas as pd
data= {
'date':
[
'2022-01-10', '2022-01-24', '2022-02-08', '2022-02-23', '2022-03-10',
'2022-03-24', '2022-04-08', '2022-04-23', '2022-05-08', '2022-05-23',
'2022-06-06', '2022-06-21', '2022-07-06', '2022-07-21', '2022-08-05',
'2022-08-19', '2022-09-03', '2022-09-18', '2022-10-03', '2022-10-18',
'2022-11-01', '2022-11-16', '2022-12-01', '2022-12-16', '2022-12-31'
],
'status':
[
'no', 'yes', 'no', 'yes', 'no', 'yes', 'no', 'no', 'no', 'no',
'yes', 'no', 'yes', 'no', 'no', 'no', 'yes', 'no', 'no', 'no',
'no', 'yes', 'yes', 'yes', 'yes'
]
}
df= pd.DataFrame(data=data)
df.date = pd.to_datetime(df.date)
What I have now is this:
df['period'] = df.date.dt.strftime('%Y-%m') # <-- this creates the 'period' column
check_yes = df['status'] == 'yes'
total_yes_period = df.loc[check_yes]['period'].value_counts().sort_index() # <-- obtain total 'yes' count per period
However, this works when a month is taken as 'June', 'November' (i.e. first to last day). My question is, is there a way to change this to a different period? (e.g. a 'month' starts on the 10th and ends on the 9th of the next).
| [
"An alternative approach is to make a dictionary or dataframe in a format like CustomMonth | StartMonth | StartDay | EndMonth | EndDay\nwith one row for each of your custom months\nFrom there you can query your data using this\n",
"As an option, I can offer if there is data for each month. Let's transform the 'period' column and create an auxiliary one with days.\nCheck if the day is greater than or equal to 10, then we copy the data from 'period' to the 'start' column. Next, fill in the gaps with the previous values. Create a 'finish' column using the 'start' column data + 30 days and change the number to 9.\nimport pandas as pd\n\ndf['period'] = df.date.dt.strftime('%Y-%m') \n\n\ndf['period'] = pd.to_datetime(df['period'])\ndf['day'] = df['date'].dt.day\nind_st = df['day'] >= 10\ndf.loc[ind_st, 'start'] = df.loc[ind_st, 'period'] + pd.to_timedelta(9, unit='D')\ndf['start'] = df['start'].fillna(method=\"ffill\")\ndf['finish'] = df['start'] + pd.to_timedelta(30, unit='D')\ndf['finish'] = df['finish'].apply(lambda dt: dt.replace(day=9))\n\n\nprint(df.loc[check_yes][['start', 'finish']].value_counts().sort_index())\n\nOutput\nstart finish \n2022-01-10 2022-02-09 1\n2022-02-10 2022-03-09 1\n2022-03-10 2022-04-09 1\n2022-05-10 2022-06-09 1\n2022-06-10 2022-07-09 1\n2022-08-10 2022-09-09 1\n2022-11-10 2022-12-09 2\n2022-12-10 2023-01-09 2\n\n"
] | [
0,
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074646435_pandas_python.txt |
Q:
Pyspark: cast element array with nested struct
I have a pyspark dataframe with a column named received:
how to access and convert the "size" element that is as a string into a float usando pyspark?
root
|-- title: string (nullable = true)
|-- received: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- id: string (nullable = true)
| | |-- date: string (nullable = true)
| | |-- size: string (nullable = true)
|-- urls: struct (nullable = true)
| |-- body: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- scheme: string (nullable = true)
| | | |-- url: string (nullable = true)
|-- ...
|-- ...
I tried it like this but I'm not having any success!
df.withColumn("received", SF.col("received").withField("delay", SF.col("received.delay").cast("float")))
Could someone guide me how to do this?
A:
I managed to solve it like this:
df = df.withColumn(
"received",
SF.expr("""transform(
received,
x -> struct(x.col1, x.col2, x.col3, x.col4, float(x.delay) as delay, x.col6))"""
)
)
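On Spark 3.1+ roughly the same thing can be written with the Python-side higher-order function instead of a SQL expression string; a sketch only, not tested against the asker's full schema (SF is the pyspark.sql.functions alias already used in the question, and "size" is the string field from the printed schema):
df = df.withColumn(
    "received",
    SF.transform("received", lambda x: x.withField("size", x["size"].cast("float")))
)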
| Pyspark: cast element array with nested struct | I have pyspark dataframe with a column named received: ""
how to access and convert the "size" element that is as a string into a float usando pyspark?
root
|-- title: string (nullable = true)
|-- received: array (nullable = true)
| |-- element: struct (containsNull = true)
| | |-- id: string (nullable = true)
| | |-- date: string (nullable = true)
| | |-- size: string (nullable = true)
|-- urls: struct (nullable = true)
| |-- body: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- scheme: string (nullable = true)
| | | |-- url: string (nullable = true)
|-- ...
|-- ...
I tried like this but I'm not getting success!
df.withColumn("received", SF.col("received").withField("delay", SF.col("received.delay").cast("float")))
Could someone guide me how to do this?
| [
"I managed to solve it like this:\ndf = df.withColumn(\n \"received\",\n SF.expr(\"\"\"transform(\n received, \n x -> struct(x.col1, x.col2, x.col3, x.col4, float(x.delay) as delay, x.col6))\"\"\"\n )\n )\n\n"
] | [
0
] | [] | [] | [
"pyspark",
"python"
] | stackoverflow_0074647610_pyspark_python.txt |
Q:
Import sklearn doesn’t exist on my replit
For some odd reason when I do “import sklearn” it says ModuleNotFound or something like that. Can anyone please help?
I tried going online and using bash to fix it, but it still didn't work.
A:
Open a shell in the workspace with ctrl-shift-s
(command-shift-s on Mac) and run this command in the prompt; it will install scikit-learn:
pip install scikit-learn
| Import sklearn doesn’t exist on my replit | For some odd reason when I do “import sklearn” it says ModuleNotFound or something like that. Can anyone please help?
I tried going online and using bash to fix it but still didn’t work.
| [
"open a shell in the workspace with ctrl-shift-s\non mac command-shift-s command prompt and run this command, it will install scikit\n\npip install scikit-learn\n\n"
] | [
0
] | [] | [] | [
"python",
"replit",
"replit_database"
] | stackoverflow_0074657703_python_replit_replit_database.txt |
Q:
How to check if two pandas dataframes have same values and concatenate those rows?
I got a DF called "df" with 4 numerical columns [frame,id,x,y]
I made a loop that creates two dataframes called df1 and df2. Both df1 and df2 are subseted of the original dataframe.
What I want to do (and I am not understanding how to do it) is this: I want to CHECK if df1 and df2 have the same VALUES in the column called "id". If they do, I want to concatenate those rows of df2 (that have the same id values) to df1.
For example: if df1 has rows with the id values (1,6,4,8) and df2 has the id values (12,7,8,10), I want to concatenate the df2 rows that have the id value 8 to df1. That is all I need.
This is my code:
for i in range(0,max(df['frame']),30):
df1=df[df['frame'].between(i, i+30)]
df2=df[df['frame'].between(i-30, i)]
A:
df3 = pd.concat([df1, df2[df2.id.isin(df1.id)]], axis=0)  # appends the df2 rows whose id also appears in df1
| How to check if two pandas dataframes have same values and concatenate those rows? | I got a DF called "df" with 4 numerical columns [frame,id,x,y]
I made a loop that creates two dataframes called df1 and df2. Both df1 and df2 are subseted of the original dataframe.
What I want to do (and I am not understanding how to do it) is this: I want to CHECK if df1 and df2 have same VALUES in the column called "id". If they do, I want to concatenate those rows of df2 (that have the same id values) to df1.
For example: if df1 has rows with different id values (1,6,4,8) and df2 has this id values (12,7,8,10). I want to concatenate df2 rows that have the id value=8 to df1. That is all I need
This is my code:
for i in range(0,max(df['frame']),30):
df1=df[df['frame'].between(i, i+30)]
df2=df[df['frame'].between(i-30, i)]
| [
"df3 = pd.concat([df1, df2[df2.id.isin(df1.id)]], axis = 0)\n"
] | [
0
] | [] | [] | [
"loops",
"pandas",
"python"
] | stackoverflow_0074657688_loops_pandas_python.txt |
Q:
How to add dynamic arguments in slash commands [discord.py]
The Question
I'm trying to make a command that shows you the schedule of your class. For that, the user first passes their degree as an argument and then the class, but every degree has a different number of classes, so I can't show a generic list for all the degrees.
The actual code:
class Ciclos(enum.Enum):
ASIX = 1
DAM = 2
class ASIX(enum.Enum):
_1A = 1
_1B = 2
_1C = 3
_2A = 4
_2B = 5
class DAM(enum.Enum):
_1A = 1
_2A = 2
_2B = 3
@bot.tree.command(name="schedule", description="Shows the schedule of the selected class")
@app_commands.describe(ciclo="Choose your degree")
@app_commands.describe(clase="Choose your class")
async def schedule(interaction: discord.Interaction, ciclo: Ciclos, clase: Ciclos.name):
await interaction.response.send_message(file=discord.File(f'Media/schedules/schedule{ciclo.name}{clase.name}.png'))
This code doesn't work, but I hope it serves to illustrate what I am trying to accomplish. The problematic part is on the function parameters, specifically on clase: Ciclos.name, I don't know how to make it depend on what the user chooses on ciclo: Ciclos.
What I've tried
I've tried to put these expressions:
clase: {Ciclos.name}
I get -> AtributeError: name
clase: Ciclos.name
I get -> AtributeError: name
clase: ciclo
I get -> NameError: name 'ciclo' is not defined. Did you mean: 'Ciclos'?
No, I didn't mean that.
Expected behavior
The expected result is this:
class ASIX example
class DAM example
In order to send the schedule image corresponding to each class:
await interaction.response.send_message(file=discord.File(f'Media/schedules/schedule{ciclo.name}{clase.name}.png'))
So I get file names like:
"scheduleASIX_1A"
"scheduleDAM_2A"
A:
Sorry, I don't think dynamic choices are built into the Discord API, but I usually use the following to add choices to a slash command; maybe it can help you.
@app_commands.choices(
ciclo=[
app_commands.Choice(name="ASIX", value="ASIX"),
app_commands.Choice(name="DAM", value="DAM")
],
clase=[app_commands.Choice(name="A1", value="A1")])
I modified it a bit to fit your example, but as said, as far as I know there is no way to make dynamic choices with slash commands.
A:
This isn't possible from a Discord side of things. The choices have to be known beforehand and synced, so they can't dynamically change based on other values (also - you can fill them in in any order, so that wouldn't even work).
You'll have to do it some other way. You can't refer to the current value of that argument as the type of the Choice.
One option could be to use Views with Select menus, for example. A more hybrid approach could be to have a slash command with Choices, and let that one answer with a Select menu for the specific options they can choose depending on their argument. There's an example for Select menus: https://github.com/Rapptz/discord.py/blob/master/examples/views/dropdown.py
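A rough sketch of that hybrid idea for discord.py 2.x (my example; ClassSelect, ClassView and the CLASSES mapping are names I made up, while the bot object, the app_commands import and the file-naming scheme are taken from the question):
CLASSES = {"ASIX": ["_1A", "_1B", "_1C", "_2A", "_2B"], "DAM": ["_1A", "_2A", "_2B"]}

class ClassSelect(discord.ui.Select):
    def __init__(self, ciclo: str):
        options = [discord.SelectOption(label=c) for c in CLASSES[ciclo]]
        super().__init__(placeholder="Choose your class", options=options)
        self.ciclo = ciclo

    async def callback(self, interaction: discord.Interaction):
        clase = self.values[0]
        await interaction.response.send_message(
            file=discord.File(f'Media/schedules/schedule{self.ciclo}{clase}.png'))

class ClassView(discord.ui.View):
    def __init__(self, ciclo: str):
        super().__init__()
        self.add_item(ClassSelect(ciclo))

@bot.tree.command(name="schedule", description="Shows the schedule of the selected class")
@app_commands.choices(ciclo=[app_commands.Choice(name=k, value=k) for k in CLASSES])
async def schedule(interaction: discord.Interaction, ciclo: app_commands.Choice[str]):
    # pick the degree via a regular choice, then offer only that degree's classes
    await interaction.response.send_message("Choose your class:", view=ClassView(ciclo.value), ephemeral=True)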
| How to add dynamic arguments in slash commands [discord.py] | The Question
I'm trying to make a command that shows you the schedule of your class, for that, the user first introduces as an argument its degree and then the class, but every degree has a different number of classes, so I can't show a generic list for all the degrees.
The actual code:
class Ciclos(enum.Enum):
ASIX = 1
DAM = 2
class ASIX(enum.Enum):
_1A = 1
_1B = 2
_1C = 3
_2A = 4
_2B = 5
class DAM(enum.Enum):
_1A = 1
_2A = 2
_2B = 3
@bot.tree.command(name="schedule", description="Shows the schedule of the selected class")
@app_commands.describe(ciclo="Choose your degree")
@app_commands.describe(clase="Choose your class")
async def schedule(interaction: discord.Interaction, ciclo: Ciclos, clase: Ciclos.name):
await interaction.response.send_message(file=discord.File(f'Media/schedules/schedule{ciclo.name}{clase.name}.png'))
This code doesn't work, but I hope it serves to illustrate what I am trying to accomplish. The problematic part is on the function parameters, specifically on clase: Ciclos.name, I don't know how to make it depend on what the user chooses on ciclo: Ciclos.
What I've tried
I've tried to put these expressions:
clase: {Ciclos.name}
I get -> AtributeError: name
clase: Ciclos.name
I get -> AtributeError: name
clase: ciclo
I get -> NameError: name 'ciclo' is not defined. Did you mean: 'Ciclos'?
No, I didn't mean that.
Expected behavior
The expected result is this:
class ASIX example
class DAM example
In order to send the schedule image corresponding to each class:
await interaction.response.send_message(file=discord.File(f'Media/schedules/schedule{ciclo.name}{clase.name}.png'))
So I get file names like:
"scheduleASIX_1A"
"scheduleDAM_2A"
| [
"Sry, I don't think dynamic choises are build into the discord api, but I usually use the following to add choises to a Slash command, maybee it can help you.\n@app_commands.choices(\nciclo=[\n app_commands.Choice(name=\"ASIX\", value=\"ASIX\"),\n app_commands.Choice(name=\"DAM\", value=\"DAM\")\n ],\nclase=[app_commands.Choice(name=\"A1\", value=\"A1\")]) \n\nI modified it to fit your example a bit to fit your example, but as said, as far as I know there is no way to make dynamic choices with the slash commands.\n",
"This isn't possible from a Discord side of things. The choices have to be known beforehand and synced, so they can't dynamically change based on other values (also - you can fill them in in any order, so that wouldn't even work).\nYou'll have to do it some other way. You can't refer to the current value of that argument as the type of the Choice.\nOne option could be to use Views with Select menus, for example. A more hybrid approach could be to have a slash command with Choices, and let that one answer with a Select menu for the specific options they can choose depending on their argument. There's an example for Select menus: https://github.com/Rapptz/discord.py/blob/master/examples/views/dropdown.py\n"
] | [
0,
0
] | [] | [] | [
"discord",
"discord.py",
"python"
] | stackoverflow_0074647782_discord_discord.py_python.txt |
Q:
Deleting from file in python. OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect
Trying to delete a line from my file in python, and it's throwing me this error. I have a student database, and I want to delete the student/line that has the corresponding student id. E.g., line = 'SanVin22\tSanji\tVinsmoke\tWellington'. id is the inputted id.
def DelStudent(self, data):
self = id
with open(data, "r+") as datafile:
for line in datafile:
datum = line.split()
if datum[0] == id:
os.remove(line)
pass
Error is:
os.remove(line)
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'SanVin22\tSanji\tVinsmoke\[email protected]\tWellington\t'
I've tried replacing os.remove(line) with datafile.write(line), as all of the tutorials I've seen online, but that ends up deleting every list in the database.
A:
If the idea is to delete the line with the given id, then gather up all of the non-matching lines (i.e. those that we want to keep) and then write them back to the original file. (os.remove expects a filesystem path, not a line of text, which is why passing a line to it raises that OSError.)
def DelStudent(self, data):
new_lines = []
with open(data, "r+") as datafile:
for line in datafile:
datum = line.split()
if datum[0] != id:
new_lines.append(line)
with open(data, "w") as datafile:
for line in new_lines:
datafile.write(line)
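One caveat worth adding (my observation, not something the answer states): inside DelStudent, the name id refers to Python's built-in id() function unless the student id is actually passed in, so the comparison may never match. A small self-contained sketch with the id passed explicitly; the function and parameter names here are mine:
def del_student(path, student_id):
    kept = []
    with open(path) as datafile:
        for line in datafile:
            fields = line.split()
            # keep blank lines and every student whose first column is not the requested id
            if not fields or fields[0] != student_id:
                kept.append(line)
    # rewrite the file with only the lines we kept
    with open(path, "w") as datafile:
        datafile.writelines(kept)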
| Deleting from file in python. OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect | Trying to delete a line from my file in Python, and it's throwing me this error. I have a student database, and I want to delete the student/line that has the corresponding student id. E.g., line = 'SanVin22\tSanji\tVinsmoke\tWellington'. id is the inputted id.
def DelStudent(self, data):
self = id
with open(data, "r+") as datafile:
for line in datafile:
datum = line.split()
if datum[0] == id:
os.remove(line)
pass
Error is:
os.remove(line)
OSError: [WinError 123] The filename, directory name, or volume label syntax is incorrect: 'SanVin22\tSanji\tVinsmoke\[email protected]\tWellington\t'
I've tried replacing os.remove(line) with datafile.write(line), as all of the tutorials I've seen online, but that ends up deleting every list in the database.
| [
"If the idea is to delete the line with the given id, then gather up all of the non-matching lines (i.e. those that we want to keep) and then write them back to the original file.\ndef DelStudent(self, data):\n new_lines = []\n with open(data, \"r+\") as datafile:\n for line in datafile:\n datum = line.split()\n if datum[0] != id:\n new_lines.append(line)\n with open(data, \"w\") as datafile:\n for line in new_lines:\n datafile.write(line)\n\n"
] | [
2
] | [] | [] | [
"file",
"python"
] | stackoverflow_0074657721_file_python.txt |
Q:
Interaction followup only seems to send one embed
I'm having issues sending multiple embeds in one response and I'm not entirely sure if it's my fault (most likely) or a bug with discord.py.
I have a list of embeds I'm trying to follow up on an initial deferral. However, my bot only seems to ever send one embed, rather than sending the full list. The full code is here but the important bits I'll describe below:
My bot asks the Discord API to defer its response in order to carry out the multitude of REST API requests, searching and parsing it has to do. This theoretically buys me 15 minutes to respond to the user's request properly
# Send directory contents if no search term given
await interaction.response.defer(thinking=True)
It then does a bunch of parsing and ends up with a dictionary of discord.File objects and embeds. The embeds are created with the basic discord.Embed class and collected in the responses variable:
{'files': [], 'embeds': [<discord.embeds.Embed object at 0x106f3b0a0>, <discord.embeds.Embed object at 0x106f3aef0>, <discord.embeds.Embed object at 0x106f3b250>]}
I then try to send this dictionary in a reply but it only ever seems to send one embed:
print(f"SENDING RESPONSES: {responses}...")
await interaction.followup.send(embeds=responses["embeds"], files=responses["files"])
See image for the singular response in the Discord UI
Can someone please clarify for me what I should be doing or if this is a genuine bug or documentation issue in discord.py?
Thanks
A:
Now resolved thanks to the owner of discord.py
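The thread never spells out what the fix was, so the snippet below is only a minimal sketch of the documented pattern (discord.py 2.x assumed): interaction.followup.send accepts a list of up to ten embeds through the embeds= keyword, and when there are no attachments it is simpler to omit files= than to pass an empty list.
import discord
from discord import app_commands

@app_commands.command(name="docs", description="Send several embeds in one follow-up")
async def docs(interaction: discord.Interaction):
    await interaction.response.defer(thinking=True)
    # ... long-running REST calls / parsing would happen here ...
    embeds = [discord.Embed(title=f"Result {i}", description="...") for i in range(3)]
    # Discord allows up to 10 embeds per message; omit files= when there are none
    await interaction.followup.send(embeds=embeds)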
| Interaction followup only seems to send one embed | I'm having issues sending multiple embeds in one response and I'm not entirely sure if it's my fault (most likely) or a bug with discord.py.
I have a list of embeds I'm trying to follow up on an initial deferral. However, my bot only seems to ever send one embed, rather than sending the full list. The full code is here but the important bits I'll describe below:
My bot asks the Discord API to defer its response in order to carry out the multitude of REST API requests, searching and parsing it has to do. This theoretically buys me 15 minutes to respond to the user's request properly
# Send directory contents if no search term given
await interaction.response.defer(thinking=True)
It then does a bunch of parsing and ends up with a dictionary of discord.File objects and embeds. The embeds are created with the basic discord.Embed class and collected in the responses variable:
{'files': [], 'embeds': [<discord.embeds.Embed object at 0x106f3b0a0>, <discord.embeds.Embed object at 0x106f3aef0>, <discord.embeds.Embed object at 0x106f3b250>]}
I then try to send this dictionary in a reply but it only ever seems to send one embed:
print(f"SENDING RESPONSES: {responses}...")
await interaction.followup.send(embeds=responses["embeds"], files=responses["files"])
See image for the singular response in the Discord UI
Can someone please clarify for me what I should be doing or if this is a genuine bug or documentation issue in discord.py?
Thanks
| [
"Now resolved thanks to the owner of discord.py\n"
] | [
0
] | [] | [] | [
"bots",
"discord",
"discord.py",
"python"
] | stackoverflow_0074524899_bots_discord_discord.py_python.txt |
Q:
Convert multiple same Rows as Column headers
This is my table:
pivot_notNone = pivot[pivot['GHG'].notna()]
pivot_notNone.head(10)
         UOM  GHG Conversion Factor 2022 Unit  GHG
1     tonnes                 3029.260000   kg  CO2
2     tonnes                    2.250000   kg  CH4
3     tonnes                    1.800000   kg  N2O
5     litres                    1.742960   kg  CO2
6     litres                    0.001290   kg  CH4
...      ...                         ...  ...  ...
8032  tonnes                  105.669500   kg  CO2

[4312 rows × 4 columns]
I would like the column names to be N2O, CO2 and CH4, with the GHG Conversion Factor values as their values. But when I tried this (using Pandas)
a = pivot_notNone.pivot(columns=['GHG'],values='GHG Conversion Factor 2022')
a
I got the following result:
GHG       CH4          CO2  N2O
1         NaN  3029.260000  NaN
2     2.25000          NaN  NaN
3         NaN          NaN  1.8
5         NaN     1.742960  NaN
6     0.00129          NaN  NaN
...       ...          ...  ...
8032      NaN  1105.669500  NaN
8033      NaN     0.199021  NaN
8034      NaN   679.986742  NaN
8035      NaN     0.199021  NaN
8036      NaN     0.115069  NaN

[4312 rows × 3 columns]
My expectation is:
UOM     Unit  CH4      CO2            N2O
tonnes  kg    225.000  3.029.260.000  1.8
litres  kg    0.00129  1.742.960      11.105.669
kWh     kg    ...      ...            ...
tonnes  kg    ...      ...            ..
...     kg    ...      ...            ...
A:
Well, if we assume that your starting data has sort of groups of 3 rows of related data, we can maybe get away with adding a new grouping field to group them, and then we can pivot using that as well.
df['new_grouping_field'] = df.index // 3 # this gives the whole-number piece of the division,
# so rows 0-2 will get a 0, rows 3-5 will get a 1, etc. Does require that the
# index is numbers, which it should be
df.pivot(
columns='GHG',
values='GHG Conversion Factor 2022',
index=['new_grouping_field','UOM','Unit']
)
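To make the grouping trick concrete, here is a tiny self-contained run on made-up numbers (only the first two blocks of three rows); it assumes the frame has a clean 0..n integer index so that df.index // 3 labels each block:
import pandas as pd

df = pd.DataFrame({
    "UOM": ["tonnes", "tonnes", "tonnes", "litres", "litres", "litres"],
    "GHG Conversion Factor 2022": [3029.26, 2.25, 1.8, 1.74296, 0.00129, 0.0005],
    "Unit": ["kg"] * 6,
    "GHG": ["CO2", "CH4", "N2O"] * 2,
})
# every consecutive block of 3 rows gets the same group label: 0, 0, 0, 1, 1, 1, ...
df["new_grouping_field"] = df.index // 3
wide = df.pivot(
    columns="GHG",
    values="GHG Conversion Factor 2022",
    index=["new_grouping_field", "UOM", "Unit"],
)
print(wide)  # one row per (group, UOM, Unit), one column per gas: CH4, CO2, N2O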
| Convert multiple same Rows as Column headers | This is my table:
pivot_notNone = pivot[pivot['GHG'].notna()]
pivot_notNone.head(10)
         UOM  GHG Conversion Factor 2022 Unit  GHG
1     tonnes                 3029.260000   kg  CO2
2     tonnes                    2.250000   kg  CH4
3     tonnes                    1.800000   kg  N2O
5     litres                    1.742960   kg  CO2
6     litres                    0.001290   kg  CH4
...      ...                         ...  ...  ...
8032  tonnes                  105.669500   kg  CO2

[4312 rows × 4 columns]
I would like the column names to be N2O, CO2 and CH4, with the GHG Conversion Factor values as their values. But when I tried this (using Pandas)
a = pivot_notNone.pivot(columns=['GHG'],values='GHG Conversion Factor 2022')
a
I got the following result:
GHG       CH4          CO2  N2O
1         NaN  3029.260000  NaN
2     2.25000          NaN  NaN
3         NaN          NaN  1.8
5         NaN     1.742960  NaN
6     0.00129          NaN  NaN
...       ...          ...  ...
8032      NaN  1105.669500  NaN
8033      NaN     0.199021  NaN
8034      NaN   679.986742  NaN
8035      NaN     0.199021  NaN
8036      NaN     0.115069  NaN

[4312 rows × 3 columns]
My expectation is:
UOM     Unit  CH4      CO2            N2O
tonnes  kg    225.000  3.029.260.000  1.8
litres  kg    0.00129  1.742.960      11.105.669
kWh     kg    ...      ...            ...
tonnes  kg    ...      ...            ..
...     kg    ...      ...            ...
| [
"Well, if we assume that your starting data has sort of groups of 3 rows of related data, we can maybe get away with adding a new grouping field to group them, and then we can pivot using that as well.\ndf['new_grouping_field'] = df.index // 3 # this gives the whole-number piece of the division,\n# so rows 0-2 will get a 0, rows 3-5 will get a 1, etc. Does require that the\n# index is numbers, which it should be\ndf.pivot(\n columns='GHG',\n values='GHG Conversion Factor 2022',\n index=['new_grouping_field','UOM','Unit']\n)\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074638617_pandas_python.txt |
Q:
Write "null" if column doesn't exist with KeyError: "['Column'] not in index" in df.to_csv?
I am getting KeyError: "['CashFinancial'] not in index" on the df.to_csv line because 'GOOG' doesn't have the CashFinancial column. How can I have it write in null for the CashFinancial value for 'GOOG'?
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT'] #This will be 75,000 symbols.
header = ["asOfDate","CashAndCashEquivalents","CashFinancial","CurrentAssets","TangibleBookValue","CurrentLiabilities","TotalLiabilitiesNetMinorityInterest"]
for tick in symbols:
faang = Ticker(tick)
faang.balance_sheet(frequency='q')
df = faang.balance_sheet(frequency='q')
df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
A:
What about :
if tick == "GOOG"
df.loc[:,"CashFinancial"] = None
To set an entire CashFinancial column to "None" only if your "tick" was GOOG, before writing it to csv.
The full code from the example you posted would be something like:
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT']
header = ["asOfDate","CashAndCashEquivalents","CashFinancial","CurrentAssets","TangibleBookValue","CurrentLiabilities","TotalLiabilitiesNetMinorityInterest"]
for tick in symbols:
faang = Ticker(tick)
faang.balance_sheet(frequency='q')
df = faang.balance_sheet(frequency='q')#,{"symbol":[1],"asOfDate":[2],"CashAndCashEquivalents":[3],"CashFinancial":[4],"CurrentAssets":[5],"TangibleBookValue":[6],"CurrentLiabilities":[7],"TotalLiabilitiesNetMinorityInterest":[8],"marketCap":[9]}
for column_name in header :
if not column_name in df.columns :
#Here, if any column is missing from the names you defined
#in your "header" variable, we add this column and set all
#it's row values to None
df.loc[:,column_name ] = None
df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
A:
Load all dataframes into a list, then use pd.concat (it will create NaN in missing columns):
import pandas as pd
from yahooquery import Ticker
symbols = ["AAPL", "GOOG", "MSFT"]
header = [
"asOfDate",
"CashAndCashEquivalents",
"CashFinancial",
"CurrentAssets",
"TangibleBookValue",
"CurrentLiabilities",
"TotalLiabilitiesNetMinorityInterest",
]
all_dfs = []
for tick in symbols:
faang = Ticker(tick)
df = faang.balance_sheet(frequency="q")
all_dfs.append(df)
df = pd.concat(all_dfs)
for symbol, g in df.groupby(level=0):
print(symbol)
print(g[header])
# to save to CSV:
# g[header].to_csv('filename.csv')
print("-" * 80)
Prints:
AAPL
asOfDate CashAndCashEquivalents CashFinancial CurrentAssets TangibleBookValue CurrentLiabilities TotalLiabilitiesNetMinorityInterest
symbol
AAPL 2021-09-30 3.494000e+10 1.730500e+10 1.348360e+11 6.309000e+10 1.254810e+11 2.879120e+11
AAPL 2021-12-31 3.711900e+10 1.799200e+10 1.531540e+11 7.193200e+10 1.475740e+11 3.092590e+11
AAPL 2022-03-31 2.809800e+10 1.429800e+10 1.181800e+11 6.739900e+10 1.275080e+11 2.832630e+11
AAPL 2022-06-30 2.750200e+10 1.285200e+10 1.122920e+11 5.810700e+10 1.298730e+11 2.782020e+11
AAPL 2022-09-30 2.364600e+10 1.854600e+10 1.354050e+11 5.067200e+10 1.539820e+11 3.020830e+11
--------------------------------------------------------------------------------
GOOG
asOfDate CashAndCashEquivalents CashFinancial CurrentAssets TangibleBookValue CurrentLiabilities TotalLiabilitiesNetMinorityInterest
symbol
GOOG 2021-09-30 2.371900e+10 NaN 1.841100e+11 2.203950e+11 6.178200e+10 1.028360e+11
GOOG 2021-12-31 2.094500e+10 NaN 1.881430e+11 2.272620e+11 6.425400e+10 1.076330e+11
GOOG 2022-03-31 2.088600e+10 NaN 1.778530e+11 2.296810e+11 6.194800e+10 1.030920e+11
GOOG 2022-06-30 1.793600e+10 NaN 1.723710e+11 2.300930e+11 6.135400e+10 9.976600e+10
GOOG 2022-09-30 2.198400e+10 NaN 1.661090e+11 2.226000e+11 6.597900e+10 1.046290e+11
--------------------------------------------------------------------------------
MSFT
asOfDate CashAndCashEquivalents CashFinancial CurrentAssets TangibleBookValue CurrentLiabilities TotalLiabilitiesNetMinorityInterest
symbol
MSFT 2021-09-30 1.916500e+10 6.863000e+09 1.743260e+11 9.372900e+10 8.052800e+10 1.834400e+11
MSFT 2021-12-31 2.060400e+10 6.255000e+09 1.741880e+11 1.016270e+11 7.751000e+10 1.803790e+11
MSFT 2022-03-31 1.249800e+10 7.456000e+09 1.539220e+11 8.420500e+10 7.743900e+10 1.816830e+11
MSFT 2022-06-30 1.393100e+10 8.258000e+09 1.696840e+11 8.772000e+10 9.508200e+10 1.982980e+11
MSFT 2022-09-30 2.288400e+10 7.237000e+09 1.608120e+11 9.529900e+10 8.738900e+10 1.862180e+11
--------------------------------------------------------------------------------
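A third option, not taken from either answer above: DataFrame.reindex(columns=...) keeps exactly the requested columns and fills any missing ones with NaN, so the later to_csv cannot hit the KeyError. A rough sketch reusing the names from the question:
import pandas as pd
from yahooquery import Ticker

header = ["asOfDate", "CashAndCashEquivalents", "CashFinancial", "CurrentAssets",
          "TangibleBookValue", "CurrentLiabilities", "TotalLiabilitiesNetMinorityInterest"]

for tick in ["AAPL", "GOOG", "MSFT"]:
    df = Ticker(tick).balance_sheet(frequency="q")
    # reindex adds any column from `header` that is missing (filled with NaN) and drops the rest
    df.reindex(columns=header).to_csv("output.csv", mode="a", index=True, header=False)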
| Write "null" if column doesn't exist with KeyError: "['Column'] not in index" in df.to_csv? | I am getting KeyError: "['CashFinancial'] not in index" on the df.to_csv line because 'GOOG' doesn't have the CashFinancial column. How can I have it write in null for the CashFinancial value for 'GOOG'?
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT'] #This will be 75,000 symbols.
header = ["asOfDate","CashAndCashEquivalents","CashFinancial","CurrentAssets","TangibleBookValue","CurrentLiabilities","TotalLiabilitiesNetMinorityInterest"]
for tick in symbols:
faang = Ticker(tick)
faang.balance_sheet(frequency='q')
df = faang.balance_sheet(frequency='q')
df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
| [
"What about :\nif tick == \"GOOG\"\n df.loc[:,\"CashFinancial\"] = None\n\nTo set an entire CashFinancial column to \"None\" only if your \"tick\" was GOOG, before writing it to csv.\nThe full code from the example you posted would he something like :\nimport pandas as pd\nfrom yahooquery import Ticker\nsymbols = ['AAPL','GOOG','MSFT']\nheader = [\"asOfDate\",\"CashAndCashEquivalents\",\"CashFinancial\",\"CurrentAssets\",\"TangibleBookValue\",\"CurrentLiabilities\",\"TotalLiabilitiesNetMinorityInterest\"]\n\nfor tick in symbols:\n faang = Ticker(tick)\n faang.balance_sheet(frequency='q')\n df = faang.balance_sheet(frequency='q')#,{\"symbol\":[1],\"asOfDate\":[2],\"CashAndCashEquivalents\":[3],\"CashFinancial\":[4],\"CurrentAssets\":[5],\"TangibleBookValue\":[6],\"CurrentLiabilities\":[7],\"TotalLiabilitiesNetMinorityInterest\":[8],\"marketCap\":[9]}\n for column_name in header :\n if not column_name in df.columns :\n #Here, if any column is missing from the names you defined \n #in your \"header\" variable, we add this column and set all \n #it's row values to None\n df.loc[:,column_name ] = None\n \n df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)\n\n\n",
"Load all dataframes into a list, then use pd.concat (it will create NaN in missing columns):\nimport pandas as pd\nfrom yahooquery import Ticker\n\nsymbols = [\"AAPL\", \"GOOG\", \"MSFT\"]\nheader = [\n \"asOfDate\",\n \"CashAndCashEquivalents\",\n \"CashFinancial\",\n \"CurrentAssets\",\n \"TangibleBookValue\",\n \"CurrentLiabilities\",\n \"TotalLiabilitiesNetMinorityInterest\",\n]\n\nall_dfs = []\nfor tick in symbols:\n faang = Ticker(tick)\n df = faang.balance_sheet(frequency=\"q\")\n all_dfs.append(df)\n\ndf = pd.concat(all_dfs)\n\nfor symbol, g in df.groupby(level=0):\n print(symbol)\n print(g[header])\n # to save to CSV:\n # g[header].to_csv('filename.csv')\n print(\"-\" * 80)\n\nPrints:\nAAPL\n asOfDate CashAndCashEquivalents CashFinancial CurrentAssets TangibleBookValue CurrentLiabilities TotalLiabilitiesNetMinorityInterest\nsymbol \nAAPL 2021-09-30 3.494000e+10 1.730500e+10 1.348360e+11 6.309000e+10 1.254810e+11 2.879120e+11\nAAPL 2021-12-31 3.711900e+10 1.799200e+10 1.531540e+11 7.193200e+10 1.475740e+11 3.092590e+11\nAAPL 2022-03-31 2.809800e+10 1.429800e+10 1.181800e+11 6.739900e+10 1.275080e+11 2.832630e+11\nAAPL 2022-06-30 2.750200e+10 1.285200e+10 1.122920e+11 5.810700e+10 1.298730e+11 2.782020e+11\nAAPL 2022-09-30 2.364600e+10 1.854600e+10 1.354050e+11 5.067200e+10 1.539820e+11 3.020830e+11\n--------------------------------------------------------------------------------\nGOOG\n asOfDate CashAndCashEquivalents CashFinancial CurrentAssets TangibleBookValue CurrentLiabilities TotalLiabilitiesNetMinorityInterest\nsymbol \nGOOG 2021-09-30 2.371900e+10 NaN 1.841100e+11 2.203950e+11 6.178200e+10 1.028360e+11\nGOOG 2021-12-31 2.094500e+10 NaN 1.881430e+11 2.272620e+11 6.425400e+10 1.076330e+11\nGOOG 2022-03-31 2.088600e+10 NaN 1.778530e+11 2.296810e+11 6.194800e+10 1.030920e+11\nGOOG 2022-06-30 1.793600e+10 NaN 1.723710e+11 2.300930e+11 6.135400e+10 9.976600e+10\nGOOG 2022-09-30 2.198400e+10 NaN 1.661090e+11 2.226000e+11 6.597900e+10 1.046290e+11\n--------------------------------------------------------------------------------\nMSFT\n asOfDate CashAndCashEquivalents CashFinancial CurrentAssets TangibleBookValue CurrentLiabilities TotalLiabilitiesNetMinorityInterest\nsymbol \nMSFT 2021-09-30 1.916500e+10 6.863000e+09 1.743260e+11 9.372900e+10 8.052800e+10 1.834400e+11\nMSFT 2021-12-31 2.060400e+10 6.255000e+09 1.741880e+11 1.016270e+11 7.751000e+10 1.803790e+11\nMSFT 2022-03-31 1.249800e+10 7.456000e+09 1.539220e+11 8.420500e+10 7.743900e+10 1.816830e+11\nMSFT 2022-06-30 1.393100e+10 8.258000e+09 1.696840e+11 8.772000e+10 9.508200e+10 1.982980e+11\nMSFT 2022-09-30 2.288400e+10 7.237000e+09 1.608120e+11 9.529900e+10 8.738900e+10 1.862180e+11\n--------------------------------------------------------------------------------\n\n"
] | [
1,
1
] | [] | [] | [
"csv",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074657789_csv_dataframe_pandas_python.txt |
Q:
Fatal Python error on Windows 10 when I try to access Python from command prompt
I did everything from these answers on a previous thread and nothing changed. Even uninstalling Python did not improve the situation. Everything was working fine but all of a sudden it stopped working.
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = 'python'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = 'C:\\iverilog\\gtkwave\\bin\\python.exe'
sys.base_prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.base_exec_prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.executable = 'C:\\iverilog\\gtkwave\\bin\\python.exe'
sys.prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.exec_prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.path = [
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\python38.zip',
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\python3.8',
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\python3.8',
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x000029e8 (most recent call first):
<no Python frame>
A:
I've just had the same problem.
Looking back, the error is fairly clear: ModuleNotFoundError: No module named 'encodings' means Python is not able to find the encodings module, one of Python's built-in modules. I had a look at my filesystem and found out that the encodings module is found at C:\msys64\mingw64\lib\python3.8\encodings. This meant that Python could not find that folder!
My simple solution was to add C:\msys64\mingw64\lib\python3.8 to the PYTHONPATH environment variable. On Windows you can either set it from the Windows settings UI - the permanent way - or set it temporarily from the command line, using this command: set PYTHONPATH=C:\msys64\mingw64\lib\python3.8\.
This feels like a bit of a hack, since a built-in module should be found automatically and yet it isn't. However, at this point I've had no other Python-related problems, so I figure it really does the trick.
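A possible follow-up (standard Windows tooling, not something stated in the original answer): setx PYTHONPATH "C:\msys64\mingw64\lib\python3.8" persists the variable for future sessions (newly opened terminals only), and python -c "import encodings" is a quick way to confirm the interpreter starts cleanly again.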
| Fatal Python error on Windows 10 when I try to access Python from command prompt | I did everything from these answers on a previous thread and nothing changed. Even uninstalling Python did not improve the situation. Everything was working fine but all of a sudden it stopped working.
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = 'python'
isolated = 0
environment = 1
user site = 1
import site = 1
sys._base_executable = 'C:\\iverilog\\gtkwave\\bin\\python.exe'
sys.base_prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.base_exec_prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.executable = 'C:\\iverilog\\gtkwave\\bin\\python.exe'
sys.prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.exec_prefix = 'D:\\a\\_temp\\msys\\msys64\\mingw64'
sys.path = [
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\python38.zip',
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\python3.8',
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\python3.8',
'D:\\a\\_temp\\msys\\msys64\\mingw64\\lib\\lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x000029e8 (most recent call first):
<no Python frame>
| [
"I've just had the same problem.\nLooking back, the error is fairly clear: ModuleNotFoundError: No module named 'encodings' means Python is not able to find the encodings module, one of Python built-in modules. I had a look at my filesystem and found out that the encodings module is found at C:\\msys64\\mingw64\\lib\\python3.8\\encodings. This meant that Python could not find that folder!\nMy simple solution was to add C:\\msys64\\mingw64\\lib\\python3.8 to the PYTHONPATH environment variable. On Windows you can either set it from the Windows settings UI - the permanent way - or set it temporarily from the command line, using this command: set PYTHONPATH=C:\\msys64\\mingw64\\lib\\python3.8\\.\nThis kind of feels like a hack since, as a basic Python module, it should be found automatically and yet doesn't. However, at this point, I've had no other Python-related problems so I figure it really does the trick.\n"
] | [
0
] | [] | [] | [
"msys",
"python",
"python_3.x",
"windows"
] | stackoverflow_0071831380_msys_python_python_3.x_windows.txt |